Working on Brownfield Applications

When we learn software development for the first time, most of us assume that we will be working on new and exciting applications, with all the latest sparkly technology and a blank canvas to fill with our sophisticated architectural solutions. The truth is that most development work is brownfield development: maintenance of, and additions to, existing software that was originally written by someone who no longer works there.

If we’re honest, brownfield development is nowhere near as much fun as starting afresh but, especially in business systems, it’s a developer’s day-to-day bread and butter and we just have to get on with it and work within the constraints. We can’t all be rockstars.

To borrow an analogy from Richard Dawkins: we’re given an old biplane made of wood and canvas and expected to turn it into a jet fighter. The problem is that we can’t afford to just start afresh. After every small change we make, the plane must still be able to fly; if the plane doesn’t fly, we can’t keep paying for improvements.

Brownfield development is harder than blue-sky projects and it’s less fun. If you’re lucky and you stick with it long enough you may get a chance for a rewrite to take advantage of new languages or technologies, but these only come along once in a blue moon. Once the rewrite is done, it’s back to maintenance again.

So it’s harder and less fun – you’re a professional, not a hobbyist, and this is what you get paid for, this is what you do.  Take pride in your ability to do the hard work and take pride in the improvements that no-one else will notice (people notice errors, they don’t notice the lack of errors). Introduce better techniques and technologies piecemeal and take pleasure in the fact that you’re slowly making life easier for your future self by improving the code quality.


Software Projects are like 18-wheel trucks

The business of writing software is full of folk who just want to delete it all and start over so it can be done “properly” (where “properly” means “written by me and not someone else”). Most of the time this is a bad idea; rewrites cost a lot and have a reasonable chance of being worse than the original (which has often had time to mature).

Writing a large software project is like driving an 18-wheeler truck in the fast lane.  When you start to sense that something isn’t going quite right and you want to pull over and check, what do you do?  If you wrench the steering wheel over to full lock, you’re likely to crash and burn.  You have to steer gently; ease the truck with its momentum slowly over to where you want to be.

It is usually much better to go for incremental change; baby steps to get to the end result.

  • Analogy: You’re far less likely to paint yourself into a corner if you only paint a little bit of the floor at a time and leave it to dry before starting the next bit. The size of the area covered each time is a matter of judgement, but should be less than the width of your stride.
  • Rewriting a smaller subset of functionality will allow you to leave the original in place and swap between the two versions with compiler switches (see the sketch after this list). This allows you to make direct performance comparisons. Alternatively, with the original code left in place, you can compare both versions using unit tests.
  • If you replace a smaller subset of functionality and it doesn’t work out well, it’s much easier to roll back to the original.
  • Moving in small steps makes it much easier to keep track of progress, helps the assigned developers understand the process, and makes it easier to spot problems as they code.
  • You can interleave rewrites with bug fixes and new features in your software, so your product doesn’t “stand still” during a rewrite and users won’t start to feel that it’s old and not updated much.
  • From a marketing perspective, rather than one big “we’ve made it better” announcement for a whole rewrite, your users will see a series of “we’ve made this feature better” announcements that make them feel you are working hard for them. Your marketers can use this for extended marketing exposure.
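
As a rough sketch of the compiler-switch idea mentioned in the list above: the names here (OrderService, LegacyPricingCalculator, NewPricingCalculator and the USE_NEW_PRICING symbol) are all invented for illustration, and a runtime feature flag would work just as well as a compile-time symbol.

public interface IPricingCalculator
{
    decimal Total(decimal unitPrice, int quantity);
}

public class LegacyPricingCalculator : IPricingCalculator
{
    // The original implementation stays in place
    public decimal Total(decimal unitPrice, int quantity) => unitPrice * quantity;
}

public class NewPricingCalculator : IPricingCalculator
{
    // The rewritten subset of functionality, e.g. with bulk discounting
    public decimal Total(decimal unitPrice, int quantity) =>
        quantity >= 100 ? unitPrice * quantity * 0.95m : unitPrice * quantity;
}

public class OrderService
{
    public decimal CalculateTotal(decimal unitPrice, int quantity)
    {
#if USE_NEW_PRICING
        // Define USE_NEW_PRICING in the build settings to switch over to the rewrite
        return new NewPricingCalculator().Total(unitPrice, quantity);
#else
        // Without the symbol defined, the original code keeps running and rollback is trivial
        return new LegacyPricingCalculator().Total(unitPrice, quantity);
#endif
    }
}

Because both implementations remain in the codebase, you can benchmark one against the other, or point the same unit tests at each in turn to check that they agree.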

When I make recommendations for adding a new technology to a system or for an improvement to existing functionality, I always map out a gentle path to move from where we are now to where we want to be. This lets project managers decide whether to do it all at once, or whether to spread it out over many iterations.


Random Obscure Analogy: DTOs are like mRNA

Well, if you understood the title then you probably already know what I’m going to say.

In programming, a DTO (Data Transfer Object) is a code class that just contains data (so not really an object by the OOP definition, but we’ll let that slide). DTOs are used to pass data in an architecturally agnostic manner between different parts of a system.  For example, a DTO may be used to pass data loaded by the data access layer up the stack to the business logic layer. The two different layers may have completely different architectural structures, but they both understand the simple structure of the DTO.
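
As a minimal sketch of this (CustomerDto, CustomerRepository and CustomerService are invented names rather than anything from a particular framework):

// A simple data carrier: no behaviour, and no knowledge of either layer's internal architecture
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// The data access layer fills the DTO from storage...
public class CustomerRepository
{
    public CustomerDto Load(int id)
    {
        // (database access omitted for brevity)
        return new CustomerDto { Id = id, Name = "Example", Email = "example@example.com" };
    }
}

// ...and the business logic layer consumes it without caring how it was loaded
public class CustomerService
{
    private readonly CustomerRepository _repository = new CustomerRepository();

    public bool CanEmail(int customerId)
    {
        CustomerDto customer = _repository.Load(customerId);
        return !string.IsNullOrEmpty(customer.Email);
    }
}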

In cell biology, mRNA is a nucleic acid sequence (like DNA, only simpler) that carries data from the DNA inside a cell’s nucleus, out through the nuclear membrane and into the cytoplasm (the rest of the cell outside the nucleus, containing the various ‘bits’ needed for cell function). In the cytoplasm it is ‘read’ by ribosomes that translate the data into protein molecules.

The analogy holds because the structure of the data in the DNA in the cell nucleus is ‘configured’ for handling the splitting of the cell in two and is not compatible with the construction of proteins (apart from anything else, it’s too large to fit through the nuclear membrane), so an intermediary form is used to ferry the data between the two systems.

My general opinion is that if a mechanism is produced by evolution, it is going to be optimal enough to use in my code.


Separation of Concerns should be done at all levels

Articles concerning separation of concerns generally deal with the separation of concerns between different classes.  However, there is more to this than separating out classes.

What is the point of separation of concerns? It is a good concept to follow because it stops changes (and their side-effects) in one area from ‘bleeding’ into code in other areas. It means that to fix a bug or change a feature you only have to make the change in one small place, and it contributes to code reuse because each piece of code has a single job that can often be used in different contexts.

So, separation of concerns is not really about classes, it’s about code generally.  This means you should apply separation of concerns at all levels within a solution.

For example:

Place your framework code (e.g. your generic code) into its own project. You can reuse this generic code in any solution; make it part of your personal toolkit.

Place your data access code in a separate project from your business logic.  This means you can transfer to some other data storage technology (like the cloud) without affecting your business logic. Most people already do this in a layered architecture.

Place your data structures in their own project (be they DTOs, EF entities, or ADO.NET datasets), separate from data access or business logic; if you decide to extend your system or implement a service-oriented architecture, having the data structures separated will make them easily accessible from multiple places. Again, many people already do this, and if you don’t, you really should.
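
To make the project-level separation a little more concrete, here is a rough sketch (all the type names are invented, and in a real solution each block would sit in its own project): the business logic depends only on an interface and a DTO, so the storage technology can be swapped without touching it.

// Shared data structures project: referenced by everything else, references nothing
public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The storage abstraction the business logic depends on
public interface IProductStore
{
    ProductDto Load(int id);
}

// Data access project: one implementation per storage technology
public class SqlProductStore : IProductStore
{
    public ProductDto Load(int id) => new ProductDto { Id = id, Name = "From SQL" };
}

public class CloudProductStore : IProductStore
{
    public ProductDto Load(int id) => new ProductDto { Id = id, Name = "From the cloud" };
}

// Business logic project: only knows about IProductStore and ProductDto
public class CatalogueService
{
    private readonly IProductStore _store;

    public CatalogueService(IProductStore store)
    {
        _store = store;
    }

    public string DescribeProduct(int id) => _store.Load(id).Name;
}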

Separation of concerns within a method

At the method level you want to separate out the ‘ceremony’ code that supports the business logic from the actual business logic itself. Ceremony code includes things like logging and parameter checking; things that you need to have for robust code, but aren’t specifically involved in the business logic.

You will often read SOLID proponents stating that you should create a base class with virtual (but not abstract) methods containing the business logic, then derive a class from it that wraps parameter checking around those methods, then derive a third class from that to wrap logging around them, and so on.
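
The shape of that approach looks roughly like this (a sketch with made-up names, just to show the layering):

public class OrderProcessor
{
    public virtual void Process(string orderId)
    {
        // The actual business logic lives here
    }
}

public class ValidatingOrderProcessor : OrderProcessor
{
    public override void Process(string orderId)
    {
        // Wrap parameter checking around the base method
        if (string.IsNullOrEmpty(orderId))
        {
            throw new System.ArgumentException("orderId must be supplied", nameof(orderId));
        }
        base.Process(orderId);
    }
}

public class LoggingOrderProcessor : ValidatingOrderProcessor
{
    public override void Process(string orderId)
    {
        // Wrap logging around the validating version
        System.Console.WriteLine("Processing order " + orderId);
        base.Process(orderId);
        System.Console.WriteLine("Processed order " + orderId);
    }
}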

To me this sounds awfully complicated and long-winded.

Aspect-oriented techniques can be a big help with the general aspects (like logging), and contract-based programming can help with parameter validation (especially Microsoft’s contract library with its compile-time checking). However, sometimes there isn’t an exact contract that performs the check you want, or you’re working on legacy code that doesn’t use such techniques and there isn’t time to retrofit them.

In such cases I would suggest the following technique.

Example code here is from a simple logging system.  It’s probably true that most developers would write it in the form:

private void TimedExecution(object state)
{
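    // Ceremony check (is there anything to do?) with the business logic nested inside it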
    if (_logQueue.Count > 0)
    {
        // Do business logic here
    }
}

What we have here is the basic business logic surrounded by error checking (or, from another point of view, execution optimisation) code. That’s not separate, that’s highly coupled. If there are multiple parameter validation checks to be made, the conditional statements build up and up and it can become difficult to see which code is actually the business logic. If you’re nesting all those conditional statements, your actual business logic might be indented so far that you have to scroll to the right just to see it.

If we invert the if-statement, we can make these two concerns separate blocks within the method:

private void TimedExecution(object state)
{
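    // Ceremony first: a guard clause that exits early when there is nothing to do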
    if (_logQueue.Count == 0)
    {
        return;
    }
    // Do business logic here
}

So we now have two separate chunks of code – the ceremony code followed by the business logic code. If there are multiple checks to perform, split them out into individual if statements, one after the other, each exiting the method or function if the parameter is bad. Having separate if statements is also a level of separation – each conditional block has one validation to make.
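
With more than one parameter to validate, it might look something like this (a sketch; the method name and parameters are invented for illustration):

private void WriteLogEntry(string message, System.IO.TextWriter sink)
{
    // Guard clauses: one check per if statement, each exiting immediately if an argument is bad
    if (string.IsNullOrEmpty(message))
    {
        return;
    }
    if (sink == null)
    {
        return;
    }

    // Business logic starts here, at a single level of indentation
    sink.WriteLine(message);
}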

This technique is both quicker and simpler than deriving layer upon layer of wrapper classes, it is easy to identify which code is which, and changes in one block won’t break the other (although any unit tests will exercise both). After you have finished writing or refactoring the method this way, you could extract all the conditional parameter checks into a separate method (returning a Boolean) if you really felt the need, or you can leave it intact if you find that easier to read and debug. This approach isn’t as cast-iron as inheriting through multiple levels of classes, but I’d place it at about 80% of the effect for 20% of the effort, which I think is a good pragmatic trade for projects that need to get finished.


Code does not have to be perfect

One thing that took me a while to figure out is that in business software, code doesn’t have to be perfect; bugs can be left in the system. I originally came from a science background, and the sort of code I started out on had to be perfect: a simulation with a bug is worthless and a piece of medical software with a bug can be fatal. Business software is never going to be fatal, so a minor bug or two is not a problem. Leaving known bugs in the system is anathema to many programmers (the stereotype of the programmer as obsessive perfectionist is a justified one). The problem is that as a developer you can become completely absorbed in the code you are writing and forget that this code is just one cog in the machine. One thing that you need to remember when writing business software:

writing business software is not about writing software, it’s about improving the business

So, does spending two days chasing down that annoying screen-update issue help the business, or could the users put up with it for now? Every bug fix is a business case: is it cheaper to get the developers to rewrite the code for that data breakage that occurs once in a blue moon, or to just fix the database manually when it happens? Working on code has a cost (your wages, at the very least), and that cost must be used to its best advantage. As frustrating as it might seem at times, leaving minor bugs in place is often the best business strategy.

Of course, the other side of the coin is that as these minor annoyances build up, the system moves more and more towards being viewed as ‘bad software’ by the people who actually use it, so we end up walking a line between wasting resources on fixing annoyances that don’t really bring business benefit and letting users think that we developers can’t do our jobs properly.
