Categories
programming unit testing

Too fast to live, too impatient to unit test

James Dean, James Dean, you bought it sight unseen.
You were too fast to live, too young to die, bye bye. — The Eagles

The benefits of unit-testing are enormous. I don’t think anyone can deny that, but to make unit-testing work for you, you actually have to write tests.

As I see it I have three (no, four) main problems with unit-testing, which ultimately boil down to the time and hassle it takes to write certain types of tests:

  1. The generally approved technique is to start with the test: write a test that fails for a specific piece of functionality. But you can’t even compile that failing test until you’ve written some code for it to call. So for a simple method that takes some arguments and returns a value, the routine becomes: code a stub, switch to the unit-test view, write a failing test, switch back and write the functionality (see the first sketch after this list). This method probably took 3 or 4 times longer than if I hadn’t tried to unit-test it at all. I know that I will get that payback later, but my life contains plenty of context switches already and having to add these extra ones is a pain. I know (but can’t find a link) that there’s a special tool for Ruby that automates this process: you write a test with a well-known name and it creates all the stubs and makes them throw exceptions. I need this feature!
  2. Sometimes you will end up with abstract classes in your design. These are awkward to test for a number of reasons, one of which is that since you can’t instantiate an abstract class you have to create a unit-testable implementation of it first. This means writing specially derived inner classes just for your abstract class (see the second sketch below).
  3. Unit-testing data sources. Some frameworks (Rails, for example) include the data source in the test. But I’ve heard the view that this is really a form of integration-testing, since you are testing more than just the class when you fetch a bunch of data from a database. This raises the ugly question of how to test data-source access classes without a data source. It’s similar to the abstract class problem in as much as you need a special instance of a class purely for unit-test purposes. To serve this purpose you can use a mocking framework like NMock or jMock or even Mockpp, or hand-roll a fake (see the third sketch below). Great though these tools are, it’s very easy to create a monster mock object that has all sorts of complex and hard-to-maintain behaviour.
  4. Tests need to be maintained. Like the code you’re testing, the tests themselves have to be maintained when refactoring work takes place, because they tend to start failing a lot. This maintenance usually comes in two forms: the really nasty problem that I’m glad I found, and the annoying little deficiency in the test that means it needs tweaking to work. This illustrates that tests themselves can have good and bad design, and we can save downstream time (with yet more upfront cost) if we try to make the tests less brittle.

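To make the first complaint concrete, here is a minimal sketch of that test-first two-step in VB.NET with NUnit. The class and method names are invented for illustration; the point is that the failing test comes first and forces an exception-throwing stub into existence before any real work can begin.

```vbnet
Imports System
Imports NUnit.Framework

' Step 1: the failing test, written before any production code exists.
<TestFixture()> Public Class PriceCalculatorTests
    <Test()> Public Sub ApplyDiscount_TakesTenPercentOff()
        Dim calculator As New PriceCalculator()
        Assert.AreEqual(90.0, calculator.ApplyDiscount(100.0))
    End Sub
End Class

' Step 2: the stub you are forced to write just to make the test compile.
' It throws, so the test fails (red) until real logic replaces it.
Public Class PriceCalculator
    Public Function ApplyDiscount(ByVal price As Double) As Double
        Throw New NotImplementedException()
    End Function
End Class
```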
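The second complaint looks like this in practice: to test anything on an abstract class you must first manufacture a trivial concrete subclass that exists purely for the test. Again, the names here are invented for illustration.

```vbnet
Imports NUnit.Framework

' The abstract class under test: the interesting logic lives in BuildReport.
Public MustInherit Class ReportGenerator
    Public Function BuildReport() As String
        Return "Header: " & GetBody()
    End Function
    Protected MustOverride Function GetBody() As String
End Class

' A derived class created solely so the abstract class can be instantiated.
Public Class TestableReportGenerator
    Inherits ReportGenerator
    Protected Overrides Function GetBody() As String
        Return "stub body"
    End Function
End Class

<TestFixture()> Public Class ReportGeneratorTests
    <Test()> Public Sub BuildReport_PrependsHeader()
        Dim generator As ReportGenerator = New TestableReportGenerator()
        Assert.AreEqual("Header: stub body", generator.BuildReport())
    End Sub
End Class
```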
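And the third: one way to keep the database out of a unit test is to hide it behind an interface and substitute a hand-rolled fake. This is a sketch using invented names rather than NMock’s or jMock’s actual API, and it also shows how quickly the fake starts to rival the production code in size.

```vbnet
Imports System.Collections.Generic
Imports NUnit.Framework

' The seam: production code depends on this interface, not on a database.
Public Interface ICustomerSource
    Function GetCustomerNames() As List(Of String)
End Interface

' A hand-rolled fake that stands in for the real data source under test.
Public Class FakeCustomerSource
    Implements ICustomerSource
    Public Function GetCustomerNames() As List(Of String) Implements ICustomerSource.GetCustomerNames
        Dim names As New List(Of String)
        names.Add("Ada")
        names.Add("Grace")
        Return names
    End Function
End Class

' The class under test takes its data source through the constructor.
Public Class CustomerReport
    Private ReadOnly _source As ICustomerSource
    Public Sub New(ByVal source As ICustomerSource)
        _source = source
    End Sub
    Public Function CountCustomers() As Integer
        Return _source.GetCustomerNames().Count
    End Function
End Class

<TestFixture()> Public Class CustomerReportTests
    <Test()> Public Sub CountCustomers_ReturnsNumberOfNames()
        Dim report As New CustomerReport(New FakeCustomerSource())
        Assert.AreEqual(2, report.CountCustomers())
    End Sub
End Class
```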
It’s the second and third points that prove to be a tremendous time-sink for me. You want to unit-test the behaviour, so you have to create a custom class or mock to do it. Before you know it the mock or custom class requires as much effort as the original code. So my development is half as effective.

I know that all this is an upfront cost that will get repaid many times over later. I even know that the writing of the custom classes will actually help to ensure that there is a clean contract between base classes and their derived classes. And I’ve already mentioned that good test design will save me time later. But all that upfront time adds up and I need to deliver something. Going back and doing it later is just not an option either. Once the code is written the chances that you’ll go back and add that very important unit test are slim. There’s new stuff to deliver.

I don’t think that James Dean would have written software, and I’m positive that if he had he wouldn’t have written unit-tests. But if I’m to write code that ‘lives’, I need to. It’s just forcing myself to find that time …

Categories
article programming

Shifting Sands of Time (or why Software Really Does Decay)

I’ve become morbidly obsessed with decay these days. My dentist doesn’t help but she’s nice about my receding gums and minor cavities so I let her off. But the more I look the more I see the decay, all around me.

I recently took this photograph, “In the house of Bamboo”, on the back streets of my town. The “Bamboo Bar” is long since abandoned, but the newish bicycle suggests that perhaps there’s life inside. This abandonment theme is so common in Cyprus that it’s almost invisible. In some places the contrasts are stark: the new and glitzy sits yards from the old and beaten. Large stretches of land are waste ground, not because it’s a waste-land but because the owners bought the plot a long time ago and will build a house. One day. Until that day the weeds grow tall and the tumble-rubbish piles high. Things in Cyprus take time; I have to remind myself to never forget that.

This theme pervades all areas of life, and I guess it happens everywhere; it’s just that the contrast here seems particularly stark.

But what of software? It seems ludicrous to claim that software can decay: it is abstract, has no moving parts and isn’t exposed to the environment. This seems true enough. But suppose such a notion of decay could be applied to software. If there’s no physical environment to aid decay, then what environmental factors could there be?

Changing Requirements = Shifting Sands

For me this is the biggest challenge that faces us as software engineers. The shifting sands of requirements are real. Pinning them down for the delivery of the initial project is probably vital, but once you have released the software the requirements can and will run free. Mentioning to your boss that ‘we didn’t factor that into the original design’ sounds like an excuse and could seriously damage your bonus.

But what if new features are successfully added? New features can upset whatever balance the application may have had. Over time those new features could become poorly understood dead-ends that pollute the source pool forever, or even become central aspects of the system (possibly eclipsing the original intended purpose of the software).

This consuming phenomenon can be seen in the physical world too. I used to own a series of bangers, each car worse than the last. The worst purchase drove for about 100 miles and then needed a new engine. Others became accidental projects for an impatient and inept mechanic. Me. I remember the dawning horror when I realised that part of the problem with my Mini 1000 was that I was putting new parts into it. The new parts would operate at peak efficiency amongst a rotting husk of older parts, and those old parts, understandably, promptly failed under the new load. I did a lot of walking in those days.

Analogies are often imperfect, and comparing cars to software is no different, because software components don’t wear out! However, when the requirements change you might replace one component with one that serves a slightly different purpose. Our replacement component might compromise the original design assumptions. We can do this successfully for some time, but we will eventually end up with a loss of direction. If things are going really badly you might even have a loss of vision.

Loss Of Vision & Bad Maintenance = Death by a thousand cuts

Every large project will have at least one or two architects. Those people are the knowledge-holders of why the system was built the way it was, what trade-offs were made and why. Over time you will lose these people, and when they’re gone you’ve got a problem. If the system is sufficiently large you will probably find that no one person has that original knowledge, and when the requirements change the people you still have end up misunderstanding the code or taking shortcuts rather than refactoring. And this is undoubtedly because, as Joel Spolsky says:

It’s harder to read code than to write it.

So, in my opinion any medium-to-large in-house software project decays. There’s no physical decay, but any useful bespoke software project will probably change as the business changes. The more time passes, the further the source pool migrates from order to entropy, and the harder to maintain and less efficient it becomes.

Running to stand still

The inevitability of this decay means you’ve got to do some things up-front in a new project to try and prevent your software ending up like my beloved Mini 1000:

  • An identifiable design. This is largely intangible, but if a single person designed the original system and the coding standards were reasonably tight you might produce an identifiable design which others can follow. Of course design operates on many levels in software, and when it comes to maintenance the chances are only the low-level design will be seen; the high-level design will probably be invisible to most.
  • Sufficient transference of understanding, at all levels of the code, before the knowledge-holders leave. This probably also broaches the tricky topic of documentation.
  • Finally, some of that knowledge can be encapsulated in tests (unit and integration), so that when we do end up on shifted sand we can at least assert that what we have now is of comparable quality to what we had then.

As for the “Bamboo Bar”, I think its ship has sailed. My Mini 1000 is probably starring in a Scrapheap Challenge by now. But I’m not sad; decay in some things can be beautiful. Sadly, my software is like my teeth. Decay doesn’t become them.

Categories
article programming

Love Your Code

Someone said to me the other day, “I love that code. It’s so structured.” This was the first time anyone had ever admitted to me that they could indeed love code. Before I lifted the receiver to call the men in white coats I tried to absorb what he’d said. What he really meant, I suspect, is that he appreciated the code. But the more I thought about it, the more I thought that we do in fact enter into relationships with the things we write.

This is further evidence, I guess, that writing software is a creative process like painting a picture or designing a building. It becomes emotional. I suspect I’m far from alone when I say that I’ve spent a significant proportion of my career in an emotional tug-of-war with the programs I’m working on or have written myself. They have a sort-of-life all their own, and usually I’m the Doctor Kildare keeping them alive. I haven’t lost a patient yet. But the most curious part of this form of creation, at least to outsiders, is that it has no permanence; the structure that my friend so admired is almost entirely abstract.

This impermanence makes the code we write invisible to the people who only see what the code does. They have no knowledge, and probably little understanding, of what it took to make that program. When those people are the ones making the decisions then, Houston, we have a problem.

But there’s clearly good reason to be in love with your code. The more you love it the better chance it has of doing its job, working most of the time and failing rarely. But the best part of all is that by expending that little extra effort you’ve made something that someone else can not only appreciate and understand but also modify themselves.

The stark reality of writing code that will get used for a purpose (and why would you do anything else?) is that it will very soon need to be changed. If that code is successful it will only be a short matter of time before a new feature is added or, in the extreme, the code is used for something entirely other than what it was designed for. I’ve seen that happen on two very large projects (>100,000 lines of code), and the fact that it could be done at all is evidence that the original code was well loved. In fact it would be difficult, I suspect, for that code to exist at all if it wasn’t loved. But once the purpose of the software is changed all bets are probably off (and that’s the subject of another post!).

Consider then what happens when we don’t love the code we write. You don’t love it because you built it to throw away, or you built it under pressure, always meaning to come back and fix it up later. I’d be the first to admit that I’ve done this. There are probably a few repercussions of being in this position. The chances are that the user on the receiving end of this unloved creation will get frustrated with it. That frustration will find its way back to you. Beware.

Worse is when you end up being the one frustrated with it and unable, for whatever reason, to change it. It’s like being Dr Frankenstein, watching your monster rip up the town while you sit in your chair and scratch your head for ways to bring it under control. Perhaps if you hadn’t sewn the monster’s head on backwards you wouldn’t be in this position? Well, you’d better break out the sewing needles, friend. You’re going to need them.

Categories
.NET article patterns programming windows

MVP (a.k.a MVC) in VB.NET

Model-view-controller is an old, old, old but very good idea. It encourages the separation of model, presentation and control from each other. It’s used in so many places I can’t name them all, and frameworks like Struts and Ruby-On-Rails actually enforce it.

For a long time it has seemed to me that Microsoft lagged behind in allowing us to use this idea. Their once-flagship product, Visual Basic 6, makes it almost impossible to write good MVC code. First of all, VB6 has no real inheritance, which makes writing good models difficult. Secondly, if those models should contain any items that generate events then those items cannot be defined in a class module and must be made public. Sure, you can simulate and work around these things by various means, but in the end you will just be fighting the language. And that is never good.

So it’s good to see that VB.NET, or .NET 2.0 to be precise, not only has excellent object support but also builds a mechanism that can be used for MVC right into the language.
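To give a flavour of it, here is a minimal MVP sketch in VB.NET that leans on the language’s built-in Event, WithEvents and Handles keywords for the wiring. All the names (IGreetingView, GreetingModel, GreetingPresenter) are invented for illustration.

```vbnet
Imports System

' The view contract: the presenter only ever sees this interface.
Public Interface IGreetingView
    Event SubmitClicked As EventHandler
    Property UserName() As String
    Sub ShowGreeting(ByVal message As String)
End Interface

' The model holds the domain logic, free of any UI concerns.
Public Class GreetingModel
    Public Function GreetingFor(ByVal name As String) As String
        Return "Hello, " & name & "!"
    End Function
End Class

' The presenter wires view to model; WithEvents/Handles does the plumbing.
Public Class GreetingPresenter
    Private WithEvents _view As IGreetingView
    Private ReadOnly _model As New GreetingModel()

    Public Sub New(ByVal view As IGreetingView)
        _view = view
    End Sub

    ' Fired by whichever class implements the view; the presenter reacts.
    Private Sub OnSubmit(ByVal sender As Object, ByVal e As EventArgs) Handles _view.SubmitClicked
        _view.ShowGreeting(_model.GreetingFor(_view.UserName))
    End Sub
End Class
```

A Windows Form would then implement IGreetingView and raise SubmitClicked from its button-click handler. Because the presenter only ever talks to the interface, it can be unit-tested against a fake view with no form in sight.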