
Software Dream-ualisation

Ok, I admit it: by some measures I am a sad, sad individual. Why? Because sometimes I dream about programming. Now, those who know me might be thinking that in my dreams I sit amongst monster rigs hacking away at some monster problem.

I have been told that the best crackers in the world can do this under 60 minutes but unfortunately I need someone who can do this under 60 seconds. — Gabriel, from the movie Swordfish

Sadly the un-reality of my slumber is a little more prosaic and not like Swordfish at all. These dreams are always about bizarrely specific programming tasks that would require a small amount of thought if I were awake, but since I’m not conscious they are a little harder.

Last night was different: I dreamt of nothing and woke to the sound of vomiting children (my own). Once that drama was resolved I couldn’t find a way to drift off to sleep again, because for some strange reason I’d started thinking about software visualisation. I don’t even want to think about how I got onto that train of thought.

Anyway, the thought went something like this. Could software have colour? I couldn’t see why not. If software had colour, then would it be useful? I reasoned that yes, giving software colour could be made useful. For instance, colours could be assigned to code patterns, and this might aid the understanding of that code. Since code is easier to write than to read, this seemed like a worthwhile aim.

But then I thought perhaps a better visualisation would be to colour and orient objects on a plane based on the number of messages each object issues and receives, or some other arbitrary scheme. This sounded like a really rather jolly idea; I resolved to investigate it more fully in the morning and promptly fell asleep.
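
As a toy illustration of the scheme (a hypothetical sketch in Java, my own invention rather than any real tool): map each object’s message count onto a colour ramp, so quiet objects come out blue and chatty ones shade towards red.

    import java.awt.Color;

    // Toy sketch: colour an object by how many messages it issues/receives.
    public class MessageHeat {
        // Linearly interpolate blue (quiet) towards red (chatty).
        public static Color colourFor(int messageCount, int maxCount) {
            float heat = Math.min(1.0f, (float) messageCount / maxCount);
            return new Color(heat, 0.0f, 1.0f - heat);
        }

        public static void main(String[] args) {
            System.out.println(colourFor(5, 100));   // mostly blue: a quiet object
            System.out.println(colourFor(95, 100));  // mostly red: a chatterbox
        }
    }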

When daylight arrived I found a link to this research that does something very similar to what I was dream-scribing, but the site hadn’t been touched since 1998. Other research, whilst relevant, seemed similarly dormant. The most recent research I could find is here; it uses Vizz3D to turn software into ‘cities’ that can be navigated. This is indeed exciting stuff, even if it was done in C/C++.

It has long fascinated me that the world of software is a dynamic, ever-shifting place, yet the tools with which we work on that software (especially for very large projects) don’t really help us conceptualise it. Indeed, the code most of us see in the maintenance phase sits at a lower level of abstraction than the overall structure of the software, and that structure can be very hard to see by just looking at the code.

Sure, we can use various tools like profilers and coverage analysers to view different dimensions of the software plane, but they are not the whole picture, and compositing those analyses into a coherent whole is still not easy.

Fast-forward ten years and perhaps DevStudio or Eclipse will ship with a project visualiser. The information transmitted in a single visualisation could save hours of code-grokking. It probably won’t change the world but it would be very, very useful.

But perhaps in ten years we will have brains the size of watermelons and be able to program computers using only our minds (like in the movie Firefox). I guess it’s time to go back to sleep now. Sweet dreams.


Too fast to live, too impatient to unit test

James Dean, James Dean, you bought it sight unseen.
You were too fast to live, too young to die, bye bye. — The Eagles

The benefits of unit-testing are enormous. I don’t think anyone can deny that, but to make unit-testing work for you, you actually have to write tests.

As I see it, I have three (no, four) main problems with unit-testing, which ultimately boil down to the time and hassle it takes to write certain types of tests:

  1. The generally approved technique is to start with the test: write a test that fails for a specific piece of functionality. You can’t even compile that failing test until you’ve written some code for it to call. So for a simple method that takes some arguments and returns a value, you end up coding a stub, switching to the unit-test view, writing the failing test, then switching back to write the functionality. The result probably takes three or four times longer to write than if I hadn’t tried to unit-test it. I know that I will get that payback later, but my life contains plenty of context switches already and having to add these extra ones is a pain. I know (but can’t find a link) that there’s a tool for Ruby that automates this process: you write a test with a well-known name, and it creates all the stubs and makes them throw exceptions. I need this feature! (A sketch of what such a tool might generate follows this list.)
  2. Sometimes you will end up with abstract classes in your design. These are awkward to test for a number of reasons, one of which is that you can’t instantiate an abstract class, so you have to create a unit-testable implementation just to test it. This means writing specially derived inner classes for your abstract class.
  3. Unit-testing data sources. Some unit-testing frameworks (Rails, for example) include the data sources in the test. But I’ve heard the view that this is really a form of integration-testing, since you are testing more than just the class when you pull a bunch of data from a database. This raises the ugly question of how to test data-source access classes without a data source. It’s a similar problem to the abstract class problem, inasmuch as you need a special instance of a class purely for unit-test purposes (the second sketch below shows one way to tackle both at once). To serve this purpose you can use a mocking framework like NMock or jMock or even Mockpp! Great though these tools are, it’s very easy to create a monster mock object with all sorts of complex and hard-to-maintain behaviour.
  4. Tests need to be maintained. Like the code they exercise, the tests themselves have to be maintained when refactoring work takes place, because they tend to start failing a lot. This maintenance usually comes in two forms: the really nasty problem that I’m really glad I found, and the annoying little deficiency in the test that means it needs tweaking to pass. This illustrates that tests themselves can have good and bad design, and we can save downstream time (with yet more upfront cost) if we try to make the tests less brittle.
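
Here’s a sketch of the first point, in Java with JUnit (the class names are made up for illustration): the test comes first, and the stub it forces into existence simply throws, which is exactly what I’d want a stub-generating tool to emit for me.

    import junit.framework.TestCase;

    // Hypothetical stub: the sort of thing a generating tool might emit.
    // It exists only so the test below compiles; it throws until the
    // real functionality replaces it.
    class PriceCalculator {
        public double totalWithTax(double net, double rate) {
            throw new UnsupportedOperationException("not yet implemented");
        }
    }

    // The test is written first and fails (red) until the stub above
    // is filled in with a real implementation.
    public class PriceCalculatorTest extends TestCase {
        public void testTotalWithTaxAddsTaxToNetAmount() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(110.0, calc.totalWithTax(100.0, 0.10), 0.001);
        }
    }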

It’s the second and third points that prove to be a tremendous time-sink for me. You want to unit-test the behaviour, so you have to create a custom class or mock to do it, and before you know it the mock or custom class requires as much effort as the original code. So my development is half as effective.
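
To show what I mean by that scaffolding, here’s a hand-rolled sketch in Java (again, the names are invented): a single anonymous subclass serves both as the specially derived class needed to instantiate the abstract type and as a fake data source, so the logic can be tested without a database. A framework like jMock automates this kind of thing, with the monster-mock caveat above.

    import junit.framework.TestCase;
    import java.util.Arrays;
    import java.util.List;

    // Hypothetical abstract data-access class: countNames() is the logic
    // under test; fetchNames() is the part that would talk to a database.
    abstract class CustomerRepository {
        protected abstract List<String> fetchNames(); // real subclass hits the DB

        public int countNames() {
            return fetchNames().size();
        }
    }

    public class CustomerRepositoryTest extends TestCase {
        public void testCountNamesCountsWhatTheDataSourceReturns() {
            // One anonymous subclass doubles as the derived test class
            // and the hand-rolled fake data source.
            CustomerRepository repo = new CustomerRepository() {
                protected List<String> fetchNames() {
                    return Arrays.asList("Ada", "Grace");
                }
            };
            assertEquals(2, repo.countNames());
        }
    }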

I know that all this is an upfront cost that will get repaid many times over later. I even know that the writing of the custom classes will actually help to ensure that there is a clean contract between base classes and their derived classes. And I’ve already mentioned that good test design will save me time later. But all that upfront time adds up and I need to deliver something. Going back and doing it later is just not an option either. Once the code is written the chances that you’ll go back and add that very important unit test are slim. There’s new stuff to deliver.

I don’t think that James Dean would have written software, and I’m positive that if he had, he wouldn’t have written unit-tests. But if I’m to write code that ‘lives’ I need to. It’s just forcing myself to find that time …


Shifting Sands of Time (or why Software Really Does Decay)

I’ve become morbidly obsessed with decay these days. My dentist doesn’t help, but she’s nice about my receding gums and minor cavities, so I let her off. But the more I look, the more I see the decay, all around me.

I recently took this photograph on the back streets of my town.

In the house of Bamboo

The “Bamboo Bar” is long since abandoned, but the newish bicycle suggests that perhaps there’s life inside. This abandonment theme is so common in Cyprus that it’s almost invisible. In some places the contrasts are stark: the new and glitzy sits yards from the old and beaten. Large stretches of land are waste ground, not because it’s a waste-land but because the owners bought the plot a long time ago and will build a house. One day. Until that day the weeds grow tall and the tumble-rubbish piles high. Things in Cyprus take time; I have to remind myself to never forget that.

This theme pervades all areas of life, and I guess it happens everywhere; it’s just that the contrast seems starker here.

But what of software? It seems ludicrous to claim that software can decay, because it is abstract, has no moving parts and isn’t exposed to the environment. This seems true enough. But suppose such a notion of decay could be applied to software. If there’s no physical environment to aid decay, then what environmental factors could there be?

Changing Requirements = Shifting Sands

For me this is the biggest challenge that faces us as software engineers. The shifting sands of requirements are real. Pinning them down for the delivery of the initial project is probably vital, but once you have released the software the requirements can and will run free. Mentioning to your boss that ‘we didn’t factor that into the original design’ sounds like an excuse and could seriously damage your bonus.

But what if new features are successfully added? New features can upset whatever balance the application may have had. Over time, those new features could become poorly understood dead-ends that pollute the source pool forever, or even become central aspects of the system (possibly eclipsing the original intended purpose of the software).

This consuming phenomenon can be seen in the physical world too. I used to own a series of bangers, each car worse than the last. The worst purchase drove for about 100 miles and then needed a new engine. Others became accidental projects for an impatient and inept mechanic: me. I remember the dawning horror when I realised that part of the problem with my Mini 1000 was that I was putting new parts into it. The new parts would operate at peak efficiency amongst a rotting husk of older parts, and those old parts, understandably, promptly failed under the new load. I did a lot of walking in those days.

Analogies are often imperfect, and comparing cars to software is no different because software components don’t wear out! However, when the requirements change you might replace one component with another that serves a slightly different purpose. That replacement might compromise the original design assumptions. We can do this successfully for some time, but we will eventually end up with a loss of direction. If things are going really badly, you might even have a loss of vision.

Loss Of Vision & Bad Maintenance = Death by a thousand cuts

Every large project will have at least one or two architects. Those people are the knowledge-holders: they know why the system was built, what trade-offs were made, and why. Over time you will lose these people, and when they’re gone you’ve got a problem. If the system is sufficiently large, you will probably find that no one person holds that original knowledge, and when the requirements change the people you still have end up misunderstanding the code or taking shortcuts rather than refactoring. And this is undoubtedly because, as Joel Spolsky puts it:

It’s harder to read code than to write it.

So, in my opinion, any medium-to-large in-house software project decays. There’s no physical decay, but any useful bespoke software project will probably change as the business changes. The more time passes, the further the source pool migrates from order to entropy, and the harder to maintain and less efficient it becomes.

Running to stand still

The inevitability of this decay means you’ve got to do some things up-front in a new project to try and prevent your software ending up like my beloved Mini 1000:

  • An identifiable design. This is largely intangible, but if a single person designed the original system and the coding standards were reasonably tight, you might produce an identifiable design which others can follow. Of course, design operates on many levels in software, and when it comes to maintenance the chances are that only the low-level design will be seen; the high-level design will probably be invisible to most.
  • Sufficient transference of understanding, at all levels of the code, before the knowledge-holders leave. This probably also broaches the tricky topic of documentation.
  • Finally, some of that knowledge can be encapsulated in tests (unit and integration), so that when we do end up on shifted sand we can at least assert that what we have now is of comparable quality to what we had then. (A small sketch of what I mean follows this list.)
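
By way of illustration, here’s a hypothetical Java example (the legacy class and its odd truncation rule are invented): a characterisation test that doesn’t claim the behaviour is right, only that this is what the system does today, so a later refactoring can’t change it silently.

    import junit.framework.TestCase;

    // Hypothetical legacy class whose exact behaviour nobody remembers choosing.
    class LegacyInvoiceFormatter {
        public String format(double amount) {
            // The original author truncated rather than rounded.
            return "GBP " + (long) amount;
        }
    }

    // A characterisation test pins down today's behaviour; if a refactoring
    // changes it, the test fails and the knowledge isn't lost silently.
    public class LegacyInvoiceFormatterTest extends TestCase {
        public void testAmountsAreTruncatedNotRounded() {
            assertEquals("GBP 10", new LegacyInvoiceFormatter().format(10.99));
        }
    }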

As for the “Bamboo Bar”, I think its ship has sailed. My Mini 1000 is probably starring in a Scrapheap Challenge by now. But I’m not sad; decay in some things can be beautiful. Sadly, my software is like my teeth. Decay doesn’t become them.