Categories
article step back

Step back: The Sinclair ZX81

Sometime in 1983 my mother noticed that a high street chemist, which also sold photographic and electrical goods, was selling ZX81 computers for £20. It wasn’t Christmas or a birthday, and it wasn’t even something I had asked for. She just thought it might be useful for me.

When we halved the polystyrene casing we revealed the little black marvel, purest black with its name embossed in red, in case you ever forgot. I ran my fingers across its highly sensitive keypad and was sure I was witnessing something special. The form of the ZX81 is well known, but just as prominent in my memory is the little blue book that was the manual.

ZX81 Manual Front Cover

I spent a lot of time reading and referring to the ZX81 BASIC manual, its artwork as much imprinted on my mind as the little black flashing  K  that was the ‘ear’ of the ZX81. The cover of the manual is futuristic, with two tiny spacecraft parked on top of some space port or other. Completely stark raving crazy, but its futuristic look added to the mystique of this little black box. One of the great strengths of Sinclair Research was their marketing. Their products, although remarkable for the time, were poorly built and unreliable. But somehow they created desire.

The most memorable part of the book was a clock program from Chapter 19: Time & Motion. It’s hard to say quite what was so magical about this program, but I was awestruck when I ran it. The chapter never quite states that the code will draw a clock face and a second hand, but after typing it in and pressing ‘RUN’ a numbered dial slowly appears and then a dot sweeps around its outer edge. For your pleasure I found a ZX81 emulator, typed the program in again, and have recreated the magic for you right here.

Chapter 19: Time & Motion

Pretty heady stuff, I’m sure you’ll agree. Over the next two or so years I spent a lot of time buying Sinclair magazines and typing in programs from them. It was great: you bought a magazine that you could read, and then you could type in a program and get a game to play as well. All for 60p. The games mostly blew goats, and I spent more time checking my typing than playing them, but that didn’t really matter. One of Sir Clive’s great ideas was to attach keywords to the keys themselves. This meant there was no real need for a full parser, because the ZX81 knew what to expect and would make the keyboard accept only the right keystrokes at the right time. Whilst not terribly flexible, this solution also meant a whole lot less typing.

I can’t claim that I learned a lot about computers or programming in those halcyon days, but my clichéd 1,000-mile journey had started with a single clichéd step.

Oh Sinclair, oh my Sinclair ZX81,
We used to laugh and have such fun,
During our time together I have no regret,
I cherish the day that we met.

Everything about you was so damn fine,
From your RAM pack wobble to your sleek lines,
But now you do what time says you must,
You sit in a corner and you pick up dust.

Sniffle.

Categories
article programming

Software Dream-ualisation

Ok, I admit it, by some measures I am a sad, sad individual. Why? Because sometimes I dream about programming. Now those that know me might be thinking that in my dreams I sit in amongst monster rigs hacking away at some monster problem.

I have been told that the best crackers in the world can do this under 60 minutes but unfortunately I need someone who can do this under 60 seconds. — Gabriel, from the movie Swordfish

Sadly the un-reality of my slumber is a little more prosaic and not like Swordfish at all. These dreams are always bizarrely specific programming tasks that would require a small amount of thought if I was awake but since I’m not conscious, they are a little harder.

Last night was different, I dreamt of nothing and woke to the sound of vomiting children (my own). Once that drama was resolved I couldn’t find a way to drift off to sleep again, because for some strange reason I’d started thinking about software visualisation. I don’t even want to think about how I got onto that train of thought.

Anyway, the thought went something like this: could software have colour? I couldn’t see why not. If software had colour, would it be useful? I reasoned that yes, it could be designed to be useful. For instance, colours could be assigned to code patterns, and this might ease the understanding of that code. Since code is easier to write than to read, this seemed like a worthwhile aim.

But then I thought perhaps a better visualisation would be to colour and orient objects on a plane based on the number of messages each object issues and receives, or some other arbitrary scheme. This sounded like a really rather jolly idea, I resolved to investigate it more fully in the morning, and I promptly fell asleep.

When daylight arrived I found a link to this research that does something very similar to what I was dream-scribing but the site hadn’t been touched since 1998. Other research, whilst relevant, seemed similarly dormant. The most recent research I could find is here. It uses Vizz3D to turn software into ‘cities’ that can be navigated. This is indeed exciting stuff, even if it was done in C/C++.

It has long fascinated me that the world of software is a dynamic, ever-shifting place, yet the tools with which we work on that software (especially on very large projects) don’t really help us conceptualise it. Indeed, the code most of us see in the maintenance phase is at a lower level of abstraction than the overall structure of the software, and that structure can be very hard to see by just looking at the code.

Sure, we can use tools like profilers and coverage analysers to view different dimensions of the software plane, but they are not the whole picture, and compositing those analyses into a coherent whole is still not easy.

Fast forward ten years and perhaps DevStudio or Eclipse will ship with a project visualiser. The information transmitted in a single visualisation could save hours of code-grokking. It probably won’t change the world, but it would be very, very useful.

But perhaps in ten years we will have brains the size of water-melons and be able to program computers using only our minds (like in Firefox). I guess it’s time to go back to sleep now. Sweet dreams.

Categories
programming unit testing

Too fast to live, too impatient to unit test

James Dean, James Dean, you bought it sight unseen.
You were too fast to live, too young to die, bye bye. — The Eagles

The benefits of unit-testing are enormous. I don’t think anyone can deny that, but to make unit-testing work for you, you actually have to write the tests.

As I see it I have three (no, four) main problems with unit-testing, which ultimately boil down to the time and hassle it takes to write certain types of tests:

  1. The generally approved technique is to start with a test that fails for a specific piece of functionality. The failing test won’t even compile until you’ve written some code for it to call, so for a simple method that takes a few arguments and returns a value this means coding a stub, switching to the unit-test view, writing the failing test, switching back, and writing the functionality. This probably takes 3 or 4 times longer than if I hadn’t tried to unit-test it. I know that I will get that payback later, but my life contains plenty of context switches already and having to add these extra ones is a pain. I know (but can’t find a link) that there’s a tool for Ruby that automates this process: you write a test with a well-known name and it creates all the stubs, making them throw exceptions. I need this feature!
  2. Sometimes you will end up with abstract classes in your design. These are awkward to test for a number of reasons, one of which is that you can’t instantiate an abstract class, so you have to create a unit-testable implementation to test it. This means writing specially derived inner classes for your abstract class.
  3. Unit-testing data sources. Some frameworks (Rails, for example) include the data source in the test, but I’ve heard the view that this is really a form of integration testing, since you are testing more than just the class when you fetch a bunch of data from a database. This raises the ugly question of how to test data-source access classes without a data source. It’s similar to the abstract-class problem in as much as you need a special instance of a class for unit-test purposes. To serve this purpose you can use a mocking framework like NMock or jMock or even Mockpp! Great though these tools are, it’s very easy to create a monster mock object with all sorts of complex and hard-to-maintain behaviour.
  4. Tests need to be maintained. Like the code you’re testing, the tests themselves have to be maintained when refactoring takes place, because they tend to start failing a lot. This maintenance usually comes in two forms: the really nasty problem that I’m glad I found, and the annoying little deficiency that means the test needs tweaking to pass. It illustrates that tests themselves have good and bad designs, and that we can save downstream time (at yet more upfront cost) if we make the tests less brittle.
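Points 1 and 3 can be sketched in a few lines of Ruby (all class and method names here are invented for illustration, not taken from any real project): write the assertion first, and hand-roll a tiny fake data source so the test exercises only the class under test, not a database.

```ruby
# A hand-rolled fake data source: it responds to the same method the real
# database wrapper would, but returns canned rows. (FakeConnection and
# OrderStore are hypothetical names for this sketch.)
class FakeConnection
  def initialize(rows)
    @rows = rows
  end

  def select_all(_sql)
    @rows
  end
end

# The class under test, written *after* the failing assertion below existed.
class OrderStore
  def initialize(connection)
    @connection = connection
  end

  def total_value
    @connection.select_all("SELECT value FROM orders").sum { |row| row[:value] }
  end
end

# The 'test first' step, without any framework: state the behaviour we
# want, watch it fail, then write OrderStore until it passes.
fake  = FakeConnection.new([{ value: 10 }, { value: 32 }])
store = OrderStore.new(fake)
raise "expected 42" unless store.total_value == 42
```

The fake stays small precisely because it only answers the one question the test asks; the moment it starts accumulating behaviour it is on its way to becoming the monster mock described above.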

It’s the second and third points that prove to be a tremendous time-sink for me. You want to unit-test some behaviour, so you have to create a custom class or mock to do it. Before you know it the mock or custom class requires as much effort as the original code, and my development is half as effective.

I know that all this is an upfront cost that will get repaid many times over later. I even know that the writing of the custom classes will actually help to ensure that there is a clean contract between base classes and their derived classes. And I’ve already mentioned that good test design will save me time later. But all that upfront time adds up and I need to deliver something. Going back and doing it later is just not an option either. Once the code is written the chances that you’ll go back and add that very important unit test are slim. There’s new stuff to deliver.

I don’t think that James Dean would have written software, and I’m positive that if he had he wouldn’t have written unit-tests. But if I’m to write code that ‘lives’, I need to. It’s just forcing myself to find that time …

Categories
article programming

Shifting Sands of Time (or why Software Really Does Decay)

I’ve become morbidly obsessed with decay these days. My dentist doesn’t help but she’s nice about my receding gums and minor cavities so I let her off. But the more I look the more I see the decay, all around me.

I recently took this photograph on the back streets of my town.

In the house of Bamboo

The “Bamboo Bar” is long since abandoned, but the newish bicycle suggests that perhaps there’s life inside. This abandonment theme is so common in Cyprus that it’s almost invisible. In some places the contrasts are stark: the new and glitzy sits yards from the old and beaten. Large stretches of land are waste ground, not because it’s a waste-land but because the owners bought the plot a long time ago and will build a house. One day. Until that day the weeds grow tall and the tumble-rubbish piles high. Things in Cyprus take time. I have to remind myself to never forget that.

This theme pervades all areas of life, and I guess it happens everywhere; it’s just that the contrast here seems stark.

But what of software? It seems ludicrous to claim that software can decay: it is abstract, has no moving parts and isn’t exposed to the environment. This seems true enough. But suppose such a notion of decay could be applied to software. If there’s no physical environment to cause the decay, then what environmental factors could there be?

Changing Requirements = Shifting Sands

For me this is the biggest challenge that faces us as software engineers. The shifting sands of requirements are real. Pinning them down for the delivery of the initial project is probably vital but once you have released the software the requirements can and will run free. Mentioning to your boss that ‘we didn’t factor that in to the original design’ sounds like an excuse and could seriously damage your bonus.

But what if new features are successfully added? New features can upset whatever balance the application may have had. Over time those new features could become poorly understood code dead-ends that pollute the source pool forever, or even become central aspects of the system (possibly eclipsing its original intended purpose).

This consuming phenomenon can be seen in the physical world too. I used to own a series of bangers, each car worse than the last. The worst purchase drove for about 100 miles and then needed a new engine. Others became accidental projects for an impatient and inept mechanic: me. I remember the dawning horror when I realised that part of the problem with my Mini 1000

Mini 1000

was that I was putting new parts into it. A new part would operate at peak efficiency amongst a rotting husk of older parts, and the old parts surrounding it, understandably, promptly failed under the new load. I did a lot of walking in those days.

Analogies are often imperfect, and comparing cars to software is no different, because software components don’t wear out! However, when the requirements change you might replace one component with one that serves a slightly different purpose, and the replacement might compromise the original design assumptions. We can do this successfully for some time, but we will eventually end up with a loss of direction. If things are going really badly you might even have a loss of vision.

Loss Of Vision & Bad Maintenance = Death by a thousand cuts

Every large project will have at least one or two architects. Those people are the knowledge-holders: they know why the system was built, what trade-offs were made, and why. Over time you will lose these people, and when they’re gone you’ve got a problem. If the system is sufficiently large, no one person will have that original knowledge, and when the requirements change the people you still have end up misunderstanding the code or taking shortcuts rather than refactoring. And this is undoubtedly because, as Joel says:

It’s harder to read code than to write it.

So, in my opinion, any medium-to-large in-house software project decays. There’s no physical decay, but any useful bespoke software will change as the business changes. The more time passes, the further the source pool migrates from order to entropy, and the harder to maintain and less efficient it becomes.

Running to stand still

The inevitability of this decay means you’ve got to do some things up-front in a new project to try to prevent your software ending up like my beloved Mini 1000:

  • An identifiable design. This is largely intangible, but if a single person designed the original system and the coding standards were reasonably tight you might produce an identifiable design which others can follow. Of course, design operates on many levels in software, and when it comes to maintenance the chances are that only the low-level design will be seen; the high-level design will probably be invisible to most.
  • Sufficient transference of understanding, at all levels of the code, when the knowledge-holders leave. This probably also broaches the tricky topic of documentation.
  • Finally, some of that knowledge can be encapsulated in tests (unit and integration), so that when we do end up on shifted sand we can at least assert that what we have now is of comparable quality to what we had then.

As for the “Bamboo Bar”, I think its ship has sailed. My Mini 1000 is probably starring in Scrapheap Challenge by now. But I’m not sad; decay in some things can be beautiful. Sadly, my software is like my teeth. Decay doesn’t become them.

Categories
article programming

Love Your Code

Someone said to me the other day, “I love that code. It’s so structured.” This was the first time anyone had ever admitted to me that they could indeed love code. Before I lifted the receiver to call the men in white coats I tried to absorb what he’d said. What he really meant, I suspect, is that he appreciated the code. But the more I thought about it, the more I thought that we do in fact enter into relationships with the things we write.

This is further evidence, I guess, that writing software is a creative process, like painting a picture or designing a building. It becomes emotional. I suspect I’m far from alone when I say that I’ve spent a significant proportion of my career in an emotional tug-of-war with the programs I’m working on or have written myself. They have a sort-of-life all their own, and usually I’m the Doctor Kildare keeping them alive. I haven’t lost a patient yet. But the most curious part of this form of creation, at least to outsiders, is that it has no permanence; the structure that my friend so admired is almost entirely abstract.

This impermanence makes the code we write invisible to the people who only see what the code does. They have no knowledge, and probably little understanding, of what it took to make that program. When those people are the ones making the decisions then, Houston, we have a problem.

But there’s clearly good reason to be in love with your code. The more you love it the better chance it has of doing its job, working most of the time and failing rarely. But the best part of all is that by expending that little extra effort you made something that someone else can not only appreciate and understand but can modify themselves.

The stark reality of writing code that will get used for a purpose (and why would you do anything else?) is that it will very soon need to be changed. If that code is successful, it will be only a short matter of time before a new feature is added or, in the extreme, the code is used for something entirely other than what it was designed for. I’ve seen that happen on two very large projects (>100,000 lines of code), and the fact that it could be done at all is evidence that the original code was well loved. In fact, I suspect it would be difficult for that code to exist at all if it wasn’t loved. But once the purpose of the software is changed, all bets are probably off (and that’s the subject of another post!).

Consider then what happens when we don’t love the code we write. You don’t love it because you built it to throw away, or you built it under pressure, always meaning to come back and fix it up later. I’d be the first to admit that I’ve done this. There are probably a few repercussions of being in this position. The chances are that the user on the receiving end of this unloved creation will get frustrated with it. That frustration will find its way back to you. Beware.

Worse is when you end up being the one frustrated with it and unable, for whatever reason, to change it. It’s like being Dr Frankenstein, watching your monster rip up the town whilst you sit in your chair and scratch your head for ways to bring it under control. Perhaps if you hadn’t sewn the monster’s head on backwards you wouldn’t be in this position? Well, you’d better break out the sewing needles, friend. You’re going to need them.

Categories
article

Inspiration, Inertia and I

One of the things about writing a blog is that there has to be something to write about. I am a regular reader of Coding Horror for two reasons:

  1. Jeff Atwood writes good articles
  2. He writes one every working day of the week

This is no mean feat. I’ve been reading his blog for about 6 months and I can honestly say that, although there is a central theme running through his posts, I haven’t detected any obvious repetition yet. When I think about what I’ve achieved in the same time frame, I’ve probably managed one or two posts per month!

I recently heard Jeff talk and I think I have an idea why he’s so much more prolific: he’s prolific. He has made a commitment to himself to publish every working day, and he does. That effort generates momentum, which generates traffic to his site, which generates comments and trackbacks that fuel further ideas for him to write about. Where he has inertia, I am inert.

As for inspiration, it’s like anything: the more you do it, the more you can do it. It’s a form of exercise for your writing brain. Once you start writing, the ideas start to flow. I’ve been in love with the idea of writing for a long time, so perhaps I should follow the advice from his .NET Rocks article and make a commitment to publish.

My original reason for beginning this blog was to write about the technical tricks and traps that I discovered, mostly as a source of documentation but also as an experiment. Lots of people do this, because it means you don’t have to think about what to write. Yes, it serves a useful purpose, because it means a lot of the problems I encounter can easily be looked up and fixed. But if someone else has already written it up, am I adding anything by doing the same? I know the answer has to be no. If 95% of the internet disappeared tomorrow, I’m still pretty sure I could Google the syntax for adding a CONSTRAINT to a MySQL database table.

Please believe me. I’m not looking for the fame or hits. Just looking for my voice. When I find it I’ll be sure to let you know.

Categories
article lisp

Paul Graham wants me to think he is mad.

Paul’s book

I’ve been reading Paul Graham’s Hackers and Painters. Published in 2004, it only just found its way off my reading shelf and into my brain. This Amazon reviewer sums up the book nicely, although I mildly disagree with some of his statements.

Most of the book is a long-winded lecture about how great Lisp is, and about how that is the language of the cool, smart people.

There is very little about painting, it is only briefly mentioned in the beginning.

The first chapter is quite good, then it gets more preaching and more dull rapidly.

Mr Graham is obviously a smart guy and a capable writer. The fact that he was part of a dot-com start up that actually succeeded seems to have gone to his head though. Somehow the fact that he was in the right place with the right idea at the right time enables him to declare Lisp as the uber language, and everybody who doesn’t see that is a dullard?

The title suggests a book which is whimsical and fun. This book is a preachy diatribe by a pompous hacker who thinks he has the proper world-view for everyone.
— Kevin Stokes

But to give Paul his due, he covers some excellent topics. One of the most intriguing is the third chapter, about thinking the un-thinkable. The suggestion is that by thinking the un-thinkable you enter a new head-space where perhaps you can make a concerted difference to yourself or others. When I read it, I thought: “I like this idea, I might try it out”. A few more chapters passed, and he was talking about Lisp (to which I will return shortly), and I was thinking: “I’m bored. Where’s my unthinkable thought got to?”. I thought really hard and I got one: “Paul Graham is mad”. Plain and simple. As instructed by Mr Graham I started to explore this idea right away to test its validity.

  • If Paul had a mania what would it be?
  • Where should we look for manifestations of this mania?
  • Will trying out Lisp make me as mad as I perceive all the other people that use it to be?
  • Am I mad and if I am would I be able to answer this question?

And I came to the conclusion that Paul Graham is not mad. Sorry folks. It’s just not true. However, the majority of his un-thinkable ideas (at least in this book) seem to originate from a single perspective point (you get these in art too, I understand!): that choosing Lisp allowed him and his colleagues at Viaweb to produce something others could not. Which is in itself a remarkable statement, if it is as true as he suggests. So I thought I’d have another try at Lisp. I tried it once before and wasn’t totally turned off by the idea; I just never really got going.

Now don’t get me wrong, I’ll not be trying it just yet. This is just a warning, you understand, especially since I wouldn’t want Paul to think that he’d successfully goaded me into it when he said:

… but I don’t expect to change anyone’s mind (about Lisp) over the age of 25 …

I reserve the right to not let Paul Graham into my head and eat my brain. It’s mine and I’m keeping it mine until I decide otherwise.

One of the criticisms of Lisp is that it doesn’t have very good library support. A little investigation into unit-testing frameworks and MySQL wrappers seemed to bear out what the Reddit development team have been saying.

What makes Lisp so much greater (it seems) is that data and program are interchangeable, in as much as you can treat pieces of program as data and vice versa. It’s difficult to conceive of another programming language that has the two concepts so fully intertwined. Reflection is one way of achieving it, but in my view reflection is evil because it makes compiler optimisations and function-call traceability harder. Those two arguments don’t apply to Lisp, so Lisp wins again. It is partly because Lisp has all those brackets that it is so powerful.
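For contrast, here is roughly what reflection buys you in Ruby (a minimal sketch; the class and method names are invented): the method to invoke is ordinary data, which is exactly what makes call sites harder to trace.

```ruby
class Greeter
  def hello(name)
    "hello, #{name}"
  end
end

# With reflection the method name is just data: a symbol held in a
# variable, perhaps read from a config file or a message at runtime.
method_name = :hello
puts Greeter.new.send(method_name, "lisp")   # prints "hello, lisp"

# This flexibility is the traceability cost mentioned above: a textual
# search for the call 'hello(' will never find this call site, and the
# compiler cannot check or optimise it.
```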

And so we link into another of Paul’s un-thinkable ideas, which is to design the programming language of the future. His belief is that by imagining what we will want, we can make this language today. Anyone can do it, and probably either has or will; we just don’t know it yet. He believes this language will be optionally OO, with very few axioms, and will support something like macros. Smells like Lisp to me. Change the meat on the barbecue, Paul.

I refuse to believe that Lisp is the future. It would be quite extraordinary and quite exciting if it were, but my crystal ball says no-way padre. It is, though, one of the languages that has played a significant role in the evolution of languages, and I do believe it has more to offer. Its time will come again, and some spark will incorporate the final pieces of high-hanging Lisp fruit into the language of tomorrow.

It’s just that we’ll be able to do it without having a permanent speech impediment. Thhhorry Paul.

Categories
article

Too much information!

So, I was looking for something to listen to whilst I decided what to do today. Would I go and hunt for a much-needed filing cabinet, work a bit more on the Tibco/RV project, try working some more on my pet internet project, or have another go at some Rails programming? I couldn’t decide.

So I hunted my music collection for inspiration.

Bizarrely, I chose Duran Duran’s Wedding Album, which I always kind-of liked even though I’m not a huge fan of Duran Duran or weddings (apart from my own, of course).

I played it and I was immediately taken back to 1992/1993 when I drove around Tadley in a car not too dissimilar to this one.

Ford Sierra

It was, sadly, not a Cosworth (like the one above), so it was similar but not quite as good. Savour the scene: a 20-year-old youth with patchy stubble and DMs, driving around a nowhere-town in an 8-year-old knackered family saloon, playing Duran Duran. I probably thought I was cool. I was most definitely wrong.

There are a few misty-eyed memories from that time, most of which I’m not going to share. But the thing I was working on back then was a program called the “Fragment Data System” for a forensic research company. So I duly typed those words into the internet and discovered a link to something about it.

From the link I figured that it came from the table of contents of a book published in 2000 called “Forensic Interpretation of Glass Evidence”, one of whose co-authors is an ex-colleague of mine (John Buckleton).

Forensic Interpretation of Glass Evidence

Now let’s get one thing straight: the Fragment Data System was not a great product. It was probably the best I could have done in 1992/93 as an intern, and it definitely worked. So to have it mentioned in a book is kind-of surprising. But that’s not the most surprising thing. The most surprising thing is that I can find echoes of my 15-year-old past on this damn internet. It probably has more things about me hidden in its dusty corners.

It gets you thinking. In 1992/93 the public internet was a new thing; it was starting to gain popular ground and dial-up was king. The youth of today have the internet available as soon as they want it, and it seems that disaffected youffs everywhere need to write about their deepest feelings on a MySpace somewhere. Fast forward 15 years and you’ll find most of your adult life documented in a publicly viewable place. Kind of scary, but maybe not for the reason you might think. If everything you do and everything you are is on the other end of a TCP/IP socket, and those boys are everywhere, you don’t need to remember anything. It’s all there. You just have to know how to find it.

This is great news! I just attach myself to a computer and I no longer need to know what I’m doing or who I am, because the internet has all this information. A little like Memento, but without the need for body defacement. Now if I could only remember what happened to my wife, it would be really helpful …

Categories
tibcorv

Learning Ruby by Extension

So I’ve spent the past couple of months learning Ruby. My previous employers are Python nuts, and I have to admit that as languages go Python is pretty good. But then I heard about Rails, and so I gave that a try.

Already a fan of MVC, Rails appealed to me for the same reason it appeals to everyone else: it’s quick to develop with! Still hunting for that killer Rails app, I decided to take a look at what else Ruby could do and bought a copy of the pickaxe.

Now, it just so happened that I learnt Python by writing a wrapper for the TIBCO Rendezvous library. I learnt a lot about Python that way, especially the nasty, dirty stuff: garbage collection, the GIL and other threading nasties. This is because the Rendezvous library does or requires a number of things internally:

  1. it creates/destroys memory that could be referenced by a Python object
  2. it requires messages to be ‘pumped’ from its message queue and delivered, possibly in the same or different threads
  3. it requires callbacks from the Rendezvous library back into Python code
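Item 2 is the root of most of the threading trouble. Stripped of the C layer, the pumping model looks something like this toy Ruby version (everything here is a simplified, invented stand-in; the real Rendezvous dispatch happens inside the C library):

```ruby
# A toy message transport in the style of Rendezvous: one thread 'pumps'
# messages off a queue and delivers each one to the registered callbacks.
# In a real extension, this crossing from the dispatch thread back into
# interpreter callbacks is where the hard problems live.
class ToyTransport
  def initialize
    @queue     = Queue.new   # thread-safe queue from Ruby's standard library
    @callbacks = []
  end

  def subscribe(&block)
    @callbacks << block
  end

  def publish(message)
    @queue << message
  end

  # Pump until a nil sentinel arrives, delivering on the pumping thread.
  def dispatch_loop
    while (message = @queue.pop)
      @callbacks.each { |cb| cb.call(message) }
    end
  end
end

transport = ToyTransport.new
received  = []
transport.subscribe { |m| received << m }

pump = Thread.new { transport.dispatch_loop }
transport.publish("tick")
transport.publish("tock")
transport.publish(nil)        # sentinel: stop pumping
pump.join

p received                    # prints ["tick", "tock"]
```

Note that the callbacks run on the pumping thread, not the publishing thread; in the Python wrapper that is exactly where the GIL and object-lifetime questions showed up.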

These things generate a few programming challenges and require you to understand some internals of the language before you can make much progress. So it was by some curious happenstance that I ended up involved with the Ruby version of the project.

So now I get to do it all again, but in Ruby! I’m hoping to make a better job of it this time, and I’m also helped by having some of the hard work already done. I’ll be posting various bits and bobs about the progress of this project as time passes.

In the main, though, learning Ruby by extension has been quite pleasant. I disliked Python’s extension mechanism because it’s a bit of a pain to use, and in the end I used Boost::Python instead of the raw C function calls. In contrast, the Ruby extension mechanism is as clean as it can be, and once a few simple rules are observed things mostly just work how you’d expect.

Now then, every concept in computing these days seems to have a TLA to accompany it which is also its mission statement, e.g. REST, DRY, blah, blah. Well, I’ve now got my own: SANS, Simple And No Surprises.

That’s what I want, simple programs that work how you’d expect, SANS-crap.

Vive l’ordinateur!

Categories
c++

My C++ wants to kill your momma

… well actually it doesn’t. But it sounded good when I made it up. Recently I found myself needing to use a C++ partial template specialisation. No-one was as surprised as me when this happened.

To say I was anti-templates would be too strong. I find the code they produce hard to follow, regardless of how elegant and efficient it really is. But they do serve a really useful need: containers are an obvious example of something where templates are indispensable. So you can’t just write them off.

But now that we have the standard library (and Boost), do mere mortals really need to bother with templates in the large? I think the answer is probably no.

But then again it depends on what you’re doing. Since I started writing a wrapper around the TIBCO/RV library for Ruby, I’ve found myself writing a lot of C++ code that was very similar apart from the types involved. So a little light bulb goes on in my head (40W supermarket own-brand, cliché retardant): surely I can write a template to do this for me? That would mean there’s only one code base for a bunch of similar behaviour, and when I need to write that internationalisation (i18n) add-on for my error messages I’ll only need to change one or two methods instead of a zillion.
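On the Ruby side of the wrapper the analogous duplication-killer is to stamp out the near-identical methods at runtime rather than write each by hand. This is only a sketch of the idea with invented names, not the actual wrapper code:

```ruby
# Generating a family of near-identical methods from one definition --
# the dynamic-language analogue of instantiating similar code from a
# template. The Message class and its field types here are hypothetical.
class Message
  def initialize
    @fields = {}
  end

  FIELD_TYPES = %w[string integer float].freeze

  FIELD_TYPES.each do |type|
    # Defines add_string, add_integer and add_float from one body.
    define_method("add_#{type}") do |name, value|
      @fields[name] = { type: type, value: value }
    end
  end

  attr_reader :fields
end

message = Message.new
message.add_integer("answer", 42)
message.fields["answer"]   # => { type: "integer", value: 42 }
```

Adding a new field type then means adding one entry to the list, which is exactly the one-code-base property the C++ template is after.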