Categories
programming

Programming Like It’s 1995

Object-oriented programming, where is it now? When I was at college we learnt a little about object methods and techniques, and after I left I kept reading about them. I remember in the 90s I was incensed that an employer was NOT using object methods appropriately. I EVEN read books about it. You could say I was an object fan-boy. But even I have to admit that OOP didn’t exactly deliver the way I expected it to.

You may say that this is crazy talk and that OOP was/is hugely successful. Shark-jumping mumbo jumbo. But I might disagree. In fact, sometimes I think the object-baggage we’ve inherited is perhaps as much a hindrance as it is a help. The wake-up call came a few months ago when I realised that almost every piece of code that I’m in contact with these days makes only limited use of object-techniques.

Before I talk about all the ways that OOP is dead let me be clear about the ways that it is not. Because when I say OOP is mostly-dead, I mean it is mostly dead to me. In some areas OOP has delivered in spades. For example, nearly all modern OO programming languages come complete with large object libraries. This much is true. There’s objects in them-thar binaries for sure. But my code? Not so much.

Here’s all the ways I manage NOT to write truly object-oriented software:

The Task is Too Small

Some systems are just too small to develop a complex object class hierarchy for. It would be a waste of time to do so. I estimate these ‘glue’ applications could take up as much as 5% of the total LOC of a large enterprise system. It doesn’t matter whether these glue applications are written in Java or Python or Bash because really they’re just scripts. Scripts are their own thing. I would argue that the fewer scripts you have in your system the better you designed it, because you’re not just duct-taping over the seams.

The Spreadsheets Rule

I would also estimate that some appreciable percentage of your enterprise is run entirely from spreadsheets, whether you know about it or not. Be it phone book, accounts, trading system or stock inventory. These little beauties contain little or no object code and are spread far and wide. I’ve ranted about the pervasiveness of spreadsheets before, no need to go over it again. However, as far as I know, no-one has implemented the idea of an OO spreadsheet. For that we can all be thankful.

World Wide Wasteland

Although the web does lend itself beautifully to model-view-controller, on the client side a lot of it is only markup and JavaScript. Neither markup nor JS have particularly strong OO characteristics and both are hugely successful without them. Indeed many WWW apps are really CRUD applications.

CRUD

The create-read-update-delete application is everywhere, be it web or desktop. These apps are effectively database front-ends that organise the interactions between user and DB in a more user-centric way. For example in .NET there’s not much need nor desire to map your data into real objects because the data-binding layer is phenomenally good at letting you make data-bound apps quickly. There’s no support to help you map from data to objects to Infragistics. Indeed, nor should there be.

Enterprise Business Objects

And this is the bit that makes me a bit sad inside. This is what OO was really meant for. I used to have arguments with business-analysts about the right object model to use and whether a method should exist in a base-class or a derived-class. But now it doesn’t seem like anyone, including me, really cares. It’s just that somewhere along the line it became a little irrelevant. Don’t get me wrong, I work with objects all the time. But they’re not really the objects that were sold to me.

They’re just data-holders or as our fathers used to say: data-records.

No Methods? No Object

The thing is that, to me at least, without methods on your objects there’s literally no object. If objects don’t respond to messages they’re just data-records transferring data to some other module that can operate on those records. Usually this other module takes the role of a controller. This, to me, sounds very similar to the programming that our fathers used to do before C++ and Java 1.1. So much so that it’s tempting to break open my book on Jackson Structured Programming (JSP).
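
To make the distinction concrete, here’s a small sketch in Lisp (all the names are invented for illustration):

;; A bare data-record: all slots, no behaviour.  Some other module
;; (the "controller") does the actual work.
(defstruct trade instrument quantity price)

(defun trade-value (trade)              ; lives over in the controller
  (* (trade-quantity trade) (trade-price trade)))

;; The object we were sold: data and behaviour together, reached
;; through a method.
(defclass equity-trade ()
  ((instrument :initarg :instrument :reader instrument)
   (quantity   :initarg :quantity   :reader quantity)
   (price      :initarg :price      :reader price)))

(defgeneric value (thing))

(defmethod value ((trade equity-trade))
  (* (quantity trade) (price trade)))

Most of the code I see these days looks like the first half, with the controller doing all the thinking.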

I think there are two fundamental areas where objects failed to deliver.

Finance This

Firstly, whilst OO techniques are very flexible in some business domains, they aren’t flexible in exactly the right ways for all of them. I’m thinking particularly about my own area of expertise, which is financial trading systems.

The objects in trading systems tend to be difficult to compose into a meaningful hierarchy that is both expressive and not too abstract. I think ultimately this failing is because financial instruments are actually themselves models of physical events. This means that it’s straightforward enough to construct an object-model of financial instruments. However as soon as I start innovating with my financial instruments (i.e. constructing new financial models from old ones) the original object models tend to break down pretty fast.
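
A toy illustration of the break-down (every name here is invented): the class tree copes fine until you compose a new instrument out of old ones, at which point the new thing is both an instrument and a bag of instruments, and the hierarchy stops paying its way.

(defclass instrument () ())
(defclass bond   (instrument) ((coupon     :initarg :coupon)))
(defclass option (instrument) ((underlying :initarg :underlying)))

;; The innovation: a note assembled from other instruments.  It still
;; "is-an" instrument, but its real structure is containment, not
;; inheritance, and the old hierarchy has little useful to say about it.
(defclass structured-note (instrument)
  ((legs :initarg :legs)))   ; a list of arbitrary instruments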

The Technology Stack Sandwich

The second reason is that there are too many different technologies involved in many enterprise-sized solution stacks to make consistent application of OO methods viable. What does that mean? Well, this is perhaps a post in itself, but as soon as you are using two or more programming languages that must share objects you’re essentially entering an object-desert.

The End?

Oh no. Very definitely not. Objects are the only way to make sense of a deep-and-wide library. If the domain allows it they are the only way to go.

The surprise is that objects just didn’t deliver in the way that I thought they might for me. Which is kind-of interesting because it suggests that perhaps I, and a lot of people like me, might benefit from forgetting about objects sometimes and just Programming Like It’s 1995.

Categories
programming

The Not So Super, Super API

Now and again you come across the ‘one size fits all’ of computer programming: the ‘Super API’. It derives its name from the fact that it tries to act as a superset of a bunch of other platform-specific APIs. One good example of this is JDBC/ODBC, but there are many others. JDBC/ODBC allow access to a great many database servers because they’ve normalised the API, so that you can write code that doesn’t care which database it is connected to.
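
CLSQL (which crops up below) is the Lisp flavour of the same idea. Working from memory of its interface, and treating the exact connection-spec details as approximate, the calling code stays the same whichever engine sits underneath:

;; Connect to MySQL ...
(clsql:connect '("localhost" "mydb" "me" "secret") :database-type :mysql)
(clsql:query "SELECT COUNT(*) FROM trades")

;; ... or point the very same code at PostgreSQL; only the keyword changes.
(clsql:connect '("localhost" "mydb" "me" "secret") :database-type :postgresql)
(clsql:query "SELECT COUNT(*) FROM trades")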

Now and again, though, I find myself requiring features that are not exposed in a Super API. They’re not exposed either because the feature is too niche or because it is not standard across implementations. Creating a Super API is in some ways like plastering: once you’ve created that nice even finish across an entire wall you might find that you’re missing a few light switches.

I was reminded of this the other day by one particular super SQL API, CLSQL, which uses another super API, UFFI. What they do and how they do it is largely irrelevant. The important point is that I’m only concerned about the operation of one combination of technologies (MySQL with SBCL:SB-ALIEN). I’m concerned because there is a particular SBCL feature that I want CLSQL to make use of when it calls SBCL:LOAD-SHARED-OBJECT via UFFI. Confused? Don’t worry, it’s not that important.

In the end I found a (most heinous) way to do what I want, but it occurred to me that there has to be a better way. I appreciate that there is no way the library writer could know what I want before I want it, nor could they anticipate every future change to the underlying API. However, what if the library writer gave me the option of registering a per-API first-chance callback function? This function could then perform custom processing for my platform combination and then yield control back to the library when it completed. That way I could inject any code I wanted at potentially any supported depth.
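
Something like this is what I have in mind, sketched in Lisp with every name invented; the library funnels each underlying call through a user-replaceable hook:

;; One hook per wrapped API call (all names hypothetical).
(defvar *first-chance-hooks* (make-hash-table))

(defun register-first-chance-hook (api-call hook)
  (setf (gethash api-call *first-chance-hooks*) hook))

;; The library routes its platform calls through here.  A registered
;; hook receives the default behaviour as an argument so it can wrap
;; it, replace it, or call it unchanged.
(defun call-with-first-chance (api-call default-fn &rest args)
  (let ((hook (gethash api-call *first-chance-hooks*)))
    (if hook
        (apply hook default-fn args)
        (apply default-fn args))))

;; As a user I could then slip in my one-line replacement:
(register-first-chance-hook 'load-shared-object
  (lambda (default-fn path)
    (declare (ignore default-fn))
    ;; ... do whatever SBCL-specific thing is needed here instead
    (sb-alien:load-shared-object path)))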

I don’t doubt that it would be difficult to produce an API like this. Certain classes of API (non-reentrant APIs being one that jumps to mind) could potentially break in new and unforeseen ways, and library maintainers would have to be creative about how they detect and cope with errors happening in and around user-injected code.

But if the injected code was simply a one-line replacement for the one line that would otherwise have been called then it might work. Perhaps it might even be fun to try it.

Categories
programming

Look Ma! No F9!

More people should read The Design of Everyday Things. I couldn’t do the book justice here, but it did explain something to me that I’d never seen written down anywhere else before, and that I was sorely reminded of today.

It seems that everyday things have something called ‘affordances’ which should give us mental clues about how they work. The Design of Everyday Things attempts to teach us that poorly designed things lead us down wrong mental paths and cause us to make mistakes when using those things. Mistakes that at best might get our fingers burnt or at worst cost lives.

So far so good. What do you do, then, when the mental paths that people are sent down get so worn that they can’t think straight anymore? Of course, I’m talking about spreadsheets again. It seems spreadsheets are so ubiquitous and so well understood by so many people that some of them, when faced with any technology, view it as one enormous spreadsheet. Which is ironic because we know by now that spreadsheets simply can’t be enormous because they don’t scale.

I really don’t think technologists should beat non-technologists up about their lack of technical expertise. That’s just counter-productive and plain wrong. But what do you do when you are faced with someone who has convinced themselves that it’s all one big spreadsheet? How do you explain to them that in my world there is no F9 to recalculate everything? Where’s that pencil …

Categories
article programming

The Spread-able System

Spreadsheets are everywhere. They are simple to create and are an immensely powerful tool. Unsurprisingly then, this means that a lot of areas of business rely on spreadsheets to function correctly. But spreadsheets are dangerous too. They suffer from well-known, fundamental flaws.

The problem is that spreadsheets are a special type of code, and I’m not talking about Excel ‘macros’, I’m talking about the formulas. As such they probably need to be treated the same way as other types of code, but their very nature makes this difficult. But I’m getting ahead of myself; let’s first look at some of what is good and bad about spreadsheets.

Pros

Spreadsheets are remarkable for their:

  • Utility – we can bend them into almost any shape we want because they give one way to represent almost any business process;
  • Portability – we can pick up our little gobbets of data and logic and relocate them to almost anywhere inside or outside the company, in file-systems, mail servers and web-sites;
  • Simplicity – you don’t have to explain a spreadsheet to anyone. They might have to be a proto-genius to figure out how it works but the working knowledge they would need to get started is pre-loaded in their heads and ready-to-run.

Cons

So they sound pretty useful, and I like to think that I’m a pragmatic guy, so why do I hate them so much? Many have written about the shortcomings of spreadsheets. The page on spreadsheets at Wikipedia spells it out clearly enough so I’ll paraphrase:

  1. Productivity – Working with spreadsheets requires a lot of “sheet-shuffling” to reach the required goal. The bigger the sheet, the more time is spent copying, cutting and pasting cells around.
  2. Reliability – Although what constitutes an error in a spreadsheet is subjective, the paper “A Critical Review of the Literature on Spreadsheet Errors” (pdf) reveals a series of studies (some more recent than others) that have shown that approximately 5% of cells contain errors.
  3. Collaboration – Sharing a spreadsheet is difficult. Having two independent people working on the same sheet and merging their results is, as far as I know, impossible.

The first two items don’t bother me overly. Yes, it’s a problem but then the alternatives aren’t that great either. Consider what you would do if you didn’t have a spreadsheet to fulfill the task. You’d either do it with a bit of paper and a calculator (i.e. simulate a spreadsheet) or get a programmer to do the task for you. Either way the amount of productivity loss/gain and the amount of errors aren’t going to be that significantly different from using a spreadsheet. Don’t get me wrong, I love my fellow programmer, but we make a LOT of mistakes too. The difference perhaps is that bespoke systems usually end up getting audited (and hence fixed) and spreadsheets often don’t. Although this point is probably moot.

Good + Bad = Too Bad

My real beef is with what happens when you combine the ‘pro’ of high portability with the ‘con’ of low collaborative power. You have no way of knowing which version of the spreadsheet you have is the “true” one, and which version is duff. Every copy, whether it be made inadvertently by forwarding a sheet by email to someone else or explicitly by taking a ‘backup’, is a 12-foot-tall baby-eating, business-crushing monster waiting to rip you and everyone you love apart.

Hug the Monster, Then Run

The thing is we kind of have to embrace the baby-business-beating monster because it’s about all we’ve got. There are some tasks, as a programmer, that I’m really happy that you as the non-programmer don’t bother me with and solve yourself in sheets. Want to set up an intra-company phone-book as a spreadsheet so you don’t have to bother with all that “Access” voodoo? Be my guest, but I’m watching you. Want to set up a spreadsheet to run your fantasy football so you don’t have to add two numbers together? Go right ahead, I’ll even drive you to the game so you don’t miss the turn. Want to set up a spreadsheet to calculate payments and do a mail-merge with the results … STOP. RIGHT. NOW.

The truth is though that you might not know that you’re creating the mother-of-all spreadsheets when you start. I might not know it either but there will probably come a time when a line is crossed and then I will want to know what you’ve been doing and who you’ve been doing it with. I’m just like that.

Unless you are a small company (and hence don’t have a lot of choice) you have to be very afraid of trusting anything that might lose you money to a spreadsheet. You need to be very aware of the risks and the potential costs you are letting yourself in for. Here in Europe there is even a special interest group dedicated to highlighting the risks of spreadsheets. Those guys must throw wild parties …

The Missing Links

In my opinion there is something missing, something that can fill the gap between spreadsheet and system.

I think we need something that can:

  1. Track spreadsheet changes – Not knowing which spreadsheet is “true” and which lies is a problem; we need to be able to identify revisions of the sheet that happened after yours was ‘branched’, and to merge sheets. Perhaps someone has solved this already; if they have, that would be great (there’s a sketch of the idea just after this list).
  2. Track spreadsheets themselves – Having some more information about what sort of corporate-data was being accessed, who was using it and how frequently they ran it might alert us to potential spreadsheet monsters being born.
  3. Narrow the gap – Making spreadsheets more like traditional software systems, without significantly castrating the usefulness of the spreadsheet, would be great too. This is a little like asking for the moon on a stick though.
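
To make the first item concrete, here’s a minimal sketch of change-tracking: fingerprint the cells of two versions of a sheet and report what differs. The cell representation is invented; a real tool would have to dig the cells out of the actual file format, and this version doesn’t even notice deleted cells.

;; A sheet here is just an alist of (cell-name . contents).
(defun sheet-diff (old new)
  "Return the names of cells in NEW whose contents differ from OLD."
  (loop for (cell . contents) in new
        unless (equal contents (cdr (assoc cell old :test #'string=)))
        collect cell))

(sheet-diff '(("A1" . 100) ("A2" . "=A1*2"))
            '(("A1" . 100) ("A2" . "=A1*3")))
;; => ("A2")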

Perhaps I’ll make something like this one day. I have to admit it’s not a terribly exciting project but it has some potential I think. Perhaps I could spice it up by throwing a party and invite the guys from the “European Spreadsheet Risks Interest Group”. Now we’re talking. How will I budget for the 7-up, party hats and streamers? In a spreadsheet of course.

Categories
programming

The Ever Diminishing Deliverable

When you’re making software in-house you can largely ship what you want, within reason. Conversely, when you’re making software for customers or clients I’m guessing you owe it to your customer, and perhaps your bottom-line, that what you produce is of the very best quality. If you don’t, your customer goes somewhere else. However, the additional effort required in producing quality software for customers can be substantial. Since you want to keep your customer, and attract new ones, you must expend the effort at a potentially large personal cost.

When you’re making software in-house there is a well-known danger of gilding the lily. Indeed, the law of diminishing returns comes into force if you spend too much time making your in-house project of the very highest quality. The law of diminishing returns can best be described as:

… in a production system with fixed and variable inputs (say factory size and labor), beyond some point, each additional unit of variable input yields less and less output. Conversely, producing one more unit of output costs more and more in variable inputs.

Since I don’t work for a software house, if I can release my code in-house when it is only partially complete, then I should be able to get a lot more stuff done for less cost.

So far so good. A problem arises, though, because I have observed that different programmers place different levels of importance on software quality. This is probably as you’d expect; one thing we have in common is that we’re all individuals.

The sad truth is that, based on someone’s own personal standards, the bare minimum of what is required to get a job done is usually all that is done. This probably goes some way towards explaining the appalling state of some of the in-house software I’ve seen, and written myself. It’s understandable because often, once the main problem has been solved, the other issues like usability, maintainability, extensibility and support can be overlooked without any immediately dire consequences.

But here’s the sting: once you release something you usually have to support it too. That’s just the way in-house software works (sucks?), I guess. If you made a poor job of it then you’ll probably pay for it many, many times over in support queries. In the end it seems to me that unless you quit (or are fired!) the diminishing deliverable cost turns into a potentially huge support cost. Especially if you end up layering new solutions on to an originally broken solution.

All in all, you might as well have tried to make something a little more durable and complete at the outset. Sure, you can re-factor your mistakes later, but even refactoring costs a lot more to do later than it does to get it ‘right’ at the beginning.

It seems to me that it is essentially a problem of planning. In-house software projects, big and small, aren’t usually planned properly, so secondary factors that would improve the overall quality are not included in any estimates. As a result of this bad planning those in-house projects are often late and buggy.

This problem is not getting any better. Years of software development in an in-house setting have shown me that where in-house plans and design meetings exist, issues of usability, maintainability and support are very minor concerns, if they are concerns at all. Perhaps that should change. Just a little …

Categories
programming

Evolved Programmers Wanted. No Returns.

Here’s an interesting quote from Jeff Atwood (emphasis is mine):

“I use implicit variable typing whenever and wherever it makes my code more concise. Anything that removes redundancy from our code should be aggressively pursued — up to and including switching languages.”

It sounded like something I’d heard before. Here’s another quote from Paul Graham:

“The kind of dirtiness Arc seeks to avoid is verbose, repetitive source code. The way you avoid that is not by forbidding programmers to write it, but by making it easy to write code that’s compact.”

I’m not in any way claiming that Jeff is lacking originality; I’m suggesting that when two influential people with a large audience express the same thought … well, something ought to happen, right? Perhaps programming languages are going to become more expressive and concise. This certainly seems to be one of Paul’s aims at least.

Programming is changing, that’s for sure. It seems like only yesterday that Python was new and I was bending my head around it, enjoying being released from static types and embracing dynamic languages. Indeed if you watch this video (and I highly recommend you do) you can get a feel for how far Python itself has come, in about 5 minutes.

The point though is our industry is still evolving. Natural selection finds the best parts of all those programming languages and software products and begets new ones from them. 10 years ago I believed that the future had arrived and that we would all use C++ where performance mattered and Java everywhere else. It seems that I was hopelessly naive to believe that evolution had ended. It had scarcely even begun.

I’m also eternally grateful to the innovators and early adopters that I don’t have to write much of either C++ or Java anymore. But that’s another story.

Categories
lisp programming

Know Your Idioms

A minor thought for today. Part of becoming an expert in any programming language is learning the idioms. Through experience, research and your peers you discover what makes your programming language hum: how to get the best performance and how to trade that against the best expressiveness. Since spoken language affects the way we think, it’s reasonable to expect that programming languages affect the way we think about programming problems.

I have on a number of occasions wanted to write an expression in Lisp that will return true if all the elements of a list are T and nil if any are not: a sort of ‘and’ for a list. It seemed like a classic case of a ‘reduce’ or fold operation to me and so I went about doing that as a starting point.

CL-USER> (reduce #'(lambda (x y) (and x y)) '(t t t t t t t))
T

Hmm, because AND is a macro it can’t be an argument to reduce therefore I have to wrap it in a lambda. Which is ok but it removes the short-circuit feature from the AND.

So, I wasn’t terribly excited by this result but it served the purpose and life continued as normal. That was until today when I discovered the LOOP tutorial which inspired me with this trivial alternative …

CL-USER> (loop for x in '(t t t t t t) always (eq t x))
T

Yes, it’s a little bit less functional, but it’s very clear what’s going on here, and that was my original problem with the reduce form. I was interested to see how the LOOP expanded too, so:

CL-USER> (macroexpand-1 '(loop for x in '(t t t t t t) always (eq t x)))
(BLOCK NIL
  (LET ((X NIL) (#:LOOP-LIST-1613 '(T T T T T T)))
    (DECLARE (TYPE LIST #:LOOP-LIST-1613))
    (SB-LOOP::LOOP-BODY NIL
                        (NIL
                         (SB-LOOP::LOOP-REALLY-DESETQ X (CAR #:LOOP-LIST-1613))
                         NIL
                         (SB-LOOP::LOOP-REALLY-DESETQ #:LOOP-LIST-1613
                                                      (CDR #:LOOP-LIST-1613)))
                        ((UNLESS (EQ T X) (RETURN-FROM NIL NIL)))
                        ((WHEN (ENDP #:LOOP-LIST-1613) (GO SB-LOOP::END-LOOP))
                         (SB-LOOP::LOOP-REALLY-DESETQ X (CAR #:LOOP-LIST-1613))
                         NIL
                         (SB-LOOP::LOOP-REALLY-DESETQ #:LOOP-LIST-1613
                                                      (CDR #:LOOP-LIST-1613)))
                        ((RETURN-FROM NIL T)))))
T
CL-USER> 

... Jeeeeezus. No way that can be efficient, surely? So I ran both forms with a temporary list of 1,000,000 elements, all equal to T:

CL-USER> (time (reduce #'(lambda (x y) (and x y)) (loop for i below 1e6 collect t)))
Evaluation took:
  0.149 seconds of real time
  0.100006 seconds of user run time
  0.040002 seconds of system run time
  [Run times include 0.068 seconds GC run time.]
  0 calls to %EVAL
  0 page faults and
  7,999,488 bytes consed.
T
CL-USER> (time (loop for x in (loop for i below 1e6 collect t) always (eq t x)))
Evaluation took:
  0.059 seconds of real time
  0.052003 seconds of user run time
  0.0 seconds of system run time
  0 calls to %EVAL
  0 page faults and
  7,995,392 bytes consed.
T

Guess again, brother. The more I thought about this problem, the more solutions I could find. For instance I could just count all the nulls, and if I find any then the answer is false.

(zerop (count-if #'null
                 (loop for i below 1e6 collect t)))

This was marginally faster than the 'loop' on my implementation, but then I reasoned that on a list of a million items 'AND' should really retain its short-circuit ability and stop on the first false. So perhaps the correct thing to do is to stop at the first non-true element:

(not (find-if #'null 
          (loop for i below 1e6 collect t)))

This solution is less imperative in style and less complicated than all the other solutions. Now that I've thought of it I can't imagine how I didn't think of it sooner. A winner!

Lisp is such an old language that there seem to be literally dozens of ways to attack most problems, and that really is the point. Since I don't know what I don't know, there's probably dozens more ways I could try to solve this trivial problem that only experience, research and interaction with my peers will get me.
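
(True to form, after writing all that I remembered one more idiom I didn't know I didn't know: the standard sequence function EVERY is practically purpose-built for 'and over a list', and it short-circuits; I haven't raced it against the others though.)

CL-USER> (every #'identity '(t t t t t t))
T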

This is where books like the O'Reilly cookbook series should come in, right? They should have all that hard work done for me so I can stop making mistakes and start making systems. Books, however, are not very easily searchable when they are on the shelf, so on-line resources are best for searching for optimal solutions. I was pleased, then, to find that Common Lisp has an online cookbook. However you still have to KNOW where to look in a resource like this, because I might not have found the 'find-if' solution if I was stuck in a rut banging on the 'reduce' solution. Not only that, every situation is different: 'reduce' definitely has its uses, but perhaps not in this case.

This further suggests to me that it's only your peers and your own research (i.e. trial and error) that can show you the way. Just make sure you give yourself enough time for the research and pick your peers carefully 😉

Categories
article programming

Floating Point Flotsam

I have never been particularly clear about when to choose single or double precision floating point arithmetic. I think I have been operating on a sort of ‘trial-and-error’ approach for some time. It seems ludicrous that I should not, after all these years, know exactly when to choose single or double precision arithmetic. The truth is that I don’t, and it’s now time to fix that.

First, a little background. I was, for reasons that are too sad to explain, interested in how to calculate (accurately) the time of the Vernal Equinox. That is the exact time of the year (to the minute, if possible) that the sun is 90° above the horizon at the equator. Anyway, I found some code which is an implementation of an astronomical algorithm designed expressly for the purpose of calculating the vernal equinox and is accurate to about 20 minutes. Which is good enough for now.

Now the next piece of background is that I was also doing this in Lisp, and up until now I have let the reader interpret all the literals I enter. When I tried to do the calculation in the REPL I could get no closer than the same part of the day I was expecting, with the result rounded to roughly the nearest half day. After some head scratching it finally occurred to me that the computation required more significant digits than I really had. The literals I was entering were being read as single precision floating point numbers, because if I had wanted doubles I would have suffixed my literal numbers with a ‘d’ exponent marker. It would seem that d could also stand for D’uh. Seems fair enough. Time to do some research then …
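
Here’s roughly what that looks like at the REPL (the literal is an invented Julian-date-like number; the output is from SBCL and other implementations may print it slightly differently):

CL-USER> 2451545.09765          ; read as a SINGLE-FLOAT by default
2451545.0
CL-USER> 2451545.09765d0        ; the 'd' exponent marker makes it a double
2451545.09765d0
CL-USER> (setf *read-default-float-format* 'double-float)
DOUBLE-FLOAT
CL-USER> 2451545.09765          ; now bare literals are doubles too
2451545.09765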

You see, according to the IEEE standard 754-1985 a single precision floating point number has a 24-bit significand (23 bits stored explicitly, plus an implicit leading 1) and a double has a 53-bit significand (52 stored). To answer the question of how many significant figures I can get in decimal I need to know the smallest decimal fraction that I can represent in binary. For a single this number is 1/2^24, which is about .000000059604644775390625.

Now, a floating point number is represented as a significand and an exponent, which together give the decimal representation of the number. Therefore you can never get more relative accuracy than the smallest binary fraction the significand can hold. This means that the number of significant figures should be something like:

log10(1/(2^24)) = -7.2

Therefore a single gives you a little over 7 significant decimal figures; once a number needs more than that you are already losing accuracy, and the more significant figures you add the worse it gets. It was then clear that my astronomical antics were less than stellar, since the first literal in the computation has 11 significant figures.
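
The same sum for each of the IEEE formats, done at the REPL (the bit counts are significand sizes, counting the implicit leading bit where there is one):

(loop for bits in '(24 53 64 113)
      collect (log (expt 2d0 bits) 10))
;; => roughly (7.22 15.95 19.27 34.02)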

Indeed whilst I was thinking about this problem it occurred to me that if I wanted to continue using single precision I could split the fractional part from the integer part and carry on that way. However, this is still inferior to a double because it would give me 7 significant figures for each part and therefore a total of 14 significant figures, whereas my shiny new brain can show that a double will give about 16 significant figures. Of course I could also have concluded that doubles are better than two singles by noting that 24 bits + 24 bits < 53 bits, but that would never have been as much fun.

Type            Word Size   Significand Bits   Decimal Sig. Figs.
Single          32          24                 7
Double          64          53                 16
Extended        96          64                 19
Quad-Extended   128         113                34

You could argue that I could have saved myself 20 minutes by looking the answer up, but then again I would never remember the answer unless I proved it to myself first. So, now my spring occurs at the same time as everyone else’s and I know why. Oh it happened already? Sheeeeiiiitttt.

Categories
programming

The Robustness Principle

It seems I’m becoming a man of principles. Last week I wrote about the principle of least surprise. This week I was reading this article about martians over at Joel’s, and it reminded me of my previous job in some non-obvious ways. And before you say it, no, I didn’t work with martians or any sort of aliens for that matter. The main topic of Joel’s blog is very interesting, but a side issue is the discussion of the robustness principle as defined by Jon Postel in RFC 793.

2.10. Robustness Principle

TCP implementations will follow a general principle of robustness:  
be conservative in what you do, be liberal in what you accept 
from others.

Whilst Joel is talking about HTML standards Jon is talking about TCP standards. Anyway, whilst this difference is minor the pedant in me is forced to point it out ;-).

However it’s fair to say that such a principle exists and that I’ve been living by it, without actually having a name for it, for some time. The reason I started doing it was because I inherited a large code-base for a distributed system once and I discovered that occasionally components would simply terminate, users would complain and I would have to go and invent some bullshit to explain why our system wasn’t working.

I felt like I had to invent some bullshit because I’d found that some components had statements in them like the following:

    if (such and such)
        abort();

The ‘such and such’ that caused the termination would be some condition that should not have occurred. You see, the original programmer felt the need to have all unknown conditions delivered to him by aborting the process; rather like demanding a criminal’s head on a plate. Aborting a process under UNIX will usually cause a core dump. This memory dump file contains the contents of all the memory and registers at the time that abort() was called. With the appropriate tools this gives something to indicate what the cause of the problem might have been. This style of programming is sometimes referred to as kamikaze programming but I think seppuku programming is probably more apt.

My sin was to systematically remove these little time-bombs and attempt to handle the condition that had occurred. The problem with this, of course, is that if any of these conditions have undesirable side-effects then they can store problems up for the future which, I guarantee, will be a lot harder to find and fix.

It seems there is no right solution to this problem either. Extending Joel’s conclusion we could say that the idealist (the original code author) wanted to control the error state and deal with each case appropriately by hand. Perhaps, eventually, all these little abort() statements would get to be unused code and the system would run very smoothly. Perhaps. The pragmatist (me) would have a hard enough time of keeping the system running as it was without having components voluntarily dying all the time.

I stand by my decision to improve stability but I think I probably should have left a couple of the more esoteric abort()’s in place. I did improve the stability of the system to some extent. But my effort to restore faith in the system by making it completely stable would be somewhat thwarted for other reasons (more closely related to Joel’s post about martians). As we shall see next week …

Categories
.NET programming

The Principle of Least Surprise

I first came across the principle of least surprise a couple of years ago when someone pointed out to me that some script that I had written had violated it. Naturally defensive, and of course mildly arrogant, I asked my critic what he was talking about. He said that a command line script should return an explanation of its accepted arguments when given a --help or -h argument. He was right and I was ashamed and so I naturally did what any respectful programmer would do, I fumed quietly for a few hours and then fixed it when he wasn’t looking.

It seems then that almost any user-interface should not violate the principle of least surprise, and this is common sense. When I click on a cancel button in a dialog it should not accept my input; it should return me to where I came from, leaving everything unchanged. This was my first violation this week and it’s a common one: a ‘Cancel’ button in a dialog did the opposite of what I expected. This was in a popular source control tool (that I won’t name because my version is old and the bug is doubtless fixed now) and it did in fact ‘Submit’ the broken change that I was so keen to cancel. After a short bout of Tourette’s and 5 minutes of trying to figure out how to undo the change I was back on track, but my confidence was shaken a little.

It’s not just user-interfaces that should not violate the principle of least surprise; APIs shouldn’t violate it either. You know the sort of thing. For instance, a ‘getter()’ should be idempotent and not change any state. Languages like C++ have support for this through the ‘const’ method modifier, but I can’t think of another language I know that enforces it in the same way.

So I was incensed when I thought I’d discovered an example of such a violation in the .NET framework. I was having trouble using the method Uri.MakeRelative(Uri uri) because I had misunderstood exactly what it did (because I didn’t RTFM) and I expected it to simply strip the routing information from the Uri and return the document path. What it actually does is the more general case: it will effectively ‘subtract’ one Uri from the other, which is more useful when you have things like embedded Uris. Anyway, the depth of my sickness isn’t important; the important thing is that I realised that I could have used the principle of least surprise to save me.

As far as I can tell I haven’t ever found a .NET framework method that didn’t do as I expected; that’s partly because of the number of people who use early-release versions of .NET and iron out these problems for me. Thanks! Not only that, over the years I have entered into a love-hate relationship with Microsoft. I won’t enumerate the things I hate because disk space is finite, but what I really like is the way that M$ have a vast array of tools and products that seem to co-exist with very few problems. This is no small feat; it’s a triumph of quality control and standards and I applaud them.

The conclusion, then, is that I should abide by and uphold the principle of least surprise wherever possible. Additionally, I can apply the principle in reverse when I trust the source of the interface not to violate it. Moreover, if I can succeed in not violating the principle myself then people who use my work, and my team’s, will have more confidence in it … I can dream, can’t I?