Categories
lisp programming

Know Your Idioms

A minor thought for today. Part of becoming an expert in any programming language is learning the idioms. Through experience, research and your peers you discover what makes your programming language hum: how to get the best performance and how to trade that against the best expressiveness. Since spoken language affects the way we think, it's reasonable to expect that programming languages affect the way we think about programming problems.

I have on a number of occasions wanted to write an expression in Lisp that will return true if all the elements of a list are T and nil if any are not. A sort of 'and' for a list. It seemed like a classic case of a 'reduce' or fold operation to me, and so I went about doing that as a starting point.

CL-USER> (reduce #'(lambda (x y) (and x y)) '(t t t t t t t))
T

Hmm, because AND is a macro it can't be an argument to REDUCE, so I have to wrap it in a lambda. Which is OK, but it removes the short-circuit feature of AND.
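To see what that costs, here's a quick sketch at the REPL (output roughly as SBCL shows it; the PRINT is only there to reveal which elements get visited). Passing AND directly, as in (reduce #'and ...), is rejected outright because AND names a macro rather than a function, and the lambda version happily visits every element even after it has already seen a NIL:

CL-USER> (reduce #'(lambda (x y) (print y) (and x y)) '(t nil t t))
NIL
T
T
NIL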

So, I wasn’t terribly excited by this result but it served the purpose and life continued as normal. That was until today when I discovered the LOOP tutorial which inspired me with this trivial alternative …

CL-USER> (loop for x in '(t t t t t t) always (eq t x))
T

Yes it’s a little bit less functional but it it’s very clear what’s going on here and that was my original problem with the second reduce form. I was interested to see how the LOOP expanded too, so:

CL-USER> (macroexpand-1 '(loop for x in '(t t t t t t) always (eq t x)))
(BLOCK NIL
  (LET ((X NIL) (#:LOOP-LIST-1613 '(T T T T T T)))
    (DECLARE (TYPE LIST #:LOOP-LIST-1613))
    (SB-LOOP::LOOP-BODY NIL
                        (NIL
                         (SB-LOOP::LOOP-REALLY-DESETQ X (CAR #:LOOP-LIST-1613))
                         NIL
                         (SB-LOOP::LOOP-REALLY-DESETQ #:LOOP-LIST-1613
                                                      (CDR #:LOOP-LIST-1613)))
                        ((UNLESS (EQ T X) (RETURN-FROM NIL NIL)))
                        ((WHEN (ENDP #:LOOP-LIST-1613) (GO SB-LOOP::END-LOOP))
                         (SB-LOOP::LOOP-REALLY-DESETQ X (CAR #:LOOP-LIST-1613))
                         NIL
                         (SB-LOOP::LOOP-REALLY-DESETQ #:LOOP-LIST-1613
                                                      (CDR #:LOOP-LIST-1613)))
                        ((RETURN-FROM NIL T)))))
T
CL-USER> 

... Jeeeeezus. No way that can be efficient, surely? So I ran both versions with a temporary list of 1,000,000 elements, all equal to T:

CL-USER> (time (reduce #'(lambda (x y) (and x y)) (loop for i below 1e6 collect t)))
Evaluation took:
  0.149 seconds of real time
  0.100006 seconds of user run time
  0.040002 seconds of system run time
  [Run times include 0.068 seconds GC run time.]
  0 calls to %EVAL
  0 page faults and
  7,999,488 bytes consed.
T
CL-USER> (time (loop for x in (loop for i below 1e6 collect t) always (eq t x)))
Evaluation took:
  0.059 seconds of real time
  0.052003 seconds of user run time
  0.0 seconds of system run time
  0 calls to %EVAL
  0 page faults and
  7,995,392 bytes consed.
T

Guess again, brother. The more I thought about it, the more alternative solutions to this problem I could find. For instance, I could just count all the nulls, and if I find any then the answer is false.

(zerop (count-if #'null 
                 (loop for i below 1e6 collect t)))

This was marginally faster than the 'loop' on my implementation, but then I reasoned that on a list of a million items 'AND' should really retain its short-circuit ability so that it stops on the first false. So perhaps the correct thing to do is find the first non-true element and negate the result:

(not (position-if #'null 
                  (loop for i below 1e6 collect t)))

This solution is less imperative in style and less complicated than all the other solutions. Now that I've thought of it I can't imagine how I didn't think of it sooner. A winner!
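One subtlety is worth spelling out, with a quick sanity check at the REPL: it has to be POSITION-IF rather than FIND-IF here, because FIND-IF returns the offending element itself, and when that element is NIL the result is indistinguishable from 'nothing found'. POSITION-IF returns an index or NIL, so the negation behaves correctly in both cases:

CL-USER> (not (position-if #'null '(t t t t)))
T
CL-USER> (not (position-if #'null '(t t nil t)))
NIL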

Lisp is such an old language that there seem to be literally dozens of ways to attack most problems, and that really is the point. Since I don't know what I don't know, there are probably dozens more ways I could try to solve this trivial problem that only experience, research and interaction with my peers will show me.

This is where books like the O'Reilly cookbook series should come in, right? They should have all that hard work done for me so I can stop making mistakes and start making systems. Books, however, are not very easily searchable when they are on the shelf, so on-line resources are best for searching for optimal solutions. I was therefore pleased to find that Common Lisp has an online cookbook. However, you still have to KNOW where to look in a resource like this, because I might never have found the 'position-if' solution if I was stuck in a rut banging on the 'reduce' one. Not only that, every situation is different: 'reduce' definitely has its uses, but perhaps not in this case.

This further suggests to me that it's only your peers and your own research (i.e. trial and error) that can show you the way. Just make sure you give yourself enough time for the research and pick your peers carefully 😉

Categories
article programming

Floating Point Flotsam

I have never been particularly clear about when to choose single or double floating point arithmetic. I think I have been operating on a sort of 'trial-and-error' approach for some time. It seems ludicrous that I should not, after all these years, know exactly when to choose single or double precision arithmetic. The truth is that I don't, and it's now time to fix that.

First, a little background. I was, for reasons that are too sad to explain, interested in how to calculate (accurately) the time of the Vernal Equinox. That is the exact time of the year (to the minute, if possible) at which the sun is directly overhead (90° above the horizon) at the equator. Anyway, I found some code which is an implementation of an astronomical algorithm designed expressly for the purpose of calculating the vernal equinox and is accurate to about 20 minutes. Which is good enough for now.

Now the next piece of background is that I was also doing this in Lisp, and up until now I have let the reader interpret all the literals I enter. When I tried to do the calculation in the REPL I could get no closer than being in the same part of the day as I was expecting, with the result rounded to roughly the nearest half day. After some head scratching it finally occurred to me that the computation required more significant digits than I really had. The literals I was entering were being read as single precision floating point numbers, because if I had wanted doubles I should have marked my literals with a 'd' exponent (e.g. 1.0d0). It would seem that d could also stand for D'uh. Seems fair enough. Time to do some research then …
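A minimal illustration at the REPL of what was going on (the literal below is just an example with 12 significant figures, not the actual equinox formula): by default the Lisp reader produces single-floats, so most of the fraction is silently rounded away; a 'd0' exponent marker, or rebinding *read-default-float-format*, keeps the precision:

CL-USER> 2451545.04712
2451545.0
CL-USER> 2451545.04712d0
2451545.04712d0
CL-USER> (setf *read-default-float-format* 'double-float)
DOUBLE-FLOAT
CL-USER> 2451545.04712
2451545.04712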

You see, according to the IEEE standard 754-1985 a single precision floating point number has a 23-bit fraction field (24 significant bits once you count the implied leading 1) and a double has 53 significant bits. For me to answer the question of how many significant figures I can get in decimal, I would need to know what the smallest binary fraction I can represent is. For a single this number is 1/2^23, which is about 0.00000011920928955078125.

Now, a floating point number is represented by a fractional part (the significand) and an exponent part, which together give the value of the number. Therefore you can never get finer relative accuracy than the smallest step of that binary fraction, whatever the exponent happens to be. This means that the number of significant decimal figures should be something like:

log10(1/(2^23)) ≈ -6.9

Therefore when a number has 7 significant figures you are already losing a little accuracy; the more significant figures you add, the worse it gets. It was then clear that my astronomical antics were less than stellar, since the first literal in the computation has 11 significant figures.
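You can see the fraction-plus-exponent structure (and the 24-bit significand) for yourself with INTEGER-DECODE-FLOAT. A quick peek at the much-abused example of 0.1, with values as SBCL reports them: note the significand uses all 24 bits, and the reconstructed value is only good to about 7 decimal figures:

CL-USER> (integer-decode-float 0.1)
13421773
-27
1
CL-USER> (* 13421773 (expt 2.0d0 -27))
0.10000000149011612d0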

Indeed, whilst I was thinking about this problem it occurred to me that if I wanted to continue using single precision I could split the fractional part from the integer part and carry on that way. However, this is still inferior to a double because it would give me 7 significant figures for each part, and therefore a total of about 14 significant figures, whereas using my shiny new brain I can show that a double will give about 16 significant figures. Of course I could also have concluded that doubles are better than two floats by noting that 23 bits + 23 bits < 53 bits, but that would never have been as much fun.

Type           Word Size   Mantissa Bits   Dec. Sig. Figs.
Single         32          24              ~7
Double         64          53              ~16
Extended       96          64              ~19
Quad-Extended  128         113             ~34

(The mantissa counts include the implicit leading 1 bit where the format has one.)
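For what it's worth, Common Lisp will report the significand sizes directly: FLOAT-DIGITS gives 24 and 53 bits on an IEEE 754 implementation such as SBCL, and log10(2^24) ≈ 7.2 and log10(2^53) ≈ 16.0 are where the decimal columns above come from:

CL-USER> (float-digits 1.0)
24
CL-USER> (float-digits 1.0d0)
53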

You could argue that I could have saved myself 20 minutes of time by looking the answer up but then again I would never remember the answer unless I proved it to myself first. So, now my spring occurs at the same time as everyone else’s and I know why. Oh it happened already? Sheeeeiiiitttt.

Categories
article

A Version Aversion

I like war stories. They remind me that I’m not alone on this journey.

This is a war story. It involves a lot of entrails, questionable surgery and plenty of walking wounded. I carry the scars so perhaps you don’t have to …

… when I joined the team the system consisted of:

  1. A few thousand lines of C++ code which ran as 10 processes on two Solaris hosts;
  2. Some Java code that ran on an NT4 J2EE server;
  3. A bunch of Windows client PCs, all running the same Java VM (but a different one from the server's);
  4. A collection of ‘glue’ scripts written in shell script and Tcl.

It ran in two locales, and the hardware was broadly equivalent in both, from what I recall. Now compare this to what we had by the time I left:

  1. A larger body of C++ code with 15 or so processes on one Solaris host, as well as a set of additional x86 UNIX hosts (of mixed hardware pedigree) sprinkled with various flavours of Solaris and Linux (RedHat) that would run between 2 and 4 processes each, depending on the number of cores;
  2. 2 J2EE servers of the same vendor;
  3. A further 2 J2EE servers of a different vendor (don’t ask!);
  4. A collection of scripts written in shell script & Python (thankfully we stamped out the Tcl);
  5. 3 primary locations, each running 2 different versions of our software, and a single satellite location (hanging from a primary location).

Each server release involved somewhere around 10 hosts running different hardware, OSes & JVMs. It's similar to the problem that software vendors face when they need to make their product run on multiple platforms. The difference for us, however, was that our software was a distributed system and each component needed to interact seamlessly with its peers. Something that not many software vendors make a habit of, other than Microsoft I suppose.

Against this background was a team of 10 developers in 3 timezones developing software for a constantly changing and fairly lucrative business. Quickly made enhancements could secure profits; instability and failures might secure losses; so it was important to try and keep the system running as smoothly as possible. However, the large code base (>100,000 lines) and confusing deployment array made every release a roller coaster ride. In my last two years on the job the release cycle, whilst somewhat improved from when I started, had stretched from 1-2 months to almost 6. This had an unforeseen consequence: developers would, out of necessity, place new features onto release branches to be able to get features out faster. That's when the madness started.

There were too many release versions, operating system versions and client library versions to contend with. Sometimes even trivial changes became enormous chess games, where the order of the changes we made would determine whether the system would actually run or not. Eventually it was bound to grind to a halt because, with that many deployment configurations, each release had too many testing dependencies. There were two problems here: firstly, since this was now a very widely distributed system, it was difficult for us to have an accurate test deployment that worked; secondly, making distributed systems work is hard anyway, and the more configurations you have to manage the more complex it's going to be. We were trying to help ourselves by retrospectively adding unit tests, but the coverage was still fairly low and so we could never have much confidence that a built system was actually going to work. What we really desperately needed were integration tests, but we never quite managed it.

That's where Joel's post from last week comes in. As described by Joel, we essentially had a SEQUENCE-MANY situation, where to be sure of stability we had to test many releases against many deployment configurations. It would be fair to say that we failed to do this adequately. I sometimes wonder if we could have done it a little better.

  1. Could We Have Had Tighter Control Over The Hardware? Unlike the problem of enforcing standards in Web browsers, we of course had full control over the deployment environment, so we could have mandated a common platform for it. As enticing as this sounds, talking to system admins now and then would tend to suggest that it is simply not possible if the hardware is to be purchased incrementally. After you buy the first 2 Dell servers with a standard specification, a month later that specification will have changed, and the more time passes the larger the drift between the hardware becomes.

    If, however, you sourced a job lot of the hardware in the same place at the same time, you could buy extra (for spares and future requirements) and attempt to keep this variable constant. It would have been expensive to do but it is at least possible in this scenario. I think that this probably would have reduced the number of different cross-compilations that were required and reduced the number of different JVMs that we had to manage. The biggest problem though is that we would have, to a certain extent, needed to know the future to be able to predict what sorts and what amounts of hardware we would need when we set out. That kind of makes it a non-starter, coupled with the fact that I’ve never actually heard of anyone doing this for real.

  2. Could We Have Had Tighter Control Over The Software? This is the thing that concerns me the most and is definitely a place where we didn't do as well as we should have. We let people go ahead and implement locale-specific solutions that were unworkable globally, and those created internal system dependencies that would later need to be 'undone'. Anyone who has ever worked on a system after its release will know it's much easier to get it right first time, because if you create an intermediate solution that ends up being used then you have to manage the old intermediate version 'out'.

    Indeed, there was a story here too. The original system architect moved on a year or so after I joined. He used to worry about 80% of the code that got committed; when he left, no-one really had his insight into the architecture and the rust quickly set in. Related to the loss of the architect, as already mentioned, was the lack of integration tests. Both would have helped us to identify which code was bogus and have it fixed before it reached the release stage.

The one thing we did successfully manage to do was to stop developers changing release branches. But the effect of that was to make us look like chumps when the business had to be denied features until new releases could be rolled out. Ho hum.

The idealist in me thinks we could have done a few things to make it work better, but the pragmatist thinks that we did what we had to do. Whilst the idealist in my head makes a lot of noise and gets listened to an awful lot, the pragmatist is the one who gets the most results. When you are faced with a daily tightrope walk, like we were, you have to try and be both idealist and pragmatist: choosing the idealist's course when you think you can get away with it, and the pragmatist's when you can't.

But when all else fails just hope for the best. The scars will heal. Eventually.