Categories: article, finance, programming, python

Calculating peak-to-trough drawdown

Ok, so this is a little bit technical but it’s an intriguing puzzle that got me thinking quite hard. So here’s the problem. Sometimes investors want to be able to judge what the absolute worst case scenario would have been if they’d invested in something. Look at the following random graph of pretend asset prices:

[Figure: Peak-To-Trough]

You’ll see that there are two points on the graph (marked in red) where, if you had invested at the first and pulled out at the second, you would have suffered the worst-case loss. That is the aim of this analysis: it is a way for investors in the asset to see how bad ‘bad’ has really been in the past. Clearly, past prices are not an indicator of future losses. 🙂

The upper one is the ‘peak’ and the lower one is the ‘trough’. Finding these two babies by eye is trivial. Doing it reliably (and quickly) on a computer is not so straightforward. Part of the problem is coming up with a consistent natural-language description of what you want your peak and trough to be. This took me some time. I believe what I really want is: the largest positive difference of high minus low, where the low occurs after the high in time-order. This was the best I could do. It led to the first solution (in Python):


def drawdown(prices):
    maxi = 0  # index of the peak of the best drawdown found so far
    mini = 0  # index of its matching trough
    for i in range(len(prices) - 1):
        # Find the biggest drop from prices[i] to any later price.
        worst_j = i
        worst_drop = 0
        for j in range(i + 1, len(prices)):
            if prices[i] - prices[j] > worst_drop:
                worst_j = j
                worst_drop = prices[i] - prices[j]
        # Keep it if it beats the best peak-to-trough seen so far.
        if worst_drop > prices[maxi] - prices[mini]:
            maxi = i
            mini = worst_j
    return (prices[maxi], prices[mini])

Now this solution is easy to explain. It’s what I have come to know as a ‘between’ analysis. I don’t know if that’s the proper term but it harks back to the days when I used to be a number-cruncher for some statisticians. The deal is relatively straightforward: compare each item against every item after it in the list and store the largest positive difference. If that difference is also the largest seen in the data-set so far, then it becomes the largest positive difference of all points. At the end you just return the two points you found. This is a natural way to solve the problem because it looks at all possible start points and assesses what the worst outcome from each would be.
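To make that concrete, here is a tiny worked example of my own (the series is made up, not taken from the graph above) tracing what the quadratic version considers:

prices = [5, 3, 8, 2]
# i = 0: the biggest drop from 5 is 5 - 2 = 3
# i = 1: the biggest drop from 3 is 3 - 2 = 1
# i = 2: the biggest drop from 8 is 8 - 2 = 6  <- the overall worst case
print(drawdown(prices))  # (8, 2): buy at the 8 peak, sell out at the 2 trough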

The problem with this solution is that it has quadratic complexity. That is, for any data-series of size N, best and worst case alike result in N * (N - 1) / 2 comparisons; in shorthand this is O(N^2). For small N this doesn’t really matter, but for any decently sized data-series this baby will be slow-as-molasses. The challenge then is to find an O(N) solution to the problem and save those much-needed cycles for something really important:


def drawdown(prices):
    prevmaxi = 0  # index of the peak of the best drawdown so far
    prevmini = 0  # index of its trough
    maxi = 0      # index of the running maximum

    for i in range(1, len(prices)):
        if prices[i] >= prices[maxi]:
            maxi = i
        else:
            # You can only determine the largest drawdown on a downward price!
            if (prices[maxi] - prices[i]) > (prices[prevmaxi] - prices[prevmini]):
                prevmaxi = maxi
                prevmini = i
    return (prices[prevmaxi], prices[prevmini])

This solution is a bit harder to explain. We move through the prices, and the first branch of the ‘if’ tracks the highest peak seen so far. The second branch, however, is where the magic happens. If the current value is less than that running maximum, we check whether the drop from the peak down to it is larger than any previously encountered difference; if it is, then it becomes our new peak-to-trough.
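As a quick sanity check (my own addition, not part of the original post), you can run the linear version over a pretend random-walk price series, much like the one in the graph above:

import random

random.seed(42)  # any seed will do; fixed here so the run is repeatable

# A pretend asset: a 1,000-point random walk starting at 100.
prices = [100.0]
for _ in range(999):
    prices.append(prices[-1] + random.uniform(-1.0, 1.0))

peak, trough = drawdown(prices)
print("peak: %.2f  trough: %.2f  drawdown: %.2f" % (peak, trough, peak - trough))

Running both implementations over a batch of such random walks and checking that they agree makes a decent regression test for either one.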

The purist in me likes the fact that the O(N) solution looks like easier code to understand than the O(N^2) solution. Although the O(N^2) solution is, I think, an easier concept to grapple with, when it’s translated into code it just doesn’t grok.

Categories: rant

The Decline of Hewlett-Packard

I remember that when I was a kid I had a certain amount of awe for some companies. I particularly remember thinking that HP was cool. I mean, how could they not be? For one thing they made all that fancy scientific equipment; and not only that, they made all those neat calculators that did the whole reverse-Polish thing.

[Image: The Mighty HP42s]

I owned an HP42s (pictured left) and it was a very, very cool item indeed. The ‘s’ stood for ‘scientific’, but it had its own stack-based programming language that allowed you to write simple applications. You could even plot graphs onto its two-line display. But only if you were sick and twisted, like me.

In the early 90s I spent a long summer trying to program an HP Deskjet 560c from a DOS program using only PCL5. It was quite a slow task, and Windows would subsequently make the entire effort a waste of time, but I had nothing but the utmost respect for the hardware.

But everything made by HP that I’ve owned since then has been of poor quality. I owned an inkjet that broke down before the cartridges ran out. Then I bought a WiFi printer that has lasted longer but never loads a page straight. Then I bought a scanner that:

  • comes, for some reason, with about 300MB of garbage software that I don’t want but that the scanner can’t do without
  • has software that doesn’t work very well, with a clunky, non-standard interface that makes the whole experience regrettable

Last night a friend ‘dropped’ round an old HP notebook, an ‘nx6110’, for me to fix. The notebook was not broken, just missing a driver, but the entire experience of getting that driver from HP left me sad and demoralised. Once I had discovered the product model number and located the driver, I noticed that it was 24.7MB (it subsequently transpired that this package contained the driver for 3 different OSes, but still)! When I tried to install it, it wanted to install itself into a non-standard folder in C:\, which annoyed me. Nor did the folder get automatically cleaned up afterwards.

After that joyless experience I wanted to install the WiFi drivers, but it was impossible to tell, from the notebook and the website, which one to install. They had at least 5. Should I install one? Two? A combination? Simply unclear. I unsuccessfully tried 3 or 4 of them and just gave up in the end. I was left sick, tired, and angry at the low quality of HP’s products and support.

What is going on? There’s no joy in owning an HP product anymore, and in fact I would say that this has been true since before the Compaq merger. I can’t remember the last time they made a product that generated desire the way Apple, or even Dell, does. So I guess I’ll just polish my reverse-Polish HP42s and remember the good old days and how things used to be. Perhaps a new research direction can save them, but as a consumer I’m not going to buy any more of their products unless they produce something exceptional.

Categories: article, project management

You think your code don’t smell?

So, code reviews are great. Get the benefit of some ass-hole telling you that your comments should be C-style (/*) and not C++-style (//) and remind you that the member name ‘mSuckThis’ is not suitable, ever. No really, code reviews are great. It’s just that a lot of the time they don’t work.

The first time I encountered code review was when my boss at the time had just read some book on how to manage programmers and was keen to inflict it on all his employees. His code-review process was to take all my work, print it out, and go through it line-by-line. Master-and-student style.

This type of code review, in the way that he implemented it, was meaningless. It concentrated on an important but largely automatable aspect of code review: adherence to coding guidelines.

As I see it there are three types of defect that code review is trying to identify:

  1. Adherence to coding guidelines (or lack of it) and inter-package dependencies.
  2. Identification of localised errors: “that loop is infinite”, “that algorithm should be O(log N) and not O(N^2)”, or “that module is way too big”.
  3. Identification of non-local errors, where ‘local’ means local to the code review: for instance, the impact of adding a throw to a widely used method and how that affects all the dependent code paths (a hypothetical sketch follows this list).
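To illustrate the third type, here is a hypothetical sketch (the names and scenario are mine, not from the original): a widely used method starts throwing where it used to return a sentinel, and a caller far from the diff quietly breaks in a way no excerpt-level review would catch.

# Before the change under review, lookup() returned None on a miss.
# The diff being reviewed makes it raise instead:
def lookup(table, key):
    if key not in table:
        raise KeyError(key)  # new behaviour; previously `return None`
    return table[key]

# This caller lives nowhere near the diff, so it never appears in the review.
# It still assumes the old None-on-miss contract and now crashes instead of
# falling back to its default:
def display_name(table, user_id):
    name = lookup(table, user_id)
    return name if name is not None else "<unknown>"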

I question anyone’s ability to fully understand the dynamic nature of any reasonably sized piece of software just by looking at a small excerpt. Every time you review that code you have to ‘load’ the supporting information into your head to be able to identify whether the code is totally awesome or tragically bogus. In my experience, defects of the local type (type 2) were sadly rarely identified by code review, and defects of the non-local type (type 3) almost never.

I’m passionate about improving code quality, but I don’t see any realistic way to achieve what I want. To identify non-local errors you really need your code reviewer to sit next to you during development, or to be as deeply involved in the code as you are; reliably finding local errors would probably take a similar approach. However, your reviewer is rarely that involved. It seems that some judicious use of pair programming might be the answer, but that comes with its own problems.

It seems that to get the best out of code-reviews you have to be very careful about how you implement them. Sure, let’s automate what we can automate and pair program on tricky items but the real code-review needs to be extremely skilfully handled to get the best bang-for-your-buck-chuck.