Ok, so this is a little bit technical but it’s an intriguing puzzle that got me thinking quite hard. So here’s the problem. Sometimes investors want to be able to judge what the absolute worst case scenario would have been if they’d invested in something. Look at the following random graph of pretend asset prices:

You’ll see that there are two points on the graph (marked in red) where if you had invested at the first point and pulled out at the second point you would have suffered the worst-case loss. That is the point of this analysis: it’s a way for investors in the asset to see how bad ‘bad’ has really been in the past. Clearly past prices are not an indicator of future losses. 🙂

The upper one is the ‘peak’ and the lower one is the ‘trough’. Well, finding these two babies by eye is trivial. Doing it reliably (and quickly) on a computer is not that straightforward. Part of the problem is coming up with a consistent natural-language description of what you want your peak and trough to be. This took me some time. I believe what I really want is: the largest positive difference of high minus low where the low occurs after the high in time-order. This was the best I could do. This led to the first solution (in Python):

```
def drawdown(prices):
    maxi = 0  # index of the peak found so far
    mini = 0  # index of the trough found so far
    for i in range(len(prices) - 1):
        maxj = 0
        maxdiff = 0  # largest drop seen starting from point i
        for j in range(i + 1, len(prices)):
            if prices[i] - prices[j] > maxdiff:
                maxj = j
                maxdiff = prices[i] - prices[j]
        if maxdiff > prices[maxi] - prices[mini]:
            maxi = i
            mini = maxj
    return (prices[maxi], prices[mini])
```

Now this solution is easy to explain. It’s what I have come to know as a ‘between’ analysis. I don’t know if that’s the proper term but it harks back to the days when I used to be a number-cruncher for some statisticians. The deal is relatively straightforward: compare each item against every item after it in the list and store the largest positive difference. If this difference is also the largest seen in the data-set so far then make it the largest positive difference of all points. At the end you just return the two points you found. This is a natural way to solve the problem because it looks at all possible start points and assesses what the worst outcome would be.

The problem with this solution is that it has quadratic complexity. That is, for any data-series of size N it performs on the order of N * (N - 1) / 2 comparisons; in shorthand this is O(N^2). For small N this doesn’t really matter, but for any decently sized data-series this baby will be slow-as-molasses. The challenge then is to find an O(N) solution to the problem and to save-those-much-needed-cycles for something really important:

```
def drawdown(prices):
    prevmaxi = 0  # index of the best peak found so far
    prevmini = 0  # index of its matching trough
    maxi = 0      # index of the running maximum
    for i in range(1, len(prices)):
        if prices[i] >= prices[maxi]:
            maxi = i
        else:
            # You can only determine the largest drawdown on a downward price!
            if (prices[maxi] - prices[i]) > (prices[prevmaxi] - prices[prevmini]):
                prevmaxi = maxi
                prevmini = i
    return (prices[prevmaxi], prices[prevmini])
```

This solution is a bit harder to explain. We move through the prices, and the first part of the ‘if’ finds the highest point of the peak so far. However, the ‘else’ branch is where the magic happens. If the next value is less than the maximum then we check whether this difference is larger than any previously encountered difference; if it is, then this is our new peak-to-trough.
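As a quick sanity check, here is the O(N) solution run on some made-up prices (the numbers are invented for illustration):

```python
def drawdown(prices):
    # The article's O(N) solution: returns (peak, trough) prices.
    prevmaxi = 0  # index of the best peak found so far
    prevmini = 0  # index of its matching trough
    maxi = 0      # index of the running maximum
    for i in range(1, len(prices)):
        if prices[i] >= prices[maxi]:
            maxi = i
        elif prices[maxi] - prices[i] > prices[prevmaxi] - prices[prevmini]:
            prevmaxi = maxi
            prevmini = i
    return (prices[prevmaxi], prices[prevmini])

# The peak (120) comes before the deepest trough (80), so the
# worst investor experience here is a fall of 40.
prices = [100, 120, 90, 110, 80, 105]
print(drawdown(prices))  # (120, 80)
```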

The purist in me likes the fact that the O(N) solution looks like easier code to understand than the O(N^2) solution. Although the O(N^2) solution is, I think, an easier concept to grapple with, when it’s translated into code it just doesn’t grok.

## 11 replies on “Calculating peak-to-trough drawdown”

Hi Steve,

(Apologies if this is a repost – it hasn’t popped up yet, so having to retype)… The second solution – isn’t it a case of two if’s?

My python is pretty weak (har har), but if I’ve translated correctly into C#, doesn’t the below code do the same, but in a simpler fashion?

Hi Chris,

Yes, what you’ve posted kind-of makes the point for me that this problem is harder than it looks. With your solution you’ll find that if the low comes before the high you’ll be showing a peak-to-trough that could never have occurred. In this case you’ll be showing the trough-to-peak gain! Which investors care about too but not for this measure.
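The failure mode Steve describes can be shown with a small sketch (invented numbers): when the low precedes the high, naive max-minus-min reports a gain no investor could ever have lost.

```python
def drawdown(prices):
    # The article's O(N) solution: the trough must come after the peak.
    prevmaxi = prevmini = maxi = 0
    for i in range(1, len(prices)):
        if prices[i] >= prices[maxi]:
            maxi = i
        elif prices[maxi] - prices[i] > prices[prevmaxi] - prices[prevmini]:
            prevmaxi = maxi
            prevmini = i
    return (prices[prevmaxi], prices[prevmini])

# Here the low (50) comes BEFORE the high (120).
prices = [50, 100, 80, 120]
print(max(prices) - min(prices))  # 70: a trough-to-peak GAIN, not a loss
print(drawdown(prices))           # (100, 80): the worst real loss is only 20
```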

Steve

Ahh yes, I noticed this about a day after my initial posting. I see now the complexity.

D’oh!

Working on the max drawdown problem and came across a case that I’m baffled by. If your data set has its peak value occurring on the very last value, then there is no trough value after it: would the drawdown be 0, or would you use the previous peak/trough values, which your solution provides?

Thanks

Andy

Hi Andy,

Well the peak-to-trough draw-down is a metric that might tell an investor what’s the absolute worst-case loss ever seen historically. Therefore I would always expect the peak to happen before the trough, therefore a trailing peak should have no effect on the result.
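Andy’s edge case can be checked directly with the article’s O(N) function (the prices below are invented): a peak on the very last point has no trough after it, so it simply never displaces the earlier peak/trough pair.

```python
def drawdown(prices):
    # The article's O(N) solution: returns (peak, trough) prices.
    prevmaxi = prevmini = maxi = 0
    for i in range(1, len(prices)):
        if prices[i] >= prices[maxi]:
            maxi = i
        elif prices[maxi] - prices[i] > prices[prevmaxi] - prices[prevmini]:
            prevmaxi = maxi
            prevmini = i
    return (prices[prevmaxi], prices[prevmini])

# The series ends on its highest value (150), but the answer is still
# the earlier fall from 130 to 90: the trailing peak has no effect.
print(drawdown([100, 130, 90, 150]))  # (130, 90)
```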

Hope that helps,

Steve

It is not nearly that complicated; it can also be done in Excel in seconds.

Step 1) Take first data point set as high

Step 2) run an if statement: if the n+1 data point is greater than the n data point, the n+1 data point is the new high

Step 3) take [(n / step 2) – 1] this gives you your % drawdown

Step 4) take the minimum value achieved from Step 3
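For reference, Chris’s four steps translate to something like the following sketch in Python (the function name is mine, and I’m assuming the “high” in step 2 is meant as the running high seen so far):

```python
def pct_drawdown(prices):
    high = prices[0]   # Step 1: first data point is the initial high
    worst = 0.0
    for p in prices[1:]:
        if p > high:   # Step 2: a new running high
            high = p
        dd = p / high - 1          # Step 3: % drawdown vs the running high
        worst = min(worst, dd)     # Step 4: keep the minimum (most negative)
    return worst

print(pct_drawdown([100, 120, 90, 110, 80, 105]))  # -0.333... (80 vs a high of 120)
```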

Hi Chris,

Yes, that’s easier, but this is not peak-to-trough draw-down as I defined it. This procedure will find the largest price difference.

To appreciate the difference, consider what would happen if the minimum value is before the highest point. This is not draw-down because an investor could never ‘experience’ this loss. Once you bear this in mind you will realise that the peak-to-trough may not actually involve the highest point in the series at all. Any local maximum can be the highest point as long as it’s followed by a local minimum that is low enough.
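A concrete illustration of that last point (numbers invented): here the global high of 120 is followed only by a small dip, while the earlier local peak of 100 precedes the deepest fall, so the drawdown doesn’t involve the highest price at all.

```python
def drawdown(prices):
    # The article's O(N) solution: returns (peak, trough) prices.
    prevmaxi = prevmini = maxi = 0
    for i in range(1, len(prices)):
        if prices[i] >= prices[maxi]:
            maxi = i
        elif prices[maxi] - prices[i] > prices[prevmaxi] - prices[prevmini]:
            prevmaxi = maxi
            prevmini = i
    return (prices[prevmaxi], prices[prevmini])

# Global max is 120, but it only falls to 110 afterwards (a drop of 10).
# The local peak 100 falls to 60 (a drop of 40), so that pair wins.
print(drawdown([100, 60, 95, 120, 110]))  # (100, 60)
```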

Hope that helps

Steve

Great article; as you say, it’s the edge cases that are difficult.

Hmmm, this is interesting, but I’m not sure imposing temporal order on the data will satisfy your investors’ requirements. I would think most investors want to know what could be, not what would have been if they had done something (bought x time ago) that they didn’t actually do. In that case it becomes a question of variability regardless of temporal order. As you point out, assuming that historical prices (and the events surrounding them) will repeat over time is not all that sound, but if the assumption is made (e.g. the highest high could happen again) then I would have thought the biggest difference looking at all the trends (either high-to-low or low-to-high; remember we’re assuming, rightly or wrongly, that the highs and lows are repeatable, so if something has been that low before it can/will be again) is the worst case scenario.

Gonz

Thanks for your comment. Yes, I agree it’s not the greatest of measures. But it does appear quite a lot in hedge-fund prospectuses and the meaning I’ve given is the one that investors expect. Whether they place any weight behind it is another matter!

Just stumbled on this but I wanted to share my solution:

This walks through the list, updating the maximum point it has been through, and comparing it to the minimum from the remaining list, calculating a maximum loss at every step through the list. The maxLoss also updates to be the biggest value yet reached as the walk continues.
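The commenter’s code didn’t survive the page, but the description reads like the standard running-maximum formulation of max loss; a sketch of what I take it to mean (the function name is mine):

```python
def max_loss(prices):
    peak = prices[0]   # highest point walked through so far
    worst = 0          # maxLoss: biggest fall yet reached
    for p in prices[1:]:
        if p > peak:
            peak = p
        loss = peak - p          # fall from the running peak at this step
        if loss > worst:
            worst = loss
    return worst

print(max_loss([100, 120, 90, 110, 80, 105]))  # 40 (120 down to 80)
```

Note this returns the loss itself rather than the peak/trough pair, but it agrees with the article’s O(N) solution on the same data.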