The Low Tech Secret Santa – Part 1

We’re all Santas now

Secret Santa, if you didn’t know, is a gift-giving game that seeks to reduce the overall burden of buying gifts: each person buys a single gift for one other person instead of a gift for every person. It also usually places a limit on the spend, which stops people over- or under-buying.

For most people the solution to running a ‘Secret Santa’ is pretty simple but it’s worth repeating the process for clarity:

  1. Write the names of your friends/family/colleagues on scraps of paper;
  2. Fold up the names of the Santas and put them in your (hacking!) hat;
  3. Each Santa takes it in turn to pull a name from the hat;
  4. If any Santa pulls out their own name, return to step 2;
  5. Buy gifts and exchange them sometime later.
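
For the curious, the whole hat process can be sketched in a few lines of Python (a minimal simulation with hypothetical names, not part of the original draw):

```python
import random

def hat_draw(santas):
    """Repeat the hat draw (steps 2-4) until no-one pulls out their own name."""
    while True:
        hat = santas[:]        # the scraps of paper go back into the hat
        random.shuffle(hat)
        # each Santa pulls a name; restart if anyone gets themselves
        if all(giver != pick for giver, pick in zip(santas, hat)):
            return dict(zip(santas, hat))

pairs = hat_draw(["Ann", "Bob", "Cat", "Dan"])  # giver -> receiver
```

That restart-on-failure loop is exactly what makes the remote version painful later on.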

The ‘secret’ part refers to the fact that no-one should know who anyone’s Santa is, including their own. It usually also means that you never find out who bought your gift, but that part is optional, and there is often plenty of fun to be had discussing it afterwards.

Trouble at the North Pole

Problems start to arise when you can’t get all of the people who want to participate in the draw in the same physical room at the same time.

Your first thought might be (like mine was) “well surely there’s an app for that where you tap in all the names and the app privately sends out the matches”.

And you’d be right, there are plenty. But what if the person who is remote to the draw doesn’t have a usable mobile device or use email? I know this is hard to believe but these people do exist. And believe me, I have tried to fix this problem by fixing the bigger problem but progress has been slow. So, for now at least, apps and email are out.

Your second thought might be (like mine also was) “I could just use snail mail to send the draw out to the remote persons”. The problem is that there is a chance that one or more of those remote persons draw themselves. That would invalidate the draw and mean we’d need to do it all over again.

So obviously I can fix that by sending out multiple draws and selecting the first one that works. Right? The question then becomes: how many trial draws do I need to be confident of a valid gift-giving extravaganza?

99% Confident Santas

So how many draws do we need for us to be 99% certain that those pre-draws will contain at least one valid set of pairings? This problem is a special case of the Binomial distribution, which has the following general formula for the probability of getting exactly r successes from n trials, each with success probability p:

\binom{n}{r} \, p^{r} \, (1-p)^{n-r}

But what we actually want is the probability of 1 or more successes, which is the complement of the probability of no successes at all. Substituting r = 0 into the Binomial formula and taking the complement simplifies to:

1-(1-p)^{n}

But what is p?

One way to estimate it is to write out by hand the set of valid outcomes for different sizes of group, which gives an idea of what sort of values p takes.

n    Valid    Permutations (n!)    p (approx)
2    1        2                    0.5
3    2        6                    0.333
4    9        24                   0.375
5    44       120                  0.367

Table of probabilities of success in a trial of n Santas

It turns out that this number oscillates as the group size increases, but it looks like it’s converging to somewhere just above 1/3. In my case I really want to know p for a group size of 6, so a computer is needed to brute-force that answer. This is, of course, in the absence of a specific insight into the nature of the ‘Santa sequence’ that would allow me to count the valid outcomes without brute force, but more on that in Part 2.

Turns out that in Python you can write an expression to do that in a one-liner but for clarity I’ve split it onto 3 lines.

>> import itertools, numpy
>> santa = numpy.array([1, 2, 3, 4, 5, 6])
>> sum(1 for z in itertools.permutations(santa) if numpy.prod(numpy.array(z) - santa) != 0)
265

What this does is to produce a Python iterable object which has all the different permutations of this array [1, 2, 3, 4, 5, 6]. If we use the order of the array to indicate the pairing (i.e. the first item corresponds to the first santa, the second item to the second and so on) then [1, 2, 3, 4, 5, 6] is clearly the worst draw since every Santa is paired with themselves.

By using itertools.permutations, then, we can represent all possible draws of six Secret Santas as a list of arrays (spoiler: the length of this list is 6!, which is 720).

But how do we check for a valid draw? One cheap way is to subtract the original array from the permutation, element by element. If the draw is invalid the subtraction puts one or more zeros in the result (a zero means that Santa drew themselves), so the product of the differences is zero for an invalid draw and non-zero for a valid one.
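
To see the trick on concrete numbers, here it is applied to two hand-picked permutations (purely illustrative):

```python
import numpy

santa = numpy.array([1, 2, 3, 4, 5, 6])

good = numpy.array([2, 1, 4, 3, 6, 5])  # Santas swapped in pairs: a valid draw
print(numpy.prod(good - santa))          # differences have no zeros -> non-zero product

bad = numpy.array([1, 3, 2, 4, 6, 5])   # Santas 1 and 4 drew themselves
print(numpy.prod(bad - santa))           # zeros in the differences -> product is 0
```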

Doing this gives us the new information we need: there are 265 valid draws for a group of six people, which makes p about 0.3681.

Solved it! Now We Have Two Fails.

So now we want to know how many draws are needed to be 99% confident that one of the draws will succeed. We could simply use our simplified Binomial ‘at least one success’ formula to calculate the probability of success after n trials and then stop when we hit 99% but where is the fun in that?
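
For the record, the no-fun route is only a couple of lines: take logs of the ‘at least one success’ formula and solve for n directly (a sketch):

```python
import math

p = 265 / 720                            # chance that a single postal draw is valid
# Solve 1 - (1 - p)**n = 0.99 for n by taking logs:
n = math.log(1 - 0.99) / math.log(1 - p)
print(n)                                 # about 10.03
```

Rounding up, 11 pre-draws strictly clears the 99% mark, while 10 gets you roughly 98.98%.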

The SciPy optimiser is the sledgehammer we need for this nut; it just needs us to rearrange our equation so that the answer we’re looking for gives zero when plugged into a Python function.

>> from scipy.optimize import fsolve
>> fsolve(lambda x: 1 - (1 - 265/720.)**x - 0.99, 1)
array([10.034...])

So we need about 10 draws to be confident that it will only go wrong one time in 100. But when you think about the mechanics of putting together 10 independent Santa draws and stuffing them in the Royal Mail, that feels like a lot of work.

Not only that, there is a much bigger chance that I will mess something up when putting the draws together and it results in a failed draw for a purely human reason (as opposed to a mathematical one).

Another Approach

Then I hit on a slightly different approach: a step further towards a generalised method that would allow us to do as many draws as we want, although probably only when the number of Santas is low.

I now know how to generate all the permutations, how to identify valid draws easily, and how many valid draws there are (i.e. 265). This number is small enough that perhaps the easiest thing to do, in this case, is to send all Santas a personalised list that gives them their pairing in each of the 265 possible valid outcomes. Then all I need to do is send out those personalised lists, via post or email, to all the Santas.

On the date of the draw someone simply selects a random number between 1 and 265 and everyone reads their Santa off the list. Sure, as the organiser, I could find out who has which pairing, but that’s not really a concern because I really don’t want to know. So I just won’t look.
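
As a sketch of how such lists could be built (hypothetical names; an illustration rather than the exact generator used for the real draw):

```python
import itertools

santas = ["Ann", "Bob", "Cat", "Dan", "Eve", "Fred"]

# every permutation of six with no fixed point, i.e. every valid draw
valid = [p for p in itertools.permutations(range(6))
         if all(i != p[i] for i in range(6))]
print(len(valid))   # 265 valid draws, as counted earlier

# on the day a number between 1 and 265 is chosen, say 100
chosen = valid[100 - 1]
pairings = {santas[i]: santas[chosen[i]] for i in range(6)}
```

Each Santa’s personalised sheet is then just their own column of pairings, one row per possible draw number.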


I ran this process as a trial (to make sure everyone ‘got it’) and then we did it for real on 31st October 2021. I think it worked fine, but we might not really know until the gifts are given. Lol.

This puzzle raised a lot of other questions that I haven’t delved into in this post. In the next post I’ll provide a link to the code that generated the draw sheets I sent out to my fellow Santas (in case someone wants to use what I’ve done to run their own Secret Santa), but I’ll also delve into the sequence for p to see if we can answer questions like:

  • What does p converge to?
  • What if the number of Santas is bigger than 10?

But until then I’ve got some shopping to do. 🙂


Circle Square

So as I mentioned in the last post I’ve been messing around with OpenGL shaders. To be honest I know next to nothing about shaders or OpenGL but it’s fun to take a look at these things to try and gain some intuition about how they work. Hopefully by doing so I’ll be able to build complex and interesting shaders of my own, but I’m getting ahead of myself.

So let’s start with something simple, a shader of concentric circles. It’s based upon a somewhat more fascinating shader, but there’s more than enough to talk about with just this simple example.

It turns out that the shader code to draw an animated set of concentric colourful circles requires just 2 lines. Well, that’s not quite true: you could write it in one line if you wanted. But that’s also not quite true: there are a lot of lines of WebGL needed to get the two lines below to do anything in the first place. Once that scaffolding is in place, though, the bit that does the drawing is effectively one formula that is compiled on the GPU.

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
  vec2 u = vec2(fragCoord/iResolution.xy-.5) * vec2(1.,iResolution.y/iResolution.x);
  fragColor = vec4(fract(length(u)*4. + vec3(0.,1.,2.)/3. + iTime/16.),1.);
}

You’ll notice that there aren’t any loops in the code and that’s because what we’re writing is conceptually the centre of a loop managed by WebGL that will get executed once for every pixel we want to shade.

Too Shadey

I should explain a little bit first. In OpenGL there are a few types of shader that form part of the rendering pipeline, but for the purposes of this series of posts the pipeline is a black box: we’re only really interested in the fragment shader, which is given window co-ordinates as input and produces the colour for that co-ordinate.

Co-ordinate Systems

A WebGL fragment shader receives pixel co-ordinates as natural numbers where the origin (0,0) is at the bottom left of the canvas. Therefore the first thing many shaders do is convert those co-ordinates into a new set where the origin is in the centre of the screen and the range of x and y is some set of values that makes the subsequent maths a bit simpler.

How they do this varies, and is not super interesting, but being able to work out the range from the domain of input values is often important for making sense of what happens next. A lot of OpenGL maths involves vectors and matrices; for this example it’s enough to know that a vec2 is a pair of x,y values.

vec2 u = vec2(fragCoord/iResolution.xy-.5) * vec2(1.,iResolution.y/iResolution.x);

I loved my Atari ST. It was the third personal computer I owned (preceded by a ZX81 and a ZX Spectrum) and it had a whopping 320×200 resolution with a mighty palette of 512 colours, 16 of them on screen at the same time. Amazing. So if our device has that resolution then we simply calculate the range of y by substituting in the display resolution.

\left(\tfrac{0}{200}- \tfrac{1}{2}\right) \cdot \tfrac{200}{320} \leq y \leq \left(\tfrac{200}{200} - \tfrac{1}{2}\right) \cdot \tfrac{200}{320}

When you simplify, you get the new range of x,y values, which reveals that the pixels have been corrected for the aspect ratio to keep them square.

-\tfrac{1}{2} \leq x \leq \tfrac{1}{2},  -\tfrac{5}{16}\leq y \leq \tfrac{5}{16}
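
You can sanity-check that range by mirroring the shader arithmetic in a few lines of Python (a sketch, assuming the 320×200 display above):

```python
def to_shader_coords(px, py, w=320, h=200):
    """Mirror of: (fragCoord/iResolution.xy - .5) * vec2(1., h/w)."""
    return (px / w - 0.5) * 1.0, (py / h - 0.5) * (h / w)

print(to_shader_coords(0, 0))      # (-0.5, -0.3125), i.e. (-1/2, -5/16)
print(to_shader_coords(320, 200))  # (0.5, 0.3125),   i.e. (1/2, 5/16)
```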

Normally, it’s not magic

The final part, and where the magic actually happens, is in the following code segment. The part that provides the ‘circularity’ is the call to length, because it tells us how far our point in the plane is from the origin (which we placed in the centre). Length is the linear-algebra vector norm (not to be confused with ‘normalise’, which divides a vector by its norm), and in two dimensions it is simply the old high-school favourite, Pythagoras.

fragColor = vec4(fract(length(u)*4. + vec3(0.,1.,2.)/3. + iTime/16.),1.);

If we imagine a slice through our circles along the line y=0 then length(u) is equal to |x|, since clearly:

\sqrt{x^{2} + y^{2}} = \sqrt{x^{2} + 0^{2}} = |x|

By removing a dimension we can now reason about what the rest of the calculation must do. Since our length is |x|, scaling it by the constant 4 gives values in the range 0 to 2. We then add this to the vector:

\begin{pmatrix}\tfrac{0}{3} & \tfrac{1}{3} & \tfrac{2}{3}\end{pmatrix}

It is this step that essentially gives the circles their colour change. We now have three different ranges for the different colour channels and so we will end up with the following:

\begin{matrix}0 \leq r \leq 2 \\ \tfrac{1}{3} \leq g \leq 2\tfrac{1}{3} \\ \tfrac{2}{3} \leq b \leq 2\tfrac{2}{3}\end{matrix}

However, OpenGL only shows colours in the range of zero to one (if a value is greater than one it is capped at one), and this is the final step. We take the fract in order to keep just the fractional part of the calculation, so our values for each channel will range between zero and one. This gives us a type of oscillation where each colour channel rises steadily towards full saturation and then drops sharply to zero. The key is that each channel does this at a slightly different phase, and this is what gives us the candy-type colours. In order to visualise this a bit more clearly I’ve made a plot showing how the three channels vary when y=0.
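
The plot is easy to reproduce; here’s a sketch of the per-channel formula along the slice y=0 in NumPy (fract becomes mod 1):

```python
import numpy as np

def channels(x, t=0.0):
    """RGB values the shader produces on the slice y = 0 (illustrative only)."""
    base = np.abs(x) * 4.0 + t / 16.0               # length(u) * 4 + iTime/16
    return np.mod(base + np.array([0.0, 1.0, 2.0]) / 3.0, 1.0)

print(channels(0.25))   # base is exactly 1 here -> roughly [0, 0.333, 0.667]
```

Feeding np.linspace(-0.5, 0.5, 400) through channels and plotting the three columns with Matplotlib gives the saw-tooth curves described above.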

Time to wrap it up

This has been a long post, but we’re not quite done. The part of the formula that gives the circles their animation is the addition of iTime/16. Since iTime is the number of seconds since the shader began, this has the effect of gently pushing each peak of the RGB channels toward the centre on every frame. I should also point out that it significantly changes the ranges of the colour channels (RGB) prior to the call to fract, but because we call fract it makes no difference to the final output range of those channels, which stays between 0 and 1.
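
Since fract has period 1 and iTime/16 grows by exactly 1 every 16 seconds, the whole animation repeats on a 16-second cycle; here’s a quick numerical check (a sketch, single channel only):

```python
import numpy as np

x = 0.3                                   # a fixed point on the slice y = 0
for t in (0.0, 8.0, 16.0):
    v = np.mod(np.abs(x) * 4 + t / 16.0, 1.0)
    print(t, round(float(v), 3))          # t = 0 and t = 16 give the same value
```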

This has been a pretty long post. There were a lot of basic things to explain before actually getting to the point. Subsequent posts will be a lot shorter, I hope, so that we can focus on the interesting stuff!


The Shadey Student

In the last post, I wrote about getting wowed by WebGL fragment shader hacks. They are simultaneously fascinating and baffling: how can something seemingly so simple give rise to such magic?

… an uninformed public tends to confuse scholarship with magicianry…

Foundation and Empire – Isaac Asimov (1952)

It seems then that I am the uninformed public. Again.

Over the years I’ve come to realise that trying to explain something to others is the way to really understand it. It’s an approach that has broadly been given the name ‘The Feynman Technique’, which is to say, I think, something Feynman did in his teaching at Caltech rather than a bite-sized self-help method he developed to sell stocking-filler books. But all props to the Feynman legend: the system works, and so I’m going to aim to gain understanding by trying to explain what’s going on under the hood here on this blog. Note that I’m not a mathematician, so there are probably better explanations of how some of this works than mine; most likely what you need, if you want the good stuff, is here.

Armed with all the heavy-hitting celebrity ‘thinking’ quotes that I need to motivate me, I’ve set out on this endeavour. The first step of which is creating tools for the job. Whilst is the bomb and my inspiration, there are two reasons I’ve decided not to use it for this:

  1. I would need to publish my shaders on in order to embed them here. What I intend to do in explaining is a little too trivial for what I think should be used for;
  2. seems a bit slow, probably because it’s really popular! Better that I don’t add to that burden (although three page views a week isn’t going to hurt them!) but also better to have control over the content. IMHO.

Therefore I’ve created a little WebGL Javascript embedding tool. Based upon Mozilla’s great WebGL tutorials and inspired by It’s stripped down to simplify what needs to be done but the shaders I create with it should be compatible with when, and if, I do want to publish something.

Finally the last part of the stack is having a graphing tool that I can use to embed custom plots into these pages to explain the functions that are used in the shaders – because this is where the magic is. For that I’m using Python 3.8 with Matplotlib. It’s not the prettiest but it’s definitely more than good enough.

Let the fun begin.


New Toys

I have been interested in computer graphics for a long time but never really interested enough to make some positive steps toward it. Like a lot of people, I have tried to get a basic app working in OpenGL or DirectX but never really got very far. It was all a bit intimidating.

However, things have changed. These days, on DirectX, there are a load of fantastic tutorials on the internet, as well as seriously helpful libraries. If you’re interested in getting some physics working there are a few good books, and often the content is backed up with source code. For WebGL (which is a flavour of OpenGL) there are great tutorials to help, and really cool insights into how it works.

There’s quite a lot to take in before you can really grok what the graphics pipeline does, whatever API flavour you choose. It was whilst trying to figure out how shaders work that I stumbled across some stunning examples of what is possible. What I didn’t realise at first is that the examples I was looking at are simply fragment shaders.

If you don’t know how 3D graphics works you might say ‘so what?’. But let’s just say it’s not the easy route to getting 3D computer graphics done; not to me, anyway. The mathematics involved looks in a lot of ways harder, but the programming looks way easier. Programming this way is mostly declarative: there’s no bonkers API, and there are far fewer loops, because the magic is in the power of the GPU and the shader loop.

My only problem is that doesn’t quite work how I want it to. For one thing it keeps timing out (guess they need more funds, so I added some through Patreon), but that aside I wanted a bit more control over the shader and how it can be embedded. Partially for this blog but also to learn a bit more WebGL.

That happened two weeks ago. Since then I’ve been messing around trying to get something working for this site, making some shaders that I can embed directly here. I think I’m almost there …


Server timed out. Rebooting.

TL;DR: The hat is back!

It has been over 10 years since I last wrote something on this blog. Some usual life stuff happened that I won’t trouble you with. The rain fell, but mostly, the sun shone.

After 5 years of letting the site rot I’ve just spent the best part of 2 days getting it back up. It’s had:

  • OS upgrade;
  • WordPress upgrade;
  • but most importantly a new logo!

5 years of rot has gifted me about 4,000 accounts created by SEO spam-bots so in a fit of rage I deleted all registered accounts. As far as I can tell there is very little spam (if any) on the site, which brings me to my next point.

I’ve disabled comments. I realise a blog without comments isn’t really a blog at all. But the problem is that as the number of spam posts increases it becomes painful to moderate, and so I don’t. Turning off comments is fairer to potential commenters, who might otherwise write a comment and never have it published. At least I’m saving you some time.

I’m looking into alternatives, but so far in all the usual places. Perhaps I’ll find something that addresses my problems with spam that doesn’t involve Google. We’ll see.

After all the work of getting the site up I felt like I should probably write something. I’ve been doing some interesting (to me) hobby research into knowledge engineering and separately, computer graphics and shaders. Alongside my day job of project management, there’s a lot of potential topics, and perhaps some of those musings will end up right here – in the hat.

To keep the pace up, the promise I’m making myself is that new posts will be shorter. So … let’s end it there and see what happens next.