Categories
graphics

The Shadey Student

In the last post, I wrote about getting wowed by WebGL fragment shader hacks. They are simultaneously fascinating and baffling: how can something so seemingly simple give rise to such magic?

… an uninformed public tends to confuse scholarship with magicianry…

Foundation and Empire – Isaac Asimov (1952)

It seems then that I am the uninformed public. Again.

Over the years I’ve come to realise that trying to explain something to others is the best way to really understand it myself. It’s an approach that has broadly been given the name ‘The Feynman Technique’. Which is to say, I think, it’s something Feynman did in his teaching at Caltech, rather than a bite-sized self-help method he developed to sell stocking-filler books. But all props to the Feynman legend: the system works, and so I’m going to aim to gain understanding by trying to explain what’s going on under the hood here on this blog. Note that I’m not a mathematician, so there are probably better explanations of how some of this works than mine; most likely, if you want the good stuff, what you need is here.

Armed with all the heavy-hitting celebrity ‘thinking’ quotes that I need to motivate me, I’ve set out on this endeavour. The first step is creating tools for the job. Whilst shadertoy.com is the bomb and my inspiration, there are two reasons I’ve decided not to use it for this:

  1. I would need to publish my shaders on shadertoy.com in order to embed them here, and what I intend to write for these explanations is a little too trivial for what I think shadertoy.com should be used for;
  2. Shadertoy.com seems a bit slow, probably because it’s really popular! Better that I don’t add to that burden (although three page views a week isn’t going to hurt them!), and better still to have control over the content. IMHO.

Therefore I’ve created a little WebGL JavaScript embedding tool, based upon Mozilla’s great WebGL tutorials and inspired by shadertoy.com. It’s stripped down to simplify what needs to be done, but the shaders I create with it should be compatible with shadertoy.com when, and if, I do want to publish something.

Finally, the last part of the stack is a graphing tool that I can use to embed custom plots into these pages to explain the functions used in the shaders – because this is where the magic is. For that I’m using Python 3.8 with Matplotlib. It’s not the prettiest, but it’s definitely more than good enough.
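To give a flavour of the kind of plot I mean, here’s a minimal Matplotlib sketch. The function plotted is my own choice for illustration (GLSL’s smoothstep() turns up in a lot of shaders) and the output file name is just a placeholder:

    import numpy as np
    import matplotlib.pyplot as plt

    # Plot GLSL's smoothstep() so its shape can be seen outside a shader.
    def smoothstep(edge0, edge1, x):
        t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
        return t * t * (3.0 - 2.0 * t)

    x = np.linspace(-0.5, 1.5, 400)
    plt.plot(x, smoothstep(0.0, 1.0, x))
    plt.title("smoothstep(0, 1, x)")
    plt.xlabel("x")
    plt.savefig("smoothstep.png", dpi=150)  # the PNG then gets embedded in the post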

Let the fun begin.

Categories
article

New Toys

I have been interested in computer graphics for a long time, but never quite interested enough to take positive steps toward it. Like a lot of people, I have tried to get a basic app working in OpenGL or DirectX but never really got very far. It was all a bit intimidating.

However, things have changed. These days, for DirectX, there are a load of fantastic tutorials on the internet, as well as seriously helpful libraries. If you’re interested in getting some physics working there are a few good books, and often the content is backed up with source code. For WebGL (which is a flavour of OpenGL ES) there are great tutorials to help, and really cool insights into how it works.

There’s quite a lot to take in before you can really grok what the graphics pipeline does, whatever API flavour you choose. It was whilst trying to figure out how shaders work that I stumbled across some stunning examples of what is possible. What I didn’t realise, at first, was that the examples I was looking at were simply fragment shaders.

If you don’t know how 3D graphics works you might say ‘so what?’. But let’s just say it’s not the easy route to getting 3D computer graphics done – not to me anyway. The mathematics involved looks in a lot of ways harder, but the programming looks way easier. Programming this way is mostly declarative: there’s no bonkers API, and there are far fewer loops, because the magic is in the power of the GPU and the shader loop.
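To make that last point concrete, here’s a CPU-side analogy of my own (this is not how a GPU actually executes anything): a fragment shader is just a function from pixel coordinates to a colour, and the loop over every pixel is supplied for you.

    import numpy as np

    # A fragment shader, conceptually: pixel coordinates in, colour out.
    def shade(u, v):
        # u, v are normalised coordinates in [0, 1)
        return (u, v, 0.5)  # a simple two-axis colour gradient

    width, height = 320, 240
    image = np.zeros((height, width, 3))
    for y in range(height):        # on the GPU these two loops disappear:
        for x in range(width):     # shade() runs for every pixel, in parallel
            image[y, x] = shade(x / width, y / height)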

My only problem is that shadertoy.com doesn’t quite work how I want it to. For one thing it keeps timing out (I guess they need more funds, so I added some through Patreon), but that aside I wanted a bit more control over the shader and how it can be embedded; partly for this blog, but also to learn a bit more WebGL.

That happened two weeks ago. Since then I’ve been messing around trying to get something working for this site, making shaders that I can embed directly here. I think I’m almost there …

Categories
article

Server timed out. Rebooting.

TL;DR: The hat is back!

It has been over 10 years since I last wrote something on this blog. Some usual life stuff happened that I won’t trouble you with. The rain fell, but mostly, the sun shone.

After 5 years of letting the site rot I’ve just spent the best part of 2 days getting it back up. It’s had:

  • OS upgrade;
  • WordPress upgrade;
  • but most importantly, a new logo!

5 years of rot has gifted me about 4,000 accounts created by SEO spam-bots, so in a fit of rage I deleted all registered accounts. As far as I can tell there is very little spam (if any) on the site, which brings me to my next point.

I’ve disabled comments. I realise a blog without comments isn’t really a blog at all. But the problem is that as the number of spam posts increases it becomes painful to moderate, and so I don’t. Turning off comments is fairer to potential commenters, who might otherwise write a comment and never have it published. At least I’m saving you some time.

I’m looking into alternatives, though so far only in the usual places. Perhaps I’ll find something that addresses my problems with spam and doesn’t involve Google. We’ll see.

After all the work of getting the site up I felt like I should probably write something. I’ve been doing some interesting (to me) hobby research into knowledge engineering and, separately, computer graphics and shaders. Alongside my day job of project management, there are a lot of potential topics, and perhaps some of those musings will end up right here – in the hat.

To keep the pace up, the promise I’m making myself is that new posts will be shorter. So … let’s end it there and see what happens next.

Categories
programming

Programming Like It’s 1995

Object-oriented programming: where is it now? When I was at college we learnt a little about object methods and techniques, and when I left I kept reading about them. I remember in the 90s being incensed that an employer was NOT using object methods appropriately. I EVEN read books about it. You could say I was an object fan-boy. But even I have to admit that OOP didn’t exactly deliver the way I expected it would.

You may say that this is crazy talk and that OOP was/is hugely successful. Shark-jumping mumbo jumbo. But I might disagree. In fact, sometimes I think the object-baggage we’ve inherited is perhaps as much a hindrance as it is a help. The wake-up call came a few months ago when I realised that almost every piece of code that I’m in contact with these days makes only limited use of object techniques.

Before I talk about all the ways that OOP is dead, let me be clear about the ways in which it is not. Because when I say OOP is mostly-dead, I mean it is mostly dead to me. In some areas OOP has delivered in spades. For example, nearly all modern OO programming languages come complete with large object libraries. This much is true. There’s objects in them-thar binaries for sure. But my code? Not so much.

Here are all the ways I manage NOT to write truly object-oriented software:

The Task is Too Small

Some systems are just too small to develop a complex object class hierarchy for. It would be a waste of time to do so. I estimate these ‘glue’ applications could take up as much as 5% of the total LOC of a large enterprise system. It doesn’t matter whether these glue applications are written in Java or Python or Bash, because really they’re just scripts. Scripts are their own thing. I would argue that the fewer scripts you have in your system the better you designed it, because you’re not just duct-taping over the seams.

The Spreadsheets Rule

I would also estimate that some appreciable percentage of your enterprise is run entirely from spreadsheets, whether you know about it or not. Be it phone book, accounts, trading system or stock inventory, these little beauties contain little or no object code and are spread far and wide. I’ve ranted about the pervasiveness of spreadsheets before; no need to go over it again. However, as far as I know, no one has implemented the idea of an OO spreadsheet. For that we can all be thankful.

World Wide Wasteland

Although the web does lend itself beautifully to model-view-controller, on the client side a lot of it is only markup and JavaScript. Neither markup nor JS has particularly strong OO characteristics, and both are hugely successful without them. Indeed many WWW apps are really CRUD applications.

CRUD

The create-read-update-delete application is everywhere, be it web or desktop. These apps are effectively database front-ends that organise the interactions between user and DB in a more user-centric way. For example, in .NET there’s not much need nor desire to map your data into real objects, because the data-binding layer is phenomenally good at making data-bound apps quickly. There’s no support to help you map from data to objects and on to Infragistics controls. Indeed, nor should there be.

Enterprise Business Objects

And this is the bit that makes me a bit sad inside. This is what OO was really meant for. I used to have arguments with business analysts about the right object model to use and whether a method should exist in a base class or a derived class. But now it doesn’t seem like anyone, including me, really cares. It’s just that somewhere along the line it became a little irrelevant. Don’t get me wrong, I work with objects all the time. But they’re not really the objects that were sold to me.

They’re just data-holders or, as our fathers used to say, data-records.

No Methods? No Object

The thing is that, to me at least, without methods on your objects there’s literally no object. If an object doesn’t respond to messages it’s just a data-record, transferring data to some other module that can operate on those records. Usually this other module takes the role of a controller. This, to me, sounds very similar to the programming that our fathers used to do before C++ and Java 1.1. So much so that it’s tempting to break open my book on JSP.
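To spell the distinction out with a toy sketch of my own (the names here are made up purely for illustration): the first style is a data-record plus an outside ‘controller’; the second is an object that responds to the message itself.

    from dataclasses import dataclass

    # Style 1: a data-record, with the behaviour in a separate controller function.
    @dataclass
    class AccountRecord:
        owner: str
        balance: float

    def apply_interest(record: AccountRecord, rate: float) -> None:
        record.balance *= (1.0 + rate)

    # Style 2: an object in the original sense, which responds to the message itself.
    class Account:
        def __init__(self, owner: str, balance: float) -> None:
            self.owner = owner
            self.balance = balance

        def apply_interest(self, rate: float) -> None:
            self.balance *= (1.0 + rate)

    record = AccountRecord("alice", 100.0)
    apply_interest(record, 0.05)   # behaviour lives outside the data

    account = Account("alice", 100.0)
    account.apply_interest(0.05)   # behaviour lives with the data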

I think there are two fundamental areas where objects failed to deliver.

Finance This

Firstly, whilst OO techniques are very flexible in some business domains they aren’t flexible in exactly the right ways for all business domains. I’m thinking particularly about my own area of expertise, which is financial trading systems.

The objects in trading systems tend to be difficult to compose into a meaningful hierarchy that is both expressive and not too abstract. I think ultimately this failing is because financial instruments are themselves models of physical events. This means that it’s straightforward enough to construct an object model of financial instruments. However, as soon as I start innovating with my financial instruments (i.e. constructing new instruments from old ones) the original object models tend to break down pretty fast.
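Here’s a toy sketch of my own that gestures at the problem (the classes and the pricing are invented and grossly simplified): a tidy Instrument hierarchy works until instruments start being built out of other instruments.

    class Instrument:
        def price(self) -> float:
            raise NotImplementedError

    class Bond(Instrument):
        def __init__(self, face_value: float, discount: float) -> None:
            self.face_value = face_value
            self.discount = discount

        def price(self) -> float:
            return self.face_value * self.discount

    class Option(Instrument):
        # An option is an Instrument *and* a model built from another Instrument.
        # Once you add options on baskets, swaptions and so on, the neat single
        # tree starts to creak.
        def __init__(self, underlying: Instrument, premium: float) -> None:
            self.underlying = underlying
            self.premium = premium

        def price(self) -> float:
            return self.premium  # placeholder, just to keep the sketch runnable

    print(Option(Bond(100.0, 0.97), 2.5).price())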

The Technology Stack Sandwich

The second reason is that there are too many different technologies involved in many enterprise-sized solution stacks to make consistent application of OO methods viable. What does that mean? Well, this is perhaps a post in itself, but essentially: as soon as you are using two or more programming languages that must share objects, you’re entering an object desert.
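A small sketch of what I mean, with names of my own invention: the moment an object has to cross a language boundary it usually gets serialised, and only the data makes the trip; the behaviour has to be reimplemented on the other side.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class Trade:
        instrument: str
        quantity: int
        price: float

        def notional(self) -> float:
            return self.quantity * self.price

    trade = Trade("XYZ Dec-24 future", 10, 101.5)

    # Hand the trade to a system written in another language, e.g. as JSON.
    wire = json.dumps(asdict(trade))   # only the data survives; notional() does not

    # The consumer gets a data-record and has to rewrite the behaviour itself.
    record = json.loads(wire)
    print(record["quantity"] * record["price"])   # 1015.0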

The End?

Oh no. Very definitely not. Objects are the only way to make sense of a deep and wide library. If the domain allows it, they are the only way to go.

The surprise is that objects just didn’t deliver in the way that I thought they might for me. Which is kind-of interesting, because it suggests that perhaps I, and a lot of people like me, might benefit from forgetting about objects sometimes and just Programming Like It’s 1995.

Categories
windows

Using Git from behind an NTLM proxy

For some reason the Git that ships with Cygwin (v1.6.6.1) won’t do the right thing with NTLM proxies. It seems that Git uses cURL underneath, and that cURL can handle NTLM authentication correctly if the right options are set.

However, the version of Git I have isn’t capable of passing this information through. In fact, some browsing of the issues that people have with this suggests that more than one part of Git isn’t able to work correctly with NTLM. So even if you’re able to get past the initial connection, you probably won’t be able to fetch any of the tree.

The solution, in full then, is to use ntlmaps to act as a proxy to the proxy. All you need to do is download the latest version of the app from: ntlmaps. Change the config file to include your authentication and proxy details, start it up, and then set Git’s proxy to be your new local one:

git config --global http.proxy http://localhost:5865
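For reference, the part of the ntlmaps config file that needs editing looks roughly like the sketch below. I’m quoting the option names from the default server.cfg as I remember them, so treat them (and the placeholder values) as an assumption and check them against the copy you download:

    # server.cfg (sketch; option names assumed, values are placeholders)
    [GENERAL]
    LISTEN_PORT:5865
    PARENT_PROXY:proxy.example.com
    PARENT_PROXY_PORT:8080

    [NTLM_AUTH]
    NT_DOMAIN:EXAMPLEDOMAIN
    USER:your_username
    PASSWORD:your_password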

I can confirm that it works just fine. Not only that, you can use it for any app that requires NTLM authentication but doesn’t provide full NTLM support itself.