Categories
programming

The Not So Super, Super API

Now and again you come across the ‘one size fits all’ of computer programming: the ‘Super API’. It derives its name from the fact that it tries to act as a superset of a bunch of other platform-specific APIs. One good example of this is JDBC/ODBC, but there are many others. JDBC/ODBC allow access to many different database servers because they’ve normalised the API so that you can write code that doesn’t care which database it is connected to.

Now and again, though, I find myself requiring features that are not exposed in a Super API. They’re not exposed because the feature is either too niche or not standard across implementations. Creating a Super API is in some ways like plastering: once you’ve created that nice even finish across an entire wall you might find that you’re missing a few light switches.

I was reminded of this the other day when one particular super SQL API, CLSQL, used another super API, UFFI. What they do and how they do it is largely irrelevant. The important point is that I’m only concerned with the operation of one combination of technologies (MySQL with SBCL:SB-ALIEN). I’m concerned because there is a particular SBCL feature that I want CLSQL to make use of when it calls SBCL:LOAD-SHARED-OBJECT via UFFI. Confused? Don’t worry, it’s not that important.

In the end I found a (most heinous) way to do what I want, but it occurred to me that there has to be a better way. I appreciate that there is no way the library writer could know what I want before I want it, nor could they anticipate future changes in the underlying APIs and provide for them. However, what if the library writer gave me the option of registering a per-API first-chance callback function? This function could then perform custom processing for my platform combination and yield control back to the library when it completed. That way I could inject any code I wanted at potentially any supported depth.

I don’t doubt that it would be difficult to produce an API like this. Certain classes of API (non-reentrant APIs being one that jumps to mind) could potentially break in new and unforeseen ways, and library maintainers would have to be creative about how they detect and cope with errors happening in and around user-injected code.

But if the injected code was simply a one-line replacement for the line that would otherwise have been called then it might work. Perhaps it might even be fun to try it.
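To make the idea concrete, here is a minimal sketch of what such a first-chance callback might look like in Common Lisp. None of these names exist in CLSQL, UFFI or SBCL; they are purely hypothetical and only illustrate the shape of the hook:

   ;; Hypothetical sketch of a per-API first-chance callback.
   (defvar *load-shared-object-hook* nil
     "When bound to a function the library calls it first, and only falls
      back to its own loader if the hook does not return :handled.")

   (defun default-load-shared-object (path)
     ;; Stand-in for the library's usual one-liner, e.g. a call to
     ;; SBCL:LOAD-SHARED-OBJECT on SBCL.
     (format t "~&Library loading ~A the usual way.~%" path))

   (defun library-load-shared-object (path)
     "How the library might expose the injection point."
     (if (and *load-shared-object-hook*
              (eq (funcall *load-shared-object-hook* path) :handled))
         path                                ; the hook took over
         (default-load-shared-object path))) ; yield control back to the library

   ;; The user registers custom behaviour for their platform combination:
   (setf *load-shared-object-hook*
         (lambda (path)
           (format t "~&Custom SBCL-specific loading of ~A.~%" path)
           :handled))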

Categories
programming

Look Ma! No F9!

More people should read The Design of Everyday Things. I couldn’t do the book justice here, but it explained something to me that I’d never seen written down anywhere else, and that I was sorely reminded of today.

It seems that everyday things have something called ‘affordances’ which should give us mental clues about how they work. The Design of Everyday Things attempts to teach us that poorly designed things lead us down the wrong mental paths and cause us to make mistakes when using those things. Mistakes that at best might get our fingers burnt or at worst cost lives.

So far so good. What do you do, then, when the mental paths that people are sent down get so worn that they can’t think straight any more? Of course, I’m talking about spreadsheets again. It seems spreadsheets are so ubiquitous and so well understood by so many people that some of them, when faced with technology, view it all as one enormous spreadsheet. Which is ironic, because we know by now that spreadsheets simply can’t be enormous: they don’t scale.

I really don’t think technologists should beat non-technologists up about their lack of technical expertise. That’s just counter-productive and plain wrong. But what do you do when you are faced with someone who has convinced themselves that it’s all one big spreadsheet? How do you explain to them that in my world there is no F9? Where’s that pencil …

Categories
article programming

The Spread-able System

Spreadsheets are everywhere. They are simple to create and are an immensely powerful tool. Unsurprisingly, then, a lot of areas of business rely on spreadsheets to function correctly. But spreadsheets are dangerous too. They suffer from well-known, fundamental flaws.

The problem is that spreadsheets are a special type of code, and I’m not talking about Excel ‘macros’, I’m talking about the formulas. As such they probably need to be treated the same way as other types of code, but their very nature makes this difficult. But I’m getting ahead of myself; let’s first look at some of what is good and bad about spreadsheets.

Pros

Spreadsheets are remarkable for their:

  • Utility – we can bend them into almost any shape we want because they give one way to represent almost any business process;
  • Portability – we can pick up our little gobbets of data and logic and relocate them to almost anywhere inside or outside the company, in file-systems, mail servers and web-sites;
  • Simplicity – you don’t have to explain a spreadsheet to anyone. They might have to be a proto-genius to figure out how it works but the working knowledge they would need to get started is pre-loaded in their heads and ready-to-run.

Cons

So they sound pretty useful, and I like to think that I’m a pragmatic guy, so why do I hate them so much? Many have noted the shortcomings of spreadsheets. The page on spreadsheets at Wikipedia spells it out clearly enough, so I’ll paraphrase:

  1. Productivity – Working with spreadsheets requires a lot of “sheet-shuffling” to reach the required goal. The bigger the sheet, the more time is spent copying, cutting and pasting cells around.
  2. Reliability – Although what constitutes an error in a spreadsheet is subjective, the paper “A Critical Review of the Literature on Spreadsheet Errors” (pdf) reveals a series of studies (some more recent than others) that have shown that approximately 5% of cells contain errors.
  3. Collaboration – Sharing a spreadsheet is difficult. Having two independent people working on the same sheet and merging their results is as far as I know impossible.

The first two items don’t bother me overly. Yes, they’re problems, but then the alternatives aren’t that great either. Consider what you would do if you didn’t have a spreadsheet to fulfill the task. You’d either do it with a bit of paper and a calculator (i.e. simulate a spreadsheet) or get a programmer to do the task for you. Either way, the productivity loss/gain and the number of errors aren’t going to be significantly different from using a spreadsheet. Don’t get me wrong, I love my fellow programmer, but we make a LOT of mistakes too. The difference perhaps is that bespoke systems usually end up getting audited (and hence fixed) and spreadsheets often don’t. Although this point is probably moot.

Good + Bad = Too Bad

My real beef is with what happens when you combine the ‘pro’ of high portability with the ‘con’ of low collaborative power. You have no way of knowing which version of the spreadsheet you have is the “true” one, and which version is duff. Every copy, whether made inadvertently by forwarding a sheet by email to someone else or explicitly by taking a ‘backup’, is a 12-foot-tall, baby-eating, business-crushing monster waiting to rip you and everyone you love apart.

Hug the Monster, Then Run

The thing is, we kind of have to embrace the baby-business-beating monster because it’s about all we’ve got. There are some tasks, as a programmer, that I’m really happy that you as the non-programmer don’t bother me with and solve yourself in sheets. Want to set up an intra-company phone book as a spreadsheet so you don’t have to bother with all that “Access” voodoo? Be my guest, but I’m watching you. Want to set up a spreadsheet to run your fantasy football so you don’t have to add two numbers together? Go right ahead, I’ll even drive you to the game so you don’t miss the turn. Want to set up a spreadsheet to calculate payments and do a mail-merge with the results … STOP. RIGHT. NOW.

The truth is though that you might not know that you’re creating the mother-of-all spreadsheets when you start. I might not know it either but there will probably come a time when a line is crossed and then I will want to know what you’ve been doing and who you’ve been doing it with. I’m just like that.

Unless you are a small company (and hence don’t have a lot of choice) you have to be very afraid of trusting anything that might lose you money to a spreadsheet. You need to be very aware of the risks and the potential costs you are letting yourself in for. Here in Europe there is even a special interest group dedicated to highlighting the risks of spreadsheets. Those guys must throw wild parties …

The Missing Links

In my opinion there is something missing, something that can fill the gap between spreadsheet and system.

I think we need something that can:

  1. Track spreadsheet changes – Not knowing which spreadsheet is “true” and which lies, and not being able to merge sheets, is a problem. Being able to identify revisions of a sheet that happened after yours was ‘branched’ would go a long way (a toy sketch of what that might mean follows this list). Perhaps someone has solved it already; if they have, that would be great.
  2. Track spreadsheets themselves – Having some more information about what sort of corporate-data was being accessed, who was using it and how frequently they ran it might alert us to potential spreadsheet monsters being born.
  3. Narrow the gap – Making spreadsheets more like traditional software systems, without significantly castrating the usefulness of the spreadsheet, would be great too. This is a little like asking for the moon on a stick though.
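
As a toy illustration of the first item, here is a sketch in Common Lisp of the simplest possible change-tracking primitive: a cell-by-cell diff between two versions of a sheet, each represented as a 2D array of values. Real spreadsheet formats, formulas and merging are of course far messier:

   ;; A toy cell-by-cell diff between two versions of a sheet.
   ;; Each sheet is a 2D array of cell values. Purely illustrative.
   (defun diff-sheets (old new)
     "Return a list of (row col old-value new-value) for cells that differ."
     (let ((diffs '()))
       (dotimes (row (max (array-dimension old 0) (array-dimension new 0)))
         (dotimes (col (max (array-dimension old 1) (array-dimension new 1)))
           (let ((a (when (array-in-bounds-p old row col) (aref old row col)))
                 (b (when (array-in-bounds-p new row col) (aref new row col))))
             (unless (equal a b)
               (push (list row col a b) diffs)))))
       (nreverse diffs)))

   ;; Example:
   ;; (diff-sheets #2A((1 2) (3 4)) #2A((1 2) (3 5)))  =>  ((1 1 4 5))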

Perhaps I’ll make something like this one day. I have to admit it’s not a terribly exciting project but it has some potential I think. Perhaps I could spice it up by throwing a party and invite the guys from the “European Spreadsheet Risks Interest Group”. Now we’re talking. How will I budget for the 7-up, party hats and streamers? In a spreadsheet of course.

Categories
programming

The Ever Diminishing Deliverable

When you’re making software in-house you can largely ship what you want, within reason. Conversely, when you’re making software for customers or clients I’m guessing you owe it to your customer, and perhaps your bottom line, that what you produce is of the very best quality. If you don’t, your customer goes somewhere else. However, the additional effort required to produce that quality software for customers can be substantial. Since you want to keep your customer, and attract new ones, you must expend the effort, at a potentially large personal cost.

When you’re making software in-house there is a well-known danger of gilding the lily. Indeed, in-house the law of diminishing returns comes into force if you spend too much time making your in-house project of the very highest quality. The law of diminishing returns can best be described as:

… in a production system with fixed and variable inputs (say factory size and labor), beyond some point, each additional unit of variable input yields less and less output. Conversely, producing one more unit of output costs more and more in variable inputs.

Since I don’t work for a software house, if I can release my code in-house when it is only partially complete, then I should be able to get a lot more stuff done for less cost.

So far so good. A problem arises, though, because I have observed that different programmers place different levels of importance on software quality. This is probably as you’d expect; the one thing we have in common is that we’re all individuals.

The sad truth is that, by someone’s own personal standards, the bare minimum of what is required to get a job done is usually all that gets done. This probably goes some way to explaining the appalling state of some of the in-house software I’ve seen, and written myself. It’s understandable because often, once the main problem has been solved, the other issues like usability, maintainability, extensibility and support can be overlooked without any immediately dire consequences.

But here’s the sting: once you release something you usually have to support it too. That’s just the way in-house software works (sucks?), I guess. If you made a poor job of it then you’ll probably pay for it many, many times over in support queries. In the end it seems to me that unless you quit (or are fired!) the diminishing deliverable cost turns into a potentially huge support cost. Especially if you end up layering new solutions on to an originally broken solution.

All in all, you might as well have tried to make something a little more durable and complete at the outset. Sure, you can refactor your mistakes later, but even refactoring costs a lot more to do later than it does to get it ‘right’ at the beginning.

It seems to me that it is essentially a problem of planning. In-house software projects, big and small, usually aren’t planned properly. As a result, the secondary factors that would improve the overall quality are not included in any estimates, and those in-house projects end up late and buggy.

This problem is not getting any better. Years of software development in an in-house setting have shown me that, where in-house plans and design meetings exist, issues of usability, maintainability and support are very minor concerns, if they are concerns at all. Perhaps that should change. Just a little …

Categories
lisp

Choosing a Common Lisp Unit Testing Framework

I have recently become dissatisfied with the unit testing framework I was using: LIFT. After reading Phil Gold’s fairly comprehensive Common Lisp Testing Frameworks I decided to switch to Stefil.

So what’s so wrong with LIFT? Whilst I don’t want to detract from metabang’s efforts, LIFT was annoying me enough that I was considering writing my own unit-testing framework! No one wants YAUTF (yet another unit testing framework), especially mine, so I went shopping. I should also say that I’m overjoyed with other metabang creations like bind and log5, but LIFT doesn’t seem to elevate me much any more (groan).

In my experience (your mileage might vary) LIFT seems slow for what it does. Yes, my machine is a little old and beat-up, but still, the unit-testing machinery should not be a significant burden on the unit-testing process itself! To illustrate this point let’s look at a highly subjective example. Suppose I want to test the plain and simple truth, but I want to do it 10,000 times – I do this because I never take “yes” for an answer. Here’s a REPL snippet doing just that in LIFT:

CL-USER> (lift:deftestsuite test-lift () ()
	     (:tests
	       (test-true
		(lift:ensure t))))

Start: TEST-LIFT#<Results for TEST-LIFT [1 Successful test]>
CL-USER> (time (loop for i from 1 to 10000 do (lift:run-tests :suite 'test-lift)))

Start: TEST-LIFT
Start: TEST-LIFT
<snip 9,997 lines>
Start: TEST-LIFT
Evaluation took:
  4.029 seconds of real time
  2.100131 seconds of user run time
  0.076005 seconds of system run time
  [Run times include 0.06 seconds GC run time.]
  0 calls to %EVAL
  0 page faults and
  60,780,256 bytes consed.

And then let’s do the same for Stefil

CL-USER> (stefil:defsuite* test-stefil)
#<test TEST-STEFIL>
CL-USER> (stefil:deftest test-true ()
	   (stefil:is t))
.
<snip 9,997 lines>
.
Evaluation took:
  1.238 seconds of real time
  0.932059 seconds of user run time
  0.116008 seconds of system run time
  [Run times include 0.357 seconds GC run time.]
  0 calls to %EVAL
  0 page faults and
  88,813,344 bytes consed.

Part of the slowness might be that LIFT prints “Start: TEST-LIFT” 10,000 times, but I didn’t dig any deeper; LIFT seems slow even when just running a handful of suites. Apart from the slowness, the output produced by LIFT isn’t particularly useful. It’s better than nothing, but I can’t really be sure of the testing progress within a suite. Ideally I would just like to see some incremental indication of progress; a single “.” per test and a newline after each suite, like Stefil produces, is much cleaner.

Secondly, and this is the kicker, I find it difficult with LIFT to find out what went wrong and where. Which is surely the whole point of unit testing: we expect stuff to fail, and hunting down the causes of failure in LIFT is a bit tiresome via the inspector. Conversely, Stefil handles test failures by dropping you straight into the debugger when an assertion fails. Which is perfect, because you can look at the code that caused the error, dig about in the source, fix it and continue the test. This is a natural way to go about developing test-driven software. It also leverages the REPL, making it a far more interactive experience. The only snag is that this sort of behaviour is not always what you want if you run automated test and build environments. Stefil provides a special variable *debug-on-assertion-failure* which registers the failure but doesn’t drop you into the debugger. LIFT does have a testing parameter break-on-error?, but this only catches errors; it probably needs a break-on-assertion? as well.
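
For example, an automated build might bind that variable so a failing assertion is recorded rather than landing you in the debugger. A minimal sketch, assuming the suite defined above and that the variable is exported from the STEFIL package:

   ;; Record assertion failures instead of entering the debugger,
   ;; e.g. from an automated test & build script.
   (let ((stefil:*debug-on-assertion-failure* nil))
     (test-stefil))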

Finally, Stefil just seems more concise and natural. Since what we’re doing here is creating functions that test other functions, surely we should be able to call tests like functions. In my view classes are not the primary units of a test, functions are. And so it is in Stefil, because every suite and test is a callable function. In LIFT you have to tell the function lift:run-test to find you a test/suite class with a specific name and then run it.
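
To make that concrete, with the suite and test defined in the earlier snippet you can simply call them (output elided):

   CL-USER> (test-true)     ; run the single test, an ordinary function call
   CL-USER> (test-stefil)   ; run the whole suite the same way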

I didn’t want this blog entry to be a ‘hatchet job’ on LIFT. I don’t want that because it’s not constructive, and there’s already way too much ranting on the internet. However, in the final analysis, LIFT could be made a lot better than it is. Since the effort in switching wasn’t really that great I decided to switch to Stefil rather than persevere and try to improve LIFT directly.


Phil Gold actually makes two recommendations in Common Lisp Testing Frameworks: Stefil and fiveam. I would have tried fiveam, which was Phil’s framework of choice, but it wouldn’t install via asdf. Whilst not being asdf-installable isn’t a huge barrier to entry, it suggests something (perhaps wrongly) about the quality of the solution. So I skipped it.

Categories
database rant

BizTalk Server 2006 (a.k.a The Beast)

I don’t really like to repeat myself, but I’ve found another example of code that isn’t. It has a name, and that name is evil: BizTalk. I’ve wanted to write about BizTalk for a while but I’ve been holding back, because it’s a large and complex product that I wanted to do proper justice to.

Ok, so let’s get the nasty stuff out of the way first. BizTalk is partly the reason that I’ve written very few blog posts recently. It is also the cause of an annoying pain at the base of my skull that didn’t go away until I stopped ‘doing’ BizTalk. There are many reasons why, and I could go on and on about:

  • how it neatly hides important detail that you need in order to troubleshoot problems;
  • cryptic error messages;
  • property boxes in property boxes in property boxes;
  • the lack of an effective development environment;
  • high license cost;
  • … yada yada yada …;

Ironically, at the same time I was using BizTalk I was also reading The Design of Everyday Things. Whilst not strictly a book about technology, the ubiquity of computers these days makes it a compelling read for programmers like me. Anyway, the point is that BizTalk very neatly violates almost everything that Donald Norman holds dear.

If it’s so bad then why are you using it?

Good question. You read my mind. You see, it solves a couple of problems for me. Firstly, a very expensive software product requires it out of the box, so I really have no choice. However, and here’s the interesting part, it does two “big picture” things very well. Indeed, if you ignore the nitty-gritty pain of what you had to do to get it to work, it solves a couple of issues of system integration reasonably well.

  1. Message Handling – the BizTalk server wants to receive a message and do something with it. It is able to take messages from a variety of sources: database, file drop, FTP, HTTP, SOAP, email, etc. Once a message is received it can be routed to an ‘Orchestration’, which takes the message and applies some logic to it. This can involve changing the shape of the message and/or sending it on to a variety of other systems.
  2. Business Focus – the fact that BizTalk server can take messages from a variety of sources means that it becomes the natural place to put things that ‘process external data’. This doesn’t sound like a big deal but when you’ve worked in companies that have attempted this solution without something like BizTalk you’ll know what a mess this becomes. Many programmers, many ideas, little consistency, integration headaches.

So it’s good right?

Well, yes and no. I can’t pretend that I’ve even scratched the surface of what BizTalk does; it’s a beast. However, for what I need to do I’ve created about half a dozen orchestrations and a few mappings, and every one was as painful as the last. It just doesn’t seem to get any easier.

You see, the bits of BizTalk I like are relatively small. The bits I don’t like are the bits that try to take programming control away from me by providing some half-baked UI that is ultimately going to produce a piece of executable code. And that’s my point: if it’s going to produce executable code, why not just give me some powerful libraries (which must exist anyway) and let me write it? Indeed, I’ve had a couple of people tell me that they don’t really like BizTalk either, and when faced with a ‘BizTalk challenge’ their solution is to write a custom pipeline to handle it (for those not in the know, this solution gets the job done but it’s like buying a tractor to get your groceries with).

The part of BizTalk that I would pay money for is probably not worth that much. However, I’m smart enough (just) to know that if I wanted to roll my own containing just the features I wanted, I would still be doing it in 2009. So for now, I guess, The Beast and I will get along fine. If there’s ever a viable alternative The Beast and I will be parting ways.

Categories
lisp

Lisp. It Doesn’t Get In Your Way. Much ™

Recently I have been using Common Lisp’s eval function a bit. Since it’s eval that puts the E in REPL, it’s fair to say that it is a fairly fundamental part of Lisp. However, no code that I have seen appears to use it directly. I think I know why: making (eval …) always work in the way you’d expect isn’t that intuitive.

Paul Graham in On Lisp, has this to say about using eval in your own code:

Generally it is not a good idea to call eval at runtime, for two reasons:

  1. It’s inefficient: eval is handed a raw list, and either has to compile it on the spot, or evaluate it in an interpreter. Either way is slower than compiling the code beforehand, and just calling it.
  2. It’s less powerful, because the expression is evaluated with no lexical context. Among other things, this means that you can’t refer to ordinary variables outside the expression being evaluated.

And so when I discovered a need in my project to persist and reload closures I decided that my needs would not violate either of Paul’s points: firstly, because I don’t know in advance what the code I’m going to persist will be, and secondly because no lexical context is needed to create my closures. Therefore, I would store the closures as strings and then use read and eval to restore them. This worked fine so I put the code into the package and declared my work done.

It turned out that once the code was in a package it didn’t really work as I’d intended. When I tried to run it, unknown-symbol conditions were raised as I tried to restore the closures. Qualifying all the symbols with the correct package name worked, but it made my shiny new DSL all messy by requiring me to prefix all my symbols. It turns out that eval doesn’t work this way by design. The reason is this statement on the HyperSpec page for eval:


Description:

Evaluates form in the current dynamic environment and the null lexical environment.

I was expecting that, since my eval was inside a package, it would be able to see function symbols in that package. Not so: eval works in the dynamic environment, which implies that the current package is a special variable and hence part of the dynamic environment.

This means my code could only ever work when the current package is the library package. In any other package the code fails, because eval is checking the dynamic environment to determine which symbols are visible without package qualifiers. Indeed, it seems that in SBCL, to make my code work in the way I expect, I need to wrap it in the following:

   (let ((*package* (find-package "MY-PACKAGE")))
      (eval ...))

And this works just fine. The most pleasing thing about this outcome is that it illuminated a point that I’d heard before but never been able to substantiate: Lisp, It Doesn’t Get In Your Way. eval has to work the way it does, otherwise Common Lisp would probably not work properly. However, because the package system is a part of the language that is accessible to the programmer, it seems I can adapt any part of that system to suit my purpose.

You’d be right in thinking that too much of this sort of thing is bad for maintainability, but this single-line hack allows me to safely persist executable code at run time. Since there are few languages that have closures to begin with, making a minor hack so that they can easily be persisted too (with the help of a macro) seems a small price to pay.
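
Putting the pieces together, here is a minimal sketch of what the persist/restore round trip might look like. MY-PACKAGE and both helper names are hypothetical; the real code is wrapped up in a macro:

   ;; Persist a closure-producing form as a string, then restore it later.
   ;; MY-PACKAGE and these helper names are hypothetical.
   (defun persist-closure-form (form)
     "Serialise FORM (a list that evaluates to a closure) to a string."
     (with-standard-io-syntax
       (let ((*package* (find-package "MY-PACKAGE")))
         (prin1-to-string form))))

   (defun restore-closure (string)
     "Read STRING back and evaluate it with MY-PACKAGE current."
     (let ((*package* (find-package "MY-PACKAGE")))
       (eval (read-from-string string))))

   ;; Example:
   ;; (funcall (restore-closure (persist-closure-form '(lambda (x) (* x 2)))) 21)  =>  42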

“Lisp. It Doesn’t Get In Your Way. Much.” – I like the phrase so much I think I’m going to trade-mark it.

Categories
programming

Evolved Programmers Wanted. No Returns.

Here’s an interesting quote from Jeff Atwood (emphasis is mine):

“I use implicit variable typing whenever and wherever it makes my code more concise. Anything that removes redundancy from our code should be aggressively pursued — up to and including switching languages.”

It sounded like something I’d heard before. Here’s another quote from Paul Graham:

“The kind of dirtiness Arc seeks to avoid is verbose, repetitive source code. The way you avoid that is not by forbidding programmers to write it, but by making it easy to write code that’s compact.”

I’m not in any way claiming that Jeff is lacking originality; I’m suggesting that when two influential people with a large audience express the same thought … well, something ought to happen, right? Perhaps programming languages are going to become more expressive and concise. This certainly seems to be one of Paul’s aims at least.

Programming is changing, that’s for sure. It seems like only yesterday that Python was new and I was bending my head around it, enjoying being released from static types and embracing dynamic languages. Indeed, if you watch this video (and I highly recommend you do) you can get a feel for how far Python itself has come in about 5 minutes.

The point, though, is that our industry is still evolving. Natural selection finds the best parts of all those programming languages and software products and begets new ones from them. 10 years ago I believed that the future had arrived and that we would all use C++ where performance mattered and Java everywhere else. It seems that I was hopelessly naive to believe that evolution had ended. It had scarcely even begun.

I’m also eternally grateful to the innovators and early adopters that I don’t have to write much of either C++ or Java anymore. But that’s another story.

Categories
project management

The Silver Lining Of A Lead Balloon

The Heathrow Terminal 5 story is not all bad. Yes, it was a bit of a shambles; yes, senior management was fired. But today I found some joy in BA’s misery.

A Metaphor

Now don’t get me wrong, this isn’t just me deriving pleasure from others’ misfortune. Although, admittedly, as a Brit I am innately very good at that. So good, in fact, that it’s a constant surprise that the Germans managed to invent a word for what is typically a British malaise: schadenfreude.

No, the silver lining of BA’s lead balloon is that T5 has become a common intellectual currency. Its failure has so clearly underlined the pitfalls of not doing enough testing that I heard T5 being used as an analogy in a recent implementation meeting. A.N.Other said:

“I would not be happy committing to that deadline if we had to cut testing. The last thing I want is for this to become another T5 …”

Nothing gives a better fuzzy feeling than completing a long testing phase. However, if testing is getting squeezed out then you have to get management to agree to extending the deadline before you’ve actually reached that deadline. Indeed, cutting testing is to invite what Steve McConnell lists as “Wishful Thinking”, the 13th classic mistake of software project management:

Wishful thinking isn’t just optimism. It’s closing your eyes and hoping something works when you have no reasonable basis for thinking it will. … It undermines meaningful planning and may be at the root of more software problems than all other causes combined.

Amen.

Categories
article

Educate A Business Person Today!

One recurring theme I have noticed with users of systems I’ve worked on is that they aren’t nearly as stupid as I think. They often try to make sense of the system thrust in front of them. This is mostly out of necessity, since the system stands between them and getting their job done, so they need to make sense of it. Having written some truly awful systems myself, I wish all of them the very best of luck.

Another, seemingly unrelated, observation is that business people are truly astounded by, and often suspicious of, how long it takes to provide a solution to a particular problem. Writing software is simply hard, so that partly explains it. Sometimes, however, some of the solutions that are asked for can come quickly. This might happen if the system was expressly designed to handle new cases of the particular solution being requested, or if producing the solution requires little more than a configuration or script change. Or it might just be dumb luck that the release cycle has worked in their favour.

Disconnect

This disconnect between implementation times, with no reason apparent to the business user, can cause problems. Sometimes it feeds the suspicion that they are being ‘had’ in some elaborate con:

“If change ‘x’ takes a week then surely change ‘y’ should take half as long. How could it not? It only takes half as many words to say out loud. Those guys in IT need firing.”

When that business person is a manager it can lead to awkward situations for developers:

“Change ‘x’ took a week, change ‘y’ will take half as long. How can it not? It’s the only thing that stands between us and product success. If it doesn’t I’ll fire those IT guys”.

The simple truth is that unless a business person also has the developer’s view of the system they will not be able to make sound judgements about it. Hell, I have a developer’s view and not even my judgements are particularly sound.

However, humans are pretty adaptable creatures, and rather than just telling them the answer we should explain it in a way that they can understand. If they want to listen, then educating them has a few potential benefits. For one thing it might make you look like you care about your users, rather than being that IT jerk who steals everyone’s food from the refrigerator. And if you get your point across without sounding (to them) like a lunatic then you might improve their mental model of how the system actually works.

Breed

There is a breed of programmer out there in the world today that has either evolved or engineered themselves into a situation where they are the only one who ‘knows’. Yes, you know who you are. Sometimes they do this as a survival instinct to make themselves indispensable, sometimes because they’re not great communicators or educators. These are the people that need to be fired, because their value is way less than they think it is. They actually harm the productivity of the company by being obstructive or uncommunicative, plus they’re a real pain-in-the-ass to work with.

Yes, you will need to keep a watchful eye on your newly educated fledglings. Especially the managers, but there’s nothing new about that.