
You think your code don’t smell?

So, code reviews are great. Get the benefit of some asshole telling you that your comments should be C-style (/*) and not C++-style (//), and reminding you that the member name ‘mSuckThis’ is not suitable, ever. No, really, code reviews are great. It’s just that a lot of the time they just don’t work.

The first time I encountered code review was when my boss at the time had just read some book on how to manage programmers and was keen to inflict it on all his employees. His code-review process was to take all my work, print it out, and go through it line by line. Master-and-student style.

This type of code review, in the way that he implemented it, was meaningless. It concentrated on an important but largely automatable aspect of code review: adherence to coding guidelines.

As I see it, there are three types of defect that code review is trying to identify:

  1. Adherence to coding guidelines (or lack of it) and inter-package dependencies.
  2. Identification of localised errors: “that loop is infinite”, “that algorithm should be log(N) and not N^2”, or “that module is way too big”.
  3. Identification of non-local errors, where ‘local’ means local to the code review: for instance, the impact of adding a throw to a widely used method and how that affects all the dependent code paths (see the sketch after this list).
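
To make the distinction between type 2 and type 3 concrete, here is a minimal C++ sketch. Everything in it (Duplicates, Config, mEntries) is invented for illustration: the first function contains a local defect a reviewer can spot from the excerpt alone, while the second shows the kind of change whose real damage only shows up in callers that aren’t part of the review.

```cpp
#include <algorithm>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

// Type 2 (local): the defect is visible within the excerpt itself.
// Searching 'seen' with std::find on every iteration makes this O(N^2);
// a reviewer can spot it without knowing anything about the callers.
std::vector<std::string> Duplicates(const std::vector<std::string>& names)
{
    std::vector<std::string> seen;
    std::vector<std::string> dups;
    for (const std::string& name : names) {
        if (std::find(seen.begin(), seen.end(), name) != seen.end()) {
            dups.push_back(name);   // O(N) lookup inside an O(N) loop
        } else {
            seen.push_back(name);
        }
    }
    return dups;                    // sorting first, or using a set, would be O(N log N)
}

// Type 3 (non-local): the defect is only visible in code that is *not*
// under review. Suppose Get used to return an empty string for a missing
// key and now throws instead; every existing caller that never expected
// an exception is affected, and none of them appear in this excerpt.
class Config
{
public:
    std::string Get(const std::string& key) const
    {
        const auto it = std::find_if(mEntries.begin(), mEntries.end(),
            [&key](const std::pair<std::string, std::string>& entry) {
                return entry.first == key;
            });
        if (it == mEntries.end()) {
            throw std::runtime_error("missing key: " + key);  // the new behaviour
        }
        return it->second;
    }

private:
    std::vector<std::pair<std::string, std::string>> mEntries;
};
```

Looking at Config::Get on its own, a reviewer sees a perfectly reasonable method; only someone who already knows the dependent code paths can say whether that throw is safe, which is exactly the problem.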

I question anyone’s ability to fully understand the dynamic nature of any reasonably sized piece of software just by looking at a small excerpt. Every time you review that code you have to ‘load’ all the supporting context into your head to be able to identify whether the code is totally awesome or tragically bogus. In my experience, local defects (type 2) were sadly rarely identified by code review, and non-local defects (type 3) almost never.

I’m passionate about improving code quality, but I don’t see any realistic way to achieve what I want here. To identify non-local errors you really need your code reviewer to sit next to you during development, or to be as deeply involved in the code as you are; reliably finding local errors probably needs a similar level of involvement. However, your reviewer is rarely that involved. Some judicious use of pair programming might be the answer, but that comes with its own problems.

It seems that to get the best out of code reviews you have to be very careful about how you implement them. Sure, let’s automate what we can automate and pair program on the tricky items, but the real code review needs to be handled extremely skilfully to get the best bang-for-your-buck-chuck.

2 replies on “You think your code don’t smell?”

A poor code review is as helpful as a poor test. I don’t think we should throw away the idea of reviewing based on bad reviews, and I don’t think we should quit testing because a test didn’t find a bug.

Testing the program (or running it) finds a certain set of problems. Looking at the code finds another set. Done well, the two complement each other.

I think your manager had the right idea, but had some implementation issues. (The greatest problem being, are they giving you an opinion that you can decline or an order that you must follow?)

My experience with code reviews is:
* I find a bunch of things just preparing for the review. It forces me to look at before and after.
* Someone reads the comments cold and tries to make sense of them. That’s what’s going to happen in 6 months.
* Devs can’t hide the “how”. You might be really impressed that their bug tracker is working in a week, until you find out that it only runs on a machine named “potrazeebee” under user account “jsmith”. You can’t hide that from a review.

Hi Dion,

>> I think your manager had the right idea, but had some implementation issues

He had more than just implementation issues 😉

I agree with you about the complementary nature of reviews and testing. Indeed, with the right tests and the right review, the process should be pretty good. Perhaps the missing piece (to me at least) is that a review should, where appropriate, include new or updated tests.
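
To sketch what I mean (purely hypothetically; the Config class and its throwing Get method here are invented stand-ins for the kind of change discussed in the article above), this is the sort of test a reviewer might reasonably ask to see alongside such a change:

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

// Minimal stand-in for the hypothetical Config class from the article,
// just enough to make this test compile on its own.
class Config
{
public:
    std::string Get(const std::string& key) const
    {
        const auto it = mEntries.find(key);
        if (it == mEntries.end()) {
            throw std::runtime_error("missing key: " + key);
        }
        return it->second;
    }

private:
    std::map<std::string, std::string> mEntries;
};

// The kind of test a reviewer might ask for alongside the change that
// introduced the throw: it pins the new behaviour down so it is at least
// written down and exercised, not left for the callers to discover.
void TestGetThrowsOnMissingKey()
{
    Config config;
    bool threw = false;
    try {
        config.Get("no-such-key");
    } catch (const std::runtime_error&) {
        threw = true;
    }
    assert(threw && "Get() must throw for a missing key");
}

int main()
{
    TestGetThrowsOnMissingKey();
    return 0;
}
```

A test like this doesn’t catch the non-local fallout by itself, but it makes the reviewed change a lot harder to misunderstand six months later.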
