Holding a problem in your head

The linked article (via Daring Fireball) describes the way many programmers work – by loading the problem into their heads – and techniques designed to manage and support working that way. Paul Graham draws a comparison with solving mathematics problems, which is something I can (and obviously have decided to) talk about, given the physics I studied and taught. I also have things to say about his specific recommendations.

In my (limited) experience, both the physicist in me and the programmer in me like to have a scratch model of the problem available to refer to. Constructing such a model acts, in both cases, as an aide-mémoire to bootstrap the problem domain into my head, and as such should be as quick, and as nasty, as possible. Agile Design has a concept known as "just barely good enough", and it definitely applies at this stage. With this model in place I now have a structured layout in my head, which will aid the implementation, but I also have it scrawled out somewhere I can refer to if, while working on one component (writing one class, or solving one integral), I forget a detail about another.

Eventually it might be necessary to produce a ‘posh’ layout of the domain model, but this is not yet the time. In maths as in computing, the solution to the problem actually contains the structure you came up with, so if someone wants to see just the structure (of which more in a later paragraph) it can be extracted easily. That is why, in both cases, I (and most of the people I’ve worked with in each field) prefer to use a whiteboard rather than a software tool for this bootstrap phase. It’s impossible – repeat, impossible – to braindump as quickly onto a keyboard, mouse or even one of those funky tablet stylus things as it is with an instantly-erasable pen on a whiteboard. Actually, in the maths or physics realm, there’s nothing really suitable anyway. Tools like Maple or Mathematica are designed to let you get the solution to a problem, and only really fit into the workflow once you’ve already defined the problem – there’s no adequate way to have large chunks of "magic happens here, to be decided at a later date". In the software world, CASE tools make you spend so long thinking about use cases, CRC definitions or whatever that you end up delving into nitty-gritty details while doing the design; great for software architects, bad for the problem bootstrap process. Even something like OmniGraffle can be overkill: it’s very quick, but I generally only switch to it if I think my boardwriting has become illegible.

To give an example, I once ‘designed’ a tool I needed at Oxford Uni with boxes-and-clouds-and-lines on my whiteboard, then took a photo of the whiteboard and set it as the desktop image on my computer. If I got lost, I was only one keystroke away from hiding Xcode and being able to see my scrawls. The tool in question was a WebObjects app, but I didn’t even open EOModeler until after I’d taken the photo.


Incidentally, I expect that the widespread use of this technique contributes to "mythical man-month" problems in larger software projects. A single person can get to work really quickly with a mental bootstrap, but then any bad decisions made in planning the approach to the solution are unlikely to be adequately questioned during implementation. A small team is good because with even one other person present, I discuss things; even if the other person isn’t giving feedback (because I’m too busy mouthing off, often) I find myself thinking "just why am I trying to justify this heap of crap?" and choosing another approach. Add a few more people, and the domain model actually does need to be well designed (although hopefully they’re then all given different tasks to work on, and those sub-problems can be mentally bootstrapped). This is where I disagree with Paul – in recommendation 6, he says that the smaller the team the better, and that a team of one is best. I think a team of one is less likely to have internal conflicts of the constructive kind, or even to think past the first solution it reaches consensus on (which is of course the first solution any member thinks of). I believe that two is the workable minimum team size, and that larger teams should really be working as permutations (not combinations) of teams of two.

Paul’s suggestion number 4, to rewrite often, is almost directly from the gospel according to XP, except that in the XP world the recommendation is to identify things which can be refactored as early as possible, and then refactor them. Rewriting for the hell of it is not good from the pointy-haired perspective, because it means time spent with no observable value – unless the rewrite is because the original was buggy, and the rewritten version is somehow better. It’s bad for the coder because it takes focus away from solving the problem and onto trying to mentally map the entire project (or at least all of it that depends on the rewritten code); once there’s already an implementation in place it’s much harder to mentally bootstrap the problem, because the subconscious automatically pulls in all those things about APIs and design patterns that I was thinking about while writing the initial solution. It’s also harder to separate the solution from the problem once the solution already exists.
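To make the distinction concrete, here is a minimal sketch (in C, with hypothetical function names) of the kind of small, early refactoring XP favours: extracting a duplicated check into one helper while leaving the observable behaviour alone, rather than throwing the code away and starting again.

    /*
     * Before the refactoring, the same bounds check was written out
     * longhand in both clamp_width() and clamp_height().
     */

    /* After an "extract function" refactoring: the shared check lives
     * in one helper, and the callers become one-liners. */
    static int clamp(int value, int max)
    {
        if (value < 0)   return 0;
        if (value > max) return max;
        return value;
    }

    int clamp_width(int w)  { return clamp(w, 1024); }
    int clamp_height(int h) { return clamp(h, 768); }

The change is local and mechanical: the problem stays in my head, and the code embodying the solution barely moves, which is rather the point.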

The final sentence of the above paragraph leads nicely into discussion of suggestion 5, writing re-readable code. I’m a big fan of comment documentation like headerdoc or doxygen, not only because it imposes readability on the code (albeit out-of-band readability), but also because if the problem-in-head approach works as well as I think, then it’s going to be necessary to work backwards from the solution to the pointy-haired bits in the middle required by external observers, like the class relationship diagrams and the interface specifications. That’s actually true in the maths/physics sphere too – very often in an exam I would go from the problem to the solution, then go back and show my working.
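For anyone who hasn’t met these tools, a Doxygen-style comment block looks something like the sketch below (the function and its parameters are hypothetical, chosen to suit the maths theme); the same text that keeps the source readable for a colleague is what the tool extracts into the interface documentation.

    #include <stddef.h>

    /**
     * @brief  Integrate a sampled signal using the trapezium rule.
     *
     * @param samples  Array of sample values, assumed evenly spaced.
     * @param count    Number of entries in samples; must be at least 2.
     * @param dx       Spacing between consecutive samples.
     *
     * @return The approximate integral, or 0.0 if count < 2.
     */
    double integrate_samples(const double *samples, size_t count, double dx)
    {
        if (count < 2)
            return 0.0;

        double sum = 0.0;
        for (size_t i = 1; i < count; i++)
            sum += 0.5 * (samples[i - 1] + samples[i]) * dx;
        return sum;
    }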
