Essence and accident in language model-assisted coding

In 1986, Fred Brooks posited that there was “no silver bullet” in software engineering—no tool or process that would yield an order-of-magnitude improvement in productivity. He based this assertion on the division of complexity into that which is essential to the problem being solved, and that which is an accident of the way in which we solve the problem.

In fact, he considered two types of artificial intelligence. AI-1 is “the use of computers to solve problems that previously could only be solved by applying human intelligence” (here Brooks quotes David Parnas)—for Brooks, things like speech and image recognition. AI-2 is “the use of a specific set of programming techniques [known] as heuristic or rule-based programming” (Parnas again), which to Brooks means expert systems.

He considers that AI-1 isn’t a useful definition, and isn’t a way of tackling complexity, because its results typically don’t transfer between domains. AI-2 contains some of the features we would recognize from today’s programming assistants—finding patterns in large databases of how software has been made, and drawing inferences about how software should be made. The implementation technology is very different, and while Brooks grants that such a system can empower an inexperienced programmer with the experience of many expert programmers—“no small contribution”—it doesn’t itself tackle the complexity of the programming problem.

He also writes about “automatic programming” systems, which he defines as “the generation of a program for solving a problem from a statement of the problem specifications” and which sounds very much like the vibe coding application of language model-based coding tools. He (writing in 1986, remember) couldn’t see how a generalization of automatic programming could occur, but now we can! So how do these systems fare?

Accidental complexity

Coding assistants generate the same code that programmers generate, and from that perspective they don’t reduce accidental complexity in the solution. A cynic would even say that they increase accidental complexity, by adding prompt/context engineering to the collection of challenges in specifying a program. But that take assumes the prompt is part of the program source; the generated output is still inspectable and modifiable, so it’s not clearly a valid argument. What these tools do supply is the “no small contribution” of letting any one of us lean on the expertise of all of us.

In general, a programming assistant won’t address accidental complexity until it stops generating source code and emits an executable directly. Then someone can fairly compare the complexity of producing a solution by prompting with producing one by coding; but they also have to ask whether their validation tools are up to the task of evaluating a program using only the executable.

Or the tools can skip the program altogether, and just get the model to do whatever tasks people were previously specifying programs for. Then the accidental complexity has nothing to do with programming at all, and everything to do with language models.

Essential complexity

For any problem we might want to write software for, unless the problem statement itself involves a language model, the language model is entirely unrelated to the problem’s essential complexity. For example, “predict the weather for the next week” hides a lot of assumptions and questions, none of which involve language models.

That said, these tools do make it very easy and fast to uncover essential complexity, and typically in the cursed-monkey-paw “that’s not what I meant” way that’s been the bane of software engineering since its inception. This is a good thing.

You type in your prompt, the machine tells you how absolutely right you are, generates some code, you run it—and it does entirely the wrong thing. You realize that you needed to explain that things work in this way, not that way, write some instructions, generate other code…and it does mostly the wrong thing. Progress!

Faster progress than the old thing of specifying all the requirements, designing to the spec, implementing to the design, then discovering that the requirements were ambiguous and going back to the drawing board. Faster, probably, even than getting the first idea of the requirements from the customer, building a prototype, and coming back in two weeks to find out what they think. Whether it’s writing one to throw away, or iteratively collaborating on a design[*], that at least can be much faster now.

[*] Though note that the Spec-Driven Development school is following the path that Brooks did predict for automatic programming (via Parnas again): “a euphemism for programming with a higher-level language than was presently available to the programmer”.

About Graham

I make it faster and easier for you to create high-quality code.
