Management is the wrong analogy for LLM augmentation

A common meme in AI-augmented coding circles at the moment is “we are all managers now”: the idea that alongside actual programming, programmers now manage their team of agents. This is a poor analogy, in both directions. Treating an interaction with an AI agent like a manager-report interaction leads to a poor experience with the agent. Treating an interaction with a direct report like you’re using an AI tool would likely result in a visit from your HR representative.

In my experience of being managed and of being a manager in software companies, the good managers I’ve had and aspire to be are the ones Camille Fournier describes in The Manager’s Path:

Managers who care about you as a person, and who actively work to help you grow in your career. Managers who teach you important skills and give you valuable feedback. Managers who help you navigate difficult situations, who help you figure out what you need to learn. Managers who want you to take their job someday. And most importantly, managers who help you understand what is important to focus on, and enable you to have that focus.

An LLM doesn’t have a job, a career path, or growth goals, and it doesn’t learn from your interactions. You can’t really tell it what’s important to focus on; you can just try to avoid showing it things you don’t want it to focus on. An LLM never gets into a difficult situation; the customer is always “absolutely right!”

Treating an LLM like a direct report can only lead to frustration. It isn’t a person who wants to succeed at its job, to learn and grow in its role, or to become more capable. Indeed, it can’t do any of those things. It’s a tool. A tool that happens to have an interface that superficially resembles a conversation.

And that means that the correct way to treat an AI agent or coding interface is like a tool: it’s a text editor with a chat-like interface, a nondeterministic build script, or a static analysis tool. You’re looking for the correct combination of words and symbols to feed in to make the tool produce the output you want.

Treating a person who reports to you in that way would be unsatisfying and ultimately problematic. You don’t find different ways to express your problem statement until they solve it the way you would have solved it. You don’t give them detailed rules files with increasingly desperate punctuation around the parts it’s ## MANDATORY! that they follow. You find a way to work together, to teach each other, and to support each other.

If you’re really looking for an analogy with human-human interactions, then working with an outsource agency is slightly more accurate (particularly one located in a different place with a different culture and expectations, where you have to be more careful about communication because you can’t rely on shared norms and tacit knowledge being equivalent). You do, in such cases, work on a clearly-scoped task or project, with written statements of work and clear feedback points. But you still expect it to get better and easier over time, for the agency’s people to learn and adapt in ways that LLM-based tools don’t, and for them to show initiative when faced with unstated problems in ways that LLM-based tools can’t. And the outsource agency’s people still expect to be treated as peers and experts, helping you out by doing the work that you don’t have the capacity for. It’s better, but not great, as analogies go.

Unfortunately, the best analogy we have for “precisely expressing problem statements in such a way that a computer generates the expected solution” is exactly the kind of thing that many people in the LLM world would like to claim isn’t happening.

