As we learn to operate these new generative pre-trained transformers, those of us in the world of software need to work out what we're using them for. The way we use them, the results we get, and the direction the tools develop all change dramatically depending on which of these worldviews we take.
One option is that we're augmenting or supporting software engineering. Perhaps asking a language model to explain how code works, or getting it to investigate whether there are test cases we haven't covered, or asking it to identify ambiguities in a user story, or getting it to fill in some boilerplate code that would bore us if we wrote it ourselves.
Another option is that we’re generating software using natural language prompts. Perhaps asking a language model to create an app, or generate an integration between two services, or create a website to achieve some goal.
These are (at least for the moment) very different things.