Product teams: our products are not our products

Woah, too many products. Let me explain. No, it will take too long, let me summarise.

Sometimes, people running software organisations call their teams “product teams”, and organise them around particular “products”. I do not believe that this is a good idea, because we typically aren’t making products: we’re solving problems.

The difference is that a product is “done”. If you have a “product team”, they probably have a “definition of done”, and then release software that has satisfied that definition. Even where that’s iterative and incremental, it leads to there being a “product”. The thing that’s live represents as much of the product as has been done.

The implications of there being a “product” that is partially done include optimising for getting more “done”. Particularly, we will prioritise adding new stuff (getting more “done”) over fixing old stuff (shuffling the deckchairs). We will target productish metrics, like number of daily actives and time spent.

Let me propose an alternative: we are not making products, we are solving problems. And, as much out of honesty as job preservation, let me assure you that the problems are very difficult to solve. They are problems in cybernetics, in other words in communication and control in a complex system. The system is composed of three identifiable, interacting subsystems:

  1. The people who had the problem;
  2. The people who are trying to solve the problem;
  3. The software created to present the current understanding of the solution.

In this formulation, we don’t want “amount of product” to be a goal, we want “sufficiency of solution” to be a goal. We accept that the software does not represent the part of the “product” that has been “done”. The software represents our best effort to date at modelling our understanding of the solution as we comprehend it to date.

We therefore accept that adding more stuff (extending the solution) is one approach we could consider, along with fixing old stuff (reflecting new understanding in our work). We accept that introducing the software can itself change the problem, and that more people using it isn’t necessarily a goal: maybe we’ve helped people to understand that they never actually needed that problem solved.

Now our goals can be more interesting than bushels of software shovelled onto the runtime furnace: they can be about sufficiency of the solution, empowerment of the people who had the problem, and improvements to their quality of life.

Mapping software engineering tools

Despite the theory that everything can be done in software (and of course, anything that can’t be done could in principle be approximated using numerical methods, or fudged using machine learning), software engineering itself, the business of writing software, seems to be full of tools that are accepted as de facto standards yet begrudged by many of the teams that use them. What’s going on? Why, if software is eating the world, hasn’t it yet found an appealing taste for the part of the world that makes software?

Let’s take a look at some examples. Jira is popular with many people; I found a blog post literally called Why I Love Jira. And yet other people say that Jira is an anti-pattern, a sentiment that gets reasonable levels of community support.

Jenkins is almost certainly the (“market”, though it’s free) leader among continuous delivery tools, a position it has occupied since ousting Hudson, from which it was forked. Again, it’s possible to find people extolling its virtues and people hating on it.

Lastly, for some quantitative input, the Stack Overflow 2018 survey found that Rust is the most loved language (78.9% of the developers using it want to keep doing so), while JavaScript is the most used (69.8% of respondents). From this we draw the interesting conclusion that the most popular tool in the programming language realm is not, actually, the one that wins the popularity contest.

So, weird question, why does everybody do this to themselves? And then more specifically, why is your team doing it to yourselves, and what can you do about it?

My hypothesis is that all of these tools succeed because they are highly configurable. I mean, JavaScript is basically a configuration language for Chromium (don’t @ me) to solve/cause your problem. Jira’s workflows are ridiculously configurable, and if Jenkins doesn’t do what you want then you can find a plugin to do it, write a plugin to do it or make a Groovy script that will do it.

This appeals to the desire among software engineers to find generalisations. “Look,” we say, “Jenkins is popular, it can definitely be made to do what we want, so let’s start there and configure it to our needs”.

Let’s take the opposing view for the moment. I’m going to drop the programming language example of JS/Rust, because all programming languages are, roughly speaking, entirely interchangeable. The detail is in the roughness. The argument below still applies, but requires more exposition which will inevitably lead to dissatisfaction that I didn’t cover some weird case. So, for the moment, let’s look at other tools like Jira and Jenkins.

The exact opposing view is that our project is distinct, because it caters to the needs of our customers and their (or these days, probably our) environment, and is understood and worked on by our people with our processes, which is not true for any other project. So rather than pretend that some other tool fits our needs or can be bent into shape, why don’t we build our own?

And, for our examples, building such a tool doesn’t appear to be a big deal. Using the expansive software engineering term “just”, a CD tool is “just” a way to run each step in the deployment pipeline and tell someone when a step fails. A development-tracking tool is “just” a way to list the things the team is or could be working on.
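To see how expansive that “just” is, here is a minimal sketch of such a CD step-runner in Swift (the step commands are invented for illustration). Everything a real tool adds, like retries, parallelism, credentials and dashboards, lives in the gap between this sketch and Jenkins.

    import Foundation

    // The "just" version of a CD tool: run each step in order and
    // tell someone when a step fails. Steps are illustrative only.
    let steps = [["make", "test"], ["make", "package"], ["make", "deploy"]]

    for step in steps {
        let process = Process()
        process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
        process.arguments = step
        do {
            try process.run()
            process.waitUntilExit()
        } catch {
            print("Could not launch step \(step.joined(separator: " ")): \(error)")
            break
        }
        guard process.terminationStatus == 0 else {
            print("Step failed: \(step.joined(separator: " "))") // "tell someone"
            break
        }
    }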

This is more or less a standard “build or buy” question, with just one level of indirection: both building and buying are actually measured in terms of time. How long would it take the team to write a new CD tool, and to maintain it? How long would it take the team to configure Jenkins, and to maintain it?

The answer should be fairly easy to consider. Let’s look at the map:

We are at x, of course. We are a short way from the Path of Parsimony, the happy path along which the generic tools work out of the box. That distance is marked on the map as d.

Think about how you would measure d for your team. You would consider the expectations of the out-of-the-box tool. You would consider the expectations of your team, and of your project. You would look at how those expectations differ, and try to quantify the result.

This tells you something about the gap between what the tool provides by default and what you need, which will help you quantify the amount of customisation needed (the cost of building a spur out from the Path of Parsimony to x). You can then compare that with the cost of building a tool that supports your position directly (the cost of building your own path, running through x).
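Putting rough symbols on that comparison (my notation, nothing from the map), configuring the generic tool wins when

    C_{\text{configure}}(d) + C_{\text{maintain config}}(d) < C_{\text{build}} + C_{\text{maintain tool}}

where every term is measured in time, and the left-hand side grows with the distance d.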

But the map also suggests another option: why don’t we move from x closer to the path, and make d smaller? Which of our distinct assumptions are incidental and can be abandoned, which are essential and need to be supported, and which are historical and could be revised? Is there a way to change the context so that adopting the popular tool is cheaper?

[Left out of the map but just as important is the related question: has somebody else already charted a different path, and how far are we from that? In other words, is there a different off-the-shelf product which needs less configuration than the one we’ve picked, so the total migration-plus-configuration cost is less than sticking where we are?]

My impression is that these questions tend to get asked once at the start of a project or initiative, then not again until the team is so far from the Path of Parsimony that they are starting to get tangled and stung by the Weeds of Woe. Teams that change tooling, such as their issue tracker or CD pipeline, tend to do it once the existing way is already hurting too much and the route back to the path is no longer clear.

Microservices for the Desktop

In OOP the Easy Way, I make the argument that microservices are a rare instance of OOP done well:

Microservice adopters are able to implement different services in different technologies, to think about changes to a given service only in terms of how they satisfy the message contract, and to independently replace individual services without disrupting the whole system. This […] sounds a lot like OOP.

Microservices are an idea from service-oriented architecture (SOA) in which each application—each microservice—represents a distinct bounded context in the problem domain. If you’re a movie theatre complex, then selling tickets to people is a very different thing from showing movies in theatres; the two are loosely coupled at the point where a ticket represents the right to a given seat in a given theatre at a given showing. So you might have one microservice that can tell people what showings there are, at what times and where, and another that can sell people tickets.
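As a sketch of that split (in Swift, with all of these types invented for illustration), the two bounded contexts meet only at the ticket:

    import Foundation

    // Two bounded contexts, loosely coupled at the ticket.
    struct Showing { let movie: String; let theatre: String; let starts: Date }
    struct Ticket  { let showing: Showing; let seat: String }

    // One microservice answers "what's on, when, and where?"...
    protocol ShowingsService {
        func showings(on date: Date) -> [Showing]
    }

    // ...and another sells the right to a seat at a showing.
    protocol TicketsService {
        func sellTicket(seat: String, for showing: Showing) -> Ticket?
    }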

People who want to write scalable systems like microservices, because they can scale different parts of their application separately. Maybe each franchisee in the theatre chain needs one instance of one service, but another should scale as demand grows, sharing a central resource pool.

Never mind all of that. The real benefit of microservices is that they make boundary-crossing more obvious, maybe even more costly, and as a result developers think about where the boundaries should be. The “problem” with monolithic (single-process) applications was never, really, that the deployment cost too much: one corollary of scale is that you have more customers, and presumably the means to pay for the deployment. It was that there was no real enforcement of separate parts of the problem domain. If you’ve got a thing over here that needs that data over there, it’s easy to just change its visibility modifier and grab it. Now this thing and that thing are coupled, whoops!

When this thing and that thing are in separate services, you’re going to have to expose a new endpoint to get that data out. That’s going to make it part of that thing’s public commitment: a slightly stronger signal that you’re going down a complex path.

It’s possible to take the microservices idea and use it in other contexts than “the backend”. In one Cocoa app I’m working on, I’ve taken the model (the representation in objects of the problem I’m solving) and put it into an XPC Plugin. XPC is a lot like old-style Distributed Objects or even CORBA or DCOM, with the exception that there are more safety checks, and everything is asynchronous. In my case, the model is in Objective-C in the plugin, and the application is in Swift in the host process.

“Everything is asynchronous” is a great reminder that the application and the model are communicating in an arm’s-reach fashion. My model is a program that represents the domain problem, as mentioned before. All it can do is react to events in the problem domain and represent the changes in that domain. My application is a reification of the Cocoa framework to expose a user interface. All it can do is draw stuff to the screen, and react to events in the user interface. The app and the model have to collaborate, because the stuff that gets drawn should be related to the problem, and the UI events should be related to desired changes in the domain. But they are restricted to collaborating over the published interface of the XPC service: a single protocol.
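From the host-process side, that collaboration looks something like the following sketch (the protocol, its method and the service name are invented here, not the app’s real interface):

    import Foundation

    // The single protocol over which app and model collaborate.
    @objc protocol ModelEvents {
        func handle(_ domainEvent: String, reply: @escaping (String) -> Void)
    }

    let connection = NSXPCConnection(serviceName: "com.example.app.Model") // hypothetical name
    connection.remoteObjectInterface = NSXPCInterface(with: ModelEvents.self)
    connection.resume()

    // Every message crosses the service boundary, and every reply is
    // asynchronous: the app draws, the model computes, at arm's reach.
    if let model = connection.remoteObjectProxy as? ModelEvents {
        model.handle("seatChosen") { newDomainState in
            print(newDomainState) // in the real app: update the UI
        }
    }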

XPC was designed for factoring applications, separating the security contexts of different components and giving the host application the chance to stay alive when parts of the system fail. Those are valid and valuable benefits: the XPC service hosting the model only needs to do computation and allocate memory. Drawing (i.e. messaging the window server) is done elsewhere. So is saving and loading. And that helps enforce the contract, because if I ever find myself wanting to put drawing in the model I’m going to cross a service boundary, and I’m going to need to think long and hard about whether that is correct.

If you want to talk more about microservices, XPC services, and how they’re different or the same, and how I can help your team get the most out of them, you’re in luck! I’ve recently launched the Labrary—the intersection of the library and the laboratory—for exactly that purpose.

Introducing: the Labrary

Is it that a month in the laboratory will save an hour in the library, or the other way around? A little more conversation, a little less action?

There are things to learn from both the library and the laboratory, and that’s why I’m launching the Labrary, providing consulting detective and training service to software teams who need to solve problems, and to great engineers who want to be great lead engineers, principal engineers and architects.

The Labrary is also the home to my books and other projects to come. So if you want to find out what a consulting detective can do for your team, follow the @labrarian on Mastodon or book office hours to talk things over.

Let’s talk about self-documenting code

You think your code is self-documenting. That it doesn’t need comments or Doxygen or little diagrams, because it’s clear from the code what it does.

I do not think that that is true.

Even if your reader has at least as much knowledge of the programming language you’ve used as you have, and at least as much knowledge of the libraries you’ve used as you have, there is still no way that your code is self-documenting.

How long have you been doing your job? How long have you been talking to experts in the problem domain, solving similar problems, creating software in this region? The likelihood is, whoever you are, that the new person on your team has never done that, and that your code contains all of the jargon terms and assumptions that go with however-much-experience-you-have experience at solving those problems.

How long were you working on that story, or fixing that bug? How long have you spent researching that specific change that you made? However long it is, everybody else on your team has not spent that long. You are the world expert at that chunk of code, and it’s self-documenting to you as the world expert. But not to anybody else.
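Here’s a contrived Swift example of what I mean; everything in it is invented for illustration. The code is perfectly “clear”, but only the comment carries the domain knowledge:

    struct Seat { let row: Int; let number: Int }
    let seats = [Seat(row: 1, number: 4), Seat(row: 7, number: 2)]

    // The first two rows sell at a discount because the chain calls
    // them "neck-ache" seats. Without this comment, the 2 below is
    // just arbitrary arithmetic; the jargon stays in your head.
    let discounted = seats.filter { $0.row <= 2 }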

We were told about “working software over comprehensive documentation”, and that’s true, but nobody said anything about avoiding sufficient documentation. Nobody else has invested the time that you did in understanding the code you just wrote, so the only person for whom your code is self-documenting is you.

Help us other programmer folks out: think about us before avoiding documentation.

It’s about the thinking

At some point in the past, programmers used to recommend drawing flowcharts before you start coding. Then they recommended creating CRC cards, or acting out how the turtle will behave, or writing failing tests, or getting the types to match up, or designing contracts, or writing proofs. The point is that in each case, these are techniques for eliciting thought before the code gets laid down.
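Take the failing-test variant as an example (a sketch in Swift; the Pricing type is hypothetical, and in test-first style it wouldn’t exist yet, so it’s included minimally here only to keep the example self-contained):

    import XCTest

    // Hypothetical type; in test-first style you'd write the test below first.
    struct Pricing {
        let adultPence: Int
        var childPence: Int { adultPence / 2 }
    }

    final class PricingTests: XCTestCase {
        // Writing this before Pricing exists forces decisions about the
        // problem domain (what "half price" means, how to round) before
        // any solution code gets laid down.
        func testChildTicketsAreHalfTheAdultPrice() {
            let pricing = Pricing(adultPence: 1050)
            XCTAssertEqual(pricing.childPence, 525)
        }
    }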

These things are not mutually exclusive, and none of them is the one true way, but the fact that they all isolate some part of solving the problem from some part of coding the solution is the telling point. The problem is not having the correct type system or test coverage or diagram format; the problem is trying to work in two (or more) levels of abstraction – the problem domain and the computer – at the same time.

Head of Architecture

My current job title is Head of Architecture, though the word “architecture” means different things to different people in the world of software. So what does it mean to me, what do I do when I’m playing Head of Architecture?

I follow Perry and Wolf in Foundations for the Study of Software Architecture by drawing the analogy between software architecture and built environment architecture, not network or electronics architecture. The work of a built environment architect, particularly one who follows the path laid out by Christopher Alexander, combines elements, form and aesthetics by creating a system that complements and enhances its environment. Software systems are deployed into environments (with existing people, processes, cultural norms) and developed in environments, and should make those environments better for the people who are interacting with them, while also meeting the functionality, performance, security and other goals of the system.

But that doesn’t actually explain what I do, which is more about letting other people do things that are “architecture” than about “doing architecture” for them. Programmers, ops folks, QA people, product owners, and others frequently make decisions that have wide (and hence “architectural”) impact, and I think it’s better to enable that, and to follow up by asking how a decision impacts the rest of the system and refining the choices made, than to take those decisions away from people in the name of “being the architect”.

So playing software architect, for me, tends to be more about creating a forum in which people can present aspects of the problems they’re trying to solve or will need to solve soon, or the solutions they’re exploring, and getting a view from multiple teams and multiple functions on those problems and solutions. Making sure that ops know what devs are doing, that product team Alpha knows how product team Aleph are solving that issue, and so on.

How about the stereotype that software architects program using Visio or PowerPoint? In my case, I program using JavaScript. I do make documentation, to make sure that decisions made in the forum are captured and that proposed or current approaches can be seen and understood. And yes, much of that documentation is diagrammatic. But ultimately I’m a programmer on a software team too, and that documentation has to reflect working, valuable software. That is, while there is value in comprehensive documentation, we value working software more.

Bottom-up teaching

We’re told that the core idea in computer programming is problem-solving. That one of the benefits of learning about computer programming (one that is not universally accepted) is gaining the skill of problem decomposition.

If you look at real teaching of computing, it seems to have more to do with solution composition than problem decomposition. The latter seems to be background noise: here are the things you can build solutions with; presumably at some point you’ll come across a solution that’s the same size and shape as one of your problem components, though how you spot that match is left up to you.

I have many books on programming languages. Each lists the features of the language, and gives minimally complex examples of the use of those features. In that sense, Kernighan and Ritchie’s “The C Programming Language” (section 1.3, the for statement) teaches as little about solving problems using a computer as Eric Nikitin’s “Into the Realm of Oberon” (section 7.1, the FOR loop) or Dave Thomas’s “Programming Elixir” (section 7.2, Using Head and Tail to Process a List).
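A Swift edition would have an equivalent section, something like this: a minimally complex demonstration of the feature, solving nobody’s problem in particular.

    // Section n.n, the for-in statement.
    for i in 1...10 {
        print(i, i * i)
    }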

A course textbook on bitcoin and blockchain (Narayanan, Bonneau, Felten, Miller and Goldfeder, “Bitcoin and Cryptocurrency Technologies”) starts with Section 1.1, “Cryptographic hash functions”, and builds a cryptocurrency out of them, leaving motivational questions about politics and regulation to Chapter 7.

This strategy is by no means universal: Liskov and Guttag’s “Program Development in Java” starts out by describing abstraction, then looks at techniques for designing abstractions in Java. Adele Goldberg and Alan Kay described teaching Smalltalk by proposing exploratory projects, designing the objects that model the problem under consideration and the way in which they will communicate, then incrementally filling in by designing classes and methods that have the desired properties. C.J. Date’s “An Introduction to Database Systems” answers the question “why databases?” before introducing the relational model, and doesn’t introduce SQL until it can be situated in the context of the relational model.

Both of these approaches, and their associated techniques (the bottom-up approach and solution construction; the top-down approach and problem decomposition) are useful; the former leads to progress and the latter leads to understanding. But both must be taken in concert, because understanding without progress leads to the frustration of an unsolved problem and progress without understanding is merely the illusion of progress.

My guess is that more programmers – indeed whole movements, when we consider the collective state of things like OOP, functional programming, BDD, or agile practices – are in the “bottom-up only” group than in the “top-down only” or “a bit of both” groups. That plenty more copies of Introduction to Programming in [This Week’s Hot Language] have been sold than of Techniques for Making Your Problem Amenable to Computation. That the majority of software really does consist of solutions looking for problems.

Choose boring employers

Amusingly, my previous post choose boring employees was shared to Hacker News under the off-by-one erroneous title choose boring employers. That seemed funny enough to run with, but what does it mean to choose boring employers?

One interpretation is that a boring employer is one where you do not live in interesting times. Where you can get on with your job, and with finding new and better ways to do your job, without constantly fighting fires.

But what if you’re happiest in an environment where you are fighting fires? In that case, you probably should surround yourself with arsonists.

Another interpretation is to invert the discussion in Choose Boring Employees: find an employer who spends their innovation tokens wisely. One who’s OK with the answer to “how do I store these tuples of known structure” being “in a relational database”, and one who gets nervous when the answer to “what platform should we base our whole business on” starts with “I skim-read a blog post on HN when I was riding MUNI this morning and…”.

But, let’s be clear, there’s a place for the shiny new technology. Sometimes you do need to spend your innovation tokens, so you don’t want to be somewhere that won’t let you do it at all. Working on a proof of concept, you want to get to proof quickly, so it may be time to throw caution to the wind (unless the concept you’re trying to prove involves working within some cautious boundaries). So boring need not get as far as frustrating.

Choose boring employees

An idea I’ve heard from many directions recently is that “we” (whoever they are) “need to be on the latest tech stack in order to attract developers”. And yes, you do attract developers that way. Developers who want to be paid to work on the latest technology.

Next year, your company will be a year more mature. Your product will be a year more developed. You will have a year more customers. You’ll have a year more tech debt to pay off.

And your cutting-edge tech stack will be so last year. Your employees will be looking at the new startup in the office next door, and how they’re hiring to work on the latest stack while you’re still on your 2017 legacy technology.