There is no “us” in team

I’ve talked before about the non-team team dynamic that is “one person per task”, where management and engineers collude to push the organisation beyond a sustainable pace by making sure that, at all times, each individual is kept busy and collaboration is minimised.

I talked about the deleterious effect on collaboration, particularly that code review becomes a burden resolved with a quick “LGTM”. People quickly develop specialisations and fiefdoms: oh there’s a CUDA story, better give it to Yevgeny as he worked on the last CUDA story.

The organisation quickly adapts to this balkanisation and optimises for it. Is there a CUDA story in the next sprint? We need something for Yevgeny to do. This is Conway’s Corollary: software development is most efficient when the structure of the software matches the org chart.

Basically all forms of collaboration become a slog when there’s no “us” in team. Unfortunately, the contradiction at the heart of this 19th-century approach to division of labour is that, when applied to knowledge work, the value to each participant of being in meetings is minimised, while the necessity for each participant to be in a meeting is maximised.

The value is minimised because each person has their personal task within their personal fiefdom to work on. Attending a meeting takes away from the individual productivity that the process is optimising for. Additionally, it increases the likelihood that the meeting content will be mostly irrelevant: why should I want to discuss backend work when Sophie takes all the backend tasks?

The meetings are necessary, though, because nobody owns the whole widget. No-one can see the impact of any workflow change, or dependency adoption, or clean-up task, because nobody understands more than a 1/N part of the whole system. Every little thing needs to be run by Sophie and Yevgeny and all the others because no-one is in a position to make a decision without their input.

This might sound radically democratic, and not the sort of thing you’d expect from a business: nobody can make a decision without consulting all the workers! Power to the people! In fact it’s just entirely progress-destroying: nobody can make a decision at all until they’ve got every single person on board, and that’s so much work that a lot of decisions will be defaulted. Nothing changes.

And there’s no way within this paradigm to avoid that. Have fewer meetings, and each individual is happier because they get to maximise the time spent making progress on their individual tasks. But the work will eventually grind to a halt, as the architecture comes to reflect N different opinions, and the N! different interfaces (each of which has fallen into the unowned gaps between the individual contributors) become harder to work with.

Have more meetings, and people will grumble that there are too many meetings. And that Piotr is trying to land-grab from other fiefdoms by pushing for decisions that cross into Sophie’s domain.

The answer is to reconstitute the team – preferably along self-organising principles – into a cybernetic organism that makes use of its constituent individuals as they can best be applied, but in pursuit of the team’s goals, not N individual goals. This means radical democracy for some issues, (agreed) tyranny for others, and collective ignorance of yet others.

It means in some cases giving anyone the autonomy to make some choices, but giving someone with more expertise the autonomy to override those choices. In some cases all decisions get made locally; in others, they must be run past an agreed arbiter. And in some cases it means having one task per team, or even no tasks per team, if the team needs to do something more important before it can take on another task.

Posted in agile, team

Episode 33: Ask Me Anything

@sharplet asked me a few questions, and you can too!

The theme music is Blue-Eyed Stranger.

Evidence-Based Software Engineering Using R.

Median household income in the UK in January 2021 was £29,900 (I said “about £28,000”). Median software engineer salary (i.e. for the role “software engineer” with no seniority modifiers) is £37,733.


It was requested on Twitter that I start answering community questions on the podcast. I’ve got a few to get the ball rolling, but what would you like to ask? Comment here, or reach me wherever you know I hang out.

Posted by Graham

An Imagined History of Object-Oriented Programming

Having looked at hopefully modern views on Object-Oriented analysis and design, it’s time to look at what happened to Object-Oriented Programming. This is an opinionated, ideologically-motivated history, that in no way reflects reality: a real history of OOP would require time and skills that I lack, and would undoubtedly be almost as inaccurate. But in telling this version we get to learn more about the subject, and almost certainly more about the author too.
They always say that history is written by the victors, and it’s hard to argue that OOP was anything other than victorious. When people explain how they prefer to write functional programs because it helps them to “reason about” code, all the reasoning that was done about the code on which they depend was done in terms of objects. The ramda or lodash or Elm-using functional programmer writes functions in JavaScript on an engine written in C++. Swift Combine uses a functional pipeline to glue UIKit objects to Core Data objects, all in an Objective-C IDE and – again – a C++ compiler.

Maybe there’s something in that. Maybe the functional thing works well if you’re transforming data from one system to another, and our current phase of VC-backed disruption needs that. Perhaps we’re at the expansion phase, applying the existing systems to broader domains, and a later consolidation or contraction phase will demand yet another paradigm.
Anyway, Object-Oriented Programming famously (and incorrectly, remember) grew out of the first phase of functional programming: the one that arose when it wasn’t even clear whether computers existed, or if they did whether they could be made fast enough or complex enough to evaluate a function. Smalltalk may have borrowed a few ideas from Simula, but it spoke with a distinct Lisp.

We’ll fast forward through that interesting bit of history when all the research happened, to that boring bit where all the development happened. The Xerox team famously diluted the purity of their vision in the hot-air balloon issue of Byte magazine, and a whole complex of consultants, trainers and methodologists jumped in to turn a system learnable by children into one that couldn’t be mastered by professional programmers.
Actually, that’s not fair: the system already couldn’t be mastered by professional programmers, a breed famous for assuming that they are correct and that any evidence to the contrary is flawed. It was designed to be learnable by children, not by those who think they already know better.

The result was the ramda/lodash/Elm/Clojure/F# of OOP: tools that let you tell your local user group that you’ve adopted this whole Objects thing without, y’know, having to do that. Languages called Object-*, Object*, Objective-*, O* added keywords like “class” to existing programming languages so you could carry on writing software as you already had been, but maybe change the word you used to declare modules.

Eventually, the jig was up, and people cottoned on to the observation that Object-Intercal is just Intercal no matter how you spell come.from(). So the next step was to change the naming scheme to make it a little more opaque. C++ is just C with Classes. So is Java; so is C#. Visual Basic .NET is little better.

Meanwhile, some people who had been using Smalltalk and getting on well with fast development of prototypes that they could edit while running into a deployable system had an idea. Why not tell everyone else how great it is to develop prototypes fast and edit them while running into the deployable system? The full story of that will have to wait for the Imagined History of Agile, but the TL;DR is that whatever they said, everybody heard “carry on doing what we’re already doing but plus Jira”.

Well, that’s what they heard about the practices. What they heard about the principles was “principles? We don’t need no stinking principles, that sounds like Big Thinking Up Front urgh”, and so they decided to stop thinking about anything as long as the next two weeks of work would be paid for. Yes, iterative, incremental programming introduced the idea of a project the same length as the gap between pay cheques, thus paving the way for fire and rehire.

And thus we arrive at the ideological void of today’s computering. A phase in which what you don’t do is more important than what you do: #NoEstimates, #NoSQL, #NoProject, #NoManagers…#NoAdvances.

Something will fill that void. It won’t be soon. Functional programming is a loose collection of academic ideas with negation principles – #NoSideEffects and #NoMutableState – but doesn’t encourage anything new. As I said earlier, it may be that we don’t need anything new at the moment: there’s enough money in applying the old things to new businesses and funnelling more money to the cloud providers.

But presumably that will end soon. The promised and perpetually interrupted parallel computing paradigm we were guaranteed upon the death of Moore’s Law in the (1990s, 2000s, 2010s) will eventually meet the observation that every object larger than a grain of rice has a decent ARM CPU in it, leading to a revolution in distributed consensus computing. Watch out for the blockchain folks saying that means they were right all along: in a very specific way, they were.

Or maybe the impressive capability but limited applicability of AI will meet the limited capability but impressive applicability of intentional programming in some hybrid paradigm. Or maybe if we wait long enough, quantum computing will bring both its new ideas and some reason to use them.

But that’s the imagined future of programming, and we’re not in that article yet.

Posted in OOP

A hopefully modern description of Object-Oriented Design

We left off in the last post with an idea of how Object-Oriented Analysis works: if you’re thinking that it used around a thousand words to depict the idea “turn nouns from the problem domain into objects and verbs into methods” then you’re right, and the only reason to go into so much detail is that the idea still seems to confuse people to this day.

Similarly, Object-Oriented Design – refining the overall problem description found during analysis into a network of objects that can simulate the problem and provide solutions – can be glibly summed up in a single sentence: treat any object uncovered in the analysis as a whole, standalone computer program (this is called encapsulation), and do the object-oriented analysis stuff again at this level of abstraction.

You could treat it as turtles all the way down, but just as Physics becomes quantised once you zoom in far enough, the things you need an object to do become small handfuls of machine instructions and there’s no point going any further. Once again, the simulation has become the deployment: this time because the small standalone computer program you’re pretending is the heart of any object is a small standalone computer program.

I mean, that’s literally it. Just as the behaviour of the overall system could be specified in a script and used to test the implementation, so the behaviour of these units can be specified in a script and used to test the implementation. Indeed some people only do this at the unit level, even though the unit level is identical to the levels above and below it.
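As a concrete sketch of that, here is what a unit-level behaviour script might look like. The Cart class and its methods are invented purely for illustration – none of this is from a real system:

```ruby
# A hypothetical Cart object, treated as a whole, standalone program.
# The names (Cart, #add, #total) are invented for this example.
class Cart
  def initialize
    @items = []
  end

  # Adding returns self so specifications can chain calls.
  def add(name, price)
    @items << [name, price]
    self
  end

  def total
    @items.sum { |_, price| price }
  end
end

# The behaviour specification doubles as the unit test.
cart = Cart.new.add("tea", 250).add("mug", 500)
raise "spec failed" unless cart.total == 750
```

The same shape works at the level above (drive the whole system through its interface) and the level below (specify a single method’s effect).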

Though up and down are not the only directions we can move in, and it sometimes makes more sense to think about in and out. Given our idea of a User who can put things in a Cart, we might ask questions like “how does a User remember what they’ve bought in the past” to move in towards a Ledger or PurchaseHistory, from where we move out (of our problem) into the realm of data storage and retrieval.

Or we can move out directly from the User, asking “how do we show our real User out there in the world the activities of this simulated User” and again leave our simulation behind to enter the realm of the user interface or network API. In each case, we find a need to adapt from our simulation of our problem to someone’s (probably not ours, in 2021) simulation of what any problem in data storage or user interfaces is; this idea sits behind Cockburn’s Ports and Adapters.
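A minimal sketch of that boundary, with every name assumed for the sake of illustration: the domain object talks to a “port” – any object that answers the messages it needs – and an adapter at the edge supplies the storage realm’s details.

```ruby
# PurchaseHistory lives in the problem simulation; it neither knows
# nor cares how purchases are stored, only that its collaborator
# answers save and all. (All names here are invented for the sketch.)
class PurchaseHistory
  def initialize(store)
    @store = store
  end

  def record(purchase)
    @store.save(purchase)
  end

  def purchases
    @store.all
  end
end

# One adapter for the storage port: in-memory. A database-backed
# adapter could replace it without touching the domain object.
class MemoryStore
  def initialize
    @rows = []
  end

  def save(row)
    @rows << row
  end

  def all
    @rows.dup
  end
end

history = PurchaseHistory.new(MemoryStore.new)
history.record("tea")
raise "adapter mismatch" unless history.purchases == ["tea"]
```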

Moving in either direction, we are likely to encounter problems that have been solved before. The trick is knowing that they have been solved before, which means being able to identify the commonalities between what we’re trying to achieve and previous solutions, which may be solutions to entirely different problems but which nonetheless have a common shape.

The trick object-oriented designers came up with to address this discovery is the Pattern Language (OK, it was architect Christopher Alexander’s idea: great artists steal and all that), in which shared solutions are given common names and descriptions so that you can explore whether your unique problem can be cast in terms of this shared description. In practice, the idea of a pattern language has fared incredibly well in software development: whenever someone says “we use container orchestration” or “my user interface is a functional reactive program” or “we deploy microservices brokered by a message queue” they are relying on the success of the pattern language idea introduced by object-oriented designers.

Meanwhile, in theory, the concept of a pattern language failed, and if you ask a random programmer about design patterns they will list Singleton and maybe a couple of other 1990s-era implementation patterns before telling you that they don’t use patterns.

And that, pretty much, is all they wrote. You can ask your questions about what each object needs to do spatially (“who else does this object need to talk to in order to answer this question?”), temporally (“what will this object do when it has received this message?”), or physically (“what executable running on what computer will this object be stored in?”). But really, we’re done, at least for OOD.

Because the remaining thing to do (which isn’t to say the last thing in a sequence, just the last thing we have yet to talk about) is to build the methods that respond to those messages and pass those tests, and that finally is Object-Oriented Programming. If we start with OOP then we lose out, because we try to build software without an idea of what our software is trying to be. If we finish with OOP then we lose out, because we designed our software without using knowledge of what that software would turn into.

Posted in ooa/d

Episode 32: freeing software from source code

I muse on a concept I’ve been thinking about for a long time: that software engineering is trapped within the confines of the fixed-width text editing paradigm. This was motivated by a discussion with Orta about his Shiki Twoslash project but it’s been rattling around in here for years.

I also talk about Pharo, the Lindisfarne Gospels, Code Bubbles, Scratch, RStudio and more!


A hopefully modern description of Object-Oriented Analysis

I’ve made a lot over the years, including in my book Object-Oriented Programming the Easy Way, of my assertion that one reason people are turned off from Object-Oriented Programming is that they weren’t doing Object-Oriented Design. Smalltalk was conceived as a system for letting children explore computers by writing simulations of other problems, and if you haven’t got the bit where you’re creating simulations of other problems, then you’re just writing with no goal in mind.

Taken on its own, Object-Oriented Programming is a bit of a weird approach to modularity where a package’s code is accessed through references to that package’s record structures. You’ve got polymorphism, but no guidance as to what any of the individual morphs are supposed to be, let alone a strategy for combining them effectively. You’ve got inheritance, but inheritance of what, by what? If you don’t know what the modules are supposed to look like, then deciding which of them is a submodule of other modules is definitely tricky. Similarly with encapsulation: knowing that you treat the “inside” and “outside” of modules differently doesn’t help when you don’t know where the boundary between them ought to go.

So let’s put OOP back where it belongs: embedded in an object-oriented process for simulating a problem. The process will produce as its output an executable model of the problem that produces solutions desired by…

Well, desired by whom? Extreme Programming and Agile Software Development, approaches to thinking about how to create software that were born out of a time when OOP was ascendant, would answer “by the customer”: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”, they say.

Yes, if we’re working for money then we have to satisfy the customer. They’re the ones with the money in the first place, and if they aren’t then we have bigger problems than whether our software is satisfactory. But they aren’t the only people we have to satisfy, and Object-Oriented Design helps us to understand that.

If you’ve ever worked with someone who was fully bought into the Unified Modelling Language (this is true for other ways of capturing the results of Object-Oriented Analysis, but let’s stick to thinking about the UML for now), then you’ll have seen a Use Case diagram. This shows you “the system” as an abstract square, with stick figures for actors connected to “the system” via arrows annotated with use cases – descriptions of what the people are trying to do with the system.

We’ve already moved past the idea that the software and the customer are the only two things in the world, indeed the customer may not be represented in this use case diagram at all! What would that mean? That the customer is paying for the software so that they can charge somebody else for use of the software: that satisfying the customer is an indirect outcome, achieved by the action of satisfying the identified actors.

We’ve also got a great idea of what the scope will be. We know who the actors are, and what they’re going to try to do; therefore we know what information they bring to the system and what sort of information they will want to get out. We also see that some of the actors are other computer systems, and we get an idea of what we have to build versus what we can outsource to others.

In principle, it doesn’t really matter how this information is gathered: the fashion used to be for prolix use case documents but these days, shorter user stories (designed to act as placeholders for a conversation where the details can be worked out) are preferred. In practice the latter is better, because it takes into account that the act of doing some work changes the understanding of the work to be done. The later decisions can be made, the more you know at the point of deciding.

On the other hand, the benefit of that UML approach where it’s all stuck on one diagram is that it makes commonalities and patterns clearer. It’s too easy to take six user stories, write them on six GitHub issues, then get six engineers to write six implementations, and hope that some kind of convergent design will assert itself during code review: or worse, that you’ll decide what the common design should have been in a retrospective, and add an “engineering story” to the backlog to be deferred forever more.

The system, at this level of abstraction, is a black box. It’s the opaque “context” of the context diagram at the highest level of the C4 model. But that needn’t stop us thinking about how to code it! Indeed, Behaviour-Driven Development has us do exactly that. Once we’ve come to agreement over what some user story or use case means, we can encapsulate (there’s that word again) our understanding in executable form.

And now, the system-as-simulation nature of OOP finally becomes important. Because we can use those specifications-as-scripts to talk about what we need to do with the customer (“so you’re saying that given a new customer, when they put a product in their cart, they are shown a subtotal that is equal to the cost of that product?”), refining the understanding by demonstrating that understanding in executable form and inviting them to use a simulation of the final product. But because the output is an executable program that does what they want, that simulation can be the final product itself.
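That conversational refinement can be captured directly in executable form. A sketch, with Customer and Product invented for the example rather than taken from any real project:

```ruby
# The shared vocabulary, as executable code. All names are assumptions.
Product = Struct.new(:name, :price)

class Customer
  def initialize
    @cart = []
  end

  def put(product)
    @cart << product
  end

  def subtotal
    @cart.sum(&:price)
  end
end

# "Given a new customer..."
customer = Customer.new
# "...when they put a product in their cart..."
customer.put(Product.new("kettle", 3_000))
# "...they are shown a subtotal equal to the cost of that product."
raise "spec not met" unless customer.subtotal == 3_000
```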

A common step when implementing Behaviour-Driven Development is to introduce a translation layer between “the specs” and “the production code”. So “a new user” turns into User.new.save!, and someone has to come along and write those methods (or at least inherit from ActiveRecord). Or worse, “a new user” turns into PUT /api/v0/internal/users, and someone has to both implement that and the system that backs it.

This translation step isn’t strictly necessary. “A new user” can be a statement in your software implementation, and your specs can be both a simulation of what the system does and the system itself; you save yourself a lot of typing and a potentially faulty translation.
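To illustrate the difference (everything here is an invented example, not a prescription): a Cucumber-style step definition translates the phrase into framework calls, whereas a plain domain object lets the phrase stand as a line of the program.

```ruby
# With a translation layer, "a new user" is text, and a step
# definition somewhere maps it onto persistence machinery, e.g.
# (Cucumber-style, shown only as a comment):
#
#   Given("a new user") { @user = User.new; @user.save! }
#
# Without one, the domain model is plain objects and the phrase is
# already a statement in the implementation. User is invented here.
class User
  def initialize
    @purchases = []
  end

  def buy(product)
    @purchases << product
  end

  def purchase_history
    @purchases.dup
  end
end

user = User.new            # "a new user" – no translation required
user.buy("kettle")
raise unless user.purchase_history == ["kettle"]
```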

There’s still a lot to Object-Oriented Analysis, but it roughly follows the well-worn path of Domain-Driven Design. Where we said “a new user”, everybody on the team should agree on what a “user” is, and there should be a concept (i.e. class) in the software domain that encapsulates (that word again!) this shared understanding and models it in executable form. Where we said the user “puts a product in their cart”, we all understand that products and carts are things both in the problem and in the software, and that a user can do a thing called “putting” which involves products, and carts, in a particular way.

If it’s not clear what all those things are or how they go together, we may wish to roleplay the objects in the program (which, because the program is a simulation of the things in the problem, means roleplaying those things). Someone is a user, someone is a product, someone is a cart. What does it mean for the user to add a product to their cart? What is “their” cart? How do these things communicate, and how are they changed by the experience?

We’re starting to peer inside the black box of the “system”, so in a future post we’ll take a proper look at Object-Oriented Design.

Posted in ooa/d

Episode 31: Apple CPU transitions

I contextualise the x86_64 to arm64 transition by talking about all the other times Apple have switched CPUs in their personal computers; the timeline is roughly 6502-m68k-ppc-ppc+i386-ppc64-i386-x86_64-arm64. Sorry, Newton fans!


Graham Lee Uses This

I’ve never been famous enough in tech circles to warrant a post on uses this, but the joy of running your own blog is that you get to indulge any narcissistic tendencies with no filter. So here we go!

The current setup

[Photo: the desktop setup]

This is the desktop setup. The idea behind this is that it’s a semi-permanent configuration so I can get really comfortable, using the best components I have access to in order to provide a setup I’ll enjoy using. The main features are a Herman Miller Aeron chair, which I got at a steep discount by going for a second-generation fire sale chair, and an M1 Mac Mini coupled to a 24″ Samsung curved monitor, a Matias Tactile Pro 4 keyboard and a Logitech G502 mouse. There’s a Sandberg USB camera (which isn’t great, but works well enough if I use it via OBS’s virtual camera) and a Blue Yeti mic too. The headphones are Marshall Major III, and the Philips FX-10 is used as a Bluetooth stereo.

I do all my streaming (both Dos Amigans and [objc retain];) from this desk too, so all the other hardware you see is related to that. There are two Intel NUC devices (though one is mounted behind one of the monitors), one running FreeBSD (for GNUstep) and one Windows 10 (with WinUAE/Amiga Forever). The Ducky Shine 6 keyboard and Glorious Model O mouse are used to drive whichever box I’m streaming from, which connects to the other Samsung monitor via an AVerMedia HDMI capture device.

[Photo: the laptop setup]

The laptop setup is on a variable-height desk (Ikea SKARSTA), and this laptop is actually provided by my employer. It’s a 12″ MacBook Pro (Intel). The idea is that it should be possible to work here, and in fact at the moment I spend most of my work time at it; but it should also be very easy to grab the laptop and take it away. To that end, the stuff plugged into the USB hub is mostly charge cables, and the peripheral hardware is mostly wireless: Apple Magic Mouse and Keyboard, and a Corsair headset. A desk-mounted stand and a music-style stand hold the tablets I need for developing a cross-platform app at work.

And it happens that there’s an Amiga CD32 with its own mouse, keyboard, and joypad alongside: that mostly gets used for casual gaming.

The general principle

Believe it or not, the pattern I’m trying to conform to here is “one desktop, one laptop”. All those streaming and gaming things are appliances for specific tasks, they aren’t a part of my regular computering setup. I’ve been lucky to be able to keep to the “one desktop, one laptop” pattern since around 2004, usually using a combination of personal and work-supplied equipment, or purchased and handed-down. For example, the 2004-2006 setup was a “rescued from the trash” PowerMac 9600 and a handed-down G3 Wallstreet; both very old computers at that time, but readily affordable to a fresh graduate on an academic support staff salary.

The concept is that the desktop setup should be the one that is most immediate and comfortable, that if I need to spend a few hours computering I will be able to get on very well with. The laptop setup should make it possible to work, and I should be able to easily pick it up and take it with me when I need to do so.

For a long time, this meant something like “I can put my current Xcode project and a conference presentation on a USB stick, copy it to the laptop, then go to a conference to deliver my talk and hack on a project in the hotel room”. These days, ubiquitous wi-fi and cloud sync products remove some of the friction, and I can usually rely on my projects being available on the laptop at time of use (or being a small number of steps away).

I’ve never been a single-platform person. Sometimes “my desktop” is a Linux PC, sometimes a Mac, it’s even been a NeXTstation Turbo and a Sun Ultra workstation before. Sometimes “my laptop” is a Linux PC, sometimes a Mac, the most outré was that G3 which ran OpenDarwin for a time. The biggest ramification of that is that I’ve never got particularly deep into configuring my tools. It’s better for me to be able to find my way around a new vanilla system than it is to have a deep custom configuration that I understand really well but is difficult to port.

When Mac OS X had the csh shell as default, I used that. Then with 10.3 I switched to bash. Then with 10.15 I switched to zsh. My dotfiles repo has a git config, and a little .emacs that enables some org-mode plugins. But that’s it.

Posted in Mac, meta-waffle

On industry malaise

Robert Atkins linked to his post on industry malaise:

All over the place I see people who got their start programming with “view source” in the 2000s looking around at the state of web application development and thinking, “Hey wait a minute, this is a mess” […] On the native platform side, there’s no joy either.

This is a post from 2019, but shared in a “this is still valid” sense. To be honest, I think it is. I recognise those doldrums myself; Robert shared the post in reply to my own toot:

Honestly jealous of people who are still excited by new developments in software and have gone through wondering why, then through how to get that excitement back, now wondering if it’s possible that I ever will.

I’ve gone from thinking that it’s the industry that’s at fault to thinking that it’s me that’s at fault, and now I know others feel the same way I can expand that from “me” to “us”.

I recognise the pattern. The idea that “we” used to do good work with computers until “we” somehow lost “our” way with “our” focus on trivialities like functional reactive programming or declarative UI technology, or actively hostile activities like adtech, blockchain, and cloud computing.

Yes, those things are all hostile, but are they unique to the current times? Were the bygone days with their shrink-wrapped “breaking this seal means agreeing to the license printed on the paper inside the sealed box” EULAs and their “can’t contact flexlm, quitting now” really so much better than the today times? Did we not get bogged down in trivialities like object-relational mapping and rewriting the world in PerlPHPPython?

It is true that “the kids today” haven’t learned all the classic lessons of software engineering. We didn’t either, and there will soon be a century’s worth of skipped classes to catch up on. That stuff doesn’t need ramming into every software engineer’s brain, like they’re Alex from A Clockwork Orange. It needs contextualising.

A clear generational difference in today’s software engineering is what we think of Agile. Those of us who lived through—or near—the transition remember the autonomy we gained, and the liberation from heavyweight, management-centric processes that were all about producing collateral for executive sign-off and not at all about producing working software that our customers valued. People today think it’s about having heavyweight processes with daily status meetings that suck the life out of the team. Fine, things change, it’s time to move forward. But contextualise the Agile movement, so that people understand at least what moving backward would look like.

So some of this malaise will be purely generational. Some of us have aged/grown/tired out of being excited about every new technology, and see people being excited about every new technology as irrelevant or immature. Maybe it is irrelevant, but if so it probably was when we were doing it too: nothing about the tools we grew up with was any more timeless than today’s.

Some of it will also be generational, but for very different reasons. Some fraction of us who were junior engineers a decade or two ago will be leads, principals, heads of division or whatever now, and responsible for the big picture, and not willing to get caught up in the minutiae of whether this buggy VC-backed database that some junior heard about at code club will get sunset before that one. We’d rather use postgres, because we knew it back then and know it now. Well, if you’re in that boat, congratulations on the career progression, but it’s now your job to make those big picture decisions, make them compelling, and convince your whole org to side with you. It’s hard, but you’re paid way more than you used to get and that’s how this whole charade works.

Some of it is also frustration. I certainly sense this one. I can pretend I understood my 2006-vintage iBook. I didn’t understand the half of it, but I understood enough to claim some kind of system-level comfort. I had (and read: that was a long flight) the Internals book so I understood the kernel. The Unix stuff is a Unix system, I know this! And if you ignore classic, carbon, and a bunch of programming languages that came out of the box, I knew the frameworks and developer tools too. I understood how to do security on that computer well enough that Apple told you to consider reading my excellent book. But it turns out that they just wouldn’t fucking sit still for a decade, and I no longer understand all of that technology. I don’t understand my M1 Mac Mini. That’s frustrating, and makes me feel stupid.

So yes, there is widespread malaise, and yes, people are doing dumb, irrelevant, or evil things in the name of computering. But mostly it’s just us.

As the kids these days say, please like and subscribe.

Posted in advancement of the self