The unreasonable ineffectiveness of considering things harmful

Dijkstra didn’t claim to consider the go to statement harmful, not in those words. The title of his letter to CACM was provided by the editor, Niklaus Wirth, who did such a great job that the entire industry knows that go to is “Considered Harmful”, and that you can quickly rack up the clicks by considering other things harmful.

A deeper reading of his short article (roughly 1,400 words) raises some interesting points that have not yet received as much airing. Here, in the interests of writing an even shorter letter, is just one.

My first remark is that, although the programmer’s activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, for it is this process that has to accomplish the desired effect; it is this process that in its dynamic behavior has to satisfy the desired specifications. Yet, once the program has been made, the “making” of the corresponding process is delegated to the machine.

There are many difficulties with this statement, including the presumed gender of the programmer. Let us also consider the idea of a “correct” program, which does not exist for the majority of programmers. Eight years after Dijkstra’s letter was published, Belady and Lehman published the first law of program evolution dynamics:

_Law of continuing change_. A system that is used undergoes continuing change until it is judged more cost effective to freeze and recreate it. Software does not face the physical decay problems that hardware faces. But the power and logical flexibility of computing systems, the extending technology of computer applications, the ever-evolving hardware, and the pressures for the exploitation of new business opportunities all make demands. Manufacturers, therefore, encourage the continuous adaptation of programs to keep in step with increasing skill, insight, ambition, and opportunity. In addition to such external pressures for change, there is the constant need to repair system faults, whether they are errors that stem from faulty implementation or defects that relate to weaknesses in design or behavior. Thus a programming system undergoes continuous maintenance and development, driven by mutually stimulating changes in system capability and environmental usage. In fact, the evolution pattern of a large program is similar to that of any other complex system in that it stems from the closed-loop cyclic adaptation of environment to system changes and vice versa.

This model of programming looks much more familiar to me, when I reflect on my experience, than Dijkstra’s. If Dijkstra’s programmer stopped programming once they had “constructed a correct program”, their system would fail as it didn’t adapt to “increasing skill, insight, ambition, and opportunity”.

The programmer who would thrive in this environment is more akin to the opportunistic rewriter Ward Cunningham described from his experience of the WyCash Portfolio Management System. That programmer rewrites every module they touch, to ensure that it represents the latest information they have. In the quote below we recognise the genesis of Ward’s “technical debt” concept, and also perhaps what we would now call “refactoring”:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

The traditional waterfall development cycle has endeavored to avoid programming catastrophe by working out a program in detail before programming begins. We watch with some interest as the community attempts to apply these techniques to objects. However, using our debt analogy, we recognize this amounts to preserving the concept of payment up-front and in-full. The modularity offered by objects and the practice of consolidation make the alternative, incremental growth, both feasible and desirable in the competitive financial software market.

Ward also doesn’t use go to statements; his programming environment doesn’t supply them. But it is not the ability of his team to avoid incorrect programs by using other control structures that he finds valuable; rather, it is the willingness of his programmers to jettison old code and evolve their system with its context.

Empowered free software

Free and open source software has traditionally been defined as the opposite of something else: proprietary (or commercially-licensed) software. That’s particularly obvious in the name of the GNU project, whose recursive acronym declares it “Not UNIX”: UNIX being a popular AT&T-owned commercial software property of the time. The GNU manifesto goes deeper on the specific ways in which it is Not Unix:

I consider that the Golden Rule requires that if I like a program I must share it with other people who like it. Software sellers want to divide the users and conquer them, making each user agree not to share with others. I refuse to break solidarity with other users in this way. I cannot in good conscience sign a nondisclosure agreement or a software license agreement. For years I worked within the Artificial Intelligence Lab to resist such tendencies and other inhospitalities, but eventually they had gone too far: I could not remain in an institution where such things are done for me against my will.

The focus of this document, and of the various free software licences, is on the distribution and redistribution rights associated with the software. Thus the focus of the Free Software Foundation, Open Source Initiative, and the other organisations that promote free or open source software is on ensuring appropriate licences are used, and that distributors comply with the licence terms.

Otherwise, free software development is not particularly distinguished from other forms of software development. There are, of course, some outliers, but mostly you’ll see a “core team”, perhaps with a formal structure (component owners, project/release managers, and other roles are common). This team organises its work around proprietary work-allocation tooling like Jira, GitHub, Slack, Trello, and so on. In many cases, an organisation such as a commercial company or business-interest foundation exists to centralise ownership and decision making, like a good old-fashioned Fordist company. The only visible difference in operations is that it’s possible to read the source code and get changes from outside through the gates and into their repository.

The reason so much work in open source software looks a whole lot like commercial software is that it is work in commercial software. Sometimes the open source project exists exactly to drive adoption of commercially-licensed equivalents, replacements, or upgrades. “Open core” products exist so that you find a need for the commercial features. Developer advocates build open source workflow tools as a sort of loss-leading pre-sales activity. And many companies publish open source software “off the money path” as a recruitment activity: do some free work on our pull requests, then come here and get paid to reject pull requests from your peers!

Note that I’m not saying all free software or open source software is like this, but that plenty is. Free and open source software development is informed by, and mimics, commercial software engineering to a large extent.

Of course it’s this way, you say, we software engineers need our expensive software engineering lifestyles supported by our high software engineering salaries, so we learn our craft in corporate service and that informs all of our development, including open source. Or that we are only able to contribute to open source when it’s in pursuit of our employers’ goals, because Computering Time is something we do in the office. So it’s no surprise that a lot of open source software development looks like, or is, corporate software development. If only there were proper, sustainable funding for open source software!

To which I say: that is an interesting, and problematic, request, and I wish to put it to one side. Maybe I’ll revisit it in a later post. What I’m more interested in is what an open source movement centred around the Four Freedoms, rather than software-licensing concerns, would look like. One in which you didn’t merely have some abstract legal freedom to do the four activities, but were empowered to do so.

Power Zero: The power to run the program for any purpose.

How much free or open source software cannot directly be used by anyone who isn’t a programmer? Think of all of the things that are only distributed as developer libraries, for developers to incorporate into (commercial or open source) applications. The things that get duplicated as a pod, an egg, a crate, and in whatever new packaging system and programming language comes along. This is the repository namespace as a virtual landgrab, enabling open source as a corporate recruitment tool: you should hire me, you’ve heard of me because I wrote the language-your-programmers-use version of library-your-programmers-use.

How much free or open source software is just plain difficult, due to accidental rather than essential complexity? Developers will probably have all sorts of (developer tools) examples here, like GNU autoconf. I find it very hard to make a nice-looking document using open source tools, too. I’ve put enough effort into learning emacs and LaTeX to get somewhat proficient, but still spend a lot of time looking up TeX syntax and getting it wrong. And making (hopefully nice-looking) documents is something I do fairly frequently.

And I’m playing this game on the easy setting. I’ve got years of experience trying to make software work. I am willing (and often paid) to put time into understanding why software doesn’t do what I want. I speak English, the home language of much software, and can use a screen and keyboard without difficulty.

A free software movement in which Freedom 0 is replaced by Empowerment 0 would make getting, trying, and adopting free software trivial. Much simpler than an app store, which has the necessary gatekeeper of payments.

Power One: The power to study how the program works, and change it to make it do what you wish.

Access to the source is a precondition for this. So is source in a readily comprehensible format, amenable to experimentation and adaptation. It needs to be possible for a finance person, not a computer person, to look at a finance application like GnuCash, understand its model of finance, and adapt it to their model of finance.

This means elevating automated testing from a thing that developers claim they do sometimes, to the principal mode of hypothesis-driven change in software. Remember when Dan North said that Cucumber would allow business analysts and developers to collaborate on writing the tests? That, but collaborating on writing the software. Current modes of software development, including free and open source software, are predicated on the division of society into three classes: “developers” who make software, “the business” who sponsor software making, and “users” who do whatever it is they do. An enabling free software movement would erase these distinctions, because it would give the ability (not merely the freedom) to study and change the software to anyone who wanted or needed it.

Software developed with this in mind would have – nay, require – a very clear architecture. If you want a finance person to critique the finance model used in a finance application, everything needs to scream finance. It needs to stop screaming model, view, controller, or memory management, or database transaction, and start shouting credit, debit, accounts receivable.
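To make that concrete, here’s a minimal sketch, in TypeScript, of a module whose public surface screams finance. Every name here is invented for illustration, not taken from GnuCash or any real project:

    // A hypothetical ledger module: the vocabulary a finance person can
    // critique (accounts, debits, credits) is the entire public surface.
    type AccountName = string;

    interface Entry {
      account: AccountName;
      amountPence: number; // positive for debits, negative for credits
    }

    class Ledger {
      private entries: Entry[] = [];

      // Double-entry rule: every posting debits one account and credits another.
      post(debit: AccountName, credit: AccountName, amountPence: number): void {
        this.entries.push({ account: debit, amountPence });
        this.entries.push({ account: credit, amountPence: -amountPence });
      }

      balance(account: AccountName): number {
        return this.entries
          .filter((entry) => entry.account === account)
          .reduce((sum, entry) => sum + entry.amountPence, 0);
      }
    }

    // A domain expert can check the bookkeeping claim without knowing
    // anything about storage, controllers, or memory:
    const ledger = new Ledger();
    ledger.post("accounts receivable", "sales", 15000);
    console.assert(ledger.balance("accounts receivable") === 15000);

The point is not this particular model of bookkeeping, which a real finance person would doubtless want to change; the point is that nothing in the interface stands between them and changing it.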

That doesn’t mean that there isn’t space for experts in memory management or database transactions, but it does mean that you can understand a music score pagesetting application to the point where you can improve its representation of Klezmer modes without needing to understand memory management or database transactions.

Power Two: The power to redistribute and make copies so you can help your neighbour.

An enabling free software movement would make it easy for me to package my computer, or a part of it, and send it to someone else: “here, I find this useful, you give it a go”. So I don’t have to grub around (pardon the pun) for whichever guix, apt, yum, pkcon, brew, or pkg command I had to run to make the software work, then tell that person to run the same command and hope it works. I just give them the thing, and they use it.

Power Three: The power to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

Currently there is a blessed edifice, called “upstream”, the fount of all that is good in software. Any corporate programmer who wants to juice their proprietary GitHub profile longs for their “requests” to be “pulled” by upstream, so that all of those other programmers who are in thrall to upstream finally get to see the fruits of this individual’s labour.

It feels weird to say this in 2020, when the idea was presented as a fait accompli in 1997, but an enabling open source software movement would operate more like a bazaar than a cathedral. There wouldn’t be an “upstream”; there would be different people who all had the version of the software that worked best for them. It would be easy to evaluate, compare, combine and modify versions, so that the version you end up with is the one that works best for you, too.

The Four Powers

The combination of these powers points to an open source software movement that erases power dynamics implicit and explicit in our current modes of producing software. It’s as easy for someone who understands the domain of a software system to acquire, understand, improve and share that system as it is for someone who understands the computer it runs on. The infinite malleability of software is deployed to allow people, teams and communities to produce and share their own versions that work best for them.

The antagonism between “the developers” and “the business” documented in the principles behind the agile manifesto is left to commercial software companies. Users, developers, and sponsors are on the same team, and produce software that works better for them than if they worked in other ways.

Pairing in GitHub

In the world of free software, it’s good to appropriately credit contributors to your community for the work they do.

git makes this hard when you pair program. I was at a hackathon recently, and while I didn’t make a single commit, I sat next to a lot of other people who made plenty of commits based on conversations we had; I suggested a lot of things to try when debugging problems, and invented solutions that made it into those commits. No highly-nutritious green squares in GitHub for me, no external evidence that I had contributed two days of my time to these free software projects.

When I pair, if I’m committing, I make sure that I acknowledge the contribution my pair makes as equal to my own. In the GitHub UI, both of us show up as having contributed to the commit.

How do I do this? I commit like this:

git commit -m 'We fixed this thing' --author 'Jennifer H. Pair <jenny.pair@example.com>'

Now both accounts are linked in the UI, because I’m the committer and my pair is the author. This isn’t perfect, because GitHub doesn’t acknowledge the author in their contribution graph, only the committer. If there’s a more egalitarian way to acknowledge my pair I’d want to follow that, but for the moment I’m happy to at least demonstrate that they authored the change I typed into a text editor.
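One candidate for that more egalitarian way: git supports trailers at the end of the commit message, and GitHub recognises the Co-authored-by trailer, crediting everyone listed as a co-author of the commit. Passing -m twice makes git write the trailer as its own paragraph, which is the blank-line-separated format GitHub expects:

git commit -m 'We fixed this thing' -m 'Co-authored-by: Jennifer H. Pair <jenny.pair@example.com>'

Co-authored commits show both avatars in the UI, and commits that land on the default branch should count toward the co-author’s contribution graph too.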

Everyone rejecting everyone else

It’s common in our cooler-than-Agile, post-Agile community to say that Agile teams who “didn’t get it” eschewed good existing practices in their rush to adopt new ways of thinking. We don’t need UML, we’re Agile! Working software over comprehensive documentation!

This short post is a reminder that it ran both ways, and that people used to the earlier ways of thinking also eschewed integrating their tools into the Agile methodology. We don’t need Agile, we’re Model-Driven! Here’s an excerpt from 2004’s UML 2 Toolkit:

Certain object-oriented methods on the market today, such as The Unified Process, might be considered processes. However, other lightweight methods, such as Extreme Programming (XP), are not robust enough to be considered processes in the way we use the term here. Although they provide a valuable set of interrelated techniques, they are often referred to as “small m methodologies” rather than as software-engineering processes.

This is in the context of saying that UML supports software-engineering processes in which a sequence of activities produces “documentation…about the system being built” and “a product that solves the initial problems is introduced and delivered”. So XP is not robust enough at doing those things for UML advocates to advocate UML in the context of XP.

So it’s not just that Agilistas abandoned existing practices, but there’s an extent to which existing practitioners abandoned Agilistas too.

The feature constraint

If you’re in a purely software business, your constraining resource is often (not always, not even necessarily in most cases, but often) the rate at which software gets changed. Well, specifically, the rate at which software gets changed in a direction your customers or potential customers are interested in. This means that the limiting factor on growth is likely to be the rate at which you can add features or fixes that attract new customers, or retain old customers.

It’s common to see businesses where this constraint is not understood throughout management, particularly manifesting in sales. In a business-to-business context, symptoms include:

  • sales teams close deals with promises of features that don’t exist, and can’t exist soon.
  • there’s no time to fix bugs or otherwise clean up because of the new feature backlog.
  • new features get added to the backlog based on the size of the requesting customer, not the cost/benefit of the customer.
  • the product roadmap is “what we said we’d have, to whom, by when”, not “what we will have”.

As Eliyahu Goldratt says, you have to subordinate the whole process to the constraint. That means incentivising people to sell something a lot like what you have now, over selling a bigger number of things you don’t have now and won’t have soon.

Immutable changes

The Fixed-Term Parliaments Act was supposed to bring about a culture change in the parliament and politics of the United Kingdom. Moving for the second reading of the bill that became this Act, Nick Clegg (then deputy prime minister, now member for Facebook Central) summarized that culture shift.

The Bill has a single, clear purpose: to introduce fixed-term Parliaments to the United Kingdom to remove the right of a Prime Minister to seek the Dissolution of Parliament for pure political gain. This simple constitutional innovation will none the less have a profound effect because for the first time in our history the timing of general elections will not be a plaything of Governments. There will be no more feverish speculation over the date of the next election, distracting politicians from getting on with running the country. Instead everyone will know how long a Parliament can be expected to last, bringing much greater stability to our political system. Crucially, if, for some reason, there is a need for Parliament to dissolve early, that will be up to the House of Commons to decide. Everyone knows the damage that is done when a Prime Minister dithers and hesitates over the election date, keeping the country guessing. We were subjected to that pantomime in 2007. All that happens is that the political parties end up in perpetual campaign mode, making it very difficult for Parliament to function effectively. The only way to stop that ever happening again is by the reforms contained in the Bill.

As we hammer out the detail of these reforms, I hope that we are all able to keep sight of the considerable consensus that already exists on the introduction of fixed-term Parliaments. They were in my party’s manifesto, they have been in Labour party manifestos since 1992, and although this was not an explicit Conservative election pledge, the Conservative manifesto did include a commitment to making the use of the royal prerogative subject to greater democratic control, ensuring that Parliament is properly involved in all big, national decisions—and there are few as big as the lifetime of Parliament and the frequency of general elections.

When a parliament is convened, the date of the next general election automatically gets scheduled for the first Thursday in May, five years out. The Commons can vote, with a qualified majority, to hold an election earlier, and an election is automatically triggered if the government loses a no-confidence vote, but the prime minister cannot unilaterally declare an election date to suit their popularity with the franchise.

Observed behaviour shows that the Act has been followed to the letter, up to the current dissolution, which required a specific change to the rules. Has the spirit of the Act, the motivation presented above, survived intact? The dates of elections since the Act passed were:

  • 7 May 2015, the first Thursday in May at the end of a five-ish-year Parliament, chosen to bring the existing behaviour into sync with the planned behaviour.
  • 8 June 2017, after a qualified majority vote within the terms of the Act.
  • 12 December 2019, after the Early Parliamentary General Election Act 2019, the specific rule change mentioned above.

The reason for the disparity is that the intended goal—a predictable release schedule that makes it easier for everyone involved to prepare—doesn’t match the cultural drivers. The desire to release when we’re ready, and have the features that we want to see, remains immutable, and means that even though we’ve adopted the new rules, we aren’t really playing by them.

I was tempted to hit “publish” at this point and leave the software engineering analogy unspoken. I powered on: here are a few examples I’ve seen where the rule changes have been imposed but the cultural support for the new rules hasn’t been nurtured.

  • Regular releases, but the release is “internal only” or completely unreleased until all of the planned features are ready;
  • Short sprints, where everything that has gone from development into QA is declared “done”;
  • Sprint commitments, where the team also describe “stretch goals” that are expected to be delivered;
  • Sustainable pace, where the “velocity” is expected to increase monotonically;
  • Self-organizing teams, where the manager feeds back on everybody’s status update at the daily stand-up;
  • Continuous integration, where the team can disable or skip tests that fail.

All of these can be achieved without the attached sabotage, but that requires more radical changes than adding a practice to the team’s menu. Radical, because you have to go to the root of why you’re doing what you do. Ask what you’re even trying to achieve by having a software team working on your software, then compare how well your existing practice and your proposed practice support that value. If the proposed practice is better, then adopt it, but there’s going to be a transition period where you continually explain why you’re adopting it, show the results, and (constructively, politely, and firmly) guide people toward acceptance of and commitment to the new practice. Otherwise you end up with a new fixed-term parliament starting whenever people feel like it.

On exploding boilers

Throughout our history, it has always been standardisation of components that has enabled creations of greater complexity.

This quote, from Simon Wardley’s finding a path, reminded me of the software industry’s relationship with interchangeable parts.

Brad Cox, in both Object-Oriented Programming: an Evolutionary Approach and Superdistribution, used physical manufacturing analogies (to integrated circuits, and to rifles) to invoke the concept of a “software industrial revolution” that would allow end users to assemble off-the-shelf parts to solve their problems. His “software ICs” built on ideas expressed at least as early as 1968 by Doug McIlroy. Joe Armstrong talked about a universal function registry, so that if someone writes sin/1 everybody else can use it.

Of course we have a lot of reusable components in software engineering now, and we can thank the Free Software movement at least as much as any paradigm of organising programming instructions. CTAN, CPAN, and later repositories act as the “component catalogues” that Cox discussed. If you want to make a computer do something, you can probably find an npm module or a Ruby gem that does most of the work for you. The vast majority of such components have free licences; it’s rare to pay for a reusable component.

The extent to which they’re “standard parts”, on the model of interchangeable nuts and bolts or integrated circuits, is debatable. Let’s say that you download a package from npm. We know that you use it by calling require (or maybe import)…but what does that give you? An object? A constructor? A regular function? Does it run anything as a result of calling require? Does it work in your node/ionic/electron/etc. context? Is it even a lump of regular JavaScript, or does it need to be transpiled, or to have access to a JVM, or to satisfy some other niche requirement?
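As a sketch of the problem (every package name below is made up for illustration, not a real npm module), all of these are plausible answers to “what does require give you?”, and nothing in the packaging tells you which you’ll get:

    // Each of these is a legitimate shape for an npm package to take;
    // the name alone doesn't tell you which one you're getting.
    const pad = require("string-padder");   // a plain function?
    const Picker = require("date-picker");  // a constructor, to call with `new`?
    const auth = require("auth-toolkit");   // an object full of methods?
    require("global-patcher");              // nothing: it patches globals on load?

A nut or bolt, by contrast, arrives with a thread standard that tells you exactly how it composes with the other parts.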

Whatever these “standard parts” are and however they’re used, you’re probably still doing a bunch of coding. These parts will do computery stuff, or maybe generic behaviour like authentication, date UIs, left-padding strings and the like. Usually we still have to develop our apps as “engineered” software projects with significant levels of custom coding, to make those “standard parts” actually solve a useful problem. There are still people working for retail companies maintaining online store applications across the four corners of the globe, despite the fact that globes don’t have corners, that these things all work the same way, and that the risks associated with getting them wrong are significant.

Perhaps this is because software is a distinct thing, and we can never treat it like industrial product manufacturing.

Perhaps this is because our ambition always runs out ahead of our capability. Whatever we can reproducibly build, we’d like to be building something greater.

Perhaps this is because we’re still in the cottage industry stage, where we don’t yet know whether or how to standardise the parts, and occasionally the boilers explode.

Change

I was just discussing software architecture and next steps with a team building a tool to help analyse MRI images of brains. Most of the questions we asked explored ways to proceed by focussing on change:

  • what if the budget for that commercial component shows up? How would that change the system?
  • what if you find this data source isn’t good enough? How would you find that out?
  • which of these capabilities does the customer find most important? When will they change their minds?

that sort of thing.

We have all sorts of words for planning for, and mitigating the risk of, changes in low-level software design. In fact a book on building maintainable software talks about nothing else, because maintainable software is antifragile software.

But it happened that I wasn’t reading that book at the time, I was reading about high-level design and software architecture. The guide I was reading talked a lot about capturing the requirements and constraints in your software architecture, and this is all important stuff. If someone’s paying for your thing, you need to ensure it can do the things they’re paying for it to do. After all, they’re probably paying to be able to do the things that your software lets them do; they aren’t paying to have some software. Software isn’t real.

However, most of the reason your development will slow down once you’ve got that first version out of the door is that the world (which might be real) changes in ways that it’s hard to adapt your software to. Most of the reason you’re not adding new features is that you’re fixing bugs, i.e. changing the behaviour of the software from one that matches the flawed conception you had of what it should do to one that matches the flawed conception you now have of what it should do.

A good architecture should identify, localise, and separate sources of change in the software system. And then it should probably do whatever you think the customers think they want.
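For a hedged sketch of what identifying and localising a source of change can look like in code (the MRI-flavoured names here are invented for illustration, not taken from the team’s actual system):

    // The volatile decision, which imaging data source to use, sits behind
    // one interface, so replacing it touches one module rather than the system.
    interface ScanSource {
      fetchScan(subjectId: string): Promise<Uint8Array>;
    }

    // Today's answer: a hypothetical research archive client.
    class ArchiveScanSource implements ScanSource {
      async fetchScan(subjectId: string): Promise<Uint8Array> {
        // ...talk to the archive; elided for brevity...
        return new Uint8Array();
      }
    }

    // The analysis pipeline names the domain concept, not the vendor.
    async function analyseBrainScan(source: ScanSource, subjectId: string): Promise<void> {
      const scan = await source.fetchScan(subjectId);
      // ...image analysis elided...
    }

“What if you find this data source isn’t good enough?” then has a bounded answer: write another ScanSource implementation and pass it in; analyseBrainScan doesn’t change.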

The value of the things on the left

With the rise of critical writing like Bertrand Meyer’s Agile! The Good, the Hype, and the Ugly, Daniel Mezick’s Agile-Industrial Complex, and my own Fragile Manifesto, it’s easy to conclude that this Agile thing is getting tired. We’re comfortable enough now with the values and principles of the manifesto that, even if software has exited the perennial crisis, we still have problems, and we’re willing to criticise our elders and betters rather than our own practices.

It’s perhaps hard to see from this distance, but the manifesto for Agile Software Development was revolutionary when it was published. Not, perhaps, among the people who had been “doing it and helping others to do it”.

Nor, indeed, would it have been seen as revolutionary to the people who were supposed to read it at the time. Of course we value working software over comprehensive documentation. Our three-stage signoff process for the functional specification before you even start writing any software is because we want working software. We need to control the software process so that non-working software doesn’t get made. Yes, of course working software is the primary measure of progress. The fact that we don’t know whether we have any working software until two thirds of the project duration has passed is just how good management works.

At one point, quite a few years after the manifesto was published and before everybody used the A-word to mean “the thing we do”, I worked at a company with a very Roycean waterfall process. The senior engineering management came from a hardware engineering background, where that approach to project management was popular and successful (and maybe helpful, but I’m not a hardware engineer). To those managers, Agile was an invitation for the inmates to take over the asylum.

Developers are notoriously fickle and hard to manage, and you want them to create their own self-organising team? Sounds like anarchy! We understand that you want to release a working increment every two to four weeks with a preference toward the shorter duration, but doesn’t that mean senior managers will spend their entire lives reviewing and signing off on functional specifications and test plans?

The managers who were open to new ideas were considering the Rational Unified Process, which by that time could be defined as Agile for the “nobody ever got fired for buying an IBM” crowd:

The Rational Unified Process. Image: Wikimedia

That software engineering department now has different management and is Agile. They have releases at least every month (they already released daily, though those releases were of minimal scope). They respond to change rather than follow a plan (they already did this, though through hefty “change control” procedures). They meet daily to discuss progress (they already did this).

But, importantly, they do the things they do because it helps them release software, not because it helps them hit project milestones. The revolution really did land there.