Non-standard components

Another day, another exercise from Software: A Technical History…

A software engineering project might include both standard and nonstandard engineering components. Give an example of a software engineering project where this would be appropriate.

Kim W. Tracy, Software: A Technical History (p. 43)

Buy vs. build (or, in the age of free software, acquire vs. build) is perhaps the most important question in any software engineering endeavor. I would go so far as to say that the solution to the software crisis wasn’t object-oriented programming, or agile software development, or any other change in the related methods and tools of software—those have largely been fad-driven. It was the confluence of these two seminal events:

  • The creation of the GNU project by Richard Stallman, which popularized the Four Freedoms, which led to the Debian Social Contract, which led to the Open Source Definition.
  • The dot-com crash, which popularized not having money to spend on software licenses or developers, which led to adopting free software components.

This confluence, the creation of de facto standards in a software commons, then drove adoption of the LAMP stack on the technology side, and of fast-feedback processes including the lightweight methodologies that became known as agile, lean startup, lean software, and so on.

Staffing costs aside, software development can be very inexpensive at the outset, provided that the developers control the scope of their initiative to avoid “boiling the ocean”. It can therefore be easy, and to some extent low-impact, to get the buy-vs-build calculus wrong and build things it’d be better to buy. But, as code is a liability, making the wrong choice is cheap today and expensive tomorrow.

One technique that helps to identify whether to use a standard component is a Wardley map, which answers the question “how closely related is this part of our solution to our value proposition?” If it’s something you need, but not something that’s core to your unique provision, there’s little need for a unique component. If it’s an important part of your differentiation, it probably ought to be different.

Another is Cynefin, which answers the question “what does this problem domain look like?” If it’s an obvious problem, or a complicated problem, the solution is deterministic and you can look to existing examples. If it’s complex or chaotic, you need to be more adaptive, so you don’t want to be as constrained by what other people saw.

Bringing this all together into an example: the Global.health project has a goal to provide timely access to epidemiological data to researchers, the press, and the public. “Providing timely access to…” is a well-solved problem, so the project uses standard components there: Linux, HTTPS, hosted databases, event-driven processing. “Epidemiological data” is a complex problem that became chaotic during COVID-19 (and does again with other outbreaks), so the project uses nonstandard components there: its own schemata, custom code, and APIs for researchers to write their own integrations.


Specific physical phenomena

Continuing the theme of exploring the exercises in Software: A Technical History:

Give an example of a specific physical phenomenon that software depends on in order to run. Can a different physical phenomenon be used? If so, give another example phenomenon. If not, explain why that’s the only physical phenomenon that can be used.

Kim W. Tracy, Software: A Technical History (p. 43)

My short, but accurate, answer is “none”. Referring back to the definition of software I quoted in Related methods and tools, nothing in that definition implies or requires any particular physical device, technology, or other phenomenon.

Exploring the history of computing, it’s clear that the inventors and theoreticians saw computers as automation (or perhaps more accurately flawless repetition) of thought:

We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions…

Alan M. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem (§1)

Or earlier:

Whenever engines of this kind exist in the capitals and universities of the world, it is obvious that all those enquirers who wish to put their theories to the test of number, will apply their efforts so to shape the analytical results at which they have arrived, that they shall be susceptible of calculation by machinery in the shortest possible time, and the whole course of their analysis will be directed towards this object. Those who neglect the indication will find few who will avail themselves of formulae whose computation requires the expense and the error attendant on human aid.

Charles Babbage, On the Mathematical Powers of the Calculating Engine

For any particular physical tool you see applied to computing—mercury delay line memory, “silicon” chips (nowadays the silicon wafer is mostly a substrate for other semiconductors and metals), relays, thermionic valves, brass cogs, hydraulic tubes—you can replace it with other tools or even with a person using no tools at all.

So it was then, when mechanical or digital computers automated the work of human computers. As it was in the last century, when the “I.T.” wave displaced human clerical assistants and rendered the typing pool redundant, and desktop publishing closed the type shop. Thus we see today that categorization systems based on “A.I.” are validated on their performance when compared with human categorizers.

Nothing about today’s computers is physically necessary for their function, although through a process of iterating discovery with development we’ve consolidated on a physical process (integrated semiconductor circuits) that has particular cost, power, performance, manufacturing, and staffing qualities. A more interesting question to ask would be: what are the human relations that software depends on in order to run? In other words, what was it about these computers, typists, typesetters, paraprofessionals, and so on that made their work the target of software? Can a different human relation be used?


Related methods and tools

The book Software: A Technical History has plenty of exercises and projects at the end of each chapter, to get readers thinking about software and its history and to motivate additional research. For example, here’s exercise 1 (of 27 exercises and 8 projects) from chapter 1 (Introduction to Software History):

Why does the definition of software in this text include “related methods and tools?” What does knowing about the methods and tools used to develop software tell us about the software?

Kim W. Tracy, Software: A Technical History (p. 43)

The definition is this: Software is the set of programs, concepts, tools, and methods used to produce a running system on computing devices. (p. 2)

Those tools and methods are sometimes “a running system on computing devices” themselves, in which case they trivially fall into the definition of software: an Integrated Development Environment is a software system that people use to produce software systems.

Sometimes, the tools aren’t themselves “a running system on computing devices”. Flowcharts, UML diagrams, whiteboards, card punches, graph-paper bitmaps, and other artifacts are not themselves applications of computing, and neither are methods and methodologies like Object-Oriented Programming, the Personal Software Process, or XP.

Both the computer-based and non-computer-based tools and methods influence the system that people create. The system is a realization on a computing machine of an abstract design that’s intended to address some set of desires or needs. The design itself is an important part of the software because it’s the thing that the running system is intended to realize.

The tools and methods are themselves part of the design because they influence and constrain how people think about the system they’re realizing, the attributes of the realized system, and how people collaborate to produce that system. As an example, software developers using Simula-67 create classes as types that encapsulate part of their design, and instantiate objects as example members of those types within their system. Software developers using Smalltalk do the same thing, but documentation about Simula-67 encourages thinking about hierarchical type systems within a structured programming paradigm, and documentation about Smalltalk encourages thinking about active objects within an object-oriented paradigm. So people in a community of Simula-67 programmers and people in a community of Smalltalk programmers would design different systems that work in different ways, even when using very similar tools.
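By way of a hedged sketch, here is that contrast rendered in Python rather than in either original language; the shape and furniture classes are invented for illustration, not drawn from any real Simula or Smalltalk program:

    # Hierarchical-type thinking, in the Simula-67 tradition: the design
    # lives in a tree of types, and behaviour is inherited down the tree.
    class Shape:
        def area(self):
            raise NotImplementedError

    class Square(Shape):
        def __init__(self, side):
            self.side = side

        def area(self):
            return self.side * self.side

    # Message-passing thinking, in the Smalltalk tradition: the design lives
    # in objects that respond to messages; any object answering "area"
    # participates, with no common ancestor required.
    class Tabletop:
        def __init__(self, width, depth):
            self.width, self.depth = width, depth

        def area(self):  # responds to the same message, outside the hierarchy
            return self.width * self.depth

    for thing in (Square(3), Tabletop(2, 4)):
        print(thing.area())  # 9, then 8: the message finds its receiver

The same facility (classes, instances, a method called area) supports two quite different designs, which is the point: the paradigm the documentation encourages, not the tool itself, shapes the system.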

When I read the definition of software and the exercise, I initially thought that it was wrong to think of the tools and methods as part of the software, even though it’s important to include the tools and methods (and the associated social and cultural context of their legitimacy, popularity, and importance) in a consideration of software history. I considered a tighter definition, that includes the running programs, the (intangible) artifacts that comprise those programs, and any source code used in creating those artifacts. Reflecting on the exercise and writing this response, I see that this is an arbitrary place to draw the boundary, and that a definition of software that includes the design and methods used in creating the artifacts is also workable.


YX problem

Software people are always all up in the XY problem: someone asks about how to do X when what they’re really trying to solve is Y. I find the YX problem much more frustrating: where software people decide that they want to answer question Y even though what someone asks is question X.

I’ve seen a few different manifestations of this pattern:

  • Respondent doesn’t know the answer to X, but does know the answer to Y, and hopes that answering Y demonstrates expertise/usefulness.
  • Respondent doesn’t know the answer to X, but riffs on what the answer probably would be, and ends up answering Y.
  • Respondent doesn’t believe that the querent should be trying X and thinks they should be trying Y instead; respondent didn’t ask querent the context for X but jumped straight to answering Y.
  • Respondent knows of a process Y that leads up to the querent trying X and decides to enumerate the steps of that process Y, even though they know that the querent is already trying X.
  • Respondent misunderstood question X to be question Y.

The common advice on questions for software people is How to ask questions the smart way. The problem with this advice is that it’s written from the perspective of an asymmetric relationship: the respondent is a busy expert, the querent is an idle dilettante; the querent has a responsibility to frame their question in the optimum way for the expert to impart wisdom to the idler.

Frequently the situation is more symmetric: we’re both busy experts, and we both have incomplete knowledge of both the question domain and what we’re trying to achieve. Have some patience with other people (whichever side of the interaction you’re on), and assume good faith on the part of all involved until they present contrary evidence. That means starting from the assumption that someone asked question X because they want an answer to question X.


Floating point numbers aren’t weird

When people say “floating point numbers are weird”, they typically mean that the IEEE 754 floating point representation for numbers doesn’t meet their needs, or maybe that it meets their needs but it is surprising in its behaviour because it doesn’t match their intuitive understanding of how numbers work.

IEEE 754 isn’t weird, it’s just designed for a specific scenario. One where having lots of different representations of NaN makes sense, because they can carry information about what calculation led to NaN, so you shouldn’t do equality comparisons on NaN. One where having positive and negative representations of 0 makes sense. One where…you get the idea.

(What is that specific scenario? It’s one where developers need to represent real numbers, and need reliable error-signaling, and don’t necessarily understand the limitations well enough to design a good system or handle corner cases themselves. Once you’ve got an idea that IEEE 754 does something weird, you’ve probably graduated and are ready to move on.)
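Those design choices are easy to see in action. A minimal sketch in Python, though any language exposing IEEE 754 doubles behaves the same way:

    import math

    nan = float("nan")
    print(nan == nan)       # False: NaN compares unequal, even to itself
    print(math.isnan(nan))  # True: test for NaN explicitly instead

    zero, negzero = 0.0, -0.0
    print(zero == negzero)  # True: the two zeroes compare equal...
    print(math.copysign(1.0, zero), math.copysign(1.0, negzero))
    # 1.0 -1.0: ...but they carry, and propagate, different signs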

But you were sold a general-purpose programming language running on a general-purpose computer, and the only representations of numbers they support are actually not general-purpose: fixed-width integers with optional two’s complement negatives, and IEEE 754 floating point. You don’t have to use those options, and you don’t have to feel like you must be holding it wrong because your supposedly general-purpose programming environment only lets you use specific types of numbers that don’t fit your purpose.

Check out decimal representations, arbitrary precision representations, posits, fractional representations, alternative rounding choices, and open up the possibility of general-purpose numbers in your general-purpose computer.
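As a taste of two of those alternatives, a sketch in Python, whose standard library happens to ship decimal and rational representations (posits and custom rounding modes need third-party libraries):

    from decimal import Decimal
    from fractions import Fraction

    print(0.1 + 0.2)                        # 0.30000000000000004: binary float
    print(Decimal("0.1") + Decimal("0.2"))  # 0.3: exact in decimal representation
    print(Fraction(1, 3) + Fraction(1, 6))  # 1/2: exact rational arithmetic
    print(2 ** 100)                         # Python integers are already arbitrary precision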


Still no silver bullet?

In his 1986 article No Silver Bullet—Essence and Accidents of Software Engineering, Fred Brooks suggests that there’ll never be a single tool, technique, or fad that realises an order-of-magnitude improvement in software engineering productivity. His reason is simple: if there were, it would be because current practices make software engineering ten times more onerous than they need to be, and there’s no evidence that this is the case. Instead, software engineering is complex because it provides complex solutions to complex problems, and that complexity can’t be removed without failing to solve the complex problem.

Unfortunately, the “hopes for the silver” that he described as not being silver bullets in the 1980s are still sold as silver bullets.

  • Ada and other high-level language advances. “Ada will not prove to be the silver bullet that slays the software productivity monster. It is, after all, just another high-level language, and the big payoff from such languages came from the first transition, up from the accidental complexities of the machine into the more abstract statement of step-by-step solutions.” Why, then, do we still have a Cambrian explosion of new programming languages, and evangelism strike forces pooh-poohing all software that wasn’t written in the new hotness? On the plus side, Brooks identifies that “switching to [Ada will be seen to have] occasioned training programmers in modern software design techniques”. Is that happening in strike force land?
  • Object-oriented programming. “Such advances can do no more than to remove all the accidental difficulties from the expression of the design. The complexity of the design itself is essential; and such attacks make no change whatever in that.” The same ought to go for the recent resurgence in functional programming as a silver bullet idea: unless our programs were 10x as complex as they need to be, applying new design constraints makes equally complex programs, specified in a different way.
  • Artificial intelligence. “The hard thing about building software is deciding what to say, not saying it. No facilitation of expression can give more than marginal gains.” This is still true.
  • Expert systems. “The most powerful contribution of expert systems will surely be to put at the service of the inexperienced programmer the experience and accumulated wisdom of the best programmers. This is no small contribution.” This didn’t happen, and expert systems are no longer pursued. Perhaps this silver bullet has been dissolved.
  • “Automatic” programming. “It is hard to see how such techniques generalize to the wider world of the ordinary software system, where cases with such neat properties [as ready characterisation by few parameters, many known methods of solution, and existing extensive analysis leading to rules-based techniques for selecting solutions] are the exception. It is hard even to imagine how this breakthrough in generalization could conceivably occur.”
  • Graphical programming. “Software is very difficult to visualize. Whether we diagram control flow, variable scope nesting, variable cross-references, data flow, hierarchical data structures, or whatever, we feel only one dimension of the intricately interlocked software elephant.” And yet visual “no-code solutions” proliferate.
  • Program verification. “The hardest part of the software task is arriving at a complete and consistent specification, and much of the essence of building a program is in fact the debugging of the specification.” Indeed program verification is applied more widely now, but few even among its adherents would call it a silver bullet.
  • Environments and tools. “By its very nature, the return from now on must be marginal.” And yet software developers flock to favoured IDEs like gnus to watering holes.
  • Workstations. “More powerful workstations we surely welcome. Magical enhancements from them we cannot expect.” This seems to have held; remember that at the time Rational was a developer workstation company, who then moved into methodologies.

Meanwhile, of his “promising attacks on the conceptual essence”, all have accelerated in adoption since his time.

  • Buy versus build. Thanks to free software, we now have don’t-buy versus build.
  • Requirements refinement and rapid prototyping. We went through Rapid Application Development, and now have lean startup and minimum viable products.
  • Incremental development—grow, not build, software. This has been huge. Even the most staid of enterprises pay at least some lip service to an Agile-style methodology, and can validate their ideas in a month where they used to wait multiple years.
  • Great designers. Again, thanks to free software, a lot more software is developed out in the open, so we can crib designs that work and avoid those that don’t. Whether or not we do is a different matter; I think Brooks’s conclusions on this point, which conclude the whole paper, are still valid today.

On whiteboard coding

Another day in which someone lamented to me the demeaning nature of the interview coding challenge. It is indeed embarrassing when someone with more than two decades of software engineering experience is asked to complete a gotcha-style programming task under the watchful eye of an unhelpful interviewer. It ought to be embarrassing for both of them if the modern IDE available to the candidate is a whiteboard, and a selection of coloured markers for syntax highlighting.

But here’s the problem: consistently, throughout those decades and longer, recruiting managers who hire programmers have been beset by candidates who can’t program. It’s such a desirable career path that plenty of people will try to enter it, even those who hope to pick up whatever it is they’re supposed to do once they get the job. And, indeed, that can be a good way to learn: what is Pete McBreen’s “Software Craftsmanship” other than an imperative for on-the-job learning and mentoring?

Many companies don’t have the capacity or ability to bring a keen learner up from scratch, or are hiring into roles where they expect more familiarity with the skill. Thus, the uncomfortable truth: to fix programmer interviews, you first need to fix programmer screening. Demonstrate that all candidates coming through the door are capable programmers at the required level, and hirers no longer need to test their programming skills.

Note: or do they? Maybe someone can program, but uses techniques that the rest of the team consider to be unnatural. Or they work best solo/paired/in a mob, and the team works best paired/in a mob/solo. Still, let’s roll with it: remove the need for a test in the interview by only interviewing candidates who would pass the test.

The problem is that every approach to screening for programming comes with its own downsides.

The economic approach, as currently practised: keep people away by making the career less desirable, by laying off hundreds of thousands of practitioners. The problem here is plunging many people into financial uncertainty, and reducing the psychological safety of anyone who does remain.

Moving the problem upstream: sending out pre-interview coding challenges. This suffers many of the same problems as live coding, except that the candidate doesn’t have to meet the dull gaze of a bored interviewer, and the interviewer doesn’t know it was actually the candidate who completed the challenge. I suppose they could require the candidate to sign their submission, then share their key fingerprint in the interview. An additional problem is that the candidate needs time outside of the interview to complete the challenge, which can be difficult. Not as difficult as finding the time to:

Maintain a public portfolio. This biases towards people with plenty of spare time, or who get to publish their day-job work, or at least don’t have an agreement with their day-job employer that they don’t work on outside projects.

Our last possibility is the more extreme: de-emphasize the importance of the programming skill, so that the reason employers don’t need to screen for it is that it’s less essential as a hiring criterion. This was tried before, with 1990s-style software engineering and particularly Computer-Aided Software Engineering (CASE). It didn’t get very far that time, but could do on a second go around.


On software engineering hermeneutics

When I use a word it means just what I choose it to mean — neither more nor less.

Humpty Dumpty in Through the Looking-Glass

In my recent round of TDD clarifications, one surprising experience is that folks out there don’t agree on the definition of TDD. I made it as clear as possible in my book. I thought it was clear. Nope. My bad.

Kent Beck in Canon TDD

I invented the term Object-Oriented, and I can tell you I did not have C++ in mind.

Alan Kay in The Computer Revolution Hasn’t Happened Yet

I could provide many other examples, where a term was introduced to the software engineering state of the art meaning one thing, and ended up meaning “programming as it’s currently done, but with this small change that’s a readily-observable property of what the person who introduced the term described”. Off the cuff: “continuous integration” to mean “running automated checks on VCS branches”; “Devops” to mean “hiring Devops people”; “refactoring” to mean “editing”; “software engineering” to mean “programming”.

I could also provide examples where the dilution of the idea was accompanied by a dilution of the phrasing. Again, just typing the first ideas that come into my head: Free Software -> Open Source -> Source Available; various 1990s lightweight methodologies -> Agile Software Development -> Agile.

Researchers of institutions and their structures give us tools that help understand what’s happening here. It isn’t that software engineers are particularly bad at understanding new ideas. It’s that software engineering organisations are set up to reproduce the ceremonies of software engineering, not to be efficient at producing software.

For an institution to thrive, it needs to be legitimate: that is, following the logic that the institution proposes needs to be a good choice out of the available choices. Being the rationally most effective or most efficient choice is one legitimising factor. So is being the thing that everybody else does; after all, it works for them, so why not for us? So is being the thing that we already do; after all, it got us this far, so why not further?

With these factors of legitimacy in mind, it’s easy to see how the above shifts in meaning can occur. Let’s take the TDD example. Canon TDD says to write a list of test scenarios; turn one item into a runnable test; change the code to make that test and all previous tests pass; optionally refactor to improve the design; then iterate from the second step.
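To make those steps concrete, here’s a minimal sketch in Python; the shopping-cart scenario and the cart_total function are invented for illustration, not taken from Beck:

    import unittest

    # Step 1: write a list of test scenarios (plain notes, not code):
    #   - an empty cart totals to zero   <- chosen for step 2 below
    #   - one item totals to its price
    #   - quantities multiply the price

    # Step 2: turn one item from the list into a runnable test.
    class CartTotalTests(unittest.TestCase):
        def test_empty_cart_totals_to_zero(self):
            self.assertEqual(cart_total([]), 0)

    # Step 3: change the code so this test, and all previous tests, pass.
    # The simplest change suffices; later scenarios force generalisation.
    def cart_total(items):
        return 0

    # Step 4: optionally refactor. Then iterate from step 2 with the
    # next scenario on the list.

    if __name__ == "__main__":
        unittest.main()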

First person comes along, and has heard that maybe TDD is more effective (rational legitimacy). They decide to try it, but their team has heard “working software over comprehensive documentation” so they don’t want to embarrass themselves by writing a list of test scenarios (cognitive legitimacy). So they skip that step. They create a runnable test; change the code to make that test pass; optionally refactor. That works well! They share this workflow under the name TDD (Red-Green-Refactor).

Second person comes along, and has heard that the cool kids (Kent Beck and first person) are doing TDD, so they should probably do it too (normative legitimacy). They decide to try it, but they notice that if they write the code they want, then write the tests they want, they end up in the same place (they have code, and they have tests, and the tests pass) that Canon TDD and TDD (Red-Green-Refactor) end up in. So where’s the harm? Now they’re doing TDD too! They show their colleagues how easy it is.

Now everybody is doing a slightly different TDD, but it’s all TDD. Their descriptions of what they do construct the reality in which they’re doing TDD, which is an example of what the researchers call performative discourse. TDD itself has become ceremonial; the first and subsequent people are doing whatever they want to do and declaring it TDD because the legitimate thing to do is called TDD.

This does give those people who want to change software engineering some pointers on how to do it. Firstly, overshoot, because everybody’s going to meet you a short way along the path. Secondly, don’t only talk up the benefits of your proposed change, but the similarities with what people already do, to reduce the size of the gap. Thirdly, make sure that the likely partial adoptions of the change are improvements over the status quo ante. Fourthly, don’t get too attached to the words you use and your choice of their meanings: they mean just what anybody chooses them to mean—no more and no less.


On rational myths

In my research field, one characteristic of institutions is their “rational myths”: ideas that people tell each other are true, and believe are true, but which are under-explored, unverified, and under-challenged. Belief in these myths leads to supposedly rational actions that don’t necessarily improve efficiency or performance, but are done because everyone else does them, and everyone collectively believes they’re what one does.

We know, from Derek Jones’s Evidence-based software engineering, that what we know about software engineering is not very much. So what are the rational myths where you work? Do you recognise them? Could you change them? What would it take to support or undermine your community’s rational myths, and would you want to take that risk?


We shall return one day

On this day 80 years ago, 16th November 1943, the villagers of Tyneham near Lulworth were evacuated to allow Allied military forces to prepare for D-Day. Despite promises that the evacuation was temporary, the UK lurched directly from the second world war into the Cold War and decided to keep the land to practise against the new “enemy” and former ally, the Soviet Union. Tyneham remains uninhabited, and remains within a live firing range. People may only visit when the Ministry of Defence are ready for them.

In a time when people are still being displaced by war across the world, we remember the villagers of Tyneham, and an occasion when the country displaced its own citizens. The ten tracks on this album contain music, song, and storytelling from around Dorset. With no voices left in Tyneham, all parts are performed by the same person, but throughout we hear the message from the locals: “We shall return one day”.

Listen here: https://soundcloud.com/user-343604096/sets/we-shall-return-one-day
