The recent post on addressing “technical debt” did the rounds of the usual technology forums, where it raised a reasonable question: why do people base decisions about balancing engineering-led and customer-led tasks on opinion? Why don’t engineers take an evidence-based approach to such choices?
The answer is complex, but let’s start at the top: there’s too much money in software. There have been numerous crises in the global economy since software engineering has existed, but the only one with any material effect on the software sector was the dot-com crash. The lesson there was “have a business plan”: plenty of companies had raised billions in speculative funding on the basis that they were on the internet, but once the first couple started to fold, the investors pulled out en masse and the money was sucked from the room. This is the time that gave us Agile (constantly demonstrate that you’re delivering value to your customer), Lean Startup (demonstrate that you’re adding value with as little expenditure as possible), and Lean Software Development (eliminate all of the waste in your process).
Nobody ever demonstrated that Agile, Lean, or the rest were better by any objective metric; what they did was tell convincing stories. Would you like to find out that you’ve built the wrong thing two years from now, or two weeks from now? Would you prefer to read an interim draft functional specification, or use working software? Let’s be clear, though: nobody ever showed that what we were doing before was better in any objective way either. Software was written by defence contractors and electronics hardware companies, which grandfathered in the processes used to procure and develop hardware. You can count the industry pundits advocating a genuinely evidence-led approach to software cost control on two fingers (Barry Boehm and Watts Humphrey), and you can still raise valid questions about the validity of either of their systems.
Since then, software teams have become less fragile to economic shock. This was already happening in the 2007 credit crunch (the downturn at the beginning of the 2007–2008 global financial crisis). The CFO where I worked explained that bookings of their subscription-based software would go up during a recession. Why? Because people were not confident enough to buy outright or to enter relatively cheap, long-term arrangements like three-year contracts. They would instead take the more expensive but more flexible shorter-term contracts, so that they could cancel or move if their belts needed tightening. Since the crisis, the adoption of subscription-based pricing models has only increased in software, and has extended to adjacent fields like media and hardware.
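To make that trade-off concrete, here is a minimal sketch with entirely hypothetical prices (the post gives no figures): a discounted three-year commitment against a cancellable monthly plan at a premium rate. The monthly plan costs more if you stay the full term, but if there is a real chance you need to exit early, its total cost can be far lower, which is exactly the flexibility a nervous buyer pays for.

```python
# Hypothetical figures: a three-year commitment at a discounted monthly
# rate versus a cancellable month-to-month plan at a premium rate.
MONTHS = 36
LONG_TERM_RATE = 80   # per month, committed for all 36 months
MONTHLY_RATE = 100    # per month, cancel at any time

def monthly_plan_cost(cancel_month=None):
    """Total spend on the monthly plan if we cancel after `cancel_month`
    months (None means we stay for the full term)."""
    months_used = MONTHS if cancel_month is None else cancel_month
    return MONTHLY_RATE * months_used

long_term_cost = LONG_TERM_RATE * MONTHS          # 2880, owed regardless
full_term_monthly = monthly_plan_cost()           # 3600: dearer if you stay
early_exit_monthly = monthly_plan_cost(12)        # 1200: cheaper if you leave
```

Under these made-up numbers, staying the full term on the monthly plan costs 25% more than committing, but leaving after a year costs less than half as much as the commitment you would otherwise be locked into.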
All of this means that there is relative stability in software businesses, and there is still growing demand for software engineers. That has meant there isn’t the need for the systematic approaches to cost reduction hawked by every thinker of the “software crisis” era: note that there hasn’t been significant movement beyond Agile, Lean, or the rest in the subsequent two decades. They’re good enough, and there is no impetus to find out what’s better. In fact, both Agile, with its short increments, and Lean Startup, with its pivoting, are optimised for the same “get out quickly at any cost” flexibility that leads customers to choose short-term subscription pricing: when the customers for your VR pet grooming business dry up, you can quickly pivot to online fraud detection.
With no need to find or implement better approaches, there’s also no need to require software engineers to have a systematic approach or a detailed understanding of their industry’s body of knowledge. Thus software engineering—particularly programming—remains a craft-based discipline where anyone with an interest can start out at the bottom, learn on the job through mentoring and self-study, and rely on survivorship bias to get along. Did anyone demonstrate in 2002 that there’s objective benefit to a single-page application? Did anyone demonstrate in 2008 that there’s objective benefit to a native mobile app? Did anyone demonstrate in 2016 that there’s objective benefit to a Dapp? Has anyone crunched the numbers to resolve whether DevOps or Site Reliability Engineering is the one true way to do operations? No, but it doesn’t matter: there’s more than enough money to put into these things. And indeed most of the choices listed above are immaterial to where the money comes from or goes, but they are exactly the sorts of “technical debt” transitions that engineering teams struggle to pay for.
You might ask why I’m interested in taking a systematic approach to our work at all when I also think it isn’t necessary. Even if it isn’t necessary for survival, it is definitely professional, and it is justifiable. When clients do come to reduce their expenditure, or even when they don’t but are deciding whom to go with, the people who can demonstrate that they systematically maximise the output of their work will be the preferred choice.