On the economics of software insecurity

This post is mainly motivated by having read Geekonomics: The Real Cost of Insecure Software, by David Rice. Since writing the book, Rice has apparently been hired by Apple, though his bio at the Geekonomics site doesn’t mention that (nor does his LinkedIn profile).

Geekonomics is a thoroughly interesting read. It’s evidently designed as a call to arms for users to demand better security, and as a result resorts to hyperbole in parts: you are a crash test dummy for software manufacturers, and you are paying extravagantly for the privilege. In this way it reads as if it is to security what The Inmates Are Running the Asylum was to user experience in 1999: do you realise just how shoddy all of this software you use is?

That said, once you actually dig into Rice’s arguments, the hyperbole disappears and the book becomes well-sourced, internally consistent and rational. He explains why market forces in the software industry don’t lead to security (or even high quality) becoming either the primary customer requirement or the key focus of producers.

Interestingly, while Geekonomics only incidentally touches on the role of security researchers in the software economy, their position is roughly consistent with the one I outlined in On Securing Lion: they are in it to get money (and sometimes fame) from selling either the vulnerabilities they discover or their skill at discovering them.

The book ends by describing the different options a curated free market like the US market has for correcting a situation where market forces lead to socially undesirable outcomes: redress via contract law, via tort law, or via strict liability legislation. The impact of each of these on the software industry is estimated.

One option that is not fully explored in the book, but which I believe could be worth exploring, is this: development of critical infrastructure software could be taken away from the free market.

Now the size of even the U.S. government IT budget probably isn’t sufficient to completely fund a bunch of infrastructure developers, but there are other options. Rice correctly notes the existence of not-for-profit software development organisations (particularly the Open Source Initiative and Free Software Foundation), and discusses the benefits and drawbacks of the open source model as it applies to commercial software. He does not explore the possibility that charitable development organisations could withdraw from market competition, and focus on engineering practice, quality and security rather than on feature parity or first-to-market speed.

Governments, trade groups, communications carriers and other organisations with an interest in using software as infrastructure (e.g. so-called “cloud” companies) could fund non-profits (maybe with money, maybe with staff) that develop infrastructure-grade software. Those non-profits would have a mission to do quality-centric development, and would put confidentiality, integrity, availability, reliability and correctness before feature richness or novelty. Their governance (the bit I haven’t fully thought through, admittedly) would be organised to promote and reward exactly that approach to development.

The software, its documentation and its engineering methodologies would be open, so that commercial software could take advantage of its advances at low cost. This matters partly because, where security is a “hygiene factor” for software purchasers, the “security gap” between infrastructure-grade and commercial-grade software would become clear, and infrastructure-grade robustness would be introduced into the marketplace. Commercial vendors who could cheaply pick up parts of the infrastructure-grade software for their own products would, out of self-interest, bring that software’s quality into the commercial marketplace and make it a point of competition.

“But,” some people say, “such software would be feature-poor. Why would anyone choose [SafeOS, SafeWebServer, SafeSmartPhone, whatever] over a feature-rich commercial offering?” The point is that in infrastructure, correct function is more important than features. It’s only because software exists purely in a competitive world that the focus is on features.

Case in point: one analogy used throughout Geekonomics is that infrastructure software is like cement (actually, in a book I’m currently writing on software testing, I make the same analogy, though relating to design rather than function). Well, even taking into account innovations like Portland cement, the feature list of cement hasn’t changed in thousands of years. It sticks aggregate together to make concrete or mortar. It’s only the quality of its stick-aggregate-together-ness that has changed.

In relation to software, most computers are still “stuck together” using RFC791 (Internet Protocol version 4), which was documented in 1981 but was already in use at that time. The main advantages of RFC2460 (Internet Protocol version 6), written in 1998, are increased address space and reduced overhead. It’s better at stick-computers-together-ness, but doesn’t really do anything new. There may have been new applications of networks recently (and of course, the late addition of confidentiality in the mid-1990s), but networking itself doesn’t frequently need new features.
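To put a number on that (a minimal sketch of my own, not an example from the book): the headline difference between the two protocols is the size of the addresses they define, which the standard networking headers will happily show you.

    /* Minimal sketch: the main "new feature" of RFC2460 over RFC791 is
     * address size, 32 bits versus 128 bits. */
    #include <stdio.h>
    #include <netinet/in.h>   /* struct in_addr (IPv4), struct in6_addr (IPv6) */

    int main(void)
    {
        printf("IPv4 address: %zu bytes, roughly 4.3e9  (2^32)  addresses\n",
               sizeof(struct in_addr));
        printf("IPv6 address: %zu bytes, roughly 3.4e38 (2^128) addresses\n",
               sizeof(struct in6_addr));
        return 0;
    }

Everything else in the newer protocol is refinement of the same stick-computers-together job.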

Or even operating system software. The last version of Mac OS X that added any features for its users was version 10.5; said new features were:

  • Time Machine: computers have been doing backup for years, this added a new UI.
  • Spaces: an improvement on the ability to draw windows on the screen.
  • Back to My Mac: an integration of existing capabilities (VNC and wide-area zeroconf networking).
  • Boot Camp: managing partitions, and giving the primary bootloader compatibility with a 1983 computer standard.

All of the other enhancements were in the applications, which all still require the same things of the OS: schedule processes, protect memory, abstract the file systems, manage devices. Again, there may have been new applications of an operating system, but there hasn’t been much newness in the operating system itself.
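As a minimal sketch of that point (mine, not the book’s), the calls below have been essentially unchanged since the early days of UNIX, and they are still more or less everything an application asks of its operating system:

    /* Minimal sketch: decades-old POSIX calls still cover what applications
     * need from an OS: scheduling processes, protecting memory, abstracting
     * the file system and managing devices. */
    #include <fcntl.h>      /* open */
    #include <unistd.h>     /* fork, read, write, close, _exit */
    #include <sys/wait.h>   /* wait */

    int main(void)
    {
        pid_t child = fork();                       /* the OS schedules a new process */
        if (child == 0) {
            int fd = open("/etc/hosts", O_RDONLY);  /* the file system is abstracted  */
            if (fd >= 0) {
                char buf[64];
                ssize_t n = read(fd, buf, sizeof buf);  /* device details are hidden  */
                if (n > 0) {
                    write(STDOUT_FILENO, buf, (size_t)n);
                }
                close(fd);
            }
            _exit(0);
        }
        wait(NULL);                                 /* parent and child ran in separate,
                                                       protected address spaces        */
        return 0;
    }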

The part where such non-profit infrastructure software becomes tricky is in integrating with the rest of the “stack”. On the hardware side, it would be inappropriate to require that a government-sponsored, not-for-profit software project run on proprietary hardware. On the other hand, it might be equally inappropriate to bar deployment on proprietary hardware; but is infrastructure-grade software on commercial-grade hardware still an infrastructure-grade deployment?

That’s particularly difficult in our world—the world of smartphones—because there isn’t really any open hardware. There are somewhat open definitions: the Android Compatibility Definition Document for example. But as Ken Thompson taught us: in a trusted system, we need to question who we trust and why.

Going the other way, of course, is much easier. Anyone could write an application that interoperates with infrastructure-grade software, or a system partially constructed out of such software. But the same question would still exist: how much of an impact do the non-infrastructure-grade components have on the reliability of the system?

