On the “advances” in web development since 1995

The first “web application” I worked on was written in a late version of WebObjects, version 4.5. An HTTP request was handled by an “adaptor” layer that chose a controller based on the parameters of the request; you could call this routing if you like. The controller accessed the data model and any relevant application logic, and bound the results to the user interface. That user interface could be an HTML and JavaScript document for presentation in a browser, or it could be a structured document destined for consumption by a remote computer program. The structured document format would probably have looked something like this:

{
  "first_name" = "Graham";
  "last_name" = "Lee";
}

The data could be bound to properties of objects called “components” that represented different elements in an HTML document, like images, anchors, paragraphs and so on.
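To make that shape concrete, here is a minimal sketch of the flow described above, written in TypeScript. Every name in it (HttpRequest, Adaptor, PersonController, PersonComponent and so on) is invented for illustration; none of this is the WebObjects API.

// A sketch of the request flow described above. All of the names are invented
// for illustration; this is not the WebObjects API.

interface HttpRequest {
  path: string;
  params: Record<string, string>;
}

interface Person {
  firstName: string;
  lastName: string;
}

// A "component" stands for an element of the output document and exposes
// properties for data to be bound to.
class PersonComponent {
  firstName = "";
  lastName = "";

  render(): string {
    return `<p>${this.firstName} ${this.lastName}</p>`;
  }
}

// The controller fetches data from the model and binds the results onto the component.
class PersonController {
  constructor(private readonly people: Map<string, Person>) {}

  handle(request: HttpRequest): string {
    const person = this.people.get(request.params["id"]);
    const component = new PersonComponent();
    if (person) {
      component.firstName = person.firstName; // the binding step
      component.lastName = person.lastName;
    }
    return component.render();
  }
}

// The adaptor chooses a controller based on the request: call it routing if you like.
class Adaptor {
  constructor(private readonly controllers: Record<string, PersonController>) {}

  dispatch(request: HttpRequest): string {
    const controller = this.controllers[request.path];
    if (!controller) {
      throw new Error(`no controller for ${request.path}`);
    }
    return controller.handle(request);
  }
}

const people = new Map([["1", { firstName: "Graham", lastName: "Lee" }]]);
const adaptor = new Adaptor({ "/person": new PersonController(people) });
console.log(adaptor.dispatch({ path: "/person", params: { id: "1" } }));
// <p>Graham Lee</p>

Routing, model access and binding are all there; only the names change from framework to framework.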

Different frameworks for interfacing with various data storage technologies, such as relational databases, were available.

And basically nothing (apart from the cost) has changed since then. A MEAN application works in the same way, but with the “advance” that it makes it easier to support single-page applications, so you can send a few megabytes of transpiled bundle over an unreliable mobile data network to run on a slow, battery-operated smartphone processor instead of on the server. And the data storage interfaces have been expanded to include things that don’t work as well as the relational databases did. Other “advances” include a complicated collection of transpilers and packagers that means every project has its own value of “JavaScript”.

It’s not like anything has particularly got much easier, either. For example, a particular issue I’ve had to deal with in a React application – binding a presentation component to properties of an element in a collection – is exactly as difficult now as it was when WebObjects introduced WORepetition in 1995. Before we get to the point where it becomes easier, we’ll give up on React, and Node, and move on to whatever comes after. Then give up on that.
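For the record, here is roughly what that collection binding looks like in a React function component today. This is only a sketch: the Person type, PersonRow and PersonList are invented for illustration.

// Binding a presentation component to the elements of a collection in React.
// The types and component names are invented for illustration.
import React from "react";

interface Person {
  id: string;
  firstName: string;
  lastName: string;
}

// The presentation component: receives one element and binds its properties into markup.
function PersonRow({ person }: { person: Person }) {
  return (
    <li>
      {person.firstName} {person.lastName}
    </li>
  );
}

// The repetition: iterate the collection, binding each element to a component instance.
function PersonList({ people }: { people: Person[] }) {
  return (
    <ul>
      {people.map((person) => (
        <PersonRow key={person.id} person={person} />
      ))}
    </ul>
  );
}

export default PersonList;

The map, the keys and the prop-passing are today’s spelling of the same iterate-and-bind job that WORepetition was doing.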

In software engineering, if something isn’t broke, fix it ’til it is.

In which I resolve

When I was a student I got deeply into GNU and Linux. This has been covered elsewhere on this blog, along with the story that as Apple made the best UNIX, and the lab had NeXT computers, I went down the path of Objective-C and OS X.

I now think that this was because, as an impressionable twenty-something-year-old, I thought that the most important part of the technology was, well, the technology. But I realise now that I wish I had taken the other path, and need to beat across toward it.

The other path I’m talking about is the one where I notice that it’s not programs like those in Debian that are so great, but the licences under which they are distributed. Much of the software in a GNU/Linux distribution is not so good. That makes it much like using software on every other platform. The difference is that I have the right (and hopefully, after a decade of practice, the ability) to do something about it in the case of free software.

I’m certain that GNU is not the best of all systems, though, because the focus has been on the right to make changes, not the capability of making changes. A quick review of the components I can remember in the installation on my MacBook shows that I’d need to understand at least nine different programming languages in order to be able to dive in and address bugs or missing features in the system, holistically speaking. If they’ve given me permission to change how my computer works, they haven’t made it easy.

But as someone who can change software and wants software to be less broken for me and for others, that permission is important. Next year I’ll be looking for more opportunities to work with free software and to make things less broken.

The Wealth of Applications

Adam Smith’s Inquiry into the Nature and Causes of the Wealth of Nations opens by discussing the division of labour: how people are able to get more done when they each pick a small part of the work to be done and focus on it, trading their results with others to gain access to the common wealth. He goes on to explain that while what people are really trading is labour, it’s hard to think about that, so the more comprehensible stand-in of money is used instead.

Smith’s example is a pin factory, where one person might draw out metal into wire, another cut it to length, a third sharpen one end, a fourth flatten the opposite end. He says that there is a

great increase in the quantity of work, which, in consequence of the division of labour, the same number of people are capable of performing

Great, so dividing labour must be a good thing, right? That’s why a totally post-Smith industry like producing software has such specialisations as:

Oh, this argument isn’t going the way I want. I was kind of hoping to show that software development, as much a product of Western economic systems as one could expect to find, was consistent with Western economic thinking on the division of labour. Instead, it looks like generalists are prized.

[Aside: there really are a few limited areas of specialisation, but then generalists co-opt their words to gain, by association, a share of whatever cachet the specialists enjoy. For example, did you know that writing UNIX applications makes you an embedded programmer? Another specialisation is kernel programming, but it’s easy to find sentiment that it’s just programming too.]

Maybe the conditions for division of labour just haven’t arisen in programming. Returning to Smith, we learn that such division “is owing to three different circumstances”:

  • the increase of dexterity in every particular [worker]

This seems to be true for programmers. There is an idea that programmers are not fungible resources, and that the particular skills and experiences of any individual programmer can be more or less relevant to the problems your team is trying to solve. If all programmers could truly be generalists, then they would basically be interchangeable (though some might be faster, or produce fewer defects, than others).

Indeed Harlan Mills proposed that software teams be built around the idea that different people are suited to different roles. The Chief Programmer Team unsurprisingly included a chief programmer, analogous to a surgeon in a medical team. As needed, other specialists, including a librarian, technical writers, testers, and “language lawyers” (those with expertise in a particular programming language), could be added to support the chief programmer and perform tasks related to their specialities.

  • the saving of time which is commonly lost in passing from one species of work to another

Programmers believe in this, too. We call it, after DeMarco and Lister, flow.

Which leaves only one place in which to look for a difficulty in applying the division of labour, so let’s try that one.

  • to the invention of a great number of machines which facilitate and abridge labour

So we could probably divide up the labour of computing if only we could invent machines that could do the computing for us.

Hold on. Isn’t inventing machines what we do? Yet they don’t seem to be “facilitating and abridging labour”, at least not for us. Why is that?

At the technical level, there are two things holding back the tools we create:

  • they’re not always very good. Building your software on top of a tool as a way to save labour is perceived as a lottery: the price is the labour involved in working with or around the tool, and the pay-off is the hope that this is less work than not using the tool.

  • they only really turn computers into other computers, and as such aren’t as great a benefit as one might expect. We’re trying to get to “the computer you have can solve the problem you have”, and we only really have tooling for “the computer you have can be treated the same way as any other computer”. We’re still trying to get to “any computer (including the one you have) can solve the problem you have”.

This last point suggests that the criterion of facilitating and abridging labour is only partially met. The goal of moving closer to the end state, in which our automated machines help with solving actual problems, has come up a few times (mostly around the 1980s) without lasting impact: in object-oriented programming, in artificial intelligence, and in computer-aided software engineering, specifically Upper CASE.

Why not close that gap, particularly if there is (or at least has been) both academic and research interest in doing so? We have to leave the technology behind now, and look at the economics of software production. Let’s say that computing technology is in the middle of a revolution analogous to the industrial revolution (partly because Brad Cox already went there so I don’t have to overthink the analogy).

The interesting question is: when will we leave the revolution? The industrial revolution was over when mechanisation was no longer the new, exciting, transformative thing. When the change was over, the revolution was over. It was over when people accepted that machines existed and factored them into the costs of doing business, just like staff and materials; when you could no longer count on making a machine being a profitable endeavour in itself. It had to do something. It had to pay for itself.

During the industrial revolution, you could get venture capital just for building a cool-looking machine. You could make money from a machine just by showing it off to people. The bear didn’t have to dance well: it was sufficient that it could dance at all.

We’re at this stage now, in the computer revolution. You can get venture capital just for building a cool-looking website. Finding customers who need that thing can come later, if at all. So we haven’t exited the revolution, and won’t while it’s still possible to get money just for building the thing with no customers.

How is that situation supported? In Smith’s world, the transparent relationship between supply and demand drove the efficiency requirements and thus the division of labour and the creation of machines. You know how many pins you need, and you know that your profits will be greater if you can buy cheaper pins. The pin factory owners know how many employees they have and what wages they need, and know that if they can increase the number of pins made per person-time, they can sell pins for less money and make more profit.

How do supply and demand work in software? Supply exists in bucketloads. If you want software written, you can always find some student who wants to improve their skills or even a professional programmer willing to augment their GitHub profile.

[This leads to another deviation from Adam Smith’s world, but one that can be left for later: he writes

In the advanced state of society, therefore, they are all very poor people who follow as a trade, what other people pursue as a pastime.

This is evidently not true in software, where many people will work for free and others are very well-paid.]

There’s plenty of supply, but the demand question is trickier to answer. In the age of supply, not demand, Aurel Kleinerman (via Bob Cringely) suggests that supply is driving demand: people (this seemingly limitless pool of software labour) are building things, then showing them around and seeing whether they can find anyone who wants them. I think this is true, but also over-simplified.

It’s not that there’s no demand; it’s that the demand is confused. People don’t know what could be demanded, they don’t know what we’ll give them or whether it’ll meet their demand, and even if it does, they don’t know whether it’ll leave them better off. This comic strip demonstrates the situation, but tries to support the unreasonable position that the customer is at fault.

Just as using a library is a gamble for developers, so is paying for software a gamble for customers. You are hoping that paying for someone to think about the software will cost you less over some amount of time than paying someone to think about the problem that the software is supposed to solve.

But how much thinking is enough? You can’t buy software by the bushel or hogshead. You can buy machines by the ton, but they’re not valued by weight; they’re valued by what they do for you. So, let’s think about that. Where is the value of software? How do I prove that thinking about this is cheaper, or more efficient, than thinking about that? What is efficient thinking, anyway?

Could it be that knowledge work just isn’t amenable to the division of labour? Clearly not: I am not a lawyer. A lawyer who is a lawyer may not be a property lawyer. A lawyer who is a property lawyer may only know UK property law. A lawyer who is a UK property lawyer might only know laws applicable to the sale of private dwellings. And so on: knowledge work certainly is divisible.

Could it be that there’s no drive for increased efficiency? Maybe so. In fact, that seems to be how the consumer economy works: invent things that people want so that they need money so that they need to work so that there are jobs in making the things that people want. If it got too easy to do middle-class work, then there’d be less for the middle class to do, which would mean less middle class spending, which would mean less demand for consumer goods, which would… Perhaps knowledge work needs to be inefficient to support its own economy.

If that’s true, will there ever be a drive toward division of labour in the field of software? Maybe, when the revolution is over and the interesting things to create for their own sake lie elsewhere. When computers are no longer novel technology, but are simply a substrate of industry. When the costs and benefits are sufficiently well-understood that business owners can know what they need, when it’s done, and how much it should cost.

That’ll probably be preceded by another software-led recession, as VCs realise that the startups they’re funding are no longer interesting for their own sake and move on to the (initially much smaller) nascent field of the next revolution. Along with that will come the changes in regulation, liability and insurance as businesses accept that software should “just work” and policy adjusts to support that. With stability comes the drive to reduce costs and increase quality and rate of output, and with that comes the division of labour predicted by Adam Smith.

PADDs, not the iPad

Alan Kay says that Xerox PARC bought its way into the future by paying lots of money for each computer. Today, you can (almost) buy your way into the future of mobile computers by paying small amounts of money for lots of computers. This is the story of the other things that need to happen to get into the future.

I own rather a few portable computers.

Macs, iPads, Androids and more.

Such a photo certainly puts me deep into a long tail of interwebbedness. However, let me say this: within a couple of decades this will look like a reasonable number of portable computers to have in one office room. Faster networks will enable more computers in a single location, and that will be an interesting source of future applications.

Here’s the distant future. The people in this picture are doing something that today’s people do not do so readily.

PADD

One of them is giving his PADD to the other. Is that because PADDs are super-expensive, so everybody has to share them? No: it’s because they’re so cheap, it’s easy to give one to someone who needs the information on it and to print/replicate/fax/magic a new one.

We can’t do that today. Partly it’s because the devices are pretty expensive. But their value isn’t just associated with their monetary worth: they’ve got so many personalisations, account settings and stored credentials on them that the idea of giving an unlocked device to someone else is unconscionable to many people. The trick is not merely to make them cheap, but to make them disposable.

Disposable pad computers also solve another problem: that of how to display multiple views simultaneously on the pad screen.

PADDs

You can go from rearranging metaphorical documents on a metaphorical desktop, back to actually arranging documents on a desktop. Rather than fighting with admittedly ingenious split-screen UI, you can just put multiple screens side by side.

The cheapest tablet computer I’m aware of that’s for sale near me is around £30, but that’s still too expensive. When they’re effectively free, and when it’s as easy to give them away as it is to use them, then we’ll really be living in the future.

Just as it started, this post ends with Xerox PARC, and the inspiration for this post:

Pads are intended to be “scrap computers” (analogous to scrap paper) that can be grabbed and used anywhere; they have no individualized identity or importance.

Goals upon goals upon goals

As I read Ed Finkler’s piece on losing excitement in technology, I found myself recognising pieces of my own story. The prospect of a new language or framework no longer seems like a new toy, an excuse to stay up all night studying it, using it and learning its secrets, as I would have done a few years ago. Instead I find myself asking what new problems are introduced, whether they’re worth accepting over the devils we know are in our existing tools, and how many developer-decades are about to be lost reimplementing, in a new language and a new packaging system, libraries that have been in CPAN for decades.

Because, as I alluded to when not really talking about Swift and as more eloquently described by Matt Gemmell, our problems do not, for the most part, come from our tools and will not be solved by adding more tools. Indeed a developer’s fixation on their tools will allow the surrounding business, market, legal and social problems to go unchecked. No change of programming language will turn customers who don’t want to pay $0.99 into customers who do want to pay $99. Implicitly unwrapped optionals might catch the occasional bug in development, but they just aren’t a 100x increase in value to the people on the sharp end of our creations.

What will be of benefit to them? I think we’re going to have to go through the software equivalent of the consolidation that the consumer goods industry has already seen in hardware. Remember the introduction to the iPhone?

An iPod, a phone, an internet mobile communicator, these are NOT three separate devices!

Actually it’s none of those things. Well, it is, in that it’s all of those and more. Fundamentally, it’s a honking great handheld control unit with hundreds of buttons on, just like Sony used to make for their TV remotes. One of those buttons will turn it into a phone, one will turn it into an iPod, one will turn it into a web browser, another lets you add other buttons that do other things. They don’t all do their things in the same way, and just to keep things interesting the buttons will arbitrarily change location and design and behaviour every so often. The Sony remotes didn’t do that.

The core experience of an iPhone, or Android phone, or Windows phone—the thing you see when you switch it on and wait long enough—is a launcher. It’s the Program Manager from Windows 3.0, packaged up in a shiny interactive box. Both Program Manager and today’s replacement offer the same promise: you’ve got a feeling that you left a button around here somewhere that probably starts you doing the thing you need to do.

So we still need to make good on the promise that you have just one device, by making that one device act like just one device (preferably one which works properly and does what people expect, which is what I spend a lot of time thinking about and working on). How we get there will be the interesting problem for at least another decade. How our tools support that journey will be a fun sideshow, certainly important but hardly the focus. The tools support the goals that support our goals that support the world’s goals.