…and in the end there will be the command line.

You’re pretty happy with the car that the dealer is showing you. It looks comfortable and stylish, and it has all of the features you want. There’s a lot of space in the trunk for your luggage. The independent reviews that you’ve seen agree with the marketing literature: once this vehicle gets out onto the open road, it’s nippy and agile and a joy to drive.

You can’t help but think that she isn’t being completely open with you, though. To get into the roomy interior and luxurious driver’s seat, you have to climb over a huge black box, twice the height of the cabin itself and by far the longest part of the car. So as not to detract from the experience, the manufacturers have put in an automatic platform that lifts you from the ground to the door and returns you gently to earth. But the box is still there.

You ask the dealer about this box, and initially she deflects your questions by talking about the excellent mileage, which is demonstrated by the SpecRoad 2000 report. Then she tells you how great the view of the road is from the high situation of the driver’s seat. Eventually, you ask enough times, and she relents.

“That’s just the starter,” she explains, fiddling with a catch on the door in the rear of the box. “It’s just used to get the petrol motor going, but you don’t need to worry about it. Well, not much.”

Finally, she frees the catch and opens the box. To your astonishment, inside the box are four horses, sullenly eating grain from their nosebags and pawing at the ground. You can see that they are harnessed into a system that pulls the rear axle of the car as if it were an old-style carriage. The dealer continues.

“As I said, these cars just use the horses to initially pull the car along until the engine starts up and takes over. It’s how we’ve always built our cars, by layering the modern components over the traditional carriage system. Because the horse-and-carriage arrangement is so stable, having been perfected over decades, we can use it as a solid base for our high-tech automobiles. You really won’t notice that it’s there. We send out new grain and clear up any, um, exhaust automatically, so it’s completely invisible. OK, every so often one of the horses gets sick or needs re-shoeing and then you can’t use your car at all, but that’s pretty rare. Mostly.”

Again, your curiosity is getting the better of you. In the front of the horses’ cabin, leather reins run from the two leading horses to another boxed-off area. The dealer sees you looking at it, and tries to lead you back out to the showroom, but you persist. With a resigned sigh, she opens yet another hatch into this deeper chamber.

Inside, you are astonished to see a man holding the reins, ready to pull the horses along. “Something has to get the horses started,” the dealer explains, “and this is how we’ve always done it. Our walking technology is even more robust than our horse-drawn system. Don’t worry about any of that though, let me show you the independent temperature zones in the car’s climate control system.”

That’s how it works

In the dim and distant past, barely 672 days after time itself began, the Unix time-sharing system was introduced to the world. It’s a thing for big computers that lets multiple people use them at the same time, without getting in each other’s way. It might not have been the most capable system (that would’ve been Multics, the system that inspired Unix), but because AT&T weren’t allowed to sell it, Unix became popular anyway. By the time this happened, Unix had been rewritten in C, so it was the combination of C, Unix, and the tools built on top of them, like roff, that became popular.

Eventually, as small computers became more powerful, they became capable of running C and Unix too. And so they did. People designed processors that were optimised for Unix, other people designed computers that used these processors, and other people brought Unix to these computers. Each workstation itself may have only had a single user, but they were designed to be used together on a network. As the designers had decided that the network is the computer, and the network did have multiple users, it was still a multi-user system, and so the quotas and protections of a time-sharing system still made sense.

Onward and downward, Unix marched inexorably. As it did so, it dragged its own history with it. As the extremities of Europe became the backdrop to large stone columns with Latin-inscribed capitals, so ever-smaller computers found themselves the backdrop to the Unix kernel and shell. To get there, the biological and technological distinctiveness of each new environment had to be added to Unix’s own.

Compare the Unix workstation to the personal computer. A Unix workstation was designed to run Unix, so its ROM program could look for file systems, find one with the /vmunix program on it, and run that program. The PC was designed…well, it’s not clear what it was designed for, though it was likely to do the same things that CP/M could do on other small computers. If you don’t have an operating system, many PCs will just give you the infamous NO ROM BASIC message.

Regardless, the bootstrap program in a PC’s ROM certainly isn’t looking for a Unix, or an NT OS kernel, or anything else in particular. It just wants to run whatever comes next. So it looks for a program called the secondary bootloader, and runs that. Then the secondary bootloader itself looks around for the filesystem with /vmlinuz or whatever the Unix (or Unix-like) boot file is called, and runs that.
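
To make that very first hand-off concrete, here’s a toy sketch in C of the decision the BIOS-era ROM bootstrap makes: read the first 512-byte sector of a disk and only treat it as bootable if the last two bytes carry the 0x55 0xAA signature. The disk-image path is whatever you choose to point it at, and nothing else in the sketch is specific to any particular system.

/* mbrcheck.c — a toy illustration of the very first hand-off:
 * the ROM bootstrap reads the first 512-byte sector and only
 * jumps into it if the last two bytes are the 0x55 0xAA signature.
 * Build: cc -o mbrcheck mbrcheck.c
 * Usage: ./mbrcheck /path/to/disk.img   (the path is yours to supply)
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s disk-image\n", argv[0]);
        return EXIT_FAILURE;
    }

    FILE *disk = fopen(argv[1], "rb");
    if (!disk) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    unsigned char sector[512];
    if (fread(sector, 1, sizeof sector, disk) != sizeof sector) {
        fprintf(stderr, "could not read a full first sector\n");
        fclose(disk);
        return EXIT_FAILURE;
    }
    fclose(disk);

    /* Bytes 510 and 511 must be 0x55 0xAA for the ROM to boot it. */
    if (sector[510] == 0x55 && sector[511] == 0xAA) {
        puts("bootable: signature 0x55AA found, the ROM would jump here");
    } else {
        puts("not bootable: no signature, hello NO ROM BASIC");
    }
    return EXIT_SUCCESS;
}

Point it at a floppy or disk image and it tells you whether the ROM would have jumped into it or dropped you at that NO ROM BASIC prompt.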

Magnify and Enhance

The story doesn’t end at the kernel. Once it gets there, the kernel discovers the available hardware (even though this has been done once or twice already) and then gets on with one of its functions, which is to be a bootloader for a Unix program. Whether that program is init or some newer replacement, it has to start before the computer is properly running a Unix.

One of init’s tasks is to start up the Unix programs that you want running on the computer, so even at this point the launch procedure is still not complete. init might follow the instructions in a script called rc, or it could run all the scripts in a folder called init.d or SystemStarter, or it could launch svc.startd and let that decide what to start, or maybe something different happens. Once that procedure has run to completion, the computer is probably doing whatever it was that you bought it for, or at least waiting for you to tell it what that is.
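
Whichever of those schemes your init follows, the shape of the job is the same: find the things to start, start them, and wait for them. Here’s a rough sketch in C of the “run everything in a folder” style, with a made-up directory name standing in for init.d; a real rc run would also sort the entries, check permissions and handle dependencies, but the fork, exec and wait skeleton is the recognisable part.

/* toy_init.c — a sketch of the "run everything in a folder" style
 * of startup, in the spirit of init.d. The directory name is
 * hypothetical; point it at any folder of executable scripts.
 * Build: cc -o toy_init toy_init.c
 */
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define RC_DIR "/etc/toy-rc.d"   /* invented for this example */

int main(void)
{
    DIR *dir = opendir(RC_DIR);
    if (!dir) {
        perror("opendir " RC_DIR);
        return EXIT_FAILURE;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')     /* skip . and .. and dotfiles */
            continue;

        char path[PATH_MAX];
        snprintf(path, sizeof path, "%s/%s", RC_DIR, entry->d_name);

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: become the startup script. */
            execl(path, path, (char *)NULL);
            perror("execl");             /* only reached on failure */
            _exit(127);
        } else if (pid > 0) {
            /* Parent: wait, as a sequential rc run would. */
            int status;
            waitpid(pid, &status, 0);
            printf("%s exited with status %d\n", path,
                   WIFEXITED(status) ? WEXITSTATUS(status) : -1);
        } else {
            perror("fork");
        }
    }
    closedir(dir);
    return EXIT_SUCCESS;
}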

Megakernels

So many different computers go through that complex process – servers, desktops, laptops, mobile phones, tablets, network routers, watches, television receivers, 3D printers. If you have an idea for a novel application of computing hardware, the first step is to stand back and protect your ears from the whomp of four decades of history being dumped in a huge black box onto the computer; then you can get cracking.

You want to make a phone? A small device to be used for real-time communication by a single person? whomp comes the megalith.

You want to make a web server? A computer usually dedicated to running three functions (converting input into database requests, converting database responses into output, and tracking which input deserves which output)? whomp comes the megalith.

You want a network appliance? Something that nobody’s going to use at all, that sits in the corner turning 802.11 frames into 802.3 frames? whomp comes the megalith.

There’s not much point looking at Unix as an architecture or a system of interdependent components in these applications. whomp. It’s a big black box that can be used to get other boxes moving, like the horses used to start a car’s engine. In the 1980s and into the beginning of the 1990s, there were arguments about whether monolithic kernels were better than microkernels. Now, these arguments are redundant: the whole of Unix is itself a megakernel for OS X, Android, iOS, Firefox OS, your routers, network switches and databases.

But the big black box is black because of what’s found at the top of the megalith. It’s a tar pit, sucking in the lower layers of whatever’s perched above. Yesterday, a Unix system would’ve been programmed via the Bourne shell, a sort of dynamic compromise for the lack of message-passing in C. Today, once the dust has cleared from the whomp, you can see that the Bourne shell is accompanied in the softer layers of tar by Tcl, Perl, Python, Ruby, and other once high-flying languages that got too close to the pit.
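
To see what that compromise actually buys you: the shell’s real contribution is to wire whole programs together over pipes, which is about as close to message-passing as stock Unix gets. Here’s a sketch in C of roughly what the Bourne shell does when you type ls | wc -l; the two programs are just the familiar ones, and a real shell adds quoting, job control and plenty more on top.

/* pipe_demo.c — roughly what the Bourne shell does for "ls | wc -l":
 * create a pipe, fork twice, and splice the two programs together.
 * Build: cc -o pipe_demo pipe_demo.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                  /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) {
        perror("pipe");
        return EXIT_FAILURE;
    }

    pid_t writer = fork();
    if (writer == 0) {
        /* First child: ls writes into the pipe. */
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls");
        _exit(127);
    }

    pid_t reader = fork();
    if (reader == 0) {
        /* Second child: wc reads from the pipe. */
        dup2(fds[0], STDIN_FILENO);
        close(fds[0]);
        close(fds[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp wc");
        _exit(127);
    }

    /* Parent: close both ends and wait for the pipeline to finish. */
    close(fds[0]);
    close(fds[1]);
    waitpid(writer, NULL, 0);
    waitpid(reader, NULL, 0);
    return EXIT_SUCCESS;
}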

Why that’s good

The good news is that Unix isn’t particularly broken. Typically a computer based on Unix can keep working at least long enough that either the batteries run out or a software update means you have to turn it off anyway.

Because Unix is everywhere, everybody knows Unix. Or they know something that was once built on Unix and has been subsumed into Unix, the remains of which can just be seen and touched in the higher strata of the tar. Maybe they only really know how to generate JSON structures in Ruby, but that’s OK because your next-generation doorbell will have a Ruby interpreter deposited with the whomp of the megalith.

And if something isn’t particularly broken, then there’s not much point in throwing it away for something new. Novelty for its own sake was the death of Taligent, the death of Be, and the death of countless startups and projects that wanted to do something like X, but newer.

Why that’s bad

The bad news is that Unix is horrendously broken. You can have a supposedly safe runtime environment for your program, but the bottom of this environment is sticking into the tar pit that is C and Unix. Your program can still get into trouble because it’s running on Java and Java is written in C and C is where the trouble comes from.

The idea is that you stay at the top of the megalith, and it just starts your computer and stops you from worrying about the low-altitude parts of the machine. That’s only roughly true though, and lower-down pieces of the megalith sometimes turn out to have crumbled under weathering and the pressure of the weight above. If your computer has experienced a kernel panic in the last year, it’s probably because the graphics driver wasn’t very well written. That’s a prop that has to be inserted into the bottom of the megalith to keep it upright, but people make those props out of balsa wood and don’t check the size of the holes they need to fit the props into.

Treating Unix as the kernel of your modern system means ignoring the fact that Unix is itself a whole operating system, and that your UEFI boot process also loaded another operating system just to get that operating system to load your operating system. The outer system displays inner-system problems, being bound by whatever constraints your Unix flavour imposes. Because Unix is hidden, these become arbitrary-seeming constraints that your developers simply know as having always been there.

What should be done

A couple of decades ago, there were people who knew that PC operating systems like Mac OS and MS-DOS weren’t particularly good, and needed replacing. Some of them looked with envy at the smooth megalith that was Unix, and whomp, here it arrived on their desktop machines: MkLinux, Debian, NeXTStep, Solaris, 386BSD and others. Others thought that the best approach was to start again with systems designed to support the desktop paradigm, using modern design techniques and technology advances: they made BeOS, Windows NT and others.

Systems like this (including modern BeOS-inspired Haiku, and Amiga-inspired AROS) are typically described by their project politburos as “efficient”, “lean” and other words generally considered to be antonymical to “a GNU distribution”.

They also tend to have few users in comparison to Mac OS X, GNU and other systems. Partly this is just a marketing concern, which is irrelevant when such systems are free: if the one that works for you works for you, it shouldn’t matter how many other people it also works for. In practice, though, the install base is a serious consideration. The more people who use an operating system, the more people there are who want applications for that system, and therefore (hopefully) the more people there are who will want to write applications for it.

If Linus’s Law (that many eyes make bugs shallow; a statement of wishful thinking that should really be attributed to Eric S. Raymond) actually held true, then one might expect more popular systems to suffer fewer bugs. Perhaps more popular systems end up with higher expectations, and therefore gain new features faster, thus gaining bugs faster than people can fix them?

Presumably, as the only point of Unix these days is to be a stable stratum on which to layer other things, there are numerous companies and individuals who would benefit from it actually being stable. We can accept that all of this complexity is going nowhere except upward, and that the megalith will continue to grow inexorably as more components fall into the tar pit. That being the case, all of the companies and individuals involved could standardise on a single implementation of the megalith. They could all shore up the same foundations and fix the same cracks.

What I think I want to do

I often choose to rank potential solutions to technical problems on a two-dimensional graph, because if you can reduce any difficult question down to four quadrants then you can make a killing as a consultant. In this case, the axes are political acceptability and technical quality.

+-------------------+-------------------+ T
|                   |                   | e
|  Awkward  genius  |     Slam-dunk     | c
|                   |                   | h
+-------------------+-------------------+ n
|                   |                   | i
|   Feverish rant   | Saleable band-aid | c
|                   |                   | a
+-------------------+-------------------+ l
                Political

A completely new system might be a great idea technically, but is unlikely to get any traction. There may be all sorts of annoying problems that make current systems a bit disappointing, but no-one’s suffering badly enough to consider a kill-or-cure option. The conditions for a radically novel system to be snapped up by an incumbent looking to replace its existing technical debt don’t really exist, and haven’t for decades (Commodore bought Amiga to get its new system, but in the 1990s Apple just needed a system that was already a warmed-over workstation Unix).

In fact, despite the software sector’s image as a high-tech industry, it’s both socially and technologically very conservative. It’s rare for completely new ideas to take hold, and what’s taken for progress can often be seen more realistically as a partially-directed form of Brownian motion. As already discussed, this isn’t completely bad, because it stops new risks from being introduced. The counterpoint to that melody is that it stops old risks from being removed, too.

Getting a lot of developer traction around a single Unix system therefore has a higher likelihood; in fact, it’s already happened. It’s not necessarily the best approach technically, because it means that rather than replacing the huge megalith we just agreed was a (very large) millstone, we resign ourselves to patching up and stabilising the same megalith together. Given that one penguin-based megalith is already used in far more contexts than any other, this seems more likely to be acceptable to more people beset by the crumbly megalith problem.

There’s room in the world for both solutions, too. What I call a more acceptable solution is really just easier to accept now, and the conditions can change over time. Ignoring the crumbling megalith could eventually produce a crisis, and slicing the Gordian knot could then become an acceptable solution. Until that crisis hits, there will be the kernel, the command line, and the continuing echoes of that original, deafening whomp.


3 Responses to …and in the end there will be the command line.

  1. Informatimago says:

    Like a lot of people, you are forgetting something: today we have unix systems because we came back to them.

    Last century unix was all but eclipsed by OSes without any kind of CLI: MacOS and MSWindows. They failed, and unix came back stronger than ever in the form of MacOSX, Android and other Linux derivatives.

    If the sources of those non-CLI OSes and their applications had been more easily available (like in Smalltalk systems), perhaps unix wouldn’t have come back. But one important feature of unix and its CLI let it beat them: its modularity. If you had a unix system program with no sources, you could easily replace it with another similar one for which you had the sources, just by reverse engineering its input and output files. This would be much harder to do with a non-CLI system (e.g. an OO system) without the sources.

    Perhaps the question should be why Smalltalk systems (or the LispMachine systems) didn’t take over, since they seem to have the advantages of all the other systems.

  2. Graham says:

    The question about Smalltalk and Lisp Machines is indeed a good one. I don’t know much about the history of Lisp machines beyond what RMS talks about with respect to the MIT AI Lab people who went to Symbolics. It seems that what happened with Smalltalk was that the superficial parts were taken to become OOP and the main thing no longer had clear benefits to encourage more complete adoption. Smalltalk is to OOP as XP is to Agile, or Free Software is to Open Source.

  3. Pingback: Lazy Reading for 2019/05/05 – DragonFly BSD Digest
