On opinionation

I’ve realised that when I read that a tool or framework is “opinionated”, I interpret that as meaning that I’m going to have to spend time working out how to express my solution in its terms. I have enough trouble trying to work out how to express my solution in terms of my problem, so I’m probably going to avoid that tool or framework.

The Tankard Brigade

I have a guideline that seems to apply to many pursuits and hobbies: any activity can be fun until there’s too high a density of men with beards and tankards.

Of course, they aren’t all men (though many are) and don’t all have beards and tankards (though many do). But they can turn any enjoyable pastime into a maddeningly frustrating pursuit of ever-receding goals, like Zeno’s arrow approaching but never reaching its target.

Some background. For any activity there will be different levels of engagement; different extents to which one can take it seriously. For most people (apart from, in this simplified model, two people) there will be a collection of people who are less invested than they are, and a collection of people who take the pursuit more seriously.

Often, this doesn’t cause any disharmony. Some natural outgroup bias might make people believe that those who take it more seriously take themselves too seriously, and put too much effort into what should be an enjoyable way to spend one’s time. Similarly, those who take it less seriously can seem not really engaged with the activity, and a bit too frivolous. Of course the perceived distance between my level of engagement and the outgroup’s investment is decidedly non-linear: pro golfers and non-golfers do not cause as much difficulty for year-round and fair-weather amateurs as these two groups cause for each other.

This is all largely harmless snobbery and joshing until the men with beards and tankards come along. A man with a beard and a tankard (even if only figuratively a man with a figurative beard and tankard) is someone whose level of engagement with a pursuit must be the greatest of everyone in the room or online forum.

A single man with his solitary beard and tankard in a group can be harmless, endearing or slightly irritating. He will have bought the most expensive equipment available, and will gladly tell everyone who’ll listen (and many who would rather not). He’ll explain why he got it, why it’s better, and why you simply can’t appreciate the subtleties and nuances of what you’re participating in (and apparently enjoying very well, thank you very much) without the basic investment in a good…whatever that thing is he bought. Moreover, there’s only one way to engage in the activity, and that’s the way in which he does it. Anyone who has a different way, particularly one that involves spending less time or money, is looked on in smiling condescension as one who simply doesn’t – and perhaps can’t – appreciate the craft in all its majestic glory. It might involve some eye-rolling, tutting, and perhaps a strategic choice of seat at the society Christmas dinner, but you can usually cope with one man with his beard and tankard.

Two or more, on the other hand, start to get out of hand (this one or the other one), because each wants to be at the apotheosis of his craft, and neither can afford to be outdone by the other. The effect is of course an inescapable ratchet of dedication and investment, much like the ever-accelerating and ultimately ruinous cycle of gift-giving in potlatch societies. When one comes in with some new piece of kit, the other must have it or the one better than it. When one adopts some new and laborious way of interacting with the pursuit at hand, the other will immediately adopt or surpass it. Their interactions with each other are best described as ‘banter’, that particularly masculine (and indeed beard-ridden and betankarded) species of chatter that seems on the surface to be friendly ribaldry but that covers a seething and complex web of mistrust and hatred.

As I said earlier, I think that what really counts for enjoyment of a pursuit is the density of men with beards and tankards. Some activities seem able to hold at once both people with large individual investments, and a welcoming attitude toward newcomers and casual participants. Many runners and cyclists, regardless of how much they spent on their kit, will be happy with and friendly toward anyone who turns up to the same event to run or ride alongside them. The next person may be on their super-light carbon fibre frame with $5000 wheels and bespoke saddle uniquely contoured to fit their cheeks, when you turn up on the $250 bike you got in the January sales. But they’re a cyclist, and you’re a cyclist, so hey, let’s get some cycling going.

Some activities are clearly at the transition, where variations in the density of the tankard field can locally push it past the critical limit. Most motorcyclists are happy to acknowledge and welcome other bikers (though obviously not scooter riders), with a few notable exceptions. Harley riders tend to only notice other Harley riders. The extreme end of the amateur track circuit only pays attention to how much you’ve bored out your cylinders (and tells anyone who’ll listen about it) and the amount of time you spend riding a dynamometer. And that certain class of BMW rider who watched The Long Way Down can’t believe that you don’t have mud pans, GPS, and aluminium flight cases attached to your bike when you go to the corner shop for a pint of semi-skimmed. But still, most bikers accept most other bikers, and talk to them about most biking.

And then there are the activities that are forever lost to the men with beards and tankards, for which the ratchet has turned so far that even if the barrier to entry is theoretically low, the barrier to sociable entry – to engaging with the community as an equal – can be insurmountable.

Consider astronomy. It used to be that if you had some cheap army surplus binoculars, you could go along to your local astronomy society and discuss what you’d seen of the moon, the planets and some of the brighter objects in the Messier catalogue. Then, with the introduction of the charge-coupled device, the tankard brigade arrived in force. Now people will swap photos constructed from multiple hundreds of exposures through different narrow-band filters, taken with their large reflecting telescopes with computer-controlled star drives and the latest in CCDs (all probably permanently housed in purpose-built observatories in their gardens). No multi-thousand-dollar telescope (perhaps even no garden)? Nothing to discuss.

In some circles, folk music can have a more practice-driven ratchet system. The British folk revival of the 1970s brought with it literal men with actual beards and genuine tankards who defined what folk singing was (in apparent contradiction to the idea that it should be up to the folk to decide). Now there exist folk clubs where unless you have that certain nasal folkier-than-thou timbre in your voice and practiced wobbly delivery, and unless you can remember all of the words to all twelve verses without recourse to the book, you probably shouldn’t take part.

All of this leads me to my questions. As a programmer, which of the practices I participate in are pragmatic, which necessary, and which informed by the ratchet of the men with beards and tankards? How much of what we do is determined by what others do, and must be seen to be done before we can claim we’re doing it right?

And which things? Is it the runaway complexity of type systems that’s the ratchet, or the insistence on programming without any safety net at all in a dynamic language? Or both?

My naive guess is that tankardism manifests where unnecessarily highfalutin words are deployed, like ‘paradigm’ for ‘style’ or ‘methodology’ for ‘method’. And yes, that sentence was deliberately unnecessarily highfalutin.

UNIX-like pedantry

Some people like to refer to OS X as UNIX-like when it’s actually a UNIX. There was a time when it was UNIX-like and some people liked to refer to it as a UNIX, but it’s not now.

The other pink dollar

How did (a very broad and collective) we go from selling NeXT at $440M to selling Tumblr at $1.1B, in under two decades? Why was Sun Microsystems, one of the most technologically advanced companies in the valley, only worth two Nests?

I don’t think we’re technologists (much) any more. We’ve moved from building value by making interesting, usable and advanced technology to building value by solving problems for people and making interesting, useful and advanced experiences. The good thing about a NeXT workstation was that it was better than other workstations and minicomputers; the bad thing is that you can’t actually do a lot with a Unix workstation. You need applications, functions for turning silicon-rich paperweights into useful tools.

NeXT’s marketing message was that it’s easier to turn our paperweight into a tool than their paperweight, but today’s tech companies are mostly making things you can already use. The technology is a back-office function, enabling the things you can already use or working around problems the companies discovered in trying to enable those useful things. Making the paperweight with the most potential is no longer interesting to most of the industry, though there will be money in paperweights for a few years yet (even if the paperweights are becoming small enough that they can’t weigh down paper, and even if no-one has a stack of paper to weigh down any more).

Hence the current focus on “disruption”, in the Silicon Valley, not Clayton Christensen, sense. It’s easy to see how an already-solved problem can be solved faster, cheaper or better, by taking out intermediate steps or slow communications.

This is traditional science-fiction advancement. What technology do you need to get the plot moving quickly? Two people need to talk but they’re not on the same planet: you need a mobile phone. Two people need to be in the same room but they’re not on the same planet: you need a teleporter. Two people need to share the specifications of a starship but there isn’t enough paper in the universe: you need a PADD.

It’s harder to identify solutions to unsolved problems, or solutions to unknown problems. This hasn’t changed since the paperweight days; the transition has been from “well, I guess you can find some problem to solve with this workstation” to “we solved this obvious problem for you”. That’s an advance.

It really is good that we’re moving from building things that could potentially solve a problem to things that definitely do solve a problem. That’s more efficient, as fewer people are solving the same problem. Consider the difference between every company buying a Sun workstation and hiring a programmer to write a CRM application, and every company paying someone else to deliver them a CRM system.

It’s also likely the reason for the rise in open-source software, hardware, data centres and related infrastructure. Nobody’s making railroads any more, they’re making haulage companies that use railroads to solve the problem of “you’re in Chicago, Illinois but your crate full of machinery is in a port in Seattle, Washington”. There’s lots of cost in having the best rails, but questionable benefit, so why not share the blueprints for the rails so that anyone can improve them?

Well, what if it turns out that the best way to haul your goods is not on rails? If it’s easy to accept better rails, but the best solution lies in a different direction? Alan Kay would recognise this problem: everyone can see incremental improvements in the pink plane but there are magnitudes of improvements to be made by getting out into the blue plane.

How can a blue plane venture get funded, or adopted? When was the last time it happened? Is there something out there already that just needs adoption to get us onto the blue plane? Or have we set up a system that makes it easy to move quickly on the pink plane, but not to change direction to the blue plane?

…and in the end there will be the command line.

You’re pretty happy with the car that the dealer is showing you. It looks comfortable, stylish, and has all of the features you want. There’s a lot of space in the trunk for your luggage. The independent reviews that you’ve seen agree with the marketing literature: once this vehicle gets out onto the open road, it’s nippy and agile and a joy to drive.

You can’t help but think that she isn’t being completely open with you though. To get into the roomy interior and luxurious driver’s seat, you have to climb over a huge black box, twice the height of the cabin itself and by far the longest part of the car. Not to detract from the experience, the manufacturers have put in an automatic platform that lifts you from the ground to the door and returns you gently to earth. But the box is still there.

You ask the dealer about this box, and initially she deflects your questions by talking about the excellent mileage, which is demonstrated by the SpecRoad 2000 report. Then she tells you how great the view of the road is from the high situation of the driver’s seat. Eventually, you ask enough times, and she relents.

“That’s just the starter,” she explains, fiddling with a catch on the door in the rear of the box. “It’s just used to get the petrol motor going, but you don’t need to worry about it. Well, not much.”

Finally, she frees the catch and opens the box. To your astonishment, inside the box are four horses, sullenly eating grain from their nosebags and pawing their hooves on the ground. You can see that they are reined into a system that pulls the rear axle of the car as if it were an old-style carriage. The dealer continues.

“As I said, these cars just use the horses to initially pull the car along until the engine starts up and takes over. It’s how we’ve always built our cars, by layering the modern components over the traditional carriage system. Because the horse-and-carriage arrangement is so stable having been perfected over decades, we can use it as a solid base for our high-tech automobiles. You really won’t notice that it’s there. We send out new grain and clear up any, um, exhaust automatically, so it’s completely invisible. OK every so often one of the horses gets sick or needs re-shoeing and then you can’t use your car at all, but that’s pretty rare. Mostly.”

Again, your curiosity is getting the better of you. In the front of the horses’ cabin, leather reins run from the two leading horses to another boxed-off area. The dealer sees you looking at it, and tries to lead you back out to the showroom, but you persist. With a resigned sigh, she opens yet another hatch into this deeper chamber.

Inside, you are astonished to see a man holding the reins, ready to pull the horses along. “Something has to get the horses started,” the dealer explains, “and this is how we’ve always done it. Our walking technology is even more robust than our horse-drawn system. Don’t worry about any of that though, let me show you the independent temperature zones in the car’s climate control system.”

That’s how it works

In the dim and distant past, barely 672 days after time itself began, the Unix time-sharing system was introduced to the world. It’s a thing for big computers that lets multiple people use them at the same time, without getting in each other’s way. It might not have been the most capable system (that would’ve been Multics, the system that inspired Unix), but because AT&T weren’t allowed to sell it, Unix did become popular. By the time this happened, Unix had been rewritten in C, so the combination of C, Unix, and tools written atop them, like roff, was what became popular.

Eventually, as small computers became more powerful, they became capable of running C and Unix too. And so they did. People designed processors that were optimised for Unix, other people designed computers that used these processors, and other people brought Unix to these computers. Each workstation itself may have only had a single user, but they were designed to be used together on a network. As the designers had decided that the network is the computer, and the network did have multiple users, it was still a multi-user system, and so the quotas and protections of a time-sharing system still made sense.

Onward and downwards, Unix marched inexorably. As it did so, it dragged its own history with it. As the extremities of Europe became the backdrop to large stone columns with Latin-inscribed capitals, so ever-smaller computers found themselves the backdrop to the Unix kernel and shell. To get there, the biological and technological distinctiveness of each new environment had to be added to Unix’s own.

Compare the Unix workstation to the personal computer. A Unix workstation was designed to run Unix, so its ROM program could look for file systems, find one with the /vmunix program on it, and run that program. The PC was designed…well, it’s not clear what it was designed for, though it was likely to do the same things that CP/M could do on other small computers. If you don’t have an operating system, many of them will give you the infamous NO ROM BASIC message.

Regardless, the bootstrap program in a PC’s ROM certainly isn’t looking for a Unix, or an NT OS kernel, or anything else in particular. It just wants to run whatever comes next. So it looks for a program called the secondary bootloader, and runs that. Then the secondary bootloader itself looks around for the filesystem with /vmlinuz or whatever the Unix (or Unix-like) boot file is called, and runs that.

Magnify and Enhance

The story doesn’t end at the kernel. Once it’s there, the kernel discovers the available hardware (even though this has been done once or twice already) and then gets on with one of its functions, which is to be a bootloader for a Unix program. Whether that program is init or some newer replacement, it has to start before the computer is properly running a Unix.

One of init’s tasks is to start up the Unix programs that you want running on the computer, so even then the launch procedure is still not complete. init might follow the instructions in a script called rc, or it could use all the scripts in a folder called init.d or SystemStarter, or it could launch svc.startd and let that decide what to start, or maybe something different happens. Once that procedure has run to completion, the computer is probably doing whatever it was that you bought it for, or at least waiting for you to tell it what that is.
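To make that hand-off less abstract, here is a minimal sketch in C of the job just described: the first userland process runs a startup script, then spends the rest of its life collecting orphaned children. It’s an illustration only, not any real init’s source; /etc/rc is just the traditional name, used here as an assumption.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t rc = fork();                      /* start the rest of the boot in a child */
    if (rc == 0) {
        /* child: hand over to the traditional startup script */
        execl("/bin/sh", "sh", "/etc/rc", (char *)NULL);
        _exit(127);                         /* only reached if exec failed */
    }

    /* PID 1 inherits every orphaned process and must collect its exit status */
    for (;;) {
        if (wait(NULL) == -1)
            sleep(1);                       /* nothing left to reap just now */
    }
}

Every scheme mentioned above is, at heart, a much more sophisticated elaboration of that same fork-and-exec shape.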

Megakernels

So many different computers go through that complex process – servers, desktops, laptops, mobile phones, tablets, network routers, watches, television receivers, 3D printers. If you have an idea for a novel application of computing hardware, the first step is to stand back and protect your ears from the whomp of four decades of history being dumped in a huge black box on the computer, then you can get cracking.

You want to make a phone? A small device to be used for real-time communication by a single person? whomp comes the megalith.

You want to make a web server? A computer usually dedicated to running three functions (converting input into database requests, converting database responses into output, and tracking which input deserves which output)? whomp comes the megalith.

You want a network appliance? Something that nobody’s going to use at all, that sits in the corner turning 802.11 datagrams into 802.3 datagrams? whomp comes the megalith.

There’s not much point looking at Unix as an architecture or a system of interdependent components in these applications. whomp. It’s a big black box that can be used to get other boxes moving, like the horses used to start a car’s engine. In the 1980s and into the beginning of the 1990s, there were arguments about whether monolithic kernels were better than microkernels. Now, these arguments are redundant: the whole of Unix is itself a megakernel for OS X, Android, iOS, Firefox OS, your routers, network switches and databases.

But the big black box is black because of what’s found at the top of the megalith. It’s a tar pit, sucking in the lower layers of whatever’s perched above. Yesterday, a Unix system would’ve been programmed via the Bourne Shell, a sort of dynamic compromise for the lack of message-passing in C. Today, once the dust has cleared from the whomp, you can see that the Bourne shell is accompanied in the softer layers of tar by Tcl, Perl, Python, Ruby, and other once high-flying programs that got too close to the pit.

Why that’s good

The good news is that Unix isn’t particularly broken. Typically a computer based on Unix can remain working for at least long enough that either the batteries run out or a software update means you have to turn it off anyway.

Because Unix is everywhere, everybody knows Unix. Or they know something that was once built on Unix and has been subsumed into Unix, the remains of which can just be seen and touched in the higher strata of the tar. Maybe they only really know how to generate JSON structures in Ruby, but that’s OK because your next-generation doorbell will have a Ruby interpreter deposited with the whomp of the megalith.

And if something isn’t particularly broken, then there’s not much point in throwing it away for something new. Novelty for its own sake was the death of Taligent, the death of Be, and the death of countless startups and projects that wanted to do something like X, but newer.

Why that’s bad

The bad news is that Unix is horrendously broken. You can have a supposedly safe runtime environment for your program, but the bottom of this environment is sticking into the tar pit that is C and Unix. Your program can still get into trouble because it’s running on Java and Java is written in C and C is where the trouble comes from.

The idea is that you stay at the top of the megalith, and it just starts your computer and stops you from worrying about the low-altitude parts of the machine. That’s only roughly true though, and lower-down pieces of the megalith sometimes prove themselves to have crumbled under weathering and the pressure from the weight above. If your computer has experienced a kernel panic in the last year, it’s probably because the graphics driver wasn’t very well-written. That’s a prop that has to be inserted into the bottom of the megalith to keep it upright, but people make those props out of balsa wood and don’t check the size of the holes they need to fit the props into.

Treating Unix as the kernel of your modern system means ignoring the fact that Unix is itself a whole operating system, and that your UEFI boot process also loaded another operating system just to get that other operating system to load your operating system. The outer system displays inner-system problems, being constrained by the same constraints that your Unix flavour imposes. Because Unix is hidden, these become arbitrary-seeming constraints that your developers simply know as always having been there.

What should be done

A couple of decades ago, there were people who knew that PC operating systems like Mac OS and MS-DOS weren’t particularly good, and needed replacing. Some of them looked with envy at the smooth megalith that was Unix, and whomp here it arrived on their desktop machines: MkLinux, Debian, NeXTStep, Solaris, 386BSD and others. Others thought that the best approach was to start again with systems designed to support the desktop paradigm and using modern design techniques and technology advances: they made BeOS, Windows NT and others.

Systems like this (including modern BeOS-inspired Haiku, and Amiga-inspired AROS) are typically described by their project politburos as “efficient”, “lean” and other words generally considered to be antonymical to “a GNU distribution”.

They also tend to have few users in comparison to Mac OS X, GNU and other systems. Partly this is just a marketing concern, and one that’s irrelevant when such systems are free: if the one that works for you works for you, it shouldn’t matter how many other people it also works for. In practice, though, there is a serious consideration around the install base. The more people who use an operating system, the more people there are who want applications for that system, and therefore (hopefully) the more people who will want to write applications for that system.

If Linus’s Law (that many eyes make bugs shallow; a statement of wishful thinking that should actually be attributed to Eric S. Raymond) actually held true, then one might expect that more popular systems would suffer fewer bugs. Perhaps more popular systems end up with higher expectations, and therefore gain newer features faster, thus gaining bugs faster than people could fix them?

Presumably as the only point to Unix these days is to be a stable stratum on which to layer other things, there are numerous companies and individuals who would benefit from it being stable. We can accept that all of this complexity is going nowhere except upward, and that the megalith will continue to grow inexorably as more components fall into the tar pit. With that being the case, all of the companies and individuals involved could standardise on a single implementation of the megalith. They could all shore up the same foundations and fix the same cracks.

What I think I want to do

I often choose to rank potential solutions to technical problems in a two-dimensional graph, because if you can reduce any difficult question down to four quadrants then you can make a killing as a consultant. In this case, the axes are political acceptability and technical quality.

+-------------------+-------------------+ T
|                   |                   | e
|  Awkward  genius  |     Slam-dunk     | c
|                   |                   | h
+-------------------+-------------------+ n
|                   |                   | i
|   Feverish rant   | Saleable band-aid | c
|                   |                   | a
+-------------------+-------------------+ l
                Political

A completely new system might be a great idea technically, but is unlikely to get any traction. There may be all sorts of annoying problems that make current systems a bit disappointing, but no-one’s suffering badly enough to consider a kill or cure option. The conditions for a radically novel system becoming snapped up by an incumbent to replace their existing technical debt don’t really exist, and haven’t for decades (Commodore bought Amiga to get their new system, but in the 1990s Apple just needed a system that was already a warmed-over workstation Unix).

In fact, despite the view of the software sector as a high-tech industry, it’s both socially and technologically very conservative. It’s rare for completely new ideas to take hold, and what’s taken for progress can often be seen more realistically as a partially-directed form of Brownian motion. As already discussed, this isn’t completely bad, because it stops new risks being introduced. The counterpoint to that melody is that it stops old risks from being removed, too.

Getting a lot of developer traction around a single Unix system therefore has a higher likelihood; in fact, it’s already happened. It’s not necessarily the best approach technically, because it means that rather than replacing that huge megalith we just agreed was a (very large) millstone, we resign ourselves to patching up and stabilising the same megalith together. Given that one penguin-based megalith is already used in far more contexts than any other, this seems more likely to be acceptable to more people beset by the crumbly megalith problem.

There’s room in the world for both solutions, too. What I call a more acceptable solution is really just easier to accept now, and the conditions can change over time. Ignoring the crumbling megalith could eventually produce a crisis, and slicing the Gordian knot could then be an acceptable solution. Until that crisis hits, there will be the kernel, the command-line, and the continuing echoes of that original, deafening whomp.

But where to go?

I agree with John Gruber here: it’s not like Apple’s stuff has become worse than a competitor’s, it’s just that it’s not as good as I remember or expect. It could be, as Daniel Jalkut suggests, rose-tinted glasses[*].

I don’t think there is a “better” competitor, except in limited senses. Solaris/IllumOS and OpenBSD both have good-quality code but are not great to use out of the box: Solaris in particular I associate with abysmal package management and flaky support from supposedly cross-platform applications that are actually only ever built and tested on GNU. Debian has well-adhered free software guidelines and much better compatibility, but not all GNU code is as high-quality as some alternatives. The OpenBSD copyright policy and the FSF definition of freedom are incompatible, so OpenBSD doesn’t contain much GNU software and GNU doesn’t contain much OpenBSD software: you get a base system that’s either one or the other and have to do work if you want bits of both. Other GNU/Linux distributions can be easier to set up, and have better (i.e. any) support for non-free software and wider collections of device drivers.

So there are plenty of alternatives, many of which are good in some ways and bad in others, and all I know is that I don’t want things to be like this, without being able to say I want one of those instead. I don’t even think there will be one of those, at least not in the sense of a competitor to Apple on laptop operating systems. Why not? Because I agree with the following statement from wesolows:

from the perspective of someone who appreciates downstack problems and the value created by solving them, is that the de facto standard technology stack is ossifying upward. As one looks at each progressively lower layer, the industry’s collective willingness to contemplate, much less sponsor, work at that layer diminishes.

There’s still research in operating systems, sure, but is there development? Where are the NeXTs and Bes of today? I don’t believe you could get to a million users and have a Silicon Valley “exit” with low-level technology improvements, and so I don’t think the startup world is working in that area. So we probably won’t get anything good from there. I don’t see competition in operating systems being fruitful. If it were, Sun wouldn’t have been sold.

In fact I don’t even think that Apple’s systems are bad; they’ve just lost the “it just works” sheen. It’s just that when you combine that with the lack of a credible alternative, you realise the problem is probably in expecting some corporation to put loads of resources into something that’s not going to have great value, and merely needs to be “good enough” to avoid having any strategic penalty.

To me, that means treating the low-level parts of the technology stack as a public good. If we accept that the stack is ossifying upwards, and that EM64T, Unix, C, IP, HTTP, SQL and other basic components are going to be around essentially forever[**] then we need to treat them and their implementations as public goods and take common ownership of them. They might not be the best possible, but they are the best available. We (we the people who make systems on top of them, in addition to we the people who use systems made on top of them) need them to work collectively, so we should maintain them collectively.

[*]I particularly like his use of the phrase “Apple-like” in this context, because that term is often used to mean “my platonic ideal of Apple’s behaviour” rather than “what Apple actually does”, and it reminds me to be wary of my own recollections. I remember Lightning connectors being welcomed in a tweet that derided the old iPod 30-pin connector as “un-Apple-like”, despite the evidence that Apple invented the 30-pin connector, introduced it, and then supported it for over a decade.

[**] speaking of Sun, I use the definition of computer-forever I learned from a Sun engineer: five years or longer.

Learn Mansplaining The Hard Way

zed shaw:

It didn’t matter that most of these detractors admitted to me that they don’t code C anymore, that they don’t teach it, and that they just memorized the standard so they could “help” people.

[…]

I cannot help old programmers. They are all doomed. Destined to have all the knowledge they accumulated through standards memorization evaporate at the next turn of the worm. They have no interest in questioning the way things are and potentially improving things, or helping teach their craft to others unless that education involves a metric ton of ass kissing to make them feel good. Old programmers are just screwed.

On switching to Linux

In November, I switched to GNU/Linux at home (I still use OS X at work, because I still write Objective-C in Xcode at work). Or rather, I switched back: I’d been using it around a decade ago.

The Ubuntu installer on my MacBook Air

In December, I resolved to spend more time working with Free Software during 2015 and beyond. Now I find that my story is not unique, others have left OS X (now a dead link, sadly) or are concerned at the state of Apple. I have had conversations with friends about moving from OS X to Debian, to OpenBSD, to other systems.

In my case, there was no watershed moment, no Damascene conversion from the Tablet According to Jobs to the Stallman Doctrine. Rather, my experience has been one of a thousand tiny cuts, and the feeling that while GNU is not actually a better system, it’s one that I’m allowed to make changes to. When iWork was ‘upgraded’ to a less-capable version with an incompatible file format, cut. Every time I plug my Apple display into my Apple computer, then have to unplug it and connect it again before it works properly, cut. Even when the display works, I have to unplug the (Apple) keyboard and plug it back in, cut. Every time iTunes connects to the network but doesn’t let me play any of my tunes, cut. When Apple trumpets the superiority of their new map that shows I live in “Royal Spa”, cut. When iCloud showed its message that I have to upgrade to the new way and devices still on the old way can’t see my files any more, cut. Like the above-linked authors, I’ve got to a point where the only Apple-supplied software in my Dock is the Finder. In my case, it’s happened one app at a time.

The thing is, Apple’s been building some great new things. There’s the watch, Swift, improvements to Cocoa, and they still have the best hardware around. If it weren’t for both the real regressions and the fear of potential regressions on every upgrade or app update, I’d still be confident that their Unix desktop were the best. As it is, I think it could possibly be the least worst, but I’m no longer certain.

Most of the time I don’t care about my computer’s operating environment. This isn’t so much the year of Desktop Linux for me, as the year of Desktop Ambivalence. The OS is the thing that spends most of the time and RAM involved in showing me an emacs window, or a terminal emulator, or a Smalltalk machine, or a web browser. I shouldn’t need to care how it works. But if it’s going to get in my way, give me a chance to do something about it.

The standard trope response is “LOL you must have a bad idea of UX if you use Linux”, which is the sort of thing that was true at the turn of the millennium when most of us were last tyre-kicking some GNU distro but is now an outdated view of the world. But again, should there be a problem, I hope to have the ability to fix it both for myself and others.

So far this year I’ve started small. I’ve been reminding myself of the differences between OS X and BSD commands that I’m used to, and the Linux and SysV commands that I’ve not used frequently in nearly a decade. I’ve been learning about Vala and GTK+.

Layers of Distraction

A discussion I was involved in over on Facebook reminded me of some other issues I’d already drafted for this blog, so I stuck the two together and here we are.

Software systems can often be seen as aggregations of strata, with higher layers making use of the services in the lower layers. You’ll often see a layered architecture diagram looking like a flat and well-organised collection of boiled sweets.

As usual, it’s the interstices rather than the objects themselves that are of interest. Where two layers come together, there’s usually one of a very small number of different transformations taking place. The first is that components above the boundary can express instructions that any computer could run, and they are transformed into instructions suitable for this computer. That’s what the C compiler does; it’s what the x86 processor does (it takes IA-32 instructions, which any computer could run, and turns them into the microcode that it can run); it’s what device drivers do.

The second is that it turns one set of instructions any computer could run into another set that any computer could run. If you promise not to look too closely, the Smalltalk virtual machine does this, by turning instructions in the Smalltalk bytecode into instructions in the host machine language (a toy sketch of such a machine appears below).

The third is that it turns a set of computer instructions in a specific domain into the general-purpose instructions that can run on the computer (sometimes this computer, sometimes any computer). A function library turns requests to do particular things into the machine instructions that will do them. A GUI toolkit takes requests to draw buttons and widgets and turns them into requests to draw lines and rectangles. The UNIX shell turns an ordered sequence of suggestions to run programs into the collection of C library calls and machine instructions implied by the sequence.

The fourth is turning a model of a problem I might want solving into a collection of instructions in various computer domains. Domain-specific languages sit here, but usually this transition is handled by expensive humans.
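To make the second kind a little more concrete, here is a toy stack-machine interpreter in C. The bytecodes are invented for this sketch and have nothing to do with Smalltalk’s actual instruction set; the point is only that one set of instructions any computer could run is read and turned, one at a time, into instructions this computer does run.

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

int main(void)
{
    /* bytecode for: print (2 + 3) * 4 */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    int stack[16];
    int sp = 0, pc = 0;

    for (;;) {
        switch (program[pc++]) {
        case OP_PUSH:  stack[sp++] = program[pc++];     break;  /* load a literal */
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break; /* pop two, push sum */
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break; /* pop two, push product */
        case OP_PRINT: printf("%d\n", stack[--sp]);      break;
        case OP_HALT:  return 0;
        }
    }
}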

Many transitions can be found in the second and third kinds, so that we can turn this computer into any computer, and then build libraries on any computer, then build a virtual machine atop those libraries, then build libraries for the virtual machine, then build again in that virtual machine, then finally put the DOM and JavaScript on top of that creaking mess. Whether we can solve anybody’s problems from the top of the house of cards is a problem to be dealt with later.
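And as a sketch of the third kind, here is roughly the set of C library calls a Bourne-style shell might make when asked to run a single command such as ls -l. This is an illustration rather than any real shell’s source; a pipeline or a redirection adds calls such as pipe() and dup2() to the same basic shape.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();                  /* duplicate the shell process */
    if (child == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (child == 0) {
        /* child: replace ourselves with the requested program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                  /* only reached if exec failed */
        _exit(127);
    }
    int status;
    waitpid(child, &status, 0);            /* parent: wait for the command to finish */
    return WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE;
}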

You’d hope that from the outside of one boundary, you don’t need to know anything about the inside: you can use the networking library without needing to know what device is doing the networking, you can draw a button without needing to know how the lines get onto the screen, you can use your stock-trading language without needing to know what Java byte codes are generated. In other words, both abstractions and refinements do not leak.

As I’ve gone through my computing career, I’ve cared to different extents about different levels of abstraction and refinement. That’s where the Facebook discussion came in: there are many different ways that a Unix system can start up. But when I’m on a desktop computer, I not only don’t care which way the desktop starts up, I don’t want to have to deal with it. Whatever the relative merits of SMF, launchd, SysV init, /etc/rc, SystemStarter, systemd or some other system, the moment I need to even know which is in play is the moment that I no longer want to use this desktop system.

I have books here on processor instruction sets, but the most recent (and indeed numerous) are for the Motorola 68k family. Later than that and I’ll get away with mostly not knowing, looking up the bits I do need ad hoc, and cursing your eyes if your debugger drops me into a disassembly.

So death to the trope that you can’t understand one level of abstraction (or refinement) without understanding the layers below it. That’s only true when the lower layers are broken, though I accept that that is probably the case.

Rubbies and sores

I imagine many of you are familiar with the difference between Ruby (a beautiful language representing the best pragmatic balance between Smalltalk’s elegance and C’s ubiquity) and Rubby (a horrendous mishmash of abominations in the style of all scripting languages, glommed together by finding nearly-compatible corner cases).

I also make the same distinction between Open Source (an attempt to get the same exploitation of labour as Free Software but without the principles) and Open Sores (the disturbingly wobbly house of cards that arises when collections of developers, none of whom feels empowered to make big changes, individually attach small pieces of mud to the collectively-constructed big ball).

Neither is fair, but both are useful shorthands.