Open Source and the Lehrer-von Braun defence

Tom Lehrer’s song about Wernher von Braun tells us that its subject should not be described as hypocritical:

Say rather that he’s apolitical. “Once the rockets go up, who cares where they come down? That’s not my department,” says Wernher von Braun.

The idea that programming as a field has no clear ethical direction is not news. As Martin Fowler says here, some programmers seem to believe that they are mere code monkeys. We build things, it’s up to other people to choose how they get used, right?

It’s in open source software that this line of thinking is clearest. Of course anyone can use commercial software, but it becomes awkward to have blood money on your company’s records. Those defence companies and minerals miners just lead people to ask questions, and it’d be better if they didn’t. You could just choose not to sell to those people, but that reduces the impact of your product.

A solution presents itself: don’t take their money! Rather, decouple the sending up of the thing and the choice of where it comes down, by making it available to people who now don’t have any (obvious or traceable, anyway) connection to you or your employer. That’s not your department! Instead of selling it, stick it up on a website (preferably someone else’s, like GitHub) and give a blanket licence to everyone to use the software for any purpose. You just built a sweet library for interfacing with gyroscopic stabilisers, is it really your fault that someone built a cruise missile that uses the library?

“But wait,” you say, “this doesn’t sound like the clear-cut victory you make it out to be. In avoiding the social difficulties attendant in selling my software to so-called evildoers, I’ve also removed the possibility to sell it to gooddoers. Doesn’t that mean no money?” No, as Andrew Binstock notes, you can still sell the software.

Anyway, perhaps it’d be useful to restructure the economics of the software industry such that open source was seen as a value-driver, so you can both have your open source cake and eat the cake derived from valuable monetary income. You might do that by organising things such that an open source portfolio were seen as a necessary input to getting hired, for example. So while plenty of people still don’t get paid for open source software, they still indirectly benefit from it monetarily.

We can, evidently, easily spin contributions to open source such that they are to our own benefit. What about everybody else? When a government uses Linux computers to spy on the entire world, or an armed force powers its weapons with free software, is that pro bono publico?

The usual response would be the Lehrer-von Braun defence detailed above. “We just built it in good faith, it’s up to others to choose how they use it.” An attempt to withdraw from ethical evaluation is itself an ethical stance: it’s saying that decisions over whether the things you make are good or evil are above (or beneath) your pay-grade. That we, as developers, are OK with the idea that we get large paychecks to live in comfortable countries and solve intellectual problems, and that the impact of those solutions is for somebody else to deal with. That despite being at the epicentre of one of the world’s biggest social and economic changes, we don’t care what happens to society or to the economy as a result of our doings.

Attempts have been made to produce “socially aware” software, but these have so far not been unqualified successes. The JSON licence includes the following clause:

The Software shall be used for Good, not Evil.

Interestingly, in one analysis I discovered, the first complaint about this clause is that it interferes with the Free Software goal of copyleft. How ethical do we think an industry is that values self-serving details over the impact of its work on society?

The other problem raised in relation to the JSON licence is that it doesn’t explain what good or evil are, nor who is allowed to decide what good or evil are. Broad agreement is unlikely, so this is like the career advisor who tells you to “follow your dreams” without separating out the ones where you’re a successful human rights lawyer and the ones where you’re being chased by a giant spider with a tentacle face through an ever-changing landscape of horror.

I should probably stop reading H. P. Lovecraft at bedtime.

Posted in philosophy after a fashion, Responsibility

It’s just like English

Fans of the RSpec tool for writing tests will be familiar with its English-like(fn1) syntax for describing tests, which looks like this:

describe StrawMan do
  context "when interpreting a test in RSpec" do
    it "is written in plain English" do
      expect(spec).to eq(legible_text)
    end
  end
end

That’s almost completely distinguishable from conversational English. Perhaps programmers just have a different idea of what English looks like than many typical speakers of English. I posit this conclusion because the gulf between “English-like” and “English” is not new. You can almost see attempts at real constructs in the English language being bashed into place in the syntax for BASIC:

“For every number between 1 and 10, do this with the number being named ‘I’…that’s everything, so move on to the next value for I now.”

FOR I = 1 TO 10 : … : NEXT I

And the the the Apple-recommended Definitive Guide to the AppleScript has this the the to say the about the “English-likeness” monster:(fn2)

Personally, though, I’m not fond of AppleScript’s English-likeness. For one thing, I feel it is misleading. It gives one the sense that one just knows AppleScript because one knows English; but that is not so. It also gives one the sense that AppleScript is highly flexible and accepting of commands expressed just however one cares to phrase them; and that is really not so.

Reviewing, then, we have a collection of tools that claim some similarity with English, but then fall down on every comparison except “uses some sequences of characters that have also been used in English”. What went wrong? Indeed, did anything go wrong?

Programming’s close analogue in natural language is the Arabic wish. Computers are much like the djinn in that you tell them what should happen and instead they make exactly the thing you asked for. You waste two attempts to converse with them on asking reasonable questions that they wilfully misinterpret, then spend forever agonising over your third and final attempt. With a djinni, it’s your final attempt because you only got three wishes. With a computer, you’re allowed to try as often as you like but by the third time you’re realising how much more appealing a career in assassinating mythical preternatural wish-givers is looking but you don’t want to take that kind of risk. Both djinn and computers are like that person who’s had a restraining order ever since they decided to take “pick me up at 8” more literally than was truly warranted.

So the role of the programmer is like a kind of djinn-lawyer, translating all of the nuance and creative ambiguity of conversational language into the sort of precise, single-meaning prose that even the most belligerent of readers cannot deliberately misinterpret. And that bit of programming has not materially changed in decades. We’ve gone from “do exactly this”, through “do exactly this but you choose how you use the register file to do it”, to “do exactly this but you choose how you use the main memory to do it”.

Getting computers to act like participants in a conversation is possible, but either a bit of a gimmick or limited in application. If you really wanted to build the Knowledge Navigator you’d need to fix this problem (along with the attendant acoustic engineering problems).

That is when we’ll actually be able to claim success at improvement through abstraction. Not when we can give specialist djinn-linguists more abstractions, but when we can give computers enough abstractions that you no longer need to be a translator to make a computer do anything.

(fn1) Why “English-like” and not “verbal language-like”? One might chalk it up to neocolonialism and American industry deciding that English was Good Enough For Everybody. Indeed, as Matz notes, many Japanese people cannot speak English well and that adds a barrier to learning programming languages that are sort-of in English.

(fn2) Please don’t write in about all of the spurious occurrences of “the” in the last (non-quote) sentence. For those of us who have used AppleScript, they can be our little in-joke.

Posted in nearly linguistics

Code longevity

I recently wrote about the impending centenary of applied computing: a time when we could reflect on the first hundred years to make it easier for people to progress beyond our position into the second hundred years. This necessitates looking at the things we’ve tried, the things that succeeded and the things that failed. It involves recalling and describing the good ideas and the bad ideas.

So, did the bad ideas fail and the good ideas succeed? Can we declare that because something worked, it must have been a success? Is length of service a great proxy for quality of principle?

Let’s start by looking at the lifetime of some of the trappings of applied computing. I’m writing this on the smartphone shown in the picture below. It is, among the many things I own that claim to be computers and could reasonably be described as modern, one of only two that is not running a recent variant of a minicomputer game-loading system.

Surface RT and Lumia 925

Now is that a fair assessment? Certainly all the Macs, iOSes, Androids (and even routers and television streamy box things) in the house are based on Unix, and Unix is the thing of the 1970s minicomputer. I’ve even used that idea to explain why we still have to deal with PDP-8 problems in iPhones. But is it fair to assume that because the name has lasted, then the idea has been preserved? Did Unix succeed, or has it been replaced by different things with the same name? That happens a lot; is today’s ethernet really the same ethernet that Bob Metcalfe and colleagues at PARC invented? Conversely, just because the name changed is everything new? Does Windows NT really represent a clean break in 1993?

There’s certainly some core, a kernel (f’nar) of the modern Unix that, whether in code or philosophy, can be traced back to the original system (and indeed beyond). But is that there because it’s still a good idea, or because there’s no impetus to remove it? Or even because it’s a bad idea, but removing it would be expensive?

As we’re already talking about Unix, let’s talk about C. In his talk Null References: The Billion-Dollar Mistake, Tony Hoare describes his own mistake as being the introduction of a null reference. He then says that C’s mistake (C follows Algol in having null references, but it also lacks subscript bounds checking) is an order of magnitude worse. In fact, Hoare also identified a third problem: he says that it’s a good idea to permit a program failure to be diagnosed just from the error message and the high-level program source text. However, runtime failures in C usually end up with a core dump and/or a stack trace through the instructions of the target machine environment.
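
To make the bounds-checking point concrete, here is a minimal sketch of my own (the program and its names are invented for illustration; it is not taken from Hoare’s talk) of the kind of defect C will compile without protest:

#include <stdio.h>

/* Illustrative only: an off-by-one loop that writes past the end of an
 * array. C performs no subscript bounds checking, so nothing objects at
 * compile time; at runtime the best you can hope for is a crash and a
 * core dump to pick through, the worst is silent memory corruption. */
int main(void)
{
    int checksum = 0;
    int readings[4] = {3, 1, 4, 1};

    for (int i = 0; i <= 4; i++) {   /* the last valid index is 3, not 4 */
        readings[i] *= 2;
        checksum += readings[i];
    }

    printf("checksum: %d\n", checksum);
    return 0;
}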

We can easily wonder just how much (expensive) programmer time has been lost disassembling stack traces, matching up debugger symbols and interpreting core dumps, but without figures for that I’ll generously assume that it’s an order of magnitude smaller than the losses due to buffer overflows. Now that’s only a tens-of-billions-of-dollars value of mistake, and C is the substrate for trillions of dollars of value of industry. So do we say that on balance, C is 99% a Good Thing™? Is it a bad idea that nonetheless enabled plenty of good ones?

[Incidentally, and without wanting to derail the central thesis of this post, I disagree with Hoare’s numbers. Symantec is merely one of the largest companies in the information security sector, with annual revenue in their most recent report of $6.9B. That’s a small part of the total value sunk into that sector, which I’ll guess has an annual magnitude of multiple tens of billions. A large fraction of the problems addressed by infosec can be attributed to C’s lack of bounds checking, so there’s probably an annual spend of around ten billion dollars on mitigating that one problem. Assuming those businesses have sustainable revenues over multiple years, the integrated cost is well into the hundreds of billions. That only revises the estimated impact on the C software industry from ‘fractions of a per cent’ to ‘a per cent’ though.]

Perhaps it’s fair to say that C was a good idea when it arose, and that it’s since been found to have deficiencies that haven’t yet become expensive enough to warrant decommissioning it. There’s an assumption of rational action in there that I think it’s fair to question, though: am I assuming that C is not worth replacing just because it has not been replaced? Might there actually be other factors involved?

Yes, there might. It’s possible that there are organisations out there for whom C is more expensive than its worth, but where the sunk cost fallacy stops them from moving on. Or organisations who stick with C because their platform vendor gives them a C toolset, even where free or paid alternatives would be cheaper [in fact that would point to a difficulty with any holistic evaluation: that the cost to the people who provide development environments and the cost to the people who consume development environments depends on different factors, and the power in the market is biased towards a few large providers. Welcome to economics]. Or organisations who stick with C because of a perception of a large community of users, which is (perceived to be) more useful than striking out alone with better tools.

It’s also possible that moves in the other direction are based on non-rational factors: organisations that seek novelty rather than improvement, or who move away from C because a vendor convinces them that their alternative is better regardless of objective truth.

It turns out that the simple question we wanted to ask about applied computing, “What works?”, leads to such a complex and maybe even chaotic system of forces acting in multiple dimensions that answering it will be very difficult. This doesn’t mean that an answer should not be sought, but that finding the answer will combine expertise from many different fields. Particularly, something that survives for a long time doesn’t necessarily work: it could just be that people are afraid of the alternatives, or haven’t really considered them.

Posted in code-level, economics, software-engineering

Preparing for Computing’s Big One-Oh-Oh

However you slice the pie, we’re between two and three decades away from the centenary celebration for applied computing (which is of course significantly after theoretical or hypothetical advances made by the likes of Lovelace, Turing and others). You might count the anniversary of Colossus in 2043, the ENIAC in 2046, or maybe something earlier (and arguably not actually applied) like the Z3 or ABC (both 2041). Whichever one you pick, it’s not far off.

That means that the time to start organising the handover from the first century’s programmers to the second is now, or perhaps a little earlier. You can see the period from the 1940s to around 1980 as a time of discovery, when people invented new ways of building and applying computers because they could, and because there were no old ways yet. The next three and a half decades—a period longer than my life—have been a period of rediscovery, in which a small number of practices have become entrenched and people occasionally find existing, but forgotten, tools and techniques to add to their arsenal, and incrementally advance the entrenched ones.

My suggestion is that the next few decades be a period of uncovery, in which we purposefully seek out those things that have been tried, and tell the stories of how they are:

  • successful because they work;
  • successful because they are well-marketed;
  • successful because they were already deployed before the problems were understood;
  • abandoned because they don’t work;
  • abandoned because they are hard;
  • abandoned because they are misunderstood;
  • abandoned because something else failed while we were trying them.

I imagine a multi-volume book✽, one that is to the art of computer programming as The Art Of Computer Programming is to the mechanics of executing algorithms on a machine. Such a book✽ would be mostly a guide, partly a history, with some, all or more of the following properties:

  • not tied to any platform, technology or other fleeting artefact, though with examples where appropriate (perhaps in a platform invented for the purpose, as MIX, Smalltalk, BBC BASIC and Oberon all were)
  • informed both by academic inquiry and practical experience
  • more accessible than the Software Engineering Body of Knowledge
  • as accepting of multiple dissenting views as Ward’s Wiki
  • at least as honest about our failures as The Mythical Man-Month
  • at least as proud of our successes as The Clean Coder
  • more popular than The Celestial Homecare Omnibus

As TAOCP is a survey of algorithms, so this book✽ would be a survey of techniques, practices and modes of thought. As this century’s programmer can go to TAOCP to compare algorithms and data structures for solving small-scale problems then use selected algorithms and data structures in their own work, so next century’s applier of computing could go to this book✽ to compare techniques and ways of reasoning about problems in computing then use selected techniques and reasons in their own work. Few people would read such a thing from cover to cover. But many would have it to hand, and would be able to get on with the work of invention without having to rewrite all of Doug Engelbart’s work before they could get to the new stuff.

It's dangerous to go alone! Take this.

✽: don’t get hung up on the idea that a book is a collection of quires of some pigmented flat organic matter bound into a codex, though.

Posted in academia, advancement of the self, books, learning, Responsibility, software-engineering, tool-support

Intuitive is the Enemy of Good

In the previous instalment, I discussed an interview in which Alan Kay maligned growth-restricted user interfaces. Here’s the quote again:

There is the desire of a consumer society to have no learning curves. This tends to result in very dumbed-down products that are easy to get started on, but are generally worthless and/or debilitating. We can contrast this with technologies that do have learning curves, but pay off well and allow users to become experts (for example, musical instruments, writing, bicycles, etc. and to a lesser extent automobiles).

This is nowhere more evident than in the world of the mobile app. Any one app comprises a very small number of very focussed, very easy to use features. This has a couple of different effects. One is that my phone as a whole is an incredibly broad, incredibly shallow experience. For example, one goal I want help with from technology is:

As an obese programmer, I want to understand how I can improve my lifestyle in order to live longer and be healthier.

Is there an app for that? No; I have six apps that kind of together provide an OK but pretty disjointed experience that gets me some dissatisfying way toward my goal. I can tell three of these apps how much I run, but I have to remember that some subset can feed information to the others but the remainder cannot. I can tell a couple of them how much I ate, but if I do it in one of them then another won’t count it correctly. Putting enough software to fulfil my goal into one app presumably breaks the cardinal rule of making every feature available within two gestures of the app’s launch screen. Therefore every feature is instead hidden behind the externalised myriad gestures required to navigate my home screens and their folders to get to the disparate subsets of utility.

The second observable effect is that there is a lot of wasted potential in both the device and the person operating it. You have never met an expert iPhone user, for the simple reason that someone who’s been using an iPhone for six years is no more capable than someone who has spent a week with their new device diligently investigating. There is no continued novelty, there are no undiscovered experiences. There is no expertise. Welcome to the land of the perpetual beginner.

Thankfully, marketing provided us with a thought-terminating cliché, to help us in our discomfort with this situation. They gave us the cars and trucks analogy. Don’t worry that you can’t do everything you’d expect with this device. You shouldn’t expect to do absolutely everything with this device. Notice the sleight of brain?

Let us pause for a paragraph to notice that even if making the most simple, dumbed-down (wait, sorry, intuitive) experience were our goal, we use techniques that keep it out of our grasp. An A/B test will tell you whether this version is incrementally “better” than that version, but will not tell you whether the peak you are approaching is the tallest mountain in the range. Just as with evolution, valley crossing is hard without a monumental shake-up or an interminable period of neutral drift.
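
As a toy sketch of that trap (my own invention, not the output of any real A/B-testing tool; the landscape function and its numbers are made up), a greedy search that only ever keeps incremental winners settles on the nearest small peak and never reaches the taller one across the valley:

#include <stdio.h>

/* An invented "quality" landscape with a small peak near x = 2 and a much
 * taller one near x = 8, separated by a valley. */
static double landscape(double x)
{
    double small_peak = 1.0 - (x - 2.0) * (x - 2.0);
    double tall_peak  = 4.0 - (x - 8.0) * (x - 8.0) / 2.0;
    return small_peak > tall_peak ? small_peak : tall_peak;
}

int main(void)
{
    double x = 1.0;      /* the current design */
    double step = 0.1;   /* the size of one incremental tweak */

    /* A/B-style greedy search: only keep a variant that measures better. */
    for (int i = 0; i < 1000; i++) {
        double here = landscape(x);
        if (landscape(x + step) > here)      x += step;
        else if (landscape(x - step) > here) x -= step;
        else break;   /* no single tweak improves matters: stuck */
    }

    /* Settles near x = 2 (value about 1); the peak near x = 8 (value 4)
     * is unreachable without accepting temporarily worse variants. */
    printf("settled at x = %.1f with value %.2f\n", x, landscape(x));
    return 0;
}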

Desktop environments didn’t usually do any better at this. The learning path for most WIMP interfaces can be listed thus:

  1. cannot use mouse.
  2. can use mouse, cannot remember command locations.
  3. can remember command locations.
  4. can remember keyboard shortcuts.
  5. ???
  6. programming.

A near-perfect example of this would be emacs. You start off with a straightforward modeless editor window, but you don’t know how to save, quit, load a file, or anything. So you find yourself some cheat-sheet, and pretty soon you know those things, and start to find other things like swapping buffers, opening multiple windows, and navigating around a buffer. Then you want to compose a couple of commands, and suddenly you need to learn LISP. Many people will cap out at level 4, or even somewhere between 3 and 4 (which is where I am with most IDEs unless I use them day-in, day-out for months).

The lost magic is in level 5. Tools that do a good job of enabling improvement without requiring that you adopt a skill you don’t identify with (i.e. programming, learning the innards of a computer) invite greater investment over time, rewarding you with greater results. Photoshop gets this right. Automator gets it right. AppleScript gets it wrong; that’s just programming (in fact it’s all the hard bits from Smalltalk with none of the easy or welcoming bits). Yahoo! Pipes gets it right but markets it wrong. Quartz Composer nearly gets it right. Excel is, well, a bit of a boundary case.

The really sneaky bit is that level 5 is programming, just with none of the trappings associated with the legacy way that professionals do programming. No code (usually), no expressing your complex graphical problem as text, no expectation that you understand git, no philosophical wrangling over whether squares are rectangles or not. It’s programming, but with a closer affinity with the problem domain than bashing out semicolons and braces. Level 5 is where we can enable people to get the most out of their computers, without making them think that they’re computering.

Posted in iPad, iPhone, learning, tool-support, UI

How much programming language is enough?

Many programmers have opinions on programming languages. Maybe, if I present an opinion on programming languages, I can pass myself off as a programmer.

An old debate in psychology and anthropology is that of nature vs nurture, the discussion over which characteristics of humans and their personalities are innate and which are learned or otherwise transferred.

We can imagine two extremists in this debate turning their attention to programming languages. On the one hand, you might imagine that if the ability to write a computer program is somehow innate, then there is a way of expressing programming concepts that is closely attuned to that innate representation. Find this expression, and everyone will be able to program as fast as they can think. Although there’ll still be arguments over bracket placement, and Dijkstra will still tell you it’s rubbish.

On the other hand, you might imagine that the mind is a blank slate, onto which can be writ any one (or more?) of diverse patterns. Then the way in which you will best express a computer program is dependent on all of your experiences and interactions, with the idea of a “best” way therefore being highly situated.

We will leave this debate behind. It seems that programming shares some brain with learning other languages, and when it comes to deciding whether language is innate or learned we’re still on shaky ground. It seems unlikely on ethical grounds that Nim Chimpsky will ever be joined by Charles Babboonage, anyway.

So, having decided that there’s still an open question, there must exist somewhere into which I can insert my uninvited opinion. I had recently been thinking that a lot of the ceremony and complexity surrounding much of modern programming has little to do with it being difficult to represent a problem to a computer, and everything to do with there being unnecessary baggage in the tools and languages themselves. That is to say that contrary to Fred Brooks’s opinion, we are overwhelmed with Incidental Complexity in our art. That the mark of expertise in programming is being able to put up with all the nonsense programming makes you do.

From this premise, it seems clear that less complex programming languages are desirable. I therefore look admiringly at tools like Self, Io and Scheme, which all strive for a minimum number of distinct parts.

However, Clemens Szyperski from Microsoft puts forward a different argument in this talk. He works on the most successful development environment. In the talk, Szyperski suggests that experienced programmers make use of, and seek out, more features in a programming language to express ideas concisely, using different features for different tasks. Beginners, on the other hand, benefit from simpler languages where there is less to impede progress. So, what now? Does the “less is more” principle only apply to novice programmers?

Maybe the experienced programmers Szyperski identified are not experts. There’s an idea that many programmers are expert beginners, which would seem to fit Szyperski’s model. The beginner is characterised by a microscopic, non-holistic view of their work. They are able to memorise and apply heuristic rules that help them to make progress.

The expert beginner is someone who has simply learned more rules. To the expert beginner, there is a greater number of heuristics to choose from. You can imagine that if each rule is associated with a different piece of programming language grammar, then it’d be easier to remember the (supposed) causality behind “this situation calls for that language feature”.

That leaves us with some interesting open questions. What would a programming tool suitable for experts (or the proficient) look like? Do we have any? Alan Kay is fond of saying that we’re stuck with novice-friendly user experiences that don’t permit learning or acquiring expertise:

There is the desire of a consumer society to have no learning curves. This tends to result in very dumbed-down products that are easy to get started on, but are generally worthless and/or debilitating. We can contrast this with technologies that do have learning curves, but pay off well and allow users to become experts (for example, musical instruments, writing, bicycles, etc. and to a lesser extent automobiles).

Perhaps, while you could never argue that common programming languages don’t have learning curves, they are still “generally worthless and/or debilitating”. Perhaps it’s true that expertise at programming means expertise at jumping through the hoops presented by the programming language, not expertise at telling a computer how to solve a problem in the real world.

Posted in code-level, nearly linguistics, tool-support

On too much and too little

In the following text, remember that words like me or I are to be construed in the broadest possible terms.

It’s easy to be comfortable with my current level of knowledge. Or perhaps it’s not the value, but the derivative of the value: the amount of investment I’m putting into learning a thing. Anyway, it’s easy to tell stories about why the way I’m doing it is the right, or at least a good, way to do it.

Take, for example, object-oriented design. We have words to describe insufficient object-oriented design. Spaghetti Code, or a Big Ball of Mud. Obviously these are things that I never succumb to, but other people do. So clearly (actually, not clearly at all, but that’s beside the point) there is some threshold level of design or analysis practice that represents an acceptable minimum. Whatever that value is, it’s less than the amount that I do.

Interestingly there are also words to describe the over-application of object-oriented design. Architecture Astronauts, for example, are clearly people who do too much architecture (in the same way that NASA astronauts got carried away with flying and overdid it, I suppose). It’s so cold up in space that you’ll catch a fever, resulting in Death by UML Fever. Clearly I am only ever responsible for tropospheric architecture, thus we conclude that there is some acceptable maximum threshold for analysis and design too.

The really convenient thing is that my current work lies between these two limits. In fact, I’m comfortable in saying that it always has.

But wait. I also know that I’m supposed to hate the code that I wrote six months ago, probably because I wasn’t doing enough of whatever it is that I’m doing enough of now. But I don’t remember thinking six months ago that I was below the threshold for doing acceptable amounts of the stuff that I’m supposed to be doing. Could it be, perhaps, that the goalposts have conveniently moved in that time?

Of course they have. What’s acceptable to me now may not be in the future, either because I’ve learned to do more of it or because I’ve learned that I was overdoing it. The trick is not so much in recognising that, but in recognising that others who are doing more or less than me are not wrong: they could in fact be me at a different point on my timeline, with the benefit that they exist now, so I can share my experiences with them and we can work things out together. Or they could be someone with a completely different set of experiences, which is even more exciting as I’ll have more stories to swap.

When it comes to techniques and devices for writing software, I tend to prefer overdoing things and then finding out which bits I don’t really need after all, rather than under-application. That’s obviously a much larger cognitive and conceptual burden, but it stems from the fact that I don’t think we really have any clear ideas on what works and what doesn’t. Not much in making software is ever shown to be wrong, but plenty of it is shown to be out of fashion.

Let me conclude by telling my own story of object-oriented design. It took me ages to learn object-oriented thinking. I learned the technology alright, and could make tools that used the Objective-C language and Foundation and AppKit, but didn’t really work out how to split my stuff up into objects. Not just for a while, but for years. A little while after that Death by UML Fever article was written, my employer sent me to Sun to attend their Object-Oriented Analysis and Design Using UML course.

That course in itself was a huge turning point. But just as beneficial were the few months afterward in which I would architecturamalise all the things, and my then-manager wisely left me to it. The office furniture was all covered with whiteboard material, and there soon wasn’t a bookshelf or cupboard in my area of the office that wasn’t covered with sequence diagrams, package diagrams, class diagrams, or whatever other diagrams. I probably would’ve covered the external walls, too, if it weren’t for Enterprise Architect. You probably have opinions™ about both of the words in that product’s name. In fact I also used OmniGraffle, and dia (my laptop at the time was an iBook G4 running some flavour of Linux).

That period of UMLphoria gave me the first few hundred hours of deliberate practice. It let me see things that had been useful, and that had either helped me understand the problem or communicate about it with my peers. It also let me see the things that hadn’t been useful, that I’d constructed but then had no further purpose for. It let me not only dial back, but work out which things to dial back on.

I can’t imagine being able to replace that experience with reading web articles and Stack Overflow questions. Sure, there are plenty of opinions on things like OOA/D and UML on the web. Some of those opinions are even by people who have tried it. But going through that volume of material and sifting the experience-led advice from the iconoclasm or marketing fluff, deciding which viewpoints were relevant to my position: that’s all really hard. Harder, perhaps, than diving in and working slowly for a few months while I over-practice a skill.

Posted in advancement of the self, architecture of sorts, OOP, software-engineering, tool-support

Some so-called expert

There’s a comedy sketch being frequently tweeted called The Expert. Now, all programmers will be aware that there is nothing funnier than interpreting a joke literally and telling everyone the many ways in which it’s wrong, and that there is no way to be seen as a more intelligent and empathetic person than to do this. So here we go: what are all the inexpert things this “expert” does?

Firstly, having been told how important the strategic initiative is, he makes no attempt to actually find out what it is, and how his task is connected to the objectives described. This means that he doesn’t know anything about the context of his work, which is just setting himself up for all sorts of trouble. It’s like a programmer going “yeah sure, I can add a second copy of that goto line” without checking whether they’re working on some sort of security-sensitive module.

He refuses to accept any form of creative solution to the problem, and his project manager is correct to try to tactfully defer his immediate refusal to do the work asked. Immediately saying “no, I can’t do that” is identical to saying “I have never done that, and I cannot imagine any novelty entering my life”. This is not symptomatic of expertise, but of narrow-mindedness.

A pause, and a gathering of resources, leads us to conclude that some of the tasks set are eminently achievable, making this alleged expert look like the comfort-zone-hogging risk-averse luddite that perhaps he is. Of course you can draw a red line with inks of other colours, for example. You simply rely on the relativistic Doppler effect, or on fluorescent properties of the materials. Of course you can draw seven lines all perpendicular, if your diagram can extend into seven dimensions. And that is of course assuming a Euclidean geometry for the diagram; an assumption that our “I know best” expert doesn’t even think to question. Alternatively, you can find out what the time-dependent evolution of the diagram is, as it may be that a total of seven lines that are each instantaneously perpendicular to the other lines present but that do not all simultaneously exist is a sufficient solution. Again, our unimaginative expert doesn’t think about that. In fact, he never really explores whether the perpendicularity requirement means mutually perpendicular, he just proceeds to mansplain to the client representative why he is right and she is wrong.

Assured of his expertise, he then injects sarcasm into his voice in a condescending fashion. “I’m sure your target audience doesn’t exist solely of those people.” Again, this is indicative of a lack of empathy and an unwillingness to consider other viewpoints than his own.

Although, having said that, he’s pretty quick to defer to authority, and on the few occasions that he does want to enquire about the requirements, does not pursue the matter if someone else interrupts.

This is an “expert” who is going to go away with an incomplete understanding of the problem, and will likely fail to give a satisfactory solution. Often such people will then seek to externalise any responsibility for the failure, complaining that the requirements weren’t clear or that the clients had unrealistic expectations. Maybe they weren’t and they did, but as an expert it’s his responsibility to understand those and apply his skills to solving the problem at hand, not to find ways to throw other people under the proverbial bus.

The manager in this video is clearly the sanest voice, and also manages to keep his frustration at his own mistake somewhat bottled. The extent of that mistake? He has contracted an “expert in a narrow field”, who “doesn’t see the overall picture”, and put him in a meeting with their client for which he was totally unprepared. So it’s a shame that the expert’s grandest commitment—to inflate a balloon of unknown quality and structure into the shape of a kitten—is made without the manager around to intermediate. He might have been able to intervene before the physical contact between the “expert” and the designer, which should be considered wholly inappropriate for a business meeting.

Maybe it was a mistake to put someone so junior in front of the client without some coaching. Hopefully, with appropriate mentoring and support, our “expert” can grow to be a mature, empathetic and positive contributor to his team.

Posted in advancement of the self

The Software Leviathan

Thomas Hobbes viewed society as a meta-person, a gigantic creature whose parts were human and which was in the service of those humans. Left to their own devices, people would not work well together as their notion of individualism and search for personal gain leads directly to conflict: strong government is needed to instil a sense of cooperation and of social obligation. This idea of “government through social contract” is pervasive in Western political thought, being the basis as it is for the “government of the people, by the people, for the people” with which Abraham Lincoln hoped to lead post-civil war America.

Software systems themselves can also be thought of as Leviathans. In a purely technical sense, all of “professional” software construction is based on notions of composition, of software systems that are themselves made of software systems. So we have structured or procedural programming, with routines composed of subroutines. And functional programming, with functions composed of functions. And object-oriented programming, with objects composed of objects. So central are these ideas to expressions of thought in software that they are considered paradigmatic by many, representing fundamental world-views of the art/craft/science.

There’s a second formulation of software-as-Leviathan, which is closer to Hobbesian meaning. The technical aspect of our software systems is merely a substrate[*] through which a social system—that of the people interacting with the software, the people acting on the software, and the people interacting with the other people—is reified. So the descriptions Hobbes made of his Leviathan can be made of these socio-technical systems:

  • First the Matter thereof, and the Artificer; both which is Man[sic].
  • Secondly, How, and by what Covenants it is made; what are the Rights and just Power or Authority of a Soveraigne; and what it is that Preserveth and Dissolveth it.
  • Thirdly, what is a Christian Common-Wealth.
  • Lastly, what is the Kingdome of Darkness.

OK, maybe not so much the third one, except that it is really an attempt to define the values and norms of a society, which in the context of Hobbes’s writing, meant a Christian society.

[*] I wonder what form of substance gives the best sense of the analogy. Scaffolding? Lubricant? Mortar? Framework?

Of course, any attempt to describe such a system is going to be filtered by the preconceptions, ideas and values of the person creating the description. Which brings me onto today’s topic: the pun in the new domain of this blog. Evidently it’s a contraction of “Structure and Interpretation of Computer Programmers”, based on the Abelson and Sussman book title. That book is abbreviated to SICP, so it’s not too difficult to see how it might be adapted to SICPers.

We can also see it as being a Latin abbreviation: sic pers., meaning such a person. So there is both the Structure and Interpretation of Computer Programmers, and there is this person who is doing the interpreting, in the domain name.

Posted in meta-waffle, philosophy after a fashion, social-science

Where am I going with this?

I recently asked how people would describe this Secure Mac Programming blog were they trying to tell someone else they should read it. Of all the answers, the one that most succinctly sums up the trouble with the old name is from Alan:

@secboffin Not Just Secure, Not Just Mac, Not Just Programming.

I’m probably in the midst of some existential crisis, having spent a couple of years thinking and writing about philosophy, ethics, and the social responsibility of my work and its context. It’s clear that I’m dealing with some conflict, and it doesn’t look like reconciliation is an option.

Often I write about ideas that are still knocking around my head, such that I never come to any conclusion. I’ve used multiple choice conclusions, conclusions that appear to be from a different argument, and have concluded that my entire argument may or may not be useful.

This is just something I need to work out: what do I think I do, what do other people think I do, what parts of that do I like and dislike, are there other things I would like, can I replace the disliked parts with the liked parts, and so on. I write it here as you may have related ideas, or you may be thinking about the same things yourself and benefit from knowing that other people are, too.

What I know includes a list of things that currently interest me:

With all that in mind, I’m happy to introduce the beginning of a slow rebranding of this blog. It is now called the Structure and Interpretation of Computer Programmers, and can be found at https://www.sicpers.info/ in addition to its previous home at http://blog.securemacprogramming.com.

I do not intend to remove the old domain or break existing feed subscriptions. Over time (basically, as I work out how to do it) I’ll migrate links, feed entries and so on to reference the new domain, and the age-old updated mission of the blog.

Posted in advancement of the self, meta-waffle, whatevs