On a re-read you realise this isn’t really about Swift

It’s a bit early to have formed an opinion on a recently-announced programming language, but as the requisite number of people have asked what mine is (i.e. at least zero) I thought I’d type and see what happens.

Rules in programming tend to be bullshit. This is about one-third of a talk I’m giving later in the year, so I’ll leave that train of thought alone in case anyone’s going to be in the audience.

Anyway, knowing this, we can observe the exceptions to any rule people tend to throw at us about programming languages. For example: “Static types for good engineering, dynamic types for exploration”. Make sure you’re using your types well, and notice how many engineering practices come from programmers who know dynamic languages.

We could add “you can’t do good tooling on dynamic languages”. O RLY.

Having thus realised that the rules are nonsense, what remains is for some expert to actually sit down and think about what they want to get out of a language in the contexts in which they think they’ll use it. When I did that, I decided on something that goes in entirely the opposite direction to Swift. Starting with ObjC, I’m more likely to end up with Self than with Swift (but then I’ve already told you that I think Sun Microsystems did awesome shit, so that may be no surprise). It probably just means that I’m not an expert, though.

If you’re going to use types, may as well go all-in on using tuples. You’ll probably see plenty of definitions of a tuple, so let me confuse things by adding another:

A tuple is an element drawn from the Cartesian product of its member types.

What this means is that unlike boring old-fashioned ordered collections, there’s a known number of things in a tuple and each is of a known type. This is useful for the vexing question of signalling errors, as you can just return something like a (Result?, Error?) pair, as an industrialised society ought to be doing. It also means that almost every situation in which a method’s interface accepts or returns a collection can be replaced with a situation in which it accepts a known number of things of known type.
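
Here’s a minimal sketch of that in Swift; the function and error type are invented for illustration rather than taken from any real API. A method that might otherwise return “a collection of stuff, one item of which may be an error” instead returns a two-element tuple whose shape and member types the compiler knows about.

struct ParseError: Error {
    let message: String
}

// Instead of a heterogeneous collection, return a tuple: a known
// number of things, each of a known type.
func parseTemperature(from text: String) -> (value: Double?, error: ParseError?) {
    guard let value = Double(text) else {
        return (nil, ParseError(message: "'\(text)' is not a number"))
    }
    return (value, nil)
}

let (value, error) = parseTemperature(from: "21.5")
if let value = value {
    print("Parsed \(value)")
} else if let error = error {
    print("Failed: \(error.message)")
}

The caller gets back exactly two things, each of a known type, which is the property being claimed for tuples above.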

But anyway, for the most part our programming languages allow us to accidentally introduce problems that we shouldn’t need to solve, but the biggest problems we actually have to solve lie elsewhere.

Posted in code-level, nearly linguistics | Comments Off on On a re-read you realise this isn’t really about Swift

It doesn’t take an Oracle to see that coming

Today has largely been brought to you by nostalgia, prompted by this article reporting on a get-together of former Sun Microsystems employees.

I have never been a former Sun Microsystems employee, and of course now I never will be one. Of all the tech companies I’ve interacted with, Sun is the one I most regret not getting to work with. By the time I dealt with them, they had already put the “crash” in “dot-com crash” but there was still a feeling that they made great things. And besides, they showed that even a pony-tailed Objective-C programmer can be a tech CEO.

I recently talked about the importance of GNU projects, but plenty of other software projects were also important, and Sun had a hand in quite a few of them:

  • Bill Joy worked for them, and most of their early workstation operating systems were based on BSD Unix.
  • In fact while Apollo may have invented the idea that a single person might use a Unix computer, Sun popularised it.
  • I learned how to boot Macs by learning how to program Forth and boot Suns.
  • NFS was the beginning of the separation between your device and your documents.
  • NIS was a bit of an important step on the way to logging in anywhere (its level of baroqueness compared to OAuth has never been accurately gauged).
  • In fact, they pretty much invented cloud computing.
  • Java was quite a big thing for a while.
  • DTrace is pretty amazing.
  • They even got into standard Unix workstation vendor capitalisation for a while.

It’s likely that much of the interesting stuff at Sun was already over by the time I could’ve worked there, and I certainly experienced a very last-minute replay of some of their history. When I was a student I ‘borrowed’ an Ultra 5 (one of their least good workstations, pretty much a PC with sun4u SPARC innards) and a SPARCstation 5 (one of their most good) to learn about Solaris, SunOS and NeXTSTEP. But it certainly feels like a lot of the future was invented there, even if they were largely following Xerox’s playbook like the rest of the industry.

So tonight, I’ll remember that my control key is in the correct place:

Sun type 5 keyboard

I’ll press L1 and A, then raise a glass to Sun and the job I never had.

Posted in Business, Java, UNIX | Comments Off on It doesn’t take an Oracle to see that coming

Is TDD Dead? My questions

These are my questions for parts 5 and 6 of Is TDD Dead?. I’d like to start by thanking the panellists for publishing their discussions.

TDD the Principle

Kent and Martin, why is it that you practise test-driven development? What do you get from it? David, how do you get those same things?

What could change about the way we write software to make TDD redundant or obsolete for Kent and Martin? What could change about the way that TDD is performed to make it useful or beneficial for David?

TDD the Practice

David, is there a way in which we could retain the test-first idea of TDD, but avoid the design problems you encounter in the production code? Are you able to distil a short list of design guidelines that avoid your “design damage”? Kent and Martin, how could TDDers follow those guidelines in their work?

TDD the Community

How have all three of you felt about the community reaction to this discussion? What has been good? What has been bad? What has been missing? Is there missing information that the community could supply to help developers evaluate and choose or discard TDD for their work? Are there questions developers should ask before choosing whether to adopt or avoid TDD that you believe are not being raised?

Posted in TDD | Leave a comment

The lighter side of open source

In a recent post I talked about the apolitical, amoral nature of open source software and how it puts the interests of a small programming class before the interests of the broad collection of people who interact with programmers’ output. The open source movement has been of great benefit to the software industry, and this hasn’t necessarily been a zero-sum game.

Reality is always more nuanced than history, and yet here is a potted guide to open source history. In the beginning, there were military computers. There was no-one else to share your computer programs with, because:

  1. no-one else had a computer.
  2. well, maybe they did, but they weren’t telling you.
  3. you didn’t want to tell anyone else you had a computer.

Then there were academic computers. Now you do want to share your programs with everyone, and they share theirs with you and so everyone is on the cutting edge.

Then there were commercial computer companies (I told you this history would lack nuance), who were happy to share their programs with you because it meant you could get more out of the computers they were selling.

Then there were commercial computer companies who decided that the source code to the programs used to interface with their hardware was their competitive advantage, and decided to stop sharing it. This made an academic (Richard Meriadoc (humour me) Stallman) sad, and so he created the Free Software movement to:

  1. promote sharing of software over not sharing software;
  2. subvert the copyright system usually used to restrict sharing to enable sharing.

Then there were people who wanted to use Free Software in their day jobs but found that the movement was considered too ideological to be palatable to management, so they rebranded it Open Source Software to re-frame the discussion along business, rather than political, lines.

This is about the point when your protagonist enters, stage right. The dot-com bubble was imploding, leading to changed fortunes for all sorts of people and organisations in the software industry. Everything I would do regarding professional computing depended in some way on the GNU project and the Free Software Foundation:

  1. I learned Unix, thanks to the ability to inexpensively run GNU/Linux on my desktop computer.
  2. The things I learned about Unix, C programming and so on were portable to various platforms beyond GNU/Linux, thanks to the GNU compiler collection, GNU bash, GNU make, GNU debugger and others.
  3. One such platform was Mac OS X, the new hotness from Apple. This was a technology acquired through the purchase of NeXT, who had been able to provide a complete programming environment despite their small size and (comparatively) small budget by wrapping the tools listed above.

Somewhere in all the above I even found it possible to get paid for writing software: a GPLv2-licensed Lisp package for GNU Emacs.

Of course, that’s just my story, but there are plenty like it. Many other programmers work on platforms like iOS, or Android, or Linux, or in environments like Ruby or Objective-C, that either only exist or have only become as successful as they have due to the successes of the Free Software Foundation, and the ability for organisations (commercial or otherwise) to take advantage of Free or Open Source software as building blocks which they can combine or add to.

Since then, the discussion has again been re-framed. Open Source – originally a branding change to make Free Software acceptable to business – has become a principle rather than a tool. A community that owes its financial viability to Free Software now denounces such “viral” licences, as source released under their conditions is harder to profit from than the more permissive, university-style Open Source licences.

Software writers in the 1980s liked to talk about how object technology would be the silver bullet that allowed re-use and composition of software systems, moving programming from a cottage industry where everyone makes everything from scratch to a production-line enterprise where standard parts fit together to provide a base for valuable products. It wasn’t; the sharing-required software licence was.

Posted in Business, economics | Leave a comment

I use mocks and I’m happy with that

Both Kent Beck and Martin Fowler have said that they don’t use mock objects in their test-driven development. I do. I use them mostly for the sense described first in my BNR blog post on Mock Objects, namely to stand in for a thing that can receive messages I want to send, but that does not yet exist.

If you look at the code in Test-Driven iOS Development, you’ll find that it uses plenty of test doubles but none of them is a mock object. What has changed in my worldview to move from not-mocking to mocking in that time?

The key information that gave me the insight was this message (pardon the pun!) from Alan Kay on object-oriented programming:

The big idea is “messaging” – that is what the kernal[sic] of Smalltalk/Squeak
is all about (and it’s something that was never quite completed in our
Xerox PARC phase). The Japanese have a small word – ma – for “that which
is in between” – perhaps the nearest English equivalent is “interstitial”.
The key in making great and growable systems is much more to design how its
modules communicate rather than what their internal properties and
behaviors should be.

What I’m really trying to do is to define the network of objects connected by message sending, but the tool I have makes me think about objects and what they’re doing. To me, mock objects are the ability to subvert the tool, and force it to let me focus on the ma.
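
To make that concrete, here is a minimal sketch of the kind of test double I mean, written in Swift rather than the Objective-C of the book, and with names invented for illustration. The collaborator does not exist in the production code yet; all the test needs is a protocol describing the messages I want to send, and a hand-rolled double that records what it receives.

import XCTest

// The display does not exist yet; the protocol is the design of the
// messages that will flow between the objects.
protocol TemperatureDisplay {
    func show(temperature: Double)
}

// A hand-rolled double that records the messages it receives.
final class MockTemperatureDisplay: TemperatureDisplay {
    private(set) var shownTemperatures: [Double] = []
    func show(temperature: Double) {
        shownTemperatures.append(temperature)
    }
}

// The object under test knows only about the protocol.
final class WeatherPresenter {
    private let display: TemperatureDisplay
    init(display: TemperatureDisplay) { self.display = display }
    func didReceive(reading: Double) {
        display.show(temperature: reading)
    }
}

final class WeatherPresenterTests: XCTestCase {
    func testPresenterTellsTheDisplayAboutAReading() {
        let display = MockTemperatureDisplay()
        let presenter = WeatherPresenter(display: display)
        presenter.didReceive(reading: 21.5)
        // The assertion is about the message that was sent, not about
        // anybody's internal state.
        XCTAssertEqual(display.shownTemperatures, [21.5])
    }
}

The test asserts on the message that crossed the gap between the two objects, not on either object’s internal properties, which is the focus on the ma described above.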

Posted in code-level, OOP, TDD, TDiOSD | 1 Comment

One meeellleeon

A teacher recently asked her computing class if there was any question they would like to ask me. One of the students came up with a question: how could they make a million pounds?

I think my answer would be one of these:

  1. Facebook has on the order of a billion users and is worth on the order of 100 billion pounds. Network value scales as the square of the number of users, so to merely make a million pounds you could build a network with just three and a half million users.

  2. A lowest-tier iOS app nets its developer roughly 40p per sale. To make a million pounds you simply need to build a cheap app, then attract two and a half million sales.

If the app were a value-add for the network, you could easily make more than two million pounds.
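
For anyone who wants to check the arithmetic, here is the back-of-envelope version of both answers in Swift, using the same round numbers as above and nothing more precise; the square-law figure comes out at a little over three million users, the same ballpark as the figure given in the first answer.

// Answer 1: if network value scales as the square of the user count,
// and a network of about 1e9 users is worth about £100e9, how many
// users does a £1e6 network need?
let bigNetworkUsers = 1e9
let bigNetworkValue = 100e9 // pounds
let valuePerUserSquared = bigNetworkValue / (bigNetworkUsers * bigNetworkUsers)
let usersForAMillion = (1e6 / valuePerUserSquared).squareRoot()
print(usersForAMillion) // a little over 3 million users

// Answer 2: at roughly £0.40 to the developer per lowest-tier sale,
// how many sales add up to £1e6?
let salesForAMillion = 1_000_000 / 0.40
print(salesForAMillion) // 2.5 million sales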

[In fact I’m pretty sure I’ve already made a million pounds; it’s just that the costs worked out to about a million pounds.]

Posted in edjercashun | Comments Off on One meeellleeon

Open Source and the Lehrer-von Braun defence

Tom Lehrer’s song about Wernher von Braun is of a man who should not be described as hypocritical:

Say rather that he’s apolitical. “Once the rockets go up, who cares where they come down? That’s not my department,” says Wernher von Braun.

The idea that programming as a field has no clear ethical direction is not news. As Martin Fowler says here, some programmers seem to believe that they are mere code monkeys. We build things, it’s up to other people to choose how they get used, right?

It’s in open source software that this line of thinking is clearest. Of course anyone can use commercial software, but it becomes awkward to have blood money on your company’s records. Those defence companies and minerals miners just lead people to ask questions, and it’d be better if they didn’t. You could just choose not to sell to those people, but that reduces the impact of your product.

A solution presents itself: don’t take their money! Rather, decouple the sending up of the thing and the choice of where it comes down, by making it available to people who now don’t have any (obvious or traceable, anyway) connection to you or your employer. That’s not your department! Instead of selling it, stick it up on a website (preferably someone else’s, like GitHub) and give a blanket licence to everyone to use the software for any purpose. You just built a sweet library for interfacing with gyroscopic stabilisers, is it really your fault that someone built a cruise missile that uses the library?

“But wait,” you say, “this doesn’t sound like the clear-cut victory you make it out to be. In avoiding the social difficulties attendant in selling my software to so-called evildoers, I’ve also removed the possibility to sell it to gooddoers. Doesn’t that mean no money?” No, as Andrew Binstock notes, you can still sell the software.

Anyway, perhaps it’d be useful to restructure the economics of the software industry such that open source was seen as a value-driver, so you can both have your open source cake and eat the cake derived from valuable monetary income. You might do that by organising things such that an open source portfolio were seen as a necessary input to getting hired, for example. So while plenty of people still don’t get paid for open source software, they still indirectly benefit from it monetarily.

We can, evidently, easily spin contributions to open source such that they are to our own benefit. What about everybody else? When a government uses Linux computers to spy on the entire world, or an armed force powers its weapons with free software, is that pro bono publico?

The usual response would be the Lehrer-von Braun defence detailed above. “We just built it in good faith, it’s up to others to choose how they use it.” An attempt to withdraw from ethical evaluation is itself an ethical stance: it’s saying that decisions over whether the things you make are good or evil are above (or beneath) your pay-grade. That we, as developers, are OK with the idea that we get large paychecks to live in comfortable countries and solve mental problems, and that the impact of those solutions is for somebody else to deal with. That despite being at the epicentre of one of the world’s biggest social and economic changes, we don’t care what happens to society or to economy as a result of our doings.

Attempts have been made to produce “socially aware” software, but these have so far not been unqualified successes. The JSON licence includes the following clause:

The Software shall be used for Good, not Evil.

Interestingly, in one analysis I discovered, the first complaint about this clause is that it interferes with the Free Software goal of copyleft. How ethical do we think an industry is that values self-serving details over the impact of its work on society?

The other problem raised in relation to the JSON licence is that it doesn’t explain what good or evil are, nor who is allowed to decide what good or evil are. Broad agreement is unlikely, so this is like the career advisor who tells you to “follow your dreams” without separating out the ones where you’re a successful human rights lawyer and the ones where you’re being chased by a giant spider with a tentacle face through an ever-changing landscape of horror.

I should probably stop reading H. P. Lovecraft at bedtime.

Posted in philosophy after a fashion, Responsibility | Leave a comment

It’s just like English

Fans of the RSpec tool for writing tests will be familiar with its English-like(fn1) syntax for describing tests, which looks like this:

describe StrawMan do
  context "when interpreting a test in RSpec" do
    it "is written in plain English" do
      expect(spec).to eq(legible_text)
    end
  end
end

That’s almost completely distinguishable from conversational English. Perhaps programmers just have a different idea of what English looks like than many typical speakers of English. I posit this conclusion because the gulf between “English-like” and “English” is not new. You can almost see attempts at real constructs in the English language being bashed into place in the syntax for BASIC:

“For every number between 1 and 10, do this with the number being named ‘I’…that’s everything, so move on to the next value for I now.”

FOR I = 1 TO 10 : … : NEXT I

And the the the Apple-recommended Definitive Guide to the AppleScript has this the the to say the about the “English-likeness” monster:(fn2)

Personally, though, I’m not fond of AppleScript’s English-likeness. For one thing, I feel it is misleading. It gives one the sense that one just knows AppleScript because one knows English; but that is not so. It also gives one the sense that AppleScript is highly flexible and accepting of commands expressed just however one cares to phrase them; and that is really not so.

Reviewing, then, we have a collection of tools that claim some similarity with English, but then fall down on every comparison except “uses some sequences of characters that have also been used in English”. What went wrong? Indeed, did anything go wrong?

Programming’s close analogue in natural language is the Arabic wish. Computers are much like the djinn in that you tell them what should happen and instead they make exactly the thing you asked for. You waste two attempts to converse with them on asking reasonable questions that they wilfully misinterpret, then spend forever agonising over your third and final attempt. With a djinni, it’s your final attempt because you only got three wishes. With a computer, you’re allowed to try as often as you like but by the third time you’re realising how much more appealing a career in assassinating mythical preternatural wish-givers is looking but you don’t want to take that kind of risk. Both djinn and computers are like that person who’s had a restraining order ever since they decided to take “pick me up at 8” more literally than was truly warranted.

So the role of the programmer is like a kind of djinn-lawyer, translating all of the nuance and creative ambiguity of conversational language into the sort of precise, single-meaning prose that even the most belligerent of readers cannot deliberately misinterpret. And that bit of programming has not materially changed in decades. We’ve gone from “do exactly this”, through “do exactly this but you choose how you use the register file to do it”, to “do exactly this but you choose how you use the main memory to do it”.

Getting computers to act like participants in a conversation is possible, but either a bit of a gimmick or limited in application. If you really wanted to build the Knowledge Navigator you’d need to fix this problem (along with the attendant acoustic engineering problems).

That is when we’ll actually be able to claim success at improvement through abstraction. Not when we can give specialist djinn-linguists more abstractions, but when we can give computers enough abstractions that you no longer need to be a translator to make a computer do anything.

(fn1) Why “English-like” and not “verbal language-like”? One might chalk it up to neocolonialism and American industry deciding that English was Good Enough For Everybody. Indeed, as Matz notes, many Japanese people cannot speak English well and that adds a barrier to learning programming languages that are sort-of in English.

(fn2) Please don’t write in about all of the spurious occurrences of “the” in the last (non-quote) sentence. For those of us who have used AppleScript, they can be our little in-joke.

Posted in nearly linguistics | Comments Off on It’s just like English

Code longevity

I recently wrote about the impending centenary of applied computing: a time when we could reflect on the first hundred years to make it easier for people to progress beyond our position into the second hundred years. This necessitates looking at the things we’ve tried, the things that succeeded and the things that failed. It involves recalling and describing the good ideas and the bad ideas.

So, did the bad ideas fail and the good ideas succeed? Can we declare that because something worked, it must have been a success? Is length of service a great proxy for quality of principle?

Let’s start by looking at the lifetime of some of the trappings of applied computing. I’m writing this on the smartphone shown in the picture below. It is, among the many things I own that claim to be computers and could reasonably be described as modern, one of only two that are not running a recent variant of a minicomputer game-loading system.

Surface RT and Lumia 925

Now is that a fair assessment? Certainly all the Macs, iOSes, Androids (and even routers and television streamy box things) in the house are based on Unix, and Unix is a thing of the 1970s minicomputer. I’ve even used that idea to explain why we still have to deal with PDP-8 problems in iPhones. But is it fair to assume that because the name has lasted, then the idea has been preserved? Did Unix succeed, or has it been replaced by different things with the same name? That happens a lot; is today’s Ethernet really the same Ethernet that Bob Metcalfe and colleagues at PARC invented? Conversely, just because the name changed, is everything new? Does Windows NT really represent a clean break in 1993?

There’s certainly some core, a kernel (f’nar) of the modern Unix that, whether in code or philosophy, can be traced back to the original system (and indeed beyond). But is that there because it’s still a good idea, or because there’s no impetus to remove it? Or even because it’s a bad idea, but removing it would be expensive?

As we’re already talking about Unix, let’s talk about C. In his talk Null References: The Billion-Dollar Mistake, Tony Hoare describes his own mistake as being the introduction of a null reference. He then says that C’s mistake (C follows Algol in having null references, but it also lacks subscript bounds checking) is an order of magnitude worse. In fact, Hoare also identified a third problem: he says that it’s a good idea to permit a program failure to be diagnosed just from the error message and the high-level program source text. However, runtime failures in C usually end up with a core dump and/or a stack trace through the instructions of the target machine environment.

We can easily wonder just how much (expensive) programmer time has been lost disassembling stack traces, matching up debugger symbols and interpreting core dumps, but without figures for that I’ll generously assume that it’s an order of magnitude smaller than the losses due to buffer overflows. Now that’s only a tens-of-billions-of-dollars value of mistake, and C is the substrate for trillions of dollars of value of industry. So do we say that on balance, C is 99% a Good Thing™? Is it a bad idea that nonetheless enabled plenty of good ones?

[Incidentally, and without wanting to derail the central thesis of this post, I disagree with Hoare’s numbers. Symantec is merely one of the largest companies in the information security sector, with annual revenue in their most recent report of $6.9B. That’s a small part of the total value sunk into that sector, which I’ll guess has an annual magnitude of multiple tens of billions. A large fraction of the problems addressed by infosec can be attributed to C’s lack of bounds checking, so there’s probably an annual spend of around ten billion dollars on fixing that problem alone. Assuming those businesses have sustainable revenues over multiple years, the integrated cost is well into the hundreds of billions. That only revises the estimated impact on the C software industry from ‘fractions of a per cent’ to ‘a per cent’ though.]

Perhaps it’s fair to say that C was a good idea when it arose, and that it’s since been found to have deficiencies that haven’t yet become expensive enough to warrant decommissioning it. There’s an assumption of rational action in there that I think it’s fair to question, though: am I assuming that C is not worth replacing just because it has not been replaced? Might there actually be other factors involved?

Yes, there might. It’s possible that there are organisations out there for whom C is more expensive than it’s worth, but where the sunk cost fallacy stops them from moving on. Or organisations who stick with C because their platform vendor gives them a C toolset, even where free or paid alternatives would be cheaper [in fact that would point to a difficulty with any holistic evaluation: the cost to the people who provide development environments and the cost to the people who consume development environments depend on different factors, and the power in the market is biased towards a few large providers. Welcome to economics]. Or organisations who stick with C because of a perception of a large community of users, which is (perceived to be) more useful than striking out alone with better tools.

It’s also possible that moves in the other direction are based on non-rational factors: organisations that seek novelty rather than improvement, or who move away from C because a vendor convinces them that their alternative is better regardless of objective truth.

It turns out that the simple question we wanted to ask about applied computing, “What works?”, leads to such a complex and maybe even chaotic system of forces acting in multiple dimensions that answering it will be very difficult. This doesn’t mean that an answer should not be sought, but that finding the answer will combine expertise from many different fields. Particularly, something that survives for a long time doesn’t necessarily work: it could just be that people are afraid of the alternatives, or haven’t really considered them.

Posted in code-level, economics, software-engineering | Comments Off on Code longevity

Preparing for Computing’s Big One-Oh-Oh

However you slice the pie, we’re between two and three decades away from the centenary celebration for applied computing (which is of course significantly after theoretical or hypothetical advances made by the likes of Lovelace, Turing and others). You might count the anniversary of Colossus in 2043, the ENIAC in 2046, or maybe something earlier (and arguably not actually applied) like the Z3 or ABC (both 2041). Whichever one you pick, it’s not far off.

That means that the time to start organising the handover from the first century’s programmers to the second is now, or perhaps a little earlier. You can see the period from the 1940s to around 1980 as a time of discovery, when people invented new ways of building and applying computers because they could, and because there were no old ways yet. The next three and a half decades—a period longer than my life—has been a period of rediscovery, in which a small number of practices have become entrenched and people occasionally find existing, but forgotten, tools and techniques to add to their arsenal, and incrementally advance the entrenched ones.

My suggestion is that the next few decades be a period of uncovery, in which we purposefully seek out those things that have been tried, and tell the stories of how they are:

  • successful because they work;
  • successful because they are well-marketed;
  • successful because they were already deployed before the problems were understood;
  • abandoned because they don’t work;
  • abandoned because they are hard;
  • abandoned because they are misunderstood;
  • abandoned because something else failed while we were trying them.

I imagine a multi-volume book✽, one that is to the art of computer programming as The Art Of Computer Programming is to the mechanics of executing algorithms on a machine. Such a book✽ would be mostly a guide, partly a history, with some, all or more of the following properties:

  • not tied to any platform, technology or other fleeting artefact, though with examples where appropriate (perhaps in a platform invented for the purpose, as MIX, Smalltalk, BBC BASIC and Oberon all were)
  • informed both by academic inquiry and practical experience
  • more accessible than the Software Engineering Body of Knowledge
  • as accepting of multiple dissenting views as Ward’s Wiki
  • at least as honest about our failures as The Mythical Man-Month
  • at least as proud of our successes as The Clean Coder
  • more popular than The Celestial Homecare Omnibus

As TAOCP is a survey of algorithms, so this book✽ would be a survey of techniques, practices and modes of thought. As this century’s programmer can go to TAOCP to compare algorithms and data structures for solving small-scale problems then use selected algorithms and data structures in their own work, so next century’s applier of computing could go to this book✽ to compare techniques and ways of reasoning about problems in computing then use selected techniques and reasons in their own work. Few people would read such a thing from cover to cover. But many would have it to hand, and would be able to get on with the work of invention without having to rewrite all of Doug Engelbart’s work before they could get to the new stuff.

It's dangerous to go alone! Take this.

✽: don’t get hung up on the idea that a book is a collection of quires of some pigmented flat organic matter bound into a codex, though.

Posted in academia, advancement of the self, books, learning, Responsibility, software-engineering, tool-support | Comments Off on Preparing for Computing’s Big One-Oh-Oh