Goals upon goals upon goals

As I read Ed Finkler’s piece on losing excitement in technology, I found myself recognising pieces of my own story. The prospect of a new language or framework no longer seems like a new toy, an excuse to stay up all night studying it, using it and learning its secrets as I would have done a few years ago. Instead I find myself asking what new problems it introduces, whether they’re worth accepting over the devils we know in our existing tools, and how many developer-decades will be lost reimplementing libraries that have sat in CPAN for decades, in a new language, delivered via a new packaging system.

Because, as I alluded to when not really talking about Swift and as more eloquently described by Matt Gemmell, our problems do not, for the most part, come from our tools and will not be solved by adding more tools. Indeed a developer’s fixation on their tools will allow the surrounding business, market, legal and social problems to go unchecked. No change of programming language will turn customers who don’t want to pay $0.99 into customers who do want to pay $99. Implicitly unwrapped optionals might catch the occasional bug in development but they just aren’t worth a 100x increase in value to people on the sharp end of our creations.
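For non-Swift readers, here is a minimal sketch, with names entirely of my own invention, of the sort of development-time catch I mean: an implicitly unwrapped optional lets you declare that a value might be nil while promising to use it as though it never is, so a forgotten assignment traps loudly the first time you run the code.

```swift
// A minimal sketch (hypothetical names) of an implicitly unwrapped optional.
class Greeter {
    var label: String!          // nil until somebody remembers to set it

    func greet() -> String {
        // Member access force-unwraps the optional; a forgotten
        // assignment traps here with a clear diagnostic in development.
        return "Hello, \(label.uppercased())"
    }
}

let greeter = Greeter()
greeter.label = "world"         // delete this line and greet() traps:
                                // "unexpectedly found nil while implicitly
                                // unwrapping an Optional value"
print(greeter.greet())          // prints "Hello, WORLD"
```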

What will be of benefit to them? I think we’re going to have to go through the software equivalent of the consolidation that the consumer goods industry has already seen in hardware. Remember the introduction to the iPhone?

An iPod, a phone, an internet communicator… these are NOT three separate devices!

Actually it’s none of those things. Well, it is, in that it’s all of those and more. Fundamentally, it’s a honking great handheld control unit with hundreds of buttons on, just like Sony used to make for their TV remotes. One of those buttons will turn it into a phone, one will turn it into an iPod, one will turn it into a web browser, another lets you add other buttons that do other things. They don’t all do their things in the same way, and just to keep things interesting the buttons will arbitrarily change location and design and behaviour every so often. The Sony remotes didn’t do that.

The core experience of an iPhone, or Android phone, or Windows Phone—the thing you see when you switch it on and wait long enough—is a launcher. It’s the Program Manager from Windows 3.0, packaged up in a shiny interactive box. Both Program Manager and today’s replacements offer the same promise: you’ve got a feeling that you left a button around here somewhere that probably starts you doing the thing you need to do.

So we still need to make good on the promise that you have just one device, by making that one device act like just one device (preferably one which works properly and does what people expect, which is what I spend a lot of time thinking about and working on). How we get there will be the interesting problem for at least another decade. How our tools support that journey will be a fun sideshow, certainly important but hardly the focus. The tools support the goals that support our goals that support the world’s goals.

Posted in futurology, philosophy after a fashion

Intra-curricular activities

I’m apparently fascinated by the idea of defining curricula for learning programming. I’ve written about how we need to be careful what we try to pay forward from the way we learned in the past, and I’ve talked about how we do need to pay it forward so that the second hundred years see faster progress than the first hundred years.

I’m a fan (with reservations, as seen below) of the book series as a form of curriculum. Take something like Kent Beck’s signature series, which covers a decent subset of both technical and social approaches in software development in breadth and in depth. You could probably imagine developers who would benefit from reading some or all of the books in the series. In fact, you may be one.

Coping with people approaching the curriculum from different skill levels and areas of experience is hard. Not just for the book series: it’s hard in general. Universities take the simplifying approach of assuming that everybody wants to learn the same stuff, and teaching that stuff. To some extent that’s easy for them, because the backgrounds of prospective students are relatively uniform. Even so, my university course organised incoming students into two groups: those who had studied complex numbers at A-level and those who had not. The difference was simply that the group who had not were given a couple of lectures on complex numbers; from the fourth week onwards, everyone was assumed to know the topic.

Now consider selling a programming book to the public. Part of the proposal process with all of the publishers I’ve worked with has been describing the target audience. Is this a book for people who have never programmed before? For people who have programmed a little, but never used this particular tool or technique? People who have programmed a lot but never used this tool? Is this thing similar to what they have used before, or very different? For people who are somewhat familiar with the tool? For experts (and how is that defined)? Is it for readers comfortable with maths? For readers with no maths background?

Every “no” in answer to one of those questions is an opportunity to improve the experience for a subset of the potential audience by tailoring it to that subset. It’s also an opportunity to exclude a subset of the audience by making the content less relevant to them.

[I’ll digress here to explain how I worked that out for my books: whether it’s selfishness or a failure of empathy, I wrote books that I wanted to read but that didn’t exist. Therefore the expected experience is something similar to mine, back when I filled in the proposal form.]

Clearly no single publication will cover the whole phase space of potential readers and be any good. The interesting question is how much it’s worth covering with multiple publications; whether the idea of series-as-curriculum pulls in the general direction as much as scope-limiting each book pulls in the specific. Should the curriculum take readers on a straight line from novice to master? Should it “fan in” from multiple introductions? Should it “fan out” in multiple directions of interest and enquiry? Would a non-linear curriculum be inclusive or offputtingly confusing? Should the questions really be answered by substituting the different question “how many people would buy that”?

Posted in academia, advancement of the self, books, edjercashun, learning

Planet of the Apps

Scene: in front of a green screen somewhere in the present day. Our protagonist, freshly burned out from a session of writing dynamically-typed web backend code in vim, looks up from the monitor. In the distance, some way along the beach, they see an odd shape poking out of the sand. Their curiosity piqued, they trudge out under the burning sun toward the edifice.

At risk of collapsing from dehydration, they finally come close enough to the object to see through the heat haze that it is the top of a large statue, largely buried under centuries of detritus. The only discernible features are a hand holding aloft a chorded keyer, and the stern-browed head of Douglas Engelbart.

Oh my God, I’m back. I’m home. All the time it was…we finally really did it. YOU MANIACS! OH DAMN YOU! GOD DAMN YOU ALL TO HELL!

Posted in whatevs

Things I believe

The task of producing software is one of choosing and creating constraints, rules and abstractions inside a system which provides very few a priori. Typically we select a large collection of pre-existing constraints, rules and abstractions upon which to base our own: models of computation, programming and deployment environments, everything from the size of the register file to the way text is represented on the display is theoretically up for grabs, but we impose limits on that freedom ourselves when we create a new product.

None of these limitations is essential. Many are conventional, and have become so embedded in the cultural practice of making software that it would be expensive or impractical to choose alternative options. Still others have so much rhetoric surrounding them that the emotional cost of change is too great to bear.

So what are these restrictions? Here’s a list of mine. I accept that they don’t all apply to you. I accept that many of them have alternatives. Indeed I believe that all of them have alternatives, and that enumerating them is the first thing that lets me treat them as assumptions to be challenged.

  1. Computers use the same memory for programs and data. I know alternatives exist but wouldn’t know how to start using them.
  2. Memory is a big blob of uniform storage. As above, except that I know this one isn’t true and just ignore the detail.
  3. Memory and bus wires can be in one of two states.
  4. There probably is a free-form hierarchical database available.
  5. There is a thing called a stack and a thing called a heap, and the difference between the two is important (a sketch of why follows this list).
  6. There is no point trying to do a better job at multiprocessing than the operating system.
  7. There is an operating system.
  8. The operating system, file system, indeed any first system on which my thing is a second system; those first systems are basically interchangeable.
  9. I can buy a faster thing (except in mobile, where I can’t).
  10. Whatever processor you gave me behaves correctly.
  11. Whatever compiler you gave me behaves correctly.
  12. Whatever library you gave me probably behaves correctly.
  13. Text is a poor way to represent a computer program but is the best we have.
  14. The way to write a computer program is to tell the computer what to do.
  15. The goal of the industry for the last few decades has been the DynaBook.
  16. I still do not need a degree in computer science.
  17. I should know what my software does before I give it to the people who need to use it.
  18. The universal runtime environment is the C system.
  19. Processors today are basically like faster versions of the MC68000.
  20. Platform vendors no longer see lock-in as a goal, but do see it as a convenient side-effect.
  21. You will look after drawing pictures, playing videos, and making sounds for me.
  22. Types are optional.
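To make belief 5 concrete, here is a minimal Swift sketch, my own illustration rather than anything from the list above: value types are copied on assignment and reference types are shared, which is (typically, though the compiler makes no promises) the stack/heap split showing through the language.

```swift
// Value vs reference semantics: the everyday face of stack vs heap.
struct Point { var x: Int }         // value type: copied on assignment

final class Box {                   // reference type: shared on assignment
    var x: Int
    init(x: Int) { self.x = x }
}

var p1 = Point(x: 1)
var p2 = p1                         // p2 is a fresh copy...
p2.x = 99
print(p1.x)                         // ...so this prints 1

let b1 = Box(x: 1)
let b2 = b1                         // b2 is the same heap object as b1
b2.x = 99
print(b1.x)                         // prints 99: the mutation is shared
```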
Posted in advancement of the self, architecture of sorts

Wristwatches in the Future

[Int: Moscone West convention center third floor ballroom. A presentation is taking place.]

So that was an update on our existing products, which I’m proud to say have never been stronger. Now I’d like to talk to you about our watch. We think you’re gonna love it.

When thinking about recent attempts by some of our competitors to make wristwatches, you’re probably thinking: whatever happened to the retina display?

[Image: Eliminate Blake]

I’ll tell you what happened: you’re looking at it in the wrong size. The pixel density is fine, but the screen’s smaller than it looks on this projector.

[Image: Avon Calling!]

No, smaller than that.

[Image: Ulysse Nardiaaaaaaaaahn!!!!]

Hmm, eye strain doesn’t sound like a future thing, does it? Shouldn’t we come up with something a bit more ergonomic?

…actually, no, it’s a reasonable compromise. Centuries of user experience research have shown that future people find it most natural to talk into their scaphoid bones. That’s even true when they’re plastic people who don’t actually have scaphoid bones!

[Image: Stand by for wrist action!]

[Sidenote: No, I will not be apologising for the alt text on that image.]

The technology was originally introduced as part of the struggle to end the Cold War, when one key application was in the unification of Germany.

[Image: Rembrandt's Knight Watch.]

After seeing how people tried to use these devices, we came up with the breakthrough form factor: five pounds of computer-machined aluminium and an incomprehensible user interface.

[Image: This watch is Bullock's.]

So that’s the watch. We can’t wait to see what you do with it!

Posted in whatevs

Reflections on “Is TDD Dead”

The first thing I noticed that I needed to change as a result of watching the Is TDD Dead? series is that I started out with a defensive mindset. If I believe in the dogma of a rule, then presumably I’m still a beginner who hasn’t yet learned that there are other rules, each of which has limited applicability. So lesson one is to practise more, to find the limitations of the techniques I use, and to understand what to do when those limits are met.

An unsurprising aspect of the discussion was that the thing I call TDD is not the thing that others call TDD. At some point recently I got converted to the Mockist school. Part of me (the part that doesn’t value my social life) is inclined to write a second edition of Test-Driven iOS Development but then keep both editions available, because the style will have changed so much between the two.

Anyway it is more germane to note that none of the three speakers has a lot of time for mocks. I need to understand this, the experiences they have had that I have not, and the things that I do not know about mocks. Perhaps they view the sorts of tests that I build as the structural tests that they would delete once the system is working.
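For concreteness, the kind of test I have in mind when I say Mockist looks roughly like the hand-rolled Swift sketch below (every name in it is hypothetical). The point is that it specifies the conversation between objects, rather than inspecting whatever state is left behind afterwards.

```swift
// A hand-rolled mock in the Mockist style (all names hypothetical).
protocol AuditLog {
    func record(_ event: String)
}

final class Transfer {
    private let log: AuditLog
    init(log: AuditLog) { self.log = log }

    func move(amount: Int) {
        // ...the interesting work would happen here...
        log.record("moved \(amount)")
    }
}

final class MockAuditLog: AuditLog {
    private(set) var recorded: [String] = []
    func record(_ event: String) { recorded.append(event) }
}

// The test asserts on the interaction the collaborator received,
// not on any state owned by Transfer itself.
let mock = MockAuditLog()
Transfer(log: mock).move(amount: 50)
assert(mock.recorded == ["moved 50"], "Transfer should log each move")
```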

Posted in TDD

On a re-read you realise this isn’t really about Swift

It’s a bit early to have formed an opinion on a recently-announced programming language, but as the requisite number of people have asked what mine is (i.e. at least zero) I thought I’d type and see what happens.

Rules in programming tend to be bullshit. This is about one-third of a talk I’m giving later in the year, so I’ll leave that train of thought alone in case anyone’s going to be in the audience.

Anyway, knowing this, we can observe the exceptions to any rule people tend to throw at us about programming languages. For example: “Static types for good engineering, dynamic types for exploration”. Make sure your useing you’re type’s good, and notice how many engineering practices come from programmers who know dynamic languages.

We could add “you can’t do good tooling on dynamic languages”. O RLY.

Having thus realised that the rules are nonsense, we can suppose instead that some expert actually sat down and thought about what they wanted to get out of a language in the contexts in which they thought they’d use it. When I did that, I decided on something that goes in entirely the opposite direction to Swift. Starting with ObjC, I’m more likely to end up with Self than with Swift (but then I’ve already told you that I think Sun Microsystems did awesome shit, so that may be no surprise). It probably just means that I’m not an expert, though.

If you’re going to use types, you may as well go all-in on tuples. You’ll probably see plenty of definitions of a tuple, so let me confuse things by adding another:

A tuple is an element drawn from the cartesian product of its member types.

What this means is that, unlike boring old-fashioned ordered collections, a tuple contains a known number of things, each of a known type. This is useful for the vexing question of signalling errors, as you can just return something like (Result?, Error?), as an industrialised society ought to be doing. It also means that almost every situation in which a method’s interface accepts or returns a collection can be replaced with one in which it accepts a known number of things of known type.
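Here is a minimal Swift sketch of that shape, with hypothetical names throughout:

```swift
// Tuple-as-error-signal: the cartesian product (Value?, Failure?) names
// the cases we care about: (value, nil) on success, (nil, failure) on error.
struct ParseFailure { let reason: String }

func parseAge(_ text: String) -> (Int?, ParseFailure?) {
    guard let age = Int(text) else {
        return (nil, ParseFailure(reason: "'\(text)' is not a number"))
    }
    return (age, nil)
}

let (age, failure) = parseAge("42")
if let age = age {
    print("age is \(age)")              // prints "age is 42"
} else if let failure = failure {
    print("failed: \(failure.reason)")
}
```

Of course the tuple also admits (nil, nil) and a value alongside a failure, which is exactly the looseness an enum-based result type would close off; even so, the tuple beats a bare sentinel return code.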

Anyway, for the most part our programming languages allow us to accidentally introduce problems that we shouldn’t need to solve; the biggest problems we actually have to solve lie elsewhere.

Posted in code-level, nearly linguistics

It doesn’t take an Oracle to see that coming

Today has largely been brought to you by nostalgia, prompted by this article reporting on a get-together of former Sun Microsystems employees.

I have never been a former Sun Microsystems employee, and of course now I never will be one. Of all the tech companies I’ve interacted with, Sun is the one I most regret not getting to work with. By the time I dealt with them, they had already put the “crash” in “dot-com crash” but there was still a feeling that they made great things. And besides, they showed that even a pony-tailed Objective-C programmer can be a tech CEO.

I recently talked about the importance of GNU projects, but plenty of other software projects were also important, and Sun had a hand in quite a few of them:

  • Bill Joy worked for them, and most of their early workstation operating systems were based on BSD Unix.
  • In fact while Apollo may have invented the idea that a single person might use a Unix computer, Sun popularised it.
  • I learned how to boot Macs by learning how to program Forth and boot Suns.
  • NFS was the beginning of the separation between your device and your documents.
  • NIS was a bit of an important step on the way to logging in anywhere (its level of baroqueness compared to OAuth has never been accurately gauged).
  • In fact, they pretty much invented cloud computing.
  • Java was quite a big thing for a while.
  • DTrace is pretty amazing.
  • They even got into standard Unix workstation vendor capitalisation for a while.

It’s likely that much of the interesting stuff at Sun was already over by the time I could’ve worked there, and I certainly experienced a very last-minute replay of some of their history. When I was a student I ‘borrowed’ an Ultra 5 (one of their least good workstations, pretty much a PC with sun4u SPARC innards) and a SPARCstation 5 (one of their most good) to learn about Solaris, SunOS and NeXTSTEP. But it certainly feels like a lot of the future was invented there, even if they were largely following Xerox’s playbook like the rest of the industry.

So tonight, I’ll remember that my control key is in the correct place:

[Image: Sun Type 5 keyboard]

I’ll press L1 and A, then raise a glass to Sun and the job I never had.

Posted in Business, Java, UNIX

Is TDD Dead? My questions

These are my questions for parts 5 and 6 of the Is TDD Dead? series. I’d like to start by thanking the panellists for publishing their discussions.

TDD the Principle

Kent and Martin, why is it that you practise test-driven development? What do you get from it? David, how do you get those same things?

What could change about the way we write software to make TDD redundant or obsolete for Kent and Martin? What could change about the way that TDD is performed to make it useful or beneficial for David?

TDD the Practice

David, is there a way in which we could retain the test-first idea of TDD, but avoid the design problems you encounter in the production code? Are you able to distil a short list of design guidelines that avoid your “design damage”? Kent and Martin, how could TDDers follow those guidelines in their work?

TDD the Community

How have all three of you felt about the community reaction to this discussion? What has been good? What has been bad? What has been missing? Is there missing information the community could supply to help developers evaluate and choose or discard TDD for their work? Are there questions developers should ask before choosing whether to adopt or avoid TDD that you believe are not being raised?

Posted in TDD

The lighter side of open source

In a recent post I talked about the apolitical, amoral nature of open source software and how it puts the interests of a small programming class before the interests of the broad collection of people who interact with programmers’ output. The open source movement has been of great benefit to the software industry, and this hasn’t necessarily been a zero-sum game.

Reality is always more nuanced than history, and yet here is a potted guide to open source history. In the beginning, there were military computers. There was no-one else to share your computer programs with, because:

  1. no-one else had a computer.
  2. well, maybe they did, but they weren’t telling you.
  3. you didn’t want to tell anyone else you had a computer.

Then there were academic computers. Now you do want to share your programs with everyone, and they share theirs with you and so everyone is on the cutting edge.

Then there were commercial computer companies (I told you this history would lack nuance), who were happy to share their programs with you because it meant you could get more out of the computers they were selling.

Then there were commercial computer companies who decided that the source code to the programs used to interface with their hardware was their competitive advantage, and decided to stop sharing it. This made an academic (Richard Meriadoc (humour me) Stallman) sad, and so he created the Free Software movement to:

  1. promote sharing of software over not sharing software;
  2. subvert the copyright system, usually used to restrict sharing, to enable sharing instead.

Then there were people who wanted to use Free Software in their day jobs but found that the movement was considered too ideological to be palatable to management, so they rebranded it Open Source Software to re-frame the discussion along business, rather than political, lines.

This is about the point when your protagonist enters, stage right. The dot-com bubble was imploding, leading to changed fortunes for all sorts of people and organisations in the software industry. Everything I would do regarding professional computing depended in some way on the GNU project and the Free Software Foundation:

  1. I learned Unix, thanks to the ability to inexpensively run GNU/Linux on my desktop computer.
  2. The things I learned about Unix, C programming and so on were portable to various platforms beyond GNU/Linux, thanks to the GNU compiler collection, GNU bash, GNU make, GNU debugger and others.
  3. One such platform was Mac OS X, the new hotness from Apple. This was a technology acquired through the purchase of NeXT, who had been able to provide a complete programming environment despite their small size and (comparatively) small budget by wrapping the tools listed above.

Somewhere in all the above I even found it possible to get paid for writing software: a GPLv2-licensed Lisp package for GNU Emacs.

Of course, that’s just my story, but there are plenty like it. Many other programmers work on platforms like iOS, or Android, or Linux, or in environments like Ruby or Objective-C, that either only exist or have only become as successful as they have due to the successes of the Free Software Foundation, and the ability for organisations (commercial or otherwise) to take advantage of Free or Open Source software as building blocks which they can combine or add to.

Since then, the discussion has again been re-framed. Open Source – originally a branding change to make Free Software acceptable to business – has become a principle rather than a tool. A community that owes its financial viability to Free Software now denounces such “viral” licences, as source released under their conditions is harder to profit from than the more permissive, university-style Open Source licences.

Software writers in the 1980s liked to talk about how object technology would be the silver bullet that allowed re-use and composition of software systems, moving programming from a cottage industry where everyone makes everything from scratch to a production-line enterprise where standard parts fit together to provide a base for valuable products. It wasn’t; the sharing-required software licence was.

Posted in Business, economics