The Vexing Problems in Programming

I admit it: I’ve been on the internet for quite a while (I could tell you that my ICQ number is 95941970, but I haven’t logged in for years) and my habits haven’t changed. I still regularly get technology news from slashdot, and today was no exception. An interesting article was Here Be Dragons: The 7 most vexing problems in programming. Without wanting to spoil the article for you by giving away the punchline, there are indeed some frustratingly difficult problems mentioned: multithreading, security and encryption are among them.

All of these problems are sideshows to what I see as one of the largest and most vexing issues in programming: the fundamental rule of business administration is that your income should be greater than your costs, but software makers still, by and large, don’t have a way to compare the expected value of their work to the expected cost of that work.

The problem in space

Different software teams – and individuals – do work in different contexts, and in different ways. The lone wolf micro-ISV is not the same as an individual contract developer. The in-house IT team does not have the same problems to solve as the shrinkwrapped software vendor, and those developing web services for public consumption have yet another context. The team with core hours all working in a single office is different from the distributed team inhabiting multiple time zones.

How much of this variety is essential, and how much is accidental? How much of it is relevant, when considering some intervention, process change, or technique? Consultants speaking at conferences (another context, with its own similarities to and differences from the others) don’t tend to talk about what researchers in fields such as psychology would recognise as “threats to validity” of their work. But given all of the ways in which software is made, we need to know whether some proposal applies to all of them or only to some; whether it has been applied to some of them and might yet be applied to others; and what would assist or confound that application.

The problem in time

What are the accepted, tested and validated ways to identify who will be using and otherwise impacted by our software systems? To know whether they can use the system we propose, and whether it is the best system for their intended use? To ensure that our proposed software systems treat those people ethically? To understand the cost (to ourselves and to others) of constructing those systems? To deliver the systems to the people who will interact with them? To choose which people are or aren’t entitled to access? To build a representation of the problem to be solved, to validate that representation, to validate the solution against that representation?

Where an answer exists to those questions, what are the contexts in which it is valid and what are the threats to its validity? How has that answer been compared with other possibilities? How has it been confirmed? How has it been challenged? How can I find out about those confirmations and challenges? How can I find out about any alternatives? What techniques exist to weigh up those alternatives quantitatively, rather than relying merely on the persuasiveness of the conference speakers promoting those solutions (and, by the way, the books/screencasts that describe the solutions)?

The lack of a problem

Why should I care? There’s enough money in software at the moment to mean that I don’t need to be any good at knowing what works or doesn’t work, I just need to get out there and sell some software. In the rare situation that I don’t make my money back, that’s just the market forces at work, and I can go and get a high-paying job somewhere while I lick my wounds, and pick another programming language/framework/platform/whatever it is that’s going to make my next attempt definitely succeed.

Clearly, this bottomless pit of money that arises from society’s unwavering faith in software and its ability to cure all ills is never going to run out. There’s no need to worry ever about whether we’re doing it right, because there’ll always be someone out there willing to pay for us to do it wrong. Life as a programmer is like some kind of socialist utopia where whether we’re making a valuable contribution or not, the rest of society is looking out for us.

That’s going to last forever, right?

In which I interview so you don’t have to

Describing job interviews for technical roles in the software industry to people who have left, or have always been outside, the software industry requires two things: patience on the part of the one doing the describing, and the ability of the listener to take a joke. Over the last twelve years I have taken countless job interviews so that you don’t have to. Here’s what I’ve found, presented as a guide to running the average software developer interview. As with all descriptions of mediocrity, you should treat this as best practice.

[Be clear on this: not all interviews are like this. But this is an expectable baseline, derived from experience.]

Person Specification

The ideal candidate will be rich. We’re going to put them through hours – maybe even days – of tests, interviews, meetings, and “informal chats” that they’d better be on best behaviour for anyway. They need to be able to afford taking that time away from work, friends, other opportunities, so they’d better be rich.

That multiple-hour interview process means that they’d better be desperate for a job too. As you’ll find out in the section on our process, we pride ourselves on not giving away too much. We’re not selling our company to you, because we know we’re offering the chance to do what you’ve always wanted: sit in our open plan office space next to our own particular loud crisp-eater muttering at Eclipse.

The ability to go without food is desirable too. Even if a stage of the interview is planned to take so long that it would go over lunch, and even though we might put a break for lunch in, we might also forget to do any catering. Computers don’t need food and programmers are sort of like computers, we heard. We actually occasionally do feed our staff, and advertise this as a perk.

Our Process

The first thing we want to check is whether you can solve logical problems. We don’t actually need you to solve logical problems; after all, that’s what the computers are for. But we’ll give you an aptitude/basic reasoning test anyway [yes, although it’s no longer the 1960s and we aren’t IBM, this is still common if not universal].

The reasoning test is there to weed out people who didn’t have the same education as us, or were raised speaking a different language, or in a different culture. Empathy is hard, and to avoid unduly stressing our staff we want to make sure that their colleagues are as similar to them as possible. Additionally, the hour you’ll take going through this test is an hour in which we don’t have to make eye contact or conversation with you: empathy is hard.

To be honest we have no idea what this test means or how to interpret its results. Everybody before you went through this test, and they’d raise merry hell if we “lowered the bar” by removing it now. As a holacracy/meritocracy/hypocrisy/this week’s organisational behaviour buzzword, we empower our employees to not see any changes that might raise a small amount of discomfort.

So after that test, depending on the seniority of the position and the candidate’s experience, we’ll…no, not really. We did nearly keep a straight face through that sentence though. In fact we didn’t read your CV, except to find out whether it contains the keywords that describe the problems we have right now and the solutions we chose last week. We didn’t read your GitHub/Lanyrd/Bitbucket profiles either, except to check that you have them so we know how much free work to expect out of you in addition to the paid stuff. Our project management system works on the Pareto Principle: 80 hours a week on our stuff, 20 hours a week on open source stuff that we can co-opt.

The next stage in the process is actually the same for everybody: a basic programming test to find out whether you even know what a computer is. We don’t care that you’re [glances at CV] Grace Hopper, we still don’t believe that you can reverse a linked list. None of our employees has ever had to reverse a linked list on the job, and we’d fire them if they did reverse a linked list on the job because there are libraries for that.
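
[For the record, the party trick in question is three pointers and a loop; here’s a whiteboard-ready sketch in the plain C that underlies Objective-C, with a node layout of my own invention:]

    // Reverse a singly-linked list in place.
    struct node { int value; struct node *next; };

    static struct node *reverse(struct node *head) {
        struct node *previous = NULL;
        while (head != NULL) {
            struct node *next = head->next; // remember the rest of the list
            head->next = previous;          // point this node at what came before
            previous = head;                // this node is the new head so far
            head = next;                    // step along the original list
        }
        return previous;                    // the old tail, now the head
    }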

Now we’ll come onto the technical interview: a cross-examination by a panel of between one and twelve [not joking] people who have, or have had, a word like “engineer” in their job description at some point. These people are tasked with finding out whether you’ve solved the same problems in your career as they have in theirs. If you haven’t, you might not be clever enough. If you have, then what new experiences are you bringing to the table?

By the way, our flexibility on your technical skills will go down as you become more experienced. We appreciate that new grads might not have used our tools/frameworks/technology and are willing to train them, but if you have more than six months’ experience with Java we’re going to call you a Java developer and only consider you for Java roles.

After all of that, it’s still possible that you might have somehow snuck through the system despite not going to the same university or belonging to the same society as the founder. We can’t really quantify the idea of “culture fit” but that’s what we’re examining in the next part of the process and we’ll know it when we don’t see it.

The Offer

You’ll get a phone call from us while you’re in the bath. We’ll outline the position, pay and (unless this is an American company and there isn’t any) holiday provision. You then have two seconds in which to reply, with either “Yes” or whatever the other one is. You may have other irons in the fire but of course you’ll want to drop all of those when we tell you about the parking space we’ve already allocated for you [This has happened. I don’t have a car.].

The Job

You will be working with a team of people who all went through that same interview and decided they wanted to work in our environment. We will leave it to you to decide what that means.

The Alternative

There are some less…scientific…approaches to hiring that involve using the candidate’s stated and visible experience to have a conversation about what they’ve done, how they do and don’t like to work, how they’ve responded to success and failure, and whether the challenges they would like to see in their career match up with the environment we’re able to provide. While that sounds like quite a pleasant experience for everybody involved, we fail to see how it could possibly translate into discovering whether we want to work with you or vice versa.

On running out of words

John Gruber’s subscription to Wiktionary expired:

At just 20 percent of unit sales, Apple isn’t even close to a monopoly. At 92 percent profit share, they have a market dominance that rivals any actual monopoly the tech industry has ever seen. We don’t even have a term for this situation, it’s so unusual.

We do have a term: monopoly will do just fine. Gruber says that Apple “isn’t even close to a monopoly”, but you don’t need all, or even most, of the unit sales in a market to be able to act monopolistically. An entity (or a cabal) only needs a big enough share of the sales to be able to set prices independently of the other competitors in the market. (Working at big telecoms companies teaches you the specifics of market economics, but then so did those economics classes I took at university.)

That 92% profit on 20% sales is indicative, rather than contraindicative, of a monopoly. And there’s another word we could use, too: monopsony. Let’s say that you’ve made an iOS app, and now you want to sell it. Do you create a storefront on your website to do that? Do you contact Sears and see how many boxes they want? Speak to some third-party distributor? No: you can only sell to Apple, because they are the only buyer for iOS apps.

The thing it’s important to remember about monopolies or monopsonies is that they are not inherently bad: badness happens when an entity uses its dominant position in a market to set prices or other terms that are not considered fair, and that’s a pretty woolly situation. When the one buyer in your market decides that your contribution is “amateur hour” (sucks to be a hobbyist, I guess), or that your content is “over the line”, and doesn’t want to buy your product, you have no other buyers to sell it to: is that fair?

This is an argument that relies on too much legal detail and nuance for a novice to answer, so I’ll spare you my “amateur hour” pontification. I would imagine that a legal system that did explore this question would consider analogous environments, like the software market of the 1990s. Back then, Microsoft bundled a web browser and a media player with their operating systems and used their market power as an operating system vendor (which let them act as a monopoly even though competitors existed) to make it hard to sell competing browsers or media players. It might be an interesting thought experiment to compare that situation with today’s.

The other pink dollar

How did (a very broad and collective) we go from selling NeXT at $440M to selling Tumblr at $1.1B, in under two decades? Why was Sun Microsystems, one of the most technologically advanced companies in the valley, only worth two Nests?

I don’t think we’re technologists (much) any more. We’ve moved from building value by making interesting, usable and advanced technology to building value by solving problems for people and making interesting, useful and advanced experiences. The good thing about a NeXT workstation was that it was better than other workstations and minicomputers; the bad thing is that you can’t actually do a lot with a Unix workstation. You need applications, functions for turning silicon-rich paperweights into useful tools.

NeXT’s marketing was “it’s easier to turn our paperweight into a tool than their paperweight”, but today’s tech companies are mostly making things you can already use. The technology is a back-office function, enabling the things you can already use or working around problems the companies discovered in trying to enable those useful things. Making the paperweight with the most potential is no longer interesting to most of the industry, though there will be money in paperweights for a few years yet (even if the paperweights are becoming small enough that they can’t weigh down paper, and even if no-one has a stack of paper to weigh down any more).

Hence the current focus on “disruption”, in the Silicon Valley sense rather than the Clayton Christensen sense. It’s easy to see how an already-solved problem can be solved faster, cheaper or better, by taking out intermediate steps or slow communications.

This is traditional science-fiction advancement. What technology do you need to get the plot moving quickly? Two people need to talk but they’re not on the same planet: you need a mobile phone. Two people need to be in the same room but they’re not on the same planet: you need a teleporter. Two people need to share the specifications of a starship but there isn’t enough paper in the universe: you need a PADD.

It’s harder to identify solutions to unsolved problems, or solutions to unknown problems. This hasn’t changed since the paperweight days; the transition has been from “well, I guess you can find some problem to solve with this workstation” to “we solved this obvious problem for you”. That’s an advance.

It really is good that we’re moving from building things that could potentially solve a problem to things that definitely do solve a problem. That’s more efficient, as fewer people are solving the same problem. Consider the difference between every company buying a Sun workstation and hiring a programmer to write a CRM application, and every company paying someone else to deliver them a CRM system.

It’s also likely the reason for the rise in open-source software, hardware, data centres and related infrastructure. Nobody’s making railroads any more, they’re making haulage companies that use railroads to solve the problem of “you’re in Chicago, Illinois but your crate full of machinery is in a port in Seattle, Washington”. There’s lots of cost in having the best rails, but questionable benefit, so why not share the blueprints for the rails so that anyone can improve them?

Well, what if it turns out that the best way to haul your goods is not on rails? What if it’s easy to accept better rails, but the best solution lies in a different direction? Alan Kay would recognise this problem: everyone can see incremental improvements in the pink plane, but there are orders-of-magnitude improvements to be made by getting out into the blue plane.

How can a blue plane venture get funded, or adopted? When was the last time it happened? Is there something out there already that just needs adoption to get us onto the blue plane? Or have we set up a system that makes it easy to move quickly on the pink plane, but not to change direction to the blue plane?

It doesn’t take an Oracle to see that coming

Today has largely been brought to you by nostalgia, prompted by this article reporting on a get-together of former Sun Microsystems employees.

I have never been a former Sun Microsystems employee, and of course now I never will be one. Of all the tech companies I’ve interacted with, Sun is the one I most regret not getting to work with. By the time I dealt with them, they had already put the “crash” in “dot-com crash” but there was still a feeling that they made great things. And besides, they showed that even a pony-tailed Objective-C programmer can be a tech CEO.

I recently talked about the importance of GNU projects, but plenty of other software projects were also important, and Sun had a hand in quite a few of them:

  • Bill Joy worked for them, and most of their early workstation operating systems were based on BSD Unix.
  • In fact while Apollo may have invented the idea that a single person might use a Unix computer, Sun popularised it.
  • I learned how to boot Macs by learning how to program Forth and boot Suns.
  • NFS was the beginning of the separation between your device and your documents.
  • NIS was a bit of an important step on the way to logging in anywhere (its level of baroqueness compared to OAuth has never been accurately gauged).
  • In fact, they pretty much invented cloud computing.
  • Java was quite a big thing for a while.
  • DTrace is pretty amazing.
  • They even got into standard Unix workstation vendor capitalisation for a while.

It’s likely that much of the interesting stuff at Sun was already over by the time I could’ve worked there, and I certainly experienced a very last-minute replay of some of their history. When I was a student I ‘borrowed’ an Ultra 5 (one of their least good workstations, pretty much a PC with sun4u SPARC innards) and a SPARCstation 5 (one of their most good) to learn about Solaris, SunOS and NeXTSTEP. But it certainly feels like a lot of the future was invented there, even if they were largely following Xerox’s playbook like the rest of the industry.

So tonight, I’ll remember that my control key is in the correct place:

[Image: Sun type 5 keyboard]

I’ll press L1 and A, then raise a glass to Sun and the job I never had.

The lighter side of open source

In a recent post I talked about the apolitical, amoral nature of open source software and how it puts the interests of a small programming class before the interests of the broad collection of people who interact with programmers’ output. The open source movement has been of great benefit to the software industry, and this hasn’t necessarily been a zero-sum game.

Reality is always more nuanced than history, and yet here is a potted guide to open source history. In the beginning, there were military computers. There was no-one else to share your computer programs with, because:

  1. no-one else had a computer.
  2. well, maybe they did, but they weren’t telling you.
  3. you didn’t want to tell anyone else you had a computer.

Then there were academic computers. Now you do want to share your programs with everyone, and they share theirs with you and so everyone is on the cutting edge.

Then there were commercial computer companies (I told you this history would lack nuance), who were happy to share their programs with you because it meant you could get more out of the computers they were selling.

Then there were commercial computer companies who decided that the source code to the programs used to interface with their hardware was their competitive advantage, and decided to stop sharing it. This made an academic (Richard Meriadoc (humour me) Stallman) sad, and so he created the Free Software movement to:

  1. promote sharing of software over not sharing software;
  2. subvert the copyright system usually used to restrict sharing, in order to enable sharing.

Then there were people who wanted to use Free Software in their day jobs but found that the movement was considered too ideological to be palatable to management, so they rebranded it Open Source Software to re-frame the discussion along business, rather than political, lines.

This is about the point when your protagonist enters, stage right. The dot-com bubble was imploding, leading to changed fortunes for all sorts of people and organisations in the software industry. Everything I would do regarding professional computing depended in some way on the GNU project and the Free Software Foundation:

  1. I learned Unix, thanks to the ability to inexpensively run GNU/Linux on my desktop computer.
  2. The things I learned about Unix, C programming and so on were portable to various platforms beyond GNU/Linux, thanks to the GNU compiler collection, GNU bash, GNU make, GNU debugger and others.
  3. One such platform was Mac OS X, the new hotness from Apple. This was a technology acquired through the purchase of NeXT, who had been able to provide a complete programming environment despite their small size and (comparatively) small budget by wrapping the tools listed above.

Somewhere in all the above I even found it possible to get paid for writing software: a GPLv2-licensed Lisp package for GNU Emacs.

Of course, that’s just my story, but there are plenty like it. Many other programmers work on platforms like iOS, or Android, or Linux, or in environments like Ruby or Objective-C, that either only exist or have only become as successful as they have due to the successes of the Free Software Foundation, and the ability for organisations (commercial or otherwise) to take advantage of Free or Open Source software as building blocks which they can combine or add to.

Since then, the discussion has again been re-framed. Open Source – originally a branding change to make Free Software acceptable to business – has become a principle rather than a tool. A community that owes its financial viability to Free Software now denounces such “viral” licences, as source released under their conditions is harder to profit from than the more permissive, university-style Open Source licences.

Software writers in the 1980s liked to talk about how object technology would be the silver bullet that allowed re-use and composition of software systems, moving programming from a cottage industry where everyone makes everything from scratch to a production-line enterprise where standard parts fit together to provide a base for valuable products. It wasn’t; the sharing-required software licence was.

Laggards don’t buy apps: devil’s advocate edition

Silky-voiced star of podcasts and all-round nice developer person Brent Simmons just published a pair of articles on dropping support for older OS releases. His argument is reasonable, and is based on a number of axioms including this one:

  • People who don’t upgrade their OS are also the kind of people who don’t buy apps.

Sounds sensible. But here’s another take that also sounds sensible, in my opinion.

  • People who don’t upgrade their OS are also the kind of people who don’t like having to computer instead of getting their stuff done.

Let’s explore a world in which that axiom is true. I’m not saying it is true, nor that Brent’s is false: nor am I saying that his is true and that mine is false. I’m saying that there’s an open question, and we can investigate multiple options (or maybe even try to find some data).

Developers are at the extreme end of a range of behaviours, which can be measured in a single dimension that’s glibly called “extent to which individual is willing to mess about with a computer and consider the time spent valuable”. The “only upgraders buy apps” axiom can be seen as an extension of the idea that all changes to a computer fit onto the high end of that dimension: if you’re willing to buy an app, then you’re willing to computer. If you’re willing to computer, then you’re willing to upgrade. Therefore anyone who wants to sell an app is by definition selling to upgraders, so you can reduce costs by targeting the latest upgrade.

Before exploring the other option, allow me to wander off into an anecdote. For a few years between about 2004 and 2011 I was active in my (then) local Mac User Group, including a year as its chair. There were plenty of people there who were nearer the middle of the “willingness to mess with a computer” spectrum, who considered messing with upgrades and configuration a waste of time and often a way to introduce unwanted risk of data loss, but nonetheless were keen to learn about new ways to use their computers more efficiently. To stop computering, and start working.

Many of these people were, it is true, on older computers. The most extreme example was a member who to this day uses a PowerBook G3 Pismo and a Newton MessagePad 2100. He could do everything he needed with those two computers. But that didn’t stop him from wanting to do it with less computer, from wanting to optimise his workflow, to find the latest tips and tricks available and decide whether they got him where he was going more efficiently.

As I said, that example was extreme. There were plenty of other people who only bought new computers every five years or more, but were still on the latest versions of apps like Photoshop, QuarkXPress, or iWork where they could be, and whenever new ones got released the meeting topic would be to dig into the new version (some brave soul would have it on day one) to see whether they could do things better, or with less effort.

These people were paying big money for big software. Not because it was the newest, or because they had to be on “latest and [as we like to claim] greatest”, but because it was better suited to their needs. It gave them a better experience. So, bearing in mind that this is a straw-man for exploration purposes, let me introduce the hypothesis that defends the straw-man axiom presented above:

  • A delightful user experience means not making people mess about with computer stuff just to use their computers.
  • There are plenty of people out there who would rather get something that lets them work more effectively than waste time on upgrades.
  • To those people, spending money on software that gets them where they are going is a better investment than any amount of time and anxiety spent on messing with settings including operating system upgrades.
  • These people represent the middle of the spectrum: not the extreme low end where you never change anything once you’ve bought the computer; nor the extreme high end where fiddling with settings and applying upgrades is considered entertainment.
  • Therefore, the low price of “latest and greatest” software reflects, at least in part, the externalisation onto mid-range tinkerers of the (time-based) costs and risks associated with upgrading.
  • Because of this, while the incremental number of users associated with supporting earlier OS versions may not be great, the incremental value per user may be much higher than gaining users with low-price apps on the current operating systems.

As I say, interesting food for thought, but not necessarily any more (or less) true than the view presented in Brent’s posts. Please have your pinch of salt ready, and don’t bet your business on the thoughts of this idle blogger.

Story points: because I don’t know what I’m doing

The scenario

[Int. developer’s office. Developer sits at a desk that faces the wall. Two of the monitors on Developer’s desk are on stands; if you look closely, you see that the third is balanced on the box set of The Art of Computer Programming, which is still in its shrink-wrap. Developer notices you and identifies an opportunity to opine about why the world is wrong, as ever.]

Every so often, people who deal with the real world instead of the computer world ask us developers annoying questions about how our work interacts with so-called reality. You’re probably thinking the same thing I do: who cares, right? I’m right in the middle of a totally cool abstraction layer on top of the operating system’s abstraction layer that abstracts their abstraction so I can interface it to my abstraction and abstract all the abstracts, what’s that got to do with reality and customers and my employer and stuff?

Ugh, damn, turning up my headphones and staring pointedly at the screen hasn’t helped, they’re still asking this question. OK, what is it?

Apparently they want to know when some feature will be done. Look, I’m a programmer, I’m absolutely the worst person to ask about time. OK, I believe that you might want to know whether this development effort is going to deliver value to the customers any time soon, and whether we’re still going to be ahead financially when we’re done, or whether it’d be better to take on some other work. And really I’d love to answer this question, except for one thing:

I have absolutely no idea what I’m doing. Seriously, don’t you remember all the other times that I gave you estimates and they were way off? The problem isn’t some systematic error in the way I think about how long it’ll take me to do stuff, it’s that while I can build abstractions on top of other abstractions I’m not so great at going the other way. Give me a short description of a task, I’ll try and work out what’s involved but I’m likely to miss something that will become important when I go to do it. It’s these missed details that add time, and I don’t know how many of those there will be until I get started.

The proposed solution

[Developer appears to have a brainwave]

Wait, remember how my superpower is adding layers of abstraction? Well your problem of estimation looks quite a lot like a nail to me, so I’ll apply my hammer! Let’s add a layer of abstraction on top of time!

Now you wanted to know how long it’ll take to finish some feature. Well I’ll tell you, but I won’t tell you in units of hours or days, I’ll use BTUs (Bullshit Time Units) instead. So this thing I’m working on will be about five BTUs. What do you mean, that doesn’t tell you when I’ll be done? It’s simple, duh! Just wait a couple of months, and measure how many BTUs we actually managed to complete. Now you know how many BTUs per day we can do, and you know how long everything takes!

[Developer puts their headphones back in, and turns to face the monitor. The curtain closes on the scene, and the Humble(-ish) Narrator takes the stage.]

The observed problem

Did you notice that the BTU doesn’t actually solve the stated problem? If it’s possible to track BTU completion over time until we know how many BTUs get completed in an iteration, then we are making the assumption that there is a linear relationship between BTUs and units of time. Just as there are 40 (or 90, if you picked the wrong recruiter) hours to the work week, so there are N BTUs to the work week. A BTU is worth x hours, and we just need to measure for a bit until we find the value of x.
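
To spell out the sleight of hand: “measure for a couple of months, then forecast” can only work if a BTU is hours wearing a disguise. Here’s a minimal sketch of the model it assumes (in Objective-C for familiarity; the numbers and function names are mine, purely illustrative, and not taken from any real tracking tool):

    #import <Foundation/Foundation.h>

    // The linear model hidden inside "velocity": a BTU is worth some fixed
    // x hours, and "measuring for a bit" just estimates x by observation.
    static double HoursPerBTU(double btusCompleted, double hoursElapsed) {
        return hoursElapsed / btusCompleted;
    }

    static double ForecastHours(double btusRemaining, double hoursPerBTU) {
        // Only meaningful if the BTU-to-time relationship really is linear.
        return btusRemaining * hoursPerBTU;
    }

    int main(void) {
        double x = HoursPerBTU(40.0, 320.0); // observed: 40 BTUs took 320 hours, so x = 8
        NSLog(@"five BTUs = %.0f hours", ForecastHours(5.0, x)); // prints 40 hours
        return 0;
    }

If that arithmetic holds, the BTU has added nothing that hours didn’t already give you; if it doesn’t hold, no amount of measuring x will fix it.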

But Developer’s problem was not a failure to understand how many hours there are in an hour. Developer’s problem was a failure to know what work is outstanding. An inability to foresee what work needs to be done cannot be corrected by any change to the way in which work to be done is mapped onto time. It is, to wear out even further an already tired saw, an unknown unknown.

What to do about it

We’re kind of stuck, really. We can’t tell how long something will take until we do it: not because we’re bad at estimating how long it’ll take to do something, but because we’re bad at knowing what it is we need to do.

The little bit there about “until we do it” is, I think, what we need to focus on. I can’t tell you how long something I haven’t done will take, but I can probably tell you what problems are outstanding on the thing I’m doing now. I can tell you whether it’s ready now, or whether I think it’ll be ready “soon” or “not soon”.

So here’s the opportunity: we’ll keep whatever we’ve already got ready for immediate release. We’ll share information about which of the acceptance tests are passing, and if we were to release right now you’d know what customers will get from that. Whatever the thing we’re working on now is, we’ll be in a position to decide whether to switch away if we can do some more valuable work instead.

Conflicts in my mental model of Objective-C

My worldview as it relates to the writing of software in Objective-C contains many items that are at odds with one another. I either need to resolve them or to live with the cognitive dissonance, gradually becoming more insane as the conflicting items hurl one another at my cortex.

Of the programming environments I’ve worked with, I believe that Objective-C and its frameworks are the most pleasant. On the other hand, I think that Objective-C was a hack, and that the frameworks are not without their design mistakes, regressions and inconsistencies.

I believe that Objective-C programmers are correct to side with Alan Kay in saying that the designers of C++ and Java missed out on the crucial part of object-oriented programming, which is message passing. However I also believe that ObjC missed out on a crucial part of object-oriented programming, which is the compiler as an object. Decades spent optimising the compile-link-debug-edit cycle have been spent on solving the wrong problem. On which topic, I feel conflicted by the fact that we’ve got this Smalltalk-like dynamic language support but can have our products canned for picking the same selector name as some internal secret stuff in someone else’s code.
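
For anyone who mainly knows C++ or Java, here’s a minimal sketch of what that message-passing difference looks like in practice (the class and selector names are mine, purely illustrative): the receiver decides at run time whether and how to respond, rather than the compiler binding the call through a static type.

    #import <Foundation/Foundation.h>

    @interface Greeter : NSObject
    - (void)greet;
    @end

    @implementation Greeter
    - (void)greet { NSLog(@"Hello from a dynamically dispatched message"); }
    @end

    int main(void) {
        @autoreleasepool {
            id someObject = [Greeter new];
            // The selector could come from anywhere: a string, a plist, the network.
            SEL message = NSSelectorFromString(@"greet");
            if ([someObject respondsToSelector:message]) {
                [someObject performSelector:message]; // ARC warns about unknown selectors; harmless for void
            }
        }
        return 0;
    }

It’s exactly this by-name, run-time resolution that puts every selector in one shared namespace, which is presumably why picking the same name as somebody else’s internal secret stuff can get a product canned.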

I feel disappointed that in the last decade, we’ve just got tools that can do the same thing but in more places. On the other hand, I don’t think it’s Apple’s responsibility to break the world; their mission should be to make existing workflows faster, with new excitement being optional or third-party. It is both amazing and slightly saddening that if you defrosted a cryogenically-preserved NeXT application programmer, they would just need to learn reference counting, blocks and a little new syntax and style before they’d be up to speed with iOS apps (and maybe protocols, depending on when you threw them in the cooler).

Ah, yes, Apple. The problem with a single vendor driving the whole community around a language or other technology is that the successes or failures of the technology inevitably get caught up in the marketing messages of that vendor, and the values and attitudes ascribed to that vendor. The problem with a community-driven technology is that it can take you longer than the life of the Sun just to agree how lambdas should work. It’d be healthy for there to be other popular platforms for ObjC programming, except for the inconsistencies and conflicts that would produce. It’s great that GNUstep, Cocotron and Apportable exist and are as mature as they are, but “popular” is not quite the correct adjective for them.

Fundamentally I fear a world in which programmers think JavaScript is acceptable. Partly because JavaScript, but mostly because of the adoption pattern: a language is introduced, people avoid it for ages, and then they start using it just because some CEO says all future websites must use it. That’s not healthy. Objective-C was introduced and people avoided it for ages; then, just because some CEO said all future apps must use it, they started using it.

I feel like I ought to do something about some of that. I haven’t, and perhaps that makes me the guy who comes up to a bunch of developers, says “I’ve got a great idea” and expects them to make it.

What’s the mobile app market up to, then?

While this post is obviously motivated by Recent Events™, it’s got nothing at all to do with employers past, present or future. Dave has posted what next for Agant, which explains how that company’s path through the market has gone:

Over the past few years, the App Store has become more and more competitive, and more and more risky with it. Agant’s speciality has been high-quality, higher-value apps, often published in collaboration with our clients. Typically these are paid (rather than free or freemium) apps. Unfortunately, the iOS App Store’s set-up just does not seem to support the discovery, trialling and long-term life of these kinds of high-value apps, making it difficult to justify the risk of their development.

This is not that story. This is my story. It is a different story, though I agree with the paragraph above. It’s a story that doesn’t discuss games because I really don’t know a lot about them.

Something I’ve learned from going to conferences like QCon is that outside the filter bubble of the ObjC conferences I spend a lot of time in, there’s a lot more interest in “the mobile web” (or as we should probably call it these days, “the web”) in the general IT community. This makes sense in the enterprise world: it avoids backing a single horse and tying your company’s IT to one supplier, something they’re rightfully afraid of. Companies that were in the Microsoft camp had to deal with Vista and Windows 8; companies that backed Sun are now Oracle vassals; companies that backed Apple no longer have any servers. Given that mindset, developing JavaScript apps makes perfect sense. Even if you deliver them now as Cordova apps for a single platform, you’ve got the ability to do something else really quickly if you need to.

This is also something that’s carried over into the world of SaaS apps, where you don’t care what UI people are looking at as long as they subscribe to your service. Whether it’s delivered as a native-wrapped JS app (which is a first-party option for Windows Phone 8 and Windows 8) or a web app (which then lets you add platforms like Chrome OS and Firefox OS), targeting JavaScript lets these developers increase their prospective customer bases from a single code base. Not, perhaps, without some rework of views for different platforms: but certainly without maintaining separate Objective-C, Java and C# projects.

While I’m talking about JavaScript, let me add another relevant datum, particularly for companies working in or with the publishing industry: another word for a bundled JS app is “iBook”.

I think there are also still reasons for having native apps.

Some people want the “most ${platform}-like” experience, and are willing to pay for that. These are, quite frankly, the people who kept Mac software houses going through the 1990s. They’re the people who demanded Cocoa versions of their Carbon apps in the 2000s. You can focus on these people, ignoring the “should be free” masses and getting to the sort of people who buy the Which iPad Format User app of the month because it was the app of the month.

People who have invested money or time into something may be willing to spend a bit in order to increase the value of that investment. This is going to cover both tradespeople and hobbyists. Look at how much you can sell golf swing software for. One of my own hobbies is astronomy: having spent around a grand on my telescope I’m not going to miss £20 dropped on an app that helps me get more value from that purchase. The trick here is not to rely on gaming the “astronomy” keyword in the app store, but to become known in that world. Magazines are more relevant than you might give them credit for, when looking at these markets. Astronomy Now, one of the UK’s astronomy mags, has a circulation of 24,000 (publishers then have an “estimated number of readers per sale” fiddle factor that’s relevant to advertising, so there might be 24-50k monthly readers). These people will read about your product, like it (if you’re doing it right) and will then go out to their user groups and meet-ups and tell those people about your product.[*]

[*] This paragraph owes a lot to Dave Addey, who referred to such audiences as broad niches.

The difficulty is that two forms of advertising no longer work: you can no longer rely on being on the app store as a way to get your app known, and similarly saying to an existing audience “hey, we’re on the app store” is also insufficient. Apps are no longer a novelty in and of themselves, so having a thing that does a thing is not a guaranteed retirement plan.

This points us to a couple of things that definitely are not reasons for having apps. Mass-market apps are now a very hard sell. They can be hard to differentiate on, hard to price reasonably and hard to generate awareness of. This awareness issue brings us into contact with the most powerful businesses in the app market: the platform vendors. No platform is going to allow a “killer app” to surface. Think back, for a moment, to the days of VisiCalc. People bought Apple II computers so that they could run VisiCalc. That’s fine when VisiCalc is Apple-only; not so good when it gets ported to Tandy, IBM and other architectures. It’s also not good when someone else comes out with a better VisiCalc for the other platform: 1-2-3, and your customers are gone. Apple (and other OEMs) want control over their customers: they’re not about to cede that control to some ISV with a good idea.

The other thing it’s not a good idea to do is to plug a gap in the OEM software. In smartphones, though not in hi-fis, printers or other electronic devices, the OEM companies are actually pretty good at executing on software features, so if you’re doing “the missing ${X} for ${platform}”, as soon as it becomes at all popular the OEM vendor will fill in their version of ${X}. It might not be as featureful, and it might not even be better, but it’ll probably be good enough to stop the third-party ones from selling.

Notice that I haven’t said “native is better”, or “mobile web is better”. There are apps that you can only build as native apps because the technology limits you to that: this does not mean that you must build them as native apps. There’s no reason you must build them at all. Decide who you’re building for, and what you can offer them that they’d consider to be a valuable experience. Having done that, decide on the best way to build and deliver it.

There is no longer any value in having “an app for that”. There is value in a beneficial experience, which it might make sense for you to build as an app.