What’s the mobile app market up to, then?

While this post is obviously motivated by Recent Events™, it’s got nothing at all to do with employers past, present or future. Dave has posted “what next for Agant”, which explains how that company’s path through the market has gone:

Over the past few years, the App Store has become more and more competitive, and more and more risky with it. Agant’s speciality has been high-quality, higher-value apps, often published in collaboration with our clients. Typically these are paid (rather than free or freemium) apps. Unfortunately, the iOS App Store’s set-up just does not seem to support the discovery, trialling and long-term life of these kinds of high-value apps, making it difficult to justify the risk of their development.

This is not that story. This is my story. It is a different story, though I agree with the paragraph above. It’s a story that doesn’t discuss games because I really don’t know a lot about them.

Something I’ve learned from going to conferences like QCon is that outside the filter bubble of the ObjC conferences I spend a lot of time in, there’s a lot more interest in “the mobile web” (or as we should probably call it these days, “the web”) in the general IT community. This makes sense in the enterprise world: it avoids backing a single horse and tying your company’s IT to one supplier, something they’re rightfully afraid of. Companies that were in the Microsoft camp had to deal with Vista and Windows 8; companies that backed Sun are now Oracle vassals; companies that backed Apple no longer have any servers. Given that mindset, developing JavaScript apps makes perfect sense. Even if you deliver them now as Cordova apps for a single platform, you’ve got the ability to do something else really quickly if you need to.

This is also something that’s carried over into the world of SaaS apps, where you don’t care what UI people are looking at as long as they subscribe to your service. Whether it’s delivered as a native-wrapped JS app (which is a first-party option for Windows Phone 8 and Windows 8) or a web app (which then lets you add platforms like Chrome OS and Firefox OS), targeting JavaScript lets these developers increase their prospective customer bases from a single code base. Not, perhaps, without some rework of views for different platforms: but certainly without maintaining separate Objective-C, Java and C# projects.

While I’m talking about JavaScript, let me add another relevant datum, particularly for companies working in or with the publishing industry: another word for a bundled JS app is “iBook”.

I think there are also still reasons for having native apps.

Some people want the “most ${platform}-like” experience, and are willing to pay for that. These are, quite frankly, the people who kept Mac software houses going through the 1990s. They’re the people who demanded Cocoa versions of their Carbon apps in the 2000s. You can focus on these people, ignoring the “should be free” masses and getting to the sort of people who buy the Which iPad Format User app of the month because it was the app of the month.

People who have invested money or time into something may be willing to spend a bit in order to increase the value of that investment. This is going to cover both tradespeople and hobbyists. Look at how much you can sell golf swing software for. One of my own hobbies is astronomy: having spent around a grand on my telescope I’m not going to miss £20 dropped on an app that helps me get more value from that purchase. The trick here is not to rely on gaming the “astronomy” keyword in the app store, but to become known in that world. Magazines are more relevant than you might give them credit for, when looking at these markets. Astronomy Now, one of the UK’s astronomy mags, has a circulation of 24,000 (publishers then have an “estimated number of readers per sale” fiddle factor that’s relevant to advertising, so there might be 24-50k monthly readers). These people will read about your product, like it (if you’re doing it right) and will then go out to their user groups and meet-ups and tell those people about your product.[*]

[*] This paragraph owes a lot to Dave Addey, who referred to such audiences as broad niches.

The difficulty is that two forms of advertising no longer work: you can no longer rely on being on the app store as a way to get your app known, and similarly saying to an existing audience “hey, we’re on the app store” is also insufficient. Apps are no longer a novelty in and of themselves, so having a thing that does a thing is not a guaranteed retirement plan.

This points us to a couple of things that definitely are not reasons for having apps. Mass-market apps are now a very hard sell. They can be hard to differentiate, hard to price reasonably and hard to generate awareness of. This awareness issue brings us into contact with the most powerful businesses in the app market: the platform vendors. No platform vendor is going to allow a “killer app” to surface. Think back, for a moment, to the days of VisiCalc. People bought Apple II computers so that they could run VisiCalc. That’s fine while VisiCalc is Apple-only; not so good when it gets ported to Tandy, IBM and other architectures. It’s also not good when someone else comes out with a better VisiCalc for another platform: along comes Lotus 1-2-3 and your customers are gone. Apple (and other OEMs) want control over their customers: they’re not about to cede that control to some ISV with a good idea.

The other thing it’s not a good idea to do is to plug a gap in the OEM software. In smartphones, though not in hi-fis, printers or other electronic devices, the OEM companies are actually pretty good at executing on software features. If you’re doing “the missing ${X} for ${platform}”, then as soon as it becomes at all popular the OEM will ship its own version of ${X}. It might not be as featureful, it might not even be better, but it’ll probably be good enough to stop the third-party versions from selling.

Notice that I haven’t said “native is better”, or “mobile web is better”. There are apps that you can only build as native apps because the technology limits you to that: this does not mean that you must build them as native apps. There’s no reason you must build them at all. Decide who you’re building for, and what you can offer them that they’d consider to be a valuable experience. Having done that, decide on the best way to build and deliver it.

There is no longer any value in having “an app for that”. There is value in a beneficial experience, which it might make sense for you to build as an app.

On rewriting your application

I’m really far behind on podcasts. I have a long commute, and listen to one audiobook every month, filling the slack time with a selection of podcasts. It happens that between two really long books (Cryptonomicon by Neal Stephenson and The Three Musketeers by Alexandre Dumas, both of which I’d recommend) and quite a few snow days I’ve managed to fall behind.

This is the context in which I listened to Episode 77 of iDeveloper Live, on whether to rewrite or not rewrite an application. I was meant to be on that program but had to pull out due to a combination of being ill and pressing on with writing what would become Discworld: the Ankh-Morpork Map. I imagine that one of these two factors was the cause of the other.

Anyway, listening to the podcast made me decide to put my case forward. It’s unfortunate that I wasn’t part of the live discussion but I hope that what I have to say still has value. I’d recommend listening to what Scotty, Uli, John and Pilky have to say on the original podcast, too. If I make references to the discussion in the podcast, I’ll apologise in advance for failing to remember which person said what.

The view from the outside

The point that got me thinking about this was the marketing for a version x.0 of an app. I don’t remember what the app was, I think Danny Greg probably posted the link to it. The release described how the app was “completely rewritten” from the previous version; an announcement that’s not uncommon in software release notes.

My position is this: A complete rewrite is neither a feature nor a benefit. It’s a warning. It says “warning: you might like the previous version, in fact that’s probably why you’re buying it, but what we’re now selling you has nothing in common with it”. It says “warning: the workflow you’re accustomed to in this app may no longer work, or may have different bugs than the ones you’ve discovered how to work around”. It says “warning: this product and the previous version are similar in name alone”. This is what I infer when I read that your product has been rewritten.

The view from the inside

As programmers, we’re saying “but I’ve got to rewrite this, it’s so old and crappy”. Why is it old and crappy? Is it because we weren’t trained to write readable code, and we weren’t trained to read code? Someone on the podcast referred to the “you should hate code you wrote six months ago or you’re not learning” trope. No. You should look at code you were writing six months ago and see how it could be improved. You should be able to decide whether to make those improvements, depending on whether they would benefit the product.

Many of the projects I’ve worked on have taken more than six months to complete. In each case, we could have either released it, or we could still be in a cycle of finding code that was modified more than six months ago, looking at it in disgust, throwing it away and writing it again—and waiting for the new regression bug reports to come in.

Bear in mind that source code is a liability, not an asset. When you tear something out to rewrite it from scratch, you’re using up time and money to create a thing that provides the same value as the thing you’re replacing. It’s at times like this that we enjoy waving our hands and talking about Technical Debt. Martin Fowler:

The tricky thing about technical debt, of course, is that unlike money it’s impossible to measure effectively. The interest payments hurt a team’s productivity, but since we CannotMeasureProductivity, we can’t really see the true effect of our technical debt.

We can’t really see the true effect. So how do you know that this rewrite is going to pay off? If you have some problem now it might be clear that this problem can be addressed more cheaply with a rewrite than by extending or modifying existing code. This is the situation Uli described in the podcast; they’d used some third-party library in version 1, which got them a quick release but had its problems. Having learned from those problems, they decided to replace that library in version 2.

Where you have problems, you can solve them either by modification or by rewriting. If you think that some problem might occur in the future, then leave it: you don’t make a profit by solving problems that no-one has.

A case study

While I had a few jobs in computing beforehand, all of which required that I write code, my first job where the title meant “someone who writes code” started about six years ago, at a company that makes anti-virus software. I was a developer (and would quickly become lead developer, despite not being ready) on the Mac version of the software.

This software had what could be described as an illustrious history: it was older than some of the readers of this blog (which is not uncommon: Cocoa is old enough to vote in the UK and UNIX is middle-aged). It started life as a Lightspeed/THINK C product, then became a PowerPlant product at around the time of the PowerPC transition. When I started working on it in 2007 the PowerPlant user interface still existed, but it looked out of place and dated. In addition, Apple were making noises about the library not being supportable on future versions of Mac OS X, so the first project for the new team was to build a new UI in Cocoa.

We immediately set out getting things wrong more quickly than any other team in the company. The lead developer when I joined had plenty of experience on the MS-DOS and Windows versions of the product, but had never worked on a Mac nor in Objective-C: then I became the lead developer, having worked in Objective-C but never in a team of more than one person. I won’t go into all of the details but the project ended up taking multiple times its estimated duration: not surprising, when the team had never worked together and none of the members had worked on this type of project so our estimates were really random numbers.

At the outset of this project, being untrained in the reading of other people’s code, I was dismayed by what I saw. I repeatedly asked for permission to rewrite other parts of the system that had copyright dates from when Kurt Cobain was still making TV appearances. I was repeatedly refused, and rightly so: while the old code had its bugs, it was a lot more stable than what we were writing, and as it had already been written it cost a lot less to supply to the customer.[*]

Toward the eventual end of the project, I asked my manager why we hadn’t given up on it after a year of getting things wrong, declared that a learning exercise and started over. Essentially, why couldn’t we take the “I hate the code I wrote last year, let’s start from scratch” approach. His answer was that at least one person would’ve got frustrated and quit after having their code thrown away; then we’d have no product and also no team so would not be in a better position.[*]

Eventually the product did get out of the door, and while I’m no longer involved with it I can tell that the version shipping today still has most of the moving parts that were developed during my time and before. Gradual improvement, responding to changes in what customers want and what suppliers provide, has stood that product in good stead for over two decades.

[*] It’s important to separate these two arguments from the Sunk Cost Fallacy. In neither case are we including the money spent on prior work. The first paragraph says “from today’s perspective, what we already have is free and what you haven’t written is not free, but they both do the same thing”. The second paragraph says “from today’s perspective, finishing what you’ve done costs a lot of money. Starting afresh costs a lot of money and introduces social disruption. But they both do the same thing.”

I published a new book!

Executive summary: it’s called APPropriate Behaviour, head over to the LeanPub site to check it out.

For quite a while, I’ve noticed that posts here are moving away from nuts and bolts code towards questions about evaluating my own performance, working with other developers and the industry in general.

I decided to spend some time working on these and related thoughts, trying to derive some consistent narrative as well as satisfying myself that these ideas were indeed going somewhere. I quickly ended up with about half of a novel-length book.

The other half is coming soon, but in the meantime the book is already published in preview state. To quote from the introduction:

this book is about the things that go into being a programmer that aren’t specifically the programming. It starts fairly close to home, with chapters on development tools, on supporting your own programming needs, and on other “software engineering” practices that programmers should understand and make use of. But by the end of the book we’ll be talking about psychology and metacognition — about understanding how you the programmer function and how to improve that functioning.

As I said, this is currently in very much a preview state—only about half of the content is there, it hasn’t been reviewed, and the thread that runs through it has dropped a few stitches. However, even if you buy the book now you’ll get free updates forever so you’ll get to find out as chapters are added and as changes are made.

At this early stage I’m particularly interested in any feedback readers have. I’ve set up a Glassboard for the book—in the Glassboard app, use invite code XVSSV to join the discussion.

I hope you enjoy APPropriate Behaviour!

On free apps

This post is sort-of a follow-on to @daveaddey’s post on the average app; although in reality it’s a follow-on to the response that comes out every time a post on app store revenue is written.

Events go like this:

  1. Some statistic about app store revenue.
  2. “Your numbers include free apps. You shouldn’t include free apps.”

Yes you should. The revenue that comes from the app store is indeed shared across all apps, free and paid. Free apps contribute significantly to the long-tail price distribution of apps on the store, and the consumer perception that apps shouldn’t cost much. Some of them generate revenue through in-app purchase: revenue that probably is counted in Apple’s “we’ve given $5bn to developers” number. Some of them will generate revenue through iAd: it’s not clear whether that’s included in the $5bn but it’s certainly money paid by Apple to developers. Similarly, it’s not clear whether money paid through Newsstand subscriptions (again, in free apps) counts: it probably does. Some of them have not always and will not always be free; again these apps make money directly from Apple.

A lot of free apps come from companies that do other things, but feel a marketing need to be on the app store: Amazon, O2, Facebook and others go down this route. In these cases there is absolutely no money to be made from the app directly, though there are possibly many sorts of collateral benefits. It costs Amazon money to write the Windowshop app, but they bank on users buying more things from them than they would if the app didn’t exist.

On the other hand, some free apps are just written by developers who want to put a free app on the store so that it can act as their portfolio when they try to get work as iOS app developers.

So yes, other business models exist. But when talking about money made from the app store, you have to include all of those products that don’t make money on the app store. Ignoring the odd outlier is acceptable statistical practice. Ignoring large quantities of data that make your conclusions look bad is not science; it’s witchcraft.

On community

This is a post that had been boiling for a while; I talked a little about the topic when I was in Appsterdam earlier this year, and had a few more thoughts which were completely supplanted and rearranged by watching iOSDevUK. I threw away my earlier draft; you’re about to read something different. Where you see “we”, “us” or “our community” you should probably take it to mean Cocoa programmers, though read on to find out why “us” doesn’t always make sense.

Acknowledgements

So many people have contributed to this, by saying things that I agree with, by saying things that I disagree with, by organising conferences, or in other ways. I’ve tried to cite where appropriate but I’ve probably missed someone somewhere. Sorry :-(.

Introduction

This article is more the presentation of a problem and some thoughts about it than an attempt to argue in favour of a particular solution. I’ll investigate what it means to be in “the Cocoa programming community”, beginning with whether or not Apple is in a community of its own devising. I’ll ask whether there’s room for more collaboration in the community, and whether the community of Cocoa programmers encompasses all Cocoa programmers. Finally, I’ll notice that these are questions as yet unanswered, and explore what the solutions and non-solutions might be.

On Apple and the community

This is the bit that I’d done most work on already, as it was the topic of my Appsterdam talk. The summary of that talk is pretty much the same as Dave’s working-with-Apple pro tip in his iOSDevUK talk. As his was more succinct, I’ll use that version:

Apple is people too. Don’t be a dick.

(I’m a fan of people not being dicks.)

The thing is that, as Scotty said, the community wins when all of its members win. But he also said that Apple isn’t in the community, so doesn’t that exclude them from this relationship?

Well, no. If we look at the community that most of the people reading this post – and that most of the people at iOSDevUK – consider themselves a member of, it’s the community of iOS app makers. It happens that all of these people depend on the same thing: on iOS. Being nice to Apple and helping them just makes good business sense. If you’re not helping Apple to win, they might decide to help you lose.

On a related subject: for Apple to win, it’s not necessary for anyone else to lose. In fact, I’m not the first person to say this. I’m stealing from a man who was, at the time this quote was made, newly installed as CEO after having been a management consultant at Apple:

We have to let go of this notion that for Apple to win, Microsoft has to lose. We have to embrace a notion that for Apple to win, Apple has to do a really good job. And if others are going to help us that’s great, because we need all the help we can get, and if we screw up and we don’t do a good job, it’s not somebody else’s fault, it’s our fault.

So Microsoft, Windows 8 and Windows Phone 8 don’t have to lose. Google and Android don’t have to lose. Enterprise Java programmers don’t have to lose. Your competitors don’t have to lose. The team in Apple that make that thing that just crashed don’t have to lose.

On that last note, Apple is the biggest company in the world and you’re supplying one or a handful of the 600,000 or so different replaceable components that help them make a trivial fraction of their income. So if the choice you give them is “do what I need or I’ll stop working with you”, they’ll pick option 2. “Fix Radar or GTFO”? It’s cheaper and easier for Apple to GTFO.

That’s not to say the best strategy is always to do whatever Apple want. Well, actually it probably is in the short term, but Apple is real people and real people benefit from constructive feedback too.

Just who is “them”, anyway?

Around the time that I started to be a proper software-writing person, there was a strong division in Mac development. The side I was on (and I was young, opinionated, easily led, and definitely in this faction) was the Yellow Box. We knew that the correct way to write software for the Mac was to use the Foundation and AppKit APIs via the Objective-C or Java languages.

We also knew that the other people, the Blue Boxers who were using libraries compatible with Mac OS 8 and the C or C++ languages, were grey-bearded dinosaurs who didn’t get it.

This sounds crazy now, right? Should I also point out that I wrote a Carbon app, just to make it sound a little crazier?

That’s because it is crazy. Somehow those of us who had chosen a different programming language knew that we were better at writing software; much better than those clowns who just made the most successful office suite ever, the most successful picture editing app ever, or the most successful video player ever. Because we’d taken advice on how to write software from a company that was 90 days away from bankruptcy and had proven incapable of executing on software development, we were awesome and the people who were making the shittons of money on the most popular software of all time were clueless idiots.

But what about the people who were writing Mac software with wxWindows (which included myself), or REALbasic, or the PerlObjCBridge (which also included me)? Where did those fit in this dichotomy? Or the people over on Windows (me again) or Solaris (yup, me here)?

The definition of “us” and “them” is meaningless. It needs to be, in order to remain fluid enough that a new “them” can always be found. Looking through my little corner of history, I can see a few other distinctions that have come and gone over time. CodeWarrior vs Project Builder. Mach-O vs CFM. iPhone vs Android. Windows vs Mac. UNIX vs VMS. BSD vs System V. KDE vs GNOME. Java vs Objective-C. Browser vs native. BitKeeper vs Monotone. Dots vs brackets.

Let’s look in more detail at the Windows vs Mac distinction. If you cast your mind back, you’ll recall that around 2000 it was much easier to make money on Windows. People who were in the Mac camp made hand-waving references to technical superiority, or better user interfaces, or breaking the Microsoft hegemony, or not needing to be super-rich. Many of those Mac developers are now iPhone developers. In the iOS vs Android distinction, iOS developers readily point to the larger amount of money that’s available in making iOS apps…wait.

O(community)

The community contribution fraction

As Scotty said, an important role in a community is that of the reader/consumer/learner, the people who take and use the information that’s shared through the community. Indeed in any community this is likely to be the largest share of the population; the people who produce and share the information are also making use of it.

The thing is, that means that there are many people who are making use of those great ideas, synthesising them, and making even new and better ideas. And we’re not finding out about them. Essentially there is more knowledge than there is opportunity to share knowledge.

It’d be great to have some way to make it super-easy for everyone who was involved in “the community” to contribute, even if it’s just to add a single thought or idea to the pool. As Scotty said, there’s no way you can force people to contribute, and that’s not even desirable as it’s a great way to put people off talking to you ever again.

So you can’t hold a gun up to people and force them to tell you a fact about Objective-C. You can ensure everyone knows what forms of contribution take place; perhaps they’ll find something that’s easier than they thought or something they’ll enjoy. Perhaps they’ll give it a go, and enjoy it.

Face to face

Conferences are definitely not that simple way for everybody to contribute. Conferences are great, though as I’ve said before there aren’t enough seats for them to have a wide direct impact on the community. Tech conferences will never be a base for broad participation, both due to finite size (even WWDC comprises less than one percent of registered developers on the platforms) and limited scope for contribution – particularly the bias toward contributors with “prior”.

One “fix” to scale up the conference is to run the conference all year long. This allows people who don’t like the idea of being trapped in a convention with the same 200 people for a week the option to dip in and out as they see fit. It gives far more opportunity for contribution – because there are many more occasions on which contribution is needed. On the other hand, part of the point of a conference is that the attendees are all at the same place at the same time, so there’s definitely some trade off to be had.

Conferences and Appsterdams alike lead to face-to-face collaboration; the most awesomest flavour of collaboration there is. In return, they require (like Cocoaheads, NSCoder or whatever you call your pub/café meet) that you have the ability to get to the venue. This can call for anything from a walk down the street via a couple of ten-hour flights to relocating yourself and your family.

Smaller-scale chances for face to face interaction exist: one-on-few training courses and one-on-one mentoring and apprenticeships. These are nearly, but not quite, one-way flows of information and ideas from the trainer or sensei to the students or proteges. There are opportunities to make mentoring a small part of your professional life so it doesn’t seem to require a huge time investment.

Training courses, on the other hand, do. Investment by the trainer, who must develop a course, teach it, respond to feedback, react to technology changes and so on. Investment by the trainees, who must spend time and money attending the course, then doing any follow-up exercises or exams. They’re a great way to get up to speed with a technology quickly by immersing yourself in it, but no-one is ever going to answer the question “how can I easily contribute to my community?” with “run a training course”.

Teaching at a distance

A lower barrier to entry is found by decoupling the information from the person presenting the information. For as long as there has been tech there have been tech books; it’s easy (if you have $10-$50) to have a book automatically delivered to your house or reader and start absorbing its facts. For published books, there’s a high probability that the content has been proofread and technically reviewed and therefore says something a bit accurate in a recognisable language.

On the other hand, there are very few “timeless” books about technology. Publisher schedules introduce some delay between finishing a manuscript and having something to sell, further reducing any potential shelf life. If you’re in the world of Apple development and planning to say anything about, for example, Objective-C or Xcode, you’re looking at a book that will last a couple of months before being out of date.

Writing a book, then, takes a long time which already might be a blocker to contribution for a lot of people. There’s also the limitation on who will even be invited to contribute: the finite number of publishers out there will preferentially select for established community members and people who have demonstrated an ability to write. It’s easier to market books that way.

The way to avoid all of that hassle is to write a blog (hello!). You get to write things without having to be selected by some commissioning editor. Conversely, you aren’t slowed down by the hassles of having people help you make the thing you write better, either—unless you choose to seek that help.

You then need to find somebody to read your blog. This is hard.

Stats for this blog: most pages have only ever been read a couple of hundred times.

If someone else already has an audience, you can take advantage of that. Jeff Atwood previously wrote about using Stack Overflow as a blog, where you’d get great reach because they bring their audience. Of course, another thing you can do on Stack Overflow is answer questions from other people: so that quick answer you contribute is actually solving someone’s problem.

This is, in my opinion, the hallowed middle ground between books (slow, static, hard to get into, with a wide reach) and blogs (fast, reactive, easy to pick up, hard to get discovered). Self-publishing a book is a lot like spending ages writing a long blog post. On the other hand, contributing to a community resource like a Q and A site or a wiki means only writing the bit of the book that you’re best placed to contribute. It also means sharing the work of ensuring correctness and value among the whole contributor base.

Our community / People with ideas ≪ 1

Whatever your definition of “the community”: the iOS developer community, the object-oriented programming community, the developer community—there are many more people who aren’t in that community. But they still have things to say that could be interesting and help us see what we do in different ways.

I’m not so sure that there are people out there doing what we do who don’t even passively engage with the rest of the community. Maybe there are, maybe there are lots. But I’m sure most people have at least read a book, or done a search that ended up at a mailing list post or blog entry. Very few people will never have used community-supplied resources; although it’s possible that there are programmers out there who’ve learned everything they know from first party documentation.

What I am sure of is that if you’re an Objective-C developer building mobile apps and you only listen to other Objective-C developers building mobile apps, you’re missing out on the information and ideas you could be taking from everyone else. Dave Addey told us to go and visit museums and art galleries to get inspiration, but that’s not all there is to it. Talk to someone doing Objective-C in a different context. Talk to someone doing Java, or Clojure. Talk to business people, or artists, or musicians. Break out of the echo chamber, and find out whether what other people are doing could be applied to what you’re doing.

Conclusions

As promised, there aren’t really any conclusions here. It’s more a collection of my own thoughts dumped out from brain to MarsEdit in order to let me make sense of them, and to stop me having to think about them at bedtime.

What’s clear is that there are a load of different ways for people to contribute to a community. Consumption of other people’s thoughts, advice and ideas is itself a very beneficial service as it’s how new ideas get synthesised, how new practices are formed and how the community collectively improves its output. It would be even better if what those people were doing were also made available and shared with the rest of us, to achieve an exponential growth in experience and advancement across the whole community.

But that’s not guaranteed to happen. The best thing to do is not to try driving people to contribute, but to give them so many opportunities to do so that, at some point, someone in the community will be in the position that sharing something is really easy and they choose to do so.

Other techniques to improve the number of ideas you get from the community are to be less adversarial in your definition of community, and more broad in your inclusion. The “community of people making iOS apps with Objective-C” is small, the “community of people making things” is universal.

Coding. Standards.

I just realised that this month marks the 10th anniversary of my first payment for writing software (on, of all the weird things to be writing software on in 2002, a NeXTstation)! What have I learned from those ten years? What advice would I give to someone who wants to do this stuff for at least 10 years?

Programming is the easy bit.

Well, comparatively. There are hard bits in programming, and every few years a new paradigm comes along that means you have to unlearn whatever it was you were doing and learn something else to do anyway. So learning programming is never done, but still programming is easier than:

  • estimating. The one project I’ve worked on that finished on its planned completion date only did so by accident.
  • getting any kind of agreement out of two or more people.
  • accepting that the other person isn’t a dick, but has different goals and problems than you.
  • objectively evaluating your own work.
  • objectively assessing someone else’s evaluation of your work.
  • stopping programming when you’re done.

You always need to be learning.

You can’t compete on price in the software market, because there’s always some student somewhere who’s willing to do the same work for free. It was Mike who first taught me that. You have to compete on quality, which means you need to strive to improve your own quality. Because other people are too, so you need to run just to stand still.

There are various ways to learn, and they’re not mutually exclusive. A combination of books/articles, experimentation, and discussion with peers is valuable. If your town has an Appsterdam or a CocoaHeads, get along and say hello.

You probably don’t want to be doing this in 10 years.

I was actually a UNIX programmer a decade ago (well, I was mainly a student). Then I was a barman, then sysadmin, then a Linux server application programmer, then a Mac app programmer, then a contract Mac programmer, then a Java app programmer, then a security consultant, then an iOS app programmer. There’s only a small probability that I’ll be an iOS app programmer in 10 years.

This world moves really quickly. Ten years ago, the iPod was a new and relatively risky proposition. Macs used PowerPC CPUs. Windows XP was the new hotness, and .NET was just about to appear – meanwhile Mac OS X was a sluggish amalgam of NeXT, Java and legacy code. Java, by the way, was run by a now-defunct company called Sun Microsystems, which was trying to work out how to survive the dot-com crash.

Speaking of the dot-com crash, it seems highly likely that within the next decade we’ll see the dot-app crash. App downloads are worth about $0.18 each, but an app costs $200k to build – apparently it’s hard work. That means you’ve got to either get yourself into the long tail value-wise (i.e. have a very good app that people will pay for), or you’ve got to find a million users for version 1.0.
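
To make the arithmetic behind that explicit (using the rough figures above, which are averages rather than anything precise): $200,000 ÷ $0.18 ≈ 1.1 million downloads just to recoup the development cost, before anyone draws a salary from the app.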

For everyone else, the market isn’t worth staying in long-term. The market will bore of brochureware apps, only a few high-value brands will be able to support unprofitable vanity apps, and VCs will realise that throwing their money after an app with no profit strategy is the same as throwing their money after a website with no profit strategy.

It’s likely that at least one of the companies that’s big in the current software world – Microsoft, Apple, Oracle, Google and the like – will be big in the software world of 2022. It’s also likely that there’ll be some newcomers that change things completely: Facebook and Twitter didn’t exist ten years ago, and neither did Android, Inc. Sometimes companies that seem to be in an interminable tailspin – like Apple – turn themselves around and become successful.

Learn more than one thing

This is related, in part, to what came above: the thing you’re using right now may not exist, or may be hard to get work in, in a few years’ time. On the other hand, some things seem to outlive the cockroaches: C – and by extension, languages that can link somewhat seamlessly with C like C++, Fortran and so on – have been going on forever. It can be hard to predict which of these camps your favourite tech sits in, so learning more than one thing keeps you employable.

More than that, if your technology of choice comes from a single supplier (e.g. Microsoft, Apple, Embarcadero) then diversification just makes good business sense. This particularly applies in the age of the app store where that sole supplier can also be your sole vendor – you don’t want to sign your entire business’s value over to one other company.

Learning another thing makes you better at the first thing

This is another reason why diversifying your technology portfolio is beneficial. Many of the changes I’ve made recently in the way I write object-oriented software come from talking to Clojure programmers.

The more different things you know, the more connections you’ll be able to make between them. The more you’ll be able to critically analyse one technology, beyond what the vendor tells you. And the more you’ll be able to understand other new things and incorporate them into your Weltanschauung.

Conclusion

My summary could be “learn whatever you can: you never know which bits you need”. Or it could be “don’t rely on your supplier to solve all of your problems”.

I think it’s actually going to be: analyse everything. Reflect on your work: what went well? What didn’t? Could you have done things better? If you don’t think you could have, then you’re probably wrong: what would you need to know to identify the bit that actually could’ve gone better?

But know when to stop, too. Analysis paralysis is as much of a problem as going in blind. At some point, you need to suck it up and move on. Trading these two things against each other is the real difficulty in software engineering.

App security consultancy from your favourite boffin

I’m very excited to soon be joining the ranks of Agant Ltd, working on some great apps with an awesome team of people. I’ll be bringing with me my favourite title, Smartphone Security Boffin. Any development team can benefit from a security boffin, but I’m also very excited to be in product development with the people who make some of the best products on the market.

There’s another side to all of this: once again can security boffinry be at your disposal. I’ll be available on a consultancy basis to help out with your application security and privacy issues. If you’re unsure how to do SSL right in your iOS app, are having difficulty getting your Mac software to behave in the App Store sandbox, or need help to identify the security pain points in your application’s design or code, I can lend a hand.

Of course, it’s not just about technology. The best way to help your developers get security right is to help them help themselves. When I’ve done security training for developers before, I’ve seen those flashes of enlightenment when delegates realise how the issues relate to their own work; the hasty notes of class or method names to look into back at the desk; the excited discussions in the breaks. Security training for iOS app developers is great for the people, great for the product – and, of course, something I can help out with.

To check availability and book some time (which will be no earlier than July), drop me a line on the twitters or at graham@agant.com.

Thoughts on Tech Conferences

This post is being, um, posted from the venue for GOTO Copenhagen 2012. It’s the end result of a few months of reflection on what I get out of conferences, what I want to get out of conferences, what I put into (and want to put into) conferences and the position of tech conferences in our industry. I’ve also been discussing things a lot with my friends and peers; I’ve tried to attribute specific quotes where I remember who said them but let it be known that many people have contributed to the paragraphs below in many different ways. I’ll make it clear at the outset that I’m talking about my experience at independent commercial and non-profit tech conferences, not scientific conferences (of which I have little experience) or first-party events like WWDC (which are straightforward marketing exercises).

Conference speakers

My favourite quote on this subject is courtesy of Mike; I remember him saying it in his MDevcon keynote this year but I’m also fairly sure he’s said the same thing earlier:

The talks at a conference are only there so that you can claim the ticket cost as an expense.

We’re in a knowledge economy; but knowledge itself is not of any value unless it’s applied. That means it’s not the people who tell other people what’s going on who are doing the most important work; that’s being done by the people who take this raw knowledge, synthesise it into a weltanschauung – a model of how the world works – and then make things according to that model. Using an analogy with the economy of physical things, when we think of the sculpture of David in Florence we think of Michelangelo, the sculptor, not of the quarry workers who extracted the marble from the ground. Yes, their work was important and the sculpture wouldn’t exist without the rock, but the most important and valuable contribution comes from the sculptor. So it is in the software world. Speakers are the quarry workers, the marble hewers, providing chunks of rough knowledge-stuff to the real artisans – the delegates – who select, combine and discard such knowledge-stuff to create the valuable sculptures: the applications.

Michelangelo's David, source: Wikimedia Commons.

Conference speakers who believe that the value structure is the other way around are deluding themselves. Your talk is put on at the conference to let people count the conference as a work expense, and to inspire further discussion and research among the delegates on the topic you’re talking about. It’s not there so that you can promote your consultancy/book/product, or produce tweet-worthy quotes, or show off how clever you are. Those things run the gamut from “fringe benefits” to “deleterious side effects”.

As an aside, the first time I presented at a Voices That Matter conference I was worried due to the name; it sounds like the thing that matters at this conference is the speakers’ voices. In fact I suspect there is some of that as many of the presenters have books published by the conference hosts, but it’s a pretty good conference covering a diverse range of topics, with plenty of opportunities to talk to fellow delegates. And IIRC all attendees got an “I am one of the voices that matter” sticker.

Anyway, back to the topic at hand: thus do we discover a problem. Producing a quality conference talk is itself knowledge work that requires careful preparation, distillation and combination of even more raw knowledge-stuff. It takes me (an experienced speaker who usually gets good, but not rave, reviews) about three days to produce a new one-hour talk, a roughly 25:1 ratio of preparation:delivery. That’s about a day of deciding what to say and what to leave out, a day of designing and producing materials like slides, handouts and sample code, and a day of practising and editing. Of course, that’s on top of whatever research it was that led me to believe I could give the talk in the first place.

The problem I alluded to at the start of the last paragraph is this: there’s a conflict between acknowledging that the talks are the bricks-and-mortar of the conference rather than the end product, and wanting some return on the time invested. How that conflict’s resolved depends on the personal values of the individual; I won’t try to speak for any of my peers here because I don’t know their minds.

The conference echo chamber

That’s not my phrase; I’ve heard it a lot and can track my most recent recollection to @secwhat’s post Conference Angst. Each industry’s conferences have a kind of accepted worldview that is repeated and reinforced in the conference sessions, and that only permits limited scrutiny or questioning – except for one specific variety which I’m coming onto later. As examples, the groupthink in indie Mac/iOS conferences is “developers only need developer features that have been blessed by Apple”. There’s recently been significant backlash against the RubyMotion framework, as there usually is when a new third-party abstraction for iOS appears. But isn’t abstraction a good thing in software engineering? The truth is, of course, that there are more things in heaven and earth than are dreamt of in Apple’s philosophy.

The information security groupthink is that information security is working. Shocking though it may sound, that’s far from obvious, evident or even demonstrable. Show me the blind test where similar projects were run with different levels of info sec engagement and where the outcome was significantly different. Demonstrate how any company’s risk profile has changed since last year. Also, show me an example of security practitioners being ahead of the curve, predicting and preparing for a new development in the field: where were the talks on hacktivism before Anonymous or Wikileaks?

One reason that the same views are repeated over multiple conferences is that the same circuit of speakers travels to all of the conferences. I’m guilty of perpetuating that myself, being (albeit unintentionally in one year) one of the speakers in the iOS circuit. And when I’ve travelled to Seattle or Atlanta or Copenhagen or Aberystwyth, I’ve always recognised at least a few names in the speaker line-up. [While I mentioned Aberystwyth here, both iOSDevUK and NSConf take steps to address the circuit problem. iOSDevUK had a number of first-time speakers and a “bar camp” where people could contribute their own talks. NSConf has the blitz talks which are an accessible way to get a large number of off-circuit speakers, and on one occasion ran a whole day of attendee-contributed sessions called NSConf Mini. When you give people who don’t normally present the opportunity to do so, someone will step up.]

I mentioned before that the echo chamber only permits limited scrutiny, and that comes in the form of the “knowing troll” talk. Indeed at GOTO there’s a track on the final day called “Iconoclasm”, which is populated solely with this form of talk. Where the echo chamber currently resounds to the sound of <X>, it’s permitted to deliver an “<X> sucks” talk. This will usually present a straw man version of <X> and list its failings or shortcomings. That’s allowed because it actually reinforces <X> – real-world examples are rarely anything like as bad as the straw man version, therefore <X> isn’t really that bad. This form of talk is often a last-session-of-the-day entry and doesn’t really lead people to challenge their beliefs. What happens later is that when everyone moves on to the next big thing, the “<X> sucks” talks will become the main body of the conference and “<X+1> sucks” will be the new troll talk.

Conferences

Weirdly, while the word conference means a bringing together of people to talk, coming from the same root as “conversation”, many conferences are designed around a one-way flow of words from the speakers to the delegates. Here’s the thing with that. As I said in my keynote talk at MDevcon, we learn from each other by telling and listening to stories. Terry Pratchett, Jack Cohen and Ian Stewart even went as far as to reclassify humans as pan narrans, the storytelling chimpanzee. Now if you’ve got M speakers and 10M<N<100M delegates, then putting a sequence of speakers up and listening to their stories gets you a total of M stories. Letting the N delegates each share their stories, and then letting each of the N-1 other delegates share the stories that the first N stories reminded them of, and so on, would probably lead to a total of N! stories if you had the time to host that. But where that does happen, it’s usually an adjunct to the “big top” show which is the speaker series. [And remember: if you’ve got C conferences, you don’t have C*M speakers, you have M+ε speakers.]

There’s one particular form of wider participation that never works well, and that’s to follow a speaker session with Q&A. Listen carefully to the questions asked at the next Q&A you’re in, and you’ll find that many are not questions, but rhetorical statements crafted to make the “asker” appear knowledgable. Some of those questions that are questions are rhetorical land mines with the intent of putting the speaker on the back foot, again to make the asker seem intellectually talented. Few of these questions will actually be of collective value to the plenus, so there’s not much point in holding the Q&A in front of everyone.

Speaker talks are only one way to run a session, though. Panels, workshops and debates all invite more collaboration than speaker sessions. They’re also much more difficult to moderate and organise, so are rarely seen: many conferences have optional days that are called “workshops” but in reality are short training courses run by an invited speaker. In the iOS development world, lab sessions are escaping the confines of WWDC and being seen at more independent conferences. These are like one-on-one or few-on-few problem solving workshops, which are well focussed and highly collaborative but don’t involve many people (except at Voices that Matter, where they ran the usability workshops on the stage in front of the audience). A related idea being run at GOTO right now, which I need to explore, is a whole track of pair programming sessions. The session host chooses a technology and a problem, and invites delegates onto the stage to work through the challenge with the host in a pair-programming format. That’s a really interesting way to attract wider participation; I’ll wait until I’ve seen it in action before reaching an opinion on whether it works.

There’s another issue, that requires a bit more setup to explain. Here’s a Venn diagram for any industry with a conference scene; the areas are indicative rather than quantitative but they show the relation between:

  • the population of all practitioners;
  • the subsection of that population that attends conferences; and
  • the subsection of that population that speaks at conferences.

Conference Venn diagram

So basically conferences scale really badly. Even once we’ve got past the fact that conferences are geared up to engage the participation of only a handful of their attendees, the next limiting factor is that most people in [whatever industry you’re in] aren’t attending conferences. For the stories told at a conference (in whatever fashion) to have the biggest impact on their industry, they have to break the confines of the conference. This would traditionally, in many conferences, involve either publishing the proceedings (I’ve not heard of this happening in indie tech conferences since the NATO conferences of 1968-9, although Keith Duncan is one of a couple of people to mention to me the more general idea of a peer-reviewed industry journal) or the session videos (which is much more common).

To generate the biggest impact, the stories involved must be inspiring and challenging so that the people who watched them, even those who didn’t attend the conference, feel motivated to reflect on and change the way they work, and to share their experiences (perhaps at the same conference, maybe elsewhere).

Before moving on to a summary of everything I’ve said so far, I’ll make one more point about the groups drawn on the Venn diagram. Speakers tend to be specialists (or, as Marcus put it in his NSConf talk, subject matter experts) in one or two fields; that’s not surprising given the amount of research effort that goes into a talk (described above). Additionally, some speakers are asked to conferences because they have published a book on the topic the convenor wishes them to speak on; that’s an even longer project of focussed research. This in itself is a problem, because a lot of the people having difficulty with their work are likely to be neophytes, but apparently we’re not listening to them. We listen to self-selected experts opining on why everyone needs to take security/TDD/whatever seriously and why that involves retaining the experts’ consultancy service: we never listen to the people who can tell us that after a month of trying this Objective-C stuff still doesn’t make sense. These are the people who can give us insight into how to improve our practice, because these are the people reminding the experts (and indeed the journeymen) of the problems they had when they’d been at this for a month. They tell us about the issues everyone has, and give us ideas on how we can fix it for all (future) participants.

Conference goers, then, get the benefit of a small handful of specialists: in other words they have a range of experience to call on (vicariously) that is both broad and deep. Speakers of course have the same opportunity, though don’t always get to take full advantage of the rest of a conference due to preparation, equipment tests, post-talk question sessions and the like. The “non-goers” entry in the diagram represents a vast range of skills and experiences, so it’s hard to find any one thing to say about them. Some will be “distance delegates”, attending every conference by purchasing the videos, transcripts or other materials. Some will absorb information by other means, including meet-ups, books, blogs etc. And some will be lone coders who never interact with anyone in their field.

Imagine for a moment that your goal in life is to apply the Boy Scout Rule (which I’m going to attribute again to @ddribin because I can’t remember who he got it from; Uncle Bob probably) to your whole industry. Your impact on $thing_you_do will be to leave the whole field, the whole practice a bit better than it was when you got here. (If that really is your goal, then skip the imagination part for a bit.)

It seems to me that the best people to learn from are the conference delegates (who have seen a wide section of the industry in considerable depth) and the best people to transfer that knowledge to are, well, everybody.

Summary of the current position

Conferences are good. I don’t want people to think I’m hating on conferences. They’re enjoyable events, there are plenty of good ones, there’s an opportunity to learn things, and to see fresh perspectives on many aspects of our industry. They’re also more popular than ever, with new events appearing (and selling out rapidly) every year. However, these perspectives often have an introspective, echo chamber quality. We’re often listening to a small subset of the conference delegates, and if you integrate over multiple conferences you find the subset gets relatively smaller because it’s the same people presenting all the time. Most delegates will not get the benefit of listening to all of the other delegates, which means they’re missing out on engaging with some of the broadest experience in the industry. Most of the practitioners in your corner of the industry probably don’t attend any conferences anyway; there aren’t enough seats for that to work.

The ideal tech conference

OK, I am very clearly lying here: this isn’t the ideal tech conference, it’s my ideal tech conference. In my world, those are the same thing. PerfectConf features a much more diverse portfolio of speakers. In the main this is achieved exactly the way that Appsterdam does it: by offering the chance to speak to anyone who’ll take it, by looking for things that are interesting to hear about rather than accomplished or expert speakers to say them, and by giving novice speakers the chance to train with the experts before they get up on stage. Partly this diversity is achieved by allowing people who aren’t comfortable with speaking the opportunity to host a different kind of session, for example a debate or a workshop.

In addition to engaging session hosts who would otherwise be apprehensive about presenting, we get to hear about the successes and tribulations encountered by the whole cohort of delegates. At least one session would be a plenary debate, focussed on a problem that the industry is currently facing. This session has the modest aim of discovering a solution to the problem to move the industry as a whole forward. Another way in which diversity is introduced into the conference is by listening to people outside of our own sector. If infosec is having trouble getting budget for its activities, perhaps it ought to invite more CFOs or comptrollers to its conferences to discuss that. If iPhone app developers find it hard to incorporate concurrency into their application designs, they could do worse than to listen to an Erlang or Occam expert. Above all, the echo chamber would be avoided; session hosts would be asked to challenge the perceived industry status quo.

I’ve long thought that if a talk of mine doesn’t annoy at least one member of the audience then I haven’t said anything useful; a former manager of mine said “if we both think the same way about everything then one of us is redundant”. This way of thinking would be codified into the conference. Essentially, what I’m talking about is the death of the thought leader (or “rock star”). Rather than having one subject matter expert opining on how everyone should think about security, UX, marketing, or whatever, PerfectConf encourages the community to work together like a slime mould, allowing the collective motion of all of the members to explore all opportunities and options and select the best one by communicating freely across the colony.

Slime mold solving a maze (photo: Nature)

Finally, PerfectConf proceedings are published as soon as practical; not just the speaker sessions but the debates too. Where the plenum reaches a consensus, the consensus decision becomes available for all those people who couldn’t make it to the conference to discover, consider, and potentially adopt or react to. Unfortunately I’m not a conference organiser.

Software-ICs and a component marketplace

In the previous post, I was talking about Object-Oriented Programming: An Evolutionary Approach. What follows is a thought experiment based on that.

Chapter 6 of Brad Cox’s book, once he’s finished explaining how ObjC works (and who to buy it from), is concerned with his vision of how Object-Oriented software will be built. He envisions “Software-ICs”—compiled object files defining the code to support a single class (no need for header files, remember) that are distributed with documentation on how to use that class.
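
To make that concrete, here’s a minimal sketch of what a Software-IC might look like in today’s Objective-C. The Stack class and file names are my own invented example, and unlike Cox’s original scheme I’ve included a header, because modern compilers expect one; the point is that the compiled Stack.o, plus a page of documentation, is the deliverable.

// Stack.h: the published interface; in Cox's scheme the printed
// documentation played this role.
#import <Foundation/Foundation.h>

@interface Stack : NSObject
- (void)push:(id)anObject;
- (id)pop;
- (NSUInteger)depth;
@end

// Stack.m: compiled once (e.g. clang -fobjc-arc -c Stack.m) and shipped
// as the object file Stack.o, the Software-IC itself.
#import "Stack.h"

@implementation Stack {
    NSMutableArray *_contents;
}

- (instancetype)init {
    if ((self = [super init])) {
        _contents = [NSMutableArray array];
    }
    return self;
}

- (void)push:(id)anObject { [_contents addObject:anObject]; }

- (id)pop {
    id top = [_contents lastObject];
    if (top) { [_contents removeLastObject]; }
    return top;
}

- (NSUInteger)depth { return [_contents count]; }
@end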

Developers or “software librarians” connect ICs together into collections called “categories”, which are implemented as object libraries. It’s a bit unfortunate that “category” turned out to be an inappropriate name, mainly due to its later reuse by NeXT; but then the alternative word “framework” is also unfortunate, due to confusion with the computer science term (which allows that AppKit, WebObjects and UIKit are frameworks, but Foundation, Quartz and so on are not). But it’s an entirely understandable reuse: in Smalltalk-80 related methods are grouped into categories, and NeXT used the same terminology for a very similar purpose.
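
For anyone who only knows the later meaning of the word, a NeXT-style category is a way of grouping extra methods onto an existing class. This little sketch (the Reversing category is an invented example) shows how different that is from Cox’s category-as-library:

#import <Foundation/Foundation.h>

// A NeXT/Apple-style category: related methods grouped onto an existing
// class, rather than Cox's library of related classes.
@interface NSString (Reversing)
- (NSString *)reversedString;
@end

@implementation NSString (Reversing)
- (NSString *)reversedString {
    NSMutableString *reversed = [NSMutableString stringWithCapacity:[self length]];
    for (NSUInteger i = [self length]; i > 0; i--) {
        [reversed appendFormat:@"%C", [self characterAtIndex:i - 1]];
    }
    return reversed;
}
@end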

Interestingly, Cox allowed for the compiler to generate vtables of selectors for each category, a bit like the Amiga operating system’s library format. That’s to support having different argument and return types for selectors with the same name. Modern Objective-C doesn’t support that; if you define selectors with the same name but different parameter or return types, you’ll get a warning and your code might not work correctly.
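
Here’s a quick sketch of the kind of clash I mean (both classes are invented for illustration): two unrelated classes declare -reading with different return types, and when the receiver is typed only as id the compiler complains that it found multiple methods named ‘reading’ and has to pick one signature, which may not be the one you intended.

#import <Foundation/Foundation.h>

@interface Thermometer : NSObject
- (float)reading;               // returns a scalar
@end

@implementation Thermometer
- (float)reading { return 21.5f; }
@end

@interface Barometer : NSObject
- (NSString *)reading;          // same selector name, different return type
@end

@implementation Barometer
- (NSString *)reading { return @"1013 hPa"; }
@end

// With the receiver typed as id, clang warns (or, under ARC, errors) about
// multiple methods named 'reading' with mismatched result types and has to
// pick one; if it picks the float version this %@ format is wrong at run time.
void logReading(id sensor) {
    NSLog(@"%@", [sensor reading]);
}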

Finally, an application is a network of categories connected by the linker. One (or perhaps more, depending on your design) of the categories in the application contains the app-specific classes.

My reason for bringing this up is that this vision of object-oriented software engineering closely models component-oriented hardware engineering, by allowing software shops to produce catalogs of the components they offer at each level. Just as you can order a single IC, or a circuit board with a few ICs connected, or a whole widget, so you could order a class, or a category, or an application. If you want to build a new application, you might buy a couple of classes from one vendor, a category from another vendor, then write a few classes yourself and integrate the whole lot.

Enough of the ancient book; let’s talk about the real world.

We have a lot of this, and make quite a lot of use of it. There are loads of Objective-C classes, libraries and frameworks out there for us to use, and to some extent there are catalogs. Many of the components we can use are open source, which means that we can treat the class interfaces themselves as the catalogs. If we’re lucky there’ll be some documentation, perhaps in the form of AppleDoc or a README.

Unfortunately availability vastly outstrips discoverability. You have to go to multiple catalogs to ensure that you’ve exhausted the search space: Google Code, GitHub, BitBucket, SourceForge etc. in addition to finding commercial libraries which won’t be listed in any of those places. Actual code search engines like OpenGrok and Koders are great for finding out about source code, but not so great for discovering it in the first place.

Metacatalogs like Cocoa Objects, Cocoa Controls and CocoaOpen solve part of this problem by letting people list their source code in a single place, but because they’re incomplete they only add to the number of places you need to search.

Then, once you’ve got the component, what do you do? Are you meant to drop the source files into your project? Should you drop the project in and add the library as a dependency of your app? Should you use CocoaPods?

Learn from what we already do

Just as we already push most of the apps we write to a single app store where customers can discover, purchase and install apps in a state where they’re ready to use, we should do the same with components.

[Please bear in mind that like most descriptions of ideas, a lot of nuances and complexity are known but are elided below for the sake of clarity. Comment brownie points will not be awarded for comments that explain how I haven’t considered case X; I probably have.]

A component store would, for people browsing it, start off looking very similar to Cox’s idea of a component catalog. You’d go to it and search for a component that suits your needs. You could see a “spec sheet” for each component detailing what it does, what it costs, the terms of using it and that sort of thing. You’d then buy the component, if it’s a paid one, and download it. If the licence permits it you could download the source, too.

The download would drop the binary and headers into a folder that Xcode would recognise as an additional SDK. It would also drop the documentation in docset format into a standard location. An Xcode project would just need to point at the additional SDK and it could pick up all of the components available to the developer.
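
Purely as a sketch of how that might look on disk (the ComponentStore folder and the AmazingGrid component are invented for illustration; the docset location and the ADDITIONAL_SDKS build setting are, as far as I know, existing Xcode conventions):

~/Library/Developer/ComponentStore/Components.sdk/
    SDKSettings.plist                  (marks the folder as an SDK)
    usr/include/AmazingGrid/           (the component's headers)
    usr/lib/libAmazingGrid.a           (the compiled component)
~/Library/Developer/Shared/Documentation/DocSets/
    com.example.AmazingGrid.docset     (reference documentation for Xcode)

A project would then pick up everything in Components.sdk by pointing its ADDITIONAL_SDKS build setting (in the target settings or an xcconfig file) at that folder.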

From the perspective of a component manufacturer, the component store would look a little like iTunes Connect. You’d write your code, then package it up for the store in a standard way along with the description that goes into the “spec sheet”. For open source projects that could just involve git push componentstore master to have the store itself generate the binaries and documentation from the source code.

Culture, heritage and apps

I said earlier on Twitter that I’m disappointed with the state of apps produced for museums and libraries. I’d better explain what I mean. Here’s what I said:

Disappointed to find that many museum apps (British Library, Bodleian, Concorde etc) are just the same app with different content. :-(

In each case (particularly Concorde) there’s some magic specific to the subject. Rubber-stamping the app doesn’t capture that magic.

They’re all made with a tool called Toura that sits atop PhoneGap. Makes it easy to get the content into the apps at the cost of expression.

So to be clear: my problem is not with the museums and other heritage sites. I’m familiar with the financial and political problems associated with running a museum. My social circle includes curators, librarians and medieval manuscript experts and I know that money is tight, that oversight is close and that any form of expenditure in promotion must result in a demonstrable increase in feet through the door and donations to be considered a success.


The Magna Carta

My problem is also not with the Toura product and the team behind it. They’re to be commended for identifying that while the culture and heritage community doesn’t have much money, it still deserves to be represented on our tablets and phones. The apps I listed above all feature astounding objects: examples include the Concorde aircraft, the Lindisfarne Gospels and the Magna Carta.


Concorde

So why are these apps disappointing? It’s basically because they’re all the same. Each of these objects has its own magic: the unique shape of Concorde, the vivid colours of the gospels and the constitution-defining text of the Carta. So why present them in the same way? Why not make a Concorde app that evokes aerodynamic speed, an illuminated Lindisfarne gospels app, and a revolutionary Magna Carta app? As many people have explained, it’s because standard content-viewing apps like Toura are all these institutions can afford.


Folio 27r from the Lindisfarne Gospels: image from Wikimedia Commons

There’s something seriously wrong in our app industry. We’ve created a world in which apps are too cheap, and developers struggle to make a living from 70¢ per sale. Simultaneously, we’ve created a world in which apps are too expensive: the keepers of some of the world’s most interesting objects can’t afford to showcase these with more than a generic master/detail view. What’s that about?

What mindset leads us to demand high worth, when we’re making products that we can’t convince customers contain any value? If they don’t see the value, are we deluding ourselves? People talk about events like the Instagram acquisition as evidence of a bubble in a subset of the app economy. But isn’t that oxymoron – a free app that’s worth a billion dollars – just an exaggerated version of what the rest of the industry is up to?

Let me present this problem in a different way, by moving from apps about museums to museums about apps.

The Rijksmuseum, Amsterdam.

Last month I spent an enjoyable time with some very good friends in Amsterdam, the city of both apps and museums. While walking around the Rijksmuseum, I reminded myself that we’re in the middle of possibly the largest and definitely the fastest technology-mediated social revolution humanity has ever experienced. Those of us enabling the application of this technology are providing the fulcrum about which our species is pivoting.

During this year, museums will look at apps as cultural, historical and ethnological artefacts in their own right. Over the coming century, heritage institutions will shape the way that our brief period in time is presented for posterity. You can be sure that not all of the million apps currently available across all the app stores will be on display.

As with any presentation, perfection is achieved not when there is nothing left to add but when there is nothing left to take away. Just as you can’t (and wouldn’t want to) walk into a museum and see every manuscript from the 10th century on display, you won’t be able to walk into the Nieuwerijksmuseum in 3112 and see every app from the 21st century. Only those that are considered cultural treasures will be given pride of place in the galleries. Can something that apparently has no value be considered a cultural treasure? Will any of your apps be part of the presentation? I don’t think I’ve yet produced anything that will be, and that needs to change.