On device identifiers

Note: as ever, this blog refrains from commenting on speculation regarding undisclosed product innovations from device providers. This post is about the concept of tracking users via a device identifier. You might find the discussion useful in considering future product directions; that’s fine.

Keeping records of what users are up to has its benefits. As a security boffin, I’m fully aware of the benefits of good auditing: discovering what users (and others) have (or haven’t) done to a valuable system. It also lets developers find out how users are getting on with their application: whether they ignore particular features, or have trouble deciding what to do at points in the app’s workflow.

Indeed, sometimes users actively enjoy having their behaviour tracked. Every browser has a history feature; many games let players see what they’ve achieved and compare it with others. Location games would be pretty pointless if they could only tell me where I am now, not tell the story of where I’ve been.

A whole bunch of companies package APIs for tracking user behaviour in smartphone devices. These are the Analytics companies. To paraphrase Scott Adams: analytics is derived from the root word “anal”, and the Greek “lytics” meaning “to pull a business model from”. What they give developers for free is the API to put analytics services in their apps, and the tools to draw useful conclusions from these data.

This is where the fun begins. Imagine that the analytics company uses some material derived from a device identifier (a UDID, IMEI, or some other hardware key) as the database key to associate particular events with users. Now, if the same user uses multiple apps on the same device, even apps from different developers, and they all use that analytics API, then the analytics provider can aggregate the data across all of the apps and build up a bigger picture of that user’s behaviour.
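
To make the mechanism concrete, here’s a minimal sketch in Swift of how such an SDK could behave. Everything in it is hypothetical: the AnalyticsClient and Event names and the choice of SHA-256 are mine, not any real vendor’s API. The point is only that the key depends on the device, never on the app.

```swift
import Foundation
import CryptoKit

// Hypothetical analytics SDK: every event is tagged with a key derived from a
// hardware identifier, so the key is the same in every app on the device.
struct Event: Codable {
    let deviceKey: String   // identical for every app on this device
    let app: String
    let name: String
    let timestamp: Date
}

struct AnalyticsClient {
    let deviceKey: String
    let app: String

    init(hardwareIdentifier: String, app: String) {
        // Hashing the identifier doesn't anonymise anything: the digest is
        // still stable across every app that embeds the same SDK.
        let digest = SHA256.hash(data: Data(hardwareIdentifier.utf8))
        self.deviceKey = digest.map { String(format: "%02x", $0) }.joined()
        self.app = app
    }

    func log(_ name: String) -> Event {
        // A real SDK would also POST this to the provider's servers.
        return Event(deviceKey: deviceKey, app: app, name: name, timestamp: Date())
    }
}
```

Because the key is derived purely from the hardware, a to-do app and a weather app that embed the same SDK file their events under the same key in the provider’s database.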

If only one of those apps records the user’s name as part of its analytics, then the analytics company – a company with whom the user has no relationship – gets to associate all of that behaviour with the user’s real name. So, of course, do that company’s customers: remember that users and developers get the analytics company’s products for free, and that businesses have a limited tendency toward altruism. The value in an analytics company is its database, so of course it sells that database to those who will buy it: advertisers, say, but again, companies with whom the user has no direct relationship.
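
The server-side join is equally unremarkable. Continuing the hypothetical sketch above (and reusing its Event type): group everything by device key, then attach a real name wherever any single app happened to report one.

```swift
// Whichever app persuaded the user to register supplies the real name.
struct Identity {
    let deviceKey: String
    let realName: String?
}

// Group every event ever received by device key, and label each group with a
// real name if any one app reported one for that device.
func profiles(events: [Event], identities: [Identity])
        -> [String: (name: String?, events: [Event])] {
    let names = Dictionary(identities.compactMap { id in id.realName.map { (id.deviceKey, $0) } },
                           uniquingKeysWith: { first, _ in first })
    return Dictionary(grouping: events, by: { $0.deviceKey })
        .mapValues { group in (name: names[group[0].deviceKey], events: group) }
}
```

One registration form in one app is enough to put a name on the whole group.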

People tend to be uneasy about invisible or unknown sharing of their information, particularly when the scope or consequences of such sharing are not obvious up front[*]. The level of identifiable[**] information and scope of data represented by a cross-app analysis of a smartphone user’s behaviour – whether aggregated via the model described above or other means – is downright stalker-ish, and will make users uncomfortable.

One can imagine a scenario where smartphone providers try not to make their users uncomfortable: after all, those users are the providers’ bread and butter. So they don’t give developers access to such a “primary key” as has been described here. Developers would then be stuck with generating identifiers inside their apps, so tracking a single user inside a single app would still work, but it would be impossible to aggregate the data across multiple apps, or across the same app on multiple devices.
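
Without the shared key, about the best a developer can do is mint an identifier on first launch and keep it locally. A minimal sketch, with a made-up defaults key:

```swift
import Foundation

// Generate (or retrieve) an identifier that exists only inside this app on
// this device. The defaults key is hypothetical.
func appLocalIdentifier() -> String {
    let key = "com.example.analytics.installIdentifier"
    if let existing = UserDefaults.standard.string(forKey: key) {
        return existing
    }
    let fresh = UUID().uuidString
    UserDefaults.standard.set(fresh, forKey: key)
    return fresh
}
```

Every app, every reinstall and every device gets a different value, so nothing joins up outside that one app.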

Unless, of course, the developer can coerce the user into associating all of those different identifiers with some shared identifier, such as a network service username. But how do you get users to sign up for network services? By ensuring that the service has value for the user. Look at game networks that do associate user activity across multiple apps, like OpenFeint and Game Center: they work because players like seeing what games their friends are playing, and sharing their achievements with other people.
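
In code, the bridge is nothing more exotic than reporting the local identifier alongside the account name when the user signs in. A hypothetical continuation of the earlier sketch, reusing appLocalIdentifier():

```swift
// Once the user signs in to the game network, the service can associate this
// app's local identifier with an account that spans apps and devices.
struct SignInEvent: Codable {
    let username: String           // the shared, user-chosen identifier
    let installIdentifier: String  // the per-app value from appLocalIdentifier()
    let app: String
}

let signIn = SignInEvent(username: "player_one",
                         installIdentifier: appLocalIdentifier(),
                         app: "SpaceLizards")
```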

The conclusion is that, in the no-device-identifier world, it’s still possible to aggregate user behaviour, but only if you exchange that ability for something that the user values. Seems like a fair deal.

[*] My “don’t be a dick” guide to data privacy takes into account the fact that people like sharing information via online services such as Facebook, foursquare etc. but that they want to do so on their own terms. It goes like this: you have no right to anything except what the user told you. You have no right to share anything except what the user told you to share; they will tell you who you may share it with, and can change their minds. The user has a right to know what they’re telling you and how you’re sharing it.

[**] Given UK-sized postal zones, your surname and postcode are sufficient to uniquely identify you. Probably your birthday and postcode would also work. It doesn’t take much information to uniquely identify someone, anyway.
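
For the birthday claim, a back-of-envelope check. Assume, purely for illustration, that a full UK postcode covers around 40 residents (real postcode units vary widely):

```swift
import Foundation

// Probability that at least one of your ~39 postcode neighbours shares your
// birthday (ignoring leap days and birthday clustering).
let neighbours = 39.0
let pSomeoneSharesYourBirthday = 1 - pow(364.0 / 365.0, neighbours)
print(pSomeoneSharesYourBirthday)  // ≈ 0.10
```

So roughly nine times out of ten nobody else at your postcode shares your birthday, and the pair identifies you outright.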

Remember the future?

The future is notoriously hard to pin down. For example, what is Seattle’s lasting legacy from 20th Century technology? What would people have pointed to in, say, the 1970s? Of course, Seattle is the home of Boeing, who did a lot of construction for NASA (and bought most of the other companies that were also doing so) on projects like the Saturn V rocket and the Space Shuttle. Toward the end of the 1970s, those in the know in Seattle would have confidently claimed that the Shuttle’s weekly trips into space, as the world’s longest-distance haulage service, would be central to 21st-century Space Age technology. But neither the Shuttle nor the Saturn V works any more, and nothing equivalent has come along to replace them (certainly not from Boeing). The permanent remnant of Seattle’s part in the space race comes from earlier on, when the USSR already had satellites in orbit, Gagarin had safely returned, and the USA wanted to assert its technological superiority over the Soviets. I’m talking, of course, about Seattle Center and its most famous landmark: a giant lift shaft with a restaurant at one end and a gift shop at the other.

People like to prognosticate about how our industry, society or civilisation is going to change: both in the short term, and in the distant future (which can be anything beyond about three years in software terms). Sure, it’s fun to speculate. Earlier this year I took part in a panel with my arch-brother among others, where many of the questions were about what direction we thought Apple might take with the iPad, how existing companies would work their software to fit into post-PC devices, that sort of thing. That means that not only do we enjoy prognostication, we seek it out. People enjoy playing the game of deciding what the future will be like, and hope for that spark of satisfaction of knowing either that they were right, or that they were there when someone else was right.

But why? It’s not as if we’re any good at it. The one thing that history lets us discover is that people who predict the future generally get it wrong. If they have the foresight to make really grandiose predictions they get away with it, because no-one finds out that they were talking out of their arses until well after they died. But just as the Space Needle has outlived the Space Shuttle, so the “next big thing” can easily turn out to be a fad while something apparently small and inconsequential now turns out to last and last.

Of course I’ll discuss the computing industry in this article, but don’t think this is specific to computing. In general, people go for two diametrically opposed visions of the future: either it’s entirely different from what came before, or it’s the same but a little better. The horses are faster, that kind of thing. Typically, experts in an industry are the people who find it hardest to predict that middle ground: a lot of things are the same, but one or two things have created a large change. Like the people at the Air Ministry who knew that superchargers were too heavy to ever allow Frank Whittle’s jet turbine to take off. Or the people who didn’t believe that people could travel at locomotive speeds. Or H.G. Wells, who predicted men on (well, in) the Moon, severely stratified society, life on other planets…just not the computer that was invented during his lifetime.

OK, so, computing. Remember the future of computers? The future will contain “maybe five” computers, according to Thomas Watson at IBM. I’m in a room now with about nine computers, not including the microcontrollers in my watch, hi-fi, cameras and so forth. There were around ten examples of Colossus produced in the 1940s. Why maybe five computers? Because computers are so damned heavy you need to reinforce the main frame of your floor to put them in. Because there are perhaps two dozen people in the world who understand computers. Because if you have too many then you have loads of dangerous mercury sloshing around. Because companies are putting themselves out of business attempting to sell these things for a million dollars when the parts cost nearly two million. And, finally, because there’s just not much you can do on a computer: not everyone needs ballistics tables (and most of the people who do want them, we don’t want to sell to).

Enough of the dim depths of computing. Let’s come into the future’s future, and ask whether you remember the other future of computers: the workstation. Of course, now we know that mainframes are old and busted, and while minicomputers based on transistor-transistor logic are cheaper, smaller, more reliable and all, they’re still kind of big. Of course, micros like the Altair and the Apple are clearly toys, designed as winter-evening hobbies for married men[*]. Wouldn’t it be better to use VLSI technology so that everyone can have their own time-sharing UNIX systems[**] on their desks, connected perhaps through ultra-fast thinwire networks?

Better, maybe, but not best. Let’s look at some of the companies involved, in alphabetical order. Apollo? Acquired by HP. Digital? Acquired, circuitously, by HP. HP? Still going, but not making workstations (nor, apparently, much else) any more. IBM? See HP. NeXT? Acquired, making consumer electronics these days. Silicon Graphics? Acquired (after having left the workstation industry). Stanford University Networks? Acquired by a service company, very much in the vein of IBM or HP. Symbolics, the owners of the first ever .com domain? Went bankrupt.

The problem with high-end, you see, is that it has a tendency to become low-end. Anything a 1980s workstation could do could be done in a “personal” computer or a micro by, well, the 1980s. It’s hard to sell bog-standard features at a premium price, and by the time PCs had caught up with workstations, workstations hadn’t done anything new. Well, nothing worth talking about…who’d want a camera on their computer? Notice that the companies that did stay around (IBM and HP) did so by getting out of the workstation business: something SGI and Sun both also tried to do and failed. The erosion of the workstation market by the domestic computer is writ most large in the Apple-NeXT purchase.

So workstations aren’t the future. How about the future of user interfaces? We all know the problem, of course: novice computer users are confused and dissuaded by the “computery-ness” of computers, and by the abstract nature of the few metaphors that do exist (how many of you wallpaper your desktop?). The solution is obvious: we need to dial up the use of metaphor and skeuomorphism to make the user more comfortable in their digital surroundings. In other words, we need Bob. By taking more metaphors from the real world, we provide a familiar environment for users who can rely on what they already know about inboxes, bookshelves, desk drawers and curtains(!) in order to navigate the computer.

Actually, what we need is to get rid of every single mode in the computer’s interface. This is, perhaps, a less well-known future of computing than the Bob future of computing, despite being documented in the classic book The Humane Interface, by Jef Raskin. The theory goes like this: we’ve got experience of modal user interfaces, and we know that they suck. They force the user to stop working while the computer asks some asinine question, or tells them something pointless about the state of their application. They effectively reverse the master-slave relationship, making the user submit to the computer’s will for a while. That means that in the future, computers will surely dispose of modes completely. Well, full modes: of course partial modes that are entirely under the user’s control (the Shift key represents a partial mode, as does the Spotlight search field) are still permitted. So when the future comes to invent the smartphone, there’ll be no need for a modal view controller in the phone’s API because future UI designers will be enlightened regarding the evils of modality.
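
(As it turned out, modality is baked right into the smartphone APIs. Here’s a minimal UIKit sketch in today’s Swift spelling; in 2011 the equivalent call was presentModalViewController:animated:. The view controller name is made up.)

```swift
import UIKit

final class AsininePromptViewController: UIViewController {}  // hypothetical example

extension UIViewController {
    func interruptTheUser() {
        let prompt = AsininePromptViewController()
        prompt.modalPresentationStyle = .formSheet
        // A full mode: the user can't get back to their work until this is dismissed.
        present(prompt, animated: true)
    }
}
```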

A little closer to home, and a little nerdier, do you remember the future of the filesystem? HFS+ is, as we know, completely unsuitable as a filesystem for 2009 so future Macs will instead use Sun’s ZFS. This will allow logical volume management, versioned files…the sorts of goodies that can’t be done on HFS+. Oh, wait.

These are all microcosmic examples of how the future of computing hasn’t quite gone according to the predictions. I could quote more (one I’ve used before is Bob Cringely’s assertion in 1992 that in fifteen years we’d have post-PC PCs; well, I’m still using a PC to write this post and it’s 2011), but it’s time to look at the bigger picture, so I’m going to examine why the predictions from one particular book have or haven’t come about. I’m not picking this book because I want to hate on it; in fact in a number of areas the predictions are spot on. I’m picking on this book because the author specifically set out to make short, medium and long-term forecasts about the silicon revolution, and the longest-term predictions were due to have become real by the year 2000. The book is The Mighty Micro: The Impact of the Computer Revolution by Dr. Christopher Evans, published in 1979.

According to The Mighty Micro, the following should all have happened by now.

  • Openness and availability of information leads to the collapse of the Soviet Union. ✓
  • A twenty-hour working week and retirement at fifty. ✗
  • Microcontroller-based home security. ✓ For everyone, replacing the physical lock-and-key. ✗
  • Cars that anticipate and react to danger. ✓ As the standard. ✗
  • A “wristwatch” that monitors pulse and blood pressure. ✓
  • An entire library stored in the volume of a paperback book. ✓
  • A complete end to paper money. ✗
  • An end to domestic crime. ✗

So what happened? Well, “processors and storage will get smaller and cheaper” was the prevailing trend from the forties to the seventies, i.e. over the entire history of electronic computing to that point. Assuming that it would continue, and that new applications for tiny computers would be discovered, was a fairly safe bet, and one that played out well. The fundamental failures behind all of the other predictions were twofold: the assumptions that such applications would necessarily replace, rather than augment, whatever it was that we were doing before computers, and that we would not find novel things to do with our time once computers were doing the things we already did. The idea was that once computers were doing half of our work, we would have 50% as much work to do: not that we would find other types of work to fill the half of the working week that had been freed up.

One obvious thing we (well, some of us) have to do now that we didn’t before is program computers. Borrowing some figures from the BSA, there were 1.7M people working in software in the US in 2007, earning significantly more than the national average wage (though remember that this was during the outsourcing craze, so a lot of costs and jobs even for American companies might be missing here). The total worldwide expenditure on (packaged, not bespoke) software was estimated at $300bn. Once you include the service aspects and bespoke or in-house development, it’s likely that software was already a trillion-dollar industry by 2007. Before, if you remember, the smartphone app gold rush.

This is a huge (and, if we’re being brutally honest, inefficient) industry, with notoriously short deadlines, long working hours, capricious investors and variable margins. Why was it not predicted that, just as farmhands became machine operators, machine operators would become computer programmers? That the work of not having a computer would be replaced by the work of having a computer?

So, to conclude, I’ll return to a point from the article’s introduction: that making predictions is easy and fun, but making accurate predictions is hard. When a pundit tells you that something is a damp squib or a game-changer, they might be correct…but you might want to hedge your bets. Of course, carry on prognosticating and asking me to do so: it’s enjoyable.

[*] This is one of the more common themes of futurology: whatever the technological changes, whatever their impacts on the political or economic structure of the world, you can bet that socially things don’t change much, at least in the eyes of the prognosticators. Take the example of the Honeywell Kitchen Computer: computers will revolutionise the way we do everything, but don’t expect women to use them for work.

[**] Wait, if we’ve each got our own computer, why do they have to be time-sharing?

Want to hire iamleeg?

Well, that was fun. For nearly a year I’ve been running Fuzzy Aliens, a consultancy for app developers to help get security and privacy requirements correct, reducing the burden on the users. This came after a year of doing the same as a contractor, and a longer period of helping out via conference talks, a book on the topic, podcasts and so on. I’ve even been helping the public to understand the computer industry at museums.

Everything changes, and Fuzzy Aliens is no exception. While I’ve been able to help out on some really interesting projects, for everyone from indies to banks, there hasn’t been enough of this work to pay the bills. I’ve spoken with a number of people on this, and have heard a range of opinions on why there isn’t enough mobile app security work out there to keep one person employed. My own reflection on the year leads me to these items as the main culprits:

  • There isn’t a high risk to developers associated with getting security wrong;
  • Avoiding certain behaviour to improve security can mean losing out to competitors who don’t feel the need to comply;
  • The changes I want to make to the industry won’t come from a one-person company; and
  • I haven’t done a great job of communicating the benefits of app security to potential customers.

Some people say things like Fuzzy Aliens is “too early”, or that the industry “isn’t ready” for such a service: those are actually indications that I haven’t made the industry ready; in other words, that I didn’t get the advantages across well enough. Anyway, the end results are that I can and will learn from Fuzzy Aliens, and that I still want to make the world a better place. I will be doing so through the medium of salaried employment. In other words, you can give me a job (assuming you want to). The timeline is this:

  • The next month or so will be my Time of Searching. If you think you might want to hire me, get in touch and arrange an interview during August or early September.
  • Next will come my Time of Changing. Fuzzy Aliens will still offer consultancy for a very short while, so if you have been sitting on the fence about getting some security expertise on your app, now is the time to do it. But this will be when I research the things I’ll need to know for…
  • whatever it is that comes next.

What do I want to do?

Well, of course my main areas of experience are in applications and tools for UNIX platforms—particularly Mac OS X and iOS—and application security, and I plan to continue in that field. A former manager of mine described me thus on LinkedIn:

Graham is so much more than a highly competent software engineer. A restless “information scout” – finding it, making sense of it, bearing on a problem at hand or forging a compelling vision. Able to move effortlessly between “big picture” and an obscure detail. Highly capable relationships builder, engaging speaker and persuasive technology evangelist. Extremely fast learner. Able to use all those qualities very effectively to achieve ambitious goals.

Those skills can best be applied strategically, I think, so it’s time to become a senior/chief technologist, technology evangelist, technical strategy officer or developer manager. That’s the kind of thing I’ll be looking for, or for an opportunity that can lead to it. I want to spend an appreciable amount of time supporting a product or community that’s worth supporting: much as I’ve been doing for the last few years with the Cocoa community.

Training and mentoring would also be good things for me to do, I think. My video training course on unit testing seems to have been well-received, and of course I spent a lot of my consulting time on helping developers, project managers and C*Os to understand security issues in terms relevant to their needs.

Where do I want to do it?

Location is somewhat important, though obviously with a couple of years’ experience of telecommuting I’m comfortable with remote working too. The roles I’ve described above, which depend as much on relationships as on sitting at a computer, may be best served by split working.

My first preference for the location of my desk is a large subset of the south of the UK, bounded by Weston-Super-Mare and Lyme Regis to the west, Gloucester and Oxford to the north, Reading and Chichester to the east and the water to the south (though not the Solent: IoW is fine). Notice that London is outside this area: having worked for both employers and clients in the big smoke, I would rather not be in that city every day for any appreciable amount of time.

I’d be willing to entertain relocation elsewhere in Europe for a really cool opportunity. Preferably somewhere with a Germanic language because I can understand those (including, if push comes to shove, Icelandic and Faroese). Amsterdam, Stockholm and Dublin would all be cool. The States? No: I couldn’t commit to living over there for more than a short amount of time.

Who will I do it for?

That part is still open: it could be you. I would work for commercial, charity or government/academic sectors, but I have this restriction: you must not just be another contract app development agency/studio. You must be doing what you do because you think it’s worth doing, because that’s the standard I hold myself to. And charging marketing departments to slap their logo onto a UITableView displaying their blog’s RSS feed is not worth doing.

That’s why I’m not just falling back on contract iOS app development: it’s not very satisfying. I’d rather be paid enough to live doing something great, than make loads of money on asinine and unimportant contracts. Also, I’d rather work with other cool and motivated people, and that’s hard to do when you change project every couple of months.

So: you’re doing something you believe in. As long as you can convince me that it’s worth believing in and will be interesting to do, and you know where to find me, then perhaps I’ll help you to do it. Look at my CV, then, as I said before, e-mail me and we’ll sort out an interview.

I expect my reward to be dependent on how successful I make the product or community I’m supporting: it’s how I’ll be measuring myself so I hope it’s how you will be too. Of course, we all know that stock and options schemes are bullshit unless the stock is actually tradeable, so I’ll be bearing that in mind.

Some miscellaneous stuff

Here’s some things I’m looking for, either to embrace or avoid, that don’t quite fit in to the story above but are perhaps interesting to relate.

Things I’ve never done, but would

These aren’t necessarily things my next job must have, and aren’t all even work-related, but are things that I would take the opportunity to do.

  • Give a talk to an audience of more than 1,000 people.
  • Work in a field on a farm. Preferably in control of a tractor.
  • Write a whole application without using any accessors.
  • Ride a Harley-Davidson along the Californian coast.
  • Move the IT security industry away from throwing completed and deployed products at vulnerability testers, and towards understanding security as an appropriately-prioritised implicit customer requirement.
  • Have direct reports.

Things I don’t like

These are the things I would try to avoid.

  • “Rock star” developers, and companies who hire them.
  • Development teams organised in silos.
  • Technology holy wars.
  • Celery. Seriously, I hate celery.

OK, but first we like to Google our prospective hires.

I’ll save you the trouble.