Remember the future?

The future is notoriously hard to pin down. For example, what is Seattle’s lasting legacy from 20th-century technology? What would people have pointed to in, say, the 1970s? Of course, Seattle is the home of Boeing, who did a lot of construction for NASA (and bought most of the other companies that were also doing so) on projects like the Saturn V rocket and the Space Shuttle. Toward the end of the 1970s, those in the know in Seattle would have confidently claimed that the Shuttle’s weekly trips into space, as the world’s longest-distance haulage provider, would be central to 21st-century Space Age technology. But neither the Shuttle nor the Saturn V works any more, and nothing equivalent has come along to replace them (certainly not from Boeing). The permanent remnant of Seattle’s part in the space race comes from earlier on, when the USSR already had satellites in orbit, Gagarin had safely returned, and the USA wanted to assert its technological superiority over the Soviets. I’m talking, of course, about Seattle Center and its most famous landmark: a giant lift shaft with a restaurant at one end and a gift shop at the other.

People like to prognosticate about how our industry, society or civilisation is going to change: both in the short term, and in the distant future (which can be anything beyond about three years in software terms). Sure, it’s fun to speculate. Earlier this year I took part in a panel with my arch-brother among others, where many of the questions were about what direction we thought Apple might take with the iPad, how existing companies would rework their software to fit post-PC devices, that sort of thing. So not only do we enjoy prognostication, we seek it out. People enjoy playing the game of deciding what the future will be like, and hope for that spark of satisfaction of knowing either that they were right, or that they were there when someone else was right.

But why? It’s not as if we’re any good at it. The one thing that history lets us discover is that people who predict the future generally get it wrong. If they have the foresight to make really grandiose predictions they get away with it, because no-one finds out that they were talking out of their arses until well after they’ve died. But just as the Space Needle has outlived the Space Shuttle, so the “next big thing” can easily turn out to be a fad while something apparently small and inconsequential now turns out to last and last.

Of course I’ll discuss the computing industry in this article, but don’t think this is specific to computing. In general, people go for two diametrically opposed visions of the future: either it’s entirely different from what came before, or it’s the same but a little better. The horses are faster, that kind of thing. Typically, experts in an industry are the people who find it hardest to predict that middle ground: a lot of things are the same, but one or two things have created a large change. Like the people at the air ministry who knew that superchargers were too heavy ever to allow Frank Whittle’s jet turbine to take off. Or the people who didn’t believe that people could travel at locomotive speeds. Or H.G. Wells, who predicted men on (well, in) the Moon, severely stratified society, life on other planets…just not the computer that was invented during his lifetime.

OK, so, computing. Remember the future of computers? The future will contain “maybe five” computers, according to Thomas Watson at IBM. I’m in a room now with about nine computers, not including the microcontrollers in my watch, hi-fi, cameras and so forth. There were around ten examples of Colossus produced in the 1940s. Why maybe five computers? Because computers are so damned heavy you need to reinforce the main frame of your floor to put them in. Because there are perhaps two dozen people in the world who understand computers. Because if you have too many then you have loads of dangerous mercury sloshing around. Because companies are putting themselves out of business attempting to sell these things for a million dollars when the parts cost nearly two million. And, finally, because there’s just not much you can do on a computer: not everyone needs ballistics tables (and most of the people who do want them, we don’t want to sell to).

Enough of the dim depths of computing. Let’s come into the future’s future, and ask whether you remember the other future of computers: the workstation. Of course, now we know that mainframes are old and busted, and while minicomputers based on transistor-to-transistor logic are cheaper, smaller, more reliable and all, they’re still kind of big. Of course, micros like the Altair and the Apple are clearly toys, designed as winter-evening hobbies for married men[*]. Wouldn’t it be better to use VLSI technology so that everyone can have their own time-sharing UNIX systems[**] on their desks, connected perhaps through the ultra-fast thinwire networks?

Better, maybe, but not best. Let’s look at some of the companies involved, in alphabetical order. Apollo? Acquired by HP. Digital? Acquired, circuitously, by HP. HP? Still going, but not making workstations (nor, apparently, much else) any more. IBM? See HP. NeXT? Acquired, making consumer electronics these days. Silicon Graphics? Acquired (after having left the workstation industry). Stanford University Networks? Acquired by a service company, very much in the vein of IBM or HP. Symbolics, the owners of the first ever .com domain? Went bankrupt.

The problem with high-end, you see, is that it has a tendency to become low-end. Anything a 1980s workstation can do could be done in a “personal” computer or a micro by, well, by the 1980s. It’s hard to sell bog standard features at a premium price, and by the time PCs had caught up to workstations, workstations hadn’t done anything new. Well, nothing worth talking about…who’d want a camera on their computer? Notice that the companies that did stay around (IBM and HP) did so by getting out of the workstation business: something SGI and Sun both also tried to do and failed. The erosion of the workstation market by the domestic computer is writ most large in the Apple-NeXT purchase.

So workstations aren’t the future. How about the future of user interfaces? We all know the problem, of course: novice computer users are confused and dissuaded by the “computery-ness” of computers, and by the abstract nature of the few metaphors that do exist (how many of you wallpaper your desktop?). The solution is obvious: we need to dial up the use of metaphor and skeuomorphism to make the user more comfortable in their digital surroundings. In other words, we need Bob. By taking more metaphors from the real world, we provide a familiar environment for users who can rely on what they already know about inboxes, bookshelves, desk drawers and curtains(!) in order to navigate the computer.

Actually, what we need is to get rid of every single mode in the computer’s interface. This is, perhaps, a less well-known future of computing than the Bob future of computing, despite being documented in the classic book The Humane Interface, by Jef Raskin. The theory goes like this: we’ve got experience of modal user interfaces, and we know that they suck. They force the user to stop working while the computer asks some asinine question, or tells them something pointless about the state of their application. They effectively reverse the master-slave relationship, making the user submit to the computer’s will for a while. That means that in the future, computers will surely dispose of modes completely. Well, full modes: of course partial modes that are entirely under the user’s control (the Shift key represents a partial mode, as does the Spotlight search field) are still permitted. So when the future comes to invent the smartphone, there’ll be no need for a modal view controller in the phone’s API because future UI designers will be enlightened regarding the evils of modality.

A little closer to home, and a little nerdier, do you remember the future of the filesystem? HFS+ is, as we know, completely unsuitable as a filesystem for 2009, so future Macs will instead use Sun’s ZFS. This will allow logical volume management, versioned files…the sorts of goodies that can’t be done on HFS+. Oh, wait.

These are all microcosmic examples of how the future of computing hasn’t quite gone according to the predictions. I could quote more (one I’ve used before is Bob Cringely’s assertion in 1992 that in fifteen years we’d have post-PC PCs; well, I’m still using a PC to write this post and it’s 2011), but it’s time to look at the bigger picture, so I’m going to examine why the predictions from one particular book have or haven’t come about. I’m not picking this book because I want to hate on it; in fact in a number of areas the predictions are spot on. I’m picking on this book because the author specifically set out to make short-, medium- and long-term forecasts about the silicon revolution, and the longest-term predictions were due to have become real by the year 2000. The book is The Mighty Micro: Impact of the Computer Revolution by Dr. Christopher Evans, published in 1979.

According to The Mighty Micro, the following should all have happened by now.

  • Openness and availability of information lead to the collapse of the Soviet Union. ✓
  • A twenty-hour working week and retirement at fifty. ✗
  • Microcontroller-based home security. ✓ For everyone, replacing the physical lock-and-key. ✗
  • Cars that anticipate and react to danger. ✓ As the standard. ✗
  • A “wristwatch” that monitors pulse and blood pressure. ✓
  • An entire library stored in the volume of a paperback book. ✓
  • A complete end to paper money. ✗
  • An end to domestic crime. ✗

So what happened? Well, “processors and storage will get smaller and cheaper” was the prevailing trend from the forties to the seventies, i.e. over the entire history of electronic computing. Assuming that would continue, and that new applications for tiny computers would be discovered, was a fairly safe bet, and one that played out well. The fundamental failures behind all of the other predictions were twofold: that such applications would necessarily replace, rather than augment, whatever it was that we were doing before computers, and that we would not find novel things to do with our time once computers were doing the things we already did. The idea was that once computers were doing half of our work, we would have 50% as much work to do: not that we would be able to do other types of work for that 50% of our working week.

One obvious thing we (well, some of us) have to do now that we didn’t before is program computers. Borrowing some figures from the BSA, there were 1.7M people working in software in the US in 2007, earning significantly more than the national average wage (though remember that this was during the outsourcing craze, so a lot of costs and jobs even for American companies might be missing here). The total worldwide expenditure on (packaged, not bespoke) software was estimated at $300bn. Once you include the service aspects and bespoke or in-house development, it’s likely that software was already a trillion-dollar industry by 2007. Before, if you remember, the smartphone app gold rush.

This is a huge (and, if we’re being brutally honest, inefficient) industry, with notoriously short deadlines, long working hours, capricious investors and variable margins. Why was it not predicted that, just as farmhands became machine operators, machine operators would become computer programmers? That the work of not having a computer would be replaced by the work of having a computer?

So, to conclude, I’ll return to a point from the article’s introduction: that making predictions is easy and fun, but making accurate predictions is hard. When a pundit tells you that something is a damp squib or a game-changer, they might be correct…but you might want to hedge your bets. Of course, carry on prognosticating and asking me to do so: it’s enjoyable.

[*] This is one of the more common themes of futurology; whatever the technological changes, whatever their impacts on the political or economic structure of the world, you can bet that socially things don’t change much, at least in the eye of the prognosticators. Take the example of the Honeywell Kitchen Computer: computers will revolutionise the way we do everything, but don’t expect women to use them for work.

[**] Wait, if we’ve each got our own computer, why do they have to be time-sharing?


Want to hire iamleeg?

Well, that was fun. For nearly a year I’ve been running Fuzzy Aliens, a consultancy for app developers to help get security and privacy requirements correct, reducing the burden on the users. This came after a year of doing the same as a contractor, and a longer period of helping out via conference talks, a book on the topic, podcasts and so on. I’ve even been helping the public to understand the computer industry at museums.

Everything changes, and Fuzzy Aliens is no exception. While I’ve been able to help out on some really interesting projects, for everyone from indies to banks, there hasn’t been enough of this work to pay the bills. I’ve spoken with a number of people on this, and have heard a range of opinions on why there isn’t enough mobile app security work out there to keep one person employed. My own reflection on the year leads me to these items as the main culprits:

  • There isn’t a high risk to developers associated with getting security wrong;
  • Avoiding certain behaviour to improve security can mean losing out to competitors who don’t feel the need to comply;
  • The changes I want to make to the industry won’t come from a one-person company; and
  • I haven’t done a great job of communicating the benefits of app security to potential customers.

Some people say things like Fuzzy Aliens is “too early”, or that the industry “isn’t ready” for such a service: those are actually indications that I haven’t made the industry ready; in other words, that I didn’t get the advantages across well enough. Anyway, the end results are that I can and will learn from Fuzzy Aliens, and that I still want to make the world a better place. I will be doing so through the medium of salaried employment. In other words, you can give me a job (assuming you want to). The timeline is this:

  • The next month or so will be my Time of Searching. If you think you might want to hire me, get in touch and arrange an interview during August or early September.
  • Next will come my Time of Changing. Fuzzy Aliens will still offer consultancy for a very short while, so if you have been sitting on the fence about getting some security expertise on your app, now is the time to do it. But this will be when I research the things I’ll need to know for…
  • whatever it is that comes next.

What do I want to do?

Well, of course my main areas of experience are in applications and tools for UNIX platforms—particularly Mac OS X and iOS—and application security, and I plan to continue in that field. A former manager of mine described me thus on LinkedIn:

Graham is so much more than a highly competent software engineer. A restless “information scout” – finding it, making sense of it, bearing on a problem at hand or forging a compelling vision. Able to move effortlessly between “big picture” and an obscure detail. Highly capable relationships builder, engaging speaker and persuasive technology evangelist. Extremely fast learner. Able to use all those qualities very effectively to achieve ambitious goals.

Those skills can best be applied strategically I think: so it’s time to become a senior/chief technologist, technology evangelist, technical strategy officer or developer manager. That’s the kind of thing I’ll be looking for, or for an opportunity that can lead to it. I want to spend an appreciable amount of time supporting a product or community that’s worth supporting: much as I’ve been doing for the last few years with the Cocoa community.

Training and mentoring would also be good things for me to do, I think. My video training course on unit testing seems to have been well-received, and of course I spent a lot of my consulting time on helping developers, project managers and C*Os to understand security issues in terms relevant to their needs.

Where do I want to do it?

Location is somewhat important, though obviously with a couple of years’ experience at telecommuting I’m comfortable with remote working too. The roles I’ve described above, which depend as much on relationships as on sitting at a computer, may best be served by split working.

My first choice for the location of my desk is a large subset of the south of the UK, bounded by Weston-Super-Mare and Lyme Regis to the west, Gloucester and Oxford to the north, Reading and Chichester to the east and the water to the south (though not the Solent: IoW is fine). Notice that London is outside this area: having worked for both employers and clients in the big smoke, I would rather not be in that city every day for any appreciable amount of time.

I’d be willing to entertain relocation elsewhere in Europe for a really cool opportunity. Preferably somewhere with a Germanic language because I can understand those (including, if push comes to shove, Icelandic and Faroese). Amsterdam, Stockholm and Dublin would all be cool. The States? No: I couldn’t commit to living over there for more than a short amount of time.

Who will you do it for?

That part is still open: it could be you. I would work for commercial, charity or government/academic sectors, but I have this restriction: you must not just be another contract app development agency/studio. You must be doing what you do because you think it’s worth doing, because that’s the standard I hold myself to. And charging marketing departments to slap their logo onto a UITableView displaying their blog’s RSS feed is not worth doing.

That’s why I’m not just falling back on contract iOS app development: it’s not very satisfying. I’d rather be paid enough to live doing something great, than make loads of money on asinine and unimportant contracts. Also, I’d rather work with other cool and motivated people, and that’s hard to do when you change project every couple of months.

So you’re doing something you believe in, and as long as you can convince me it’s worth believing in and will be interesting to do, and you know where to find me, then perhaps I’ll help you to do it. Look at my CV, then as I said before, e-mail me and we’ll sort out an interview.

I expect my reward to be dependent on how successful I make the product or community I’m supporting: it’s how I’ll be measuring myself so I hope it’s how you will be too. Of course, we all know that stock and options schemes are bullshit unless the stock is actually tradeable, so I’ll be bearing that in mind.

Some miscellaneous stuff

Here’s some things I’m looking for, either to embrace or avoid, that don’t quite fit in to the story above but are perhaps interesting to relate.

Things I’ve never done, but would

These aren’t necessarily things my next job must have, and aren’t all even work-related, but are things that I would take the opportunity to do.

  • Give a talk to an audience of more than 1,000 people.
  • Work in a field on a farm. Preferably in control of a tractor.
  • Write a whole application without using any accessors.
  • Ride a Harley-Davidson along the Californian coast.
  • Move the IT security industry away from throwing completed and deployed products at vulnerability testers, and towards understanding security as an appropriately-prioritised implicit customer requirement.
  • Have direct reports.

Things I don’t like

These are the things I would try to avoid.

  • “Rock star” developers, and companies who hire them.
  • Development teams organised in silos.
  • Technology holy wars.
  • Celery. Seriously, I hate celery.

OK, but first we like to Google our prospective hires.

I’ll save you the trouble.


On the new Lion security things

This post will take a high-level view of some of Lion’s new security features, and examine how they fit (or don’t) in the general UNIX security model and with that of other platforms.

App sandboxing

The really big news for most developers is that the app sandboxing from iOS is now here. The reason it’s big news is that pretty soon, any app on the Mac App Store will need to sign up to sandboxing: apps that don’t will be rejected. But what is it?

Since 10.5, Mac OS X has included a mandatory access control framework called seatbelt, which enforces restrictions governing what processes can access what features, files and devices on the platform. This is completely orthogonal to the traditional user-based permissions system: even if a process is running in a user account that can use an asset, seatbelt can say no and deny that process access to that asset.

[N.B. There’s a daemon called sandboxd which is part of all this: apparently (thanks @radian) it’s just responsible for logging.]

In 10.5 and 10.6, it was hard for non-Apple processes to adopt the sandbox, and the range of available profiles (canned definitions of what a process can and cannot do) was severely limited. I did create a profile that allowed Cocoa apps to function, but it was very fragile and depended on the private details of the internal profile definition language.
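
To give a flavour of how that worked, here’s a minimal sketch of a process voluntarily adopting one of those canned profiles via the sandbox_init(3) call, using one of the named profile constants from sandbox.h (the error handling is illustrative):

    #include <sandbox.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *error = NULL;
        /* Adopt a canned pre-Lion profile: this one denies all TCP/IP
         * networking but leaves the filesystem alone. */
        if (sandbox_init(kSBXProfileNoInternet, SANDBOX_NAMED, &error) != 0) {
            fprintf(stderr, "sandbox_init failed: %s\n", error);
            sandbox_free_error(error);
            return EXIT_FAILURE;
        }
        /* From here on, seatbelt vetoes any network use, whatever the
         * process's user account would otherwise be allowed to do. */
        return EXIT_SUCCESS;
    }

Notice how orthogonal this is to the user-based permissions described above: the same binary, run by the same user, behaves differently once the profile is in force.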

The sandbox can be put into a trace mode, where it will report any attempt by a process to violate its current sandbox configuration. This trace mode can be used to profile the app’s expected behaviour: a tool called sandbox-simplify then allows construction of a profile that matches the app’s intentions. This is still all secret internal stuff to do with the implementation though; the new hotness as far as developers are concerned starts below.

With 10.7, Apple has introduced a wider range of profiles based on code signing entitlements, which makes it easier for third party applications to sign up to sandbox enforcement. An application project just needs an entitlements.plist indicating opt-in, and it gets a profile suitable for running a Cocoa app: communicating with the window server and pasteboard server, accessing areas of the file system, and so on. Additional flags control access to extra features: the iSight camera, USB devices, users’ media folders and the like.

By default, a sandboxed app on 10.7 gets its own container area on the file system just like an iOS app. This means it has its own Library folder, its own Documents folder, and so on. It can’t see or interfere with the preferences, settings or documents of other apps. Of course, because Mac OS X still plays host to non-sandboxed apps including the Finder and Terminal, you don’t get any assurance that other processes can’t monkey with your files.
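
You can see the container from inside a sandboxed app, because the ordinary path-lookup APIs are quietly redirected to it. A small sketch (the exact container path in the comment is my expectation, not something to rely on):

    #import <Foundation/Foundation.h>

    int main(void)
    {
        @autoreleasepool {
            // In a sandboxed app this is no longer /Users/you, but the
            // container: something like ~/Library/Containers/<bundle id>/Data
            NSLog(@"home: %@", NSHomeDirectory());

            // The usual search-path functions are redirected too, so code
            // that already asks for its Documents folder keeps working:
            NSArray *paths = NSSearchPathForDirectoriesInDomains(
                NSDocumentDirectory, NSUserDomainMask, YES);
            NSLog(@"documents: %@", [paths objectAtIndex:0]);
        }
        return 0;
    }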

What this all means is that apps running as one user are essentially protected from each other by the sandbox: if any one goes rogue or is taken over by an attacker, its effect on the rest of the system is restricted. We’ll come to why this is important shortly in the section “User-based access control is old and busted”, but first: can we save an app from itself?

XPC

Applications often have multiple disparate capabilities from the operating system’s perspective, all of which come together to support a user’s workflow. That is, indeed, the point of software, but it comes at a price: when an attacker can compromise one of an application’s entry points, he gets to misuse all of the other features that app can access.

Of course, mitigating that problem is nothing new. I discussed factoring an application into multiple processes in Professional Cocoa Application Security, using Authorization Services.

New in 10.7, XPC is a nearly fully automatic way to create a factored app. It takes care of the process management, and through the same mechanism as app sandboxing restricts what operating system features each helper process has access to. It even takes care of message dispatch and delivery, so all your app needs to do is send a message over to a helper. XPC will start that helper if necessary, wait for a response and deliver that asynchronously back to the app.
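
Here’s a sketch of what that looks like from the app’s side, using the C API in 10.7’s xpc(3); the service name, message keys and checksumming job are all made up for the example:

    #include <xpc/xpc.h>
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    static void checksum_file(const char *path)
    {
        // The service name would come from the helper bundle's Info.plist;
        // this one is hypothetical.
        xpc_connection_t helper =
            xpc_connection_create("com.example.app.checksummer", NULL);
        xpc_connection_set_event_handler(helper, ^(xpc_object_t event) {
            // Connection-level events (helper exited, connection
            // invalidated and so on) arrive here.
        });
        xpc_connection_resume(helper);

        xpc_object_t message = xpc_dictionary_create(NULL, NULL, 0);
        xpc_dictionary_set_string(message, "path", path);
        xpc_connection_send_message_with_reply(helper, message,
            dispatch_get_main_queue(), ^(xpc_object_t reply) {
                // XPC launched the helper on demand; the reply arrives
                // asynchronously once it has done its work.
                const char *digest = xpc_dictionary_get_string(reply, "sha1");
                if (digest) printf("checksum: %s\n", digest);
            });
        xpc_release(message);
    }

The point of the factoring is that a helper like this needs almost no rights of its own, so compromising the code that parses untrusted input gains an attacker very little.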

So now we have access control within an application. If any part of the app gets compromised—say, the network handling bundle—then it’s harder for the attacker to misuse the rest of the system because he can only send particular messages with specific content out of the XPC bundle, and then only to the host app.

Mac OS X is not the first operating system to provide intra-app access control. .NET allows different assemblies in the same process to have different privileges (for example, a “write files” privilege): code in one assembly can only call out to another if the caller has the privilege it’s trying to use in the callee, or an adapter assembly asserts that the caller is OK to use the callee. The second case could be useful in, for instance, NSUserDefaults: the calling code would need the “change preferences” privilege, which is implemented by writing to a file so an adapter would need to assert that “change preferences” is OK to call “write files”.

OK, so now the good stuff: why is this important?

User-based access control is old and busted

Mac OS X—and for that matter Windows, iOS, and almost all other current operating systems—are based on timesharing operating systems designed for minicomputers (in fact, Digital Equipment Corp’s PDP series computers in almost every case). On those systems, there are multiple users all trying to use the same computer at once, and they must not be able to trip each other up: mess with each others’ files, kill each others’ processes, that sort of thing.

Apart from a few server scenarios, that’s no longer the case. On this iMac, there’s exactly one user: me. However, I have to have two user accounts (the one I’m writing this blog post in, and a member of the admin group), even though there’s only one of me. Apple (or more correctly, software deposited by Apple) has more accounts than me: 75 of them.

The fact is that there are multiple actors on the system, but mapping them on to UNIX-style user accounts doesn’t work so well. I am one actor. Apple is another. In fact, the root account is running code from three different vendors, and “I” am running code from 11 (which are themselves talking to a bunch of network servers, all of which are under the control of a different set of people again).

So it really makes sense to treat “provider of twitter.com HTTP responses” as a different actor to “code supplied as part of Accessorizer” as a different actor to “user at the console” as a different actor to “Apple”. By treating these actors as separate entities with distinct rights to parts of my computer, we get to be more clever about privilege separation and assignment of privileges to actors than we can be in a timesharing-based account scheme.

Sandboxing and XPC combine to give us a partial solution to this treatment, by giving different rights to different apps, and to different components within the same app.

The future

This is not necessarily Apple’s future: it’s the direction in which I see the privilege system described above taking the operating system.

XPC (or something better) for XNU

Kernel extensions—KEXTs—are the most dangerous third-party code that exists on the platform. They run in the same privilege space as the kernel, so can grub over any writable memory in the system and make the computer do more or less anything: even actions that are forbidden to user-mode code running as root are open to KEXTs.

For the last eleventy billion years (or since 10.4 anyway), developers of KEXTs for Mac OS X have had to use the Kernel Programming Interfaces to access kernel functionality. Hopefully, well-designed KEXTs aren’t actually grubbing around in kernel memory: they’re providing I/O Kit classes with known APIs and KAUTH veto functions. That means they could be run in their own tasks, with the KPIs proxied into calls to the kernel. If a KEXT dies or tries something naughty, that’s no longer a kernel panic: the KEXT’s task dies and its device becomes unavailable.

Notice that I’m not talking about a full microkernel approach like real Mach or Minix: just a monolithic kernel with separate tasks for third-party KEXTs. Remember that “Apple’s kernel code” can be one actor and, for example, “Symantec’s kernel code” can be another.

Sandboxing and XPC for privileged processes

Currently, operating system services are protected from the outside world and each other by the 75 user accounts identified earlier. Some daemons also have custom sandboxd profiles, written in the internal-use-only Scheme dialect and located at /usr/share/sandbox.

In fact, the sandbox approach is a better match to the operating system’s intention than the multi-user approach is. There’s only one actor involved, but plenty of pieces of code that have different needs. Just as Microsoft has the SYSTEM account for Windows code, it would make sense for Apple to have a user account for operating system code that can do things Administrator users cannot do; and then a load of factored executables that can only do the things they need.

Automated system curation

This one might worry sysadmins, but just as the Chrome browser updates itself as it needs, so could Mac OS X. With the pieces described above in place, every Mac would be able to identify an “Apple” actor whose responsibility is to curate the operating system tasks, code, and default configuration, and the system could let that actor get on with the job wherever it needs to.

That doesn’t obviate an “Administrator” actor, whose job is to override the system-supplied configuration, enable and configure additional services and provide access to other actors. So sysadmins wouldn’t be completely out of a job.


TDD/unit testing video training for iOS developers

I recently recorded a series of videos on unit testing and test-driven development for iOS developers with Scotty of iDeveloper.tv. The videos and associated source code are now available for purchase and download.


Making computing exciting

Over the last couple of years, I have visited three different museums of computing. NSBBQ in 2009 and 2010 visited the National Museum of Computing at Bletchley Park and the Museum of Computing in Swindon respectively. At this year’s WWDC I got the chance, along with a great group of friends, to visit the Computer History Museum in Mountain View.

While each has its interesting points, each also has its disappointments. My principal problem is this: most of the kit is switched off. Without a supply of electrons and an output device, most computers from my childhood just look like beige typewriters. Earlier computers look like poorly thought out hi-fi equipment, or refrigerators that Stanley Kubrick tarted up to use as props. The way you find out just how much computers have advanced over the last few decades is not by looking at the cases: it’s by using the computers.

If you’re anything like me, you keep track of your finances and tax return figures in Numbers. Now imagine doing it in Visicalc. Better still: try doing it in Visicalc. Or take your iOS app, and implement the core features in Microsoft BASIC (or MC6809 machine code, if you’re feeling hardcore). Write your next blog post in PenDown. It’s this experience that will demonstrate just how primitive even a 15 year old desktop computer feels. And the portables? See if you can lift one!

Of course, complaining is the easy part. Fixing it is harder. Which is why I’m now a volunteer at the Swindon museum of computing, on the team that designs the gallery. My main goal is to make the whole experience more interactive. In the short term, this means designing programming challenges for kids to try out: let’s face it, if we want more children to be interested in programming, we need to make programming more interesting to children. I certainly don’t relish the prospect of becoming a portable brain in a pickle jar just because the next generation doesn’t know any Objective-C.

So it won’t happen overnight, but if I’m at all successful then we should be able to make the museum gallery more interactive, more educational, and more fun. To find out how it’s going, follow @MuseumComputing.


On what Marcus said

This post is a response to Why so serious? over at Cocoa is my Girlfriend. Read that.

Welcome back. OK, so firstly let’s talk about that damned carousel. Kudos to the developer who wrote a nice smoothly scrolling layer-backed image pager, but as Marcus says, that’s not the same as doing a nice smoothly scrolling carousel. Believe me, I’ve taken around one hundred Instruments traces of the carousel. Swirling images around an iPad screen is the least of its concerns.

Now, let’s start looking at the state of the community thing. It’s like an iceberg, or a duck. Or maybe a duck with the proportions of an iceberg. The point is that what you see is a bunch of developers being flown around the world to talk at conferences, plugging their books (the evil capitalist bastards). What you get is a bunch of people who have put their jobs and careers into the background for a while because they learned something cool and want to share it with the class. The 7/8ths of the duck kicking frantically below the ice is people not getting paid to help everyone else do their job as well as they can.

I can’t speak for Marcus’s experience, but I can describe my own. That security book? The one that I’m already planning to replace because there’ll be so much more stuff to talk about after Monday? The one where I know you read chapter one then put it on the shelf until such time as one of the other chapters describes a problem you have? Around nine months of research, study, and staring blankly at an OpenOffice window. During that time, almost all of my coding was either learning about or preparing samples for the content of the book. I got a warm feeling when I saw it in print, but warm feelings don’t pay the rent.

The same, but in smaller writing, for conference talks (one to two weeks of preparation each) and even blog posts (half to two days of preparation each). That’s why I love reading posts from CIMGF, TheoCacao, Mike Ash and others: each new post represents time someone else has taken to make me a better programmer: time they could have billed to a client. By the way, I don’t know whether this is commonly known, but there’s no pay for doing technical talks at iOS developer conferences. The keynote speakers sometimes get paid, the content speakers do not.

OK, so that’s me on my high horse, but we were supposed to be talking about snarking in the community. That happens. My favourite recent example was the one piece of negative feedback I got from a recent conference talk: a page-long missive describing how I’d wasted the person’s time by talking about the subject of my talk rather than the topic they wanted to hear about.

Thing is, there’s a lesson in there. I could have done a better job at either describing the importance of my subject to that attendee, or getting them to leave the room early on in the talk. Could have, but didn’t. Next time, I will. And so that’s great, this commenter told me something I didn’t know before, something I can use to change the way I work.

But that’s not always the case. Sometimes, you look for the lesson and there isn’t one. The tweeter just doesn’t like you. The best way to get past this is to realise that the exchange has been neutral: you got nothing from their feedback, but because they chose to ignore you, you gave them nothing in return either. Maybe that guy does know the topic better than you. Maybe he’s just a blowhard. Either way, you gave nothing, you got nothing: it’s not a loss, it’s a no-score draw.

But then there are the other times. You know what I mean, the dark times. When your amygdala or whatever weird bit of your brain it is responds before your cortex does (I’m no neuroscientist, and I don’t even play one in my armchair), and you get the visceral rage before you get a chance to rationally respond.

There’s one common case that still turns me into a big green hulk of fury, even though I should have got over it years ago. It’s the times when a commentator or talk attendee decides that my entire argument is broken because that person either disagrees with my choice of terminology, or can think of an edge case where my solution can’t be rubber-stamped in.

On the one hand, as software engineers we are used to finding edge cases where the requirements don’t quite seem to fit. On the other hand, as software engineers it is our job to solve these problems and edge cases. If you find a situation at work where a particular set of circumstances causes your app to fail, I’m willing to bet that you consider that a bug and try to find a way to fix that app, then you give the bug fix to your users. I doubt you pull the app from the store and smugly proclaim that your users were idiots for thinking it could solve their problems in the first place.

So apply that same thinking to solutions other people are showing you. If you have to drill down to an edge case to find the problem, then what you’re saying is not that the solution is wrong, but that it’s almost right. Provide not a repudiation but an enhancement, a bug fix if you will. Make the solution better and we’ve all learned something.

Conclusion

Of course, don’t be a dick. Your twitter-wang is not the most important thing in your career, knowledge is. You’re a knowledge worker. The person who got up on that stage, or wrote that post or that book, did it because they found something cool and wanted everyone to benefit. They didn’t make you pay some percentage of your app revenue to use that knowledge, or withhold the knowledge, or supply it exclusively to your competition. They told you something they thought would help.

If it didn’t help, maybe that’s because you know something about the topic that they didn’t. That’s fine, but don’t stop at saying that they’re wrong. That doesn’t help you or them, or anyone else who listened. Understand their position, understand how your knowledge provides a different perspective, then combine the two to make the super-mega-awesome KnowledgeZoid. And now start sharing that.

But don’t expect that just because you’re not being a dick, everyone else will not be a dick. Just try to avoid taking it personally: which is hard, I certainly can’t do it all the time. You took a risk in raising your head above the parapet and trying to get us engineers to change the way we work: the reward for that far outweighs the cost of dealing with detractors.

One more thing

There’s another group of developers, of course. Bigger than the sharers, bigger than the detractors. That’s the group of developers who silently get on with building great things. Please, if you’re in that group, consider heading over to the dev forums or to stack overflow and answering one question. Or adding a paragraph to the cocoadev wiki (or just removing decade-old conversations from the content). We’re all eager to learn from you.


On BizSpark

You’ll remember that recently I reviewed Windows Phone 7 Mango from the perspective of an iOS guy, and actually came back pretty impressed with it.

You’ll also remember that through my company, Fuzzy Aliens Ltd, I offer app security services to mobile app developers. So far, that basically means iOS developers: in addition to being where I have most experience, I have punted around for Android clients and got exactly zero interest.

So I thought it would be useful to offer the same service for WP7. After all, Microsoft knows the bad press associated with having security fail on their platform, so should be welcoming of a security guy adding his biological and technological distinctiveness to their own. Not only that, but there will probably be a lot of line-of-business app developers out there who would appreciate mobile security knowledge.

Now the thing that puts me off is basically the cost. I own exactly one copy of Windows 7, and use the free Visual Studio Express. To meaningfully research and code for Windows Phone 7 I’d need another two Windows licences (£100-£250 each roughly depending on version) and Visual Studio Pro and MSDN (roughly £700), along with at least one handset (£300) and an App Hub membership (£60). Wow. Around £1500 just to dip my toe in untested waters.

Luckily, Microsoft have a plan designed to help. BizSpark ought to give me access to most of the above except the phone, in addition to training. It also promises that MS will put me in touch with potential clients and even investors, and could help with hosting costs for web services. The idea is that Fuzzy Aliens would get this stuff for free for a while, during which MS would help build the business. Then, once FZA “graduates” from the program, I get to keep all the software and MS have a new trusted partner.

Seems like a low-risk way to get into Windows Phone 7, and to grow my business which – while only six months old – is already showing signs that I need to find more clients from somewhere. So I signed up at around 16:15 today.

By 18:13 Microsoft had decided that:

it does not appear that you meet all the eligibility requirements at this time. To enter the program, your startup must be:

  • Actively engaged in development of a software-based product or online service that will form a core piece of its current or intended business,
  • Privately held,
  • In business for less than 3 years, and
  • Less than US $1 million in annual revenue

Well, in fact FZA meets all of those criteria. The basis of its business is secure software, and indeed I am currently (OK, I’m blogging – you see what I mean though) developing such secure software. Indeed I even help out the platform community for free by releasing some of this software here as open source.

The business is fully held by me, and has been operating for nearly six months. I would dearly love to have more than $1M of revenue, but it hasn’t happened yet.

So for whatever reason – though not one they care to tell me about – Microsoft has decided that they don’t want me joining their community. Given that this leaves me free to focus on making the iPhone a safer platform for its users, I don’t yet know which of us has lost out the most.


A Cupertino Yankee in the Court of King Ballmer

This post summarises my opinions of Windows Phone 7 from the Microsoft Tech Day I went to yesterday. There’s a new version of Windows Phone 7 (codenamed Mango) due out in the Autumn, but at the Tech Day the descriptions of the new features were mainly the sorts of things you see in the Microsoft PressPass video below (Silverlight required), the API stuff is going on in a separate event.

I want to provide some context: I first encountered C#, J# and .NET back in around 2002, when I was given a beta of Visual Studio .NET (Rainier) and Windows .NET Server (which later became Windows Server 2003). Since then, of course most of my programming work has been on Objective-C and Java, on a variety of UNIX platforms but mainly Mac OS X and iOS. But I’ve kept the smallest edge of a toe in the .NET world, too.

From the perspective of the phone, however, I really am coming to this as an iOS guy. Almost all of the mobile work I’ve done has been on iOS, with a small amount of Android thrown into the mix. I’ve used a WP7 phone a couple of times, but have no experience programming on WP7 or earlier Windows Mobile platforms.

The UI

The speakers at the Tech Day – a mix of Microsoft developer relations and third-party MVPs – brought as much focus on user experience and visual impact of WP7 apps as you’ll find at any Apple event. Windows Phone uses a very obviously distinctive UI pattern called Metro, which you can see in the demo screencasts, or the Cocktail Flow app.

Metro is almost diametrically opposed to the user experience on iOS. Rather than try to make apps look like physical objects with leather trim and wooden panels, WP7 apps do away with almost all chrome and put the data front and centre (and, if we’re honest, sides and edges too). Many controls are implicit, encouraging the user to interact with their data and using subtle iconography to provide additional guidance. Buttons and tiles are far from photorealistic, they’re mainly understated coloured squares. Users are not interacting with apps, they’re interacting with content so if an app can provide relevant functionality on data from another app, that’s encouraged. A good example is the augmented search results demoed in the above video, where apps can inspect a user’s search terms and provide their own content to the results.

In fact, that part of the video shows one of the most striking examples of the Metro user interface: the panorama view. While technologically this is something akin to a paginated scroll view or a navigation controller, it’s the visual execution that makes it interesting.

Instead of showing a scroll thumb or a page indicator, the panorama just allows the title of the next page to sneak into the page you’re currently looking at, giving the impression that it’s over there, and if you swipe to it you can find it. When the user goes to the next page, a nice parallax scroll moves the data across by a page but the title by only enough to leave the edges of the previous and next titles showing.
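
If you wanted to approximate that effect in an iOS app, it’s only a few lines in a scroll-view delegate; the 0.35 ratio and the titleStrip view here are my own inventions for the sketch:

    // Sketch: a parallax title strip over a paging UIScrollView. The strip
    // scrolls at a fraction of the content's rate, so the edges of the
    // previous and next titles stay visible at the sides.
    - (void)scrollViewDidScroll:(UIScrollView *)scrollView
    {
        CGFloat ratio = 0.35f; // title strip moves at 35% of content speed
        CGRect frame = self.titleStrip.frame;
        frame.origin.x = -scrollView.contentOffset.x * ratio;
        self.titleStrip.frame = frame;
    }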

The Tools

It’s neither a secret nor a surprise that Microsoft’s developer tools team is much bigger than Apple’s, and that their tools are more feature-rich as a result (give or take some ancient missteps like MSTest and Visual SourceSafe). But the phone is a comparatively new step: WP7 is under a year old, but of course Windows Mobile and Compact Editions are much older. So how have Microsoft coped with that?

Well, just as Apple chose to use their existing Cocoa and Objective-C as the basis of the iOS SDK, Microsoft have gone with .NET Compact Framework, Silverlight and XNA. That means that they get tools that already support the platform well, because they’re the same tools that people are using to write desktop, “rich internet” and Xbox Live applications.

From the view-construction perspective, XAML reminds me a lot more of WebObjects Builder than Interface Builder. Both offer drag-and-drop view positioning and configuration that’s backed by an XML file, but in Visual Studio it’s easy to get precise configuration by editing the XML directly, just as WebObjects developers can edit the HTML. One of the other reasons it reminds me of WebObjects is that Data Bindings (yes, Windows Phone has bindings…) seems to be much more like WebObjects bindings than Cocoa Bindings.

Custom classes work much better in the XAML tools than in IB. IB plugins have always been a complete ‘mare to set up, poorly documented, and don’t even work in the Xcode 4 XIB editor. The XAML approach is similar to IB’s in that it actually instantiates real objects, but it’s very easy to create mock data sources or drivers for UI objects so that designers can see what the app looks like populated with data or on a simulated slow connection.

Speaking of designers, an interesting tool that has no parallel on the iPhone side is Expression Blend, a XAML-editing tool for designers. You can have the designer working on the same files as the developer, importing Photoshop files to place graphics directly into the app project.

It’d be really nice to have something similar on iPhone. All too often I have spent loads of time on a project where the UI is specified as a Photoshop wireframe or some other graphic provided by a web designer, and I’m supposed to customise all the views to get pixel-perfection with these wireframes. With Blend, the designer can waste his time doing that instead :).

Other tools highlights include:

  • Runtime-configurable debugging output on both phone and emulator, including frame rates, graphics cache miss information, and Quartz Debug-style flashes of updated screen regions
  • The emulator supports virtual accelerometers
  • The emulator supports developer-supplied fake location information and even test drivers generating location updates <3

The biggest missing piece seems to be a holistic debugging app like Apple’s Instruments. Instruments has proved useful for both bug fixing and performance analysis, and it’s pretty much become a necessary part of my iOS and Mac development work.

Update: I’m told by @kellabyte that an Instruments-like tool is coming as part of the Mango SDK, and that this was announced at MIX ’11.

The “ecosystem”

A couple of the demos shown yesterday demonstrated phone apps talking to Azure cloud services, ASP.NET hosted web apps (mainly using the RESTful OData protocol), SOAP services etc. Because there’s .NET on both sides of the fence, it’s very easy to share model code between the phone app and the server app.

That’s something Apple lacks. While Cocoa can indeed be used on Mac OS X Server, if you want to do anything server-side you have to either hope you can find some open-source components or frameworks, or switch to some other technology like Ruby or PHP. While Apple ship that stuff, it’s hard to claim that they’re offering an integrated way to develop apps on iOS that talk to Apple servers in the same way that MS do.

To the extent that WebObjects can still be said to exist, it doesn’t really fill this gap either. Yes, it means that Apple provide a way to do dynamic web applications: but you can’t use Apple’s tools (use Eclipse and WOLips instead), you can’t share code between iOS and WO (iOS doesn’t have Java, and WO hasn’t had ObjC in a long time), and you can just about share data if you want to use property lists as your interchange format.

On the other hand, it’s much easier to distribute the same app on both iPhone and iPad than it would be to do so on WP7 and a Microsoft tablet/slate, because their official line is still that Windows 7 is their supported slate OS. I expect that to change once the Nokia handset thing has shaken out, but making a Silverlight tablet app is more akin to writing a Mac app than porting an iOS app.

The market

This is currently the weakest part, IMO, of the whole Windows Phone 7 deal, though it is clear that MS have put some thought and resources behind trying to address the problems. Given that Windows Phone 7 was such a late response to the iPhone and Android, Microsoft need to convince developers to write on the platform and users to adopt the platform. The problem is, users are driven to use apps, so without any developers there won’t be any users: without any users, there’s no profit on the platform so there are no developers.

Well, not no developers. Apparently the 17,000 apps on the marketplace were written by 7,500 of the 42,000 registered developers (and interestingly the UK has one of the lowest ratios of submitted apps to registered developers). By comparison, there are 500,000 apps on the App Store.

Microsoft has clearly analysed the bejeesus out of the way their users interact with the marketplace. They have seen, for instance, that MO billing (essentially having your phone operator add your app purchase costs to your phone bill, rather than having a credit card account on the marketplace itself) increases purchase rates of apps by 5 times, and are working (particularly through Nokia of course who already have these arrangements) to add MO billing in as many marketplace countries as they can.

This makes sense. People already have a payment relationship with their network operators, so it’s easier for them to add a few quid to their phone bill than it is to create a new paying account with the Windows Marketplace and give Microsoft their card details. By analogy, iPhone users already have bought stuff from Apple (an iPhone, usually…and often some music) so throwing some extra coin their way for apps is easy. Incidentally, I think this is why the Android app market isn’t very successful: people see Google as that free stuff company so setting up a Checkout account to buy apps involves an activation energy.

Incidentally, some other stats from the app marketplace: 12 downloads per user per month (which seems high to me), 3.2% of all apps downloaded are paid, and the average price of a bought app is a shave under $3. Assuming around 3 million users worldwide (a very rough number based on MS and analyst announcements), that would mean a total of around $3.5M in app sales worldwide per month. That’s nowhere near what’s going on on the iPhone, so to get any appreciable amount of cash out of it you’d better have an app that appeals to all of the platform’s users.
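
For what it’s worth, here’s the back-of-envelope sum behind that figure (every input is the rounded estimate above, so treat the result as order-of-magnitude only):

    3\,\text{M users} \times 12\ \text{downloads/user/month} = 36\,\text{M downloads/month}
    36\,\text{M} \times 3.2\%\ \text{paid} \approx 1.15\,\text{M paid downloads/month}
    1.15\,\text{M} \times \$3 \approx \$3.5\,\text{M/month}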

The subject of appeal is a big issue, too. Microsoft aren’t really targeting the WP7 at anyone in particular, just people who want a smartphone. With Mango and Nokia handsets, they’ll be targeting people who want a cheaper smartphone than RIM/Apple/Android offers: I bet that brings down that $3 mean app price. This is, in my opinion, a mistake. Microsoft should play to their strengths, and make a generally-useful device but target it at particular groups who Microsoft can support particularly well.

Who are those groups? Well, I think there’s Xbox 360 gamers, as WP7 has Xbox Live integration; and there’s enterprises with custom app needs, due to the integration with Azure and similarity with ASP.NET. It ought to be cheaper for a game shop that’s written an Xbox game to do a WP7 tie-in than an iOS tie-in. It ought to be cheaper for an enterprise with an MS IT department to extend their line-of-business apps onto WP7 than onto Blackberry. Therefore MS should court the crap out of those developers and make the WP7 the go-to device for those people, rather than just saying “buy this instead of an iPhone or Android”.

The reason I’d do it that way is that you bring the users and the developers together on shared themes, so you increase the chance that any one app is useful or relevant to any given customer and therefore increase the likelihood that they pay. Once you’ve got gamers and game devs together, for example, the gamers will want to do other things and so there’ll be a need for developers of other classes of app. I call using Xbox games to sell utilities the Halo Effect.

Conclusion

Windows Phone 7 is a well thought out mobile platform, with some interesting user experience and design. It’s got a good development environment, that’s highly consistent with the rest of the Microsoft development platform. However, no matter how easy and enjoyable you make writing apps, ultimately there needs to be someone to sell them to. Microsoft don’t have a whole lot of users on their platform, and they’re clearly banking on Nokia bringing their huge brand to beef up sales. They should be making the platform better for some people, and then getting those people on to the platform to make WP7 phones aspirational. They aren’t, so we just have to wait and see what happens with Nokia.

Footnote: Nokisoft

Nokia do well everywhere that Microsoft doesn’t, by which I mean Europe and China mainly. Particularly China actually, where the Ovi store is pretty big. Conversely, MS phone isn’t doing too badly in America, where Nokia traditionally are almost unheard of. So on paper, the Nokia deal should be a good thing for both companies.


On the top 5 iOS appsec issues

Nearly 13 months ago, the Intrepidus Group published their top 5 iPhone application development security issues. Two of them are valid issues, the other three they should perhaps have thought longer over.

The good

Sensitive data unprotected at rest

Secure communications to servers

Yes, indeed, if you’re storing data on a losable device then you need to protect the data from being lost, and if you’re retrieving that data from elsewhere then you need to ensure you don’t give it away while you’re transporting it.
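
On the data-at-rest point, iOS 4’s data protection does most of the work if you remember to ask for it. A minimal sketch (the file name and the notion of a stored token are mine):

    #import <Foundation/Foundation.h>

    // Sketch: write sensitive bytes so the kernel keeps them encrypted
    // whenever the device is locked.
    BOOL saveToken(NSData *token)
    {
        NSString *path = [NSHomeDirectory()
            stringByAppendingPathComponent:@"Documents/token.dat"];
        NSError *error = nil;
        // With NSDataWritingFileProtectionComplete (iOS 4 and later), the
        // file's encryption key is evicted when the device locks, so the
        // contents stay unreadable even if the flash storage is imaged.
        return [token writeToFile:path
                          options:NSDataWritingFileProtectionComplete
                            error:&error];
    }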

Something I see a bit too often is people turning off SSL certificate validation while they’re dealing with their test servers, and forgetting to turn it on in production.
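
If you must talk to a self-signed test server, at least fence the exception so it cannot survive into a release build. One way to do that with the NSURLConnection delegate methods of the day, assuming your debug configuration defines a DEBUG macro:

    // Only offer to handle server-trust challenges in debug builds; in
    // release builds, returning NO leaves certificate validation to the
    // system's default, strict behaviour.
    - (BOOL)connection:(NSURLConnection *)connection
    canAuthenticateAgainstProtectionSpace:(NSURLProtectionSpace *)space
    {
    #if DEBUG
        return [space.authenticationMethod
                   isEqualToString:NSURLAuthenticationMethodServerTrust];
    #else
        return NO;
    #endif
    }

    // Reached only for server trust in debug builds (see above): accept
    // the test server's self-signed certificate.
    - (void)connection:(NSURLConnection *)connection
    didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
    {
        [challenge.sender useCredential:
            [NSURLCredential credentialForTrust:
                challenge.protectionSpace.serverTrust]
              forAuthenticationChallenge:challenge];
    }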

The bad

Buffer overflows and other C programming issues

You can indeed crash an app this way, but I’ve yet to see evidence that you can exploit an iOS app through any old buffer overflow, given the stack guards, restrictive sandbox, address-space layout randomisation and other mitigations. Targeted attacks do occasionally succeed, so I would have preferred it if they’d been specific about which problems they think exist and what devs can do to address them.

Patching your application

Erm, no. Just get it right. If there are fast-moving parts that need to change frequently, extract them from the app and put them in a hosted component.
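
As a minimal sketch of that approach (the endpoint and the Rules structure are hypothetical): ship the slow-moving app once, and have it fetch its fast-moving parts as data.

```swift
import Foundation

// Sketch: keep fast-moving rules on a server you control and fetch them
// as data at launch, so changing them never means patching the binary.
// The URL and the Rules type are hypothetical placeholders.
struct Rules: Decodable {
    let minimumSupportedVersion: String
    let promotionalMessage: String?
}

func fetchRules(completion: @escaping (Rules?) -> Void) {
    let url = URL(string: "https://example.com/api/rules.json")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        // nil means "use sensible built-in defaults" rather than failing.
        completion(data.flatMap { try? JSONDecoder().decode(Rules.self, from: $0) })
    }.resume()
}
```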

The platform itself

To quote Scott Pack in “The DMZ”: “If you can’t trust your users to implement your security plan, then your security plan must work without their involvement.” In other words, if you have a problem and the answer is to train 110 million people, then you have two problems.


“Patently” secure

One thing that occasionally becomes interesting about working in security is how much overlap there is between doing security and managing a business. This makes a lot of sense: a business wants to be profitable, and profit is a reward conferred by the market for taking on some risk. But too much risk can expose your business to undesirable failures, so understanding and controlling your exposure to risk is a useful exercise.

Well, that’s fundamentally how security works too. There is some reward to be gained by performing the activity allowed by an app: that might be the enjoyment of playing a game, the cost savings of keeping track of your finances, or the health benefits of seeing what food you consume. But using the app also brings some risk, and so security people seek to quantify and reduce the risk inherent in any app.

I’m going to compare a business risk (infringing on another’s patent) with an information security risk (leaking confidential data) to show just how similar these fields are. I choose patent infringement because it’s an apposite case; however, you’ll find that I don’t name particular patents or companies, for reasons I’ll come to below. Suffice it to say that I have dealt with software patent lawyers in the past and have some – but not much, by any means – experience of how the US patent system operates. If you choose to infer any advice from this blog post, please seek appropriate counsel before acting on it: I am not a lawyer, and I am certainly not your lawyer.

Quantification

A risk, to either a business or a user, can be summed up as the expected damage of the event coming to pass: that is, the estimated cost (financial, emotional, intangible and so on) of the risky event multiplied by the estimated probability of that event happening.
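
In code form that’s a one-liner, but writing it down keeps the two estimated parameters honest (the figures in the usage example are purely illustrative):

```swift
// Expected damage: the probability of the risky event (per year, say)
// multiplied by the cost incurred if it happens. Both parameters are
// estimates, usually with very wide error bars.
func expectedAnnualLoss(probabilityPerYear: Double, cost: Double) -> Double {
    return probabilityPerYear * cost
}

// Purely illustrative: a 1-in-20 chance each year of a $200,000 loss
let exposure = expectedAnnualLoss(probabilityPerYear: 0.05, cost: 200_000)
// exposure == 10_000.0, i.e. a $10,000 expected loss per year
```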

In the leaky data case, the expected damage would be “what chance is there that an attacker will retrieve the data?” × “what is the impact to the user of exposing the data?”. Both of these parameters are hard to quantify: information about data security breaches is notoriously hard to get hold of, because companies are reluctant to talk about problems they’ve had protecting their customer records. Combine that with the fact that in many fields even the direct costs of a breach are hard to arrive at, and you end up multiplying two very big error bars together.

In the infringement case, things are a bit more straightforward. Legal reports are – in many jurisdictions – a matter of public record, so seeing what the damage of a case “like yours” is going to be is quite easy. That covers direct costs, anyway: indirect costs like lost custom and damaged reputation are harder to arrive at. The likelihood of being caught infringing a patent holder’s rights is harder to estimate, but I expect it’s not beyond the realms of reason.

Mitigation

There are a few different approaches to reducing (mitigating) the risk involved, which address either the likelihood or the expected cost of the impact. Let’s look at them. You don’t have to choose any one approach: a successful strategy may combine tactics from each of these categories, and even use more than one tactic from the same category.

Withdrawal

Remove any likelihood and impact of a risky event occurring by refusing to participate in the risky activity. In the confidentiality case, this means not storing the secrets in the first place. In the patent case, it means not using the potentially infringing invention.

In either case, withdrawing from the activity certainly mitigates the risk very reliably, but it also means forgoing any possibility of gaining the reward associated with participation.

This is why, going back to an earlier point, I don’t comment on particular patent cases. Given that patent rights asserters are, in my opinion, more litigious than I am, there’s a chance that if I talk about a particular case, something I say will be considered defamatory. I’d rather avoid that risk, and I choose to control it by withdrawing from talking about the cases.

Transference

You can opt to transfer the risk to another party, usually for a fee: this basically means taking out insurance. In either of our case studies, look for insurance that protects against the damage incurred. This doesn’t affect the probability that our risky event will come to pass, but means that someone else is liable for the damages.

Employing Countermeasures

Finding some technical or process approach to reduce the risk. In the patent case this is simple: the countermeasure to “sued by patent holder” is “license patent”.

In the confidentiality case, this means technical countermeasures: access control, cryptography and the like.

But think about deploying these countermeasures: you’ve now made your business or your application a bit more complex. Have you introduced new risks? Have you increased the potential damage from some risks by reducing others? And, of course, is your countermeasure cost-effective? The traditional security mantra is “don’t spend $1000 to save $100”: don’t license $1000 of patents to protect a $100 product, and don’t implement $1000 of crypto to hide $100 of data.

Acceptance

The “suck it up” approach to security: accept that the risk exists and that you may be liable for the damage if it ever comes to pass. In our information security case, this means storing the data and accepting that someone else might be able to read it. In our patent case, this means adopting the potentially-infringing invention and accepting that the patent holder might come a-knocking.

All risk mitigation strategies, apart from withdrawal, involve a certain amount of acceptance. Imagine that you pay some insurance premium which indemnifies you up to $10M, with an excess of $1000. You have to choose whether you accept the residual risk: the excess, plus any damages beyond the $10M cap.

Similarly, in the information security case, let’s say you have data assets which, if leaked, would cost $1M in damages. You implement a particular cryptographic technique that reduces the likelihood of leaking the data from an estimated once per year to an estimated once per thousand years: an expected annual loss of $1M × 1/1000, or $1000. Again, do you accept that remaining $1000?
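
Running those illustrative numbers through the same expected-loss arithmetic:

```swift
// Illustrative figures from the example above.
let costOfLeak = 1_000_000.0                   // damage if the data leaks

let withoutCrypto = 1.0 * costOfLeak           // once per year: $1M expected loss/year
let withCrypto = (1.0 / 1000.0) * costOfLeak   // once per 1000 years: $1,000/year

// The countermeasure is worth at most the difference (~$999,000/year);
// the $1,000 left over is the residual risk you accept, transfer, or
// mitigate some other way.
```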

Conclusion

Information security and business management are actually pretty closely related. It’s just that information security requires specialised technical knowledge: and that’s where I come in ;-).
