On Windows 8

Right from the beginning, you have to accept that this analysis is based on the presentation of Windows 8 shown at the //build/windows conference. I’ve watched the presentation and I’m downloading the developer preview, but I’m over an hour away from even discovering whether I have anything I can install it on.

My biggest concern

The thing I was most worried Windows 8 would bring was yet another chance for Microsoft to flub their developer experience. If you think about it, Microsoft’s future now depends on their third-party developers more than it has at any time since the 1980s. Whereas they used to be able to rely on OEM licensing deals (and strongarming) to ensure that people wanted to use their stuff, that grip is starting to wane.

Sure, the main desktop Windows-NT-plus-Active-Directory-plus-Office-plus-Exchange core is still going strong. But look at the web stack: servers are IIS or Apache or one of various other servers that support Rails, PHP and so on. They’re hosted on my Wintel boxen or my Lintel boxen or one of various third-party choices (including Microsoft’s own Azure). The client is a browser (based on anything) or a native mobile app on Windows Phone 7 or one of a handful of other native app platforms, including iOS and Android.

Where Microsoft have the potential to win over all of their competitors is in providing a consistent developer experience across the whole of their stack. This isn’t about locking developers in; it’s about making it easiest for them to do Microsoft end-to-end.

Consider Apple for a moment. They have a very consistent experience for writing desktop apps and mobile apps, with Cocoa and Cocoa Touch. Do you want to write server software? You can do it on Apple kit, but not with Apple tools. You’ll probably use Rails or another open source UNIX-based stack, at which point you may as well deploy it to someone else’s kit, maybe someone who offers rackmount hardware. You can use iCloud on Apple’s terms for distributed storage, but again if you want generic cloud computing or a different approach to cloud storage, you’re on your own.

On the other hand, Microsoft offers a complete stack. Azure, Windows Server, WPF (desktop client) and Silverlight (browser and phone) all have the same or very similar APIs in the same languages, so it’s easy to migrate between and share code across multiple layers in a complete computing environment.

But, would they do that? The one place where Microsoft itself seems to have failed to recognise its own strength is in the Windows division. They “pulled a Solaris”, and have consistently ignored .NET ever since its inception, in just the same way that the Solaris Operating Environment team in Sun ignored Java. That leads developers to think there must be something wrong with the tech if real developers inside the company who made it don’t want to use it.

So when Microsoft (or rather, Steve Ballmer) announced that the Windows 8 SDK would be JavaScript and HTML5, it made me think that the Windows 8 team might have jumped the shark. That stuff isn’t compatible with C# and XAML, or with the .NET APIs. I thought they’d got themselves into the position where their phone is Silverlight, their desktop/tablet is HTML5 (except legacy stuff, which is Win32), their server is .NET… ugh.

So what happened?

Well, in a nutshell, they didn’t do that. The need for a new runtime makes sense because the .NET runtime was only ever a bolt-on on top of Windows, leading to interesting compatibility issues (to run App version x you need .NET version y and Windows version z; but OtherApp version a wants .NET version b). But the fact that the new HTML5 stuff is actually layered on top of the same runtime, and provides access to the same APIs as are available in C#, VB and C++ makes me believe that Microsoft may actually understand that their success depends on developers adopting their entire stack.

I’ll make this point clear now: HTML5 is not the future of app development. Its strength is based on a couple of things: every device needs a high-performance HTML engine anyway in order to provide web browsing; and HTML5 should work across multiple devices. Well, as a third-party developer, the first of those points is not my problem, it’s Microsoft’s problem, or Apple’s, or RIM’s. I don’t care that they get to re-use the same HTML renderer in multiple contexts.

The second of those points will quickly disintegrate when every platform vendor is trying to differentiate itself and provide an HTML5 development environment. Did you see that Expression Blend for HTML demo? That’s a very cool tool, but the demo of using “standard HTML5” relied on a vendor-prefixed keyword, “-ms-grid”. Wait, MS? Is that cross-platform HTML? Also observe that the JS needs to call into the WinRT API, so this really isn’t a cross-platform solution.

No, for me the big story is that you can use C# everywhere: currently with .NET in most places and WinRT on the desktop. Any client application can use XAML for its user interface. That’s big for ecosystem-wide development: there’s one thing I need to learn in order to do mobile, tablet, desktop, server and cloud programming. It’s slightly weird that the namespaces on .NET and WinRT are different, and it would be a good strategic move to support the new namespaces in newer versions of Silverlight (i.e. in Windows Phone 8).

What else happened?

I’m not going to talk much about Metro: it’s a well-designed UI that works well. I’m not sure yet how it works if you’re interacting through a mouse or trackpad. I spoke about it before when discussing Windows Phone 7, where I also expressed my belief that sharing the developer experience across desktop and phone was a good move.

What I will point out is that the Windows team no longer think that they can do what they want and people will buy it. They’re noticing that the competition exists, and that they need to do better than the competition. In this case, “the competition” is Apple, but apparently not Google.

Why do I think that? Sinofsky felt comfortable making a joke about a “Chrome-free browser experience”, but the iPad was the elephant in the room. Tablets/slates/whatever were summarised as “other platforms”, although you feel that if Microsoft had a point to score over Android they would have mentioned it specifically.

Conclusions

This means that – perhaps with the exception of Office – Microsoft’s complacency is officially behind it. Sure, Windows 7 and Windows Phone 7 were indications that Microsoft noticed people talking about them falling behind. But now they’ve started to show a strategy that indicates they intend to innovate their way back to the front.

While it’s currently developer preview demoware, Windows 8 looks fast. It also looks different. They’ve chosen to respond to Apple’s touchscreen mobile devices by doing touchscreen everywhere, and to eschew skeuomorphism in favour of a very abstract user interface. Importantly, the APIs Windows itself uses are now the same ones they’re getting third-party developers to use, and are the same as (or very similar to) the APIs on the rest of their platforms.


Don’t be a dick

In a recent post on device identifiers, I wrote a guideline that I’ve previously invoked when it comes to sharing user data. Here, in a form both more succinct and more complete than in the post linked above, is the Don’t Be A Dick Guide to Data Privacy:

  • The only things you are entitled to know are those things that the user told you.
  • The only things you are entitled to share are those things that the user permitted you to share.
  • The only entities with which you may share are those entities with which the user permitted you to share.
  • The only reason for sharing a user’s things is that the user wants to do something that requires sharing those things.

It’s simple, which makes for a good user experience. It’s explicit, which means culturally-situated ideas of acceptable implicit sharing do not muddy the issue.

It’s also general. One problem I’ve seen with privacy discussions is that different people have specific ideas of what the absolutely biggest privacy issue that must be solved now is. For many people, it’s location: they don’t like the idea that an organisation (public or private) can see where they are at any time. For others, it’s unique identifiers that would allow an entity to form an aggregate view of their data across multiple functions. For others, it’s conversations they have with their boss, mistress, whistle-blower or others.

Because the DBADG mentions none of these, it covers all of these. And more. Who knows what sensors and capabilities will exist in future smartphone kit? They might use mesh networks that can accurately position users in a crowd with respect to other members. They could include automatic person recognition to alert you when your friends are nearby. A handset might include a blood sugar monitor. The fact is that by not singling out any particular form of data, the above guideline covers all of these and any others that I didn’t think of.

There’s one thing it doesn’t address: just because a user wants to share something, should the app allow it? This is particularly a question that makers of apps for children should ask themselves. Children (and everybody else) deserve the default-private treatment of their data that the DBADG promotes. However, children also deserve impartial guidance on what it is a good or a bad idea to share with the interwebs at large, and that should be baked into the app experience. “Please check with a responsible adult before pressing this button” does not cut it: just don’t give them the button.


So you don’t like your IDE

There are many different tools for writing Objective-C code, though of course many people never stray much beyond the default that’s provided by their OS vendor. Here are some of the alternatives I’ve used: this isn’t an in-depth review of any tool, but if you’re looking for different ways to write your code, these are tools to check out. If you know of more, please add to the comments.

In most cases, if you’re writing Mac or iOS apps, you’ll need the Xcode tools package installed anyway in order to provide the SDK frameworks (the libraries and headers you’ll be linking against). The tools basically provide a different user interface to the same development experience.

A word on IDEs. The writers of IDEs have an uphill struggle, in that every user of an IDE assumes that they are a power user, though in fact there is a wide range of abilities and uses to which IDEs are bent. My experience with Eclipse, for example, ranges from learning Fortran 77 to developing a different IDE, in Java and Jython, using the same app. Visual Studio is used to write single-screen line-of-business apps in Visual Basic, and also to write Windows itself. Different strokes for different folks, and yet most IDEs attempt to accommodate all of these developers.

CodeRunner

CodeRunner is not a full IDE; rather, it’s a tool for testing out bits of code. You get a syntax-highlighting, auto-completing editor that supports a few different languages, and the ability to quickly type some code in and hit “Run”. Much simpler than setting up a test project in an IDE, and only a handful of denarii from the App Store.

CodeRunner.app

AppCode

AppCode is the engine from well-respected IDE IntelliJ IDEA, with Objective-C language support. Its language support is actually very good, with better (i.e. more likely to work) refactoring capabilities than Xcode, useful templates for creating new classes that conform to protocols with default implementations of required methods, and quick (and useful!) navigation categories.

AppCode uses Xcode project files and workspaces, and uses xcodebuild as its build system so it’s completely compatible with Xcode. It doesn’t support XIBs, and just launches Xcode when you try to edit those files.

It’s also important to notice that AppCode is currently available under an “early access” program, with expiring licences that force upgrade to newer builds. There’s no indication of when AppCode will actually be released, and what it will cost when it gets there. For comparison, the community edition of IDEA is free while the “Ultimate” edition costs anything up to £367, depending on circumstances.

AppCode.app

Emacs

Emacs’s Objective-C mode is pretty good, and it can use ctags to build a cross-file symbol table to provide jump-to-definition like a “real” IDE. In fact, it would be fair to say that Emacs is a real IDE: you can interact with version control directly, do builds (by default this uses Make, though you can easily tell it to run xcodebuild instead), run your app through gdb, and other goodies.

That said, I think it’s fair to say that Emacs is only a powerful tool in the hands of those who are comfortable with Emacs. It’s only because I’ve been using it forever (I actually started with MicroEmacs on AmigaOS, and wrote LISP modes in my Uni days) that I even consider it an option. It’s nothing like an app on any other system: Cocoa’s field editor supports a subset of Emacs shortcuts, but otherwise Emacs feels nothing like a Mac app, a UNIX app, a Windows app, or anything else. (Before the snarky comments come in, vim is just as odd.)

Emacs.app

Conclusion

There are more ways out there to write your Objective-C code than just Xcode. These are a few of them: find one that works well for you.


On device identifiers

Note: as ever, this blog refrains from commenting on speculation regarding undisclosed product innovations from device providers. This post is about the concept of tracking users via a device identifier. You might find the discussion useful in considering future product directions; that’s fine.

Keeping records of what users are up to has its benefits. As a security boffin, I’m fully aware of the benefits of good auditing: discovering what users (and others) have (or haven’t) done to a valuable system. It also lets developers find out how users are getting on with their application: whether they ignore particular features, or have trouble deciding what to do at points in the app’s workflow.

Indeed, sometimes users actively enjoy having their behaviour tracked. Every browser has a history feature; many games let players see what they’ve achieved and compare it with others. Location games would be pretty pointless if they could only tell me where I am now, not tell the story of where I’ve been.

A whole bunch of companies package APIs for tracking user behaviour in smartphone devices. These are the Analytics companies. To paraphrase Scott Adams: analytics is derived from the root word “anal”, and the Greek “lytics” meaning “to pull a business model from”. What they give developers for free is the API to put analytics services in their apps, and the tools to draw useful conclusions from these data.

This is where the fun begins. Imagine that the analytics company uses some material derived from a device identifier (a UDID, IMEI, or some other hardware key) as the database key to associate particular events with users. Now, if the same user uses multiple apps even by different developers on the same device, and they all use that analytics API, then that analytics provider can aggregate the data across all of the apps and build up a bigger picture of that user’s behaviour.
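To make that mechanism concrete, here is a minimal Objective-C sketch of the kind of call such an SDK might expose. The FAAnalytics class is entirely hypothetical; the only real API here is -[UIDevice uniqueIdentifier], the UDID accessor iOS offered at the time.

    #import <UIKit/UIKit.h>

    // Hypothetical analytics SDK: the vendor keys every event it receives on a
    // value derived from the device identifier.
    @interface FAAnalytics : NSObject
    + (void)logEvent:(NSString *)eventName withDeviceKey:(NSString *)deviceKey;
    @end

    @implementation FAAnalytics
    + (void)logEvent:(NSString *)eventName withDeviceKey:(NSString *)deviceKey
    {
        // A real SDK would post this to the vendor's servers; logging stands in here.
        NSLog(@"event %@ from device %@", eventName, deviceKey);
    }
    @end

    static void LogPurchaseEvent(void)
    {
        // The same UDID comes back in every app on this device, so two apps from
        // different developers that embed the same SDK hand the vendor a shared join key.
        NSString *udid = [[UIDevice currentDevice] uniqueIdentifier];
        [FAAnalytics logEvent:@"purchase-completed" withDeviceKey:udid];
    }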

If only one of the apps records the user’s name as part of its analytics, then the analytics company – a company with whom the user has no relationship – gets to associate all of that behaviour with the user’s real name. So, of course, do that company’s customers: remember that the users and developers are provided with their stuff for free, and that businesses have limited tendency toward altruism. The value in an analytics company is their database, so of course they sell that database to those who will buy it: like advertisers, but again, companies with whom the user has no direct relationship.

People tend to be uneasy about invisible or unknown sharing of their information, particularly when the scope or consequences of such sharing are not obvious up front[*]. The level of identifiable[**] information and scope of data represented by a cross-app analysis of a smartphone user’s behaviour – whether aggregated via the model described above or other means – is downright stalker-ish, and will make users uncomfortable.

One can imagine a scenario where smartphone providers try not to make their users uncomfortable: after all, they are the providers’ bread and butter. So they don’t give developers access to such a “primary key” as has been described here. Developers would then be stuck with generating identifiers inside their apps, so tracking a single user inside a single app would work but it would be impossible to aggregate the data across multiple apps, or the same app across multiple devices.

Unless, of course, the developer can coerce the user into associating all of those different identifiers with some shared identifier, such as a network service username. But how do you get users to sign up for network services? By ensuring that the service has value for the user. Look at game networks that do associate user activity across multiple apps, like OpenFeint and Game Center: they work because players like seeing what games their friends are playing, and sharing their achievements with other people.
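For contrast, here is roughly what opting in to a Game Center identity looks like in Objective-C. The GameKit calls are the real iOS 4.x API; the point is that the developer only sees a stable, cross-app playerID after the user has chosen to sign in to a service they find valuable.

    #import <GameKit/GameKit.h>

    // Ask the local player to sign in to Game Center. Only if they agree does the
    // app see a persistent identifier (playerID) that is stable across every
    // Game Center-enabled app on every device the player uses.
    static void AuthenticateLocalPlayer(void)
    {
        GKLocalPlayer *player = [GKLocalPlayer localPlayer];
        [player authenticateWithCompletionHandler:^(NSError *error) {
            if (player.authenticated) {
                NSLog(@"signed in as %@", player.playerID);
            } else {
                // The user declined, or something failed: no shared identifier.
                NSLog(@"no Game Center identity: %@", error);
            }
        }];
    }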

The conclusion is, in the no-device-identifier world, it’s still possible to aggregate user behaviour, but only if you exchange that ability for something that the user values. Seems like a fair deal.

[*] My “don’t be a dick” guide to data privacy takes into account the fact that people like sharing information via online services such as Facebook, foursquare etc. but that they want to do so on their own terms. It goes like this: you have no right to anything except what the user told you. You have no right to share anything except what the user told you to share; they will tell you who you may share it with, and can change their minds. The user has a right to know what they’re telling you and how you’re sharing it.

[**] Given UK-sized postal zones, your surname and postcode are sufficient to uniquely identify you. Probably your birthday and postcode would also work. It doesn’t take much information to uniquely identify someone, anyway.


Remember the future?

The future is notoriously hard to pin down. For example, what is Seattle’s lasting legacy from 20th-century technology? What would people have pointed to in, say, the 1970s? Of course, Seattle is the home of Boeing, who did a lot of construction for NASA (and bought most of the other companies that were also doing so) on projects like the Saturn V rocket and the Space Shuttle. Toward the end of the 1970s, those in the know in Seattle would have confidently claimed that the Shuttle’s weekly trips into space as the world’s longest-distance haulage provider would be central to 21st-century Space Age technology. But neither the Shuttle nor the Saturn V works any more, and nothing equivalent has come along to replace them (certainly not from Boeing). The permanent remnant of Seattle’s part in the space race comes from earlier on, when the USSR already had satellites in orbit, Gagarin had safely returned, and the USA wanted to assert its technological superiority over the Soviets. I’m talking, of course, about Seattle Center and its most famous landmark: a giant lift shaft with a restaurant at one end and a gift shop at the other.

People like to prognosticate about how our industry, society or civilization is going to change: both in the short term, and in the distant future (which can be anything beyond about three years in software terms). Sure, it’s fun to speculate. Earlier this year I took part in a panel with my arch-brother among others, where many of the questions were about what direction we thought Apple might take with the iPad, how existing companies would work their software to fit in post-PC devices, that sort of thing. That means that not only do we enjoy prognostication, but we seek it out. People enjoy playing the game of deciding what the future will be like, and hope for that spark of satisfaction of knowing either that they were right, or that they were there when someone else was right.

But why? It’s not as if we’re any good at it. The one thing that history lets us discover is that people who predict the future generally get it wrong. If they have the foresight to make really grandiose predictions they get away with it, because no-one finds out that they were talking out of their arses until well after they died. But just as the Space Needle has outlived the Space Shuttle, so the “next big thing” can easily turn out to be a fad while something apparently small and inconsequential now turns out to last and last.

Of course I’ll discuss the computing industry in this article, but don’t think this is specific to computing. In general, people go for two diametric visions of the future: either it’s entirely different from what came before, or it’s the same but a little better. The horses are faster, that kind of thing. Typically, experts in an industry are the people who find it hardest to predict that middle ground: a lot of things are the same, but one or two things have created a large change. Like the people at the air ministry who knew that superchargers were too heavy to ever allow Frank Whittle’s jet turbine to take off. Or the people who didn’t believe that people could travel at locomotive speeds. Or H.G. Wells, who predicted men on (well, in) the Moon, severely stratified society, life on other planets…just not the computer that was invented during his lifetime.

OK, so, computing. Remember the future of computers? The future will contain "maybe five" computers, according to Thomas Watson at IBM. I’m in a room now with about nine computers, not including the microcontrollers in my watch, hi-fi, cameras and so forth. There were around ten examples of Colossus produced in the 1940s. Why maybe five computers? Because computers are so damned heavy you need to reinforce the main frame of your floor to put them in. Because there are perhaps two dozen people in the world who understand computers. Because if you have too many then you have loads of dangerous mercury sloshing around. Because companies are putting themselves out of business attempting to sell these things for a million dollars when the parts cost nearly two million. And, finally, because there’s just not much you can do on a computer: not everyone needs ballistics tables (and most of the people who do want them, we don’t want to sell to).

Enough of the dim depths of computing. Let’s come into the future’s future, and ask whether you remember the other future of computers: the workstation. Of course, now we know that mainframes are old and busted, and while minicomputers based on transistor-to-transistor logic are cheaper, smaller, more reliable and all, they’re still kindof big. Of course, micros like the Altair and the Apple are clearly toys, designed as winter-evening hobbies for married men[*]. Wouldn’t it be better to use VLSI technology so that everyone can have their own time-sharing UNIX systems[**] on their desks, connected perhaps through the ultra-fast thinwire networks?

Better, maybe, but not best. Let’s look at some of the companies involved, in alphabetical order. Apollo? Acquired by HP. Digital? Acquired, circuitously, by HP. HP? Still going, but not making workstations (nor, apparently, much else) any more. IBM? See HP. NeXT? Acquired, making consumer electronics these days. Silicon Graphics? Acquired (after having left the workstation industry). Stanford University Networks? Acquired by a service company, very much in the vein of IBM or HP. Symbolics, the owners of the first ever .com domain? Went bankrupt.

The problem with high-end, you see, is that it has a tendency to become low-end. Anything a 1980s workstation could do could be done in a “personal” computer or a micro by, well, by the 1980s. It’s hard to sell bog-standard features at a premium price, and by the time PCs had caught up to workstations, workstations hadn’t done anything new. Well, nothing worth talking about…who’d want a camera on their computer? Notice that the companies that did stay around – IBM and HP – did so by getting out of the workstation business: something SGI and Sun both also tried to do and failed. The erosion of the workstation market by the domestic computer is writ most large in the Apple-NeXT purchase.

So workstations aren’t the future. How about the future of user interfaces? We all know the problem, of course: novice computer users are confused and dissuaded by the “computery-ness” of computers, and by the abstract nature of the few metaphors that do exist (how many of you wallpaper your desktop?). The solution is obvious: we need to dial up the use of metaphor and skeuomorphism to make the user more comfortable in their digital surroundings. In other words, we need Bob. By taking more metaphors from the real world, we provide a familiar environment for users who can rely on what they already know about inboxes, bookshelves, desk drawers and curtains(!) in order to navigate the computer.

Actually, what we need is to get rid of every single mode in the computer’s interface. This is, perhaps, a less well-known future of computing than the Bob future of computing, despite being documented in the classic book The Humane Interface, by Jef Raskin. The theory goes like this: we’ve got experience of modal user interfaces, and we know that they suck. They force the user to stop working while the computer asks some asinine question, or tells them something pointless about the state of their application. They effectively reverse the master-slave relationship, making the user submit to the computer’s will for a while. That means that in the future, computers will surely dispose of modes completely. Well, full modes: of course partial modes that are entirely under the user’s control (the Shift key represents a partial mode, as does the Spotlight search field) are still permitted. So when the future comes to invent the smartphone, there’ll be no need for a modal view controller in the phone’s API because future UI designers will be enlightened regarding the evils of modality.
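The irony is easy to demonstrate in code. This sketch uses the stock UIKit modal-presentation API of the day; MyViewController is only a stand-in for whatever view controller wants to send mail.

    #import <UIKit/UIKit.h>
    #import <MessageUI/MessageUI.h>

    @interface MyViewController : UIViewController
    - (void)composeMail;
    @end

    @implementation MyViewController

    // Present the mail composer as a full mode: until the sheet goes away, the
    // rest of the app's UI is off-limits to the user.
    - (void)composeMail
    {
        // Real code would first check +[MFMailComposeViewController canSendMail]
        // and set mailComposeDelegate so it can dismiss the sheet later.
        MFMailComposeViewController *composer = [[MFMailComposeViewController alloc] init];
        [self presentModalViewController:composer animated:YES];
        [composer release];
    }

    @end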

A little closer to home, and a little nerdier, do you remember the future of the filesystem? HFS+ is, as we know, completely unsuitable as a filesystem for 2009, so future Macs will instead use Sun’s ZFS. This will allow logical volume management, versioned files…the sorts of goodies that can’t be done on HFS+. Oh, wait.

These are all microcosmic examples of how the future of computing hasn’t quite gone according to the predictions. I could quote more (one I’ve used before is Bob Cringely’s assertion in 1992 that in fifteen years, we’ll have post-PC PCs; well I’m still using a PC to write this post and it’s 2011), but it’s time to look at the bigger picture, so I’m going to examine why the predictions from one particular book have or haven’t come about. I’m not picking this book because I want to hate on it; in fact in a number of areas the predictions are spot on. I’m picking on this book because the author specifically set out to make short, medium and long-term forecasts about the silicon revolution, and the longest-term predictions were due to have become real by the year 2000. The book is The Mighty Micro: Impact of the Computer Revolution by Dr. Christopher Evans, published in 1979.

According to The Mighty Micro, the following should all have happened by now.

  • Openness and availability of information leads to the collapse of the Soviet Union. ✓
  • A twenty-hour working week and retirement at fifty. ✗
  • Microcontroller-based home security. ✓ For everyone, replacing the physical lock-and-key. ✗
  • Cars that anticipate and react to danger. ✓ As the standard. ✗
  • A “wristwatch” that monitors pulse and blood pressure. ✓
  • An entire library stored in the volume of a paperback book. ✓
  • A complete end to paper money. ✗
  • An end to domestic crime. ✗

So what happened? Well, “processors and storage will get smaller and cheaper” was the prevailing trend from the forties to the seventies, i.e. over the entire history of electronic computing. Assuming that would continue, and that new applications for tiny computers would be discovered, was a fairly safe bet, and one that played out well. The fundamental failures behind all of the other predictions were twofold: that such applications would necessarily replace, rather than augment, whatever it was that we were doing before computers, and that we would not find novel things to do with our time once computers were doing the things we already did. The idea was that once computers were doing half of our work, we would have 50% as much work to do: not that we would be able to do other types of work for that 50% of our working week.

One obvious thing we – well, some of us – have to do now that we didn’t before is program computers. Borrowing some figures from the BSA, there were 1.7M people working in software in the US in 2007, earning significantly more than the national average wage (though remember that this was during the outsourcing craze, so a lot of costs and jobs even for American companies might be missing here). The total worldwide expenditure on (packaged, not bespoke) software was estimated at $300bn. Once you include the service aspects and bespoke or in-house development, it’s likely that software was already a trillion-dollar industry by 2007. Before, if you remember, the smartphone app gold rush.

This is a huge (and, if we’re being brutally honest, inefficient) industry, with notoriously short deadlines, long working hours, capricious investors and variable margins. Why was it not predicted that, just as farmhands became machine operators, machine operators would become computer programmers? That the work of not having a computer would be replaced by the work of having a computer?

So, to conclude, I’ll return to a point from the article’s introduction: that making predictions is easy and fun, but making accurate predictions is hard. When a pundit tells you that something is a damp squib or a game-changer, they might be correct…but you might want to hedge your bets. Of course, carry on prognosticating and asking me to do so: it’s enjoyable.

[*] This is one of the more common themes of futurology; whatever the technological changes, whatever their impacts on the political or economic structure of the world, you can bet that socially things don’t change much, at least in the eye of the prognosticators. Take the example of the Honeywell Kitchen Computer: computers will revolutionise the way we do everything, but don’t expect women to use them for work.

[**] Wait, if we’ve each got our own computer, why do they have to be time-sharing?


Want to hire iamleeg?

Well, that was fun. For nearly a year I’ve been running Fuzzy Aliens, a consultancy for app developers to help get security and privacy requirements correct, reducing the burden on the users. This came after a year of doing the same as a contractor, and a longer period of helping out via conference talks, a book on the topic, podcasts and so on. I’ve even been helping the public to understand the computer industry at museums.

Everything changes, and Fuzzy Aliens is no exception. While I’ve been able to help out on some really interesting projects, for everyone from indies to banks, there hasn’t been enough of this work to pay the bills. I’ve spoken with a number of people on this, and have heard a range of opinions on why there isn’t enough mobile app security work out there to keep one person employed. My own reflection on the year leads me to these items as the main culprits:

  • There isn’t a high risk to developers associated with getting security wrong;
  • Avoiding certain behaviour to improve security can mean losing out to competitors who don’t feel the need to comply;
  • The changes I want to make to the industry won’t come from a one-person company; and
  • I haven’t done a great job of communicating the benefits of app security to potential customers.

Some people say things like Fuzzy Aliens is “too early”, or that the industry “isn’t ready” for such a service. Those are actually indications that I haven’t made the industry ready: in other words, that I didn’t get the advantages across well enough. Anyway, the end results are that I can and will learn from Fuzzy Aliens, and that I still want to make the world a better place. I will be doing so through the medium of salaried employment. In other words, you can give me a job (assuming you want to). The timeline is this:

  • The next month or so will be my Time of Searching. If you think you might want to hire me, get in touch and arrange an interview during August or early September.
  • Next will come my Time of Changing. Fuzzy Aliens will still offer consultancy for a very short while, so if you have been sitting on the fence about getting some security expertise on your app, now is the time to do it. But this will be when I research the things I’ll need to know for…
  • whatever it is that comes next.

What do I want to do?

Well, of course my main areas of experience are in applications and tools for UNIX platforms—particularly Mac OS X and iOS—and application security, and I plan to continue in that field. A former manager of mine described me thus on LinkedIn:

Graham is so much more than a highly competent software engineer. A restless “information scout” – finding it, making sense of it, bearing on a problem at hand or forging a compelling vision. Able to move effortlessly between “big picture” and an obscure detail. Highly capable relationships builder, engaging speaker and persuasive technology evangelist. Extremely fast learner. Able to use all those qualities very effectively to achieve ambitious goals.

Those skills can best be applied strategically, I think: so it’s time to become a senior/chief technologist, technology evangelist, technical strategy officer or developer manager. That’s the kind of thing I’ll be looking for, or for an opportunity that can lead to it. I want to spend an appreciable amount of time supporting a product or community that’s worth supporting: much as I’ve been doing for the last few years with the Cocoa community.

Training and mentoring would also be good things for me to do, I think. My video training course on unit testing seems to have been well-received, and of course I spent a lot of my consulting time on helping developers, project managers and C*Os to understand security issues in terms relevant to their needs.

Where do I want to do it?

Location is somewhat important, though obviously with a couple of years’ experience of telecommuting I’m comfortable with remote working too. The roles I’ve described above, which depend as much on relationships as on sitting at a computer, may be best served by split working.

My first preference for the location of my desk is a large subset of the south of the UK, bounded by Weston-super-Mare and Lyme Regis to the west, Gloucester and Oxford to the north, Reading and Chichester to the east and the water to the south (though not the Solent: IoW is fine). Notice that London is outside this area: having worked for both employers and clients in the big smoke, I would rather not be in that city every day for any appreciable amount of time.

I’d be willing to entertain relocation elsewhere in Europe for a really cool opportunity. Preferably somewhere with a Germanic language because I can understand those (including, if push comes to shove, Icelandic and Faroese). Amsterdam, Stockholm and Dublin would all be cool. The States? No: I couldn’t commit to living over there for more than a short amount of time.

Who will you do it for?

That part is still open: it could be you. I would work for commercial, charity or government/academic sectors, but I have this restriction: you must not just be another contract app development agency/studio. You must be doing what you do because you think it’s worth doing, because that’s the standard I hold myself to. And charging marketing departments to slap their logo onto a UITableView displaying their blog’s RSS feed is not worth doing.

That’s why I’m not just falling back on contract iOS app development: it’s not very satisfying. I’d rather be paid enough to live doing something great, than make loads of money on asinine and unimportant contracts. Also, I’d rather work with other cool and motivated people, and that’s hard to do when you change project every couple of months.

So you’re doing something you believe in, and as long as you can convince me it’s worth believing in and will be interesting to do, and you know where to find me, then perhaps I’ll help you to do it. Look at my CV, then, as I said before, e-mail me and we’ll sort out an interview.

I expect my reward to be dependent on how successful I make the product or community I’m supporting: it’s how I’ll be measuring myself so I hope it’s how you will be too. Of course, we all know that stock and options schemes are bullshit unless the stock is actually tradeable, so I’ll be bearing that in mind.

Some miscellaneous stuff

Here’s some things I’m looking for, either to embrace or avoid, that don’t quite fit in to the story above but are perhaps interesting to relate.

Things I’ve never done, but would

These aren’t necessarily things my next job must have, and aren’t all even work-related, but are things that I would take the opportunity to do.

  • Give a talk to an audience of more than 1,000 people.
  • Work in a field on a farm. Preferably in control of a tractor.
  • Write a whole application without using any accessors.
  • Ride a Harley-Davidson along the Californian coast.
  • Move the IT security industry away from throwing completed and deployed products at vulnerability testers, and towards understanding security as an appropriately-prioritised implicit customer requirement.
  • Have direct reports.

Things I don’t like

These are the things I would try to avoid.

  • “Rock star” developers, and companies who hire them.
  • Development teams organised in silos.
  • Technology holy wars.
  • Celery. Seriously, I hate celery.

OK, but first we like to Google our prospective hires.

I’ll save you the trouble.


On the new Lion security things

This post will take a high-level view of some of Lion’s new security features, and examine how they fit (or don’t) in the general UNIX security model and with that of other platforms.

App sandboxing

The really big news for most developers is that the app sandboxing from iOS is now here. The reason it’s big news is that pretty soon, any app on the Mac App Store will need to sign up to sandboxing: apps that don’t will be rejected. But what is it?

Since 10.5, Mac OS X has included a mandatory access control framework called seatbelt, which enforces restrictions governing what processes can access what features, files and devices on the platform. This is completely orthogonal to the traditional user-based permissions system: even if a process is running in a user account that can use an asset, seatbelt can say no and deny that process access to that asset.

[N.B. There’s a daemon called sandboxd which is part of all this: apparently (thanks @radian) it’s just responsible for logging.]

In 10.5 and 10.6, it was hard for non-Apple processes to adopt the sandbox, and the range of available profiles (canned definitions of what a process can and cannot do) was severely limited. I did create a profile that allowed Cocoa apps to function, but it was very fragile and depended on the private details of the internal profile definition language.

The sandbox can be put into a trace mode, where it will report any attempt by a process to violate its current sandbox configuration. This trace mode can be used to profile the app’s expected behaviour: a tool called sandbox-simplify then allows construction of a profile that matches the app’s intentions. This is still all secret internal stuff to do with the implementation though; the new hotness as far as developers are concerned starts below.

With 10.7, Apple has introduced a wider range of profiles based on code signing entitlements, which makes it easier for third party applications to sign up to sandbox enforcement. An application project just needs an entitlements.plist indicating opt-in, and it gets a profile suitable for running a Cocoa app: communicating with the window server, pasteboard server, accessing areas of the file system and so on. Additional flags control access to extra features: the iSight camera, USB devices, users’ media folders and the like.

By default, a sandboxed app on 10.7 gets its own container area on the file system just like an iOS app. This means it has its own Library folder, its own Documents folder, and so on. It can’t see or interfere with the preferences, settings or documents of other apps. Of course, because Mac OS X still plays host to non-sandboxed apps including the Finder and Terminal, you don’t get any assurance that other processes can’t monkey with your files.
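A quick way to see the container from inside a sandboxed app is to ask Foundation where “home” is. The calls below are standard Foundation API; the path in the comment is only illustrative of the layout.

    #import <Foundation/Foundation.h>

    // In a sandboxed app, the "home directory" is the app's container, so the
    // usual path lookups resolve to per-app locations rather than shared ones.
    static void LogSandboxedPaths(void)
    {
        // Something like /Users/me/Library/Containers/com.example.MyApp/Data
        // rather than /Users/me (path shown for illustration only).
        NSLog(@"home: %@", NSHomeDirectory());

        NSArray *urls = [[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory
                                                                inDomains:NSUserDomainMask];
        NSLog(@"documents: %@", [urls lastObject]);
    }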

What this all means is that apps running as one user are essentially protected from each other by the sandbox: if any one goes rogue or is taken over by an attacker, its effect on the rest of the system is restricted. We’ll come to why this is important shortly in the section “User-based access control is old and busted”, but first: can we save an app from itself?

XPC

Applications often have multiple disparate capabilities from the operating system’s perspective that all come together to support a user’s workflow. That is, indeed, the point of software, but it comes at a price: when an attacker can compromise one of an application’s entry points, he gets to misuse all of the other features that app can access.

Of course, mitigating that problem is nothing new. I discussed factoring an application into multiple processes in Professional Cocoa Application Security, using Authorization Services.

New in 10.7, XPC is a nearly fully automatic way to create a factored app. It takes care of the process management, and through the same mechanism as app sandboxing restricts what operating system features each helper process has access to. It even takes care of message dispatch and delivery, so all your app needs to do is send a message over to a helper. XPC will start that helper if necessary, wait for a response and deliver that asynchronously back to the app.
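Here is a rough sketch of that flow using the C-level API that ships with 10.7. The xpc_* and dispatch calls are the real interfaces; the service name com.example.networker and the message keys are made up for illustration.

    #import <Foundation/Foundation.h>
    #import <xpc/xpc.h>

    // Send a request to a factored-out helper and handle its reply asynchronously.
    // XPC starts the helper on demand and restarts it if it dies.
    static void FetchURLViaHelper(const char *url)
    {
        // "com.example.networker" is a hypothetical XPC service in the app bundle.
        xpc_connection_t helper = xpc_connection_create("com.example.networker",
                                                        dispatch_get_main_queue());
        xpc_connection_set_event_handler(helper, ^(xpc_object_t event) {
            // Connection-level events (errors, invalidation) arrive here.
        });
        xpc_connection_resume(helper);

        xpc_object_t message = xpc_dictionary_create(NULL, NULL, 0);
        xpc_dictionary_set_string(message, "url", url);
        xpc_connection_send_message_with_reply(helper, message,
                                               dispatch_get_main_queue(),
                                               ^(xpc_object_t reply) {
            if (xpc_get_type(reply) == XPC_TYPE_DICTIONARY) {
                NSLog(@"helper replied: %s", xpc_dictionary_get_string(reply, "status"));
            }
        });
        xpc_release(message);
    }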

So now we have access control within an application. If any part of the app gets compromised—say, the network handling bundle—then it’s harder for the attacker to misuse the rest of the system because he can only send particular messages with specific content out of the XPC bundle, and then only to the host app.

Mac OS X is not the first operating system to provide intra-app access control. .NET allows different assemblies in the same process to have different privileges (for example, a “write files” privilege): code in one assembly can only call out to another if the caller has the privilege it’s trying to use in the callee, or an adapter assembly asserts that the caller is OK to use the callee. The second case could be useful in, for instance, NSUserDefaults: the calling code would need the “change preferences” privilege, which is implemented by writing to a file so an adapter would need to assert that “change preferences” is OK to call “write files”.

OK, so now the good stuff: why is this important?

User-based access control is old and busted

Mac OS X—and for that matter Windows, iOS, and almost all other current operating systems—are based on timesharing operating systems designed for minicomputers (in fact, Digital Equipment Corp’s PDP series computers in almost every case). On those systems, there are multiple users all trying to use the same computer at once, and they must not be able to trip each other up: mess with each others’ files, kill each others’ processes, that sort of thing.

Apart from a few server scenarios, that’s no longer the case. On this iMac, there’s exactly one user: me. However I have to have two user accounts (the one I’m writing this blog post in, and a member of the admin group), even though there’s only one of me. Apple (or more correctly, software deposited by Apple) has more accounts than me: 75 of them.

The fact is that there are multiple actors on the system, but mapping them on to UNIX-style user accounts doesn’t work so well. I am one actor. Apple is another. In fact, the root account is running code from three different vendors, and “I” am running code from 11 (which are themselves talking to a bunch of network servers, all of which are under the control of a different set of people again).

So it really makes sense to treat “provider of twitter.com HTTP responses” as a different actor to “code supplied as part of Accessorizer” as a different actor to “user at the console” as a different actor to “Apple”. By treating these actors as separate entities with distinct rights to parts of my computer, we get to be more clever about privilege separation and assignment of privileges to actors than we can be in a timesharing-based account scheme.

Sandboxing and XPC combine to give us a partial solution to this treatment, by giving different rights to different apps, and to different components within the same app.

The future

This is not necessarily Apple’s future: this is the direction in which I see the privilege system described above taking the operating system.

XPC (or something better) for XNU

Kernel extensions—KEXTs—are the most dangerous third-party code that exists on the platform. They run in the same privilege space as the kernel, so can grub over any writable memory in the system and make the computer do more or less anything: even actions that are forbidden to user-mode code running as root are open to KEXTs.

For the last eleventy billion years (or since 10.4 anyway), developers of KEXTs for Mac OS X have had to use the Kernel Programming Interfaces to access kernel functionality. Hopefully, well-designed KEXTs aren’t actually grubbing around in kernel memory: they’re providing I/O Kit classes with known APIs and KAUTH veto functions. That means they could be run in their own tasks, with the KPIs proxied into calls to the kernel. If a KEXT dies or tries something naughty, that’s no longer a kernel panic: the KEXT’s task dies and its device becomes unavailable.

Notice that I’m not talking about a full microkernel approach like real Mach or Minix: just a monolithic kernel with separate tasks for third-party KEXTs. Remember that “Apple’s kernel code” can be one actor and, for example, “Symantec’s kernel code” can be another.

Sandboxing and XPC for privileged processes

Currently, operating system services are protected from the outside world and each other by the 75 user accounts identified earlier. Some daemons also have custom sandboxd profiles, written in the internal-use-only Scheme dialect and located at /usr/share/sandbox.

In fact, the sandbox approach is a better match to the operating system’s intention than the multi-user approach is. There’s only one actor involved, but plenty of pieces of code that have different needs. Just as Microsoft has the SYSTEM account for Windows code, it would make sense for Apple to have a user account for operating system code that can do things Administrator users cannot do; and then a load of factored executables that can only do the things they need.

Automated system curation

This one might worry sysadmins, but just as the Chrome browser updates itself as it needs, so could Mac OS X. With the pieces described above in place, every Mac would be able to identify an “Apple” actor whose responsibility is to curate the operating system tasks, code, and default configuration. So it should be able to allow the Apple actor to get on with that where it needs to.

That doesn’t obviate the need for an “Administrator” actor, whose job is to override the system-supplied configuration, enable and configure additional services and provide access to other actors. So sysadmins wouldn’t be completely out of a job.


TDD/unit testing video training for iOS developers

I recently recorded a series of videos on unit testing and test-driven development for iOS developers with Scotty of iDeveloper.tv. The videos and associated source code are now available for purchase and download.


Making computing exciting

Over the last couple of years, I have visited three different museums of computing. NSBBQ visited the National Museum of Computing at Bletchley Park in 2009 and the Museum of Computing in Swindon in 2010. At this year’s WWDC I got the chance, along with a great group of friends, to visit the Computer History Museum in Mountain View.

While each has its interesting points, each also has its disappointments. My principal problem is this: most of the kit is switched off. Without a supply of electrons and an output device, most computers from my childhood just look like beige typewriters. Earlier computers look like poorly thought out hi-fi equipment, or refrigerators that Stanley Kubrick tarted up to use as props. The way you find out just how much computers have advanced over the last few decades is not by looking at the cases: it’s by using the computers.

If you’re anything like me, you keep track of your finances and tax return figures in Numbers. Now imagine doing it in VisiCalc. Better still: try doing it in VisiCalc. Or take your iOS app, and implement the core features in Microsoft BASIC (or MC6809 machine code, if you’re feeling hardcore). Write your next blog post in PenDown. It’s this experience that will demonstrate just how primitive even a 15-year-old desktop computer feels. And the portables? See if you can lift one!

Of course, complaining is the easy part. Fixing it is harder. Which is why I’m now a volunteer at the Museum of Computing in Swindon, on the team that designs the gallery. My main goal is to make the whole experience more interactive. In the short term, this means designing programming challenges for kids to try out: let’s face it, if we want more children to be interested in programming, we need to make programming more interesting to children. I certainly don’t relish the prospect of becoming a portable brain in a pickle jar just because the next generation doesn’t know any Objective-C.

So it won’t happen overnight, but if I’m at all successful then we should be able to make the museum gallery more interactive, more educational, and more fun. To find out how it’s going, follow @MuseumComputing.


On what Marcus said

This post is a response to Why so serious? over at Cocoa is my Girlfriend. Read that.

Welcome back. OK, so firstly let’s talk about that damned carousel. Kudos to the developer who wrote a nice smoothly scrolling layer-backed image pager, but as Marcus says, that’s not the same as doing a nice smoothly scrolling carousel. Believe me, I’ve taken around one hundred Instruments traces of the carousel. Swirling images around an iPad screen is the least of its concerns.

Now, let’s start looking at the state of the community thing. It’s like an iceberg, or a duck. Or maybe a duck with the proportions of an iceberg. The point is that what you see is a bunch of developers being flown around the world to talk at conferences, plugging their books (the evil capitalist bastards). What you get is a bunch of people who have put their jobs and careers into the background for a while because they learned something cool and want to share it with the class. The 7/8ths of the duck kicking frantically below the ice is people not getting paid to help everyone else do their job as well as they can.

I can’t speak for Marcus’s experience, but I can describe my own. That security book? The one that I’m already planning to replace because there’ll be so much more stuff to talk about after Monday? The one where I know you read chapter one then put it on the shelf until such time as one of the other chapters describes a problem you have? Around nine months of research, study, and staring blankly at an OpenOffice window. During that time, almost all of my coding was either learning about or preparing samples for the content of the book. I got a warm feeling when I saw it in print, but warm feelings don’t pay the rent.

The same, but in smaller writing, for conference talks (one to two weeks of preparation each) and even blog posts (half to two days of preparation each). That’s why I love reading posts from CIMGF, TheoCacao, Mike Ash and others: each new post represents time someone else has taken to make me a better programmer: time they could have billed to a client. By the way I don’t know whether this is commonly known, but there’s no pay for doing technical talks at iOS developer conferences. The keynote speakers sometimes get paid, the content speakers do not.

Ok, so that’s me on my high horse, but we were supposed to be talking about snarking in the community. That happens. My favourite recent example was the one piece of negative feedback I got from a recent conference talk: a page-long missive describing how I’d wasted the person’s time by talking about the subject of my talk rather than the topic they wanted to hear about.

Thing is, there’s a lesson in there. I could have done a better job at either describing the importance of my subject to that attendee, or getting them to leave the room early on in the talk. Could have, but didn’t. Next time, I will. And so that’s great, this commenter told me something I didn’t know before, something I can use to change the way I work.

But that’s not always the case. Sometimes, you look for the lesson and there isn’t one. The tweeter just doesn’t like you. The best way to get past this is to realise that the exchange has been neutral: you got nothing from their feedback, but in return because they chose to ignore you, you gave them nothing too. Maybe that guy does know the topic better than you. Maybe he’s just a blowhard. Either way, you gave nothing, you got nothing: it’s not a loss, it’s a no-score draw.

But then there are the other times. You know what I mean, the dark times. When your amygdala or whatever weird bit of your brain it is responds before your cortex does (I’m no neuroscientist, and I don’t even play one in my armchair), and you get the visceral rage before you get a chance to rationally respond.

There’s one common case that still turns me into a big green hulk of fury, even though I should have got over it years ago. It’s the times when a commentator or talk attendee decides that my entire argument is broken because that person either disagrees with my choice of terminology, or can think of an edge case where my solution can’t be rubber-stamped in.

On the one hand, as software engineers we are used to finding edge cases where the requirements don’t quite seem to fit. On the other hand, as software engineers it is our job to solve these problems and edge cases. If you find a situation at work where a particular set of circumstances causes your app to fail, I’m willing to bet that you consider that a bug and try to find a way to fix that app, then you give the bug fix to your users. I doubt you pull the app from the store and smugly proclaim that your users were idiots for thinking it could solve their problems in the first place.

So apply that same thinking to solutions other people are showing you. If you have to drill down to an edge case to find the problem, then what you’re saying is not that the solution is wrong, but that it’s almost right. Provide not a repudiation but an enhancement, a bug fix if you will. Make the solution better and we’ve all learned something.

Conclusion

Of course, don’t be a dick. Your twitter-wang is not the most important thing in your career; knowledge is. You’re a knowledge worker. The person who got up on that stage, or wrote that post or that book, did it because they found something cool and wanted everyone to benefit. They didn’t make you pay some percentage of your app revenue to use that knowledge, or withhold the knowledge, or supply it exclusively to your competition. They told you something they thought would help.

If it didn’t help, maybe that’s because you know something about the topic that they didn’t. That’s fine, but don’t stop at saying that they’re wrong. That doesn’t help you or them, or anyone else who listened. Understand their position, understand how your knowledge provides a different perspective, then combine the two to make the super-mega-awesome KnowledgeZoid. And now start sharing that.

But don’t expect that just because you’re not being a dick, everyone else will not be a dick. Just try to avoid taking it personally, which is hard: I certainly can’t do it all the time. You took a risk in raising your head above the parapet and trying to get us engineers to change the way we work: the reward for that far outweighs the cost of dealing with detractors.

One more thing

There’s another group of developers, of course. Bigger than the sharers, bigger than the detractors. That’s the group of developers who silently get on with building great things. Please, if you’re in that group, consider heading over to the dev forums or to stack overflow and answering one question. Or adding a paragraph to the cocoadev wiki (or just removing decade-old conversations from the content). We’re all eager to learn from you.
