App security consultancy from your favourite boffin

I’m very excited to soon be joining the ranks of Agant Ltd, working on some great apps with an awesome team of people. I’ll be bringing with me my favourite title, Smartphone Security Boffin. Any development team can benefit from a security boffin, but I’m also very excited to be in product development with the people who make some of the best products on the market.

There’s another side to all of this: once again, security boffinry can be at your disposal. I’ll be available on a consultancy basis to help out with your application security and privacy issues. If you’re unsure how to do SSL right in your iOS app, are having difficulty getting your Mac software to behave in the App Store sandbox, or need help identifying the security pain points in your application’s design or code, I can lend a hand.

Of course, it’s not just about technology. The best way to help your developers get security right is to help them help themselves. When I’ve done security training for developers before, I’ve seen those flashes of enlightenment when delegates realise how the issues relate to their own work; the hasty notes of class or method names to look into back at the desk; the excited discussions in the breaks. Security training for iOS app developers is great for the people, great for the product – and, of course, something I can help out with.

To check availability and book some time (which will be no earlier than July), drop me a line on the twitters or at graham@agant.com.

A site for discussing app security

There’s a new IT security site over at Stack Exchange. Questions and answers on designing and implementing IT security policy, and on app security are all welcome.

I’m currently a moderator at the site, but that’s just an interim thing while the site is being bootstrapped. Obviously, if people subsequently vote for me as a permanent moderator I’ll stay in, but the converse is also true. Anyway, check out the site, ask and answer questions, let’s make it as good a venue for app security discussion as stackoverflow.com is for general programming.

On Fuzzy Aliens

I have just launched a new company, Fuzzy Aliens[*], offering application security consultancy services for smartphone app developers. This is not the FAQ list; it’s the “questions I want to answer so that they don’t become frequently asked” list.

What do you offer?

The company’s services are all focussed on helping smartphone and tablet app developers discover and implement their applications’ security and privacy requirements. When planning an app, I can help with threat modelling, training developers, securing the development lifecycle, eliciting requirements, designing a secure user experience, and developing a testing strategy.

When it comes to implementation, you can hire me to do the security work on your iOS or Android app. That may be some background “plumbing” like storing a password or encrypting sensitive content, or it might be an end-to-end security feature. I can also do security code reviews and vulnerability analysis on existing applications.
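To give a flavour of what that “plumbing” looks like, here’s a minimal sketch of storing a password in the iOS keychain using the Security framework. The service identifier is invented for the example, and a real app would want rather more careful error handling.

```swift
import Foundation
import Security

/// A minimal sketch of stashing a secret in the iOS keychain.
/// The service name is a placeholder; error handling is deliberately thin.
func storePassword(_ password: String, account: String) -> Bool {
    guard let secret = password.data(using: .utf8) else { return false }

    // Attributes that identify the item we want to create or replace.
    let searchQuery: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp",  // hypothetical service identifier
        kSecAttrAccount as String: account
    ]

    // Delete any existing item so SecItemAdd doesn't fail with errSecDuplicateItem.
    SecItemDelete(searchQuery as CFDictionary)

    var addQuery = searchQuery
    addQuery[kSecValueData as String] = secret
    // Only make the secret available once the device has been unlocked.
    addQuery[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlocked

    return SecItemAdd(addQuery as CFDictionary, nil) == errSecSuccess
}
```

Even in a snippet that small there are decisions to make – which accessibility class to use, whether to share the item between apps, what to do on failure – which is exactly the sort of thing it’s worth getting a second pair of eyes on.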

Why would I want that?

If you’re developing an application destined for the enterprise market, you probably need it. Company I.T. departments will demand applications that conform to local policy regarding data protection, perhaps based on published standards such as the ISO 27000 family or PCI-DSS.

In the consumer market, users are getting wise to the privacy problems associated with mobile apps. Whether it’s accidentally posting the wrong thing to facebook, or being spied on by their apps, the public don’t want to—and shouldn’t need to—deal with security issues when they’re trying to get their work done and play their games.

Can I afford that?

Having been a Micro-ISV and contracted for others, I know that many apps are delivered under tight budgets by one-person companies. If all you need is a half day together to work on a niggling problem, that’s all you need to pay for. On the other hand I’m perfectly happy to work on longer projects, too :).

Why’s it called Fuzzy Aliens?

Well, the word “fuzz” obviously has a specific meaning in the world of secure software development, but basically the answer is that I knew I could turn that into a cute logo (still pending), and that it hadn’t been registered by a UK Ltd yet.

So how do I contact you about this?

You already have – you’re here. But you could see the company’s contact page for more specific information.


[*] More accurately, I have indicated the intent to do so. The articles of association have not yet been returned by Companies House, so for the next couple of days the blue touch paper is quietly smouldering.

On voices that matter

In October I’ll be in Philadelphia, PA talking at Voices That Matter: Fall iPhone Developers’ Conference. I’m looking forward to meeting some old friends and new faces, and sucking up a little more of that energy and enthusiasm that pervades all of the Apple-focussed developer events I’ve been to. In comparison with other fields of software engineering, Cocoa and Cocoa Touch development have a sense of community that almost exudes from the very edifices of the conference venues.

But back to the talk. Nay, talks. While Alice was directed by the cake at the bottom of the rabbit hole to “Eat Me”, the label on my slice said “bite off more of me than you can chew”. Therefore I’ll be speaking twice at this event; the first talk, on Saturday, is on Unit Testing, which I’ve just taken over from Dave Dribin. Having seen Dave talk on software engineering practices before (and had lively discussions regarding coupling and cohesion in Cocoa code in the bar afterwards), I’m very much looking forward to adding his content’s biological and technological distinctiveness to my own. I’ll be covering the why of unit testing in addition to the how and what.

My second talk is on – what else – security and encryption in iOS applications. In this talk I’ll be looking at some of the common features of iOS apps that require security consideration, at how to tease security requirements out of your customers and product managers, and then at the operating system support for satisfying those requirements. This gives me a chance to focus more intently on iOS than I was able to in Professional Cocoa Application Security (Wrox Professional Guides), looking at specific features and gotchas of the SDK and the distinctive environment iOS apps find themselves in.

I haven’t decided what my schedule for the weekend will be, but there are plenty of presentations I’m interested in watching. So look for me on the conference floor (and, of course, in the bar)!

On Fitts’s Law and Security

…eh? Don’t worry, read on and all shall be explained.

I’ve said in multiple talks and podcasts before that one key to good security is good user interface design. If users are comfortable performing their tasks, and your application is designed such that the easiest way to use it is to do the correct thing, then your users will make fewer mistakes. Not only will they be satisfied with the experience, they’ll be less inclined to stray away from the default security settings you provide (your app is secure by default, right?) and less likely to damage their own data.

Let’s look at a concrete example. Matt Gemmell has already done a pretty convincing job of destroying this UI paradigm:

The dreaded add/remove buttons

The button on the left adds something new, which is an unintrusive, undoable operation, and one that users might want to do often. The button on the right destroys data (or configuration settings or whatever), which if unintended represents an accidental loss of data integrity or possibly service availability.

Having an undo feature here is not necessarily a good mitigation, because a user flustered at having just lost a load of work or important data cannot be guaranteed to think of using it, especially as undo is one of those features that has a different meaning in different apps and is always hidden (there’s no “you can undo this” message shown to users). Having an upfront confirmation of deletion is not useful either, and Matt’s video has a good discussion of why.

Desktop UI

Anyway, this is a 19×19 px button (or two of them), right? Who’s going to miss that? This is where we start using the old cognometrics and physiology. If a user needs to add a new thing, he must:

  1. Think about what he wants to do (add a new thing)
  2. Think about how to do it (click the plus button, but you can easily make this stage longer by adding contextual menus, a New menu item and a keyboard shortcut, forcing him to choose)
  3. Move his hand to the mouse (if it isn’t there already)
  4. Locate the mouse pointer
  5. Move the pointer over the button
  6. Click the mouse button

Then there’s the cognitive effort of checking that the feedback is consistent with the expected outcome. Anyway, Fitts’s Law comes in at step 5. It tells us, in a very precise mathematical way, that it’s quicker to move the pointer to a big target than a small one. So while a precise user might take time to click in this region:

Fastidious clicking action

A user in a hurry (one who cares more about getting the task done than on positioning a pointing device) can speed up the operation by clicking in this region:

Hurried clicking action

The second region includes around a third of the remove button’s area; the user has a ~50% chance of getting the intended result, and ~25% of doing something bad.
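For the curious, the usual (Shannon) form of Fitts’s Law predicts movement time as MT = a + b × log2(D/W + 1), where D is the distance to the target and W is its width along the axis of motion; a and b are constants for a particular person and device. A quick sketch, with constants invented purely for illustration, shows what a 19 px target costs compared with a more generous one:

```swift
import Foundation

// Shannon formulation of Fitts's Law: MT = a + b * log2(D/W + 1).
// The constants a and b vary per user and device; these values are made up
// purely to illustrate the shape of the relationship.
func movementTime(distance: Double, targetWidth: Double,
                  a: Double = 0.1, b: Double = 0.15) -> Double {
    return a + b * log2(distance / targetWidth + 1)
}

// A 19 px wide button 400 px away, versus a 100 px wide target at the same distance.
let fiddly   = movementTime(distance: 400, targetWidth: 19)   // ≈ 0.77 s
let generous = movementTime(distance: 400, targetWidth: 100)  // ≈ 0.45 s
```

The hurried user effectively claws that time back by accepting a bigger, sloppier target region, which is how the remove button ends up in the firing line.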

Why are the click areas elliptical? Think about moving your hand – specifically moving the mouse in your hand (of course this discussion does not apply to trackpads, trackballs etc, but a similar one does). You can move your wrist left/right easily, covering large (in mousing terms) distances. Moving the mouse forward/back needs to be done with the fingers, a more fiddly task that covers less distance in the same time. If you consider arm motions the same applies – swinging your radius and ulna at the elbow to get (roughly) left/right motions is easier than pulling your whole arm at the shoulder to get forward/back. Therefore, from an ergonomic perspective, directly to one side of the add button is absolutely the worst place for the remove button to go.

Touch UI

Except on touch devices. We don’t really need to worry about the time spent moving our fingers to the right place on a touchscreen – most people develop a pretty good unconscious sense of proprioception (knowing where the bits of our bodies are in space) at a very early age and can readily put a digit where they want it. The problem is that we don’t want it in the same place the device does.

We spend most of our lives pointing at things, not stabbing them with the fleshy part of a phalange. Therefore when people go to tap a target on the screen, the biggest area of finger contact will be slightly below (and to one side, depending on handedness) the intended target. And of course they may not accurately judge the target’s location in the first place. In the image below, the user probably intended to tap on “Presentations”, but would the tap register there or in “LinkedIn Profile”? Or neither?

A comparatively thin finger

The easiest way to see this for yourself is to move from an iPhone (which automatically corrects for the user’s intended tap location) to an Android device (which doesn’t, so the tap occurs where the user actually touched the screen) or vice versa. Try typing on one for a while, then moving over to type on the other. Compare the sorts of typos you make on each device (obviously independent of the auto-correct facilities, which are good on both operating systems), and you’ll find that after switching devices you spend a lot of time hitting keys to the side of or below your target keys.
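One simple mitigation on touch devices is to give the safe, frequently-used control a tappable area that’s more generous than its visible bounds, while leaving the destructive control at its visual size (and well away from its neighbour). Here’s a minimal UIKit sketch of that idea; the class name and padding value are made up for the example:

```swift
import UIKit

/// A button whose tappable area extends beyond its visible bounds.
/// Intended for the safe, frequently-used control, so that a hurried or
/// slightly off-target tap still lands on the harmless action.
class GenerousHitAreaButton: UIButton {

    /// How far beyond the visible bounds a touch still counts, in points.
    /// 12 pt is an arbitrary value chosen for this sketch.
    var hitAreaPadding: CGFloat = 12

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        // Negative insets grow the rectangle in which touches are accepted.
        let expandedBounds = bounds.insetBy(dx: -hitAreaPadding, dy: -hitAreaPadding)
        return expandedBounds.contains(point)
    }
}
```

The destructive neighbour keeps its default hit area (or, better, gets moved somewhere else entirely), so the sloppy taps described above tend to land on the recoverable action.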

Conclusion

Putting any GUI controls next to each other invites the possibility that users will tap the wrong one. When a frequently-used control has a dangerous control as a near neighbour, there is a chance of accidental security problems. Making dangerous controls hard to activate accidentally provides not only a superior user experience, but a superior security experience.

On improved tool support for Cocoa developers

I started writing some tweets that were clearly taking up too much room. They started like this:

My own thoughts: tool support is very important to good software engineering. 3.3.1 is not a big inhibitor to novel tools. /cc @rentzsch

then this:

There’s still huge advances to make in automating design, bug-hunting/squashing and traceability/accountability, for instance.

(The train of thought was initiated by the Dog Spanner’s c4 release post.)

In terms of security tools, the Cocoa community needs to catch up with where Microsoft are before we need to start wondering whether Apple might be holding us back. Yes, I have started working on this; I expect to have something to show for it at NSConference MINI. However, I don’t mind whether it’s you or me who gets the first release; the important thing is that the tools should be available for all of us. So I don’t mind sharing my impression of where the important software security engineering tools for Mac and iPhone OS developers will be in the next few years.

Requirements comprehension

My first NSConference talk was on understanding security requirements, and it’s the focus of Chapter 1 of Professional Cocoa Application Security. The problem is, most of you aren’t actually engineers of security requirements; you’re engineers of beautiful applications. Where do you dump all of that security stuff while you’re focussing on making the beautiful app? It’s got to be somewhere that it’s still accessible, somewhere that it stays up to date, and it’s got to be available when it’s relevant. In other words, this information needs to be only just out of your way. A Pages document doesn’t really cut it.

Now over in the Windows world, they have the Microsoft Threat Modeling Tool, which makes it easy to capture and organise the security requirements, but stops short of providing any traceability or integration with the rest of the engineering process. It’d be great to know how each security requirement impacts each class, or the data model, etc.

Bug-finding

The Clang analyser is just the start of what static analysis can do. Many parts of Cocoa applications are data-driven, and good analysis tools should be able to inspect the relationship between the code and the data. Other examples: currently if you want to ensure your UI is hooked up properly, you manually write tests that inspect the outlets, actions and bindings you set up in the XIB. If you want to ensure your data model is correct, you manually write tests to inspect your entity descriptions and relationships. Ugh. Code-level analysis tools can already reverse-engineer test conditions from the functions and methods in an app; they ought to be able to use the rest of the app too. And they ought to make use of the security model, described above.
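To make the manual version concrete, this is roughly the kind of test I mean, sketched with XCTest. The storyboard name, identifier, outlet and action names are all hypothetical; they stand in for whatever your XIB actually defines, and they’re precisely the details a smarter analysis tool could extract for you:

```swift
import XCTest
import UIKit

// A hand-written wiring check of the sort a better tool could generate
// from the XIB or storyboard itself. "Main", "Login", "passwordField",
// "loginButton" and "logIn:" are all hypothetical names for this sketch.
class LoginWiringTests: XCTestCase {

    func testOutletsAndActionsAreConnected() {
        let storyboard = UIStoryboard(name: "Main", bundle: nil)
        let controller = storyboard.instantiateViewController(withIdentifier: "Login")
        controller.loadViewIfNeeded()

        // Outlets defined in Interface Builder should be non-nil once the view loads.
        XCTAssertNotNil(controller.value(forKey: "passwordField"),
                        "passwordField outlet is not connected")

        // The button's touch-up action should target the controller's logIn: method.
        let button = controller.value(forKey: "loginButton") as? UIButton
        let actions = button?.actions(forTarget: controller, forControlEvent: .touchUpInside)
        XCTAssertEqual(actions, ["logIn:"], "loginButton action is not wired up")
    }
}
```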

I have recently got interested in another LLVM project called KLEE, a symbolic execution tool. Current security testing practices largely involve “fuzzing”, or choosing certain malformed/random input to give to an app and seeing what it does. KLEE can take this a step further by (in effect) testing any possible input, and reporting on the outcomes for various conditions. It can even generate automated tests to make it easy to see what effect your fixes are having. Fuzzing will soon become obsolete, but we Mac people don’t even have a good and conventional tool for that yet.
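For anyone who hasn’t watched fuzzing in action, the core of it really is as crude as the sketch below: throw random bytes at whatever code handles untrusted input and watch for misbehaviour. The parse function here is a hypothetical stand-in for your app’s file importer or message parser; symbolic execution tools like KLEE explore the same input space systematically rather than at random.

```swift
import Foundation

/// Hypothetical stand-in for whatever routine in your app consumes
/// untrusted input: a file importer, a network message parser, and so on.
func parse(_ input: Data) throws -> [String: String] {
    // Real parsing logic would live here.
    return [:]
}

/// The crudest possible fuzzer: feed random blobs to the parser and log
/// anything that throws. Run under a debugger or memory sanitiser, the
/// interesting findings are the inputs that crash rather than throw.
func fuzz(iterations: Int = 10_000) {
    for iteration in 0..<iterations {
        let length = Int.random(in: 0...1024)
        let blob = Data((0..<length).map { _ in UInt8.random(in: 0...255) })
        do {
            _ = try parse(blob)
        } catch {
            print("Iteration \(iteration): \(length) bytes rejected with \(error)")
        }
    }
}
```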

Bug analysis

Once you do have fuzz tests or KLEE output, you start to get crash reports. But what are the security issues? Apple’s CrashWrangler tool can take a stab at analysing the crash logs to see whether a buffer overflow might lead to remote code execution, but again this is just the tip of the iceberg. Expect KLEE-style tools to be able to report on deviations from expected behaviour and security issues without having to wait for a crash, just as soon as we can tell the tool what the expected behaviour is. And that’s an interesting problem in itself, because really the specification of what you want the computer to do is your application’s source code, and yet we’re trying to determine whether or not that is correct.

Safe execution

Perhaps the bitterest pill to swallow for long-time Objective-C programmers: some time soon you will be developing for a managed environment. It might not be as high-level as the .Net runtime (indeed my money is on the LLVM intermediate representation, as hardware-based managed runtimes have been and gone), but the game has been up for C arrays, memory dereferencing and monolithic process privileges for years. Just as garbage collectors have obsoleted many (but of course not all) memory allocation problems, so environment-enforced buffer safety can obsolete buffer overruns, enforced privilege checking can obsolete escalation problems and so on. We’re starting to see this kind of safety retrofitted to compiled code using stack guards and the like, but by the time the transition is complete (if it ever is), expect your application’s host to be unrecognisable to the app as an armv7 or x86_64, even if the same name is still used.

Which vendor “is least secure”?

The people over at Intego have a blog post, Which big vendor is least secure? They argue that because Microsoft have upped their game, malware authors have started to target other products, notably those produced by Adobe and Apple.

That doesn’t really address the question though: which big vendor is least secure (or more precisely, which big vendor creates the least secure products)? It’s an interesting question, and one that’s so hard to answer that people usually get it wrong.

The usual metrics for vendor software security are:

  • Number of vulnerability reports/advisories last year
  • Speed of addressing reported vulnerabilities

Both are just proxies for the question we really want to know the answer to: “what risk does this product expose its users to?” Each has drawbacks when used as such a proxy.

The previous list of vulnerabilities seems to correlate with a company’s development practices – if they were any good at threat modelling, they wouldn’t have released software with those vulnerabilities in, right? Well, maybe. But maybe they did do some analysis, discovered the vulnerability, and decided to accept it. Perhaps the vulnerability reports were actually the result of their improved secure development lifecycle, and some new technique, tool or consultant has patched up a whole bunch of issues. Essentially all we know is what problems have been addressed and who found them, and we can tell something about the risk that users were exposed to while those vulnerabilities were present. Actually, we can’t tell too much about that, unless we can find evidence that it was exploited (or not, which is harder). We really know nothing about the remaining risk profile of the application – have 1% or 100% of vulnerabilities been addressed?

The only time we really know something about the present risk is in the face of zero-day vulnerabilities, because we know that a problem exists and has yet to be addressed. But reports of zero-days are comparatively rare, because the people who find them usually have no motivation to report them. It’s only once the zero-day gets exploited, and the exploit gets discovered and reported that we know the problem existed in the first place.

The speed of addressing vulnerabilities tells us something about the vendor’s ability to react to security issues. Well, you might think it does; actually it tells you a lot more about the vendor’s perception of their customers’ appetite for installing updates. Look at enterprise-focussed vendors like Sophos and Microsoft, and you’ll find that most security patches are distributed on a regular schedule so that sysadmins know when to expect them and can plan their testing and deployment accordingly. Both companies have issued out-of-band updates, but only in extreme circumstances.

Compare that model with that of Apple, a company clearly focussed on the consumer market. Apple typically have an ad hoc (or at least opaque) update schedule, with security and non-security content alike bundled into infrequent patch releases. Security content is simultaneously released for some earlier operating systems in a separate update. Standalone security updates are occasionally seen on the Mac, rarely (if ever) on the iPhone.

I don’t really use any Adobe software so had to research their security update schedule specifically for this post. In short, it looks like they have very frequent security updates, but without any public schedule. Using Adobe Reader is an exercise in unexpected update installation.

Of course, we can see when the updates come out, but that doesn’t directly mean we know how long they take to fix problems – for that we need to know when problems were reported. Microsoft’s monthly updates don’t necessarily address bugs that were reported within the last month, they might be working on a huge backlog.

Where we can compare vendors is in situations where they all ship the same component with the same vulnerabilities, and must provide the same update. The more reactive companies (who don’t think their users mind installing updates) will release the fixes first. In the case of Apple we can compare their fixes of shared components like open source UNIX tools or Java with those of other vendors – Linux distributors and Oracle mainly. It’s this comparison that Apple frequently loses, taking longer to release the same patch than Oracle, Red Hat, Canonical and friends.

So ultimately what we’d like to know is “which vendor exposes its customers to most risk?”, for which we’d need an honest, accurate and comprehensive risk analysis from each vendor or an independent source. Of course, few customers are going to want to wade through a full risk analysis of an operating system.