On the new Lion security things

This post will take a high-level view of some of Lion’s new security features, and examine how they fit (or don’t) in the general UNIX security model and with that of other platforms.

App sandboxing

The really big news for most developers is that the app sandboxing from iOS is now here. The reason it’s big news is that pretty soon, any app on the Mac app store will need to sign up to sandboxing: apps that don’t will be rejected. But what is it?

Since 10.5, Mac OS X has included a mandatory access control framework called seatbelt, which enforces restrictions governing what processes can access what features, files and devices on the platform. This is completely orthogonal to the traditional user-based permissions system: even if a process is running in a user account that can use an asset, seatbelt can say no and deny that process access to that asset.

[N.B. There’s a daemon called sandboxd which is part of all this: apparently (thanks @radian) it’s just responsible for logging.]

In 10.5 and 10.6, it was hard for non-Apple processes to adopt the sandbox, and the range of available profiles (canned definitions of what a process can and cannot do) was severely limited. I did create a profile that allowed Cocoa apps to function, but it was very fragile and depended on the private details of the internal profile definition language.
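For the curious, profiles in that language look roughly like the sketch below. It’s a Scheme dialect, and the operation names are my recollection of the private vocabulary rather than a documented interface, so treat every identifier here as illustrative:

```scheme
;; Sketch of a seatbelt profile in the private profile language.
;; Operation names (file-read*, mach-lookup and friends) are
;; Apple-internal and have changed between releases.
(version 1)
(deny default)                 ; start by denying everything...
(allow file-read*              ; ...then allow reading system libraries
       (regex #"^/usr/lib"))
(allow mach-lookup             ; ...and talking to one named Mach service
       (global-name "com.apple.windowserver.active"))
```

The deny-by-default shape is the important part: anything the profile doesn’t explicitly allow is refused, whatever the process’s user account would otherwise permit.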

The sandbox can be put into a trace mode, where it will report any attempt by a process to violate its current sandbox configuration. This trace mode can be used to profile the app’s expected behaviour: a tool called sandbox-simplify then allows construction of a profile that matches the app’s intentions. This is still all secret internal stuff to do with the implementation though; the new hotness as far as developers are concerned starts below.
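A trace configuration is, as far as I can tell, just another profile. A hedged sketch, with the caveat that the directive’s exact spelling is part of the same private language:

```scheme
;; Hypothetical trace profile: rather than denying operations, log every
;; operation the process attempts to the named file, which a tool like
;; sandbox-simplify can then boil down into a minimal allow-list.
(version 1)
(trace "/tmp/MyApp-trace.sb")
```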

With 10.7, Apple has introduced a wider range of profiles based on code signing entitlements, which makes it easier for third party applications to sign up to sandbox enforcement. An application project just needs an entitlements.plist indicating opt-in, and it gets a profile suitable for running a Cocoa app: communicating with the window server, pasteboard server, accessing areas of the file system and so on. Additional flags control access to extra features: the iSight camera, USB devices, users’ media folders and the like.
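A minimal entitlements property list might look like the following. The opt-in key is `com.apple.security.app-sandbox`; the camera key is one of the additional capability flags. I’m quoting both from memory, so check the current documentation before shipping:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<!-- opt this app in to sandbox enforcement -->
	<key>com.apple.security.app-sandbox</key>
	<true/>
	<!-- request an extra capability: access to the camera -->
	<key>com.apple.security.device.camera</key>
	<true/>
</dict>
</plist>
```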

By default, a sandboxed app on 10.7 gets its own container area on the file system just like an iOS app. This means it has its own Library folder, its own Documents folder, and so on. It can’t see or interfere with the preferences, settings or documents of other apps. Of course, because Mac OS X still plays host to non-sandboxed apps including the Finder and Terminal, you don’t get any assurance that other processes can’t monkey with your files.

What this all means is that apps running as one user are essentially protected from each other by the sandbox: if any one goes rogue or is taken over by an attacker, its effect on the rest of the system is restricted. We’ll come to why this is important shortly in the section “User-based access control is old and busted”, but first: can we save an app from itself?

XPC

Applications often have multiple disparate capabilities from the operating system’s perspective that all come together to support a user’s workflow. That is, indeed, the point of software, but it comes at a price: when an attacker can compromise one of an application’s entry points, he gets to misuse all of the other features that app can access.

Of course, mitigating that problem is nothing new. I discussed factoring an application into multiple processes in Professional Cocoa Application Security, using Authorization Services.

New in 10.7, XPC is a nearly fully automatic way to create a factored app. It takes care of the process management, and through the same mechanism as app sandboxing restricts what operating system features each helper process has access to. It even takes care of message dispatch and delivery, so all your app needs to do is send a message over to a helper. XPC will start that helper if necessary, wait for a response and deliver that asynchronously back to the app.

So now we have access control within an application. If any part of the app gets compromised—say, the network handling bundle—then it’s harder for the attacker to misuse the rest of the system because he can only send particular messages with specific content out of the XPC bundle, and then only to the host app.

Mac OS X is not the first operating system to provide intra-app access control. .NET allows different assemblies in the same process to have different privileges (for example, a “write files” privilege): code in one assembly can only call out to another if the caller has the privilege it’s trying to use in the callee, or an adapter assembly asserts that the caller is OK to use the callee. The second case could be useful in, for instance, NSUserDefaults: the calling code would need the “change preferences” privilege, which is implemented by writing to a file so an adapter would need to assert that “change preferences” is OK to call “write files”.

OK, so now the good stuff: why is this important?

User-based access control is old and busted

Mac OS X—and for that matter Windows, iOS, and almost every other current operating system—is based on timesharing operating systems designed for minicomputers (in fact, on Digital Equipment Corp’s PDP series in almost every case). On those systems there are multiple users all trying to use the same computer at once, and they must not be able to trip each other up: mess with each other’s files, kill each other’s processes, that sort of thing.

Apart from a few server scenarios, that’s no longer the case. On this iMac there’s exactly one user: me. However, I have to have two user accounts (the one I’m writing this blog post in, and a member of the admin group), even though there’s only one of me. Apple (or more correctly, software deposited by Apple) has more accounts than me: 75 of them.

The fact is that there are multiple actors on the system, but mapping them on to UNIX-style user accounts doesn’t work so well. I am one actor. Apple is another. In fact, the root account is running code from three different vendors, and “I” am running code from 11 (which are themselves talking to a bunch of network servers, all of which are under the control of a different set of people again).

So it really makes sense to treat “provider of twitter.com HTTP responses” as a different actor to “code supplied as part of Accessorizer” as a different actor to “user at the console” as a different actor to “Apple”. By treating these actors as separate entities with distinct rights to parts of my computer, we get to be more clever about privilege separation and assignment of privileges to actors than we can be in a timesharing-based account scheme.

Sandboxing and XPC combine to give us a partial solution to this treatment, by giving different rights to different apps, and to different components within the same app.

The future

This is not necessarily Apple’s future: it is where I see the privilege system described above taking the operating system.

XPC (or something better) for XNU

Kernel extensions—KEXTs—are the most dangerous third-party code that exists on the platform. They run in the same privilege space as the kernel, so can grub over any writable memory in the system and make the computer do more or less anything: even actions that are forbidden to user-mode code running as root are open to KEXTs.

For the last eleventy billion years (or since 10.4 anyway), developers of KEXTs for Mac OS X have had to use the Kernel Programming Interfaces to access kernel functionality. Hopefully, well-designed KEXTs aren’t actually grubbing around in kernel memory: they’re providing I/O Kit classes with known APIs and KAUTH veto functions. That means they could be run in their own tasks, with the KPIs proxied into calls to the kernel. If a KEXT dies or tries something naughty, that’s no longer a kernel panic: the KEXT’s task dies and its device becomes unavailable.

Notice that I’m not talking about a full microkernel approach like real Mach or Minix: just a monolithic kernel with separate tasks for third-party KEXTs. Remember that “Apple’s kernel code” can be one actor and, for example, “Symantec’s kernel code” can be another.

Sandboxing and XPC for privileged processes

Currently, operating system services are protected from the outside world and each other by the 75 user accounts identified earlier. Some daemons also have custom sandbox profiles, written in the internal-use-only Scheme dialect and stored in /usr/share/sandbox.

In fact, the sandbox approach is a better match to the operating system’s intention than the multi-user approach is. There’s only one actor involved, but plenty of pieces of code that have different needs. Just as Microsoft has the SYSTEM account for Windows code, it would make sense for Apple to have a user account for operating system code that can do things Administrator users cannot do; and then a load of factored executables that can only do the things they need.

Automated system curation

This one might worry sysadmins, but just as the Chrome browser updates itself whenever it needs to, so could Mac OS X. With the pieces described above in place, every Mac would be able to identify an “Apple” actor whose responsibility is to curate the operating system’s tasks, code, and default configuration, and could let that actor get on with the job wherever it needs to.

That doesn’t obviate an “Administrator” actor, whose job is to override the system-supplied configuration, enable and configure additional services and provide access to other actors. So sysadmins wouldn’t be completely out of a job.

On the extension of code signing

One of the public releases Apple has made this WWDC week is that of Safari 5, the latest version of their web browser. Safari 5 is the first version of the software to provide a public extensions API, and there are already numerous extensions to provide custom functionality.

The documentation for developing extensions is free of charge, but you have to sign up with Apple’s Safari developer program in order to get access. One of the things advertised before signing up is the ability to manage and download extension certificates using the Apple Extension Certificate Utility. Yes, that’s correct: all Safari extensions will be signed, and the certificates will be provided by Apple.

Those of you who have provisioned apps using iTunes Connect and the Provisioning Portal will find the Safari extension certificate dance a little easier. There’s no messing around with UDIDs, for a start, and extensions can be delivered from your own web site rather than having to go through Apple’s storefront. Anyway, the point is that earlier statement: despite being entirely JavaScript, HTML and CSS (so no “executable code” in the traditional sense of machine-language libraries or programs), all Safari extensions are signed using a certificate generated by Apple.

That allows Apple to exert tight control over the extensions that users will have access to. In addition to the technology-level security features (extensions are sandboxed and, being JavaScript, run entirely in a managed environment where buffer overflows and stack smashes are nonexistent), Apple effectively have the same “kill switch” control over the market that they claim over the iPhone distribution channel. Yes, even without a store in place. If a Safari extension is discovered to be malicious, Apple could not stop the vendor from distributing it, but by revoking the extension’s certificate they could stop Safari from loading the extension on consumers’ computers.

But what possible malicious software could you run in a sandboxed browser extension? Quite a lot, as it happens. Adware and spyware are obvious examples. An extension could automatically “Like” Facebook pages when it detects that you’re logged in, post Twitter updates on your behalf, and so on. Having the kill switch in place (and making developers reveal their identities to Apple) will undoubtedly come in useful.

I think Apple are showing their hand here: not only do they expect more code to require signing in the future, but they expect to control who holds the certificates that allow signed code to reach customers. Expect more APIs added to the Mac platform to feature the same distribution requirements. Expect new APIs to replace existing APIs, again with the requirement that Apple decide who is or isn’t allowed to play.

Regaining your identity

In my last post, Losing your identity, I pointed out an annoying problem with the Sparkle update framework: if you lose your private key, you can no longer post any updates. Using code signing identities would offer a get-out, in addition to reducing the complexity associated with releasing a build. You do already sign your apps, right?

I implemented a version of Sparkle that does codesign validation, which you can grab using git or view on GitHub. After Sparkle has downloaded its update, it will test that the new application satisfies the designated requirement of the host application – in other words, that the two are the same app. It will not replace the host unless they are the same app. Note that this feature only works on 10.6, because I use the new Code Signing Services API in Security.framework.

Losing your identity

Developers make use of cryptographic signatures in multiple places in the software lifecycle. No iPad or iPhone application may be distributed without having been signed by the developer. Mac developers who sign their applications get to annoy their customers much less when they ship updates, and indeed the Sparkle framework allows developers to sign the download file for each update (which I heartily recommend you do). PackageMaker allows developers to sign installer packages. In each of these cases, the developer provides assurance that the application definitely came from their build process, and definitely hasn’t been changed since then (for wholly reasonable values of “definitely”, anyway).
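To make the Sparkle case concrete: the signature on the download file is an ordinary DSA signature you can produce and check with openssl. This is a sketch using current openssl flags; the exact digest and encoding Sparkle expects have varied between releases, so treat the options as illustrative rather than a recipe:

```shell
set -e

# Generate a DSA key pair of the kind Sparkle historically used.
openssl dsaparam -out dsaparam.pem 1024
openssl gendsa -out dsa_priv.pem dsaparam.pem
openssl dsa -in dsa_priv.pem -pubout -out dsa_pub.pem

# Sign a stand-in for the update archive with the private key...
printf 'pretend this is MyApp-1.1.zip' > update.zip
openssl dgst -sha1 -sign dsa_priv.pem -out update.sig update.zip

# ...and verify it with the public key, as the framework would on the user's Mac.
openssl dgst -sha1 -verify dsa_pub.pem -signature update.sig update.zip
```

The private key stays on the build machine; only the public half ships inside the app bundle, which is exactly why losing or leaking the private half is the disaster scenario discussed below.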

No security measure comes for free. Adding a step like code or update signing mitigates certain risks, but introduces new ones. That’s why security planning must be an iterative process – every time you make changes, you reduce some risks and create or increase others. The risks associated with cryptographic signing are that your private key could be lost or deleted, or it could be disclosed to a third party. In the case of keys associated with digital certificates, there’s also the risk that your certificate expires while you’re still relying on it (I’ve seen that happen).

Of course you can take steps to protect the key from any of those eventualities, but you cannot reduce the risk to zero (at least not while spending a finite amount of time and effort on the problem). You should certainly have a plan in place for migrating from an expired identity to a new one. Having a contingency plan for dealing with a lost or compromised private key will make your life easier if it ever happens – you can work to the plan rather than having to both manage the emergency and figure out what you’re supposed to be doing at the same time.

iPhone/iPad signing certificate compromise

This is the easiest situation to deal with. Let’s look at the consequences for each of the problems identified:

Expired Identity
No-one can submit apps to the app store on your behalf, including you. No-one can provision betas of your apps. You cannot test your app on real hardware.
Destroyed Private Key
No-one can submit apps to the app store on your behalf, including you. No-one can provision betas of your apps. You cannot test your app on real hardware.
Disclosed Private Key
Someone else can submit apps to the store and provision betas on your behalf. (They can also test their apps on their phone using your identity, though that’s hardly a significant problem.)

In the case of an expired identity, Apple should lead you through renewal using iTunes Connect. You ought to get some warning, and it’s in their interests to help you, as they’ll get another $99 out of you :-). There’s not really much of a risk here: you just need a note in your calendar to sort out the renewal.

The cases of a destroyed or disclosed private key are exceptional, and you need to contact Apple to get your old identity revoked and a new one issued. Speed is of the essence if there’s a chance your private key has been leaked, because if someone else submits an “update” on your behalf Apple will treat it as a release from you. It will be hard for you to repudiate the update (claim it isn’t yours) – after all, it’s signed with your identity. If you manage to deal with Apple quickly and get your identity revoked, the only remaining possibility is that an attacker could have used your identity to send out some malicious apps as betas. Because of the limited exposure beta apps have, there will only be a slight impact: though you’ll probably want to communicate the issue to the public to motivate users of “your” beta app to remove it from their phones.

By the way, notice that no application on the store has actually been signed by the developer who wrote it – the .ipa bundles are all re-signed by Apple before distribution.

Mac code signing certificate compromise

Again, let’s start with the consequences.

Expired Identity
You can’t sign new products. Existing releases continue to work, as Mac OS X ignores certificate expiration in code signing checks by default.
Destroyed Private Key
You can’t sign new products.
Disclosed Private Key
Someone else can sign applications that appear to be yours. Such applications will receive the same keychain and firewall access rights as your legitimate apps.

If you just switch identities without any notice, there will be some annoyances for users: the keychain, firewall and similar dialogues indicating that your application cannot be identified as a legitimate update will appear for the update where the identities switch. Unfortunately this situation cannot be distinguished from a Trojan horse version of your app being deployed (even more annoyingly, there’s no good way to inspect an application distributor’s identity, so users can’t make the distinction themselves). It would be good to make the migration seamless, so that users don’t get bugged by the update warnings and can continue to treat such warnings as suspicious when they do appear.

When you’re planning a certificate migration, you can arrange for that to happen easily. Presumably you know how long it takes for most users to update your app (where “most users” is defined to be some large fraction such that you can accept having to give the remainder additional support). At least that long before you plan to migrate identities, release an update that changes your application’s designated requirement such that it’s satisfied by both old and new identities. This update should be signed by your existing (old) identity, so that it’s recognised as an update to the older releases of the app. Once that update’s had sufficient uptake, release another update that’s satisfied by only the new identity, and signed by that new identity.
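Concretely, the transitional designated requirement can be the “or” of the two identities. Everything below is a placeholder: the identifier is hypothetical, and the H"..." terms stand for each certificate’s hash in the requirement language:

```
identifier "com.example.MyGreatApp" and
    (certificate leaf = H"0123...hash of the old certificate..." or
     certificate leaf = H"4567...hash of the new certificate...")
```

Updates signed with either identity satisfy this requirement, which is what lets the second, new-identity-only release be recognised as a legitimate update.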

If you’re faced with an unplanned identity migration, that might not be possible (or in the case of a leaked private key, might lead to an unacceptably large window of vulnerability). So you need to bake identity migration readiness into your release process from the start.

Assuming you use certificates provided by vendor CAs whose own identities are trusted by Mac OS X, you can provide a designated requirement that matches any certificate issued to you. The requirement would be of the form (warning: typed directly into MarsEdit):

identifier "com.securemacprogramming.MyGreatApp" and cert leaf[subject.CN]="Secure Mac Programming Code Signing" and cert leaf[subject.O]="Secure Mac Programming Plc." and anchor[subject.O]="Verisign, Inc." and anchor trusted

Now if one of your private keys is compromised, you coordinate with your CA to revoke the certificate and migrate to a different identity. The remaining risks are that the CA might issue a certificate with the same common name and organisation name to another entity: something you need to take up with the CA in their service-level agreement; or that Apple might choose to trust a different CA called “Verisign, Inc.”, which seems unlikely.

If you use self-signed certificates, then you need to manage this migration process yourself. You can generate a self-signed CA from which you issue signing certificates, then you can revoke individual signing certs as needed. However, you now have two problems: distributing the certificate revocation list (CRL) to customers, and protecting the private key of the top-level certificate.
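The issuing half of that arrangement is straightforward with openssl; all of the names below are placeholders. Revocation is the harder half: `openssl ca` can maintain the issuance index and generate the CRL, but you still have to get that CRL in front of your customers’ machines.

```shell
set -e

# 1. The self-signed CA. This private key is the one to guard most
#    carefully, ideally on a machine that never touches the network.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout ca.key -out ca.crt \
    -subj "/O=Example Corp/CN=Example Root CA"

# 2. A leaf signing certificate issued by that CA. These are the keys
#    you sign builds with, and the ones you revoke and reissue after a leak.
openssl req -newkey rsa:2048 -nodes \
    -keyout leaf.key -out leaf.csr \
    -subj "/O=Example Corp/CN=Example Code Signing"
openssl x509 -req -days 365 -in leaf.csr \
    -CA ca.crt -CAkey ca.key -CAcreateserial -out leaf.crt

# 3. Check the chain, as a client holding only ca.crt would.
openssl verify -CAfile ca.crt leaf.crt
```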

Package signing certificate compromise

The situation with signed Installer packages is very similar to that with signed Mac applications, except that there’s no concept of upgrading a package and thus no migration issues. When a package is installed, its certificate is used to check its identity. You just have to make sure that your identity is valid at time of signing, and that any certificate associated with a disclosed private key is revoked.

Sparkle signing key compromise

You have to be very careful that your automatic update mechanism is robust. Any other bug in an application can be fixed by deploying an update to your customers. A bug in the update mechanism might mean that customers stop receiving updates, making it very hard for you to tell them about a fix for that problem, or ship any fixes for other bugs. Sparkle doesn’t use certificates, so keys don’t have any expiration associated with them. The risks and consequences are:

Destroyed Private Key
You can’t update your application any more.
Disclosed Private Key
Someone else can release an “update” to your app, provided they can get the Sparkle instance on the customer’s computer to download it.

In the case of a disclosed private key, the conditions that need to be met to actually distribute a poisoned update are specific and hard to achieve. Either the webserver hosting your appcast or the DNS for that server must be compromised, so that the attacker can get the customer’s app to think there’s an update available that the attacker controls. All of that means that you can probably get away with a staggered key update without any (or many, depending on who’s attacking you) customers getting affected:

  • Release a new update signed by the original key. The update contains the new key pair’s public key.
  • Some time later, release another update signed by the new key.

The situation if you actually lose your private key is worse: you can’t update at all any more. You can’t generate a new key pair and start using that, because your updates won’t be accepted by the apps already out in the field. You can’t bake a “just in case” mechanism in, because Sparkle only expects a single key pair. You’ll have to find a way to contact all of your customers directly, explain the situation and get them to manually update to a new version of your app. That’s one reason I’d like to see auto-update libraries use Mac OS X code signing as their integrity-check mechanisms: so that they are as flexible as the platform on which they run.

Code snippet from NSConference presentation

Here’s the code I used to display the code signature status within the sample app at NSConference. You need to be using the 10.6 SDK, and link against Security.framework.

#import <Security/SecCode.h>

- (void)updateSignatureStatus {
    // Get a code object representing this running process.
    SecCodeRef myCode = NULL;
    OSStatus secReturn = SecCodeCopySelf(kSecCSDefaultFlags, &myCode);
    if (noErr != secReturn) {
        [statusField setIntValue: secReturn];
        return;
    }
    CFMakeCollectable(myCode);

    // Extract the designated requirement baked into the signature seal.
    SecRequirementRef designated = NULL;
    secReturn = SecCodeCopyDesignatedRequirement(myCode, kSecCSDefaultFlags, &designated);
    if (noErr != secReturn) {
        [statusField setIntValue: secReturn];
        return;
    }
    CFMakeCollectable(designated);

    // Check that the running code satisfies its own designated requirement.
    secReturn = SecCodeCheckValidity(myCode, kSecCSDefaultFlags, designated);
    [statusField setIntValue: secReturn];
}

That’s all there is to it. As discussed in my talk, the designated requirement is a requirement baked into the signature seal when the app is signed in Xcode. Application identity is verified by testing whether:

  • the signature is valid;
  • the seal is unbroken;
  • the code satisfies its designated requirement.

Of course there’s nothing to stop you testing code against any requirement you may come up with: the designated requirement just provides “I am the same application that my developer signed”.
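For example, a requirement source string like the following, compiled with SecRequirementCreateWithString, would accept any code signed by Apple regardless of which app it is:

```
anchor apple
```

Similarly, `certificate leaf[subject.O] = "Example Corp" and anchor trusted` (the organisation name being a placeholder) would match any code from one vendor whatever its identifier. The designated requirement is just the one such expression that the signer nominated as answering “am I the same app?”.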