Did the UK create a new kind of “Crypto Mule”?

A new or changed law almost always creates a new kind of criminal, because by definition there is now a new way to contravene the law. However, when the law allows the real criminals to hide behind others who will take the fall, that’s probably a failure in the legislation.

The Regulation of Investigatory Powers Act 2000 may be doing just that. In Section 51, we find that a RIPA order to disclose information can be satisfied by disclosing the encryption key, if the investigating power already has the ciphertext.

Now consider this workflow. Alice Qaeda needs to send information confidentially to Bob Laden (wait: Alice and Bob aren’t always the good guys? Who knew?). She doesn’t want it intercepted by Eve Sergeant, who works for SOCA (wait: Eve isn’t always the bad guy etc.). So she prepares the information, and encrypts it using Molly Mule’s public key. She then gives the ciphertext to Michael Mule.

Michael’s job is to carry the ciphertext from Alice’s location to Bob’s. Molly is also at Bob’s location, and can use her private key to show the plaintext to Bob. She doesn’t necessarily see the plaintext herself; she just prepares it for Bob to view.

Now Alice and Bob are notoriously difficult for Eve to track down, so she stops Michael and gets her superintendent to write a RIPA demand for the encryption key. But Michael doesn’t have the key. He’ll probably still get sent down for two years on a charge of failing to comply with the RIPA request. Even if Eve manages to locate and serve Molly with the same request, Molly just needs to lie about knowing the key and go down for two years herself.

The likelihood is that Molly and Michael will be coerced into performing their roles, just as mules are in other areas of organised crime. So has the legislation, in trying to set out government snooping permissions, created a new slave trade in crypto mules?

On how to get crypto wrong

I’ve said time and time again: don’t write your own encryption algorithm. Once you’ve chosen an existing algorithm, don’t write your own implementation.

Today I had to look at an encryption library that had been developed to protect some files stored by an app. The library used a custom implementation of HMAC-SHA256, and a custom implementation of CBC mode. The implementations certainly looked OK, and seemed to match the descriptions in the textbooks. They also seemed to work – you can encrypt a file to get gibberish, and decrypt the gibberish to get the file back.

So the first thing I did was to crack open Xcode and replace these custom functions with CommonCrypto. CommonCrypto’s internals also look a lot like the textbook descriptions of the methods. So it would be surprising if these two approaches yielded different results.

These two approaches yielded different results. This was surprising. Specifically, I found that the CBC implementation would sometimes use junk memory, which the CommonCrypto version never does. Of course, the way in which this junk was used was predictable enough that the encryption routine was still reversible – but could it be that the custom implementation was leaking information about the plaintext in the ciphertext by inappropriate re-use of the buffer? Possibly, and that’s good enough for me to throw the custom implementation out. Proving whether or not this implementation is “safe” is something that a specialist cryptographer could probably do in half a day. However, as I was able to use half a day to produce something I had more confidence in, just by using a tested implementation, I decided there was no need to do that work.
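For comparison, here’s a minimal sketch of what the replacement looks like when you lean on CommonCrypto rather than a hand-rolled implementation. It isn’t the code from the library in question, and the function and parameter names are my own; it assumes AES in CBC mode with PKCS#7 padding and an encrypt-then-MAC construction, and it skips key derivation and IV generation, which a real application must still get right.

    /* A sketch only: AES in CBC mode plus HMAC-SHA256 via CommonCrypto.
       This is not the code from the library discussed above; the function and
       parameter names are mine, and key derivation and IV generation are
       deliberately left out. */
    #include <CommonCrypto/CommonCryptor.h>
    #include <CommonCrypto/CommonHMAC.h>
    #include <CommonCrypto/CommonDigest.h>
    #include <stdint.h>
    #include <stddef.h>

    int encrypt_then_mac(const uint8_t *key,            /* kCCKeySizeAES256 bytes */
                         const uint8_t *macKey, size_t macKeyLen,
                         const uint8_t *iv,             /* kCCBlockSizeAES128 bytes */
                         const uint8_t *plaintext, size_t plaintextLen,
                         uint8_t *ciphertext, size_t ciphertextBufLen,
                         size_t *ciphertextLen,
                         uint8_t mac[CC_SHA256_DIGEST_LENGTH])
    {
        /* CBC is CommonCrypto's default block mode; PKCS#7 padding handles
           the final partial block. */
        CCCryptorStatus status = CCCrypt(kCCEncrypt, kCCAlgorithmAES128,
                                         kCCOptionPKCS7Padding,
                                         key, kCCKeySizeAES256, iv,
                                         plaintext, plaintextLen,
                                         ciphertext, ciphertextBufLen,
                                         ciphertextLen);
        if (status != kCCSuccess) return -1;

        /* Authenticate the ciphertext (an encrypt-then-MAC construction). */
        CCHmac(kCCHmacAlgSHA256, macKey, macKeyLen,
               ciphertext, *ciphertextLen, mac);
        return 0;
    }

Every line of modes-and-padding bookkeeping you don’t write yourself is a line that has already been reviewed and tested by someone else.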

On the extension of code signing

One of the public releases Apple has made this WWDC week is that of Safari 5, the latest version of their web browser. Safari 5 is the first version of the software to provide a public extensions API, and there are already numerous extensions to provide custom functionality.

The documentation for developing extensions is zero-cost, but you have to sign up with Apple’s Safari developer program in order to get access. One of the things we see advertised before signing up is the ability to manage and download extension certificates using the Apple Extension Certificate Utility. Yes, that’s correct, all Safari extensions will be signed, and the certificates will be provided by Apple.

For those of you who have provisioned apps using iTunes Connect and the Provisioning Portal, the Safari extension certificate dance will feel a little easier. There’s no messing around with UDIDs, for a start, and extensions can be delivered from your own web site rather than having to go through Apple’s storefront. Anyway, the point is that earlier statement – despite being entirely JavaScript, HTML and CSS (so no “executable code” in the traditional sense of machine-language libraries or programs), all Safari extensions are signed using a certificate generated by Apple.

That allows Apple to exert tight control over the extensions that users will have access to. In addition to the technology-level security features (extensions are sandboxed, and being JavaScript they run entirely in a managed environment where buffer overflows and stack smashes are nonexistent), Apple effectively have the same “kill switch” control over the market that they claim over the iPhone distribution channel. Yes, even without the store in place. If a Safari extension is discovered to be malicious, Apple could not stop the vendor from distributing it, but by revoking the extension’s certificate they could stop Safari from loading it on consumers’ computers.

But what possible malicious software could you run in a sandboxed browser extension? Quite a lot, as it happens. Adware and spyware are obvious examples. An extension could automatically “Like” Facebook pages when it detects that you’re logged in, post Twitter updates on your behalf, and so on. Having the kill switch in place (and making developers reveal their identities to Apple) will undoubtedly come in useful.

I think Apple are showing their hand here: not only do they expect more code to require signing in the future, but they expect to control who holds the certificates that allow signed code to reach customers. Expect more APIs added to the Mac platform to feature the same distribution requirements. Expect new APIs to replace existing APIs, again with the same requirement that Apple decide who is or isn’t allowed to play.

Regaining your identity

In my last post, Losing your identity, I pointed out an annoying problem with the Sparkle update framework, in that if you lose your private key you can no longer post any updates. Using code signing identities would offer a get-out, in addition to reducing the complexity associated with releasing a build. You do already sign your apps, right?

I implemented a version of Sparkle that does codesign validation, which you can grab using git or view on GitHub. After Sparkle has downloaded its update, it will test that the new application satisfies the designated requirement for the host application – in other words, that the two are the same app – and it will not replace the host unless that check passes. Note that this feature only works on 10.6, because I use the new Code Signing Services API in Security.framework.
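For the curious, the check comes down to a handful of Code Signing Services calls. What follows is a sketch rather than the fork’s actual code – the function name is mine, and it assumes you already have a CFURL pointing at the downloaded bundle:

    /* Sketch of validating a downloaded update against the running (host)
       application's designated requirement, using the 10.6 Code Signing
       Services API. Not the actual code from the Sparkle fork. */
    #include <CoreFoundation/CoreFoundation.h>
    #include <Security/Security.h>

    static Boolean updateMatchesHost(CFURLRef updateBundleURL)
    {
        SecCodeRef host = NULL;
        SecRequirementRef designated = NULL;
        SecStaticCodeRef update = NULL;
        Boolean matches = false;

        /* The running (host) application... */
        if (SecCodeCopySelf(kSecCSDefaultFlags, &host) != errSecSuccess)
            goto out;

        /* ...and its designated requirement ("code that counts as this app").
           A SecCodeRef may be used where a SecStaticCodeRef is expected. */
        if (SecCodeCopyDesignatedRequirement((SecStaticCodeRef)host,
                                             kSecCSDefaultFlags,
                                             &designated) != errSecSuccess)
            goto out;

        /* The downloaded update, as code on disk. */
        if (SecStaticCodeCreateWithPath(updateBundleURL, kSecCSDefaultFlags,
                                        &update) != errSecSuccess)
            goto out;

        /* Validly signed, and satisfies the host's designated requirement? */
        matches = (SecStaticCodeCheckValidity(update, kSecCSDefaultFlags,
                                              designated) == errSecSuccess);
    out:
        if (host) CFRelease(host);
        if (designated) CFRelease(designated);
        if (update) CFRelease(update);
        return matches;
    }

Because the requirement comes from the host itself via SecCodeCopyDesignatedRequirement, nothing about the signing identity needs to be hard-coded into the updater.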

Losing your identity

Developers make use of cryptographic signatures in multiple places in the software lifecycle. No iPad or iPhone application may be distributed without having been signed by the developer. Mac developers who sign their applications get to annoy their customers much less when they ship updates, and indeed the Sparkle framework allows developers to sign the download file for each update (which I heartily recommend you do). PackageMaker allows developers to sign installer packages. In each of these cases, the developer provides assurance that the application definitely came from their build process, and definitely hasn’t been changed since then (for wholly reasonable values of “definitely”, anyway).

No security measure comes for free. Adding a step like code or update signing mitigates certain risks, but introduces new ones. That’s why security planning must be an iterative process – every time you make changes, you reduce some risks and create or increase others. The risks associated with cryptographic signing are that your private key could be lost or deleted, or it could be disclosed to a third party. In the case of keys associated with digital certificates, there’s also the risk that your certificate expires while you’re still relying on it (I’ve seen that happen).

Of course you can take steps to protect the key from any of those eventualities, but you cannot reduce the risk to zero (at least not while spending a finite amount of time and effort on the problem). You should certainly have a plan in place for migrating from an expired identity to a new one. Having a contingency plan for dealing with a lost or compromised private key will make your life easier if it ever happens – you can work to the plan rather than having to both manage the emergency and figure out what you’re supposed to be doing at the same time.

iPhone/iPad signing certificate compromise

This is the easiest situation to deal with. Let’s look at the consequences for each of the problems identified:

Expired Identity
No-one can submit apps to the app store on your behalf, including you. No-one can provision betas of your apps. You cannot test your app on real hardware.
Destroyed Private Key
No-one can submit apps to the app store on your behalf, including you. No-one can provision betas of your apps. You cannot test your app on real hardware.
Disclosed Private Key
Someone else can submit apps to the store and provision betas on your behalf. (They can also test their apps on their phone using your identity, though that’s hardly a significant problem.)

In the case of an expired identity, Apple should lead you through renewal instructions using iTunes Connect. You ought to get some warning, and it’s in their interests to help you as they’ll get another $99 out of you :-). There’s not really much of a risk here; you just need a note in your calendar to sort out renewal.

The cases of a destroyed or disclosed private key are exceptional, and you need to contact Apple to get your old identity revoked and a new one issued. Speed is of the essence if there’s a chance your private key has been leaked, because if someone else submits an “update” on your behalf Apple will treat it as a release from you. It will be hard for you to repudiate the update (claim it isn’t yours) – after all, it’s signed with your identity. If you manage to deal with Apple quickly and get your identity revoked, the only remaining possibility is that an attacker could have used your identity to send out some malicious apps as betas. Because of the limited exposure beta apps have, there will only be a slight impact, though you’ll probably want to communicate the issue to the public to motivate users of “your” beta app to remove it from their phones.

By the way, notice that no application on the store has actually been signed by the developer who wrote it – the .ipa bundles are all re-signed by Apple before distribution.

Mac code signing certificate compromise

Again, let’s start with the consequences.

Expired Identity
You can’t sign new products. Existing releases continue to work, as Mac OS X ignores certificate expiration in code signing checks by default.
Destroyed Private Key
You can’t sign new products.
Disclosed Private Key
Someone else can sign applications that appear to be yours. Such applications will receive the same keychain and firewall access rights as your legitimate apps.

If you just switch identities without any notice, there will be some annoyances for users – the keychain, firewall and similar dialogues indicating that your application cannot be identified as a legitimate update will appear for the release where the identities switch. Unfortunately this situation cannot be distinguished from a Trojan horse version of your app being deployed (even more annoyingly, there’s no good way for users to inspect an application distributor’s identity, so they can’t make the distinction themselves). It would be better to make the migration seamless, so that users aren’t bugged by update warnings and can continue to treat such warnings as suspicious when they do appear.
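To be clear, the signing information is reachable programmatically – the sketch below, with a function name of my own choosing and assuming a CFURL to the application bundle, prints the code’s identifier and the common name of its signing certificate – but that’s a tool for developers and power users, not something ordinary users will do, which is rather the point.

    /* Sketch: dump an application's code identifier and the common name of
       its signing certificate. Developer/power-user territory only. */
    #include <CoreFoundation/CoreFoundation.h>
    #include <Security/Security.h>

    static void dumpSigner(CFURLRef appURL)
    {
        SecStaticCodeRef code = NULL;
        CFDictionaryRef info = NULL;

        if (SecStaticCodeCreateWithPath(appURL, kSecCSDefaultFlags, &code) != errSecSuccess)
            return;

        /* kSecCSSigningInformation asks for the certificate chain as well as
           the basic signing details. */
        if (SecCodeCopySigningInformation(code, kSecCSSigningInformation, &info) == errSecSuccess) {
            CFStringRef identifier = (CFStringRef)CFDictionaryGetValue(info, kSecCodeInfoIdentifier);
            CFArrayRef certs = (CFArrayRef)CFDictionaryGetValue(info, kSecCodeInfoCertificates);
            if (identifier) CFShow(identifier);          /* usually the bundle identifier */
            if (certs && CFArrayGetCount(certs) > 0) {
                SecCertificateRef leaf = (SecCertificateRef)CFArrayGetValueAtIndex(certs, 0);
                CFStringRef commonName = NULL;
                if (SecCertificateCopyCommonName(leaf, &commonName) == errSecSuccess && commonName) {
                    CFShow(commonName);                  /* who signed it */
                    CFRelease(commonName);
                }
            }
            CFRelease(info);
        }
        CFRelease(code);
    }

None of that helps an ordinary user decide whether to trust an update dialogue, so the aim remains to make identity migrations seamless.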

When you’re planning a certificate migration, you can arrange for that to happen easily. Presumably you know how long it takes for most users to update your app (where “most users” is defined to be some large fraction such that you can accept having to give the remainder additional support). At least that long before you plan to migrate identities, release an update that changes your application’s designated requirement such that it’s satisfied by both old and new identities. This update should be signed by your existing (old) identity, so that it’s recognised as an update to the older releases of the app. Once that update’s had sufficient uptake, release another update that’s satisfied by only the new identity, and signed by that new identity.
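To make that transitional update concrete, here’s a sketch of a designated requirement satisfied by either identity, compiled with Security.framework so you can catch syntax errors before baking it into a release. The bundle identifier and certificate common names are hypothetical placeholders, not values for any real product:

    /* Sketch of a transitional designated requirement accepting either the
       old or the new signing identity. All names here are hypothetical. */
    #include <CoreFoundation/CoreFoundation.h>
    #include <Security/Security.h>

    static SecRequirementRef copyTransitionalRequirement(void)
    {
        CFStringRef text = CFSTR(
            "identifier \"com.example.MyGreatApp\" and "
            "(cert leaf[subject.CN] = \"Old Example Signing Identity\" or "
            "cert leaf[subject.CN] = \"New Example Signing Identity\") and "
            "anchor trusted");

        SecRequirementRef requirement = NULL;
        /* Compiling the text verifies the requirement's syntax; the same text
           can then be supplied to codesign when signing the transitional
           release. */
        if (SecRequirementCreateWithString(text, kSecCSDefaultFlags, &requirement) != errSecSuccess)
            return NULL;
        return requirement;   /* caller releases */
    }

Matching on the certificates’ common names assumes the old and new identities are distinguishable that way; you could equally match explicit certificate hashes, at the cost of a longer requirement.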

If you’re faced with an unplanned identity migration, that might not be possible (or in the case of a leaked private key, might lead to an unacceptably large window of vulnerability). So you need to bake identity migration readiness into your release process from the start.

Assuming you use certificates provided by vendor CAs whose own identities are trusted by Mac OS X, you can provide a designated requirement that matches any certificate issued to you. The requirement would be of the form (warning: typed directly into MarsEdit):

identifier "com.securemacprogramming.MyGreatApp" and cert leaf[subject.CN]="Secure Mac Programming Code Signing" and cert leaf[subject.O]="Secure Mac Programming Plc." and anchor[subject.O]="Verisign, Inc." and anchor trusted

Now if one of your private keys is compromised, you coordinate with your CA to revoke the certificate and migrate to a different identity. The remaining risks are that the CA might issue a certificate with the same common name and organisation name to another entity (something to take up with the CA in their service-level agreement), or that Apple might choose to trust a different CA called “Verisign, Inc.”, which seems unlikely.

If you use self-signed certificates, then you need to manage this migration process yourself. You can generate a self-signed CA from which you issue signing certificates, then you can revoke individual signing certs as needed. However, you now have two problems: distributing the certificate revocation list (CRL) to customers, and protecting the private key of the top-level certificate.

Package signing certificate compromise

The situation with signed Installer packages is very similar to that with signed Mac applications, except that there’s no concept of upgrading a package and thus no migration issues. When a package is installed, its certificate is used to check its identity. You just have to make sure that your identity is valid at time of signing, and that any certificate associated with a disclosed private key is revoked.

Sparkle signing key compromise

You have to be very careful that your automatic update mechanism is robust. Any other bug in an application can be fixed by deploying an update to your customers. A bug in the update mechanism might mean that customers stop receiving updates, making it very hard for you to tell them about a fix for that problem, or ship any fixes for other bugs. Sparkle doesn’t use certificates, so keys don’t have any expiration associated with them. The risks and consequences are:

Destroyed Private Key
You can’t update your application any more.
Disclosed Private Key
Someone else can release an “update” to your app, provided they can get the Sparkle instance on the customer’s computer to download it.

In the case of a disclosed private key, the conditions that need to be met to actually distribute a poisoned update are specific and hard to achieve. Either the webserver hosting your appcast or the DNS for that server must be compromised, so that the attacker can get the customer’s app to think there’s an update available that the attacker controls. All of that means that you can probably get away with a staggered key update without any (or many, depending on who’s attacking you) customers getting affected:

  • Release a new update signed by the original key. The update contains the new key pair’s public key.
  • Some time later, release another update signed by the new key.

The situation if you actually lose your private key is worse: you can’t update at all any more. You can’t generate a new key pair and start using that, because your updates won’t be accepted by the apps already out in the field. You can’t bake a “just in case” mechanism in, because Sparkle only expects a single key pair. You’ll have to find a way to contact all of your customers directly, explain the situation and get them to manually update to a new version of your app. That’s one reason I’d like to see auto-update libraries use Mac OS X code signing as their integrity-check mechanisms: so that they are as flexible as the platform on which they run.