Protecting source code

As I mentioned on the missing Live episode, one of the consequences of the Gawker hack was that the source code for their internal software was leaked onto the Internet. I doubt any of my readers would want that to happen to their code, so I’m going to share the details of how I protect my clients’ code when I’m working. Maybe some of this will work for you.

In the office, I work at a desktop iMac. This has an external Time Machine backup disk and a Dropbox for off-site storage. However, client code does _not_ go into Dropbox. Instead I keep a separate, encrypted sparse disk image for each project I’m working on. The password for each is different. As well as protecting against snooping, this helps stop cross-contamination between projects; I rarely have two such images mounted at once. Note that it’s not just source that goes into these images: build products, notes, Instruments traces, and image files all go into the encrypted containers.

Obviously that means a lot of passwords, and no, I can’t remember them all. I use a keychain. It locks automatically when not in use, and has a passphrase that’s different from my login passphrase.
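
If you’d rather script that setup than click through Keychain Access, the Security framework’s C API can create a dedicated keychain and configure its auto-lock behaviour. This is only a minimal sketch of the idea: the path, the hard-coded passphrase, and the five-minute lock interval are illustrative placeholders, not recommendations.

```c
// Build with: cc make_keychain.c -framework Security -framework CoreFoundation -o make_keychain
#include <Security/Security.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    // Illustrative values only: choose your own path, and don't hard-code a real passphrase.
    const char *path = "/Users/me/Library/Keychains/clients.keychain";
    const char *passphrase = "correct horse battery staple";

    SecKeychainRef keychain = NULL;
    OSStatus status = SecKeychainCreate(path,
                                        (UInt32)strlen(passphrase), passphrase,
                                        false, /* don't prompt; use the passphrase above */
                                        NULL,  /* default initial access rights */
                                        &keychain);
    if (status != errSecSuccess) {
        fprintf(stderr, "SecKeychainCreate failed: %d\n", (int)status);
        return 1;
    }

    // Lock the keychain when the machine sleeps and after five minutes of inactivity.
    SecKeychainSettings settings = {
        .version = SEC_KEYCHAIN_SETTINGS_VERS1,
        .lockOnSleep = true,
        .useLockInterval = true,
        .lockInterval = 300
    };
    status = SecKeychainSetSettings(keychain, &settings);
    if (status != errSecSuccess) {
        fprintf(stderr, "SecKeychainSetSettings failed: %d\n", (int)status);
        return 1;
    }

    CFRelease(keychain);
    return 0;
}
```

The same lock-on-sleep and lock-after-interval options are exposed in Keychain Access, so the code is mostly useful if you want the setup to be repeatable.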

The devices I test on are all encrypted where available (if a client needs me to test on an iPhone 3G, then I can, but it isn’t encrypted). They are passphrase-locked, and set to require the passphrase immediately. And I NEVER take them away from the desk before deleting any developer builds, unless I need to do something special like a real-world location services test.

I rarely do coding work on the laptop, but when I do, I copy the appropriate encrypted image onto it. The laptop additionally has FileVault configured, though I’m evaluating full-disk encryption options. The keychain is configured as above, additionally with a password required on wake from sleep or screensaver, and a firmware password set.

For pushing work back, most clients use GitHub or Bitbucket, which offer SSL-encrypted connections to the repositories. Personally, I have a self-run repo host available over HTTPS or SSH, but will probably move that to a GitHub-like service because life’s too short. Their security policy seems acceptable to me.

On the Mac App Store

I’ve just come off iDeveloper.TV Live with Scotty and John, where we were talking about the Mac App Store. I had some material prepared about the security side of the store that we didn’t get to – here’s a quick write-up.

There’s a lot of discussion on Twitter and the macsb mailing list, and doubtless elsewhere, about the encryption paperwork that Apple are making us fill in. It’s not Apple’s fault; it’s the U.S. Department of Commerce. You see, back in the Cold War (and, frankly, ever since) the government have been of the opinion that encryption is a weapon (because it hides data from their agents), and so are powerful computers (because they can do encryption that’s expensive to crack). So the Bureau of Industry and Security developed the Export Administration Regulations to control the flow of such heinous weapons through the commercial sector.

Section 5, part 2 covers computer equipment and software. Specific provision is made for encryption; in the documentation we find that “Items may be controlled as encryption items even if the encryption is actually performed by the operating system, an external library, a third-party product or a cryptographic processor. If an item uses encryption functionality, whether or not the code that performs the encryption is included with the item, then BIS evaluates the item based on the encryption functionality it uses.”

So there you go. If you’re exporting software from the U.S. (and you are, if you’re selling via Apple’s app store) then you need to fill in the export notification.

Other Mac App Store security things, in “oh God is it that late already” format:

  • Receipt validation. No different really from existing licensing frameworks. All you can do is make it hard to find the checks by inspecting the binary. I had an idea about a specific way to do that, but want to test it before I release it. As you’ve no doubt found, anti-cracking measures aren’t easy. (There’s a sketch of the naive starting point just after this list.)
  • Users. The user base for the MAS will be wider, and less tech-savvy, than the users existing micro-ISVs are selling to. Make sure your intent with regard to user data, particularly the consequences of your app’s workflow, is clear.
  • Similarly, be clear about the content of updates. Clearer than Apple are: “contains various fixes and improvements” WTF?
  • As we’ve found with the iOS store, it’s harder to push an update out than when you host the download yourself. Getting security right (or, pragmatically, not too wrong) the first time avoids emergency update submissions.
  • Permissions. Your app needs to run entirely as the current user, who may not be an admin. If you’re a developer, you’re probably running as an admin. Test with a non-admin account. Better still, do all of your development in a non-admin account; add yourself to the _developer group so you can still use gdb/Instruments properly.
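
On the receipt validation point above: for anyone who hasn’t seen what the basic check looks like, here is a deliberately naive sketch – and it’s exactly the kind of easily-located test the bullet says you need to hide better. It only checks that a receipt file exists inside the app bundle; real validation also has to verify the receipt’s signature and its contents against your bundle identifier and version. The helper name is made up for illustration.

```c
#include <CoreFoundation/CoreFoundation.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Naive Mac App Store receipt check (illustrative only): if no receipt is
 * present, exit with status 173, which asks the store to provision one.
 * A real check must also validate the receipt's signature and contents,
 * and should be hard to locate and patch out of the binary. */
void exit_unless_receipt_present(void)
{
    char bundlePath[PATH_MAX];
    CFURLRef bundleURL = CFBundleCopyBundleURL(CFBundleGetMainBundle());
    Boolean ok = CFURLGetFileSystemRepresentation(bundleURL, true,
                                                  (UInt8 *)bundlePath,
                                                  sizeof(bundlePath));
    CFRelease(bundleURL);
    if (!ok) {
        exit(173);
    }

    char receiptPath[PATH_MAX];
    snprintf(receiptPath, sizeof(receiptPath),
             "%s/Contents/_MASReceipt/receipt", bundlePath);
    if (access(receiptPath, R_OK) != 0) {
        exit(173);
    }
}
```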

Did the UK create a new kind of “Crypto Mule”?

It’s almost always the case that a new or changed law means that there is a new kind of criminal, because there is by definition a way to contravene the new law. However, when the law allows the real criminals to hide behind others who will take the fall, that’s probably a failure in the legislation.

The Regulation of Investigatory Powers Act 2000 may be doing just that. In Section 51, we find that a RIPA order to disclose information can be satisfied by disclosing the encryption key, if the investigating power already has the ciphertext.

Now consider this workflow. Alice Qaeda needs to send information confidentially to Bob Laden (wait: Alice and Bob aren’t always the good guys? Who knew?). She doesn’t want it intercepted by Eve Sergeant, who works for SOCA (wait: Eve isn’t always the bad guy etc.). So she prepares the information, and encrypts it using Molly Mule’s public key. She then gives the ciphertext to Michael Mule.

Michael’s job is to get from Alice’s location to Bob’s. Molly is also at Bob’s location, and can use her private key to show the plaintext to Bob. She doesn’t necessarily see the plaintext herself; she just prepares it for Bob to view.

Now Alice and Bob are notoriously difficult for Eve to track down, so she stops Michael and gets her superintendent to write a RIPA demand for the encryption key. But Michael doesn’t have the key. He’ll still probably get sent down for two years on a charge of failing to comply with the RIPA request. Even if Eve manages to locate and serve Molly with the same request, Molly just needs to lie about knowing the key and go down for two years herself.

The likelihood is that Molly and Michael will be coerced into performing their roles, just as mules are in other areas of organised crime. So has the legislation, in trying to set out government snooping permissions, created a new slave trade in crypto mules?

On how to get crypto wrong

I’ve said time and time again: don’t write your own encryption algorithm. Once you’ve chosen an existing algorithm, don’t write your own implementation.

Today I had to look at an encryption library that had been developed to store some files in an app. The library used a custom implementation of HMAC-SHA256, and a custom implementation of CBC mode. The implementations certainly looked OK, and seemed to match the descriptions in the textbooks. They also seemed to work – you could encrypt a file to get gibberish, and decrypt the gibberish to get the file back.

So the first thing I did was to crack open Xcode and replace these custom functions with CommonCrypto. CommonCrypto’s internals also look a lot like the textbook descriptions of the methods, so it would be surprising if these two approaches yielded different results.

These two approaches yielded different results. This was surprising. Specifically, I found that the custom CBC implementation would sometimes use junk memory, which the CommonCrypto version never does. Of course, the way in which this junk was used was predictable enough that the encryption routine was still reversible – but could it be that the custom implementation was leaking information about the plaintext in the ciphertext by inappropriate reuse of the buffer? Possibly, and that’s good enough for me to throw the custom implementation out. Proving whether or not this implementation is “safe” is something that a specialist cryptographer could probably do in half a day. However, as I was able to use half a day to produce something I had more confidence in, just by using a tested implementation, I decided there was no need to do that work.
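
To give a flavour of the replacement, here’s a minimal sketch of the CommonCrypto version – not the client’s actual code. It uses CCCrypt for the AES encryption (CBC is CommonCrypto’s default block mode) and CCHmac for the HMAC-SHA256 over the ciphertext; the function name and the key/IV parameters are hypothetical, and key derivation, IV generation and the decrypt-and-verify path are all elided.

```c
#include <CommonCrypto/CommonCryptor.h>
#include <CommonCrypto/CommonDigest.h>
#include <CommonCrypto/CommonHMAC.h>
#include <stdint.h>
#include <stdlib.h>

/* Encrypt-then-MAC sketch: AES-256 in CBC mode with PKCS#7 padding, then an
 * HMAC-SHA256 over the ciphertext so tampering is detectable before decryption.
 * Returns 0 on success; the caller frees *ciphertextOut. */
int encrypt_then_mac(const uint8_t *plaintext, size_t plaintextLength,
                     const uint8_t encryptionKey[kCCKeySizeAES256],
                     const uint8_t macKey[CC_SHA256_DIGEST_LENGTH],
                     const uint8_t iv[kCCBlockSizeAES128],
                     uint8_t **ciphertextOut, size_t *ciphertextLengthOut,
                     uint8_t mac[CC_SHA256_DIGEST_LENGTH])
{
    size_t bufferSize = plaintextLength + kCCBlockSizeAES128; // room for padding
    uint8_t *buffer = malloc(bufferSize);
    if (buffer == NULL) return -1;

    size_t bytesEncrypted = 0;
    CCCryptorStatus status = CCCrypt(kCCEncrypt, kCCAlgorithmAES128,
                                     kCCOptionPKCS7Padding,
                                     encryptionKey, kCCKeySizeAES256, iv,
                                     plaintext, plaintextLength,
                                     buffer, bufferSize, &bytesEncrypted);
    if (status != kCCSuccess) {
        free(buffer);
        return -1;
    }

    // Authenticate the ciphertext, so a modified file is rejected before decryption.
    CCHmac(kCCHmacAlgSHA256, macKey, CC_SHA256_DIGEST_LENGTH,
           buffer, bytesEncrypted, mac);

    *ciphertextOut = buffer;
    *ciphertextLengthOut = bytesEncrypted;
    return 0;
}
```

Whether you MAC the ciphertext or the plaintext is a design decision; the point of the exercise was simply that every line above leans on code somebody else has already tested.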

A site for discussing app security

There’s a new IT security site over at Stack Exchange. Questions and answers on designing and implementing IT security policy, and on app security, are all welcome.

I’m currently a moderator at the site, but that’s just an interim thing while the site is being bootstrapped. Obviously, if people subsequently vote for me as a permanent moderator I’ll stay in, but the converse is also true. Anyway, check out the site, ask and answer questions, and let’s make it as good a venue for app security discussion as Stack Overflow is for general programming.

On Fuzzy Aliens

I have just launched a new company, Fuzzy Aliens[*], offering application security consultancy services for smartphone app developers. This is not the FAQ list; this is the “questions I want to answer so that they don’t become frequently asked” list.

What do you offer?

The company’s services are all focussed on helping smartphone and tablet app developers discover and implement their applications’ security and privacy requirements. When planning an app, I can help with threat modelling, developer training, securing the development lifecycle, requirements elicitation, secure user experience design, and developing a testing strategy.

When it comes to implementation, you can hire me to do the security work on your iOS or Android app. That may be some background “plumbing” like storing a password or encrypting sensitive content, or it might be an end-to-end security feature. I can also do security code reviews and vulnerability analysis on existing applications.

Why would I want that?

If you’re developing an application destined for the enterprise market, you probably need it. Company I.T. departments will demand applications that conform to local policy regarding data protection, perhaps based on published standards such as the ISO 27000 family or PCI-DSS.

In the consumer market, users are getting wise to the privacy problems associated with mobile apps. Whether it’s accidentally posting the wrong thing to Facebook, or being spied on by their apps, the public don’t want to—and shouldn’t need to—deal with security issues when they’re trying to get their work done and play their games.

Can I afford that?

Having been a micro-ISV and contracted for others, I know that many apps are delivered under tight budgets by one-person companies. If all you need is a half-day together to work on a niggling problem, that’s all you need to pay for. On the other hand, I’m perfectly happy to work on longer projects, too :).

Why’s it called Fuzzy Aliens?

Well, the word “fuzz” obviously has a specific meaning in the world of secure software development, but basically the answer is that I knew I could turn that into a cute logo (still pending), and that it hadn’t been registered by a UK Ltd yet.

So how do I contact you about this?

You already have – you’re here. But you could see the company’s contact page for more specific information.

[*] More accurately, I have indicated the intent to do so. The articles of association have not yet been returned by Companies House, so for the next couple of days the blue touch paper is quietly smouldering.