On the broken(?) Mac App Store

A day after the Mac App Store was launched, people are reporting that it has been cracked. There are two separate stories here: a vapourware circumvention of the FairPlay DRM used to generate the receipts, and a report that certain apps aren’t validating their receipts properly. We can ignore the first case for the moment: it’s important, and if it’s true then Apple needs to fix it (and co-ordinate updating the validation code with us third-party developers). But for now, it’s more important that developers implement the protections that are already in place in their applications – it’s those applications that are supposed to be protected.

Let’s skip, for the moment, the questions of whether DRM or anti-cracking mechanisms are ethically right or worthwhile, and of how much effort you want to put into them. Apple have done most of the legwork in providing a vendor-signed receipt that’s part of your signed app bundle. What you need to do is:

  • Check whether you have a receipt
  • Check whether Apple signed the receipt you have
  • Check whether the receipt is valid for your product
  • Check whether the receipt is valid for this version of your product
  • Check whether the receipt is valid for this computer

That’s it in a nutshell. Of course, some nutshells surround very big and complex nuts, and that’s true in this case:

  • There’s some good example code for receipt validation at github/roddi/ValidateStoreReceipt; if you’re going to use it, don’t just paste it in wholesale. If everyone uses the same code, it’s super-easy for someone to detect and strip that code from each instance.
  • Check at runtime, as well as at startup. If you just check at startup, then all an attacker needs to do is patch main() to jump straight into NSApplicationMain() and your app runs for free.
  • Code obfuscation is not a very effective tool. Having worked in anti-virus, I know it’s much easier to classify code based on what it does than on what it is, and it’s quite easy to find the code that opens the receipt file or calls exit(173) – there’s a minimal sketch of such a check after this list. That said, some of the commercial obfuscation companies offer a guaranteed service, so you can still protect your revenue after the app gets cracked.
  • Update: I have been advised privately, and seen in a blog post, that people are recommending hard-coding their app bundle IDs and version numbers into the binary rather than reading them from Info.plist, because that file can be edited. Well, so can the app binary…and in either case you’d need to re-sign the product with a valid certificate to continue, because Apple have used the kill flag:
    heimdall:~ leeg$ codesign -dvvvv /Applications/Twitter.app/
    Executable=/Applications/Twitter.app/Contents/MacOS/Twitter
    Identifier=com.twitter.twitter-mac
    Format=bundle with Mach-O universal (i386 x86_64)
    CodeDirectory v=20100 size=12452 flags=0x200(kill) hashes=616+3 location=embedded
    CDHash=8e0736639d79a108a5a1ebe89f928d1da0d49d94
    Signature size=4169
    Authority=Apple Mac OS Application Signing
    Authority=Apple Worldwide Developer Relations Certification Authority
    Authority=Apple Root CA
    Info.plist entries=21
    Sealed Resources rules=4 files=78
    Internal requirements count=2 size=344
    

    Changing a hard-coded string in a binary file is not difficult. You can of course obfuscate the string, but the motivated cracker still finds the point where the comparison is made (particularly easily if you use NSStrings). Really, how far you want to go depends on how much you’re willing to spend.
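
To make the first check on the list above concrete, here’s a minimal sketch – mine, not Apple’s sample code and not the ValidateStoreReceipt project – of the “do I even have a receipt?” test, assuming the usual receipt location inside the app bundle and the store’s convention of exiting with status 173 when validation fails at launch. Everything interesting still remains to be done after this: verifying Apple’s signature over the receipt, and comparing the bundle identifier, version and machine GUID it contains.

    #import <Foundation/Foundation.h>
    #include <stdlib.h>

    // Minimal sketch: only the "is there a receipt at all?" check.
    // Exiting with status 173 at launch asks the store machinery to obtain a
    // receipt; it is NOT a substitute for checking the signature, bundle ID,
    // version and machine GUID.
    static void EnsureReceiptExists(void)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSString *receiptPath = [[[NSBundle mainBundle] bundlePath]
            stringByAppendingPathComponent:@"Contents/_MASReceipt/receipt"];
        BOOL present = [[NSFileManager defaultManager] fileExistsAtPath:receiptPath];
        [pool drain];
        if (!present) {
            exit(173);
        }
    }

Call something like this early in launch, then repeat the full set of checks from a few different places at runtime, for the reason given in the second bullet above.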

Of course, Fuzzy Aliens Ltd has already been implementing receipt validation for customers, so if this is too hard for you or you don’t have the time… ;-)

A last word on publicising receipt-validation vulnerabilities

You and I both make our living by selling software, or by selling services to people who sell software. Crowing on the interwebs about how this application or that application doesn’t validate its receipts properly is not cool, because you are shitting on your own doorstep. There is no public benefit to disclosing that class of vulnerability, because DRM is not a user security feature. Don’t do that. Send the developer a private message explaining your findings. Give them a chance to put extra effort into protecting their product, if that’s what they want to do.

Protecting source code

As I mentioned on the missing iDeveloper.tv Live episode, one of the consequences of the Gawker hack was that the source code for their internal software was leaked onto the Internet. I doubt any of my readers would want that to happen to their code, so I’m going to share the details of how I protect my clients’ code when I’m working. Maybe some of this will work for you.

In the office, I work at a desktop iMac. This has an external Time Machine backup disk and a Dropbox for off-site storage. However, client code does _not_ go into Dropbox. Instead I keep a separate, encrypted sparse disk image for each project I’m working on. The password for each is different. As well as protecting against snooping, this helps stop cross-contamination. I rarely have two such images mounted at once. Note that it’s not just source that goes into these images: build products, notes, Instruments traces, and images all go into the encrypted containers.

Obviously that means a lot of passwords, and no I can’t remember them all. I use a keychain. It locks automatically when not in use, and has a passphrase that’s different from my login passphrase.

The devices I test on are all encrypted where available (if a client needs me to test on an iPhone 3G, then I can, but it isn’t encrypted). They are passphrase locked, set to require passphrase immediately. And I NEVER take them away from the desk before deleting any developer builds, unless I need to do something special like a real-world location services test.

I rarely do coding work on the laptop, but when I do I copy the appropriate encrypted image onto it. The laptop additionally has FileVault configured, though I’m evaluating full-disk encryption options. The keychain is configured as above, plus a password required on wake from sleep or screensaver, and a firmware password.

For pushing work back to clients, most use github or bitbucket, which offer SSL-encrypted connections to their repositories. Personally, I have a self-run repo host available over HTTPS or SSH, but will probably move that to a github-like service because life’s too short. Their security policy seems acceptable to me.

On the Mac App Store

I’ve just come off iDeveloper.TV Live with Scotty and John, where we were talking about the Mac App Store. I had some material prepared about the security side of the store that we didn’t get on to – here’s a quick write-up.

There’s a lot of discussion on twitter and the macsb mailing list, and doubtless elsewhere, about the encryption paperwork that Apple are making us fill in. It’s not Apple’s fault; it’s the U.S. Department of Commerce’s. You see, back in the cold war (and, frankly, ever since) the government have been of the opinion that encryption is a weapon (because it hides data from their agents) and so are powerful computers (because they can do encryption that’s expensive to crack). So the Bureau of Industry and Security developed the Export Administration Regulations to control the flow of such heinous weapons through the commercial sector.

Section 5, part 2 covers computer equipment and software. Specific provision is made for encryption; in the documentation we find that “Items may be controlled as encryption items even if the encryption is actually performed by the operating system, an external library, a third-party product or a cryptographic processor. If an item uses encryption functionality, whether or not the code that performs the encryption is included with the item, then BIS evaluates the item based on the encryption functionality it uses.”

So there you go. If you’re exporting software from the U.S. (and you are, if you’re selling via Apple’s app store) then you need to fill in the export notification.

Other Mac App Store security things, in “oh God is it that late already” format:

  • Receipt validation. No different really from existing licensing frameworks. All you can do is make it hard to find and strip the tests from the binary (one well-known approach is sketched just after this list). I had an idea about a specific way to do that, but want to test it before I release it. As you’ve no doubt found, anti-cracking measures aren’t easy.
  • Users. The user base for the MAS will be wider, and less tech-savvy, than the users existing micro-ISVs are selling to. Make sure your intentions with regard to user data, particularly the consequences of your app’s workflow, are clear.
  • Similarly, be clear about the content of updates. Clearer than Apple are: “contains various fixes and improvements” WTF?
  • As we’ve found with the iOS store, it’s harder to push an update out than when you host the download yourself. Getting security right (or, pragmatically, not too wrong) the first time avoids emergency update submissions.
  • Permissions. Your app needs to run entirely as the current user, who may not be an admin. If you’re a developer, you’re probably running as an admin. Test with a non-admin account. Better, do all of your development in a non-admin account. Add yourself to the _developer group so you can still use gdb/Instruments properly.
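
On the receipt-validation bullet above: one well-known way to make the tests harder to find – a general technique, not the specific idea I mentioned wanting to test first – is to force the check to be inlined at every call site instead of living in one conveniently patchable function, then scatter calls through code paths a paying user actually exercises. A rough sketch, with illustrative names and assuming the usual receipt location inside the app bundle:

    #import <Foundation/Foundation.h>
    #include <stdlib.h>

    // always_inline pastes a copy of the check into each caller, so a cracker
    // has to find and defeat every copy rather than patching one function.
    // This copy only confirms a receipt file is present; in a real app the
    // signature and receipt-content checks get inlined the same way.
    static inline __attribute__((always_inline)) void CheckReceiptInline(void)
    {
        NSString *receiptPath = [[[NSBundle mainBundle] bundlePath]
            stringByAppendingPathComponent:@"Contents/_MASReceipt/receipt"];
        if (![[NSFileManager defaultManager] fileExistsAtPath:receiptPath]) {
            exit(173);
        }
    }

    // Then sprinkle it through real work - opening documents, exporting,
    // changing preferences - not just at launch:
    static void ExportDocument(void)
    {
        CheckReceiptInline();   // one of several scattered, inlined copies
        // ...the actual export work...
    }

The cost is a little extra binary size; the benefit is that there’s no single test for a cracker to patch out.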

Did the UK create a new kind of “Crypto Mule”?

It’s almost always the case that a new or changed law means that there is a new kind of criminal, because there is by definition a way to contravene the new law. However, when the law allows the real criminals to hide behind others who will take the fall, that’s probably a failure in the legislation.

The Regulation of Investigatory Powers Act 2000 may be doing just that. In Section 51, we find that a RIPA order to disclose information can be satisfied by disclosing the encryption key, if the investigating power already has the ciphertext.

Now consider this workflow. Alice Qaeda needs to send information confidentially to Bob Laden (wait: Alice and Bob aren’t always the good guys? Who knew?). She doesn’t want it intercepted by Eve Sergeant, who works for SOCA (wait: Eve isn’t always the bad guy etc.). So she prepares the information, and encrypts it using Molly Mule’s public key. She then gives the ciphertext to Michael Mule.

Michael’s job is to get from Alice’s location to Bob’s. Molly is also at Bob’s location, and can use her private key to show the plaintext to Bob. She doesn’t necessarily see the plaintext herself; she just prepares it for Bob to view.

Now Alice and Bob are notoriously difficult for Eve to track down, so she stops Michael and gets her superintendent to write a RIPA demand for the encryption key. But Michael doesn’t have the key. He’ll still probably get sent down for two years on a charge of failing to comply with the RIPA request. Even if Eve manages to locate and serve Molly with the same request, Molly just needs to lie about knowing the key and go down for two years herself.

The likelihood is that Molly and Michael will be coerced into performing their roles, just as mules are in other areas of organised crime. So has the legislation, in trying to set out government snooping permissions, created a new slave trade in crypto mules?

On how to get crypto wrong

I’ve said time and time again: don’t write your own encryption algorithm. Once you’ve chosen an existing algorithm, don’t write your own implementation.

Today I had to look at an encryption library that had been developed to store some files in an app. The library used a custom implementation of SHA256-HMAC, and a custom implementation of CBC mode. The implementations certainly looked OK, and seemed to match the descriptions in the textbooks. They also seem to work – you can encrypt a file to get gibberish, and decrypt the gibberish to get the file back.

So the first thing I did was to crack open Xcode and replace these custom functions with CommonCrypto. CommonCrypto’s internals also look a lot like the textbook descriptions of the methods. So it would be surprising if these two approaches yielded different results.

These two approaches yielded different results. This was surprising. Specifically, I found that the CBC implementation would sometimes use junk memory, which the CommonCrypto version never does. Of course, the way in which this junk was used was predictable enough that the encryption routine was still reversible – but could it be that the custom implementation was leaking information about the plaintext into the ciphertext by inappropriate re-use of the buffer? Possibly, and that’s good enough for me to throw the custom implementation out. Proving whether or not this implementation is “safe” is something that a specialist cryptographer could probably do in half a day. However, as I was able to spend that half-day producing something I had more confidence in, just by using a tested implementation, I decided there was no need to do that work.
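
For anyone facing the same choice, this is roughly the shape the replacement takes – a sketch rather than the client’s actual code, with key and IV management left out, and the parameters (AES-256 in CBC mode with PKCS#7 padding, encrypt-then-MAC with HMAC-SHA256) chosen for illustration. The point is that one call to CCCrypt() replaces the hand-rolled CBC loop, and one call to CCHmac() replaces the hand-rolled HMAC:

    #include <stddef.h>
    #include <stdint.h>
    #include <CommonCrypto/CommonCryptor.h>
    #include <CommonCrypto/CommonDigest.h>
    #include <CommonCrypto/CommonHMAC.h>

    // One-shot AES-256-CBC encryption, then HMAC-SHA256 over the ciphertext.
    // Both keys are assumed to be 32 bytes and the IV 16 bytes; deriving and
    // storing them safely is a separate problem.
    static int EncryptThenMAC(const uint8_t *encKey, const uint8_t *macKey,
                              const uint8_t *iv,
                              const void *plaintext, size_t plaintextLength,
                              uint8_t *ciphertext, size_t ciphertextCapacity,
                              size_t *ciphertextLength,
                              uint8_t mac[CC_SHA256_DIGEST_LENGTH])
    {
        CCCryptorStatus status = CCCrypt(kCCEncrypt, kCCAlgorithmAES128,
                                         kCCOptionPKCS7Padding,
                                         encKey, kCCKeySizeAES256, iv,
                                         plaintext, plaintextLength,
                                         ciphertext, ciphertextCapacity,
                                         ciphertextLength);
        if (status != kCCSuccess) return -1;

        CCHmac(kCCHmacAlgSHA256, macKey, kCCKeySizeAES256,
               ciphertext, *ciphertextLength, mac);
        return 0;
    }

Decryption is the mirror image: recompute and compare the MAC first, then call CCCrypt() with kCCDecrypt. None of the buffer management is mine to get wrong, which is rather the point.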

A site for discussing app security

There’s a new IT security site over at Stack Exchange. Questions and answers on designing and implementing IT security policy, and on app security, are all welcome.

I’m currently a moderator at the site, but that’s just an interim thing while the site is being bootstrapped. Obviously, if people subsequently vote for me as a permanent moderator I’ll stay in, but the converse is also true. Anyway, check out the site, ask and answer questions, let’s make it as good a venue for app security discussion as stackoverflow.com is for general programming.

On Fuzzy Aliens

I have just launched a new company, Fuzzy Aliens[*], offering application security consultancy services for smartphone app developers. This is not the FAQ list; this is the “questions I want to answer so that they don’t become frequently asked” list.

What do you offer?

The company’s services are all focussed on helping smartphone and tablet app developers discover and implement their applications’ security and privacy requirements. When planning an app, I can help with threat modelling, developer training, securing the development lifecycle, requirements elicitation, secure user experience design, and developing a testing strategy.

When it comes to implementation, you can hire me to do the security work on your iOS or Android app. That may be some background “plumbing” like storing a password or encrypting sensitive content, or it might be an end-to-end security feature. I can also do security code reviews and vulnerability analysis on existing applications.

Why would I want that?

If you’re developing an application destined for the enterprise market, you probably need it. Company I.T. departments will demand applications that conform to local policy regarding data protection, perhaps based on published standards such as the ISO 27000 family or PCI-DSS.

In the consumer market, users are getting wise to the privacy problems associated with mobile apps. Whether it’s accidentally posting the wrong thing to facebook, or being spied on by their apps, the public don’t want to—and shouldn’t need to—deal with security issues when they’re trying to get their work done and play their games.

Can I afford that?

Having been a Micro-ISV and contracted for others, I know that many apps are delivered under tight budgets by one-person companies. If all you need is a half day together to work on a niggling problem, that’s all you need to pay for. On the other hand I’m perfectly happy to work on longer projects, too :).

Why’s it called Fuzzy Aliens?

Well, the word “fuzz” obviously has a specific meaning in the world of secure software development, but basically the answer is that I knew I could turn that into a cute logo (still pending), and that it hadn’t been registered by a UK Ltd yet.

So how do I contact you about this?

You already have – you’re here. But you could see the company’s contact page for more specific information.


[*] More accurately, I have indicated the intent to do so. The articles of association have not yet been returned by Companies House, so for the next couple of days the blue touch paper is quietly smouldering.

On secrets

Secrets are hard, especially in the digital domain, but we can see the same problems in other environments too. Let’s take a look at a couple of historical examples.

It used to be the case that all of Britain’s diplomatic traffic was safe from snooping. Why? The information was all conveyed over the telegraph system, and Britain controlled the telegraph system. Customer states could buy access to send and receive their own signals over the network. This of course meant that Britain could snoop on their signals. Where the cables went through neutral (or supposedly neutral) countries, said countries and their allies (who were not necessarily Britain’s allies) could also snoop on the traffic. Wait, didn’t I say this was a safe channel?

Even were the telegraph system snoop-proof, the telegraph operators might not be. The recipients of any message might not be. Come to that, neither might the senders. Because the British foreign office knew the communications to be secret, everyone else knew that this was where to look for their secrets.

Conversely, it has never been the case that how to make a nuclear weapon is a well-kept secret. It’s trivial for anyone to get the plans for a nuke, and if you need parts, just look at the United States export restrictions documents and order those parts from Germany. So how come no-one has been nuked in 65 years? How come Al Qaeda aren’t busy nuking the western world, if they know how to do it? Because while nuking is easy to know about, it’s hard to do. Acquiring the fuel is hard enough for most states, never mind small terrorist cells. And then getting the fuel into a bomb and the bomb to a target without incident is so hard that it’s not worth doing.

Conclusion? Often, making things secret isn’t sufficient. Secrecy is fleeting. Making it hard to use a preferably-secret fact can be more effective than ramping up the secrecy.

On utilities

When I worked on an antivirus application, we used to have a joke in our team that we’d choose which one of us would accept the Apple Design Award for our product. Not that we weren’t striving for ADA-quality work; we just knew that Apple would never suffer an anti-virus app to gain that sort of recognition. It’s not what they want customers to associate with the platform.

This doesn’t just apply to AV: any utility app falls into the same situation. The reason is that utilities exist to make up for shortcomings in the computing experience.

No-one wants to use a utility. You don’t wake up full of joy at the prospect of defragmenting your hard drive; you wake up hoping to write a great novel. Or put together that movie from your holiday. Or write a killer iPhone app. You defragment your hard drive because it needs doing; because the computer didn’t take care of it for you automatically. Writing your novel has to wait while you twiddle with the internal workings of something called a filesystem.

Similarly, no-one wants to use anti-virus. It’s just that no-one wants to use a virus either. The computer lets you down by making it possible to run viruses, just like it lets you down by having a fragmented filesystem. And you have to suffer this let-down by running a utility app.

Any utility app – even a very well-written one – is symptomatic of a let-down in the computing experience. That’s why you don’t find utilities on the iPhone app store – and the conditions of the Mac app store will limit the availability of utilities there, too. Apple have no interest in making it easy for users to find the let-downs in the computing experience. (Readers with long memories will remember that even some of the built-in utilities on OS X were, for a long time, part of an optional package.)

Hitherto, most security features have been utilities – because the platform security has been a let-down. Anti-virus: let-down. Mail filters: let-down. Encryption: let-down. The way to get your application noticed is not to make a utility to address the let-down: it’s to design the let-down out of your application. Create something that lets users do what they want, without the compromises that lead to needing utilities. In other words, design the security into the application experience.

On phone support scams and fake AV

A couple of weeks ago, I posted on Twitter about a new scam:

Heard about someone who was phoned by a man “from Windows” who engineered his way into remote access to the mark’s computer.

Fast-forward to now, and the same story has finally been picked up by the security vendors and the mainstream media. This means it’s probably time to go into more depth.

I heard a first-hand account of the scam. The victim is the kind of person who shouldn’t be expected to understand IT security – a long distance lorry driver who uses his computer for browsing, e-mail, and that sort of thing. As he tells it, the person called, saying they were from Windows and that they had discovered his computer was infected. He was given instructions to give the caller remote access to help clean up the computer.

With remote access, the caller was able to describe some of the problems the victim was having with his computer, while taking control to “fix them”. The caller eventually discovered that the victim’s anti-virus was out of date, and that if he gave the caller his payment information he could get new software for £109. This is when the victim hung up; however, his computer has not booted properly since then.

I think my audience here is probably tech-savvy enough not to need warning about scams like this, and to understand that the real damage was done even before any discussion of payments was made (hint: browser form-auto-fill data). It’s not the scam itself I want to focus on, but our reaction.

Some people I have told this story to in real life (it does happen) have rolled their eyes, and said something along the lines of “well of course the users are the weakest link” in a knowing way. If that’s true, why rely on the users to make all the security decisions? Why leave it to them to decide what’s legitimate and what’s scammy, as was the case here? Why is the solution to any problem to shovel another bucketload of computer knowledge on them and hope that it sticks, as Sophos and the BBC have tried in the articles above?

No. This is not a solution to anything. No matter how loudly you shout about how that isn’t how Microsoft does business, someone who says he is from Microsoft will phone your users up and tell them that it is.

This is the same problem facing anti-virus vendors trying to convince us not to get fooled by FakeAV scams. Vendor A tells us to buy their product instead of Vendor B’s, because it’s better. So, is Vendor A the FakeAV pedlar, or B? Or is it both? Or neither? You can’t tell.

It may seem that this is a problem that cannot be solved in technology, that it relies on hard-wired behaviour of us bald apes. I don’t think that’s so. I think that it would be possible to change the way we, legitimate software vendors, interact with our users, and the way they interact with our software, such that an offline scam like this would never come to pass. A full discussion would fill a whole whitepaper that I haven’t written yet. However, to take the most extreme point from it, the one I know you’re going to loathe, what if our home computers were managed remotely by the vendors? Do most users really need complete BIOS and kernel level access to their kit? Really?

Look for the whitepaper sometime in the new year.