More security processes go wrong

I just signed a piece of card so that I could take a picture of it, clean it up and attach it to a document, pretending that I’d printed the document out, signed it, and scanned it back in. I do that about once a year (it was more frequent when I ran my own business, but then I only signed the piece of card once).

Just a little reminder: it’s not having my signature that should be valued, it’s having seen me perform the act of signing. Signatures can easily be duplicated. If you’ve decided that I’m me, and you’ve seen me put my signature to a document, from that moment on you can know that I signed that document. If you didn’t see it, but got a validated statement from a known notary that they saw it, then fair enough. If you didn’t see it, and a notary didn’t see it, then all you know is that you have a sheet of paper containing some words and my signature. This should tell you nothing about how the two came into proximity.

On the top 5 iOS appsec issues

Nearly 13 months ago, the Intrepidus Group published their top 5 iPhone application development security issues. Two of them are valid issues; the other three they should perhaps have thought about for longer.

The good

Sensitive data unprotected at rest

Secure communications to servers

Yes, indeed, if you’re storing data on a losable device then you need to protect the data from being lost, and if you’re retrieving that data from elsewhere then you need to ensure you don’t give it away while you’re transporting it.
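
For the data-at-rest part on iOS, here’s a minimal sketch of one approach, assuming iOS 4 or later and a device passcode: ask for hardware-backed file protection when writing, so the file stays encrypted whenever the device is locked. The file name and contents below are purely illustrative.

NSData *secretData = [@"something worth protecting" dataUsingEncoding:NSUTF8StringEncoding];
NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                                    NSUserDomainMask,
                                                                    YES) lastObject];
NSString *path = [documentsDirectory stringByAppendingPathComponent:@"secret.dat"];
NSError *error = nil;
// NSDataWritingFileProtectionComplete keeps the file encrypted while the device is locked.
if (![secretData writeToFile:path
                     options:NSDataWritingFileProtectionComplete
                       error:&error]) {
    NSLog(@"Couldn't write protected file: %@", error);
}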

Something I see a bit too often is people turning off SSL certificate validation while they’re dealing with their test servers, and forgetting to turn it on in production.
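
If you must trust a self-signed certificate on a test server, at least confine that behaviour to debug builds. Here’s a sketch of one way to do it, assuming the NSURLConnection delegate API of the day, that the methods live in your connection delegate’s @implementation, and that DEBUG is a macro defined only in your debug configuration:

#if DEBUG
// Debug builds only: accept the self-signed certificate on the internal test
// server. These delegate methods are not compiled into release builds at all,
// so NSURLConnection's default certificate validation applies there.
- (BOOL)connection:(NSURLConnection *)connection
canAuthenticateAgainstProtectionSpace:(NSURLProtectionSpace *)protectionSpace
{
    return [protectionSpace.authenticationMethod
               isEqualToString:NSURLAuthenticationMethodServerTrust];
}

- (void)connection:(NSURLConnection *)connection
didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
{
    [challenge.sender useCredential:
         [NSURLCredential credentialForTrust:challenge.protectionSpace.serverTrust]
            forAuthenticationChallenge:challenge];
}
#endif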

The bad

Buffer overflows and other C programming issues

While you can indeed crash an app this way, I’ve yet to see evidence that you can exploit an iOS app through any old buffer overflow, given the stack guards, restrictive sandboxes, address-space layout randomisation and other mitigations. While there are occasional targeted attacks, I would have preferred it if they’d been specific about which problems they think exist and what developers can do to address them.

Patching your application

Erm, no. Just get it right. If there are fast-moving parts that need to change frequently, extract them from the app and put them in a hosted component.

The platform itself

To quote Scott Pack in “The DMZ”: “If you can’t trust your users to implement your security plan, then your security plan must work without their involvement.” In other words, if you have a problem and the answer is to train 110 million people, then you have two problems.

On the broken(?) Mac App Store

A day after the Mac App Store was launched, people are reporting that it has been cracked. There are two separate stories here, a vapourware circumvention of the FairPlay DRM used to generate the receipts and a report that certain apps aren’t validating the receipts properly. We can ignore the first case for the moment: it’s important, and if it’s true then Apple needs to fix it (and co-ordinate updating the validation code with us third-party developers). But for the moment, it’s more important that developers are implementing the protections that are in place in their applications – it’s those applications that are supposed to be protected.

Let’s skip, for the moment, the question of whether DRM or anti-cracking mechanisms are ethically right, worthwhile, or how much effort you want to put into them. Apple have done most of the legwork in providing a vendor-signed receipt that’s part of your signed app bundle. What you need to do (sketched in code after the list) is:

  • Check whether you have a receipt
  • Check whether Apple signed the receipt you have
  • Check whether the receipt is valid for your product
  • Check whether the receipt is valid for this version of your product
  • Check whether the receipt is valid for this computer
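
Here’s a rough sketch of how those checks hang together at launch. It’s illustrative rather than a drop-in implementation: the receipt path is where the store puts receipts at the time of writing, receiptIsValid() is a hypothetical stand-in for the real signature, bundle ID, version and machine-GUID checks, and exiting with status 173 is the documented way of asking the store to obtain a valid receipt. The advice below about also checking at runtime still applies.

#import <Cocoa/Cocoa.h>
#include <stdlib.h>

// Hypothetical stand-in: real code must verify the PKCS#7 signature and the
// bundle ID, version and machine GUID contained in the receipt payload.
static BOOL receiptIsValid(NSString *receiptPath)
{
    return NO; // TODO: replace with real validation
}

int main(int argc, char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSString *receiptPath = [[[NSBundle mainBundle] bundlePath]
        stringByAppendingPathComponent:@"Contents/_MASReceipt/receipt"];
    BOOL receiptOK = [[NSFileManager defaultManager] fileExistsAtPath:receiptPath]
                     && receiptIsValid(receiptPath);
    [pool drain];

    if (!receiptOK) {
        exit(173); // asks the store machinery to obtain a valid receipt and relaunch
    }
    return NSApplicationMain(argc, (const char **)argv);
}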

That’s it in a nutshell. Of course, some nutshells surround very big and complex nuts, and that’s true in this case:

  • There’s some good example code for receipt validation at github/roddi/ValidateStoreReceipt; if you’re going to use it, don’t just paste it in wholesale. If everyone uses the same code, then it’s super-easy for someone to detect and strip that code from each instance.
  • Check at runtime, as well as at startup. If you just check at startup, then all an attacker needs to do is patch main() to jump straight into NSApplicationMain() and your app runs for free.
  • Code obfuscation is not a very effective tool. Having worked in anti-virus, I know it’s much easier to classify code based on what it does than what it is, and it’s quite easy to find the code that opens the receipt file, or calls exit(173). That said, some of the commercial obfuscation companies offer a guaranteed service, so you can still protect your revenue after the app gets cracked.
  • Update: I have been advised privately, and seen in a blog post, that people are recommending hard-coding their app bundle IDs and version numbers into the binary rather than reading them from Info.plist, because that file can be edited. Well, so can the app binary… and in either case you’d need to re-sign the product with a valid certificate to continue, because Apple have used the kill flag:
    heimdall:~ leeg$ codesign -dvvvv /Applications/Twitter.app/
    Executable=/Applications/Twitter.app/Contents/MacOS/Twitter
    Identifier=com.twitter.twitter-mac
    Format=bundle with Mach-O universal (i386 x86_64)
    CodeDirectory v=20100 size=12452 flags=0x200(kill) hashes=616+3 location=embedded
    CDHash=8e0736639d79a108a5a1ebe89f928d1da0d49d94
    Signature size=4169
    Authority=Apple Mac OS Application Signing
    Authority=Apple Worldwide Developer Relations Certification Authority
    Authority=Apple Root CA
    Info.plist entries=21
    Sealed Resources rules=4 files=78
    Internal requirements count=2 size=344
    

    Changing a hard-coded string in a binary file is not difficult. You can of course obfuscate the string, but the motivated cracker still finds the point where the comparison is made (particularly easily if you use NSStrings). Really, how far you want to go depends on how much you’re willing to spend.
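
    To make the point concrete, this is the sort of hard-coded check being recommended. It’s a sketch with a hypothetical bundle ID, shown only to illustrate why it buys you little:

    // The literal below ends up as a plain constant string in the binary, where
    // a cracker can find and patch it about as easily as editing Info.plist
    // (and either way they must re-sign the app, as discussed above).
    static BOOL bundleIdentifierLooksGenuine(void)
    {
        NSString *expectedIdentifier = @"com.example.MyGreatApp"; // hypothetical bundle ID
        return [[[NSBundle mainBundle] bundleIdentifier] isEqualToString:expectedIdentifier];
    }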

Of course, Fuzzy Aliens Ltd has already been implementing receipt validation for customers, so if this is too hard for you or you don’t have the time… ;-)

A last word on publicising receipt-validation vulnerabilities

You and I both make our living by selling software, or by selling services to people who sell software. Crowing on the interwebs about how this application or that application doesn’t validate its receipts properly is not cool, because you are shitting on your own doorstep. There is no public benefit to disclosing that class of vulnerability, because DRM is not a user security feature. Don’t do that. Send the developer a private message explaining your findings. Give them a chance to put extra effort into protecting their product, if that’s what they want to do.

Careful how you define your properties

Spot the vulnerability in this Objective-C class interface:

@interface SomeParser : NSObject {
  @private
	NSString *content;
}
@property (nonatomic, retain) NSString *content;
- (void)beginParsing;
//...
@end

Any idea? Let’s have a look at a use of this class in action:

SomeParser *parser = [[SomeParser alloc] init];
NSMutableString *myMutableString = [self prepareContent];
parser.content = myMutableString;
[parser beginParsing];
[self modifyContent];

The SomeParser class retains an object that might be mutable. This can be a problem if the parser only functions correctly when its input is invariant. While it’s possible to stop the class’s API from mutating the data – perhaps using the State pattern to change the behaviour of the setters – if the ivar objects are mutable then the class cannot stop other code from making changes. Perhaps the string gets truncated while it’s being parsed, or valid data is replaced with invalid data while the parser is reading it.

If a class needs an instance variable to remain unmodified during the object’s lifetime (or during some lengthy operation), it should take a copy of that object. It’s easy to forget that in cases like strings and collections where the type of the ivar is immutable, but mutable subclasses exist. So to fix this parser:

@property (nonatomic, copy) NSString *content;

You could also make the property readonly and provide an -initWithContent: constructor, which takes a copy that will be worked on.
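
A sketch of that alternative, under manual reference counting as in the rest of this example (only the relevant parts are shown):

@interface SomeParser : NSObject {
  @private
	NSString *content;
}
@property (nonatomic, readonly) NSString *content;
- (id)initWithContent:(NSString *)aContent;
//...
@end

@implementation SomeParser

@synthesize content;

- (id)initWithContent:(NSString *)aContent
{
	if ((self = [super init])) {
		content = [aContent copy]; // even a mutable argument can't change under us now
	}
	return self;
}

- (void)dealloc
{
	[content release];
	[super dealloc];
}

//...

@end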

But with collection class properties these fixes may not be sufficient. Sure, you definitely get an immutable collection, but is it holding references to mutable elements? You need to check whether the collection class you’re using supports shallow or deep copying—that is, whether copying the collection retains all of the elements or copies them. If you don’t have deep copying but need it, then you’ll end up having to implement a -deepCopy method yourself.
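
If a one-level-deep copy is enough, Foundation’s -initWithArray:copyItems: (with copyItems:YES) will copy each element for you. A hand-rolled category along the following lines is a sketch of the same idea; it assumes every element adopts NSCopying and returns an autoreleased array.

@interface NSArray (DeepCopying)
- (NSArray *)deepCopy;
@end

@implementation NSArray (DeepCopying)

- (NSArray *)deepCopy
{
	NSMutableArray *copiedElements = [NSMutableArray arrayWithCapacity:[self count]];
	for (id element in self) {
		id elementCopy = [element copy]; // for Foundation's mutable classes, -copy gives an immutable copy
		[copiedElements addObject:elementCopy];
		[elementCopy release]; // the array retains it; balance the +1 from -copy
	}
	return [NSArray arrayWithArray:copiedElements];
}

@end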

Note that the above discussion applies not only to collection classes, but to any object that has other objects as ivars and which is either itself mutable or might have mutable ivars. The general form of the problem is easy to express: if you don’t want your properties to change, then take copies of them. The specifics can vary from case to case and, as ever, the devil’s in the detail.

On localisation and security

Hot on the heels of Uli’s post on the problems of translation, I present another problem you might encounter while localising your code. This is a genuine bug (now fixed, of course) in code I have worked on in the past, only the data has been changed to protect the innocent.

We had a crash in the following line:

NSString *message = [NSString stringWithFormat:
	NSLocalizedString(@"%@ problems found", @"Discovery message"),
	problem];

Doesn’t appear to be anything wrong with that, does there? Well, as I say, it was a crasher. The app only crashed in one language though…for purposes of this argument, we’ll assume it was English. Let’s have a look at English.lproj/Localizable.strings:

/* Discovery message */
"%@ problems found" = "%@ found in %@";

Erm, that’s not so good. It would appear that at runtime, the variadic method +[NSString stringWithFormat: (NSString *)fmt, ...] is expecting two arguments to follow fmt, but is only passed one, so it ends up reading its way off the end of the stack. That’s a classic format string vulnerability, but with a twist: none of our usual tools (by which I mean the various -Wformat flags and the static analyser) can detect this problem, because the format string is not contained in the code.

This problem should act as a reminder to ensure that the permissions on your app’s resources are correct, not just on the binary—an attacker can cause serious fun just by manipulating a text file. It should also suggest that you audit your translators’ work carefully, to ensure that these problems don’t arise in your app even without tampering.
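
One defensive measure (my sketch, not part of the original fix) is to sanity-check a localised format string before trusting it: count the %@ specifiers and fall back to the key, the development-language string, if the translation disagrees with what the call site expects. This only counts object specifiers, which was the case at hand; a production version would need to handle the full printf-style repertoire and %% escapes.

static NSUInteger countOfObjectSpecifiers(NSString *format)
{
	NSUInteger count = 0;
	NSRange searchRange = NSMakeRange(0, [format length]);
	while (searchRange.length > 0) {
		NSRange found = [format rangeOfString:@"%@" options:0 range:searchRange];
		if (found.location == NSNotFound) break;
		count++;
		searchRange.location = NSMaxRange(found);
		searchRange.length = [format length] - searchRange.location;
	}
	return count;
}

// Returns the localised format only if it has the expected number of %@ specifiers.
static NSString *checkedLocalizedFormat(NSString *key, NSUInteger expectedCount)
{
	NSString *localized = NSLocalizedString(key, nil);
	return (countOfObjectSpecifiers(localized) == expectedCount) ? localized : key;
}

The crashing line would then read [NSString stringWithFormat:checkedLocalizedFormat(@"%@ problems found", 1), problem]; and survive the mistranslation, albeit with an untranslated message.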

Which vendor “is least secure”?

The people over at Intego have a blog post, Which big vendor is least secure? They observe that, because Microsoft have upped their game, malware authors have started to target other products, notably those produced by Adobe and Apple.

That doesn’t really address the question though: which big vendor is least secure (or more precisely, which big vendor creates the least secure products)? It’s an interesting question, and one that’s so hard to answer that people usually get it wrong.

The usual metrics for vendor software security are:

  • Number of vulnerability reports/advisories last year
  • Speed of addressing reported vulnerabilities

Both are just proxies for the question we really want to know the answer to: “what risk does this product expose its users to?” Each has drawbacks when used as such a proxy.

A company’s list of previous vulnerabilities seems to correlate with its development practices – if they were any good at threat modelling, they wouldn’t have released software with those vulnerabilities in, right? Well, maybe. But maybe they did do some analysis, discovered the vulnerability, and decided to accept it. Perhaps the vulnerability reports were actually the result of their improved secure development lifecycle, and some new technique, tool or consultant has patched up a whole bunch of issues. Essentially all we know is what problems have been addressed and who found them, and we can tell something about the risk that users were exposed to while those vulnerabilities were present. Actually, we can’t tell too much about that, unless we can find evidence that it was exploited (or not, which is harder). We really know nothing about the remaining risk profile of the application – have 1% or 100% of its vulnerabilities been addressed?

The only time we really know something about the present risk is in the face of zero-day vulnerabilities, because we know that a problem exists and has yet to be addressed. But reports of zero-days are comparatively rare, because the people who find them usually have no motivation to report them. It’s only once the zero-day gets exploited, and the exploit gets discovered and reported that we know the problem existed in the first place.

The speed of addressing vulnerabilities seems to tell us something about the vendor’s ability to react to security issues. Well, you might think it does; it actually tells you a lot more about the vendor’s perception of their customers’ appetite for installing updates. Look at enterprise-focussed vendors like Sophos and Microsoft, and you’ll find that most security patches are distributed on a regular schedule so that sysadmins know when to expect them and can plan their testing and deployment accordingly. Both companies have issued out-of-band updates, but only in extreme circumstances.

Compare that model with Apple’s. Apple are clearly focussed on the consumer market, and typically have an ad hoc (or at least opaque) update schedule, with security and non-security content alike bundled into infrequent patch releases. Security content is simultaneously released for some earlier operating systems in a separate update. Standalone security updates are occasionally seen on the Mac, rarely (if ever) on the iPhone.

I don’t really use any Adobe software so had to research their security update schedule specifically for this post. In short, it looks like they have very frequent security updates, but without any public schedule. Using Adobe Reader is an exercise in unexpected update installation.

Of course, we can see when the updates come out, but that doesn’t directly mean we know how long they take to fix problems – for that we need to know when problems were reported. Microsoft’s monthly updates don’t necessarily address bugs that were reported within the last month, they might be working on a huge backlog.

Where we can compare vendors is in situations where they all ship the same component with the same vulnerabilities, and must provide the same update. The more reactive companies (who don’t think their users mind installing updates) will release the fixes first. In the case of Apple we can compare their fixes of shared components, like the open source UNIX tools or Java, with other vendors’ – mainly the Linux distributors and Oracle. It’s this comparison that Apple frequently loses, by taking longer to release the same patch than Oracle, Red Hat, Canonical and friends.

So ultimately what we’d like to know is “which vendor exposes its customers to most risk?”, for which we’d need an honest, accurate and comprehensive risk analysis from each vendor or an independent source. Of course, few customers are going to want to wade through a full risk analysis of an operating system.

Security flaw liability

The Register recently ran an opinion piece called Don’t blame Willy the Mailboy for software security flaws. The article is a reaction to the following excerpt from a SANS sample application security procurement contract:

No Malicious Code

Developer warrants that the software shall not contain any code that does not support a software requirement and weakens the security of the application, including computer viruses, worms, time bombs, back doors, Trojan horses, Easter eggs, and all other forms of malicious code.

That seems similar to a requirement I have previously almost proposed voluntarily adopting:

If one of us [Mac developers] were, deliberately or accidentally, to distribute malware to our users, they would be (rightfully) annoyed. It would severely disrupt our reputation if we did that; in fact some would probably choose never to trust software from us again. Now Mac OS X allows us to put our identity to our software using code signing. Why not use that to associate our good reputations as developers with our software? By using anti-virus software to improve our confidence that our development environments and the software we’re building are clean, and by explaining to our customers why we’ve done this and what it means, we effectively pass some level of assurance on to our customer. Applications signed by us, the developers, have gone through a process which reduces the risk to you, the customers. While your customers trust you as the source of good applications, and can verify that you were indeed the app provider, they can believe in that assurance. They can associate that trust with the app; and the trust represents some tangible value.

Now what the draft contract seems to propose (and I have good confidence in this, due to the wording) is that if a logic bomb, back door, Easter Egg or whatever is implemented in the delivered application, then the developer who wrote that misfeature has violated the contract, not the vendor. Taken at face value, this seems just a little bad. In the subset of conditions listed above, the developer has introduced code into the application that was not part of the specification. It either directly affects the security posture of the application, or is of unknown quality because it’s untested: the testers didn’t even know it was there. This is clearly the fault of the developer, and the developer should be accountable. In most companies this would be a sacking offence, but the proposed contract goes further and says that the developer is responsible to the client. Fair enough, although the vendor should take some responsibility too, as a mature software organisation should have a process such that none of its products contain unaccounted code. This traceability from requirement to product is the daily bread of some very mature development lifecycle tools.

But what about the malware cases? It’s harder to assign blame to the developer for malware injection, and I would say that actually the vendor organisation should be held responsible, and should deal with punishment internally. Why? Because there are too many links in a chain for any one person to put malware into a software product. Let’s say one developer does decide to insert malware.

  • Developer introduces malware to his workstation. This means that any malware prevention procedures in place on the workstation have failed.
  • Developer commits malware to the source repository. Any malware prevention procedures in place on the SCM server have failed.
  • Developer submits build request to the builders.
  • Builder checks out the build input, does not notice the malware, and constructs the product.
  • Builder does not spot the malware in the built product.
  • Testers do not spot the malware in final testing.
  • Release engineers do not spot the malware, and release the product.
Of course there are various points at which malware could be introduced, but for a developer to do so in a way consistent with his role as developer requires a systematic failure in the company’s procedures regarding information security, which implies that the CSO ought to be accountable in addition to the developer. It’s also, as with the Easter Egg case, symptomatic of a failure in the control of their development process, so the head of Engineering should be called to task as well. In addition, the head of IT needs to answer some uncomfortable questions.

So, as it stands, the proposed contract seems well-intentioned but inappropriate. Now what if it’s the thin end of a slippery iceberg? Could developers be held to account for introducing vulnerabilities into an application? The SANS contract is quiet on this point. It requires that the vendor provide a “certification package” consisting of the security documentation created throughout the development process, which shall “establish that the security requirements, design, implementation, and test results were properly completed and all security issues were resolved appropriately”, and that “Security issues discovered after delivery shall be handled in the same manner as other bugs and issues as specified in this Agreement”. In other words, the vendor should prove that all known vulnerabilities have been mitigated before shipment, and if a vulnerability is subsequently discovered and is dealt with in an agreed fashion, no-one did anything wrong.

That seems fairly comprehensive, and definitely places the onus directly on the vendor (there are various other parts of the contract that imply the same, such as the requirement for the vendor to carry out background checks and provide security training for developers). Let’s investigate the consequences for a few different scenarios.

1. The product is attacked via a vulnerability that was brought up in the original certification package, but the risk was accepted. This vulnerability just needs to be fixed and we all move on; the risk was known, documented and accepted, and the attack is a consequence of doing business in the face of known risks.

2. The product is attacked via a novel class of vulnerability, details of which were unknown at the time the certification package was produced. I think that again, this is a case where we just need to fix the problem, of course with sufficient attention paid to discovering whether the application is vulnerable in different ways via this new class of flaw. While developers should be encouraged to think of new ways to break the system, it’s inevitable that some unpredicted attack vectors will be discovered. Fix them, incorporate them into your security planning.

3. The product is attacked by a vulnerability that was not covered in the certification package, but that is a failure of the product to fulfil its original security requirements. This is a case I like to refer to as “someone fucked up”. It ought to be straightforward (if time-consuming) to apply a systematic security analysis process to an application and get a comprehensive catalogue of its vulnerabilities. If the analysis missed things out, then either it was sloppy, abbreviated or ignored.

Sloppy analysis. The security lead did not systematically enumerate the vulnerabilities, or missed some section of the app. The security lead is at fault, and should be responsible.

Abbreviated analysis. While the security lead was responsible for the risk analysis, he was not given the authority to see it through to completion or to implement its results in the application. Whoever withheld that authority is to blame and should accept responsibility. In my experience this is typically a marketing or product management decision, made while dropping tasks to work backwards from a ship date to the amount of effort that can be spent on the product. Sometimes it’s engineering management; it’s almost never project management.

Ignored analysis. Example: the risk of attack via buffer overflow is noted in the analysis, but then the developer writing the feature doesn’t code any bounds-checking. That developer has screwed up, and ought to be responsible for their mistake. If you think that’s a bit harsh, check this line from the “duties” list in a software engineer job ad:

Write code as directed by Development Lead or Manager to deliver against specified project timescales quality and functionality requirements

When you’re a programmer, it’s your job to bake quality in.