Careful how you define your properties

Spot the vulnerability in this Objective-C class interface:

@interface SomeParser : NSObject {
  @private
	NSString *content;
}
@property (nonatomic, retain) NSString *content;
- (void)beginParsing;
//...
@end

Any idea? Let’s have a look at a use of this class in action:

SomeParser *parser = [[SomeParser alloc] init];
NSMutableString *myMutableString = [self prepareContent];
parser.content = myMutableString;
[parser beginParsing];
[self modifyContent];

The SomeParser class retains an object that might be mutable. This can be a problem if the parser only functions correctly when its input is invariant. While it’s possible to stop the class’s API from mutating the data – perhaps using the State pattern to change the behaviour of the setters – if the ivar objects are mutable then the class cannot stop other code from making changes. Perhaps the string gets truncated while it’s being parsed, or valid data is replaced with invalid data while the parser is reading it.

If a class needs an instance variable to remain unmodified during the object’s lifetime (or during some lengthy operation), it should take a copy of that object. It’s easy to forget that in cases like strings and collections where the type of the ivar is immutable, but mutable subclasses exist. So to fix this parser:

@property (nonatomic, copy) NSString *content;

You could also make the property readonly and provide an -initWithContent: constructor that takes a copy of the string for the parser to work on.

But with collection class properties these fixes may not be sufficient. Sure, you definitely get an immutable collection, but is it holding references to mutable elements? You need to check whether the collection class you’re using supports shallow or deep copying—that is, whether copying the collection merely retains all of the elements or copies them too. If you don’t have deep copying but need it, then you’ll end up having to implement a -deepCopy method yourself.

Note that the above discussion applies not only to collection classes, but to any object that has other objects as ivars and which is either itself mutable or might have mutable ivars. The general principle is easy to express: if you don’t want your properties to change, take copies of them. The specifics vary from case to case and, as ever, the devil’s in the detail.

Why OS X (almost) doesn’t need root any more

Note: this post was originally written for the Mac Developer Network.

In the beginning, there was the super-user. And the super-user was root.

When it comes to doling out responsibility for privileged work in an operating system, there are two easy ways out. Single-user operating systems just do whatever they’re told by whoever has access, so anyone can install or remove software or edit configuration. AmigaDOS, Classic Mac OS and MS-DOS all took this approach.

The next-simplest approach is to add multiple users, and let one of them do everything while all the others can do nothing. This is the approach taken by all UNIX systems since time immemorial – the root user can edit all files, set access rights for files and devices, start network services on low-numbered ports…and everyone else can’t.

The super-user approach has obvious advantages in a multi-user environment over the model with no privilege mechanism – only users who know how to log in as root can manage the computer. In fact it has advantages in a single-user environment as well: that one user can choose to restrict her own privileges to the times when she needs them, by using a non-privileged account the rest of the time.

It’s still a limited mechanism, in that it’s all-or-nothing. You either have the permission to do everything, or you don’t. Certain aspects like the ability to edit files can be delegated, but basically you’re either root or you’re useless. If you manage to get root – by intention or by malicious exploitation – you can do anything on the computer. If you exploit a root-running network service you can get it to load a kernel extension: not because network services need to load kernel extensions, but because there is nothing to stop root from doing so.

And that’s how pretty much all UNIX systems, including Mac OS X, work. Before getting up in arms about how Apple disabled root in OS X, remember this: they didn’t disable root, they disabled the account’s password. You can’t log in to a default OS X installation as root (though you can on Mac OS X Server). All of the admin facilities on Mac OS X are implemented by providing access to the monolithic root account – running a software update, configuring Sharing services, setting the FileVault master password all involve gaining root privilege.

The way these administrative features typically work is to use Authorization Services and the principle of least privilege. I devoted a whole chapter to that in Professional Cocoa Application Security so won’t go into too much detail here. The high-level view is that there are two components: one runs as the regular user and the other as root. The unprivileged part performs an authorisation test and then, at its own discretion, decides whether to call the privileged helper. The privileged part might independently test whether the user application really did pass the authorisation test. The main issue is that the privileged part still has full root access.
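The two-component split can be sketched in plain C with stubbed-out policy checks (all names here are hypothetical; the real mechanism is Authorization Services, whose API this does not reproduce):

```c
#include <stdbool.h>
#include <stddef.h>

/* Stub policy check standing in for an Authorization Services test.
 * Hypothetical policy: grants the right only to user 501. */
static bool passes_auth_test(int user_id, const char *right) {
    return user_id == 501 && right != NULL;
}

/* Privileged part (would run as root): independently re-checks the
 * authorisation rather than trusting the unprivileged caller. */
static int privileged_helper(int user_id) {
    if (!passes_auth_test(user_id, "com.example.do-work"))
        return -1;
    /* ...perform the root-only task here... */
    return 0;
}

/* Unprivileged part: performs the authorisation test and then, at its
 * own discretion, decides whether to call the privileged helper. */
static int request_privileged_work(int user_id) {
    if (!passes_auth_test(user_id, "com.example.do-work"))
        return -1;
    return privileged_helper(user_id);
}
```

The double check means a compromised unprivileged part can’t simply skip the test; the weakness identified above remains, though: once past the check, the helper still has full root access.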

So Authorization Services gives us discretionary access control, but there’s also a useful mandatory test relevant to the super-user. You see, traditional UNIX tests for whether a user is root by doing this:

if (process.p_euid == 0) {
	/* OK to do the privileged thing */
}

Well, Mac OS X does do something similar in parts, but in places it actually has a more flexible test. There’s a kernel authorisation framework called kauth – again, there’s a chapter in PCAS on this so I don’t intend to cover too much detail. It basically allows the kernel to defer security policy decisions to callbacks provided by kernel extensions; one such policy question is “should I give this process root?”. Where the kernel uses this test, super-user access is based not on the effective UID of the calling process but on whatever the policy engine decides. Hmm…maybe the policy engine could use Authorization Services? If the application is an installer, and it has the installer right, and it’s trying to get root access to the filesystem, then it’s allowed.
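A sketch of the difference, in plain C (the listener type and all names are hypothetical illustrations, not the real kauth API):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Traditional UNIX: privilege is a hard-wired test against UID 0. */
static bool traditional_suser(unsigned euid) {
    return euid == 0;
}

/* kauth-style: the kernel defers the question to a pluggable policy
 * callback registered by a kernel extension. */
typedef bool (*priv_listener)(unsigned euid, const char *action);

static bool policy_check(priv_listener listener, unsigned euid,
                         const char *action) {
    if (listener != NULL)
        return listener(euid, action);  /* defer to the policy engine */
    return traditional_suser(euid);     /* fall back to the classic test */
}

/* Example policy: an "install" action is allowed regardless of who asks;
 * anything else is refused, even for euid 0. */
static bool installer_policy(unsigned euid, const char *action) {
    (void)euid;
    return action != NULL && strcmp(action, "install") == 0;
}
```

With a policy installed, even a process running with effective UID 0 can be refused actions outside its remit, which is exactly the property the monolithic euid test lacks.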

Apple could then do away with monolithic root privileges completely, allowing the authorisation policy database to control who has privileged access for what tasks with which applications. The advantage is that if a privileged process ever gets compromised, the consequences for the rest of the OS are reduced.

On improved tool support for Cocoa developers

I started writing some tweets that were clearly going to take up too much room. They started like this:

My own thoughts: tool support is very important to good software engineering. 3.3.1 is not a big inhibitor to novel tools. /cc @rentzsch

then this:

There’s still huge advances to make in automating design, bug-hunting/squashing and traceability/accountability, for instance.

(The train of thought was initiated by the Dog Spanner’s c4 release post.)

In terms of security tools, the Cocoa community needs to catch up with where Microsoft are before we need to start wondering whether Apple might be holding us back. Yes, I have started working on this; I expect to have something to show for it at NSConference MINI. However, I don’t mind whether it’s you or me who gets the first release; the important thing is that the tools should be available for all of us. So I don’t mind sharing my impression of where the important software security engineering tools for Mac and iPhone OS developers will be in the next few years.

Requirements comprehension

My first NSConference talk was on understanding security requirements, and it’s the focus of Chapter 1 of Professional Cocoa Application Security. The problem is, most of you aren’t actually engineers of security requirements, you’re engineers of beautiful applications. Where do you dump all of that security stuff while you’re focussing on making the beautiful app? It’s got to be somewhere that it’s still accessible, somewhere that it stays up to date, and it’s got to be available when it’s relevant. In other words, this information needs to be only just out of your way. A Pages document doesn’t really cut it.

Now over in the Windows world, they have the Microsoft Threat Modeling Tool, which makes it easy to capture and organise security requirements, but it stops short of providing any traceability or integration with the rest of the engineering process. It’d be great to know how each security requirement impacts each class, or the data model, etc.

Bug-finding

The Clang analyser is just the start of what static analysis can do. Many parts of Cocoa applications are data-driven, and good analysis tools should be able to inspect the relationship between the code and the data. Other examples: currently if you want to ensure your UI is hooked up properly, you manually write tests that inspect the outlets, actions and bindings you set up in the XIB. If you want to ensure your data model is correct, you manually write tests to inspect your entity descriptions and relationships. Ugh. Code-level analysis tools can already reverse-engineer test conditions from the functions and methods in an app; they ought to be able to use the rest of the app too. And they ought to make use of the security model, described above.

I have recently got interested in another LLVM project called KLEE, a symbolic execution tool. Current security testing practices largely involve “fuzzing”, or choosing certain malformed/random input to give to an app and seeing what it does. KLEE can take this a step further by (in effect) testing any possible input, and reporting on the outcomes for various conditions. It can even generate automated tests to make it easy to see what effect your fixes are having. Fuzzing will soon become obsolete, but we Mac people don’t even have a good, conventional tool for that yet.
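For illustration, the basic shape of a fuzz loop looks like this (a toy sketch in plain C; real fuzzers and KLEE are far more sophisticated, and every name here is made up):

```c
#include <stdlib.h>
#include <stddef.h>

/* Toy parser under test: returns -1 on empty input rather than crashing. */
static int parse_u8_sum(const unsigned char *buf, size_t len) {
    if (len == 0)
        return -1;
    int sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}

/* Minimal fuzz loop: throw random buffers at the parser and check an
 * invariant on every outcome. Symbolic execution explores the same
 * input space exhaustively instead of by random sampling. */
static int fuzz(unsigned iterations) {
    srand(42);  /* deterministic seed for repeatable runs */
    for (unsigned i = 0; i < iterations; i++) {
        unsigned char buf[64];
        size_t len = (size_t)(rand() % (int)sizeof buf);
        for (size_t j = 0; j < len; j++)
            buf[j] = (unsigned char)rand();
        int r = parse_u8_sum(buf, len);
        if (!(r == -1 || r >= 0))
            return -1;  /* invariant violated */
    }
    return 0;
}
```

A fuzzer only samples the input space; KLEE’s contribution is to reason about all the paths the parser could take and synthesise the concrete inputs that reach each one.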

Bug analysis

Once you do have fuzz tests or KLEE output, you start to get crash reports. But what are the security issues? Apple’s CrashWrangler tool can take a stab at analysing the crash logs to see whether a buffer overflow might potentially lead to remote code execution, but again this is just the tip of the iceberg. Expect KLEE-style tools to be able to report on deviations from expected behaviour and security issues without having to wait for a crash, just as soon as we can tell the tool what the expected behaviour is. And that’s an interesting problem in itself, because really the specification of what you want the computer to do is your application’s source code, and yet we’re trying to determine whether or not that is correct.

Safe execution

Perhaps the bitterest pill to swallow for long-time Objective-C programmers: some time soon you will be developing for a managed environment. It might not be as high-level as the .Net runtime (indeed my money is on the LLVM intermediate representation, as hardware-based managed runtimes have been and gone), but the game has been up for C arrays, memory dereferencing and monolithic process privileges for years. Just as garbage collectors have obsoleted many (but of course not all) memory allocation problems, so environment-enforced buffer safety can obsolete buffer overruns, enforced privilege checking can obsolete escalation problems and so on. We’re starting to see this kind of safety retrofitted to compiled code using stack guards and the like, but by the time the transition is complete (if it ever is), expect your application’s host to be unrecognisable to the app as an armv7 or x86_64, even if the same name is still used.

On localisation and security

Hot on the heels of Uli’s post on the problems of translation, I present another problem you might encounter while localising your code. This is a genuine bug (now fixed, of course) in code I have worked on in the past, only the data has been changed to protect the innocent.

We had a crash in the following line:

NSString *message = [NSString stringWithFormat:
	NSLocalizedString(@"%@ problems found", @"Discovery message"),
	problem];

Doesn’t appear to be anything wrong with that, does there? Well, as I say, it was a crasher. The app only crashed in one language though…for purposes of this argument, we’ll assume it was English. Let’s have a look at English.lproj/Localizable.strings:

/* Discovery message */
"%@ problems found" = "%@ found in %@";

Erm, that’s not so good. It would appear that at runtime, the variadic method +[NSString stringWithFormat:(NSString *)fmt, ...] expects two arguments to follow fmt but is only passed one, so it ends up reading its way off the end of the stack. That’s a classic format string vulnerability, but with a twist: none of our usual tools (by which I mean the various -Wformat flags and the static analyser) can detect this problem, because the format string is not contained in the code.

This problem should act as a reminder to ensure that the permissions on your app’s resources are correct, not just on the binary—an attacker can cause serious fun just by manipulating a text file. It should also suggest that you audit your translators’ work carefully, to ensure that these problems don’t arise in your app even without tampering.

Which vendor “is least secure”?

The people over at Intego have a blog post, Which big vendor is least secure? They argue that because Microsoft have upped their game, malware authors have started to target other products, notably those produced by Adobe and Apple.

That doesn’t really address the question though: which big vendor is least secure (or more precisely, which big vendor creates the least secure products)? It’s an interesting question, and one that’s so hard to answer, people usually get it wrong.

The usual metrics for vendor software security are:

  • Number of vulnerability reports/advisories last year
  • Speed of addressing reported vulnerabilities

Both are just proxies for the question we really want to know the answer to: “what risk does this product expose its users to?” Each has drawbacks when used as such a proxy.

The count of past vulnerabilities seems to correlate with a company’s development practices – if they were any good at threat modelling, they wouldn’t have released software with those vulnerabilities in, right? Well, maybe. But maybe they did do some analysis, discovered the vulnerability, and decided to accept it. Perhaps the vulnerability reports were actually the result of their improved secure development lifecycle, and some new technique, tool or consultant has patched up a whole bunch of issues. Essentially all we know is what problems have been addressed and who found them, and we can tell something about the risk that users were exposed to while those vulnerabilities were present. Actually, we can’t tell too much about that, unless we can find evidence that it was exploited (or not, which is harder). We really know nothing about the remaining risk profile of the application – have 1% or 100% of vulnerabilities been addressed?

The only time we really know something about the present risk is in the face of zero-day vulnerabilities, because we know that a problem exists and has yet to be addressed. But reports of zero-days are comparatively rare, because the people who find them usually have no motivation to report them. It’s only once the zero-day gets exploited, and the exploit gets discovered and reported that we know the problem existed in the first place.

The speed of addressing vulnerabilities might seem to tell us about the vendor’s ability to react to security issues; it actually tells you a lot more about the vendor’s perception of their customers’ appetite for installing updates. Look at enterprise-focussed vendors like Sophos and Microsoft, and you’ll find that most security patches are distributed on a regular schedule so that sysadmins know when to expect them and can plan their testing and deployment accordingly. Both companies have issued out-of-band updates, but only in extreme circumstances.

Compare that model with Apple’s. Clearly focussed on the consumer market, Apple typically has an ad hoc (or at least opaque) update schedule, with security and non-security content alike bundled into infrequent patch releases. Security content is simultaneously released for some earlier operating systems in a separate update. Standalone security updates are occasionally seen on the Mac, rarely (if ever) on the iPhone.

I don’t really use any Adobe software so had to research their security update schedule specifically for this post. In short, it looks like they have very frequent security updates, but without any public schedule. Using Adobe Reader is an exercise in unexpected update installation.

Of course, we can see when the updates come out, but that doesn’t directly mean we know how long they take to fix problems – for that we need to know when problems were reported. Microsoft’s monthly updates don’t necessarily address bugs that were reported within the last month, they might be working on a huge backlog.

Where we can compare vendors is in situations where they all ship the same component with the same vulnerabilities and must provide the same update. The more reactive companies (who don’t think their users mind installing updates) will release the fixes first. In the case of Apple we can compare their fixes of shared components like open source UNIX tools or Java with other vendors’ – Linux distributors and Oracle mainly. It’s this comparison that Apple frequently loses, taking longer to release the same patch than Oracle, Red Hat, Canonical and friends.

So ultimately what we’d like to know is “which vendor exposes its customers to most risk?”, for which we’d need an honest, accurate and comprehensive risk analysis from each vendor or an independent source. Of course, few customers are going to want to wade through a full risk analysis of an operating system.

Why passwords aren’t always the right answer

I realised something yesterday. I don’t know my master password.

Users of Mac OS X can use FileVault, a data protection feature that replaces the user’s home folder with an encrypted disk image. Encrypted disk images are protected by AES-128 or AES-256 encryption, but to get at the private key you need to supply one of two pieces of information. The first is the user’s login password, and the second is a private key for a recovery certificate. That private key is stored in a dedicated keychain, which is itself protected by… the master password. More information on the mechanism is available both in Professional Cocoa Application Security and Enterprise Mac.

Anyway, so this password is very useful – any FileVault-enabled home folder can be opened by the holder of the master password. Even if the user has forgotten his login password, has left the company or is being awkward, you can get at the encrypted content. It’s also hardly ever used. In fact, I’ve never used my own master password since I set it – and as a consequence have forgotten it.

There are a few different ways for users to recall passwords – by recital, by muscle memory or by revision. So when you enter the password, you either remember what the characters in the password are, remember where your hands need to be to type it, or look at the piece of paper or keychain where you wrote it down. Discounting the revision option (the keychain is off the menu, because if you forget your login password you can’t decrypt your login keychain in order to view the recorded password), the only way to reinforce a password in your memory is to use it. And you never use the FileVault master password.

I submit that a password is a particularly bad choice for a rarely-used authentication step like FileVault recovery. Of course you don’t want attackers to be able to use the recovery mechanism, but you also want the OS to let you in when you really need to recover your encrypted data.

Regaining your identity

In my last post, losing your identity, I pointed out an annoying problem with the Sparkle update framework, in that if you lose your private key you can no longer post any updates. Using code signing identities would offer a get-out, in addition to reducing the complexity associated with releasing a build. You do already sign your apps, right?

I implemented a version of Sparkle that does codesign validation, which you can grab using git or view on github. After Sparkle has downloaded its update, it will test that the new application satisfies the designated requirement for the host application – in other words, that the two are the same app. It will not replace the host unless they are the same app. Note that this feature only works on 10.6, because I use the new Code Signing Services API in Security.framework.

Losing your identity

Developers make use of cryptographic signatures in multiple places in the software lifecycle. No iPad or iPhone application may be distributed without having been signed by the developer. Mac developers who sign their applications get to annoy their customers much less when they ship updates, and indeed the Sparkle framework allows developers to sign the download file for each update (which I heartily recommend you do). PackageMaker allows developers to sign installer packages. In each of these cases, the developer provides assurance that the application definitely came from their build process, and definitely hasn’t been changed since then (for wholly reasonable values of “definitely”, anyway).

No security measure comes for free. Adding a step like code or update signing mitigates certain risks, but introduces new ones. That’s why security planning must be an iterative process – every time you make changes, you reduce some risks and create or increase others. The risks associated with cryptographic signing are that your private key could be lost or deleted, or it could be disclosed to a third party. In the case of keys associated with digital certificates, there’s also the risk that your certificate expires while you’re still relying on it (I’ve seen that happen).

Of course you can take steps to protect the key from any of those eventualities, but you cannot reduce the risk to zero (at least not while spending a finite amount of time and effort on the problem). You should certainly have a plan in place for migrating from an expired identity to a new one. Having a contingency plan for dealing with a lost or compromised private key will make your life easier if it ever happens – you can work to the plan rather than having to both manage the emergency and figure out what you’re supposed to be doing at the same time.

iPhone/iPad signing certificate compromise

This is the easiest situation to deal with. Let’s look at the consequences for each of the problems identified:

Expired Identity
No-one can submit apps to the app store on your behalf, including you. No-one can provision betas of your apps. You cannot test your app on real hardware.
Destroyed Private Key
No-one can submit apps to the app store on your behalf, including you. No-one can provision betas of your apps. You cannot test your app on real hardware.
Disclosed Private Key
Someone else can submit apps to the store and provision betas on your behalf. (They can also test their apps on their phone using your identity, though that’s hardly a significant problem.)

In the case of an expired identity, Apple should lead you through renewal instructions using iTunes Connect. You ought to get some warning, and it’s in their interests to help you as they’ll get another $99 out of you :-). There’s not really much of a risk here, you just need to note in your calendar to sort out renewal.

The cases of a destroyed or disclosed private key are exceptional, and you need to contact Apple to get your old identity revoked and a new one issued. Speed is of the essence if there’s a chance your private key has been leaked, because if someone else submits an “update” on your behalf Apple will treat it as a release from you. It will be hard for you to repudiate the update (claim it isn’t yours) – after all, it’s signed with your identity. If you manage to deal with Apple quickly and get your identity revoked, the only remaining possibility is that an attacker could have used your identity to send out some malicious apps as betas. Because of the limited exposure beta apps have, there will only be a slight impact, though you’ll probably want to communicate the issue to the public to motivate users of “your” beta app to remove it from their phones.

By the way, notice that no application on the store has actually been signed by the developer who wrote it – the .ipa bundles are all re-signed by Apple before distribution.

Mac code signing certificate compromise

Again, let’s start with the consequences.

Expired Identity
You can’t sign new products. Existing releases continue to work, as Mac OS X ignores certificate expiration in code signing checks by default.
Destroyed Private Key
You can’t sign new products.
Disclosed Private Key
Someone else can sign applications that appear to be yours. Such applications will receive the same keychain and firewall access rights as your legitimate apps.

If you just switch identities without any notice, there will be some annoyances for users – the keychain, firewall etc. dialogues indicating that your application cannot be identified as a legitimate update will appear for the update where the identities switch. Unfortunately this situation cannot be distinguished from a Trojan horse version of your app being deployed (even more annoyingly there’s no good way to inspect an application distributor’s identity, so users can’t make the distinction themselves). It would be good to make the migration seamless, so that users aren’t bugged by spurious update warnings and don’t learn to ignore warnings that really are suspicious.

When you’re planning a certificate migration, you can arrange for that to happen easily. Presumably you know how long it takes for most users to update your app (where “most users” is defined to be some large fraction such that you can accept having to give the remainder additional support). At least that long before you plan to migrate identities, release an update that changes your application’s designated requirement such that it’s satisfied by both old and new identities. This update should be signed by your existing (old) identity, so that it’s recognised as an update to the older releases of the app. Once that update’s had sufficient uptake, release another update that’s satisfied by only the new identity, and signed by that new identity.

If you’re faced with an unplanned identity migration, that might not be possible (or in the case of a leaked private key, might lead to an unacceptably large window of vulnerability). So you need to bake identity migration readiness into your release process from the start.

Assuming you use certificates provided by vendor CAs whose own identities are trusted by Mac OS X, you can provide a designated requirement that matches any certificate issued to you. The requirement would be of the form (warning: typed directly into MarsEdit):

identifier "com.securemacprogramming.MyGreatApp" and cert leaf[subject.CN]="Secure Mac Programming Code Signing" and cert leaf[subject.O]="Secure Mac Programming Plc." and anchor[subject.O]="Verisign, Inc." and anchor trusted

Now if one of your private keys is compromised, you coordinate with your CA to revoke the certificate and migrate to a different identity. The remaining risks are that the CA might issue a certificate with the same common name and organisation name to another entity (something you need to take up with the CA in their service-level agreement), or that Apple might choose to trust a different CA called “Verisign, Inc.”, which seems unlikely.

If you use self-signed certificates, then you need to manage this migration process yourself. You can generate a self-signed CA from which you issue signing certificates, then you can revoke individual signing certs as needed. However, you now have two problems: distributing the certificate revocation list (CRL) to customers, and protecting the private key of the top-level certificate.

Package signing certificate compromise

The situation with signed Installer packages is very similar to that with signed Mac applications, except that there’s no concept of upgrading a package and thus no migration issues. When a package is installed, its certificate is used to check its identity. You just have to make sure that your identity is valid at time of signing, and that any certificate associated with a disclosed private key is revoked.

Sparkle signing key compromise

You have to be very careful that your automatic update mechanism is robust. Any other bug in an application can be fixed by deploying an update to your customers. A bug in the update mechanism might mean that customers stop receiving updates, making it very hard for you to tell them about a fix for that problem, or ship any fixes for other bugs. Sparkle doesn’t use certificates, so keys don’t have any expiration associated with them. The risks and consequences are:

Destroyed Private Key
You can’t update your application any more.
Disclosed Private Key
Someone else can release an “update” to your app, provided they can get the Sparkle instance on the customer’s computer to download it.

In the case of a disclosed private key, the conditions that need to be met to actually distribute a poisoned update are specific and hard to achieve. Either the webserver hosting your appcast or the DNS for that server must be compromised, so that the attacker can get the customer’s app to think there’s an update available that the attacker controls. All of that means that you can probably get away with a staggered key update without any (or many, depending on who’s attacking you) customers getting affected:

  • Release a new update signed by the original key. The update contains the new key pair’s public key.
  • Some time later, release another update signed by the new key.
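The two-step rotation above can be sketched as the acceptance logic an installed app follows (stub signature check; Sparkle actually used DSA signatures, and none of these names are its real API):

```c
#include <string.h>
#include <stdbool.h>
#include <stddef.h>

/* Stub signature check: an update is "signed" by a key if it carries
 * that key's name. Stands in for real public-key verification. */
static bool signature_valid(const char *update_sig, const char *pubkey) {
    return strcmp(update_sig, pubkey) == 0;
}

/* An installed app accepts an update only if it verifies against the
 * public key the app currently embeds; the rotation update also swaps
 * in the new public key for future checks. */
struct app {
    const char *pubkey;
};

static bool accept_update(struct app *a, const char *sig,
                          const char *new_pubkey) {
    if (!signature_valid(sig, a->pubkey))
        return false;
    if (new_pubkey != NULL)
        a->pubkey = new_pubkey;  /* rotation update installs the new key */
    return true;
}
```

Once the rotation update has been taken, updates signed with the old key are rejected, which is why the stagger matters: customers who skip the rotation release can still be reached by the old key until they take it.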

The situation if you actually lose your private key is worse: you can’t update at all any more. You can’t generate a new key pair and start using that, because your updates won’t be accepted by the apps already out in the field. You can’t bake a “just in case” mechanism in, because Sparkle only expects a single key pair. You’ll have to find a way to contact all of your customers directly, explain the situation and get them to manually update to a new version of your app. That’s one reason I’d like to see auto-update libraries use Mac OS X code signing as their integrity-check mechanisms: so that they are as flexible as the platform on which they run.

Security flaw liability

The Register recently ran an opinion piece called Don’t blame Willy the Mailboy for software security flaws. The article is a reaction to the following excerpt from a SANS sample application security procurement contract:

No Malicious Code

Developer warrants that the software shall not contain any code that does not support a software requirement and weakens the security of the application, including computer viruses, worms, time bombs, back doors, Trojan horses, Easter eggs, and all other forms of malicious code.

That seems similar to a requirement I have previously almost proposed voluntarily adopting:

If one of us [Mac developers] were, deliberately or accidentally, to distribute malware to our users, they would be (rightfully) annoyed. It would severely disrupt our reputation if we did that; in fact some would probably choose never to trust software from us again. Now Mac OS X allows us to put our identity to our software using code signing. Why not use that to associate our good reputations as developers with our software? By using anti-virus software to improve our confidence that our development environments and the software we’re building are clean, and by explaining to our customers why we’ve done this and what it means, we effectively pass some level of assurance on to our customer. Applications signed by us, the developers, have gone through a process which reduces the risk to you, the customers. While your customers trust you as the source of good applications, and can verify that you were indeed the app provider, they can believe in that assurance. They can associate that trust with the app; and the trust represents some tangible value.

Now what the draft contract seems to propose (and I have good confidence in this, due to the wording) is that if a logic bomb, back door, Easter Egg or whatever is implemented in the delivered application, then the developer who wrote that misfeature has violated the contract, not the vendor. Taken at face value, this seems just a little one-sided. In the subset of conditions listed above, the developer has introduced code into the application that was not part of the specification. It either directly affects the security posture of the application, or is of unknown quality because it’s untested: the testers didn’t even know it was there. This is clearly the fault of the developer, and the developer should be accountable. In most companies this would be a sacking offence, but the proposed contract goes further and says that the developer is responsible to the client. Fair enough, although the vendor should take some responsibility too, as a mature software organisation should have a process such that none of its products contain unaccounted code. This traceability from requirement to product is the daily bread of some very mature development lifecycle tools.

But what about the malware cases? It’s harder to assign blame to the developer for malware injection, and I would say that actually the vendor organisation should be held responsible, and should deal with punishment internally. Why? Because there are too many links in a chain for any one person to put malware into a software product. Let’s say one developer does decide to insert malware.

  • Developer introduces malware to his workstation. This means that any malware prevention procedures in place on the workstation have failed.
  • Developer commits malware to the source repository. Any malware prevention procedures in place on the SCM server have failed.
  • Developer submits build request to the builders.
  • Builder checks out build input, does not notice the malware, and constructs the product.
  • Builder does not spot the malware in the built product.
  • Testers do not spot the malware in final testing.
  • Release engineers do not spot the malware, and release the product.

Of course there are various points at which malware could be introduced, but for a developer to do so in a way consistent with his role as developer requires a systematic failure in the company’s procedures regarding information security, which implies that the CSO ought to be accountable in addition to the developer. It’s also, as with the Easter Egg case, symptomatic of a failure in the control of their development process, so the head of Engineering should be called to task as well. In addition, the head of IT needs to answer some uncomfortable questions.

So, as it stands, the proposed contract seems well-intentioned but inappropriate. Now what if it’s the thin end of a slippery iceberg? Could developers be held to account for introducing vulnerabilities into an application? The SANS contract is quiet on this point. It requires that the vendor provide a “certification package” consisting of the security documentation created throughout the development process. The package shall establish that “the security requirements, design, implementation, and test results were properly completed and all security issues were resolved appropriately”, and that “security issues discovered after delivery shall be handled in the same manner as other bugs and issues as specified in this Agreement”. In other words, the vendor should prove that all known vulnerabilities have been mitigated before shipment, and if a vulnerability is subsequently discovered and is dealt with in an agreed fashion, no-one did anything wrong.

That seems fairly comprehensive, and definitely places the onus directly on the vendor (there are various other parts of the contract that imply the same, such as the requirement for the vendor to carry out background checks and provide security training for developers). Let’s investigate the consequences for a few different scenarios.

1. The product is attacked via a vulnerability that was brought up in the original certification package, but the risk was accepted. This vulnerability just needs to be fixed and we all move on; the risk was known, documented and accepted, and the attack is a consequence of doing business in the face of known risks.

2. The product is attacked via a novel class of vulnerability, details of which were unknown at the time the certification package was produced. I think that again, this is a case where we just need to fix the problem, of course with sufficient attention paid to discovering whether the application is vulnerable in different ways via this new class of flaw. While developers should be encouraged to think of new ways to break the system, it’s inevitable that some unpredicted attack vectors will be discovered. Fix them, incorporate them into your security planning.

3. The product is attacked by a vulnerability that was not covered in the certification package, but that is a failure of the product to fulfil its original security requirements. This is a case I like to refer to as “someone fucked up”. It ought to be straightforward (if time-consuming) to apply a systematic security analysis process to an application and get a comprehensive catalogue of its vulnerabilities. If the analysis missed things out, then either it was sloppy, abbreviated or ignored.

Sloppy analysis. The security lead did not systematically enumerate the vulnerabilities, or missed some section of the app. The security lead is at fault, and should be responsible.

Abbreviated analysis. While the security lead was responsible for the risk analysis, he was not given the authority to see it through to completion or to implement its results in the application. Whoever withheld that authority is to blame and should accept responsibility. In my experience this is typically a marketing or product management decision, made while working backwards from a ship date to decide how much effort can be spent on the product and which tasks to drop. Sometimes it’s engineering management; it’s almost never project management.

Ignored analysis. Example: the risk of attack via buffer overflow is noted in the analysis, then the developer writing the feature doesn’t code bounds-checking. That developer has screwed up, and ought to be responsible for their mistake. If you think that’s a bit harsh, check this line from the “duties” list in a software engineer job ad:

Write code as directed by Development Lead or Manager to deliver against specified project timescales quality and functionality requirements

When you’re a programmer, it’s your job to bake quality in.

One Window that is good for Mac security

I realise now that I didn’t cover this when it happened back at the beginning of March, and that not everyone in either the Apple world or the general infosec community is aware of it. Nearly one month ago, Apple hired a new Security Product Manager (the position was vacant at the time of WWDC 2008, and I think it was just being covered by another product manager in the interim): welcome, Window Snyder.

Window has a good history in the infosec world; after working as security design architect at @stake, she moved to Microsoft to act as security sign-off for XP Service Pack 2 (Microsoft’s first OS release focussed solely on security improvements) and Windows Server 2003 (their first completely new OS release after the security push of 2002). It was during her watch that Microsoft became more open about their vulnerability reporting, and introduced “Patch Tuesday” to help systems administrators manage the patch lifecycle. I happen not to like the Patch Tuesday mentality, but at least Microsoft thought about the issue and reacted to it.

After Microsoft, Window became Chief Security Something-or-Other at Mozilla. Here she promoted measurement and tracking of security issues, process improvements and greater transparency, both in terms of Mozilla’s reporting and that of other vendors.

I think that, given the authority to make process and reporting changes regarding Apple’s security procedures, she will be a great addition to Apple’s security teams. Apple typically drop security updates without warning and with minimal information on the content and severity of the vulnerabilities addressed; they maintain what could be charitably described as an “arm’s length” relationship with security vendors and have a history of slow reaction to vulnerabilities discovered in open source components. I have great hope for those facets of Apple’s security work changing soon.