On localisation and security

Hot on the heels of Uli’s post on the problems of translation, I present another problem you might encounter while localising your code. This is a genuine bug (now fixed, of course) in code I have worked on in the past, only the data has been changed to protect the innocent.

We had a crash in the following line:

NSString *message = [NSString stringWithFormat:
	NSLocalizedString(@"%@ problems found", @"Discovery message"),
	problem];

There doesn’t appear to be anything wrong with that, does there? Well, as I say, it was a crasher. The app only crashed in one language though… for the purposes of this argument, we’ll assume it was English. Let’s have a look at English.lproj/Localizable.strings:

/* Discovery message */
"%@ problems found" = "%@ found in %@";

Erm, that’s not so good. It would appear that at runtime, the variadic method +[NSString stringWithFormat: (NSString *)fmt, ...] is expecting two arguments to follow fmt, but is only passed one, so it ends up reading its way off the end of the stack. That’s a classic format string vulnerability, but with a twist: none of our usual tools (by which I mean the various -Wformat flags and the static analyser) can detect this problem, because the format string is not contained in the code.

This problem should act as a reminder to ensure that the permissions on your app’s resources are correct, not just on the binary—an attacker can cause serious fun just by manipulating a text file. It should also suggest that you audit your translators’ work carefully, to ensure that these problems don’t arise in your app even without tampering.
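
One cheap defence, sketched here from scratch rather than taken from the original fix (the helper names are mine, and the specifier counting is deliberately naive), is to refuse to use a translation whose format specifiers don’t match those of the development-language key before handing it to +stringWithFormat::

#import <Foundation/Foundation.h>

// Counts "%@" occurrences only; a real audit would parse every printf-style
// specifier, including positional ones such as %1$@.
static NSUInteger SMPCountObjectSpecifiers(NSString *format) {
	return [[format componentsSeparatedByString: @"%@"] count] - 1;
}

// Returns the localized format only if it expects the same number of object
// arguments as the key; otherwise falls back to the key itself.
static NSString *SMPSafeLocalizedFormat(NSString *key, NSString *comment) {
	NSString *localized = NSLocalizedString(key, comment);
	if (SMPCountObjectSpecifiers(localized) != SMPCountObjectSpecifiers(key)) {
		return key;
	}
	return localized;
}

The crashing line above would then take its format from SMPSafeLocalizedFormat(@"%@ problems found", @"Discovery message"), so a bad translation degrades to an untranslated message rather than a walk off the end of the argument list (note that genstrings only finds literal NSLocalizedString calls, so a wrapper like this has tooling costs).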

Posted in buffer-overflow, l10n, Mac, Vulnerability | 2 Comments

Which vendor “is least secure”?

The people over at Intego have a blog post, Which big vendor is least secure? They argue that because Microsoft have upped their game, malware authors have started to target other products, notably those produced by Adobe and Apple.

That doesn’t really address the question though: which big vendor is least secure (or more precisely, which big vendor creates the least secure products)? It’s an interesting question, and one that’s so hard to answer, people usually get it wrong.

The usual metrics for vendor software security are:

  • Number of vulnerability reports/advisories last year
  • Speed of addressing reported vulnerabilities

Both are just proxies for the question we really want to know the answer to: “what risk does this product expose its users to?” Each has drawbacks when used as such a proxy.

A company’s list of previous vulnerabilities seems like it should correlate with its development practices – if they were any good at threat modelling, they wouldn’t have released software with those vulnerabilities in it, right? Well, maybe. But maybe they did do some analysis, discovered the vulnerability, and decided to accept it. Perhaps the vulnerability reports were actually the result of their improved secure development lifecycle, and some new technique, tool or consultant has patched up a whole bunch of issues. Essentially all we know is what problems have been addressed and who found them, and we can tell something about the risk that users were exposed to while those vulnerabilities were present. Actually, we can’t tell too much about that unless we can find evidence that it was exploited (or not, which is harder). We really know nothing about the remaining risk profile of the application – have 1% or 100% of its vulnerabilities been addressed?

The only time we really know something about the present risk is in the face of zero-day vulnerabilities, because we know that a problem exists and has yet to be addressed. But reports of zero-days are comparatively rare, because the people who find them usually have no motivation to report them. It’s only once the zero-day gets exploited, and the exploit gets discovered and reported that we know the problem existed in the first place.

The speed of addressing vulnerabilities appears to tell us something about the vendor’s ability to react to security issues. Well, you might think it does; actually it tells you a lot more about the vendor’s perception of their customers’ appetite for installing updates. Look at enterprise-focussed vendors like Sophos and Microsoft, and you’ll find that most security patches are distributed on a regular schedule so that sysadmins know when to expect them and can plan their testing and deployment accordingly. Both companies have issued out-of-band updates, but only in extreme circumstances.

Compare that model with Apple’s, a company that is clearly focussed on the consumer market. Apple typically have an ad hoc (or at least opaque) update schedule, with security and non-security content alike bundled into infrequent patch releases. Security content is simultaneously released for some earlier operating systems in a separate update. Standalone security updates are occasionally seen on the Mac, rarely (if ever) on the iPhone.

I don’t really use any Adobe software so had to research their security update schedule specifically for this post. In short, it looks like they have very frequent security updates, but without any public schedule. Using Adobe Reader is an exercise in unexpected update installation.

Of course, we can see when the updates come out, but that doesn’t directly mean we know how long they take to fix problems – for that we need to know when problems were reported. Microsoft’s monthly updates don’t necessarily address bugs that were reported within the last month, they might be working on a huge backlog.

Where we can compare vendors is in situations where they all ship the same component with the same vulnerabilities, and must provide the same update. The more reactive companies (who don’t think their users mind installing updates) will release the fixes first. In the case of Apple we can compare their fixes of shared components like open source UNIX tools or Java with other vendors – Linux distributors and Oracle mainly. It’s this comparison that Apple frequently loses, by taking longer to release the same patch than Oracle, Red Hat, Canonical and friends do.

So ultimately what we’d like to know is “which vendor exposes its customers to most risk?”, for which we’d need an honest, accurate and comprehensive risk analysis from each vendor or an independent source. Of course, few customers are going to want to wade through a full risk analysis of an operating system.

Posted in Business, Responsibility, threatmodel, Vulnerability | 2 Comments

Why passwords aren’t always the right answer.

I realised something yesterday. I don’t know my master password.

Users of Mac OS X can use FileVault, a data protection feature that replaces the user’s home folder with an encrypted disk image. Encrypted disk images are protected by AES-128 or AES-256 encryption, but to get at the private key you need to supply one of two pieces of information. The first is the user’s login password, and the second is a private key for a recovery certificate. That private key is stored in a dedicated keychain, which is itself protected by… the master password. More information on the mechanism is available both in Professional Cocoa Application Security and Enterprise Mac.

Anyway, so this password is very useful – any FileVault-enabled home folder can be opened by the holder of the master password. Even if the user has forgotten his login password, has left the company or is being awkward, you can get at the encrypted content. It’s also hardly ever used. In fact, I’ve never used my own master password since I set it – and as a consequence have forgotten it.

There are a few different ways for users to recall passwords – by recital, by muscle memory or by revision. So when you enter the password, you either remember what the characters in the password are, where your hands need to be to type it, or you look at the piece of paper/keychain where you wrote it down. Discounting the revision option (the keychain is off the menu, because if you forget your login password you can’t decrypt your login keychain in order to view the recorded password), the only way to reinforce a password in your memory is to use it. And you never use the FileVault master password.

I submit that as a rarely-used authentication step, the choice of a password to protect FileVault recovery is a particularly bad one. Of course you don’t want attackers to be able to use the recovery mechanism, but you do want to be sure that when you really need to recover your encrypted data, the OS doesn’t keep you out too.

Posted in Encryption, Keychain, Mac, password | 3 Comments

WWDC dates announced

The whole of Twitter has imploded after noticing that Apple has announced the dates for WWDC, this year June 7-11. That’s too short notice for me to go, and having only recently started working again after a few months concentrating solely on Professional Cocoa Application Security, I can’t scrape together the few thousand pounds needed to reserve flights, hotel and ticket at a month’s notice.

I hope that those of you who are going have a great time. The conference looks decidedly thin on Mac content this year, and while I still class myself as more of a Mac developer than an iP* developer, that shouldn’t be too much of a problem. The main value in WWDC is in the social/networking side first, the labs second, and the lecture content third – so as long as you can find an engineer in the labs who remembers how a Mac works, you’ll probably still have a great week and learn a lot.

Posted in carbon, conference, nextstep | 2 Comments

The difference between NSTableView and UITableView

A number of times, I’ve chased myself down rat holes in iPhone projects because I’ve created a design or implementation that assumes UITableView and NSTableView are similar objects. They aren’t.

The main problem I come across is related to how the cells are treated in Cocoa and in Cocoa touch. An AppKit table comprises columns, each of which uses a cell to display its content. A cell contains the drawing and event-handling stuff of a view, but nothing to do with its place in the view hierarchy or responder chain. It’s essentially a light-weight view. For each row in the table, NSTableColumn takes its cell, configures it for the content in that row and then draws the cell at its location in the column. No matter how many rows there are, a single cell is used.
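
To see the shape of that in code, here is a minimal data source sketch of my own (the class name and the problems array are assumptions, not anything from the post): the data source only hands back values, and the column’s single cell is configured with each value in turn as AppKit draws the rows.

#import <Cocoa/Cocoa.h>

// Minimal sketch: a data source whose "problems" array holds one string per row.
@interface SMPProblemListDataSource : NSObject <NSTableViewDataSource> {
	NSArray *problems;
}
@end

@implementation SMPProblemListDataSource

- (NSInteger)numberOfRowsInTableView:(NSTableView *)tableView {
	return (NSInteger)[problems count];
}

// No cell management here: AppKit re-uses the column's single dataCell for
// every row, so the data source only supplies the content to display.
- (id)tableView:(NSTableView *)tableView objectValueForTableColumn:(NSTableColumn *)tableColumn row:(NSInteger)row {
	return [problems objectAtIndex:row];
}

@end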

UIKit works differently. Of course a UITableView only has one column, but it also displays views rather than cells. This is good, but leads to the key distinction that always trips me up: you can’t use the same view more than once in a table view. Of course, sections in a UITableView will often have more than one row, but each row that is visible on-screen needs its own instance of UITableViewCell (which is a subclass of UIView, and therefore a view in the traditional sense rather than a cell). If you try to re-use the same instance multiple times, the table view will configure each row but only the last one it prepared will be drawn.

So what’s this -reuseIdentifier stuff? That’s related to caching views for scrolling. Imagine a table view with 10 rows, of which 4 can be seen on screen at once, each using the same type of cell. When the table view first becomes visible there will be 4 UITableViewCell instances in use, displaying rows 0-3. Now you start to scroll the view. UITableView finds it needs an extra cell to display row 4, which is now partially on-screen, while row 0 is starting to slide off. When row 0 disappears completely, the table view could just delete its cell – but rather than do that, it adds it to a queue of reusable cells. When row 5 starts to appear, the table view can re-use the object it already created for row 0, because it’s the same type of cell as the one needed for row 5 and is currently unused.
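
In code, that’s the familiar dequeue pattern in the data source’s cell callback. This is a minimal sketch rather than anything from a real project; the identifier and label text are made up.

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
	static NSString *CellIdentifier = @"RowCell";
	// Ask for a cell that has scrolled off-screen and is sitting in the reuse
	// queue; if there isn't one yet, create a fresh instance.
	UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
	if (cell == nil) {
		cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
		                       reuseIdentifier:CellIdentifier] autorelease];
	}
	// Configure the (new or recycled) cell for this particular row.
	cell.textLabel.text = [NSString stringWithFormat:@"Row %ld", (long)indexPath.row];
	return cell;
}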

So, that’s that really. Note to self: don’t treat UIKit like it’s just AppKit, or you’ll end up wasting a day of coding.

Posted in cocoa, iPad, iPhone, objc | 3 Comments

Regaining your identity

In my last post, losing your identity, I pointed out an annoying problem with the Sparkle update framework, in that if you lose your private key you can no longer post any updates. Using code signing identities would offer a get-out, in addition to reducing the complexity associated with releasing a build. You do already sign your apps, right?

I implemented a version of Sparkle that does codesign validation, which you can grab using git or view on github. After Sparkle has downloaded its update, it will test that the new application satisfies the designated requirement for the host application – in other words, that the two are the same app. It will not replace the host unless they are the same app. Note that this feature only works on 10.6, because I use the new Code Signing Services API in Security.framework.
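
For the curious, the gist of the check looks something like the sketch below. This is a rough outline of the approach rather than the actual patch, and the function name is mine; the real details are in the branch linked above.

#import <Foundation/Foundation.h>
#import <Security/Security.h>

// Does the downloaded bundle satisfy the running app's designated requirement?
static BOOL SMPUpdateMatchesHostIdentity(NSURL *downloadedAppURL) {
	BOOL matches = NO;
	SecCodeRef selfCode = NULL;
	SecStaticCodeRef selfStatic = NULL;
	SecStaticCodeRef updateCode = NULL;
	SecRequirementRef designated = NULL;

	// Get a code object for the running (host) application, then ask for its
	// designated requirement: the answer to "what does it mean to be this app?"
	if (SecCodeCopySelf(kSecCSDefaultFlags, &selfCode) == errSecSuccess &&
	    SecCodeCopyStaticCode(selfCode, kSecCSDefaultFlags, &selfStatic) == errSecSuccess &&
	    SecCodeCopyDesignatedRequirement(selfStatic, kSecCSDefaultFlags, &designated) == errSecSuccess &&
	    SecStaticCodeCreateWithPath((CFURLRef)downloadedAppURL, kSecCSDefaultFlags, &updateCode) == errSecSuccess) {
		// Valid signature *and* satisfies the host's designated requirement.
		matches = (SecStaticCodeCheckValidity(updateCode, kSecCSDefaultFlags, designated) == errSecSuccess);
	}

	if (selfCode) CFRelease(selfCode);
	if (selfStatic) CFRelease(selfStatic);
	if (designated) CFRelease(designated);
	if (updateCode) CFRelease(updateCode);
	return matches;
}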

Posted in Codesign, Crypto, PCAS, Updates | Leave a comment

Losing your identity

Developers make use of cryptographic signatures in multiple places in the software lifecycle. No iPad or iPhone application may be distributed without having been signed by the developer. Mac developers who sign their applications get to annoy their customers much less when they ship updates, and indeed the Sparkle framework allows developers to sign the download file for each update (which I heartily recommend you do). PackageMaker allows developers to sign installer packages. In each of these cases, the developer provides assurance that the application definitely came from their build process, and definitely hasn’t been changed since then (for wholly reasonable values of “definitely”, anyway).

No security measure comes for free. Adding a step like code or update signing mitigates certain risks, but introduces new ones. That’s why security planning must be an iterative process – every time you make changes, you reduce some risks and create or increase others. The risks associated with cryptographic signing are that your private key could be lost or deleted, or it could be disclosed to a third party. In the case of keys associated with digital certificates, there’s also the risk that your certificate expires while you’re still relying on it (I’ve seen that happen).

Of course you can take steps to protect the key from any of those eventualities, but you cannot reduce the risk to zero (at least not while spending a finite amount of time and effort on the problem). You should certainly have a plan in place for migrating from an expired identity to a new one. Having a contingency plan for dealing with a lost or compromised private key will make your life easier if it ever happens – you can work to the plan rather than having to both manage the emergency and figure out what you’re supposed to be doing at the same time.

iPhone/iPad signing certificate compromise

This is the easiest situation to deal with. Let’s look at the consequences for each of the problems identified:

Expired Identity
No-one can submit apps to the app store on your behalf, including you. No-one can provision betas of your apps. You cannot test your app on real hardware.
Destroyed Private Key
No-one can submit apps to the app store on your behalf, including you. No-one can provision betas of your apps. You cannot test your app on real hardware.
Disclosed Private Key
Someone else can submit apps to the store and provision betas on your behalf. (They can also test their apps on their phone using your identity, though that’s hardly a significant problem.)

In the case of an expired identity, Apple should lead you through renewal instructions using iTunes Connect. You ought to get some warning, and it’s in their interests to help you as they’ll get another $99 out of you :-). There’s not really much of a risk here; you just need to make a note in your calendar to sort out renewal.

The cases of a destroyed or disclosed private key are exceptional, and you need to contact Apple to get your old identity revoked and a new one issued. Speed is of the essence if there’s a chance your private key has been leaked, because if someone else submits an “update” on your behalf, Apple will treat it as a release from you. It will be hard for you to repudiate the update (claim it isn’t yours) – after all, it’s signed with your identity. If you manage to deal with Apple quickly and get your identity revoked, the only remaining possibility is that an attacker could have used your identity to send out some malicious apps as betas. Because of the limited exposure beta apps have, the impact should be slight, though you’ll probably want to communicate the issue to the public to motivate users of “your” beta app to remove it from their phones.

By the way, notice that no application on the store has actually been signed by the developer who wrote it – the .ipa bundles are all re-signed by Apple before distribution.

Mac code signing certificate compromise

Again, let’s start with the consequences.

Expired Identity
You can’t sign new products. Existing releases continue to work, as Mac OS X ignores certificate expiration in code signing checks by default.
Destroyed Private Key
You can’t sign new products.
Disclosed Private Key
Someone else can sign applications that appear to be yours. Such applications will receive the same keychain and firewall access rights as your legitimate apps.

If you just switch identities without any notice, there will be some annoyances for users – the keychain, firewall and other dialogues indicating that your application cannot be identified as a legitimate update will appear for the update where the identities switch. Unfortunately this situation cannot be distinguished from a Trojan horse version of your app being deployed (even more annoyingly, there’s no good way to inspect an application distributor’s identity, so users can’t make the distinction themselves). It would be good to make the migration seamless, so that users don’t get bugged by the update warnings and so that they continue to treat such warnings as suspicious when they do appear.

When you’re planning a certificate migration, you can arrange for that to happen easily. Presumably you know how long it takes for most users to update your app (where “most users” is defined to be some large fraction such that you can accept having to give the remainder additional support). At least that long before you plan to migrate identities, release an update that changes your application’s designated requirement such that it’s satisfied by both old and new identities. This update should be signed by your existing (old) identity, so that it’s recognised as an update to the older releases of the app. Once that update’s had sufficient uptake, release another update that’s satisfied by only the new identity, and signed by that new identity.
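
For illustration only (this requirement is made up to match the style of the example further down, and the second common name is hypothetical), such a transitional designated requirement could simply accept either identity:

identifier "com.securemacprogramming.MyGreatApp" and (cert leaf[subject.CN]="Secure Mac Programming Code Signing" or cert leaf[subject.CN]="Secure Mac Programming Code Signing 2") and anchor trusted

The intermediate release is still signed with the old identity, so older copies accept it as an update; because it declares the combined requirement, the subsequent release signed only with the new identity is also recognised as the same application.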

If you’re faced with an unplanned identity migration, that might not be possible (or in the case of a leaked private key, might lead to an unacceptably large window of vulnerability). So you need to bake identity migration readiness into your release process from the start.

Assuming you use certificates provided by vendor CAs whose own identities are trusted by Mac OS X, you can provide a designated requirement that matches any certificate issued to you. The requirement would be of the form (warning: typed directly into MarsEdit):

identifier "com.securemacprogramming.MyGreatApp" and cert leaf[subject.CN]="Secure Mac Programming Code Signing" and cert leaf[subject.O]="Secure Mac Programming Plc." and anchor[subject.O]="Verisign, Inc." and anchor trusted

Now if one of your private keys is compromised, you coordinate with your CA to revoke the certificate and migrate to a different identity. The remaining risks are that the CA might issue a certificate with the same common name and organisation name to another entity (something you need to take up with the CA in their service-level agreement), or that Apple might choose to trust a different CA called “Verisign, Inc.”, which seems unlikely.

If you use self-signed certificates, then you need to manage this migration process yourself. You can generate a self-signed CA from which you issue signing certificates, then you can revoke individual signing certs as needed. However, you now have two problems: distributing the certificate revocation list (CRL) to customers, and protecting the private key of the top-level certificate.

Package signing certificate compromise

The situation with signed Installer packages is very similar to that with signed Mac applications, except that there’s no concept of upgrading a package and thus no migration issues. When a package is installed, its certificate is used to check its identity. You just have to make sure that your identity is valid at time of signing, and that any certificate associated with a disclosed private key is revoked.

Sparkle signing key compromise

You have to be very careful that your automatic update mechanism is robust. Any other bug in an application can be fixed by deploying an update to your customers. A bug in the update mechanism might mean that customers stop receiving updates, making it very hard for you to tell them about a fix for that problem, or ship any fixes for other bugs. Sparkle doesn’t use certificates, so keys don’t have any expiration associated with them. The risks and consequences are:

Destroyed Private Key
You can’t update your application any more.
Disclosed Private Key
Someone else can release an “update” to your app, provided they can get the Sparkle instance on the customer’s computer to download it.

In the case of a disclosed private key, the conditions that need to be met to actually distribute a poisoned update are specific and hard to achieve. Either the webserver hosting your appcast or the DNS for that server must be compromised, so that the attacker can get the customer’s app to think there’s an update available that the attacker controls. All of that means that you can probably get away with a staggered key update without any (or many, depending on who’s attacking you) customers getting affected:

  • Release a new update signed by the original key. The update contains the new key pair’s public key (see the sketch below).
  • Some time later, release another update signed by the new key.
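
To make the first step concrete: in Sparkle of this vintage the public key ships inside the app bundle and is named in Info.plist, so the interim update (still signed with the old key) just carries the replacement key file in its Resources. The file name below is the conventional one; treat the snippet as an illustration rather than a quote from any real project.

<!-- Info.plist of the interim update: point Sparkle at the new public key
     bundled in this release's Resources. -->
<key>SUPublicDSAKeyFile</key>
<string>dsa_pub.pem</string>

Once that release has good uptake, subsequent updates can be signed with the new private key and will verify against the key the installed copies now carry.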

The situation if you actually lose your private key is worse: you can’t update at all any more. You can’t generate a new key pair and start using that, because your updates won’t be accepted by the apps already out in the field. You can’t bake a “just in case” mechanism in, because Sparkle only expects a single key pair. You’ll have to find a way to contact all of your customers directly, explain the situation and get them to manually update to a new version of your app. That’s one reason I’d like to see auto-update libraries use Mac OS X code signing as their integrity-check mechanisms: so that they are as flexible as the platform on which they run.

Posted in Codesign, Crypto, iPad, iPhone, Mac, PCAS, Policy, Updates | Comments Off on Losing your identity

On writing a book

Well, I’ve performed my final author’s review, and Professional Cocoa Application Security is all with the printers. This post is about my experiences writing the book, not the book material itself.

My original motivation for writing PCAS was that it was a topic someone needed to talk about, and nobody had hitherto done the talking. I was actually initially approached by Wiley to see if I’d write anything at all – their commissioning editor had seen my Objective-C FAQ and this blog, and liked my style. I said that I didn’t want to write a book, I wanted to write this book. It was the best way I could pay back the community that has helped me so much in the years I’ve been a Mac developer.

It took about six months to write the draft – my original estimate was much shorter, but once we realised I couldn’t meet it we revised the schedule, after which I stayed on track. That’s six months of nearly full-time work. Some other books probably don’t take so long, but in this case I had set myself a very ambitious scope and needed to research quite a lot of the topics before I could write about them. If you’ve already got a series of blog posts, training material or something else that you just want to turn into a book, I can imagine the drafting process being much quicker.

I’ve found that book authorship is not the best vehicle for self-study. You get a biased view of the material, looking for things that would be interesting to readers rather than things you will need to use yourself. Because the goal of the book is to provide utility to the readers, you end up with a gotcha-oriented approach to research, looking for the subtle benefits or issues that are not obvious on a casual inspection. That said, it was still a good motivation to learn about the technologies I wrote about, so I’m glad I did it. Parts of the writing process were a lot of fun: I got to find out about some cool frameworks and APIs, absorb loads of information and re-emit it in a form that is, I hope, engaging and interesting. I’ve looked at my bookshelves and have about 6ft of books that I used as source material – and that doesn’t include websites, ebooks and journal papers. On the other hand, I’m not going to deny that I had occasional days that just felt like a long slog to get the day’s section written. I didn’t mind solving hard problems, but there are some subjects that just seem impossible to say anything interesting about. It’s when writing those sections that you find yourself staring at a half-written sentence for an hour, wondering just what it was you were thinking when you wrote the ToC.

You’re not going to get rich off the advance :). I was in a good position where I could live off practically no income while writing, meaning that devoting a few months to producing the drafts was not a problem. What I have become rich in is exposure and recognition, even before the book was published. Because both the proposal and the book content must be peer-reviewed by a technical reviewer, “I am writing a book” says “there are people out there who trust that I know my subject”. Of course, “I have written a book and you can read it” carries more weight, so I expect this exposure to increase after publication.

I’ve worked on reviewing proposals too, and the things you really need to make sure of if you are trying to punt a proposal to publishers are:

  • You need to tell the publishers that the market for your book exists, who is in that market and how big it is. They’re not going to go and look for the buyers on your behalf (but they will get a reviewer to make sure you’re not talking bullshit).
  • Having identified your reader, the goals and content of the book must be appropriate to the reader. Don’t put an introduction to Xcode in your Advanced iPad Apps book, just to make up an example.

This theme carries on into the review process for the actual content. The technical reviewer (a role I’ve also taken before) is not just there to check that the code compiles. Responsibilities include verifying the accuracy of the content and appropriateness for the target reader, and indeed review comments I’ve made on book drafts have been split roughly evenly between “this isn’t quite right” and “your reader won’t understand this” (though I’m more verbose in the actual review).

So, in short, you will not make money writing a book. You will gain kudos and satisfaction. If you’ve got something that you think the world desperately needs to know, and you know that you can explain it in a way the world will want to pay attention to, then by all means write! If you want to make a few thousand dollars, or want an easy project between apps, then I’d suggest finding something else. Writing’s fun, and it’s worthwhile, but it’s certainly not an easy life.

Posted in book, cocoa, security | 1 Comment

Rehearsals in beta!

I have a new application, Rehearsals, an online practice diary for musicians. If that sounds like the kind of thing you’re interested in, and you have Mac OS X 10.6 or newer, then please download the beta release and test it out. There’s absolutely no charge, and if you submit feedback to support <at> rehearsalsapp <dot> com you’ll be eligible for a free licence for version 1.0 once that’s released. There are no limitations on the beta version, so please do download and start using it!

You can follow @rehearsals_app for updates to the beta programme (new releases are automatically downloaded using Sparkle, if you enable it in the app).

Posted in cocoa, metadev, rehearsals, test | Leave a comment

Security flaw liability

The Register recently ran an opinion piece called Don’t blame Willy the Mailboy for software security flaws. The article is a reaction to the following excerpt from a SANS sample application security procurement contract:

No Malicious Code

Developer warrants that the software shall not contain any code that does not support a software requirement and weakens the security of the application, including computer viruses, worms, time bombs, back doors, Trojan horses, Easter eggs, and all other forms of malicious code.

That seems similar to a requirement I have previously almost proposed voluntarily adopting:

If one of us [Mac developers] were, deliberately or accidentally, to distribute malware to our users, they would be (rightfully) annoyed. It would severely disrupt our reputation if we did that; in fact some would probably choose never to trust software from us again. Now Mac OS X allows us to put our identity to our software using code signing. Why not use that to associate our good reputations as developers with our software? By using anti-virus software to improve our confidence that our development environments and the software we’re building are clean, and by explaining to our customers why we’ve done this and what it means, we effectively pass some level of assurance on to our customer. Applications signed by us, the developers, have gone through a process which reduces the risk to you, the customers. While your customers trust you as the source of good applications, and can verify that you were indeed the app provider, they can believe in that assurance. They can associate that trust with the app; and the trust represents some tangible value.

Now what the draft contract seems to propose (and I have good confidence in this, due to the wording) is that if a logic bomb, back door, Easter Egg or whatever is implemented in the delivered application, then the developer who wrote that misfeature has violated the contract, not the vendor. Taken at face value, this seems just a little bad. In the subset of conditions listed above, the developer has introduced code into the application that was not part of the specification. It either directly affects the security posture of the application, or is of unknown quality because it’s untested: the testers didn’t even know it was there. This is clearly the fault of the developer, and the developer should be accountable. In most companies this would be a sacking offence, but the proposed contract goes further and says that the developer is responsible to the client. Fair enough, although the vendor should take some responsibility too, as a mature software organisation should have a process such that none of its products contain unaccounted code. This traceability from requirement to product is the daily bread of some very mature development lifecycle tools.

But what about the malware cases? It’s harder to pin the blame for malware injection on the developer alone, and I would say that actually the vendor organisation should be held responsible, and should deal with punishment internally. Why? Because there are too many links in the chain for any one person to put malware into a software product. Let’s say one developer does decide to insert malware:

  • Developer introduces malware to his workstation. This means that any malware prevention procedures in place on the workstation have failed.
  • Developer commits malware to the source repository. Any malware prevention procedures in place on the SCM server have failed.
  • Developer submits build request to the builders.
  • Builder checks out the build input, does not notice the malware, and constructs the product.
  • Builder does not spot the malware in the built product.
  • Testers do not spot the malware in final testing.
  • Release engineers do not spot the malware, and release the product.

Of course there are various points at which malware could be introduced, but for a developer to do so in a way consistent with his role as developer requires a systematic failure in the company’s procedures regarding information security, which implies that the CSO ought to be accountable in addition to the developer. It’s also, as with the Easter Egg case, symptomatic of a failure in the control of their development process, so the head of Engineering should be called to task as well. In addition, the head of IT needs to answer some uncomfortable questions.

So, as it stands, the proposed contract seems well-intentioned but inappropriate. Now what if it’s the thin end of a slippery iceberg? Could developers be held to account for introducing vulnerabilities into an application? The SANS contract is quiet on this point. It requires that the vendor provide a “certification package” consisting of the security documentation created throughout the development process. The package shall establish that “the security requirements, design, implementation, and test results were properly completed and all security issues were resolved appropriately”, and that “Security issues discovered after delivery shall be handled in the same manner as other bugs and issues as specified in this Agreement”. In other words, the vendor should prove that all known vulnerabilities have been mitigated before shipment, and if a vulnerability is subsequently discovered and is dealt with in an agreed fashion, no-one did anything wrong.

That seems fairly comprehensive, and definitely places the onus directly on the vendor (there are various other parts of the contract that imply the same, such as the requirement for the vendor to carry out background checks and provide security training for developers). Let’s investigate the consequences for a few different scenarios.

1. The product is attacked via a vulnerability that was brought up in the original certification package, but the risk was accepted. This vulnerability just needs to be fixed and we all move on; the risk was known, documented and accepted, and the attack is a consequence of doing business in the face of known risks.

2. The product is attacked via a novel class of vulnerability, details of which were unknown at the time the certification package was produced. I think that again, this is a case where we just need to fix the problem, of course with sufficient attention paid to discovering whether the application is vulnerable in different ways via this new class of flaw. While developers should be encouraged to think of new ways to break the system, it’s inevitable that some unpredicted attack vectors will be discovered. Fix them, incorporate them into your security planning.

3. The product is attacked by a vulnerability that was not covered in the certification package, but that is a failure of the product to fulfil its original security requirements. This is a case I like to refer to as “someone fucked up”. It ought to be straightforward (if time-consuming) to apply a systematic security analysis process to an application and get a comprehensive catalogue of its vulnerabilities. If the analysis missed things out, then either it was sloppy, abbreviated or ignored.

Sloppy analysis. The security lead did not systematically enumerate the vulnerabilities, or missed some section of the app. The security lead is at fault, and should be responsible.

Abbreviated analysis. While the security lead was responsible for the risk analysis, he was not given the authority to see it through to completion or to implement its results in the application. Whoever withheld that authority is to blame and should accept responsibility. In my experience this is typically a marketing or product management decision, as they try to drop tasks to work backwards from a ship date to the amount of effort spent on the product. Sometimes it’s engineering management; it’s almost never project management.

Ignored analysis. Example: the risk of attack via buffer overflow is noted in the analysis, then the developer writing the feature doesn’t code bounds-checking. That developer has screwed up, and ought to be responsible for their mistake. If you think that’s a bit harsh, check this line from the “duties” list in a software engineer job ad:

Write code as directed by Development Lead or Manager to deliver against specified project timescales quality and functionality requirements

When you’re a programmer, it’s your job to bake quality in.
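
To pick up the buffer-overflow example above with an illustration of my own (the function and its name are not from the contract, the job ad, or any real product), honouring the analysis can come down to a single line:

#include <string.h>

// Copy a C string into a fixed-size buffer without being able to overflow it.
void copy_name(char *dest, size_t dest_size, const char *src)
{
	// strcpy(dest, src) would ignore the analysis and allow an overflow;
	// strlcpy truncates at dest_size instead of writing past the buffer.
	strlcpy(dest, src, dest_size);
}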

Posted in Malware, Policy, Responsibility, Vulnerability | Comments Off on Security flaw liability