Security consultancy from the other side

I used to run an application security consultancy business, back before the kinds of businesses that knew they needed to consider application security had got past assessing whether to create mobile apps at all. Whoops!

Something that occasionally, nay, often happened was that clients would get frustrated if I didn’t give them a direct answer to a question they asked, or if the answer to an apparently simple question was a one-day workshop. It certainly seems sketchy that someone who charges a day rate would rather engage in another day’s work than write a quick email, doesn’t it?

Today I was on the receiving end of this interaction. I’m defining the data encryption standards for our applications, so I talked to our tame infosecnician about the problem. Of course, rather than giving specific recommendations about protocols, modes of operation, key lengths, rotation frequencies etc., he would only answer my questions with other questions.

And this is totally normal and OK. The reason that I want to encrypt (some of) this data is that its confidentiality and integrity are valuable to me (my employer) and it’s worth investing some of my (our) time in protecting those attributes. How much is it worth? How much risk am I willing to leave on the table? How long should the data be protected for? Who do I trust with the data, and how much do I trust them?

Only I can answer those questions, so it’s not up to someone else to answer them for me, but they can remind me that I need to ask them.

Don’t be a dick

In a recent post on device identifiers, I wrote a guideline that I’ve previously invoked when it comes to sharing user data. Here, in a form both more succinct and more complete than in the post linked above, is the Don’t Be A Dick Guide to Data Privacy:

  • The only things you are entitled to know are those things that the user told you.
  • The only things you are entitled to share are those things that the user permitted you to share.
  • The only entities with which you may share are those entities with which the user permitted you to share.
  • The only reason for sharing a user’s things is that the user wants to do something that requires sharing those things.

It’s simple, which makes for a good user experience. It’s explicit, which means culturally-situated ideas of acceptable implicit sharing do not muddy the issue.

It’s also general. One problem I’ve seen with privacy discussions is that different people have specific ideas of what the absolutely biggest privacy issue that must be solved now is. For many people, it’s location: they don’t like the idea that an organisation (public or private) can see where they are at any time. For others, it’s unique identifiers that would allow an entity to form an aggregate view of their data across multiple functions. For others, it’s conversations they have with their boss, mistress, whistle-blower or others.

Because the DBADG mentions none of these, it covers all of these. And more. Who knows what sensors and capabilities will exist in future smartphone kit? They might use mesh networks that can accurately position users in a crowd with respect to other members. They could include automatic person recognition to alert when your friends are nearby. A handset might include a blood sugar monitor. The fact is that by not singling out any particular form of data, the above guideline covers all of these and any others that I didn’t think of.

There’s one thing it doesn’t address: just because a user wants to share something, should the app allow it? This is particularly a question that makers of apps for children should ask themselves. Children (and everybody else) deserve the default-private treatment of their data that the DBADG promotes. However, children also deserve impartial guidance on what it is a good or a bad idea to share with the interwebs at large, and that should be baked into the app experience. “Please check with a responsible adult before pressing this button” does not cut it: just don’t give them the button.

Want to hire iamleeg?

Well, that was fun. For nearly a year I’ve been running Fuzzy Aliens, a consultancy for app developers to help get security and privacy requirements correct, reducing the burden on the users. This came after a year of doing the same as a contractor, and a longer period of helping out via conference talks, a book on the topic, podcasts and so on. I’ve even been helping the public to understand the computer industry at museums.

Everything changes, and Fuzzy Aliens is no exception. While I’ve been able to help out on some really interesting projects, for everyone from indies to banks, there hasn’t been enough of this work to pay the bills. I’ve spoken with a number of people on this, and have heard a range of opinions on why there isn’t enough mobile app security work out there to keep one person employed. My own reflection on the year leads me to these items as the main culprits:

  • There isn’t a high risk to developers associated with getting security wrong;
  • Avoiding certain behaviour to improve security can mean losing out to competitors who don’t feel the need to comply;
  • The changes I want to make to the industry won’t come from a one-person company; and
  • I haven’t done a great job of communicating the benefits of app security to potential customers.

Some people say things like Fuzzy Aliens is “too early”, or that the industry “isn’t ready” for such a service: those are actually indications that I haven’t made the industry ready; in other words, that I didn’t get the advantages across well enough. Anyway, the end results are that I can and will learn from Fuzzy Aliens, and that I still want to make the world a better place. I will be doing so through the medium of salaried employment. In other words, you can give me a job (assuming you want to). The timeline is this:

  • The next month or so will be my Time of Searching. If you think you might want to hire me, get in touch and arrange an interview during August or early September.
  • Next will come my Time of Changing. Fuzzy Aliens will still offer consultancy for a very short while, so if you have been sitting on the fence about getting some security expertise on your app, now is the time to do it. But this will be when I research the things I’ll need to know for…
  • whatever it is that comes next.

What do I want to do?

Well, of course my main areas of experience are in applications and tools for UNIX platforms—particularly Mac OS X and iOS—and application security, and I plan to continue in that field. A former manager of mine described me thus on LinkedIn:

Graham is so much more than a highly competent software engineer. A restless “information scout” – finding it, making sense of it, bearing on a problem at hand or forging a compelling vision. Able to move effortlessly between “big picture” and an obscure detail. Highly capable relationships builder, engaging speaker and persuasive technology evangelist. Extremely fast learner. Able to use all those qualities very effectively to achieve ambitious goals.

Those skills can best be applied strategically I think: so it’s time to become a senior/chief technologist, technology evangelist, technical strategy officer or developer manager. That’s the kind of thing I’ll be looking for, or for an opportunity that can lead to it. I want to spend an appreciable amount of time supporting a product or community that’s worth supporting: much as I’ve been doing for the last few years with the Cocoa community.

Training and mentoring would also be good things for me to do, I think. My video training course on unit testing seems to have been well-received, and of course I spent a lot of my consulting time on helping developers, project managers and C*Os to understand security issues in terms relevant to their needs.

Where do I want to do it?

Location is somewhat important, though obviously with a couple of years’ experience at telecommuting I’m comfortable with remote working too. The roles I’ve described above, which depend as much on relationships as on sitting at a computer, may be best served by split working.

My first preference for the location of my desk is a large subset of the south of the UK, bounded by Weston-Super-Mare and Lyme Regis to the west, Gloucester and Oxford to the north, Reading and Chichester to the east and the water to the south (though not the Solent: IoW is fine). Notice that London is outside this area: having worked for both employers and clients in the big smoke, I would rather not be in that city every day for any appreciable amount of time.

I’d be willing to entertain relocation elsewhere in Europe for a really cool opportunity. Preferably somewhere with a Germanic language because I can understand those (including, if push comes to shove, Icelandic and Faroese). Amsterdam, Stockholm and Dublin would all be cool. The States? No: I couldn’t commit to living over there for more than a short amount of time.

Who will you do it for?

That part is still open: it could be you. I would work for commercial, charity or government/academic sectors, but I have this restriction: you must not just be another contract app development agency/studio. You must be doing what you do because you think it’s worth doing, because that’s the standard I hold myself to. And charging marketing departments to slap their logo onto a UITableView displaying their blog’s RSS feed is not worth doing.

That’s why I’m not just falling back on contract iOS app development: it’s not very satisfying. I’d rather be paid enough to live doing something great, than make loads of money on asinine and unimportant contracts. Also, I’d rather work with other cool and motivated people, and that’s hard to do when you change project every couple of months.

So you’re doing something you believe in, and as long as you can convince me it’s worth believing in and will be interesting to do, and you know where to find me, then perhaps I’ll help you to do it. Look at my CV, then as I said before, e-mail me and we’ll sort out an interview.

I expect my reward to be dependent on how successful I make the product or community I’m supporting: it’s how I’ll be measuring myself so I hope it’s how you will be too. Of course, we all know that stock and options schemes are bullshit unless the stock is actually tradeable, so I’ll be bearing that in mind.

Some miscellaneous stuff

Here are some things I’m looking for, either to embrace or avoid, that don’t quite fit into the story above but are perhaps interesting to relate.

Things I’ve never done, but would

These aren’t necessarily things my next job must have, and aren’t all even work-related, but are things that I would take the opportunity to do.

  • Give a talk to an audience of more than 1,000 people.
  • Work in a field on a farm. Preferably in control of a tractor.
  • Write a whole application without using any accessors.
  • Ride a Harley-Davidson along the Californian coast.
  • Move the IT security industry away from throwing completed and deployed products at vulnerability testers, and towards understanding security as an appropriately-prioritised implicit customer requirement.
  • Have direct reports.

Things I don’t like

These are the things I would try to avoid.

  • “Rock star” developers, and companies who hire them.
  • Development teams organised in silos.
  • Technology holy wars.
  • Celery. Seriously, I hate celery.

OK, but first we like to Google our prospective hires.

I’ll save you the trouble.

Protecting source code

As I mentioned on the missing iDeveloper.tv Live episode, one of the consequences of the Gawker hack was that the source code for their internal software was leaked onto the Internet. I doubt any of my readers would want that to happen to their code, so I’m going to share the details of how I protect my clients’ code when I’m working. Maybe some of this will work for you.

In the office, I work at a desktop iMac. This has an external Time Machine backup disk and a Dropbox for off-site storage. However, client code does _not_ go onto the Dropbox. Instead I keep a separate, encrypted sparse disk image for each project I’m working on. The password for each is different. As well as protecting against snooping, this helps stop cross-contamination. I rarely have two such images mounted at once. Note that it’s not just source that goes into these images: build products, notes, Instruments traces, and images all go into the encrypted containers.
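If you want to try the same setup, hdiutil can create that kind of image from the Terminal, prompting for the image passphrase as it goes; the size, filesystem and names below are only placeholders:

  hdiutil create -size 20g -type SPARSE -fs HFS+J -encryption AES-256 -volname ClientProject ClientProject.sparseimage
  hdiutil attach ClientProject.sparseimage    # prompts for the passphrase and mounts the volume
  hdiutil detach /Volumes/ClientProject       # eject the volume when you stop work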

Obviously that means a lot of passwords, and no I can’t remember them all. I use a keychain. It locks automatically when not in use, and has a passphrase that’s different from my login passphrase.
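For a similar arrangement, the security command can create a separate keychain and set its locking behaviour; the keychain name and the five-minute timeout here are just examples:

  security create-keychain clients.keychain                    # prompts for a new passphrase
  security set-keychain-settings -lu -t 300 clients.keychain   # lock on sleep and after 300 seconds idle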

The devices I test on are all encrypted where available (if a client needs me to test on an iPhone 3G, then I can, but it isn’t encrypted). They are passphrase locked, set to require passphrase immediately. And I NEVER take them away from the desk before deleting any developer builds, unless I need to do something special like a real-world location services test.

I rarely do coding work on the laptop, but when I do I copy the appropriate encrypted image onto it. The laptop additionally has FileVault configured, though I’m evaluating full-disk encryption options. Keychain configuration as above, additionally with a password required on wake from sleep or screensaver, and a firmware password.

For pushing work back to clients: most of them use GitHub or Bitbucket, which offer SSL-encrypted connections to the repositories. Personally, I have a self-run repo host available over HTTPS or SSH, but will probably move that to a GitHub-like service because life’s too short. Their security policy seems acceptable to me.

On the Mac App Store

I’ve just come off iDeveloper.TV Live with Scotty and John, where we were talking about the Mac App Store. I had some material prepared about the security side of the app store that we didn’t get on to – here’s a quick write-up.

There’s a lot of discussion on twitter and the macsb mailing list, and doubtless elsewhere, about the encryption paperwork that Apple are making us fill in. It’s not Apple’s fault, it’s the U.S. Department of Commerce. You see, back in the cold war (and, frankly, ever since) the government have been of the opinion that encryption is a weapon (because it hides data from their agents) and so are powerful computers (because they can do encryption that’s expensive to crack). So the Bureau of Industry and Security developed the Export Administration Regulations to control the flow of such heinous weapons through the commercial sector.

Section 5, part 2 covers computer equipment and software. Specific provision is made for encryption; in the documentation we find that “Items may be controlled as encryption items even if the encryption is actually performed by the operating system, an external library, a third-party product or a cryptographic processor. If an item uses encryption functionality, whether or not the code that performs the encryption is included with the item, then BIS evaluates the item based on the encryption functionality it uses.”

So there you go. If you’re exporting software from the U.S. (and you are, if you’re selling via Apple’s app store) then you need to fill in the export notification.

Other Mac App Store security things, in “oh God is it that late already” format:

  • Receipt validation. No different really from existing licensing frameworks. All you can do is make the checks hard to find in the binary. I had an idea about a specific way to do that, but want to test it before I release it. As you’ve no doubt found, anti-cracking measures aren’t easy.
  • Users. The user base for the MAS will be wider, and less tech-savvy, than the users existing micro-ISVs are selling to. Make sure your intent with regard to user data, particularly the consequences of your app’s workflow, is clear.
  • Similarly, be clear about the content of updates. Clearer than Apple are: “contains various fixes and improvements” WTF?
  • As we’ve found with the iOS store, it’s harder to push an update out than when you host the download yourself. Getting security right (or, pragmatically, not too wrong) the first time avoids emergency update submissions.
  • Permissions. Your app needs to run entirely as the current user, who may not be an admin. If you’re a developer, you’re probably running as an admin. Test with a non-admin account. Better, do all of your development in a non-admin account. Add yourself to the _developer group so you can still use gdb/Instruments properly.
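On that last point, one way to give a non-admin account the debugging rights it needs is to add it to the _developer group; the account name here is obviously a placeholder:

  sudo dseditgroup -o edit -a mydevaccount -t user _developer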

A site for discussing app security

There’s a new IT security site over at Stack Exchange. Questions and answers on designing and implementing IT security policy, and on app security are all welcome.

I’m currently a moderator at the site, but that’s just an interim thing while the site is being bootstrapped. Obviously, if people subsequently vote for me as a permanent moderator I’ll stay in, but the converse is also true. Anyway, check out the site, ask and answer questions, let’s make it as good a venue for app security discussion as stackoverflow.com is for general programming.

On Trashing

Back in the 1980s and 1990s, people who wanted to clandestinely gain information about a company or organisation would go trashing.[*] That just meant diving in the bins to find information about the company structure – who worked there, who reported to whom, what orders or projects were currently in progress etc.

You’d think that these days trashing had been thwarted by the invention of the shredder, but no. While many companies do indeed destroy or shred confidential information, this is not universal. Those venues where shredding is common leave it up to their staff to decide what goes in the bin and what goes in the shredder; these staff do not always get it correct (they’re likely to think about whether a document is secret to them rather than the impact on the company). Nor do they always want to think about which bin to put a worthless sheet of paper in.

Even better: in those places that do shred secret papers, they helpfully collect all of the secrets in big bins marked “To Shred” to help the trashers :). They then collect all of these bins into a big hopper, and leave that around (sometimes outside the building, in a covered or open yard) for the destruction company to come and pick up.

So if an attacker can get entry to the building, he just roots around in the “To Shred” bins. If someone asks, he tells them he put a printout there in the morning but now thinks he needs it again. Even if he can’t get in, he just dives in the hopper outside and gets access to all those juicy secrets (with none of the banana peelings and teabags associated with the non-secret bin).

But attackers who don’t like getting their hands dirty can gain some of the same information using technological means. LinkedIn will helpfully provide a list of employees – including their positions, so the public can find out something of the reporting structure. Some will be looking for recruitment opportunities – these are great people to phone for more information! So are ex-employees, something LinkedIn will also help you out with.

But the fun doesn’t stop there. Once our attacker has the names, he now goes over to Twitter and Facebook. There he can find people griping about work…or describing what the organisation is up to, to put it another way.

All of the above information about 21st-century trashing comes from real experience with an office I was invited into in the last 12 months. Of course, I will not name the organisation in charge of that office (or their data destruction company). The conclusion is that trashing is alive and well, and that those who participate need no longer root around in, well, in the trash. How does your organisation deal with the problem?

[*] for me, it was mainly the 1990s. I was the perfect size in the 1980s for trashing, but still finding my way around a Dragon 32.

Losing your identity

Developers make use of cryptographic signatures in multiple places in the software lifecycle. No iPad or iPhone application may be distributed without having been signed by the developer. Mac developers who sign their applications get to annoy their customers much less when they ship updates, and indeed the Sparkle framework allows developers to sign the download file for each update (which I heartily recommend you do). PackageMaker allows developers to sign installer packages. In each of these cases, the developer provides assurance that the application definitely came from their build process, and definitely hasn’t been changed since then (for wholly reasonable values of “definitely”, anyway).

No security measure comes for free. Adding a step like code or update signing mitigates certain risks, but introduces new ones. That’s why security planning must be an iterative process – every time you make changes, you reduce some risks and create or increase others. The risks associated with cryptographic signing are that your private key could be lost or deleted, or it could be disclosed to a third party. In the case of keys associated with digital certificates, there’s also the risk that your certificate expires while you’re still relying on it (I’ve seen that happen).

Of course you can take steps to protect the key from any of those eventualities, but you cannot reduce the risk to zero (at least not while spending a finite amount of time and effort on the problem). You should certainly have a plan in place for migrating from an expired identity to a new one. Having a contingency plan for dealing with a lost or compromised private key will make your life easier if it ever happens – you can work to the plan rather than having to both manage the emergency and figure out what you’re supposed to be doing at the same time.

iPhone/iPad signing certificate compromise

This is the easiest situation to deal with. Let’s look at the consequences for each of the problems identified:

Expired Identity
No-one can submit apps to the app store on your behalf, including you. No-one can provision betas of your apps. You cannot test your app on real hardware.
Destroyed Private Key
No-one can submit apps to the app store on your behalf, including you. No-one can provision betas of your apps. You cannot test your app on real hardware.
Disclosed Private Key
Someone else can submit apps to the store and provision betas on your behalf. (They can also test their apps on their phone using your identity, though that’s hardly a significant problem.)

In the case of an expired identity, Apple should lead you through renewal instructions using iTunes Connect. You ought to get some warning, and it’s in their interests to help you as they’ll get another $99 out of you :-). There’s not really much of a risk here; you just need a note in your calendar to sort out renewal.

The cases of a destroyed or disclosed private key are exceptional, and you need to contact Apple to get your old identity revoked and a new one issued. Speed is of the essence if there’s a chance your private key has been leaked, because if someone else submits an “update” on your behalf Apple will treat it as a release from you. It will be hard for you to repudiate the update (claim it isn’t yours) – after all, it’s signed with your identity. If you manage to deal with Apple quickly and get your identity revoked, the only remaining possibility is that an attacker could have used your identity to send out some malicious apps as betas. Because of the limited exposure beta apps have, there will only be a slight impact: though you’ll probably want to communicate the issue to the public to motivate users of “your” beta app to remove it from their phones.

By the way, notice that no application on the store has actually been signed by the developer who wrote it – the .ipa bundles are all re-signed by Apple before distribution.

Mac code signing certificate compromise

Again, let’s start with the consequences.

Expired Identity
You can’t sign new products. Existing releases continue to work, as Mac OS X ignores certificate expiration in code signing checks by default.
Destroyed Private Key
You can’t sign new products.
Disclosed Private Key
Someone else can sign applications that appear to be yours. Such applications will receive the same keychain and firewall access rights as your legitimate apps.

If you just switch identities without any notice, there will be some annoyances for users – the keychain, firewall etc. dialogues indicating that your application cannot be identified as a legitimate update will appear for the update where the identities switch. Unfortunately this situation cannot be distinguished from a Trojan horse version of your app being deployed (even more annoyingly, there’s no good way to inspect an application distributor’s identity, so users can’t make the distinction themselves). It would be good to make the migration seamless, so that users don’t get bugged by the update warnings, and so that they learn to treat those warnings as suspicious when they do appear.

When you’re planning a certificate migration, you can arrange for that to happen easily. Presumably you know how long it takes for most users to update your app (where “most users” is defined to be some large fraction such that you can accept having to give the remainder additional support). At least that long before you plan to migrate identities, release an update that changes your application’s designated requirement such that it’s satisfied by both old and new identities. This update should be signed by your existing (old) identity, so that it’s recognised as an update to the older releases of the app. Once that update’s had sufficient uptake, release another update that’s satisfied by only the new identity, and signed by that new identity.
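As a sketch (with made-up identifiers and certificate common names), that interim update could be re-signed with a designated requirement that accepts either identity, something like:

  codesign -f -s "Old Code Signing Identity" -r='designated => identifier "com.example.MyGreatApp" and (certificate leaf[subject.CN] = "Old Code Signing Identity" or certificate leaf[subject.CN] = "New Code Signing Identity")' MyGreatApp.app
  codesign -d -r- MyGreatApp.app    # check the requirement that actually got embedded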

If you’re faced with an unplanned identity migration, that might not be possible (or in the case of a leaked private key, might lead to an unacceptably large window of vulnerability). So you need to bake identity migration readiness into your release process from the start.

Assuming you use certificates provided by vendor CAs whose own identities are trusted by Mac OS X, you can provide a designated requirement that matches any certificate issued to you. The requirement would be of the form (warning: typed directly into MarsEdit):

identifier "com.securemacprogramming.MyGreatApp" and cert leaf[subject.CN]="Secure Mac Programming Code Signing" and cert leaf[subject.O]="Secure Mac Programming Plc." and anchor[subject.O]="Verisign, Inc." and anchor trusted

Now if one of your private keys is compromised, you coordinate with your CA to revoke the certificate and migrate to a different identity. The remaining risks are that the CA might issue a certificate with the same common name and organisation name to another entity: something you need to take up with the CA in their service-level agreement; or Apple might choose to trust a different CA called “Verisign, Inc.” which seems unlikely.

If you use self-signed certificates, then you need to manage this migration process yourself. You can generate a self-signed CA from which you issue signing certificates, then you can revoke individual signing certs as needed. However, you now have two problems: distributing the certificate revocation list (CRL) to customers, and protecting the private key of the top-level certificate.
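If you do go that way, the bones of it look something like the following with OpenSSL. Names, key sizes and lifetimes are placeholders; in practice the leaf certificate also needs the code-signing extensions before codesign will accept it, and you still have to publish the CRL somewhere your customers’ machines will find it:

  openssl req -x509 -newkey rsa:2048 -keyout ca.key -out ca.crt -days 3650 -subj "/O=Example Ltd/CN=Example Signing CA"
  openssl req -newkey rsa:2048 -keyout signer.key -out signer.csr -subj "/O=Example Ltd/CN=Example Code Signing"
  openssl x509 -req -in signer.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out signer.crt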

Package signing certificate compromise

The situation with signed Installer packages is very similar to that with signed Mac applications, except that there’s no concept of upgrading a package and thus no migration issues. When a package is installed, its certificate is used to check its identity. You just have to make sure that your identity is valid at time of signing, and that any certificate associated with a disclosed private key is revoked.

Sparkle signing key compromise

You have to be very careful that your automatic update mechanism is robust. Any other bug in an application can be fixed by deploying an update to your customers. A bug in the update mechanism might mean that customers stop receiving updates, making it very hard for you to tell them about a fix for that problem, or ship any fixes for other bugs. Sparkle doesn’t use certificates, so keys don’t have any expiration associated with them. The risks and consequences are:

Destroyed Private Key
You can’t update your application any more.
Disclosed Private Key
Someone else can release an “update” to your app, provided they can get the Sparkle instance on the customer’s computer to download it.

In the case of a disclosed private key, the conditions that need to be met to actually distribute a poisoned update are specific and hard to achieve. Either the webserver hosting your appcast or the DNS for that server must be compromised, so that the attacker can get the customer’s app to think there’s an update available that the attacker controls. All of that means that you can probably get away with a staggered key update without any (or many, depending on who’s attacking you) customers getting affected:

  • Release a new update signed by the original key. The update contains the new key pair’s public key.
  • Some time later, release another update signed by the new key.
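For what it’s worth, Sparkle’s DSA signing (as documented for the framework at the time of writing) can be driven with plain OpenSSL, so generating the replacement key pair and signing an archive with whichever private key is current looks roughly like this; the file names are placeholders:

  openssl dsaparam -out dsaparam.pem 2048
  openssl gendsa -out dsa_priv.pem dsaparam.pem
  openssl dsa -in dsa_priv.pem -pubout -out dsa_pub.pem
  openssl dgst -sha1 -binary < MyApp-1.2.zip | openssl dgst -dss1 -sign dsa_priv.pem | openssl enc -base64

The base64 string that falls out of the last command is what goes into the appcast entry for that update.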

The situation if you actually lose your private key is worse: you can’t update at all any more. You can’t generate a new key pair and start using that, because your updates won’t be accepted by the apps already out in the field. You can’t bake a “just in case” mechanism in, because Sparkle only expects a single key pair. You’ll have to find a way to contact all of your customers directly, explain the situation and get them to manually update to a new version of your app. That’s one reason I’d like to see auto-update libraries use Mac OS X code signing as their integrity-check mechanisms: so that they are as flexible as the platform on which they run.

Security flaw liability

The Register recently ran an opinion piece called Don’t blame Willy the Mailboy for software security flaws. The article is a reaction to the following excerpt from a SANS sample application security procurement contract:

No Malicious Code

Developer warrants that the software shall not contain any code that does not support a software requirement and weakens the security of the application, including computer viruses, worms, time bombs, back doors, Trojan horses, Easter eggs, and all other forms of malicious code.

That seems similar to a requirement I have previously almost proposed voluntarily adopting:

If one of us [Mac developers] were, deliberately or accidentally, to distribute malware to our users, they would be (rightfully) annoyed. It would severely disrupt our reputation if we did that; in fact some would probably choose never to trust software from us again. Now Mac OS X allows us to put our identity to our software using code signing. Why not use that to associate our good reputations as developers with our software? By using anti-virus software to improve our confidence that our development environments and the software we’re building are clean, and by explaining to our customers why we’ve done this and what it means, we effectively pass some level of assurance on to our customer. Applications signed by us, the developers, have gone through a process which reduces the risk to you, the customers. While your customers trust you as the source of good applications, and can verify that you were indeed the app provider, they can believe in that assurance. They can associate that trust with the app; and the trust represents some tangible value.

Now what the draft contract seems to propose (and I have good confidence in this, due to the wording) is that if a logic bomb, back door, Easter Egg or whatever is implemented in the delivered application, then the developer who wrote that misfeature has violated the contract, not the vendor. Taken at face value, this seems just a little bad. In the subset of conditions listed above, the developer has introduced code into the application that was not part of the specification. It either directly affects the security posture of the application, or is of unknown quality because it’s untested: the testers didn’t even know it was there. This is clearly the fault of the developer, and the developer should be accountable. In most companies this would be a sacking offence, but the proposed contract goes further and says that the developer is responsible to the client. Fair enough, although the vendor should take some responsibility too, as a mature software organisation should have a process such that none of its products contain unaccounted code. This traceability from requirement to product is the daily bread of some very mature development lifecycle tools.

But what about the malware cases? It’s harder to assign blame to the developer for malware injection, and I would say that actually the vendor organisation should be held responsible, and should deal with punishment internally. Why? Because there are too many links in a chain for any one person to put malware into a software product. Let’s say one developer does decide to insert malware.

  • Developer introduces malware to his workstation. This means that any malware prevention procedures in place on the workstation have failed.
  • Developer commits malware to the source repository. Any malware prevention procedures in place on the SCM server have failed.
  • Developer submits build request to the builders.
  • Builder checks out the build input, does not notice the malware, and constructs the product.
  • Builder does not spot the malware in the built product.
  • Testers do not spot the malware in final testing.
  • Release engineers do not spot the malware, and release the product.

Of course there are various points at which malware could be introduced, but for a developer to do so in a way consistent with his role as developer requires a systematic failure in the company’s procedures regarding information security, which implies that the CSO ought to be accountable in addition to the developer. It’s also, as with the Easter Egg case, symptomatic of a failure in the control of their development process, so the head of Engineering should be called to task as well. In addition, the head of IT needs to answer some uncomfortable questions.

So, as it stands, the proposed contract seems well-intentioned but inappropriate. Now what if it’s the thin end of a slippery iceberg? Could developers be held to account for introducing vulnerabilities into an application? The SANS contract is quiet on this point. It requires that the vendor shall provide a “certification package” consisting of the security documentation created throughout the development process, which “shall establish that the security requirements, design, implementation, and test results were properly completed and all security issues were resolved appropriately”, and that “Security issues discovered after delivery shall be handled in the same manner as other bugs and issues as specified in this Agreement”. In other words, the vendor should prove that all known vulnerabilities have been mitigated before shipment, and if a vulnerability is subsequently discovered and is dealt with in an agreed fashion, no-one did anything wrong.

That seems fairly comprehensive, and definitely places the onus directly on the vendor (there are various other parts of the contract that imply the same, such as the requirement for the vendor to carry out background checks and provide security training for developers). Let’s investigate the consequences for a few different scenarios.

1. The product is attacked via a vulnerability that was brought up in the original certification package, but the risk was accepted. This vulnerability just needs to be fixed and we all move on; the risk was known, documented and accepted, and the attack is a consequence of doing business in the face of known risks.

2. The product is attacked via a novel class of vulnerability, details of which were unknown at the time the certification package was produced. I think that again, this is a case where we just need to fix the problem, of course with sufficient attention paid to discovering whether the application is vulnerable in different ways via this new class of flaw. While developers should be encouraged to think of new ways to break the system, it’s inevitable that some unpredicted attack vectors will be discovered. Fix them, and incorporate them into your security planning.

3. The product is attacked by a vulnerability that was not covered in the certification package, but that is a failure of the product to fulfil its original security requirements. This is a case I like to refer to as “someone fucked up”. It ought to be straightforward (if time-consuming) to apply a systematic security analysis process to an application and get a comprehensive catalogue of its vulnerabilities. If the analysis missed things out, then it was either sloppy, abbreviated or ignored.

Sloppy analysis. The security lead did not systematically enumerate the vulnerabilities, or missed some section of the app. The security lead is at fault, and should be responsible.

Abbreviated analysis. While the security lead was responsible for the risk analysis, he was not given the authority to see it through to completion or to implement its results in the application. Whoever withheld that authority is to blame and should accept responsibility. In my experience this is typically a marketing or product management decision, as they work backwards from a ship date and drop tasks to fit the effort that leaves for the product. Sometimes it’s engineering management; it’s almost never project management.

Ignored analysis. Example: the risk of attack via buffer overflow is noted in the analysis, then the developer writing the feature doesn’t code bounds-checking. That developer has screwed up, and ought to be responsible for their mistake. If you think that’s a bit harsh, check this line from the “duties” list in a software engineer job ad:

Write code as directed by Development Lead or Manager to deliver against specified project timescales quality and functionality requirements

When you’re a programmer, it’s your job to bake quality in.

So it’s not just the Department of Homeland Security, then

What is it about government security agencies and, well, security? The UK Intelligence and Security Committee has just published its Annual Report 2008-2009 (pdf, because if there’s one application we all trust, it’s Adobe Reader), detailing financial and policy issues relating to the British security services during that year.

Sounds “riveting”, yes? Well the content is under Crown copyright[*], so I can excerpt some useful tidbits. According to the director of GCHQ:

The greatest threat [to government IT networks] is from state actors and there is an increasing vulnerability, as the critical national infrastructure and other networks become more interdependent.

The report goes on to note:

State-sponsored electronic attack is increasingly being used by nations to gather intelligence, particularly when more traditional espionage methods cannot be used. It is assessed that the greatest threat of such attacks against the UK comes from China and Russia.

and yet:

The National Audit Office management letter, reporting on GCHQ’s 2007/08 accounts, criticised the results of GCHQ’s 2008 laptop computer audit. This showed that 35 laptops were unaccounted for, including three that were certified to hold Top Secret information; the rest were unclassified. We pressed GCHQ about its procedures for controlling and tracking such equipment. It appears that the process for logging the allocation and subsequent location of laptops has been haphazard. We were told:
Historically, we just checked them in and checked them out and updated the records when they went through our… laptop control process.

So our government’s IT infrastructure is under attack from two of the most resourceful countries in the world, and our security service is giving out Top Secret information for free? It sounds like all the foreign intelligence services need to do is employ their own staff to empty the bins in Cheltenham. In fairness, GCHQ have been mandated to implement better asset-tracking mechanisms; if they do so then the count of missing laptops will be reduced to only reflect thefts/misplaced systems. At the moment it includes laptops that were correctly disposed of, in a way that did not get recorded at GCHQ.

[*] Though significantly redacted. We can’t actually tell what the budget of the intelligence services is, nor what they’re up to. How the budget is considered sensitive information, I’m not sure.