Security flaw liability

The Register recently ran an opinion piece called Don’t blame Willy the Mailboy for software security flaws. The article is a reaction to the following excerpt from a SANS sample application security procurement contract:

No Malicious Code

Developer warrants that the software shall not contain any code that does not support a software requirement and weakens the security of the application, including computer viruses, worms, time bombs, back doors, Trojan horses, Easter eggs, and all other forms of malicious code.

That seems similar to a requirement I have previously come close to proposing that we adopt voluntarily:

If one of us [Mac developers] were, deliberately or accidentally, to distribute malware to our users, they would be (rightfully) annoyed. It would severely disrupt our reputation if we did that; in fact some would probably choose never to trust software from us again. Now Mac OS X allows us to put our identity to our software using code signing. Why not use that to associate our good reputations as developers with our software? By using anti-virus software to improve our confidence that our development environments and the software we’re building are clean, and by explaining to our customers why we’ve done this and what it means, we effectively pass some level of assurance on to our customer. Applications signed by us, the developers, have gone through a process which reduces the risk to you, the customers. While your customers trust you as the source of good applications, and can verify that you were indeed the app provider, they can believe in that assurance. They can associate that trust with the app; and the trust represents some tangible value.

Now what the draft contract seems to propose (and the wording gives me good confidence in this) is that if a logic bomb, back door, Easter egg or whatever is implemented in the delivered application, then the developer who wrote that misfeature has violated the contract, not the vendor. Taken at face value, that seems only slightly problematic. In the cases listed above, the developer has introduced code into the application that was not part of the specification. It either directly affects the security posture of the application, or is of unknown quality because it’s untested: the testers didn’t even know it was there. This is clearly the fault of the developer, and the developer should be accountable. In most companies this would be a sacking offence, but the proposed contract goes further and says that the developer is responsible to the client. Fair enough, although the vendor should take some responsibility too: a mature software organisation should have a process that ensures none of its products contain unaccounted-for code. That traceability from requirement to product is the daily bread of some very mature development lifecycle tools.

But what about the malware cases? It’s harder to assign blame to the developer for malware injection, and I would say that actually the vendor organisation should be held responsible, and should deal with punishment internally. Why? Because there are too many links in a chain for any one person to put malware into a software product. Let’s say one developer does decide to insert malware.

  • Developer introduces malware to his workstation. This means that any malware prevention procedures in place on the workstation have failed.
  • Developer commits malware to the source repository. Any malware prevention procedures in place on the SCM server have failed (one possible such procedure is sketched below).
  • Developer submits a build request to the builders.
  • Builder checks out the build input, does not notice the malware, and constructs the product.
  • Builder does not spot the malware in the built product.
  • Testers do not spot the malware in final testing.
  • Release engineers do not spot the malware, and release the product.

Of course there are various points at which malware could be introduced, but for a developer to do so in a way consistent with his role as developer requires a systematic failure in the company’s procedures regarding information security, which implies that the CSO ought to be accountable in addition to the developer. It’s also, as with the Easter egg case, symptomatic of a failure in the control of the development process, so the head of Engineering should be called to task as well. In addition, the head of IT needs to answer some uncomfortable questions.
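
As an aside, here’s a rough sketch of what the second of those controls (malware prevention on the SCM server) might look like in practice. It’s purely illustrative: it assumes a Git server with ClamAV’s clamscan installed, neither of which the SANS contract or the Register piece mandates.

#!/bin/sh
# Hypothetical Git pre-receive hook: refuse a push if any file it changes
# fails a ClamAV scan. Sketch only: new branches, deleted files and
# filenames containing spaces are not handled.
while read oldrev newrev refname; do
    for path in $(git diff --name-only "$oldrev" "$newrev"); do
        # Pull the pushed version of the file out of the object database
        # and scan it on stdin; clamscan exits non-zero on detection.
        if ! git show "$newrev:$path" | clamscan --no-summary - > /dev/null; then
            echo "push rejected: $path failed the malware scan" >&2
            exit 1
        fi
    done
done
exit 0

The specific tooling doesn’t matter; the point is that a control like this is owned by the organisation’s process, which is exactly why its failure can’t be pinned on one developer.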

So, as it stands, the proposed contract seems well-intentioned but inappropriate. Now what if it’s the thin end of a slippery iceberg? Could developers be held to account for introducing vulnerabilities into an application? The SANS contract is quiet on this point. It requires that the vendor shall provide a “certification package” consisting of the security documentation created throughout the development process. The package “shall establish that the security requirements, design, implementation, and test results were properly completed and all security issues were resolved appropriately”, and “Security issues discovered after delivery shall be handled in the same manner as other bugs and issues as specified in this Agreement”. In other words, the vendor should prove that all known vulnerabilities have been mitigated before shipment, and if a vulnerability is subsequently discovered and dealt with in the agreed fashion, no-one did anything wrong.

That seems fairly comprehensive, and definitely places the onus directly on the vendor (there are various other parts of the contract that imply the same, such as the requirement for the vendor to carry out background checks and provide security training for developers). Let’s investigate the consequences for a few different scenarios.

1. The product is attacked via a vulnerability that was brought up in the original certification package, but the risk was accepted. This vulnerability just needs to be fixed and we all move on; the risk was known, documented and accepted, and the attack is a consequence of doing business in the face of known risks.

2. The product is attacked via a novel class of vulnerability, details of which were unknown at the time the certification package was produced. I think that again, this is a case where we just need to fix the problem, of course with sufficient attention paid to discovering whether the application is vulnerable in different ways via this new class of flaw. While developers should be encouraged to think of new ways to break the system, it’s inevitable that some unpredicted attack vectors will be discovered. Fix them, incorporate them into your security planning.

3. The product is attacked by a vulnerability that was not covered in the certification package, but that is a failure of the product to fulfil its original security requirements. This is a case I like to refer to as “someone fucked up”. It ought to be straightforward (if time-consuming) to apply a systematic security analysis process to an application and get a comprehensive catalogue of its vulnerabilities. If the analysis missed things out, then either it was sloppy, abbreviated or ignored.

Sloppy analysis. The security lead did not systematically enumerate the vulnerabilities, or missed some section of the app. The security lead is at fault, and should be responsible.

Abbreviated analysis. While the security lead was responsible for the risk analysis, he was not given the authority to see it through to completion or to implement its results in the application. Whoever withheld that authority is to blame and should accept responsibility. In my experience this is typically a marketing or product management decision, as they work backwards from a ship date to the amount of effort that can be spent on the product, dropping tasks to fit. Sometimes it’s engineering management; it’s almost never project management.

Ignored analysis. Example: the risk of attack via buffer overflow is noted in the analysis, then the developer writing the feature doesn’t code bounds-checking. That developer has screwed up, and ought to be responsible for their mistake. If you think that’s a bit harsh, check this line from the “duties” list in a software engineer job ad:

Write code as directed by Development Lead or Manager to deliver against specified project timescales quality and functionality requirements

When you’re a programmer, it’s your job to bake quality in.

One Window that is good for Mac security

I realise now that I didn’t cover this when it happened back at the beginning of March, and that not everyone in either the Apple world or the general infosec community is aware of it. Nearly one month ago, Apple hired a new Security Product Manager (the position was vacant at the time of WWDC 2008 and I think it was just being covered by another product manager in the interim): welcome, Window Snyder.

Window has a good history in the infosec world; after working as security design architect at @stake, she moved to Microsoft to act as security sign-off for XP Service Pack 2 (Microsoft’s first OS release focussed solely on security improvements) and Windows Server 2003 (their first completely new OS release after the security push of 2002). It was during her watch that Microsoft became more open about their vulnerability reporting, and introduced “Patch Tuesday” to help systems administrators manage the patch lifecycle. I happen not to like the Patch Tuesday mentality, but at least Microsoft thought about the issue and reacted to it.

After Microsoft, Window became Chief Security Something-or-Other at Mozilla. Here she promoted measurement and tracking of security issues, process improvements and greater transparency, both in terms of Mozilla’s reporting and that of other vendors.

I think that, given the authority to make process and reporting changes regarding Apple’s security procedures, she will be a great addition to Apple’s security teams. Apple typically drop security updates without warning and with minimal information on the content and severity of the vulnerabilities addressed; they maintain what could be charitably described as an “arm’s length” relationship with security vendors and have a history of slow reaction to vulnerabilities discovered in open source components. I have great hope for those facets of Apple’s security work changing soon.

Why do we annoy our users?

I assume that, with my audience being mainly Mac users, you are not familiar with the Microsoft Security Assessment Tool, or MSAT. It’s basically a free tool for CIOs, CSOs and the like to perform security analyses. It presents two questionnaires, the first asking questions about your company’s IT infrastructure (“do you offer wireless access?”), the second asking about the company’s current security posture (“do you use WPA encryption?”). The end result is a report comparing the company’s risk exposure to the countermeasures in place, highlighting areas of weakness or overinvestment. The MSAT app itself isn’t too annoying.

Mostly. One bit is. Some of the questions are accompanied by information about the relevant threats, and industry practices that can help mitigate them. Information such as this:

In order to reduce the ability to 'brute-force' the credentials for privileged accounts, the passwords for such accounts should be changed regularly.

So, how does changing a password reduce the likelihood of a brute-force attack succeeding? Well, let’s think about it. The attacker has to choose a potential password to test. Obviously the attacker does not know your password a priori, or the attack wouldn’t be brute-force; so the guess is independent of your password. You don’t know what the attacker has, hasn’t, or will next test—all you know is that the attacker will exhaust all possible guesses given enough time. So your password is independent of the guess distribution.

Your password, and the attacker’s guess at your password, are independent. The probability that the attacker’s next guess is correct is the same even if you change your password first. Password expiration policies cannot possibly mitigate brute-force attacks.
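
A quick sketch of the arithmetic, using my own notation rather than anything from MSAT: suppose the attacker guesses uniformly at random from a space of N possible passwords. Then

\[ P(\text{any single guess is correct}) = \tfrac{1}{N}, \qquad P(\text{success within } k \text{ guesses}) \le \tfrac{k}{N} \]

Rotating the password just selects another value from the same space of N possibilities, so neither quantity changes.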

So why do we enforce password expiration policies? Actually, that’s a very good question. Let’s say an attacker does gain your password.

The window of opportunity to exploit this condition depends on the time for which the password remains valid, right? Wrong: as soon as the attacker gains the password, he can install a back door, create another account, or take other steps to ensure continued access. Changing the password post facto will defeat an attacker who isn’t thinking straight, but ultimately a more comprehensive incident response should be initiated.

So password expiration policies annoy our users, and don’t help anyone.

So it’s not just the Department of Homeland Security, then

What is it about government security agencies and, well, security? The UK Intelligence and Security Committee has just published its Annual Report 2008-2009 (pdf, because if there’s one application we all trust, it’s Adobe Reader), detailing financial and policy issues relating to the British security services during that year.

Sounds “riveting”, yes? Well the content is under Crown copyright[*], so I can excerpt some useful tidbits. According to the director of GCHQ:

The greatest threat [to government IT networks] is from state actors and there is an increasing vulnerability, as the critical national infrastructure and other networks become more interdependent.

The report goes on to note:

State-sponsored electronic attack is increasingly being used by nations to gather intelligence, particularly when more traditional espionage methods cannot be used. It is assessed that the greatest threat of such attacks against the UK comes from China and Russia.

and yet:

The National Audit Office management letter, reporting on GCHQ’s 2007/08 accounts, criticised the results of GCHQ’s 2008 laptop computer audit. This showed that 35 laptops were unaccounted for, including three that were certified to hold Top Secret information; the rest were unclassified. We pressed GCHQ about its procedures for controlling and tracking such equipment. It appears that the process for logging the allocation and subsequent location of laptops has been haphazard. We were told:
Historically, we just checked them in and checked them out and updated the records when they went through our… laptop control process.

So our government’s IT infrastructure is under attack from two of the most resourceful countries in the world, and our security service is giving out Top Secret information for free? It sounds like all the foreign intelligence services need to do is employ their own staff to empty the bins in Cheltenham. In fairness, GCHQ have been mandated to implement better asset-tracking mechanisms; if they do, the count of missing laptops will shrink to reflect only thefts and misplaced systems. At the moment it also includes laptops that were correctly disposed of, but whose disposal was never recorded at GCHQ.

[*] Though significantly redacted. We can’t actually tell what the budget of the intelligence services is, nor what they’re up to. Why the budget is considered sensitive information, I’m not sure.

Integrating SSH with the keychain on Snow Leopard

Not much movement has occurred on projects like SSHKeychain.app or SSHAgent.app in the last couple of years. The reason is that they’re no longer necessary; you can get all of the convenience of keychain-stored SSH passphrases using the built-in software. Here’s a guide to using the Keychain to store your passphrases.

Create the key pair

We’ll use the default key format, which is RSA for SSH protocol 2.0. We definitely want to enter a passphrase, so that if the private key is leaked it cannot be used.

jormungand:~ leeg$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/leeg/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/leeg/.ssh/id_rsa.
Your public key has been saved in /Users/leeg/.ssh/id_rsa.pub.
The key fingerprint is:
ff:ce:0a:f6:ee:0d:e8:a5:aa:56:a0:f3:0b:81:80:cc leeg@jormungand.local
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|+                |
|oE               |
|..  .            |
|. .. .  S        |
|  o.  .  o       |
|  .o .  + +      |
|   .o  o = =     |
|   .oo..oo=o=    |
+-----------------+

If you haven’t seen it before, randomart is not a screenshot from nethack; rather it’s a visual hashing algorithm. Two different public keys are not guaranteed to have different randomart fingerprints, but the chance that they are close enough to pass a quick visual inspection is small.
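
Incidentally, ssh-keygen can show you the fingerprint and randomart of an existing key whenever you like; the -l flag prints the fingerprint and adding -v includes the ASCII art. The path here is just the default location from the run above:

jormungand:~ leeg$ ssh-keygen -lv -f ~/.ssh/id_rsa.pub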

Deploy the public key to the SSH server

Do this using any available route; I choose to use password-based SSH.

jormungand:~ leeg$ ssh heimdall.local 'cat -  >> .ssh/authorized_keys' < .ssh/id_rsa.pub
Password:
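
One caveat worth mentioning: with its default StrictModes setting, sshd silently ignores authorized_keys if your home directory, .ssh or the file itself is writable by group or others, and the cat trick above assumes .ssh already exists on the server. If key authentication mysteriously fails, something along these lines (heimdall.local again standing in for your server) should tighten things up:

jormungand:~ leeg$ ssh heimdall.local 'mkdir -p .ssh && chmod 700 .ssh && chmod 600 .ssh/authorized_keys'
Password: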

There is no step three

Mac OS X automatically runs ssh-agent, the key-caching service, as a launchd agent. When SSH attempts to negotiate authentication using your key-based identity, you automatically get asked whether you want to store the passphrase in the keychain. Like this:

[Screenshot ssh-add.png: the prompt asking whether to store the SSH passphrase in the keychain]

Now the passphrase is stored in the keychain. Don't believe me? Lookie here:

[Screenshot ssh-keychain.png: the passphrase stored in the keychain]

So your SSH key is protected by the passphrase, and the passphrase is protected by the keychain.
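
If you’d rather check from the terminal, the launchd-managed agent can be interrogated directly. Assuming the stock Snow Leopard setup, SSH_AUTH_SOCK should point at a launchd socket (something like /tmp/launch-XXXXXX/Listeners), and once you’ve connected at least once ssh-add should list the identity, roughly like this:

jormungand:~ leeg$ ssh-add -l
2048 ff:ce:0a:f6:ee:0d:e8:a5:aa:56:a0:f3:0b:81:80:cc /Users/leeg/.ssh/id_rsa (RSA)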

Update: Minor caveat

Of course (and I forgot this :-S), if you use FileVault to protect the home folder on the SSH server, the user's home folder isn't mounted until after you authenticate, which means the authorized_keys file can't be consulted during negotiation. Once you have logged in with your password, subsequent logins will use the keys, because PAM mounts the FileVault volume on that first login and authorized_keys becomes available.
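
If you want to confirm which method a particular login actually used (handy for diagnosing this FileVault case), the client's verbose output will tell you. This is standard OpenSSH debugging, so the exact wording may vary between versions, but it should look something like:

jormungand:~ leeg$ ssh -v heimdall.local true 2>&1 | grep 'Authentication succeeded'
debug1: Authentication succeeded (publickey).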