Look what the feds left behind…

So what conference was on in this auditorium before NSConference? Well, why don’t we just read the documents they left behind?

[Image: folder.jpg, the documents left behind at the venue]

Oops. While there's nothing at a higher clearance than Unrestricted inside, all of the content is marked internal eyes only (don't worry, feds, I didn't actually pay too much attention to the content. You don't need to put me on the no-fly list). There's an obvious problem, though: if your government agency has the word "security" in its name, you should take care of security. Leaving private documentation in a public conference venue does not give anyone confidence in your ability to manage security issues.

Anatomy of a software sales scam

A couple of days ago, Daniel Kennett of the KennettNet micro-ISV (in plain talk, a one-man software business) told me that a customer had fallen victim to a scam. She had purchased a copy of his application Music Rescue—a very popular utility for working with iPods—from a vendor but had not received a download or a licence. The vendor had charged her over three times the actual cost of the application, and seemingly disappeared; she approached KennettNet to try and get the licence she had paid for.

Talking to the developer and (via him) the victim, the story becomes more disturbing. The tale starts when the victim, having trouble with her iPod and iTunes, contacted Apple support. The support person apparently gave her an address from which to buy Music Rescue, which turned out to be the scammer's address. It's hard to know exactly what she meant by that: perhaps the support person gave her a URL, or perhaps she was told a search term and reached the malicious website via a search engine. It would be inappropriate to gauge the Apple support staff's involvement without further details, except to say that the employee clearly didn't direct her unambiguously to the real vendor's website. For all we know, the "Apple" staffer she spoke to may not have been from Apple at all, and the scam may have started with a fake Apple support experience. It does seem more likely, though, that she was talking to the real Apple and that their staffer, in good faith or otherwise, gave her information that led to the fraudulent website.

The problem is that if you ask a security consultant how to avoid being scammed in online purchases, they will probably say "only buy software from trusted sources". Well, this user clearly trusted Apple as a source, and clearly trusted that she was talking to Apple. She probably was, yet she still ended up the victim of a scam. Where does that leave the advice, and how can a micro-ISV ever sell software if customers are only supposed to buy from stores that have already built up a reputation with them?

Interestingly, the app store model used on the iPhone and iPad could offer a solution to these problems. By installing Apple as a gateway to app purchases, customers know (assuming they've got the correct app store) that they're talking to Apple, that any purchase is backed by a real application, and that Apple have gone to some (unknown) effort to ensure that the application sold is the same one the marketing page on the store claims will be provided. Such a model could prove a convenient and safe way for users to buy applications on other platforms, even if it were not the exclusive entry point to the platform.

As a final note, I believe that KennettNet has taken appropriate steps to resolve the problem. As far as I can tell, the scam website is operated out of the US, making any attempted legal action hard to pursue as the developer is based in the UK. Instead, the micro-ISV has offered the victim a discounted licence and assistance in recovering the lost money through her credit card provider's anti-fraud process. I'd like to thank Daniel for his information and help in preparing this article.

Code snippet from NSConference presentation

Here’s the code I used to display the code signature status within the sample app at NSConference. You need to be using the 10.6 SDK, and link against Security.framework.

#import <Security/SecCode.h>
- (void)updateSignatureStatus {
    // Get a code object representing this running application.
    SecCodeRef myCode = NULL;
    OSStatus secReturn = SecCodeCopySelf(kSecCSDefaultFlags, &myCode);
    if (noErr != secReturn) {
        [statusField setIntValue: secReturn];
        return;
    }
    CFMakeCollectable(myCode);

    // Copy the designated requirement that was baked into the signature when
    // the app was signed.
    SecRequirementRef designated = NULL;
    secReturn = SecCodeCopyDesignatedRequirement(myCode, kSecCSDefaultFlags, &designated);
    if (noErr != secReturn) {
        [statusField setIntValue: secReturn];
        return;
    }
    CFMakeCollectable(designated);

    // Verify that the code is validly signed, unmodified, and satisfies its
    // designated requirement; noErr means all three checks passed.
    secReturn = SecCodeCheckValidity(myCode, kSecCSDefaultFlags, designated);
    [statusField setIntValue: secReturn];
}

That’s all there is to it. As discussed in my talk, the designated requirement is a requirement baked into the signature seal when the app is signed in Xcode. Application identity is verified by testing whether:

  • the signature is valid;
  • the seal is unbroken;
  • the code satisfies its designated requirement.

Of course there’s nothing to stop you testing code against any requirement you may come up with: the designated requirement just provides “I am the same application that my developer signed”.
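
For example, here's a minimal sketch of testing the running code against an ad-hoc requirement written in the Code Signing Requirement Language, using SecRequirementCreateWithString; the identifier and anchor in the string are purely illustrative and not taken from the NSConference sample app.

#import <Security/SecCode.h>
#import <Security/SecRequirement.h>

// Checks this running code against a hand-written requirement instead of the
// designated requirement. The requirement string is an example only; write
// whatever condition you actually care about.
OSStatus checkCustomRequirement(void) {
    SecCodeRef myCode = NULL;
    OSStatus secReturn = SecCodeCopySelf(kSecCSDefaultFlags, &myCode);
    if (noErr != secReturn) return secReturn;

    SecRequirementRef requirement = NULL;
    secReturn = SecRequirementCreateWithString(
        CFSTR("identifier \"com.example.MyApp\" and anchor apple generic"),
        kSecCSDefaultFlags,
        &requirement);
    if (noErr != secReturn) {
        CFRelease(myCode);
        return secReturn;
    }

    // noErr means the code is validly signed, the seal is unbroken, and the
    // code satisfies the requirement we just built.
    secReturn = SecCodeCheckValidity(myCode, kSecCSDefaultFlags, requirement);
    CFRelease(requirement);
    CFRelease(myCode);
    return secReturn;
}

The SecCodeCheckValidity call is the same one used above; the only difference is which requirement you hand it.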

Oops. (updated twice)

Q: What caused this?

A: this. A Vodafone employee used the corporate Twitter account to post the message:

[@VodafoneUK] is fed up of dirty homo’s and is going after beaver

And as the Vodafone apology attests, this was no hacking attack but a case of TGI Friday exuberance on the part of an employee. This goes to show that you don't need an external attacker to ruin your corporate image if you hire the right staff.

Update: according to an article in the Register, the problem tweet was caused by an employee on a different team in the same office misusing an unlocked terminal that had access to Vodafone's Twitter account. They have fired the employee, but there's no word on whether they're reviewing their security practices. I'm reminded of the solution taken to combat safe-cracking at LANL when Richard Feynman showed how easy it was to open the safes with their confidential contents: don't let Richard Feynman near the safes.

Update again: Vodafone’s official reply:

On Friday afternoon an employee posted an obscene message from the official Vodafone UK Twitter profile. The employee was suspended immediately and we have started an internal investigation. This was not a hack and we apologise for any offence the tweet may have caused.

This sounds like there's the potential for their practices to be altered as a result of their "internal investigation"; hopefully they'll make more information available. It would certainly make an interesting case study in responding to real-world security issues.

It’s just a big iPod

I think you would assume I had my privacy settings ramped up a little too high if I hadn’t heard about the iPad, Apple’s new touchscreen mobile device. Having had a few days to consider it and allow the hype to die down, my considered opinion on the iPad’s security profile is this: it’s just a big iPod.

Now that's no bad thing. We've seen from the iPhone that the moderated gateway for distributing software—the App Store—keeps malware away from the platform. Both the Rickrolling iKee worm and its malicious sibling, Duh, rely on users enabling software not sanctioned through the App Store. Whether Apple's review process is a 100% foolproof way of keeping malware off iPhones, iPods and iPads is not proven either way, but it certainly seems to be doing its job so far.

Of course, reviewing every one of those 140,000+ apps is not a free process. Last year Apple were saying that 98% of apps were reviewed within 7 days; this month, only 90% are approved within 14 days. So there's clearly a scalability problem with the review process, and if the iPad genuinely leads to a "second app store gold rush" then we'll probably not see an improvement there, either. Now, if an app developer discovers a vulnerability in their app (or worse, if a zero-day is uncovered), it could take a couple of weeks to get a security fix out to customers. How should the developer deal with that situation? Should Apple get involved (and if they do, couldn't they have used that time to approve the update)? Update: I'm told (thanks @Reversity) that it's possible to expedite reviews by emailing Apple. We just have to hope that not all developers find out about that, or they'll all try it.

The part of the “big iPod” picture that I find most interesting from a security perspective, however, is the user account model. In a nutshell, there isn’t one. Just like an iPhone or iPod, it is assumed that the person touching the screen is the person who owns the data on the iPad. There are numerous situations in which that is a reasonable assumption. My iPhone, for instance, spends most of its time in my pocket or in my hand, so it’s rare that someone else gets to use it. If someone casually tries to borrow or steal the phone, the PIN lock should be sufficient to keep them from gaining access. However, as it’s the 3G model rather than the newer 3GS, it lacks filesystem encryption, so a knowledgeable thief could still get the data from it. (As an aside, Apple have not mentioned whether the iPad features the same encryption as the iPhone 3GS, so it would be safest to assume that it does not).

The iPad makes sense as a single-user or shared device if it is used as a living room media unit. My girlfriend and I are happy to share music, photos, and videos, so if that’s all the iPad had it wouldn’t matter if we both used the same one. But for some other use cases even we need to keep secrets from each other—we both work with confidential data so can’t share all of our files. With a laptop, we can each use separate accounts, so when one of us logs in we have access to our own files but not to the other’s.

That multi-user capability—even more important in corporate environments—doesn't exist in the iPhone OS, and therefore doesn't exist on the iPad. If two different people want to use an iPad to work with confidential information, they don't need different accounts; they need different iPads. [Another aside: even if all the data is "in the cloud", the fact that two users of one iPad would share a keychain could mean that they have access to each other's accounts anyway; a sketch of why follows below.] Each would need to protect his iPad from access by anyone else. Now, even though in practice many companies do have a "one user, one laptop" correlation, they still rely on a centralised directory service to configure the user accounts, and therefore the security settings, including access to private data.
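
To illustrate that keychain aside, here's a minimal sketch, with invented service and account names, of an app reading a credential back out of the iPhone OS keychain. Nothing in the query identifies which person stored the item, so whoever is holding a shared device gets the same answer.

#import <Foundation/Foundation.h>
#import <Security/Security.h>

// Reads back a password previously stored by this app in the device keychain.
// The service and account names are made up for illustration; note that the
// query carries no notion of which human being stored the item.
NSString *StoredMailPassword(void) {
    NSMutableDictionary *query = [NSMutableDictionary dictionary];
    [query setObject:(id)kSecClassGenericPassword forKey:(id)kSecClass];
    [query setObject:@"com.example.mail" forKey:(id)kSecAttrService];
    [query setObject:@"shared-ipad-owner" forKey:(id)kSecAttrAccount];
    [query setObject:(id)kCFBooleanTrue forKey:(id)kSecReturnData];
    [query setObject:(id)kSecMatchLimitOne forKey:(id)kSecMatchLimit];

    CFTypeRef result = NULL;
    OSStatus status = SecItemCopyMatching((CFDictionaryRef)query, &result);
    if (status != errSecSuccess || result == NULL) return nil;

    NSString *password = [[[NSString alloc] initWithData:(NSData *)result
                                                encoding:NSUTF8StringEncoding] autorelease];
    CFRelease(result);
    return password;
}

On a Mac, separate login accounts give each user a separate login keychain; on the iPad there is only the device's keychain, shared by whoever picks it up.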

Now the iPhone Configuration Utility (assuming its use is extended to iPads) allows configuration of the security settings on the device such as they are, but you can’t just give Jenkins an iPad, have him tell it that he’s Jenkins, then have it notice that it’s Jenkins’s iPad and should grab Jenkins’s account settings. You can do that with Macs and PCs on a network with a directory service; the individual computers can be treated to varying extents as pieces of furniture which only become “Jenkins’s computer” when Jenkins is using one.

If the iPad works in the same way as an iPhone, it will grab that personal and account info from whatever Mac or PC it’s synced to. Plug it in to a different computer, and that one can sync it, merging or replacing the information on the device. This makes registration fairly easy (“here’s your iPad, Jenkins, plug it in to your computer when you’re logged in”) and deregistration more involved (“Jenkins has quit, we need to recover or remove his PIN, take the data from the iPad, then wipe it before we can give it to Hopkins, his replacement”). I happen to believe that many IT departments could, with a “one iPad<->one computer<->one user” system, manage iPads in that way. But it would require a bit of a change from the way they currently run their networks and IT departments don’t change things without good reason. They would probably want full-device encryption (status: unknown) and to lock syncing to a single system (status: the iPhone Enterprise Deployment Guide doesn’t make it clear, but I think it isn’t possible).

What is clear from the blogosphere/twitterverse reaction to the device is that many companies will be forced, sooner or later, to support iPads, just as happened when people started turning up at the helpdesk with BlackBerries and iPhones expecting them to be supported. Being part of that updated IT universe will make for an exciting couple of years.