On phone support scams and fake AV

A couple of weeks ago, I posted on Twitter about a new scam:

Heard about someone who was phoned by a man “from Windows” who engineered his way into remote access to the mark’s computer.

Fast forward to now, the same story has finally been picked up by the security vendors and the mainstream media. This means it’s probably time to go into more depth.

I heard a first-hand account of the scam. The victim is the kind of person who shouldn’t be expected to understand IT security – a long-distance lorry driver who uses his computer for browsing, e-mail, and that sort of thing. As he tells it, the person called, saying they were from Windows and that they had discovered his computer was infected. The caller then talked him through granting remote access so that the computer could be cleaned up.

With remote access, the caller was able to describe some of the problems the victim was having with his computer, while taking control to “fix them”. The caller eventually discovered that the victim’s anti-virus was out of date, and that if he gave the caller his payment information he could get new software for £109. This is when the victim hung up; however, his computer has not booted properly since then.

I think my audience here is probably tech-savvy enough not to need warning about scams like this, and to understand that the real damage was done even before any discussion of payments was made (hint: browser form-auto-fill data). It’s not the scam itself I want to focus on, but our reaction.

Some people I have told this story to in real life (it does happen) have rolled their eyes, and said something along the lines of “well of course the users are the weakest link” in a knowing way. If that’s true, why rely on the users to make all the security decisions? Why leave it to them to decide what’s legitimate and what’s scammy, as was the case here? Why is the solution to any problem to shovel another bucketload of computer knowledge on them and hope that it sticks, as Sophos and the BBC have tried in the articles above?

No. This is not a solution to anything. No matter how loudly you shout about how that isn’t how Microsoft does business, someone who says he is from Microsoft will phone your users up and tell them that it is.

This is the same problem facing anti-virus vendors trying to convince us not to get fooled by FakeAV scams. Vendor A tells us to buy their product instead of Vendor B’s, because it’s better. So, is Vendor A the FakeAV pedlar, or B? Or is it both? Or neither? You can’t tell.

It may seem that this is a problem that cannot be solved in technology, that it relies on hard-wired behaviour of us bald apes. I don’t think that’s so. I think that it would be possible to change the way we, legitimate software vendors, interact with our users, and the way they interact with our software, such that an offline scam like this would never come to pass. A full discussion would fill a whole whitepaper that I haven’t written yet. However, to take the most extreme point from it, the one I know you’re going to loathe, what if our home computers were managed remotely by the vendors? Do most users really need complete BIOS and kernel level access to their kit? Really?

Look for the whitepaper sometime in the new year.

On free Mac Anti-Virus

On Tuesday, my pals at my old stomping ground Sophos launched the free home edition of their Mac anti-virus product. I’ve been asked by several people what makes it tick, so here’s Mac Anti-Virus In A Nutshell.

Sophos Anti-Virus for Mac

What is the AV doing?

So anti-virus is basically a categorisation technology: you look at a file and decide whether it’s bad. The traditional view people have of an AV engine is that there’s a huge table of file checksums, and the AV product just compares every file it encounters to every checksum and warns you if it finds a match. That’s certainly how it used to work around a decade ago, but even low-end products like ClamAV don’t genuinely work this way any more.
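As a minimal illustration of that traditional model, and only that model, here’s roughly what a whole-file checksum lookup looks like in C on the Mac, using CommonCrypto for the hashing. The known_bad_sha1 table is a made-up placeholder; nothing here reflects any real product’s detection data.

    /* checksum_lookup.c - the old-fashioned "big table of checksums" model.
     * CommonCrypto ships with Mac OS X, so this builds with plain cc.      */
    #include <CommonCrypto/CommonDigest.h>
    #include <stdio.h>
    #include <string.h>

    /* Placeholder table: hex SHA-1 digests of known-bad files would go here. */
    static const char *known_bad_sha1[] = {
        "da39a3ee5e6b4b0d3255bfef95601890afd80709",
    };

    int is_known_bad(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return 0;

        CC_SHA1_CTX ctx;
        CC_SHA1_Init(&ctx);

        unsigned char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            CC_SHA1_Update(&ctx, buf, (CC_LONG)n);
        fclose(f);

        unsigned char digest[CC_SHA1_DIGEST_LENGTH];
        CC_SHA1_Final(digest, &ctx);

        /* Turn the digest into lowercase hex and look it up in the table. */
        char hex[2 * CC_SHA1_DIGEST_LENGTH + 1];
        for (int i = 0; i < CC_SHA1_DIGEST_LENGTH; i++)
            snprintf(hex + 2 * i, 3, "%02x", digest[i]);

        for (size_t i = 0; i < sizeof known_bad_sha1 / sizeof known_bad_sha1[0]; i++)
            if (strcmp(hex, known_bad_sha1[i]) == 0)
                return 1;
        return 0;
    }

Compare a file’s digest against every entry, warn on a match: simple, but it tells you nothing about a sample it has never seen byte-for-byte, which is part of why the engines moved on.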

Modern anti-virus starts its work by classifying the file it’s looking at. This basically means deciding what type of file it is: a Mac executable, a Word document, a ZIP and so on. Some of these are actually containers for other file types: a ZIP obviously contains other files, and a Word document contains sections which might have macros in them, which are worth a look. A Mac fat file contains one or more executable files, each of which contains various data and program segments. Even a text file might actually turn out to be a shell script (which could in turn contain a perl script as a here doc), and so on. Eventually the engine will have identified zero or more parts of the file that it wants to inspect.

Because the engine now knows the type of the data it’s looking at, it can be clever about which tests it applies. So the engine contains a whole barrage of different tests, but still runs very quickly because it knows when each test is actually necessary. For example, most AV products, now including Sophos’, can run x86 code in an emulator or sandbox to see whether it would try to do something naughty. But it doesn’t bother trying to do that to a JPEG.
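To make that classify-then-dispatch idea concrete, here’s a toy sketch in C. The magic-number checks are real signatures for fat binaries, Mach-O, ZIP and legacy Office containers, but the type list and the scan_* helpers are hypothetical stand-ins, not anything from Sophos’s engine.

    /* classify_and_dispatch.c - decide what a file is, then run only the
     * tests that make sense for that type of file.                        */
    #include <stdio.h>
    #include <string.h>

    typedef enum { FT_UNKNOWN, FT_FAT, FT_MACHO, FT_ZIP, FT_OLE2 } filetype_t;

    static filetype_t classify(const unsigned char *buf, size_t len)
    {
        if (len >= 4 && memcmp(buf, "\xca\xfe\xba\xbe", 4) == 0)
            return FT_FAT;                    /* Mac fat (multi-arch) binary */
        if (len >= 4 && (memcmp(buf, "\xcf\xfa\xed\xfe", 4) == 0 ||
                         memcmp(buf, "\xce\xfa\xed\xfe", 4) == 0 ||
                         memcmp(buf, "\xfe\xed\xfa\xce", 4) == 0))
            return FT_MACHO;                  /* Mach-O executable           */
        if (len >= 4 && memcmp(buf, "PK\x03\x04", 4) == 0)
            return FT_ZIP;                    /* ZIP: recurse into members   */
        if (len >= 8 && memcmp(buf, "\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1", 8) == 0)
            return FT_OLE2;                   /* legacy Word/Excel container */
        return FT_UNKNOWN;
    }

    /* Hypothetical type-specific tests; only the ones worth running for the
     * identified type are ever invoked.                                     */
    static int scan_executable_in_emulator(FILE *f) { (void)f; return 0; }
    static int scan_archive_members(FILE *f)        { (void)f; return 0; }
    static int scan_macros(FILE *f)                 { (void)f; return 0; }

    int scan_file(const char *path)
    {
        unsigned char header[8];
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return -1;

        size_t n = fread(header, 1, sizeof header, f);
        rewind(f);

        int infected = 0;
        switch (classify(header, n)) {
        case FT_FAT:           /* each architecture slice would be classified */
        case FT_MACHO:         /* and scanned in turn; elided here             */
            infected = scan_executable_in_emulator(f);
            break;
        case FT_ZIP:
            infected = scan_archive_members(f);
            break;
        case FT_OLE2:
            infected = scan_macros(f);
            break;
        default:
            break;             /* e.g. a JPEG: no point firing up the emulator */
        }
        fclose(f);
        return infected;
    }

The default case is the whole point: once the engine knows it’s looking at a JPEG, the expensive x86 emulation never runs at all.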

That sounds slow.

And the figures seem to bear that out: running a scan via the GUI can take hours, or even a day. A large part of this is due to limitations on the hard drive’s throughput, exacerbated by the fact that there’s no way to ask a disk to come up with a file access strategy that minimises seek time (time that’s effectively wasted while the disk moves its heads and platters to the place where the file is stored). Such a thing would mean reading the whole drive catalogue (its table of contents), and thinking for a while about the best order to read all of the files. Besides, such strategies fall apart when one of the other applications needs to open a file, because the hard drive has to jump away and get that one. So as this approach can’t work, the OS doesn’t support it.

On a Mac with a solid state drive, you actually can get to the point where CPU availability, rather than storage throughput, is the limiting factor. But surely even solid state drives are far too slow compared with CPUs, and the Anti-Virus app must be quite inefficient to be CPU-limited? Not so. Of course, there is some work that Sophos Anti-Virus must be doing in order to get worthwhile results, so I can’t say that it uses no CPU at all. But having dealt with the problem of hard drive seeking, we now meet the UBC.

The Unified Buffer Cache is a place in memory where the kernel holds the content of recently accessed files. As new files are read, the kernel throws away the contents of old files and stores the new one in the cache. Poor kernel. It couldn’t possibly know that this scanner is just going to do some tests on the file then never look at it again, so it goes to a lot of effort swapping contents around in its cache that will never get used. This is where a lot of the time ends up.
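For what it’s worth, there is a way for a scanner to limit that churn on Mac OS X. This is a sketch of the general technique rather than a claim about what Sophos Anti-Virus actually does: ask for uncached reads with the F_NOCACHE fcntl, because the scanner knows it will read each file once and then never again.

    /* open_for_scanning.c - open a file for a one-shot scan without letting
     * its contents push other applications' data out of the UBC.           */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int open_for_scanning(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        /* Hint to the kernel: don't cache pages read through this descriptor. */
        if (fcntl(fd, F_NOCACHE, 1) < 0)
            perror("F_NOCACHE");    /* non-fatal; the scan just loses the hint */

        return fd;
    }

The trade-off is that uncached reads can be a little slower for the scanner itself; the point is that its one-shot reads stop competing with everybody else’s working set.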

On not wasting all that time

This is where the on-access scanner comes in. If you look at the Sophos installation, you’ll see an application at /Library/Sophos Anti-Virus/InterCheck.app – this is a small UNIX tool that includes a kernel extension to intercept file requests and test the target files. If it finds an infected file, it stops the operating system from opening it.

Sophos reporting a threat.

To find out how to do this interception, you could do worse than look at Professional Cocoa Application Security, where I talk about the KAUTH (Kernel AUTHorisation) mechanism in Chapter 11. But the main point is that this approach – checking files when you ask for them – is actually more efficient than doing the whole scan. For a start, you’re only looking at files that are going to be needed anyway, so you’re not asking the hard drive to go out of its way and prepare loads of content that isn’t otherwise being used. InterCheck can also be clever about what it does; for example, there’s no need to scan the same file twice if it hasn’t changed in the meantime.
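For a flavour of what that interception looks like, here’s a heavily simplified vnode-scope KAUTH listener. This is my sketch, not InterCheck’s code; is_infected() is a placeholder for the real check, which in practice would be handed off to a user-space scanner (with a cache so unchanged files aren’t scanned twice) rather than done inline in the kernel.

    /* intercheck_sketch.c - a toy kernel extension that vetoes access to
     * files the (placeholder) scanner considers infected.                  */
    #include <mach/mach_types.h>
    #include <mach/kmod.h>
    #include <sys/kauth.h>
    #include <sys/vnode.h>

    static kauth_listener_t g_listener;

    /* Placeholder for the real work: look the vnode up in a cache of
     * already-scanned files, scan it if it's new or changed, return 1 if bad. */
    static int is_infected(vnode_t vp, vfs_context_t ctx)
    {
        (void)vp; (void)ctx;
        return 0;
    }

    static int
    vnode_callback(kauth_cred_t cred, void *idata, kauth_action_t action,
                   uintptr_t arg0, uintptr_t arg1, uintptr_t arg2, uintptr_t arg3)
    {
        vfs_context_t ctx = (vfs_context_t)arg0;
        vnode_t       vp  = (vnode_t)arg1;
        (void)cred; (void)idata; (void)arg2; (void)arg3;

        /* Only interested in attempts to read or execute regular files. */
        if ((action & (KAUTH_VNODE_READ_DATA | KAUTH_VNODE_EXECUTE)) == 0)
            return KAUTH_RESULT_DEFER;
        if (!vnode_isreg(vp))
            return KAUTH_RESULT_DEFER;

        if (is_infected(vp, ctx))
            return KAUTH_RESULT_DENY;    /* the open or exec fails */

        return KAUTH_RESULT_DEFER;       /* no opinion; let other policies decide */
    }

    kern_return_t intercheck_sketch_start(kmod_info_t *ki, void *d)
    {
        (void)ki; (void)d;
        g_listener = kauth_listen_scope(KAUTH_SCOPE_VNODE, vnode_callback, NULL);
        return (g_listener != NULL) ? KERN_SUCCESS : KERN_FAILURE;
    }

    kern_return_t intercheck_sketch_stop(kmod_info_t *ki, void *d)
    {
        (void)ki; (void)d;
        if (g_listener != NULL)
            kauth_unlisten_scope(g_listener);
        return KERN_SUCCESS;
    }

Returning KAUTH_RESULT_DEFER rather than KAUTH_RESULT_ALLOW on the happy path matters: the listener only ever vetoes access, it never grants something the rest of the system would have refused.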

OK, so it’s not a resource hog. But I still don’t need anti-virus.

Not true. This can best be described as anecdotal, but of all the people who reported to me that they had run a scan since the free Sophos product became available, around 75% said that it had detected threats. These were mainly Windows executables attached to mail, but it’s still good to detect and destroy those so they don’t get onto your Boot Camp partition or somebody else’s PC.

There definitely is a small, but growing, pile of malware that really does target Macs. I was the tech reviewer for Enterprise Mac Security; for the chapter on malware, my research turned up tens of different strains: mainly Trojan horses (as on Windows), some OpenOffice macros, and some web-based threats. And that was printed well before Koobface was ported to the Mac.

Alright, it’s free, I’ll give it a go. Wait, why is it free?

Well here I have to turn to speculation. If your reaction to my first paragraph was “hang on, who is Sophos?”, then you’re not alone. Sophos is still a company that only sells to other businesses, and that means that the inhabitants of the Clapham Omnibus typically haven’t heard of them. Windows users have usually heard of Symantec via their Norton brand, McAfee and even smaller outfits like Kaspersky, so those are names that come up in the board room.

That explains why they might release a free product, but not why it’s this one. Well, now you have to think about what makes AV vendors different from one another, and really the answer is “not much”. They all sell pretty much the same thing; occasionally one of them comes up with a new feature, but that gap usually closes quite quickly.

Cross-platform support is one area that’s still open, surprisingly. Despite the fact that loads of the vendors (and I do mean loads: Symantec, McAfee, Trend Micro, Sophos, Kaspersky, F-Secure, Panda and Eset all spring to mind readily) support the Mac and some other UNIX platforms, most of these are just checkbox products that exist to prop up their feature matrix. My suspicion is that by raising the profile of their Mac offering Sophos hopes to become the cross-platform security vendor. And that makes giving their Mac product away for free more valuable than selling it.

Security flaw liability

The Register recently ran an opinion piece called Don’t blame Willy the Mailboy for software security flaws. The article is a reaction to the following excerpt from a SANS sample application security procurement contract:

No Malicious Code

Developer warrants that the software shall not contain any code that does not support a software requirement and weakens the security of the application, including computer viruses, worms, time bombs, back doors, Trojan horses, Easter eggs, and all other forms of malicious code.

That seems similar to a requirement I have previously almost proposed voluntarily adopting:

If one of us [Mac developers] were, deliberately or accidentally, to distribute malware to our users, they would be (rightfully) annoyed. It would severely disrupt our reputation if we did that; in fact some would probably choose never to trust software from us again. Now Mac OS X allows us to put our identity to our software using code signing. Why not use that to associate our good reputations as developers with our software? By using anti-virus software to improve our confidence that our development environments and the software we’re building are clean, and by explaining to our customers why we’ve done this and what it means, we effectively pass some level of assurance on to our customer. Applications signed by us, the developers, have gone through a process which reduces the risk to you, the customers. While your customers trust you as the source of good applications, and can verify that you were indeed the app provider, they can believe in that assurance. They can associate that trust with the app; and the trust represents some tangible value.

Now what the draft contract seems to propose (and I have good confidence in this, due to the wording) is that if a logic bomb, back door, Easter Egg or whatever is implemented in the delivered application, then the developer who wrote that misfeature has violated the contract, not the vendor. Taken at face value, this seems just a little bad. In the subset of conditions listed above, the developer has introduced code into the application that was not part of the specification. It either directly affects the security posture of the application, or is of unknown quality because it’s untested: the testers didn’t even know it was there. This is clearly the fault of the developer, and the developer should be accountable. In most companies this would be a sacking offence, but the proposed contract goes further and says that the developer is responsible to the client. Fair enough, although the vendor should take some responsibility too, as a mature software organisation should have a process such that none of its products contain unaccounted code. This traceability from requirement to product is the daily bread of some very mature development lifecycle tools.

But what about the malware cases? It’s harder to assign blame to the developer for malware injection, and I would say that actually the vendor organisation should be held responsible, and should deal with punishment internally. Why? Because there are too many links in a chain for any one person to put malware into a software product. Let’s say one developer does decide to insert malware.

  • Developer introduces malware to his workstation. This means that any malware prevention procedures in place on the workstation have failed.
  • Developer commits malware to the source repository. Any malware prevention procedures in place on the SCM server have failed.
  • Developer submits build request to the builders.
  • Builder checks out the build input, does not notice the malware, and constructs the product.
  • Builder does not spot the malware in the built product.
  • Testers do not spot the malware in final testing.
  • Release engineers do not spot the malware, and release the product.
Of course there are various points at which malware could be introduced, but for a developer to do so in a way consistent with his role as developer requires a systematic failure in the company’s procedures regarding information security, which implies that the CSO ought to be accountable in addition to the developer. It’s also, as with the Easter Egg case, symptomatic of a failure in the control of their development process, so the head of Engineering should be called to task as well. In addition, the head of IT needs to answer some uncomfortable questions.

So, as it stands, the proposed contract seems well-intentioned but inappropriate. Now what if it’s the thin end of a slippery iceberg? Could developers be held to account for introducing vulnerabilities into an application? The SANS contract is quiet on this point. It requires that the vendor provide a “certification package” consisting of the security documentation created throughout the development process; the package shall establish that the security requirements, design, implementation, and test results were properly completed and all security issues were resolved appropriately, and that “Security issues discovered after delivery shall be handled in the same manner as other bugs and issues as specified in this Agreement”. In other words, the vendor should prove that all known vulnerabilities have been mitigated before shipment, and if a vulnerability is subsequently discovered and is dealt with in an agreed fashion, no-one did anything wrong.

That seems fairly comprehensive, and definitely places the onus directly on the vendor (there are various other parts of the contract that imply the same, such as the requirement for the vendor to carry out background checks and provide security training for developers). Let’s investigate the consequences for a few different scenarios.

1. The product is attacked via a vulnerability that was brought up in the original certification package, but the risk was accepted. This vulnerability just needs to be fixed and we all move on; the risk was known, documented and accepted, and the attack is a consequence of doing business in the face of known risks.

2. The product is attacked via a novel class of vulnerability, details of which were unknown at the time the certification package was produced. I think that again, this is a case where we just need to fix the problem, of course with sufficient attention paid to discovering whether the application is vulnerable in different ways via this new class of flaw. While developers should be encouraged to think of new ways to break the system, it’s inevitable that some unpredicted attack vectors will be discovered. Fix them, incorporate them into your security planning.

3. The product is attacked by a vulnerability that was not covered in the certification package, but that is a failure of the product to fulfil its original security requirements. This is a case I like to refer to as “someone fucked up”. It ought to be straightforward (if time-consuming) to apply a systematic security analysis process to an application and get a comprehensive catalogue of its vulnerabilities. If the analysis missed things out, then either it was sloppy, abbreviated or ignored.

Sloppy analysis. The security lead did not systematically enumerate the vulnerabilities, or missed some section of the app. The security lead is at fault, and should be responsible.

Abbreviated analysis. While the security lead was responsible for the risk analysis, he was not given the authority to see it through to completion or to implement its results in the application. Whoever withheld that authority is to blame and should accept responsibility. In my experience this is typically a marketing or product management decision, made while working backwards from a ship date to the amount of effort that can be spent on the product and dropping tasks to fit. Sometimes it’s engineering management; it’s almost never project management.

Ignored analysis. Example: the risk of attack via buffer overflow is noted in the analysis, then the developer writing the feature doesn’t do any bounds checking. That developer has screwed up, and ought to be responsible for their mistake. If you think that’s a bit harsh, check this line from the “duties” list in a software engineer job ad:

Write code as directed by Development Lead or Manager to deliver against specified project timescales quality and functionality requirements

When you’re a programmer, it’s your job to bake quality in.
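To make the buffer overflow example from the ignored-analysis case concrete, here’s a minimal sketch of the difference between the screw-up and what the analysis asked for. The record structure, field size and function names are hypothetical.

    /* bounds_check.c - the risk analysis said "attacker-controlled input can
     * overflow this buffer"; one of these functions listened, one didn't.   */
    #include <string.h>

    #define NAME_FIELD_LEN 32

    struct record {
        char name[NAME_FIELD_LEN];
    };

    /* The screw-up: trusts the attacker-supplied string to fit the field. */
    void set_name_broken(struct record *r, const char *untrusted)
    {
        strcpy(r->name, untrusted);           /* overflows for input >= 32 chars */
    }

    /* What the analysis asked for: measure, reject what doesn't fit, and
     * always leave the field NUL-terminated.                               */
    int set_name_checked(struct record *r, const char *untrusted)
    {
        size_t len = strnlen(untrusted, NAME_FIELD_LEN);
        if (len >= NAME_FIELD_LEN)
            return -1;                        /* caller must handle the rejection */
        memcpy(r->name, untrusted, len);
        r->name[len] = '\0';
        return 0;
    }

Nothing clever, just the check the analysis called for; the point is that writing it was the developer’s job, not an optional extra.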

It’s just a big iPod

I think you would assume I had my privacy settings ramped up a little too high if I hadn’t heard about the iPad, Apple’s new touchscreen mobile device. Having had a few days to consider it and allow the hype to die down, my considered opinion on the iPad’s security profile is this: it’s just a big iPod.

Now that’s no bad thing. We’ve seen from the iPhone that the moderated gateway for distributing software—the App Store—keeps malware away from the platform. Both the Rickrolling iKee worm and its malicious sibling, Duh, rely on users enabling software not sanctioned through the app store. Now whether or not Apple’s review process is a 100% foolproof way of keeping malware off iPhones, iPods and iPads is not proven either way, but it certainly seems to be doing its job so far.

Of course, reviewing every one of those 140,000+ apps is not a free process. Last year, Apple were saying that 98% of apps were reviewed within 7 days; this month, only 90% are approved within 14 days. So there’s clearly a scalability problem with the review process, and if the iPad does genuinely lead to a “second app store gold rush” then we’ll probably not see an improvement there, either. Now, if an app developer discovers a vulnerability in their app (or worse, if a zero-day is uncovered), it could take a couple of weeks to get a security fix out to customers. How should the developer deal with that situation? Should Apple get involved (and if they do, couldn’t they have used that time to approve the update)? Update: I’m told (thanks @Reversity) that it’s possible to expedite reviews by emailing Apple. We just have to hope that not all developers find out about that, or they’ll all try it.

The part of the “big iPod” picture that I find most interesting from a security perspective, however, is the user account model. In a nutshell, there isn’t one. Just like an iPhone or iPod, it is assumed that the person touching the screen is the person who owns the data on the iPad. There are numerous situations in which that is a reasonable assumption. My iPhone, for instance, spends most of its time in my pocket or in my hand, so it’s rare that someone else gets to use it. If someone casually tries to borrow or steal the phone, the PIN lock should be sufficient to keep them from gaining access. However, as it’s the 3G model rather than the newer 3GS, it lacks filesystem encryption, so a knowledgeable thief could still get the data from it. (As an aside, Apple have not mentioned whether the iPad features the same encryption as the iPhone 3GS, so it would be safest to assume that it does not).

The iPad makes sense as a single-user or shared device if it is used as a living room media unit. My girlfriend and I are happy to share music, photos, and videos, so if that’s all the iPad had it wouldn’t matter if we both used the same one. But for some other use cases even we need to keep secrets from each other—we both work with confidential data so can’t share all of our files. With a laptop, we can each use separate accounts, so when one of us logs in we have access to our own files but not to the other’s.

That multi-user capability—even more important in corporate environments—doesn’t exist in the iPhone OS, and therefore doesn’t exist on the iPad. If two different people want to use an iPad to work with confidential information, they don’t need different accounts; they need different iPads. [Another aside: even if all the data is “in the cloud” the fact that two users on one iPad would share a keychain could mean that they have access to each others’ accounts anyway.] Each would need to protect his iPad from access by anyone else. Now even though in practice many companies do have a “one user, one laptop” correlation, they still rely on a centralised directory service to configure the user accounts, and therefore the security settings including access to private data.

Now the iPhone Configuration Utility (assuming its use is extended to iPads) allows configuration of the security settings on the device such as they are, but you can’t just give Jenkins an iPad, have him tell it that he’s Jenkins, then have it notice that it’s Jenkins’s iPad and should grab Jenkins’s account settings. You can do that with Macs and PCs on a network with a directory service; the individual computers can be treated to varying extents as pieces of furniture which only become “Jenkins’s computer” when Jenkins is using one.

If the iPad works in the same way as an iPhone, it will grab that personal and account info from whatever Mac or PC it’s synced to. Plug it in to a different computer, and that one can sync it, merging or replacing the information on the device. This makes registration fairly easy (“here’s your iPad, Jenkins, plug it in to your computer when you’re logged in”) and deregistration more involved (“Jenkins has quit, we need to recover or remove his PIN, take the data from the iPad, then wipe it before we can give it to Hopkins, his replacement”). I happen to believe that many IT departments could, with a “one iPad<->one computer<->one user” system, manage iPads in that way. But it would require a bit of a change from the way they currently run their networks and IT departments don’t change things without good reason. They would probably want full-device encryption (status: unknown) and to lock syncing to a single system (status: the iPhone Enterprise Deployment Guide doesn’t make it clear, but I think it isn’t possible).

What is clear based on the blogosphere/twitterverse reaction to the device is that many companies will be forced, sooner or later, to support iPads, just as when people started turning up to the helpdesks with BlackBerries and iPhones expecting them to be supported. Being part of that updated IT universe will make for an exciting couple of years.