On my own competency

There was a question on programmers.stackexchange.com about whether to put your Stack Overflow reputation in your CV. I don’t, and answered as much: there’s no point in writing for its own sake, unless you want to be a writer. If you want to be a programmer, then you should be a good programmer and any writing you do should be evidence of that.

There’s a great—if somewhat old—resource for evaluating your own capability as a software engineer called the programmer competency matrix. Writing my answer to that question reminded me of the matrix, so I decided to evaluate myself against it and use the results to help build a research and personal development plan.

I’m sharing my own results in this post. My hope is that this will trigger some introspection of your own, and demonstrate just how easy it is to devote a little time to personal development (this took me about an hour to do, including the writing). I’m also being selfish: when I write for publication I’m a lot more organised than if I just push notes into iThoughtsHD or OmniOutliner.

It doesn’t matter whether you agree or disagree with anything I write. If we agree on any row of the matrix, then we’re both pulling in the same direction and pushing the bar a little higher in that area of software engineering. If we disagree, then we’re still both raising the bar, because we both have our special skills that (hopefully) contribute to making the whole industry that bit more skilful and knowledgeable. Of course, if you think that I’m completely brain-dead for choosing not to develop my skills in a particular area, feel free to flame me in the comments.

Of course, it’s hard to be objective about self-evaluation. I hope that my answers below are somewhat accurate, but it’s easy to have an over-inflated opinion of your own competence. In fact, when I was a junior software engineer I felt like I knew more than I do now :). It would definitely be trite to give a nonsense buzzphrase like “we should always be continually improving at everything we’re doing”, because there aren’t enough hours in the day for that. There should be things that we definitely need to work on, things that we “just do” but reflect on how they’re going, and things that we just automatically get on with. Of course, when I think that something should be automatic but I make a mistake, that’s a time to re-evaluate my perception of my skills.

Computer Science

  • data structures: Level 1. I’ve done some study into some of the level 2 features (and even know what some of the words mean in Level 3), but at heart I feel like that’s stuff for a framework programmer to deal with and I’m an applications programmer. I’m happy to believe that the framework engineers know how to implement List<Integer> or NSArray properly and leave them to it.
    Related tale: I once had an interview where the interviewer drew a list on the whiteboard, and asked how I’d sort it. I wrote a "[" at the beginning, and " sort];" at the end. I didn’t get the job.
  • algorithms: Level 2. Again this is largely the domain of the framework engineer, but there are times when it’s become important to my own work such as when I’ve had to do performance engineering (and of course cryptographic algorithms factor heavily into my work). I tend to address algorithmic problems on a case-by-case basis, and don’t feel the need to explicitly schedule some research time.
  • systems programming: Level 2. I do believe that I need to develop here, particularly in the areas of JIT compiling and interpretation. If BDD and DDD (and, to some extent, OOP) have taught us anything, it’s that there are great efficiency savings to be made by expressing a software problem in a way that’s understandable by, or at least explainable to, the user. And that means Domain Specific Languages. I’ve actually seen that work very well, when I was working for Diamond Light Source. Scientists could write working prototypes in a Jython-based DSL that would then either be cleaned up by engineers or reimplemented in the core engine.

    Of course, BDD and DDD are both also examples of RDD (Résumé-Driven Development), but that doesn’t mean they have nothing to offer us.

Software Engineering

  • source code version control: Level 3. To be honest I think that the matrix is outdated here, and that DVCS is well on its way to becoming the new established workflow. When the matrix was written, SVN was the de facto standard for VCS. Version control is just one of those things that you need to know and need to use properly. It will probably still change quite a lot (I don’t think that any of the DVCS platforms out there make the uber-common “star” distribution any easier than decentralised development), and my opinion is that I need to be good at the tools I need to use. Of course, being a contractor that often means being good at all of the tools.
  • build automation: Level 3. I’ve written here before about setting up tools like HeaderDoc and CruiseControl. I definitely think some of the tools need improving, so could find time to work on that.
  • automated testing: Level 3. Something I need to do more of, and can “learn on the job” while I’m doing it, but there isn’t any point in having a “do UI testing” research project.

Programming

  • problem decomposition: Level 3. That’s not to say I’m perfect at object-oriented analysis and design: every time I see someone else’s code I’m interested in what patterns they’ve chosen and how they’ve organised it. Actually, my current writing project—a book, video training course and several conference talks on TDD—has been useful for learning more about decomposition and modularity.
  • systems decomposition: Level 2. This is an area I definitely need to focus on, because almost all modern applications are actually multiple-component systems that either do, or should, present the same data across multiple devices and feature both dedicated and incidental online components.
  • communication: Level 3. I think the amount of time I’ve spent recently on books, conference talks, workshops and the like means that I’ve actually over-engineered this aspect significantly and can dial back on it once my current obligations are discharged.
  • code organisation within a file: Unsure. Maybe Level 3: I definitely have a style I use and am comfortable with, but is it “beautiful”? Does that even matter? Shouldn’t the important factor be that reading this programmer’s code doesn’t get in the way of understanding and modifying it?
  • code organisation across multiple files: Level 2. Who uses the file system any more, anyway? I organise my classes and other resources in the Xcode project navigator. Turns out I use Xcode as some sort of “project builder” tool, not the Finder. If the view of projects and groups in the IDE were the metric in question, then I think I’d be at Level 3 here.
  • source tree organisation: Level 2, for similar reasons.
  • code readability: Level 3. Again, this book/research project on TDD has been my most recent and valuable guide.
  • defensive coding: Level 3. Ensuring you don’t only code and test the happy path is one of my current rants, if you give me enough beer. There was a section in my first book on this.
  • error handling: Level 3. I think this is really just an aspect of defensive coding, though I agree that a consistent approach across projects is important. I’m not a person who avoids using exceptions in Objective-C code.
  • IDE: Level 2. I suppose if you count shell script build phases as ‘custom macros’, then I’ve done level 3 work, but I don’t have a library of those shell scripts that I carry around to various projects. I probably could make some efficiency savings by recording how I use Xcode and reducing any commonly-repeated steps or actions.
  • API: Level 1. No, srsly. I know of the existence and capabilities of many of the Cocoa and Cocoa Touch APIs (with, for obvious reasons, the most focus on Foundation and Core Foundation) but always have to look up the class reference docs. This is really the most important thing I need to work on, so my next research project will be to deepen my knowledge of the APIs.
  • framework: Level 2. In addition to the Cocoa frameworks, I’ve worked with WebObjects, Eclipse RCP, wxWidgets and (more’s the pity) Swing. Not sure I particularly need to write a framework at the moment, there are other people who do good jobs of that. Exploring MonoTouch would be useful.
  • requirements: Level 3. Being a contractor, it’s important to know when to try and change the client’s mind and do things differently, if you want to write good software rather than write crap for a good hourly rate. I’ve also worked on in-house teams where the product manager or architect didn’t have any Mac experience, and it’s impossible to work on those without being able to suggest improved requirements.
  • scripting: Level 2. Yes, automating common developer tasks is still in its infancy, and I probably should find, clean up and publish some of the scripts I’ve created over time.
    (By the way: different scripting languages are useful for different things. I’ve learnt a lot of bash, perl and python, but should probably pick up ruby too. Don’t dismiss any of them for looking like unintelligible spider scrawl; you’ll probably find a use for it one day.)
  • database: Level 1. EOF and Core Data do exist for a reason…to be honest I don’t think I’ll ever be a Level 3 database person, it’s probably cheaper to hire somebody if I ever need that. Most of the products I’m currently being asked to work on either have somebody else for that or don’t have strong performance requirements.

Experience

Actually, I’m going to skip this section. I don’t believe that you can do a research project on improving your experience; only your exposure. You gain experience by doing.

Knowledge

  • tool knowledge: Level 2. As I’ve said there are clear shortcomings in the tools I’ve used, and I’d love to improve them or write something new and get it out there.
  • languages exposed to: Level 2. I have done Prolog, but no concurrent programming languages. Additionally, Andre Pang’s talk on Functional Programming at NSConf 09 showed that I could learn a lot more from that arena.
  • knowledge of upcoming technologies: Level 3. It’s a particular point of professional pride to me that I know what’s coming up and how it changes what I’m working on. Of course, in a world of non-disclosure agreements then “sharing” has to be limited to immediate project teams and people at WWDC. You are coming to WWDC, right?
  • platform internals: Level 2. I’m not Amit Singh. I already have a project on the horizon to do some more hacking on Clang, and have spoken about that before at NSConf Mini. It’s questionable whether a deep understanding of the platform internals is necessary, or whether that should be abstracted by the platform frameworks and interfaces. Personally I believe it is something useful to know, particularly in diagnosing bug reports. It’s well known that almost no bugs reported in your software are really bugs in the OS, and a deep understanding of the OS can help to confirm that.
  • books: Level 1, because I haven’t read Peopleware. I’ve written 1.5 books, does that count for anything?
  • blogs: Level 3. You’re reading it. As identified above, I probably expend more effort on communication than is necessary.

So, that’s my current state of competency in a nutshell. From there, it looks like my priorities for research goals over the upcoming year or so should be:

  1. Improve breadth and depth of API knowledge
  2. Investigate systems decomposition of a smartphone/desktop/internet system
  3. Learn how to create and support DSLs
  4. Write and publish improved developer support tools

OK, your turn: what are yours?

On the broken(?) Mac App Store

A day after the Mac App Store was launched, people are reporting that it has been cracked. There are two separate stories here: a vapourware circumvention of the FairPlay DRM used to generate the receipts, and a report that certain apps aren’t validating the receipts properly. We can ignore the first case for the moment: it’s important, and if it’s true then Apple needs to fix it (and co-ordinate updating the validation code with us third-party developers). For now, though, it’s more important that developers implement the protections that are in place in their applications – it’s those applications that are supposed to be protected.

Let’s skip, for the moment, the question of whether DRM or anti-cracking mechanisms are ethically right, worthwhile, or how much effort you want to put into them. Apple have done most of the legwork, in providing a vendor-signed receipt that’s part of your signed app bundle. What you need to do is:

  • Check whether you have a receipt
  • Check whether Apple signed the receipt you have
  • Check whether the receipt is valid for your product
  • Check whether the receipt is valid for this version of your product
  • Check whether the receipt is valid for this computer

That’s it in a nutshell. Of course, some nutshells surround very big and complex nuts, and that’s true in this case:

  • There’s some good example code for receipt validation at github/roddi/ValidateStoreReceipt; if you’re going to use it, don’t just paste it wholesale. If everyone uses the same code, then it’s super-easy for someone to detect and strip that code from each instance.
  • Check at runtime, as well as at startup. If you just check at startup, then all an attacker needs to do is patch main() to jump straight into NSApplicationMain() and your app runs for free.
  • Code obfuscation is not a very effective tool. Having worked in anti-virus, I know it’s much easier to classify code based on what it does than what it is, and it’s quite easy to find the code that opens the receipt file, or calls exit(173). That said, some of the commercial obfuscation companies offer a guaranteed service, so you can still protect your revenue after the app gets cracked.
  • Update I have been advised privately and seen in a blog post that people are recommending hard-coding their app bundle IDs and version numbers into the binary rather than using Info.plist, because that file can be edited. Well, so can the app binary…and in either case you’d need to re-sign the product with a valid certificate to continue, because Apple have used the kill flag:
    heimdall:~ leeg$ codesign -dvvvv /Applications/Twitter.app/
    Executable=/Applications/Twitter.app/Contents/MacOS/Twitter
    Identifier=com.twitter.twitter-mac
    Format=bundle with Mach-O universal (i386 x86_64)
    CodeDirectory v=20100 size=12452 flags=0x200(kill) hashes=616+3 location=embedded
    CDHash=8e0736639d79a108a5a1ebe89f928d1da0d49d94
    Signature size=4169
    Authority=Apple Mac OS Application Signing
    Authority=Apple Worldwide Developer Relations Certification Authority
    Authority=Apple Root CA
    Info.plist entries=21
    Sealed Resources rules=4 files=78
    Internal requirements count=2 size=344
    

    Changing a hard-coded string in a binary file is not difficult. You can of course obfuscate the string, but the motivated cracker still finds the point where the comparison is made (particularly easily if you use NSStrings). Really, how far you want to go depends on how much you’re willing to spend.
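
To make the shape of all this concrete, here’s a minimal sketch of the checks from the list above. To be clear, this is illustrative only: the helper functions (receiptSignatureIsValid() and friends) are hypothetical stand-ins for parsing and cryptography you’d have to write yourself – perhaps starting from the roddi code – and the hard-coded identifier and version are exactly the sort of thing you might choose to obfuscate.

    #import <Foundation/Foundation.h>
    #include <stdlib.h>

    // Hypothetical helpers: you would implement these yourself, e.g. by parsing
    // the PKCS#7 receipt as in the ValidateStoreReceipt example code.
    BOOL receiptSignatureIsValid(NSURL *receiptURL);        // did Apple sign it?
    NSString *receiptBundleIdentifier(NSURL *receiptURL);
    NSString *receiptBundleVersion(NSURL *receiptURL);
    NSData *receiptHash(NSURL *receiptURL);                 // hash stored in the receipt
    NSData *expectedHashForThisMachine(NSURL *receiptURL);  // recomputed from the machine GUID

    static void validateReceiptOrDie(void)
    {
        NSURL *receiptURL = [[[NSBundle mainBundle] bundleURL]
            URLByAppendingPathComponent:@"Contents/_MASReceipt/receipt"];

        // 1. Is there a receipt at all?
        if (![receiptURL checkResourceIsReachableAndReturnError:NULL]) exit(173);

        // 2. Did Apple sign the receipt you have?
        if (!receiptSignatureIsValid(receiptURL)) exit(173);

        // 3 & 4. Is the receipt valid for this product, at this version? These values
        // can come from Info.plist or be hard-coded; either way they can be edited,
        // so treat this as a speed bump rather than a wall.
        if (![receiptBundleIdentifier(receiptURL) isEqualToString:@"com.example.MyApp"]) exit(173);
        if (![receiptBundleVersion(receiptURL) isEqualToString:@"1.0"]) exit(173);

        // 5. Was the receipt issued for this computer? Recompute the machine-specific
        // hash and compare it with the one stored in the receipt.
        if (![receiptHash(receiptURL) isEqualToData:expectedHashForThisMachine(receiptURL)]) exit(173);
    }

Call something like that early in main(), and again from a few less obvious places while the app is running, per the point above about not only checking at startup.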

Of course, Fuzzy Aliens Ltd has already been implementing receipt validation for customers, so if this is too hard for you or you don’t have the time… ;-)

A last word on publicising receipt-validation vulnerabilities

You and I both make our living by selling software, or by selling services to people who sell software. Crowing on the interwebs about how this application or that application doesn’t validate its receipts properly is not cool, because you are shitting on your own doorstep. There is no public benefit in disclosing that class of vulnerability, because DRM is not a user security feature. Don’t do that. Send the developer a private message explaining your findings. Give them a chance to put extra effort into protecting their product, if that’s what they want to do.

Protecting source code

As I mentioned on the missing iDeveloper.tv Live episode, one of the consequences of the Gawker hack was that the source code for their internal software was leaked onto the Internet. I doubt any of my readers would want that to happen to their code, so I’m going to share the details of how I protect my clients’ code when I’m working. Maybe some of this will work for you.

In the office, I work at a desktop iMac. This has an external Time Machine backup disk and a Dropbox for off-site storage. However, client code does _not_ go into the Dropbox. Instead I keep a separate, encrypted sparse disk image for each project I’m working on. The password for each is different. As well as protecting against snooping, this helps stop cross-contamination. I rarely have two such images mounted at once. Note that it’s not just source that goes into these images: build products, notes, Instruments traces, and images all go into the encrypted containers.

Obviously that means a lot of passwords, and no I can’t remember them all. I use a keychain. It locks automatically when not in use, and has a passphrase that’s different from my login passphrase.

The devices I test on are all encrypted where available (if a client needs me to test on an iPhone 3G, then I can, but it isn’t encrypted). They are passphrase locked, set to require passphrase immediately. And I NEVER take them away from the desk before deleting any developer builds, unless I need to do something special like a real-world location services test.

I rarely do coding work on the laptop, but when I do I copy the appropriate encrypted image onto it. The laptop additionally has FileVault configured, though I’m evaluating full-disk encryption options. Keychain configuration as above, additionally with a password required on wake from sleep or screensaver, and a firmware password.

For pushing work back to the clients, most clients use github or bitbucket which offer SSL-encrypted connections to the repositories. Personally, I have a self-run repo host available over HTTPS or SSH, but will probably move that to a github-like service because life’s too short. Their security policy seems acceptable to me.

On the Mac App Store

I’ve just come off iDeveloper.TV Live with Scotty and John, where we were talking about the Mac app store. I had some material prepared about the security side of the app store that we didn’t get on to – here’s a quick write up.

There’s a lot of discussion on twitter and the macsb mailing list, and doubtless elsewhere, about the encryption paperwork that Apple are making us fill in. It’s not Apple’s fault, it’s the U.S. Department of Commerce. You see, back in the cold war (and, frankly, ever since) the government have been of the opinion that encryption is a weapon (because it hides data from their agents) and so are powerful computers (because they can do encryption that’s expensive to crack). So the Bureau of Industry and Security developed the Export Administration Regulations to control the flow of such heinous weapons through the commercial sector.

Section 5, part 2 covers computer equipment and software. Specific provision is made for encryption; in the documentation we find that “items may be controlled as encryption items even if the encryption is actually performed by the operating system, an external library, a third-party product or a cryptographic processor. If an item uses encryption functionality, whether or not the code that performs the encryption is included with the item, then BIS evaluates the item based on the encryption functionality it uses.”

So there you go. If you’re exporting software from the U.S. (and you are, if you’re selling via Apple’s app store) then you need to fill in the export notification.

Other Mac App Store security things, in “oh God is it that late already” format:

  • Receipt validation. No different really from existing licensing frameworks. All you can do is make it hard to find the tests from the binary. I had an idea about a specific way to do that, but want to test it before I release it. As you’ve no doubt found, anti-cracking measures aren’t easy.
  • Users. The user base for the MAS will be wider, and less tech-savvy, than the users existing micro-ISVs are selling to. Make sure your intent with regard to user data, and particularly the consequences of your app’s workflow, is clear.
  • Similarly, be clear about the content of updates. Clearer than Apple are: “contains various fixes and improvements” WTF?
  • As we’ve found with the iOS store, it’s harder to push an update out than when you host the download yourself. Getting security right (or, pragmatically, not too wrong) the first time avoids emergency update submissions.
  • Permissions. Your app needs to run entirely as the current user, who may not be an admin. If you’re a developer, you’re probably running as an admin. Test with a non-admin account. Better, do all of your development in a non-admin account. Add yourself to the _developer group so you can still use gdb/Instruments properly.

On Fuzzy Aliens

I have just launched a new company, Fuzzy Aliens[*], offering application security consultancy services for smartphone app developers. This is not the FAQ list, this is the “questions I want to answer so that they don’t become frequently asked” list.

What do you offer?

The company’s services are all focussed on helping smartphone and tablet app developers discover and implement their applications’ security and privacy requirements. When planning an app, I can help with threat modelling, training developers, securing the development lifecycle, requirements elicitation, secure user experience design, and developing a testing strategy.

When it comes to implementation, you can hire me to do the security work on your iOS or Android app. That may be some background “plumbing” like storing a password or encrypting sensitive content, or it might be an end-to-end security feature. I can also do security code reviews and vulnerability analysis on existing applications.

Why would I want that?

If you’re developing an application destined for the enterprise market, you probably need it. Company I.T. departments will demand applications that conform to local policy regarding data protection, perhaps based on published standards such as the ISO 27000 family or PCI-DSS.

In the consumer market, users are getting wise to the privacy problems associated with mobile apps. Whether it’s accidentally posting the wrong thing to facebook, or being spied on by their apps, the public don’t want to—and shouldn’t need to—deal with security issues when they’re trying to get their work done and play their games.

Can I afford that?

Having been a Micro-ISV and contracted for others, I know that many apps are delivered under tight budgets by one-person companies. If all you need is a half day together to work on a niggling problem, that’s all you need to pay for. On the other hand I’m perfectly happy to work on longer projects, too :).

Why’s it called Fuzzy Aliens?

Well, the word “fuzz” obviously has a specific meaning in the world of secure software development, but basically the answer is that I knew I could turn that into a cute logo (still pending), and that it hadn’t been registered by a UK Ltd yet.

So how do I contact you about this?

You already have – you’re here. But you could see the company’s contact page for more specific information.


[*] More accurately, I have indicated the intent to do so. The articles of association have not yet been returned by Companies House, so for the next couple of days the blue touch paper is quietly smouldering.

On free Mac Anti-Virus

On Tuesday, my pals at my old stomping ground Sophos launched their Free home edition Mac product. I’ve been asked by several people what makes it tick, so here’s Mac Anti-Virus In A Nutshell.

Sophos Anti-Virus for Mac

What is the AV doing?

So anti-virus is basically a categorisation technology: you look at a file and decide whether it’s bad. The traditional view people have of an AV engine is that there’s a huge table of file checksums, and the AV product just compares every file it encounters to every checksum and warns you if it finds a match. That’s certainly how it used to work around a decade ago, but even low-end products like ClamAV don’t genuinely work this way any more.

Modern Anti-Virus starts its work by classifying the file it’s looking at. This basically means deciding what type of file it is: a Mac executable, a Word document, a ZIP, etc. Some of these are actually containers for other file types: a ZIP obviously contains other files, but a Word document contains sections with macros in them, which might be interesting. A Mac fat file contains one or more executable files, each of which contains various data and program segments. Even a text file might actually contain a shell script (which could contain a perl script as a here doc), and so on. But eventually the engine will have classified zero or more parts of the file that it wants to inspect.
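
Just to illustrate what “classifying” means at its very simplest, here’s a toy sketch that sniffs a file’s magic number to guess which category of tests to run. A real engine does far more than this (and recurses into the containers it finds), and all of the names here are mine, not Sophos’s:

    #import <Foundation/Foundation.h>
    #include <libkern/OSByteOrder.h>

    typedef enum {
        FileClassUnknown, FileClassFatBinary, FileClassMachO, FileClassZIP, FileClassOther
    } FileClass;

    // Toy classifier: read the first four bytes and guess the container type.
    static FileClass classifyFile(NSString *path)
    {
        NSFileHandle *file = [NSFileHandle fileHandleForReadingAtPath:path];
        NSData *magic = [file readDataOfLength:4];
        if ([magic length] < 4) return FileClassUnknown;

        switch (OSReadBigInt32([magic bytes], 0)) {
            case 0xcafebabe: return FileClassFatBinary; // universal (fat) binary header
            case 0xfeedface:                            // 32-bit Mach-O
            case 0xfeedfacf: return FileClassMachO;     // 64-bit Mach-O
            case 0x504b0304: return FileClassZIP;       // "PK\3\4": also .docx files, JARs, ...
            default:         return FileClassOther;     // fall back to text/script heuristics
        }
    }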

Because the engine now knows the type of the data it’s looking at, it can be clever about what tests it applies. So the engine contains a whole barrage of different tests, but still runs very quickly because it knows when any test is necessary. For example, most AV products now, including Sophos’, can actually run x86 code in an emulator or sandbox, to see whether it would try to do something naughty. But it doesn’t bother trying to do that to a JPEG.

That sounds slow.

And the figures seem to bear that out: running a scan via the GUI can take hours, or even a day. A large part of this is due to limitations on the hard drive’s throughput, exacerbated by the fact that there’s no way to ask a disk to come up with a file access strategy that minimises seek time (time that’s effectively wasted while the disk moves its heads and platters to the place where the file is stored). Such a thing would mean reading the whole drive catalogue (its table of contents), and thinking for a while about the best order to read all of the files. Besides, such strategies fall apart when one of the other applications needs to open a file, because the hard drive has to jump away and get that one. So as this approach can’t work, the OS doesn’t support it.

On a Mac with a solid state drive, you actually can get to the point where CPU availability, rather than storage throughput, is the limiting factor. But surely even solid state drives are far too slow compared with CPUs, and the Anti-Virus app must be quite inefficient to be CPU-limited? Not so. Of course, there is some work that Sophos Anti-Virus must be doing in order to get worthwhile results, so I can’t say that it uses no CPU at all. But having dealt with the problem of hard drive seeking, we now meet the UBC.

The Unified Buffer Cache is a place in memory where the kernel holds the content of recently accessed files. As new files are read, the kernel throws away the contents of old files and stores the new one in the cache. Poor kernel. It couldn’t possibly know that this scanner is just going to do some tests on the file then never look at it again, so it goes to a lot of effort swapping contents around in its cache that will never get used. This is where a lot of the time ends up.
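
Incidentally, a tool that knows it’s only going to look at a file once can tell the kernel as much. I’m not claiming this is exactly what Sophos does, but a minimal sketch of the hint looks like this:

    #include <fcntl.h>
    #include <unistd.h>

    // Read a file for one-shot inspection without churning the unified buffer
    // cache: F_NOCACHE asks the kernel not to hang on to this file's pages.
    static void scanFileOnce(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return;
        fcntl(fd, F_NOCACHE, 1);

        char buffer[4096];
        ssize_t bytesRead;
        while ((bytesRead = read(fd, buffer, sizeof(buffer))) > 0) {
            // ...hand each chunk to the classification and testing code...
        }
        close(fd);
    }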

On not wasting all that time

This is where the on-access scanner comes in. If you look at the Sophos installation, you’ll see an application at /Library/Sophos Anti-Virus/InterCheck.app – this is a small UNIX tool that includes a kernel extension to intercept file requests and test the target files. If it finds an infected file, it stops the operating system from opening it.

Sophos reporting a threat.

To find out how to do this interception, you could do worse than look at Professional Cocoa Application Security, where I talk about the KAUTH (Kernel AUTHorisation) mechanism in Chapter 11. But the main point is that this approach – checking files when you ask for them – is actually more efficient than doing the whole scan. For a start, you’re only looking at files that are going to be needed anyway, so you’re not asking the hard drive to go out of its way and prepare loads of content that isn’t otherwise being used. InterCheck can also be clever about what it does; for example, there’s no need to scan the same file twice if it hasn’t changed in the meantime.
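
For the curious, the kernel side of that interception looks roughly like the sketch below: a vnode-scope KAUTH listener that vetoes reads and executes of files it doesn’t like. This is a simplification, not InterCheck’s actual code – in reality the kext would hand the path off to the user-space scanner and wait for a verdict, which is what the hypothetical isInfected() stands in for – and all error handling is omitted.

    #include <sys/param.h>
    #include <sys/kauth.h>
    #include <sys/vnode.h>

    extern int isInfected(const char *path);  // stand-in for the real detection engine
    static kauth_listener_t gListener;

    // Called for every vnode authorisation request; deny access to infected files.
    static int scanner_callback(kauth_cred_t credential, void *idata,
                                kauth_action_t action, uintptr_t arg0,
                                uintptr_t arg1, uintptr_t arg2, uintptr_t arg3)
    {
        vnode_t vp = (vnode_t)arg1;  // arg1 is the vnode being operated on
        if (vnode_isreg(vp) &&
            (action & (KAUTH_VNODE_READ_DATA | KAUTH_VNODE_EXECUTE))) {
            char path[MAXPATHLEN];
            int len = MAXPATHLEN;
            if (vn_getpath(vp, path, &len) == 0 && isInfected(path)) {
                return KAUTH_RESULT_DENY;   // stop the open or exec
            }
        }
        return KAUTH_RESULT_DEFER;          // no opinion; let other listeners decide
    }

    // In the kext's start routine:
    //     gListener = kauth_listen_scope(KAUTH_SCOPE_VNODE, scanner_callback, NULL);
    // and in its stop routine:
    //     kauth_unlisten_scope(gListener);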

OK, so it’s not a resource hog. But I still don’t need anti-virus.

Not true. This can best be described as anecdotal, but of all the people who reported to me that they had run a scan since the free Sophos product became available, around 75% said that it had detected threats. These were mainly Windows executables attached to mail, but it’s still good to detect and destroy those so they don’t get onto your Boot Camp partition or somebody else’s PC.

There definitely is a small, but growing, pile of malware that really does target Macs. I was the tech reviewer for Enterprise Mac Security; for the chapter on malware, my research turned up tens of different strains: mainly Trojan horses (as on Windows), some OpenOffice macros, and some web-based threats. And that was printed well before Koobface was ported to the Mac.

Alright, it’s free, I’ll give it a go. Wait, why is it free?

Well here I have to turn to speculation. If your reaction to my first paragraph was “hang on, who is Sophos?”, then you’re not alone. Sophos is still a company that only sells to other businesses, and that means that the inhabitants of the Clapham Omnibus typically haven’t heard of them. Windows users have usually heard of Symantec via their Norton brand, McAfee and even smaller outfits like Kaspersky, so those are names that come up in the board room.

That explains why they might release a free product, but not this one. Well, now you have to think about what makes AV vendors different from one another, and really the answer is “not much”. They all sell pretty much the same thing, occasionally one of them comes up with a new feature but that gap usually closes quite quickly.

Cross-platform support is one area that’s still open, surprisingly. Despite the fact that loads of the vendors (and I do mean loads: Symantec, McAfee, Trend Micro, Sophos, Kaspersky, F-Secure, Panda and Eset all spring to mind readily) support the Mac and some other UNIX platforms, most of these are just checkbox products that exist to prop up their feature matrix. My suspicion is that by raising the profile of their Mac offering Sophos hopes to become the cross-platform security vendor. And that makes giving their Mac product away for free more valuable than selling it.

Rumors of your runtime’s death are greatly exaggerated

This is supposed to be the week in which Apple killed Java and Flash on the Mac, but it isn’t. In fact, looking at recent history, Flash could be about to enter its healthiest period on the platform, but the story regarding Java is more complicated.

Since releasing Mac OS X back in 2001, Apple has maintained the ports of both the Flash and Java runtimes on the platform. This contrasts with the situation on Windows, where Adobe and Sun/Oracle respectively distribute and maintain the runtimes. (Those with long memories will remember the problems regarding Microsoft’s JRE, which was eventually killed by court injunction.) In both cases, Apple has received occasional chastisement when its supported version of the runtime lagged behind the upstream release.

In the case of Flash, this was always related to security issues. Apple’s version would lack a couple of patches that Adobe had made available. However, because Adobe maintains the official runtime for Mac OS X, it’s super-easy to grab the latest version and stay up to date. If Apple stops maintaining its sporadically-updated distribution of the Flash runtime, then everyone who needs it will be forced into grabbing a new version from Adobe (who are free to remind users to install updates as other third-party vendors do). Everyone who doesn’t need it doesn’t have the runtime installed, thus reducing the attack surface of their browsers. Win-win.

Java, as I mentioned, is a more complicated kettle of bananas. Apple isn’t redistributing the upstream JRE in this case, they’re building their own from upstream sources. While other runtimes exist, none integrates as well with the OS, due to the effort Apple put in early on to support Yellow Box for Java and Aqua Swing. This means that you can’t just go somewhere else to get your fix of JRE – it won’t work the same.

There isn’t a big market of Java apps on the Mac platform, so there isn’t a big vendor who might pick up the slack and provide an integrated JRE. Oracle support OpenOffice on the platform, but that isn’t a big earner. There’s IBM with their Lotus products – I’ll come onto that shortly. That just leaves the few Java client apps that do matter on the Mac – IDEs. As a number of Java developers have stated loudly this week, there are many Java developers currently using Macs. These people could either switch to a different OS, or look to maintain support for their IDEs on the Mac: which means supporting the JRE and the GUI libraries used by those IDEs.

By far the people in the best position to achieve that are those working on Eclipse. The Eclipse IDE (and the applications built on its Rich Client Platform, including IBM’s stuff mentioned earlier) uses a GUI library called SWT. This uses native code to draw its widgets using real Cocoa (in recent versions anyway – there’s a Carbon port too), so SWT already works with native drawing in whatever JRE people bring along. The SoyLatte port of OpenJDK can already run Eclipse Helios. Eclipse works with Apache Harmony too, though the releases lag behind quite a bit.

So the conclusion is that your runtime isn’t dead, in fact its support is equivalent to that found on the Windows platform. However, if you’re using Java you might experience a brief period in the wilderness unless/until the community support effort catches up with its own requirements – which aren’t the same as Apple’s.

On Trashing

Back in the 1980s and 1990s, people who wanted to clandestinely gain information about a company or organisation would go trashing.[*] That just meant diving in the bins to find information about the company structure – who worked there, who reported to whom, what orders or projects were currently in progress etc.

You’d think that these days trashing had been thwarted by the invention of the shredder, but no. While many companies do indeed destroy or shred confidential information, this is not universal. Those venues where shredding is common leave it up to their staff to decide what goes in the bin and what goes in the shredder; these staff do not always get it correct (they’re likely to think about whether a document is secret to them rather than the impact on the company). Nor do they always want to think about which bin to put a worthless sheet of paper in.

Even better: in those places that do shred secret papers, they helpfully collect all of the secrets in big bins marked “To Shred” to help the trashers :). They then collect all of these bins into a big hopper, and leave that around (sometimes outside the building, in a covered or open yard) for the destruction company to come and pick up.

So if an attacker can get into the building, he just roots around in the “To Shred” bins. If someone asks, he tells them he put a printout there in the morning but now thinks he needs it again. Even if he can’t get in, he just dives in the hopper outside and gets access to all those juicy secrets (with none of the banana peelings and teabags associated with the non-secret bin).

Attackers who don’t like getting their hands dirty can gain some of the same information using technological means. LinkedIn will helpfully provide a list of employees – including their positions, so the public can find out something of the reporting structure. Some will be looking for recruitment opportunities – these are great people to phone for more information! So are ex-employees, something LinkedIn will also help you out with.

But the fun doesn’t stop there. Once our attacker has the names, he now goes over to Twitter and Facebook. There he can find people griping about work…or describing what the organisation is up to, to put it another way.

All of the above information about 21st-century trashing comes from real experience with an office I was invited into in the last 12 months. Of course, I will not name the organisation in charge of that office (or their data destruction company). The conclusion is that trashing is alive and well, and that those who participate need no longer root around in, well, in the trash. How does your organisation deal with the problem?

[*] for me, it was mainly the 1990s. I was the perfect size in the 1980s for trashing, but still finding my way around a Dragon 32.

Which vendor “is least secure”?

The people over at Intego have a blog post, Which big vendor is least secure? They point out that, because Microsoft have upped their game, malware authors have started to target other products, notably those produced by Adobe and Apple.

That doesn’t really address the question though: which big vendor is least secure (or more precisely, which big vendor creates the least secure products)? It’s an interesting question, and one that’s so hard to answer, people usually get it wrong.

The usual metrics for vendor software security are:

  • Number of vulnerability reports/advisories last year
  • Speed of addressing reported vulnerabilities

Both are just proxies for the question we really want to know the answer to: “what risk does this product expose its users to?” Each has drawbacks when used as such a proxy.

A vendor’s list of previously reported vulnerabilities seems to correlate with its development practices – if they were any good at threat modelling, they wouldn’t have released software with those vulnerabilities in, right? Well, maybe. But maybe they did do some analysis, discovered the vulnerability, and decided to accept it. Perhaps the vulnerability reports were actually the result of their improved secure development lifecycle, and some new technique, tool or consultant has patched up a whole bunch of issues. Essentially all we know is what problems have been addressed and who found them, and we can tell something about the risk that users were exposed to while those vulnerabilities were present. Actually, we can’t tell too much about that, unless we can find evidence that it was exploited (or not, which is harder). We really know nothing about the remaining risk profile of the application – have 1% or 100% of vulnerabilities been addressed?

The only time we really know something about the present risk is in the face of zero-day vulnerabilities, because we know that a problem exists and has yet to be addressed. But reports of zero-days are comparatively rare, because the people who find them usually have no motivation to report them. It’s only once the zero-day gets exploited, and the exploit gets discovered and reported that we know the problem existed in the first place.

The speed of addressing vulnerabilities tells us something about the vendor’s ability to react to security issues. Well, you might think it does; it actually tells you a lot more about the vendor’s perception of their customers’ appetite for installing updates. Look at enterprise-focussed vendors like Sophos and Microsoft, and you’ll find that most security patches are distributed on a regular schedule so that sysadmins know when to expect them and can plan their testing and deployment accordingly. Both companies have issued out-of-band updates, but only in extreme circumstances.

Compare that model with Apple’s, a company that is clearly focussed on the consumer market. Apple typically have an ad hoc (or at least opaque) update schedule, with security and non-security content alike bundled into infrequent patch releases. Security content is simultaneously released for some earlier operating systems in a separate update. Standalone security updates are occasionally seen on the Mac, rarely (if ever) on the iPhone.

I don’t really use any Adobe software so had to research their security update schedule specifically for this post. In short, it looks like they have very frequent security updates, but without any public schedule. Using Adobe Reader is an exercise in unexpected update installation.

Of course, we can see when the updates come out, but that doesn’t directly mean we know how long they take to fix problems – for that we need to know when problems were reported. Microsoft’s monthly updates don’t necessarily address bugs that were reported within the last month, they might be working on a huge backlog.

Where we can compare vendors is in situations in which they all ship the same component with the same vulnerabilities, and must provide the same update. The more reactive companies (who don’t think their users mind installing updates) will release the fixes first. In the case of Apple, we can compare their fixes of shared components like open source UNIX tools or Java with other vendors – Linux distributors and Oracle mainly. It’s this comparison that Apple frequently loses, by taking longer to release the same patch than Oracle, Red Hat, Canonical and friends.

So ultimately what we’d like to know is “which vendor exposes its customers to most risk?”, for which we’d need an honest, accurate and comprehensive risk analysis from each vendor or an independent source. Of course, few customers are going to want to wade through a full risk analysis of an operating system.