On utilities

When I worked on an antivirus application, we had a running joke in the team about which one of us would accept the Apple Design Award for our product. Not that we weren’t striving for ADA-quality work; we just knew that Apple would never suffer an anti-virus app to gain that sort of recognition. It’s not what they want customers to associate with the platform.

This doesn’t just apply to AV: any utility app is in the same situation. The reason is that utilities exist to make up for shortcomings in the computing experience.

No-one wants to use a utility. You don’t wake up full of joy at the prospect of defragmenting your hard drive; you wake up hoping to write a great novel. Or put together that movie from your holiday. Or write a killer iPhone app. You defragment your hard drive because it needs doing; because the computer didn’t take care of it for you automatically. Writing your novel needs to wait while you twiddle with the internal workings of something called a filesystem.

Similarly, no-one wants to use anti-virus. It’s just that no-one wants to use a virus either. The computer lets you down by making it possible to run viruses, just like it lets you down by having a fragmented filesystem. And you have to suffer this let-down by running a utility app.

Any utility app – even a very well-written one – is symptomatic of a let-down in the computing experience. That’s why you don’t find utilities on the iPhone app store – and the conditions of the Mac app store will limit the availability of utilities there, too. Apple have no interest in making it easy for users to find the let-downs in the computing experience. (Readers with long memories will remember that even some of the built-in utilities on OS X were, for a long time, part of an optional package.)

Hitherto, most security features have been utilities – because the platform security has been a let-down. Anti-virus: let-down. Mail filters: let-down. Encryption: let-down. The way to get your application noticed is not to make a utility to address the let-down: it’s to design the let-down out of your application. Create something that lets users do what they want, without the compromises that lead to needing utilities. In other words, design the security into the application experience.


On phone support scams and fake AV

A couple of weeks ago, I posted on Twitter about a new scam:

Heard about someone who was phoned by a man “from Windows” who engineered his way into remote access to the mark’s computer.

Fast-forward to now, and the same story has finally been picked up by the security vendors and the mainstream media. This means it’s probably time to go into more depth.

I heard a first-hand account of the scam. The victim is the kind of person who shouldn’t be expected to understand IT security – a long-distance lorry driver who uses his computer for browsing, e-mail, and that sort of thing. As he tells it, the person called, saying they were from Windows and that they had discovered his computer was infected. He was given instructions to give the caller remote access, to help clean up the computer.

With remote access, the caller was able to describe some of the problems the victim was having with his computer, while taking control to “fix them”. The caller eventually “discovered” that the victim’s anti-virus was out of date, and that if he gave the caller his payment information he could get new software for £109. This is when the victim hung up; however, his computer has not booted properly since then.

I think my audience here is probably tech-savvy enough not to need warning about scams like this, and to understand that the real damage was done even before any discussion of payments was made (hint: browser form-auto-fill data). It’s not the scam itself I want to focus on, but our reaction.

Some people I have told this story to in real life (it does happen) have rolled their eyes, and said something along the lines of “well of course the users are the weakest link” in a knowing way. If that’s true, why rely on the users to make all the security decisions? Why leave it to them to decide what’s legitimate and what’s scammy, as was the case here? Why is the solution to any problem to shovel another bucketload of computer knowledge on them and hope that it sticks, as Sophos and the BBC have tried in the articles above?

No. This is not a solution to anything. No matter how loudly you shout about how that isn’t how Microsoft does business, someone who says he is from Microsoft will phone your users up and tell them that it is.

This is the same problem facing anti-virus vendors trying to convince us not to get fooled by FakeAV scams. Vendor A tells us to buy their product instead of Vendor B’s, because it’s better. So, is Vendor A the FakeAV pedlar, or B? Or is it both? Or neither? You can’t tell.

It may seem that this is a problem that cannot be solved with technology; that it relies on the hard-wired behaviour of us bald apes. I don’t think that’s so. I think that it would be possible to change the way we, legitimate software vendors, interact with our users, and the way they interact with our software, such that an offline scam like this would never come to pass. A full discussion would fill a whole whitepaper that I haven’t written yet. However, to take the most extreme point from it, the one I know you’re going to loathe: what if our home computers were managed remotely by the vendors? Do most users really need complete BIOS- and kernel-level access to their kit? Really?

Look for the whitepaper sometime in the new year.


On free Mac Anti-Virus

On Tuesday, my pals at my old stomping ground Sophos launched the free home edition of their Mac anti-virus product. I’ve been asked by several people what makes it tick, so here’s Mac Anti-Virus In A Nutshell.

Sophos Anti-Virus for Mac

What is the AV doing?

So anti-virus is basically a categorisation technology: you look at a file and decide whether it’s bad. The traditional view people have of an AV engine is that there’s a huge table of file checksums, and the AV product just compares every file it encounters to every checksum and warns you if it finds a match. That’s certainly how it used to work around a decade ago, but even low-end products like ClamAV don’t genuinely work this way any more.

Modern Anti-Virus starts its work by classifying the file it’s looking at. This basically means deciding what type of file it is: a Mac executable, a Word document, a ZIP etc. Some of these are actually containers for other file types: a ZIP obviously contains other files, but a Word document contains sections with macros in them, which might be interesting. A Mac fat file contains one or more executable files, each of which contains various data and program segments. Even a text file might actually contain a shell script (which could contain a perl script as a here doc), and so on. But eventually the engine will have classified zero or more parts of the file that it wants to inspect.
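To make the classification step concrete, here’s a toy sketch (my own illustration, not Sophos’s engine; every name in it is invented) that sniffs the magic numbers at the start of a file to decide which set of tests applies:

#import <Foundation/Foundation.h>
#include <string.h>

typedef enum {
    TOFileClassUnknown, TOFileClassZIP, TOFileClassFat, TOFileClassMachO
} TOFileClass;

static BOOL TOHasMagic(NSData *header, const uint8_t *magic, size_t length)
{
    return [header length] >= length && memcmp([header bytes], magic, length) == 0;
}

TOFileClass TOClassifyFile(NSData *header)
{
    static const uint8_t zip[]   = { 0x50, 0x4B, 0x03, 0x04 }; // "PK\3\4": a ZIP, so recurse into its entries
    static const uint8_t fat[]   = { 0xCA, 0xFE, 0xBA, 0xBE }; // fat binary: classify each architecture within
    static const uint8_t macho[] = { 0xCE, 0xFA, 0xED, 0xFE }; // 32-bit little-endian Mach-O: run executable tests

    if (TOHasMagic(header, zip, sizeof(zip)))     return TOFileClassZIP;
    if (TOHasMagic(header, fat, sizeof(fat)))     return TOFileClassFat;
    if (TOHasMagic(header, macho, sizeof(macho))) return TOFileClassMachO;
    return TOFileClassUnknown;
}

A real engine has hundreds of classifications and applies them recursively to each container it opens; the point is that everything downstream of this decision can be specific to the file type.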

Because the engine now knows the type of the data it’s looking at, it can be clever about what tests it applies. So the engine contains a whole barrage of different tests, but still runs very quickly because it knows when each test is necessary. For example, most AV products, now including Sophos’s, can actually run x86 code in an emulator or sandbox to see whether it would try to do something naughty. But it doesn’t bother trying to do that to a JPEG.

That sounds slow.

And the figures seem to bear that out: running a scan via the GUI can take hours, or even a day. A large part of this is due to limitations on the hard drive’s throughput, exacerbated by the fact that there’s no way to ask a disk to come up with a file access strategy that minimises seek time (time that’s effectively wasted while the disk moves its heads and platters to the place where the file is stored). Such a thing would mean reading the whole drive catalogue (its table of contents), and thinking for a while about the best order to read all of the files. Besides, such strategies fall apart when one of the other applications needs to open a file, because the hard drive has to jump away and get that one. So as this approach can’t work, the OS doesn’t support it.

On a Mac with a solid state drive, you actually can get to the point where CPU availability, rather than storage throughput, is the limiting factor. But surely even solid state drives are far too slow compared with CPUs, and the Anti-Virus app must be quite inefficient to be CPU-limited? Not so. Of course, there is some work that Sophos Anti-Virus must be doing in order to get worthwhile results, so I can’t say that it uses no CPU at all. But having dealt with the problem of hard drive seeking, we now meet the UBC.

The Unified Buffer Cache is a place in memory where the kernel holds the content of recently accessed files. As new files are read, the kernel throws away the contents of old files and stores the new one in the cache. Poor kernel. It couldn’t possibly know that this scanner is just going to do some tests on the file then never look at it again, so it goes to a lot of effort swapping contents around in its cache that will never get used. This is where a lot of the time ends up.
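For illustration, Mac OS X does give a per-file way to opt out of that cache churn: the F_NOCACHE fcntl. A scanner could read files along these lines (a sketch under my own assumptions, not necessarily what Sophos does; scan_bytes is an invented stand-in for the engine):

#include <fcntl.h>
#include <unistd.h>

extern void scan_bytes(const char *buffer, size_t length); // invented engine entry point

void scan_without_polluting_cache(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return;
    fcntl(fd, F_NOCACHE, 1);  // ask the kernel not to keep this file's pages in the UBC
    char buffer[64 * 1024];
    ssize_t got;
    while ((got = read(fd, buffer, sizeof(buffer))) > 0) {
        scan_bytes(buffer, (size_t)got);
    }
    close(fd);
}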

On not wasting all that time

This is where the on-access scanner comes in. If you look at the Sophos installation, you’ll see an application at /Library/Sophos Anti-Virus/InterCheck.app – this is a small UNIX tool that includes a kernel extension to intercept file requests and test the target files. If it finds an infected file, it stops the operating system from opening it.

Sophos reporting a threat.

To find out how to do this kind of interception, you could do worse than look at Professional Cocoa Application Security, where I talk about the KAUTH (Kernel AUTHorisation) mechanism in Chapter 11. But the main point is that this approach – checking files when you ask for them – is actually more efficient than doing the whole scan. For a start, you’re only looking at files that are going to be needed anyway, so you’re not asking the hard drive to go out of its way and prepare loads of content that isn’t otherwise being used. InterCheck can also be clever about what it does: for example, there’s no need to scan the same file twice if it hasn’t changed in the meantime.
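To give a flavour of the mechanism, here is a minimal sketch of a vnode-scope KAUTH listener (error handling, result caching and the scan itself are elided; file_is_infected and the InterCheckDemo name are invented for illustration):

#include <mach/mach_types.h>
#include <mach/kmod.h>
#include <sys/vnode.h>
#include <sys/kauth.h>

extern int file_is_infected(vnode_t vp); // invented stand-in for the real scan

static kauth_listener_t gListener;

static int scanner_callback(kauth_cred_t credential, void *idata,
                            kauth_action_t action, uintptr_t arg0,
                            uintptr_t arg1, uintptr_t arg2, uintptr_t arg3)
{
    vnode_t vp = (vnode_t)arg1; // in the vnode scope, arg1 is the file's vnode
    // Only reads and executes can bring infected content into a process.
    if ((action & (KAUTH_VNODE_READ_DATA | KAUTH_VNODE_EXECUTE)) == 0)
        return KAUTH_RESULT_DEFER;
    if (file_is_infected(vp))
        return KAUTH_RESULT_DENY;  // the open fails; the OS never maps the file
    return KAUTH_RESULT_DEFER;     // let other listeners and the default policy decide
}

kern_return_t InterCheckDemo_start(kmod_info_t *ki, void *d)
{
    gListener = kauth_listen_scope(KAUTH_SCOPE_VNODE, scanner_callback, NULL);
    return (gListener != NULL) ? KERN_SUCCESS : KERN_FAILURE;
}

kern_return_t InterCheckDemo_stop(kmod_info_t *ki, void *d)
{
    if (gListener != NULL) kauth_unlisten_scope(gListener);
    return KERN_SUCCESS;
}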

OK, so it’s not a resource hog. But I still don’t need anti-virus.

Not true. This is anecdotal at best, but of all the people who told me they had run a scan since the free Sophos product became available, around 75% reported that it had detected threats. These were mainly Windows executables attached to mail, but it’s still good to detect and destroy those so they don’t get onto your Boot Camp partition or somebody else’s PC.

There definitely is a small, but growing, pile of malware that really does target Macs. I was the tech reviewer for Enterprise Mac Security, and for the chapter on malware my research turned up tens of different strains: mainly Trojan horses (as on Windows), some OpenOffice macros, and some web-based threats. And that was printed well before Koobface was ported to the Mac.

Alright, it’s free, I’ll give it a go. Wait, why is it free?

Well, here I have to turn to speculation. If your reaction to my first paragraph was “hang on, who is Sophos?”, then you’re not alone. Sophos is still a company that only sells to other businesses, and that means that the inhabitants of the Clapham Omnibus typically haven’t heard of them. Windows users have usually heard of Symantec (via their Norton brand), McAfee, and even smaller outfits like Kaspersky, so those are the names that come up in the boardroom.

That explains why they might release a free product, but not why it’s this one. Well, now you have to think about what makes AV vendors different from one another, and really the answer is “not much”. They all sell pretty much the same thing; occasionally one of them comes up with a new feature, but that gap usually closes quite quickly.

Cross-platform support is one area that’s still open, surprisingly. Despite the fact that loads of the vendors (and I do mean loads: Symantec, McAfee, Trend Micro, Sophos, Kaspersky, F-Secure, Panda and Eset all spring to mind readily) support the Mac and some other UNIX platforms, most of these are just checkbox products that exist to prop up their feature matrix. My suspicion is that by raising the profile of their Mac offering, Sophos hopes to become the cross-platform security vendor. And that makes giving their Mac product away for free more valuable than selling it.


Rumors of your runtime’s death are greatly exaggerated

This is supposed to be the week in which Apple killed Java and Flash on the Mac, but it isn’t. In fact, looking at recent history, Flash could be about to enter its healthiest period on the platform, but the story regarding Java is more complicated.

Since releasing Mac OS X back in 2001, Apple has maintained its own distributions of both the Flash and Java runtimes on the platform. This contrasts with the situation on Windows, where Adobe and Sun/Oracle respectively distribute and maintain the runtimes. (Those with long memories will remember the problems regarding Microsoft’s JRE, which was eventually killed by court injunction.) In both cases, Apple has received occasional chastisement when its supported version of the runtime lagged behind the upstream release.

In the case of Flash, this was always related to security issues. Apple’s version would lack a couple of patches that Adobe had made available. However, because Adobe maintains the official runtime for Mac OS X, it’s super-easy to grab the latest version and stay up to date. If Apple stops maintaining its sporadically-updated distribution of the Flash runtime, then everyone who needs it will be forced into grabbing a new version from Adobe (who are free to remind users to install updates as other third-party vendors do). Everyone who doesn’t need it doesn’t have the runtime installed, thus reducing the attack surface of their browsers. Win-win.

Java, as I mentioned, is a more complicated kettle of bananas. Apple isn’t redistributing the upstream JRE in this case; they’re building their own from upstream sources. While other runtimes exist, none integrates as well with the OS, due to the effort Apple put in early on to support Yellow Box for Java and Aqua Swing. This means that you can’t just go somewhere else to get your fix of JRE – it won’t work the same.

There isn’t a big market of Java apps on the Mac platform, so there isn’t a big vendor who might pick up the slack and provide an integrated JRE. Oracle support OpenOffice on the platform, but that isn’t a big earner. There’s IBM with their Lotus products – I’ll come onto that shortly. That just leaves the few Java client apps that do matter on the Mac – IDEs. As a number of Java developers have stated loudly this week, there are many Java developers currently using Macs. These people could either switch to a different OS, or look to maintain support for their IDEs on the Mac: which means supporting the JRE and the GUI libraries used by those IDEs.

By far the people in the best position to achieve that are the Eclipse project. The Eclipse IDE (and the applications built on its Rich Client Platform, including IBM’s stuff mentioned earlier) uses a GUI library called SWT. This uses native code to draw its widgets using real Cocoa (in recent versions anyway – there’s a Carbon port too), so SWT already works with native drawing in whatever JRE people bring along. The SoyLatte port of OpenJDK can already run Eclipse Helios. Eclipse works with Apache Harmony too, though the releases lag behind quite a bit.

So the conclusion is that your runtime isn’t dead, in fact its support is equivalent to that found on the Windows platform. However, if you’re using Java you might experience a brief period in the wilderness unless/until the community support effort catches up with its own requirements – which aren’t the same as Apple’s.


What do you think of this?

I’m interested to find out what we Cocoa developers (alright, I know my opinion already) think of the following distinction between Foundation and, well, any other object-oriented foundation library.

The distinction is this. In many libraries, compound objects (not only collections, but also strings, which are many-character objects, and data, which are many-byte objects) have both immutable and mutable varieties. We’re familiar with the Cocoa pattern, where we have a base immutable class, e.g. NSArray (class clusters are an irrelevant complication for now), and a mutable subclass, e.g.:

@interface NSMutableArray : NSArray

That’s not how everyone else does it. They have distinct class hierarchies for the mutable and immutable types; for instance in Java we have String and StringBuilder. The two classes aren’t related, but you can create a StringBuilder given a String and vice versa. You certainly can’t pass one in where the API expects the other. The design pattern used here is called Builder.
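For comparison, a Builder-style design transplanted into Objective-C might look like this (all names invented; Foundation obviously doesn’t actually work this way):

#import <Foundation/Foundation.h>

// Immutable array type: no mutation methods anywhere in its hierarchy.
@interface TOArray : NSObject
- (NSUInteger)count;
- (id)objectAtIndex: (NSUInteger)index;
@end

// Mutable builder: deliberately NOT a subclass of TOArray.
@interface TOArrayBuilder : NSObject
- (id)initWithArray: (TOArray *)array; // like new StringBuilder(aString)
- (void)addObject: (id)anObject;
- (TOArray *)build;                    // snapshot conversion, like StringBuilder's toString()
@end

An API that declares it takes a TOArray can then never be handed something that mutates behind its back: callers have to go through -build to cross the boundary.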

The argument in favour of the subclass approach is that mutable arrays are just specialised versions of arrays. You can find out how long they are, what object is at what index, and you can also do all the add/remove/replace stuff on top, so these objects are arrays with extra functionality, and thus should be subclasses of the array type.

The argument against is that this specialisation is bogus. Client code can’t treat all array objects the same in case it gets a mutable array, and therefore mutable arrays can’t be used as arrays and shouldn’t be subclasses. They violate the Liskov substitution principle: the rule that if I expect to work with one class, any of its subclasses must be usable in its stead.

We can use an example from Foundation to bolster the second argument a bit. The NSFastEnumeration protocol works on all collections, except that when implemented on a mutable collection it must employ additional checks to protect against the collection being mutated while it’s iterating over it. So we need extra client code to deal with some subclasses, and thus Liskov is violated.
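You can watch that guard fire with a couple of lines (the exception text is paraphrased from memory):

void demonstrateMutationGuard(void)
{
    NSMutableArray *array = [NSMutableArray arrayWithObjects: @"a", @"b", nil];
    for (NSString *item in array) {
        // Mutating the collection mid-enumeration trips the check and raises
        // an exception: "Collection was mutated while being enumerated".
        [array addObject: item];
    }
}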

What do you think? Would you have designed Foundation the same way NeXT did? Perhaps there are other changes you would have made. Let the world know in the comments box.


An example of unit testing working for me

Some specific feedback I was given regarding my unit testing talk at the VTM: iPhone fall conference was that the talk was short on real-world application of unit testing. That statement is definitely true, and it’s unfortunate that I didn’t meet the attendee’s expectations for the talk. I hope to address that deficiency here, by showing you a real bug that I really fixed via unit testing.

The code is in the still-MIA Rehearsals app. I started developing Rehearsals before I was a full believer in test-driven development, so there were places in the app where I could go back and add tests to existing code. This is actually a pretty useful displacement activity – if you get bored of doing the real code, you can switch to writing the tests. This is enough of a change of pace (for me) to provide a welcome distraction, and still makes the product better.

So, in order to understand the problem, I need to understand the requirements. I’m a fan of the user story approach to requirements documentation, and so Rehearsals has the following story written on a 5×3 index card:

A musician can rate tunes in her tunebook, by giving each a number of stars between 1 and 5. Tunes that the musician has not rated have no star value.

As is often the case, there are other stories that build on this one (any such collection is called an “epic”), but I can concentrate on implementing this user story for now. One feature in Rehearsals is an AppleScript interface, so clearly if the musician can rate a tune, she must be able to do it via AppleScript. Glossing over the vagaries of the Cocoa Scripting mechanism, I defined a property scriptRating on the tune, with the following KVC-compliant accessors:

- (NSNumber *)scriptRating {
    if (!self.rating) {
        return [NSNumber numberWithInteger: 0];
    }
    return self.rating;
}

- (void)setScriptRating: (NSNumber *)newRating {
    NSInteger val = [newRating integerValue];
    if (val < 0 || val > 5) {
        //out of range, tell AppleScript
        NSScriptCommand *cmd = [NSScriptCommand currentCommand];
        [cmd setScriptErrorNumber: TORatingOutOfRangeError];
        //note that AppleScript is never localised
        [cmd setScriptErrorString: @"Rating must be between 0 and 5."];
    }
    else {
        self.rating = newRating;
    }
}

(No prizes for having already found the bug – it’s really lame). So testing that the getter works is ultra-simple: I can just -setRating: and verify that -scriptRating returns the same value. You may think this is not worth doing, but as this functionality is using indirect access via a different property I want to make sure I never break the connection. I decided that a single test would be sufficient (tune is an unconfigured tune object created in -setUp):

- (void)testTuneRatingIsSeenInAppleScript {
    tune.rating = [NSNumber numberWithInteger: 3];
    STAssertEquals([tune.scriptRating integerValue], 3,
                   @"Tune rating should be visible to AppleScript");
}

Simples. Now how do I test the setter? Well, of course I can just set the value and see whether it sticks; that’s the easy bit. But there’s also this bit about being able to get an error into AppleScript if the user tries to set an out-of-range value. That seems like really useful functionality, because otherwise the app would accept the wrong value and only give up later, when the persistent store gets saved. So it’d be useful to have a fake script command object that lets me see whether I’m setting an error on it. That’s easy to do:

@interface FakeScriptCommand : NSScriptCommand
{
    NSString *scriptErrorString;
    NSInteger scriptErrorNumber;
}
@property (nonatomic, copy) NSString *scriptErrorString;
@property (nonatomic, assign) NSInteger scriptErrorNumber;
@end

@implementation FakeScriptCommand
@synthesize scriptErrorString;
@synthesize scriptErrorNumber;
- (void) dealloc {...}
@end

OK, but how do I use this object in my test? The tune class sets an error on +[NSScriptCommand currentCommand], which is probably returning an object defined in a static variable in the Foundation framework. I can’t provide a setter +setCurrentCommand:, because I can’t get to the innards of Foundation to set the variable. I could change or swizzle the implementation of +currentCommand in a category, but that has the potential to break other tests if they’re not expecting me to have broken Foundation.

So the solution I went for is to insert a “fulcrum” if you will: a point where the code changes from asking for a script command to being told about a script command:

- (void)setScriptRating: (NSNumber *)newRating {
    [self setRating: newRating fromScriptCommand: [NSScriptCommand currentCommand]];
}

- (void)setRating: (NSNumber *)newRating fromScriptCommand: (NSScriptCommand *)cmd {
    NSInteger val = [newRating integerValue];
    if (val < 0 || val > 5) {
        [cmd setScriptErrorNumber: TORatingOutOfRangeError];
        //note that AppleScript is never localised
        [cmd setScriptErrorString: @"Rating must be between 0 and 5."];
    }
    else {
        self.rating = newRating;
    }
}

This is the Extract Method refactoring pattern. Now it’s possible to test -setRating:fromScriptCommand: without depending on the environment, and be fairly confident that if that works properly, -setScriptRating: should work properly too. I’ll do it:

- (void)testAppleScriptGetsErrorsWhenRatingSetTooLow {
    FakeScriptCommand *cmd = [[FakeScriptCommand alloc] init];
    [tune setRating: [NSNumber numberWithInteger: 0]
  fromScriptCommand: cmd];
    STAssertEquals([cmd scriptErrorNumber], TORatingOutOfRangeError,
                   @"Should get a rating out of range error");
    STAssertNotNil([cmd scriptErrorString], @"Should report error to the user");
}

Oh-oh, my test failed. Why’s that? I’m sure you’ve spotted it now: the user is supposed to be restricted to values between 1 and 5, but the method actually tests for the range 0 to 5. I could fix that quickly by changing the bound in the comparison, but I’ll take this opportunity to remove the magic numbers from the code too (though you’ll notice I haven’t done that with the error string yet; I still need to do some more refactoring):

- (void)setRating: (NSNumber *)newRating fromScriptCommand: (NSScriptCommand *)command {
    if (![self validateValue: &newRating forKey: @"rating" error: NULL]) {
        [command setScriptErrorNumber: TORatingOutOfRangeError];
        //note that AppleScript is never localised
        [command setScriptErrorString: @"Rating must be a number between 1 and 5."];
    }
    else {
        self.rating = newRating;
    }
}

OK, so now my test passes. Great. But that’s not what’s important: what’s important is that I found and fixed a bug by thinking about how my code is used. And that’s what unit testing is for. Some people will say they get a bad taste in their mouths when they see that I redesigned the method just so that it could be used in a test framework. To those people I will repeat the following: I found and fixed a bug.


On Ignoring the Tests

As mentioned over two months ago, I’ll be giving two talks this weekend at the Voices That Matter: iPhone Developers Fall conference. I’m feeling good about both of the talks that I’ve worked on, though I definitely think the Unit Testing talk on Saturday is better polished. That’s probably at least in part because of the work put in by everyone’s favourite Apple Employee, who gave me a great base for the talk.

One of the (myriad) problems I’m currently facing is related to unit testing, in a project I’ve recently started working on. The existing code base works well but, how can we put this delicately, it smells. There aren’t any important bugs, but when you come to extend some functionality or add a new feature you find yourself piecing together code from numerous classes in different libraries before you can tell the story of how the bit you’re trying to change works. You then load the shotgun with the lead shot of enhanced featurity and liberally spatter it across the code base.

There are various refactoring exercises that can be brought to bear on code such as this, or there could be if it weren’t for the unit tests. Some of the tests fail. Sometimes you look and find a hard-coded path to a file that can quickly be changed to get us going again. Sometimes you see a bit of expected value drift. Every so often, you find that the class under test and the test suite bear no resemblance to one another. Why are these tests failing, why did nobody notice, and what is the behaviour of the code under test?

The fact is usually that at some point, someone needed to make a change to the application and, believing it to be a “quick fix”, decided it wasn’t worth doing TDD for the new code. Whether or not that was a good decision, the new code had to interface with the old code somewhere and it changed the behaviour of that old code. Tests started failing. The developer decided that the test should be ignored. Worst. Decision. Ever.

Ignoring a test means leaving it in the code, but marking it up in some way such that the test runner either doesn’t execute the test or doesn’t care if it fails. Concrete example: in NUnit, you can set the [Ignore] attribute on a test method and it won’t get run. You can’t actually do this in OCUnit, though you can get per-class ignorance by just removing the class from the test bundle target.

The problem with ignored tests is that they don’t look much different from non-ignored tests (sometimes, they can look identical, if the ignorance is provided by a parameters file rather than the source code). They give the impression that the code is under test. They give the impression that the code should work in the way specified in the test. They give the impression, if you run the tests and they fail, that you broke something. Turning off a test is worse than useless – just delete the thing. That way it’s obvious that the code it was testing is no longer under test. And don’t worry, it’s not gone forever – you could always retrieve the deleted code from version control. You never will, but you could.

“But,” pipe up the authors of the xUnit ignore features, “that’s not what the ignore feature is for. You write tests for code that doesn’t exist yet, and set them ignored so they don’t break the build. You get the utility of annotating your code with tests, without the red bar.”

Unfortunately it doesn’t work like that. I write the tests now, and set them to be ignored. Some time later, the feature gets cancelled. The ignored tests hang around, bearing less and less resemblance to the real application.

Or I write the tests now, and set them to be ignored. Some time later, somebody else implements the feature, writing their own tests (or not). The ignored tests hang around, looking uncomfortably like they might be testing the feature.

Now using unit tests as a documentation and communication tool is useful and beneficial, and I’ll talk about that at the weekend. But only commit tests for code that either got written with the tests, or will be written imminently. If it looks like a test is no longer relevant, rewrite it or delete it. But do not tell your test runner to ignore it, because your developers probably won’t.


On documentation

Over at The Daily WTF, Alex Papadimoulis writes about Documentation Done Right. His conclusion is spot on:

The immediate answer to what’s the right way to do documentation is clear: produce the least amount of documentation needed to facilitate the most understanding, and be very explicit about which documentation is to be maintained and which is to be archived (i.e., read-only and left to rot).

The amount of documentation appropriate to any project depends very much on the project, and the people working on it. However I’ve found that there are some useful rough guides that can generally be applied. Like Alex, I’m ignoring user documentation here and talking about internal project artifacts.

Enterprise project documentation exists to give managers someone to blame.

The reason large projects seem so documentation-heavy is twofold: managers don’t like to think they don’t know what’s going on; and managers like to be able to come down like a ton of shit on someone when they find out that they don’t know what’s going on. That’s where the waterfall model comes into its own. The big up-front planning means that everyone on the project has signed off on the ridiculous unreadable project documentation of doom, so it must describe the best software possible. Whoever does something that isn’t the same as the documentation has fucked up.

You never need the amount of documentation produced by most large-company software projects. Never. It’s usually inaccurate even at the moment it’s signed off, because it took so long to get there that the requirements changed, and because the level of precision required leads authors to make assumptions about how the OS, APIs etc. work. It’s often very hard to work from, because there are multiple documents, poorly cross-referenced using support “tools” like Word, each with its own glossary depending heavily on terminology relevant to its author’s domain. And they weren’t written by the same author.

In order to code up a feature from one of these projects, you need to check the product requirements document to see what it’s supposed to do (and whether it’s the highest priority outstanding work – you also get to find out who asked for the feature, how much money it’s worth, and plenty of other information that’s of no use to a software engineer). You check the functional specification to see how it’s supposed to do it. You check the architecture document to see what classes go in which packages. You raise a change control because the APIs don’t support part of the functional specification. Two weeks later, a fix has been approved. You write the code, ensuring that you use the test plan to find out what the acceptance criteria are, and when they’ll be tested (in fact, I’ve worked on projects where the functional and performance test plans were delivered at different times by different people). Oh, and make sure you’re keeping up to date with the issue tracker, otherwise you might do work that’s been assigned to someone else. Your systems engineer can point to the process documentation to let you know how you do that.

Good documentation doesn’t answer all the questions, but leaves you capable of asking smart questions of the right people.

It’s no secret that I’m a fan of user stories as a form of requirements documentation. A well-written user story tells you what the user expects to be able to do after a feature has been added, and precious little else. There might be questions over terminology, or specifics such as failure cases, but you know who to ask – the person who wrote the user story. There will be nothing about architecture or the GUI layout, because those things aren’t up to the user, customer or product manager (or at least they shouldn’t be).

Similarly, a good architecture diagram is going to tell you something about how the classes in an application fit together, and precious little else. If you can’t work out how to fit your feature in, you can ask the architect, but you’ll both be able to use the diagram as a good place to start. As Alex says in his post, precise documentation will go out of date quickly, so the documentation needs to be good enough to hang discussions on, and no “better”.

UML diagrams are great to describe code before it’s been written; they’re not so great at writing code.

If you’re trying to explain to another engineer how you think a feature should work in code, what design pattern to follow, or what steps will be needed to talk to a server, then a UML diagram is a great way to do it. Many engineers understand UML, and those who don’t can work out what it means (with your help) very quickly. It’s a consistent language for talking about software.

That said, when I draw UML diagrams I tend to prefer whiteboards and agnostic diagramming software like OmniGraffle or Dia over syntax-checking UML tools like Enterprise Architect or ArgoUML. The reason is the one I gave in the last section: the diagram needs to be good enough to explain to someone else what we’re going to do, not a gold-plated example of conformance to the UML specification. I don’t care if my swim-lane is in the wrong place if everybody involved knows what I mean (or can ask).

Code generated from UML tools tends to have readability issues. Whereas you or I might put related methods together, the tool might sort them by visibility or name. If your diagram is rough enough to be useful, then the tool will only generate a few method stubs – and you’ll be tempted to fill them in rather than creating small private methods to do logical units of work. These and other problems (I’ve seen EA generate code that uses UUIDs as class identifiers) can be fixed, but you shouldn’t need to fix them. You should write your application and sell it.

After code has been written, the only accurate documentation is the code itself (or generated from it).

There are other useful forms of documentation – for instance, a whiteboard diagram explains a developer’s understanding of the code. It doesn’t document the code itself – an important distinction.[*] All of those shiny enterprise documents your project manager got you to write went out of date as soon as the first customer reported a bug or requested a feature. The code, however, documents exactly what the code does – just not necessarily in the most useful way. Javadoc/doxygen comments are more likely than anything else to stay in sync with the code due to their proximity, but even those can be outdated or unhelpful.

This is where UML tools can come in very handy. Those with the ability to generate diagrams based on code (even Xcode does this) can automatically give you a correct view of the application’s behaviour, at a more appropriate level of abstraction than the code itself. If what you need is a package dependency diagram, it’s better to get it from a UML tool than to try and read all of the source.

Unfortunately, Xcode (and Doxygen’s very limited capabilities) are the only games in town for Objective-C. Tool support for Java and C# is way ahead, but for Cocoa developers there’s only MacTranslator (which I’ve never tried). Not that UML maps particularly well onto Objective-C anyway.

[*]Though documenting a developer’s assumptions and comparing them with reality can often explain where some subtle bugs come from, and is of course useful.

The better your prototypes, the worse the feedback.

Back in the 1990s there was an explosion of Rapid Application Development tools, after the rest of the software industry saw Interface Builder and decided it was good. The RAD way (which is, of course, eminently achievable using IB) is to produce an executable prototype for users to give feedback on. Of course, the problem is that you end up shipping the prototype. I’ve actually ended up doing that on a couple of Mac applications.

One problem is that because the app looks complete, users assume it is complete and that they’re being asked to provide polishing details, or spot spelling mistakes and misplaced buttons. The other is that because it looks complete, managers assume it is complete and will tell you to ship it.

Don’t make that mistake. Do UI prototypes as paper-based wireframes, Keynote presentations or Cappuccino apps. Whatever you do, make it look like it’s just a crappy sketch you’re willing to have ripped to shreds. That way, people will rip it to shreds.

There are few document “artifacts” that need to hang around.

If you think about what you’re going to do with an application after you’ve written it, there’s selling it, supporting it, maintaining it, and extending it. Support people might need some high-level architecture knowledge so that they can work out what component a problem is in, or how to diagnose a particular failure.

Such an architecture document can be a good aid for planning new features or bug fixes, because you can quickly see where you need to modify the app to get the desired behaviour (clearly you then need to dive into the code to get a better idea, but you know where to jump). Similarly, architecture rationale documentation (including the threat model, API/library use justification) can be handy so that you don’t need to go through the same debates/research when fixing a bug or adding a feature. Threat models particularly can take a lot of time and expertise to construct from scratch.

Sales people will need to know which features have been delivered, which features are on the roadmap, and can probably find out specific questions from engineers if they get a grilling from an awkward customer. Only in a very limited set of circumstances will sales staff need to give customers security documentation, test coverage information, or anything other than the user manual.

Notice that in each of these cases, one of the main aspects of keeping the document around is so that you can keep it up to date. There’s no point having the 1.0 feature list when you’re selling version 2.5 – in fact it would be positively detrimental. So obviously the fewer documents you keep around, the fewer you need to keep up to date. There’s some trade-off involved (natürlich) – if you need something you didn’t hang on to, you’re going to have to regenerate it.


YOUR development team needs security engineers

If your engineers don’t have a whole lot of security expertise, it can definitely be tempting to get a consultant in. Indeed, this can be a great way to bootstrap a security process; however, it then needs to be owned and executed inside the team. The reason is mainly to do with the differing goals of your developers and of the consultant.

Fuzzy logic

The consultant’s high fee is based on being able to come into a new team and quickly make a high impact. This means, of course, rapidly reporting the low-hanging problems. It also means that all of the reported problems are important – it’s a good job you hired me (sorry, this anonymous consultant), having found all of those critical issues, no?

The fact is that most of the security problems found by external security consultants and penetration testers are those that can be found in short order by a script kiddie – in other words, ones it would take very little effort to write automated tests for. In fact, that’s pretty much what consultants, pen testers and script kiddies do, and it’s called fuzzing. Now, if you’d given the task of writing the fuzzing tools to one of your developers, you’d be slightly more behind schedule (than you already are, because you’ve got to fix those data-handling defects), and you’d have a developer who knows how to write fuzzing tools.
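To show how low that bar is, here’s a minimal mutation fuzzer sketch (parseDocument is an invented stand-in for whichever input-handling routine you want to batter; run it in a loop over a corpus of known-good inputs):

#import <Foundation/Foundation.h>
#include <stdlib.h>

extern void parseDocument(NSData *document); // invented stand-in for the code under test

void fuzzOnce(NSData *knownGoodInput)
{
    NSMutableData *mutated = [[knownGoodInput mutableCopy] autorelease];
    uint8_t *bytes = [mutated mutableBytes];
    NSUInteger length = [mutated length];
    if (length == 0) return;

    // Flip a handful of random bits, then see whether the parser survives.
    int flips = 1 + (arc4random() % 8);
    for (int i = 0; i < flips; i++) {
        bytes[arc4random() % length] ^= (1 << (arc4random() % 8));
    }
    parseDocument(mutated); // a crash or hang here is a data-handling defect
}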

Finding architectural and requirements-level security problems is not impossible for a consultant; however, we only really have second-hand knowledge of the requirements and architecture (and it’s likely that we got that from talking to your developers, unless you have some really shit-hot and surprisingly relevant documentation. Hint: you don’t). What we bring to the party is a thorough knowledge of threat analysis and risk management, which we then get to apply to an unknown project. Your developers have an inside-out knowledge of the project, but probably aren’t as experienced as me at carrying out security assessments.

Don’t get me wrong, the consultant will definitely find real problems and will probably leave you in a position to deliver a more secure product. However, in order to get most value out of the consultant’s input, you need to make sure that your project’s security model and security knowledge remain up to date, and that means bringing it at least partially in house.

Security champions

Make sure that every development team has its own security champion. That person’s responsibilities are:

  • Define the team’s security process:
    • How is the threat model discovered/updated?
    • How are security problems found and reported?
    • How are they addressed?
  • Ensure the process is followed
  • Define security criteria for each iteration, user story, or whatever makes sense in your project management setup

The champion is authorised to:

  • Delegate responsibility for carrying out tasks in the security process
  • Declare any build/user story/iteration/whatever to be a failure until its security criteria are satisfied

As we know, responsibility without authority just means you have someone to fire. A team can only get things done if they’re actually allowed to ensure they get things done. Sounds like common sense, but many software engineering managers fail to grasp it.

In our team, everybody is the security guy. We share the responsibility.

Bullshit. You get nothing done. If everyone is responsible for a problem, then everyone knows that somebody else can deal with it. I’ve seen software released with known and rather heinous vulnerabilities, all because no-one wanted to be the person who delayed release just before the ship date. Don’t allow your project to get into that state – give someone ownership of the security efforts.


On McAfee

Today, Apple’s CPU/motherboard supplier Intel announced that it will acquire McAfee, in a deal worth nearly $7.7B. While this is definitely big bucks, it doesn’t seem like terrifically big security news.

Intel probably don’t want the technology. McAfee is the world’s biggest dedicated security vendor, so there are far cheaper ways for Intel to acquire security technology. Intel probably don’t want a fast buck either; or if they do, they’re not about to get it. It would take around a decade for Intel’s new security software division to make its money back, assuming no huge changes in organisation.

Intel may want the IP. McAfee has an extensive patent portfolio (as do all the big players in the cold-war world of security software), and there are bound to be things that Intel could implement in silico. Jokes have already been doing the rounds on Twitter about a new CPU opcode, SCANAV. Encryption and data tagging seem more likely targets. But couldn’t they just license the patents?

I expect that what Intel are after is to make the company a one-stop IT shop, with security software being just one element of that. Large businesses and government in particular value having a small network of large, stable, boring, trusted partners. We’ve already seen in the last couple of years that the likes of Cisco, HP and Oracle have been shifting toward “vertical” provision of IT services. Intel now have a few different software houses under their wing, and of course McAfee brings a vast collection of juicy business customers. What Paul Otellini is likely hoping is that such customers will start looking to Intel for other services, and maybe hardware too. And that Intel’s existing customers will buy into McAfee’s security offerings.
