Rumors of your runtime’s death are greatly exaggerated

This is supposed to be the week in which Apple killed Java and Flash on the Mac, but it isn’t. In fact, looking at recent history, Flash could be about to enter its healthiest period on the platform, but the story regarding Java is more complicated.

Since releasing Mac OS X back in 2001, Apple has maintained the ports of both the Flash and Java runtimes on the platform. This contrasts with the situation on Windows, where Adobe and Sun/Oracle respectively distribute and maintain the runtimes. (Those with long memories will remember the problems regarding Microsoft’s JRE, which was eventually killed by court injunction.) In both cases, Apple has received occasional chastisement when its supported version of the runtime lagged behind the upstream release.

In the case of Flash, this was always related to security issues. Apple’s version would lack a couple of patches that Adobe had made available. However, because Adobe maintains the official runtime for Mac OS X, it’s super-easy to grab the latest version and stay up to date. If Apple stops maintaining its sporadically-updated distribution of the Flash runtime, then everyone who needs it will be forced into grabbing a new version from Adobe (who are free to remind users to install updates as other third-party vendors do). Everyone who doesn’t need it doesn’t have the runtime installed, thus reducing the attack surface of their browsers. Win-win.

Java, as I mentioned, is a more complicated kettle of bananas. Apple isn’t redistributing the upstream JRE in this case, they’re building their own from upstream sources. While other runtimes exist, none integrates as well with the OS due to effort Apple put in early on to support Yellow Box for Java and Aqua Swing. This means that you can’t just go somewhere else to get your fix of JRE – it won’t work the same.

There isn’t a big market of Java apps on the Mac platform, so there isn’t a big vendor who might pick up the slack and provide an integrated JRE. Oracle support OpenOffice on the platform, but that isn’t a big earner. There’s IBM with their Lotus products – I’ll come onto that shortly. That just leaves the few Java client apps that do matter on the Mac – IDEs. As a number of Java developers have stated loudly this week, there are many Java developers currently using Macs. These people could either switch to a different OS, or look to maintain support for their IDEs on the Mac: which means supporting the JRE and the GUI libraries used by those IDEs.

By far the people in the best position to achieve that are those working on Eclipse. The Eclipse IDE (and the applications built on its Rich Client Platform, including IBM’s stuff mentioned earlier) uses a GUI library called SWT. This uses native code to draw its widgets using real Cocoa (in recent versions anyway – there’s a Carbon port too), so SWT already works with native drawing in whatever JRE people bring along. The SoyLatte port of OpenJDK can already run Eclipse Helios. Eclipse works with Apache Harmony too, though the releases lag behind quite a bit.

So the conclusion is that your runtime isn’t dead, in fact its support is equivalent to that found on the Windows platform. However, if you’re using Java you might experience a brief period in the wilderness unless/until the community support effort catches up with its own requirements – which aren’t the same as Apple’s.

What do you think of this?

I’m interested to find out what we Cocoa developers (alright, I know my opinion already) think of the following distinction between Foundation and, well, any other object-oriented foundation library.

The distinction is this. In many libraries, compound objects (not only collections, but strings which are many-character objects and data which are many-byte objects) have both immutable and mutable varieties. We’re familiar with the Cocoa pattern, where we have the base immutable class e.g. NSArray (class clusters are an irrelevant complication for now) and a mutable subclass, e.g.:

@interface NSMutableArray : NSArray

That’s not how everyone else does it. They have distinct class hierarchies for the mutable and immutable types; for instance, in Java we have String and StringBuilder. The two classes aren’t related by inheritance, but you can create a StringBuilder given a String and vice versa. You certainly can’t pass one in where the API expects the other. The design pattern used here is called Builder.
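
To see how different that is, here’s a hypothetical sketch (every name in it is mine, not real API) of what the Java-style split would look like if it were rendered in Objective-C:

#import <Foundation/Foundation.h>

// Hypothetical sketch of the Java-style split rendered in Objective-C; every name here is
// invented for illustration. The builder is not a kind of the immutable type, it merely
// converts to and from it, so you can never pass one where the other is expected.
@class TOTuneList;

@interface TOTuneListBuilder : NSObject
- (id)initWithTuneList: (TOTuneList *)list;   // start from an existing immutable list
- (void)addTune: (id)tune;
- (TOTuneList *)buildTuneList;                // immutable snapshot of what has been built
@end

@interface TOTuneList : NSObject              // note: not a superclass of the builder
- (NSUInteger)count;
- (id)tuneAtIndex: (NSUInteger)index;
@end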

The argument in favour of the subclass approach is that mutable arrays are just specialised versions of arrays. You can find out how long they are, what object is at what index, and you can also do all the add/remove/replace stuff on top, so these objects are arrays with extra functionality, and thus should be subclasses of the array type.

The argument against is that this specialisation is bogus. Client code can’t treat all array objects the same in case it gets a mutable array, and therefore mutable arrays can’t be used as arrays and shouldn’t be subclasses. They violate the Liskov substitution principle: the rule that if I expect to work with one class, any of its subclasses must be usable in its stead.

We can use an example from Foundation to bolster the second argument a bit. The NSFastEnumeration protocol works on all collections, except that when implemented on a mutable collection it must employ additional checks to protect against the collection being mutated while it’s iterating over it. So we need extra client code to deal with some subclasses, and thus Liskov is violated.
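
Here’s a minimal sketch of what that means for client code (the function is mine, invented for this post): it compiles happily against an NSArray parameter, but behaves very differently depending on which flavour of array it’s actually handed.

#import <Foundation/Foundation.h>

// Minimal illustration (the function is mine). With an immutable NSArray this loop just
// iterates and does nothing; hand it an NSMutableArray and the removal mutates the
// collection mid-enumeration, so the fast enumeration machinery detects it and throws.
void removeOneStarRatings(NSArray *ratings) {
    for (NSNumber *rating in ratings) {
        if ([rating integerValue] == 1 && [ratings isKindOfClass: [NSMutableArray class]]) {
            // the next trip round the loop raises "Collection was mutated while being enumerated"
            [(NSMutableArray *)ratings removeObject: rating];
        }
    }
}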

What do you think? Would you have designed Foundation the same way NeXT did? Perhaps there are other changes you would have made. Let the world know in the comments box.

An example of unit testing working for me

Some specific feedback I was given regarding my unit testing talk at VTM: iPhone fall conference was that the talk was short on real-world application of unit testing. That statement is definitely true, and it’s unfortunate that I didn’t meet the attendee’s expectations for the talk. I hope to address that deficiency here, by showing you a real bug that I really fixed via unit testing.

The code is in the still-MIA Rehearsals app. I started developing Rehearsals before I was a full believer in test-driven development, so there were places in the app where I could go back and add tests to existing code. This is actually a pretty useful displacement activity – if you get bored of doing the real code, you can switch to writing the tests. This is enough of a change of pace (for me) to provide a welcome distraction, and still makes the product better.

So, in order to understand the problem, I need to understand the requirements. I’m a fan of the user story approach to requirements documentation, and so Rehearsals has the following story written on a 5×3 index card:

A musician can rate tunes in her tunebook, by giving it a number of stars between 1 and 5. Tunes that the musician has not rated have no star value.

As is often the case, there are other stories that build from this one (any such collection is called an “epic”), but I can concentrate on implementing this user story for now. One feature in Rehearsals is an AppleScript interface, so clearly if the musician can rate a tune, she must be able to do it via AppleScript. Glossing over the vagaries of the Cocoa Scripting mechanism, I defined a property scriptRating on the tune class with the following KVC-compliant accessors:

- (NSNumber *)scriptRating {
    if (!self.rating) {
        return [NSNumber numberWithInteger: 0];
    }
    return self.rating;
}

- (void)setScriptRating: (NSNumber *)newRating {
    NSInteger val = [newRating integerValue];
    if (val < 0 || val > 5) {
        //out of range, tell AppleScript
        NSScriptCommand *cmd = [NSScriptCommand currentCommand];
        [cmd setScriptErrorNumber: TORatingOutOfRangeError];
        //note that AppleScript is never localised
        [cmd setScriptErrorString: @"Rating must be between 0 and 5."];
    }
    else {
        self.rating = newRating;
    }
}

(No prizes for having already found the bug – it’s really lame). So testing that the getter works is ultra-simple: I can just -setRating: and verify that -scriptRating returns the same value. You may think this is not worth doing, but as this functionality is using indirect access via a different property I want to make sure I never break the connection. I decided that a single test would be sufficient (tune is an unconfigured tune object created in -setUp):

- (void)testTuneRatingIsSeenInAppleScript {
    tune.rating = [NSNumber numberWithInteger: 3];
    STAssertEquals([tune.scriptRating integerValue], (NSInteger)3,
                   @"Tune rating should be visible to AppleScript");
}

Simples. Now how do I test the setter? Well, of course I can just set the value and see whether it sticks, that’s the easy bit. But there’s also this bit about being able to get an error into AppleScript if the user tries to set an out-of-range value. That seems like really useful functionality, because otherwise the app would accept the wrong value and give up later when the persistent store gets saved. So it’d be useful to have a fake script command object that lets me see whether I’m setting an error on it. That’s easy to do:

// Overrides NSScriptCommand's error accessors with plain storage, so a test can read
// back whatever the code under test reported.
@interface FakeScriptCommand : NSScriptCommand
{
    NSString *scriptErrorString;
    NSInteger scriptErrorNumber;
}
@property (nonatomic, copy) NSString *scriptErrorString;
@property (nonatomic, assign) NSInteger scriptErrorNumber;
@end

@implementation FakeScriptCommand
@synthesize scriptErrorString;
@synthesize scriptErrorNumber;
- (void)dealloc {
    [scriptErrorString release];
    [super dealloc];
}
@end

OK, but how do I use this object in my test? The tune class sets an error on +[NSScriptCommand currentCommand], which is probably returning an object defined in a static variable in the Foundation framework. I can’t provide a setter +setCurrentCommand:, because I can’t get to the innards of Foundation to set the variable. I could change or swizzle the implementation of +currentCommand in a category, but that has the potential to break other tests if they’re not expecting me to have broken Foundation.
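
For illustration only, that rejected category approach would look something like this sketch (the fake and the static variable are mine); the problem is that the replacement applies to the whole process, not just my test:

#import <Foundation/Foundation.h>

// Sketch of the rejected approach: a category method with the same selector replaces
// Foundation's implementation for every caller in the process, so any other test (or any
// real Cocoa Scripting machinery) that asks for the current command now gets the fake.
static NSScriptCommand *fakeCurrentCommand = nil;

@implementation NSScriptCommand (RehearsalsTestingHack)
+ (NSScriptCommand *)currentCommand {
    return fakeCurrentCommand;
}
@end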

So the solution I went for is to insert a “fulcrum”, if you will: a point where the code changes from asking for a script command to being told about a script command:

- (void)setScriptRating: (NSNumber *)newRating {
    [self setRating: newRating fromScriptCommand: [NSScriptCommand currentCommand]];
}

- (void)setRating: (NSNumber *)newRating fromScriptCommand: (NSScriptCommand *)cmd {
    NSInteger val = [newRating integerValue];
    if (val < 0 || val > 5) {
        [cmd setScriptErrorNumber: TORatingOutOfRangeError];
        //note that AppleScript is never localised
        [cmd setScriptErrorString: @"Rating must be between 0 and 5."];
    }
    else {
        self.rating = newRating;
    }
}

This is the Extract Method refactoring pattern. Now it’s possible to test -setRating:fromScriptCommand: without depending on the environment, and be fairly confident that if that works properly, -setScriptRating: should work properly too. I’ll do it:

- (void)testAppleScriptGetsErrorsWhenRatingSetTooLow {
    FakeScriptCommand *cmd = [[FakeScriptCommand alloc] init];
    [tune setRating: [NSNumber numberWithInteger: 0]
  fromScriptCommand: cmd];
    STAssertEquals([cmd scriptErrorNumber], (NSInteger)TORatingOutOfRangeError,
                   @"Should get a rating out of range error");
    STAssertNotNil([cmd scriptErrorString], @"Should report error to the user");
}

Uh-oh, my test failed. Why’s that? I’m sure you’ve spotted it now: the user is supposed to be restricted to values between 1 and 5, but the method is actually testing for the range 0 to 5. I could fix that quickly by changing the boundary value in the range check, but I’ll take this opportunity to remove the magic numbers from the code too (though you’ll notice I haven’t done that with the error string yet; I still need to do some more refactoring):

- (void)setRating: (NSNumber *)newRating fromScriptCommand: (NSScriptCommand *)command {
    if (![self validateValue: &newRating forKey: @"rating" error: NULL]) {
        [command setScriptErrorNumber: TORatingOutOfRangeError];
        //note that AppleScript is never localised
        [command setScriptErrorString: @"Rating must be a number between 1 and 5."];
    }
    else {
        self.rating = newRating;
    }
}
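
For completeness: -validateValue:forKey:error: hands off to a KVC validation method on the tune class, which looks something like the following sketch (the constant and error-domain names here are hypothetical, not the real ones from Rehearsals):

// Sketch of the KVC validation method that -validateValue:forKey:error: dispatches to for
// the "rating" key. TORatingMinimum, TORatingMaximum and TORehearsalsErrorDomain are
// hypothetical names. A nil rating stays valid, because an unrated tune has no star value.
- (BOOL)validateRating: (id *)ioValue error: (NSError **)outError {
    if (*ioValue == nil) {
        return YES;
    }
    NSInteger val = [*ioValue integerValue];
    if (val < TORatingMinimum || val > TORatingMaximum) {
        if (outError) {
            *outError = [NSError errorWithDomain: TORehearsalsErrorDomain
                                            code: TORatingOutOfRangeError
                                        userInfo: nil];
        }
        return NO;
    }
    return YES;
}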

OK, so now my test passes. Great. But that’s not what’s important: what’s important is that I found and fixed a bug by thinking about how my code is used. And that’s what unit testing is for. Some people will say they get a bad taste in their mouths when they see that I redesigned the method just so that it could be used in a test framework. To those people I will repeat the following: I found and fixed a bug.

On Ignoring the Tests

As mentioned over two months ago, I’ll be giving two talks this weekend at the Voices That Matter: iPhone Developers Fall conference. I’m feeling good about both of the talks that I’ve worked on, though I definitely think the Unit Testing talk on Saturday is better polished. That’s probably at least in part because of the work put in by everyone’s favourite Apple Employee, who gave me a great base for the talk.

One of the (myriad) problems I’m currently facing is related to unit testing, in a project I’ve recently started working on. The existing code base works well but, how can we put this delicately, it smells. There aren’t any important bugs, but when you come to extend some functionality or add a new feature you find yourself trying to piece together code from numerous classes in different libraries, before you can tell the story of how the bit you’re trying to change works. You then load the shotgun with the lead shot of enhanced featurity, which is liberally spattered across the code base.

There are various refactoring exercises that can be brought to bear on code such as this, or there could be if it weren’t for the unit tests. Some of the tests fail. Sometimes you look and find a hard-coded path to a file that can quickly be changed to get us going again. Sometimes you see a bit of expected value drift. Every so often, you find that the class under test and the test suite bear no resemblance to one another. Why are these tests failing, why did nobody notice, and what is the behaviour of the code under test?

The fact is usually that at some point, someone needed to make a change to the application and, believing it to be a “quick fix”, decided it wasn’t worth doing TDD for the new code. Whether or not that was a good decision, the new code had to interface with the old code somewhere and it changed the behaviour of that old code. Tests started failing. The developer decided that the test should be ignored. Worst. Decision. Ever.

Ignoring a test means leaving it in the code, but marking it up in some way such that the test runner either doesn’t execute the test or doesn’t care if it fails. Concrete example: in NUnit, you can set the [Ignore] attribute on a test method and it won’t get run. You can’t actually do this in OCUnit, though you can get per-class ignorance by just removing the class from the test bundle target.

The problem with ignored tests is that they don’t look much different from non-ignored tests (sometimes, they can look identical, if the ignorance is provided by a parameters file rather than the source code). They give the impression that the code is under test. They give the impression that the code should work in the way specified in the test. They give the impression, if you run the tests and they fail, that you broke something. Turning off a test is worse than useless – just delete the thing. That way it’s obvious that the code it was testing is no longer under test. And don’t worry, it’s not gone forever – you could always retrieve the deleted code from version control. You never will, but you could.

“But,” pipe up the authors of the xUnit ignore features, “that’s not what the ignore feature is for. You write tests for code that doesn’t exist yet, and set them ignored so they don’t break the build. You get the utility of annotating your code with tests, without the red bar.”

Unfortunately it doesn’t work like that. I write the tests now, and set them to be ignored. Some time later, the feature gets cancelled. The ignored tests hang around, bearing less and less resemblance to the real application.

Or I write the tests now, and set them to be ignored. Some time later, somebody else implements the feature, writing their own tests (or not). The ignored tests hang around, looking uncomfortably like they might be testing the feature.

Now using unit tests as a documentation and communication tool is useful and beneficial, and I’ll talk about that at the weekend. But only commit tests for code that either got written with the tests, or will be written imminently. If it looks like a test is no longer relevant, rewrite it or delete it. But do not tell your test runner to ignore it, because your developers probably won’t.

On documentation

Over at The Daily WTF, Alex Papadimoulis writes about Documentation Done Right. His conclusion is spot on:

The immediate answer to what’s the right way to do documentation is clear: produce the least amount of documentation needed to facilitate the most understanding, and be very explicit about which documentation is to be maintained and which is to be archived (i.e., read-only and left to rot).

The amount of documentation appropriate to any project depends very much on the project, and the people working on it. However I’ve found that there are some useful rough guides that can generally be applied. Like Alex, I’m ignoring user documentation here and talking about internal project artifacts.

Enterprise project documentation exists to give managers someone to blame.

The reason large projects seem so documentation-heavy is twofold: managers don’t like to think they don’t know what’s going on; and managers like to be able to come down like a ton of shit on someone when they find out that they don’t know what’s going on. That’s where the waterfall model comes into its own. The big up-front planning means that everyone on the project has signed off on the ridiculous unreadable project documentation of doom, so it must describe the best software possible. Whoever does something that isn’t the same as the documentation has fucked up.

You never need the amount of documentation produced by most large-company software projects. Never. It’s usually inaccurate even at the moment it’s signed off, because it took so long to get there that the requirements changed, and because the level of precision required leads authors to make assumptions about how the OS, APIs etc. work. It’s often very hard to work from, because there are multiple documents, poorly cross-referenced using support “tools” like Word, each with its own glossary depending heavily on terminology relevant to the author’s domain. And they weren’t written by the same author.

In order to code up a feature from one of these projects, you need to check the product requirements document to see what it’s supposed to do (and whether it’s the highest priority outstanding work – you also get to find out who asked for the feature, how much money it’s worth, and plenty of other information that’s of no use to a software engineer). You check the functional specification to see how it’s supposed to do it. You check the architecture document to see what classes go in which packages. You raise a change control because the APIs don’t support part of the functional specification. Two weeks later, a fix has been approved. You write the code, ensuring that you use the test plan to find out what the acceptance criteria are, and when they’ll be tested (in fact, I’ve worked on projects where the functional and performance test plans were delivered at different times by different people). Oh, and make sure you’re keeping up to date with the issue tracker, otherwise you might do work that’s been assigned to someone else. Your systems engineer can point to the process documentation to let you know how you do that.

Good documentation doesn’t answer all the questions, but leaves you capable of asking smart questions of the right people.

It’s no secret that I’m a fan of user stories as a form of requirements documentation. A well-written user story tells you what the user expects to be able to do after a feature has been added, and precious little else. There might be questions over terminology, or specifics such as failure cases, but you know who to ask – the person who wrote the user story. There will be nothing about architecture or the GUI layout, because those things aren’t up to the user, customer or product manager (or at least they shouldn’t be).

Similarly, a good architecture diagram is going to tell you something about how the classes in an application fit together, and precious little else. If you can’t work out how to fit your feature in, you can ask the architect, but you’ll both be able to use the diagram as a good place to start. As Alex says in his post, precise documentation will go out of date quickly, so the documentation needs to be good enough to hang discussions on, and no “better”.

UML diagrams are great for describing code before it’s been written; they’re not so great for writing code.

If you’re trying to explain to another engineer how you think a feature should work in code, what design pattern to follow, or what steps will be needed to talk to a server, then a UML diagram is a great way to do it. Many engineers understand UML, and those who don’t can work out what it means (with your help) very quickly. It’s a consistent language for talking about software.

That said, when I draw UML diagrams I tend to prefer whiteboards and agnostic diagramming software like OmniGraffle or Dia over syntax-checking UML tools like Enterprise Architect or ArgoUML. The reason is the one I gave in the last section: the diagram needs to be good enough to explain to someone else what we’re going to do, not a gold-plated example of conformance to the UML specification. I don’t care if my swim-lane is in the wrong place if everybody involved knows what I mean (or can ask).

Code generated from UML tools tends to have readability issues. Whereas you or I might put related methods together, the tool might sort them by visibility or name. If your diagram is rough enough to be useful, then the tool will only generate a few method stubs – and you’ll be tempted to fill them in rather than creating small private methods to do logical units of work. These and other problems (I’ve seen EA generate code that uses UUIDs as class identifiers) can be fixed, but you shouldn’t need to fix them. You should write your application and sell it.

After code has been written, the only accurate documentation is the code itself (or generated from it).

There are other useful forms of documentation – for instance, a whiteboard diagram explains a developer’s understanding of the code. It doesn’t document the code itself – an important distinction.[*] All of those shiny enterprise documents your project manager got you to write went out of date as soon as the first customer reported a bug or feature request. The code, however, documents exactly what the code does – just not necessarily in the most useful way. Javadoc/doxygen comments are more likely than anything else to stay in sync with the code due to their proximity, but even those can be outdated or unhelpful.

This is where UML tools can come in very handy. Those with the ability to generate diagrams based on code (even Xcode does this) can automatically give you a correct view of the application’s behaviour, at a more appropriate level of abstraction than the code itself. If what you need is a package dependency diagram, it’s better to get it from a UML tool than to try and read all of the source.

Unfortunately, Xcode (and Doxygen’s very limited capabilities) are the only games in town for Objective-C. Tool support for Java and C# is way ahead, but for Cocoa developers there’s only MacTranslator (which I’ve never tried). Not that UML maps particularly well onto Objective-C anyway.

[*]Though documenting a developer’s assumptions and comparing them with reality can often explain where some subtle bugs come from, and is of course useful.

The better your prototypes, the worse the feedback.

Back in the 1990s there was an explosion of Rapid Application Development tools, after the rest of the software industry saw Interface Builder and decided it was good. The RAD way (which is, of course, eminently achievable using IB) is to produce an executable prototype for users to give feedback on. Of course, the problem is that you end up shipping the prototype. I’ve actually ended up doing that on a couple of Mac applications.

One problem is that because the app looks complete, users assume it is complete and that they’re being asked to provide polishing details, or spot spelling mistakes and misplaced buttons. The other is that because it looks complete, managers assume it is complete and will tell you to ship it.

Don’t make that mistake. Do UI prototypes as paper-based wireframes, Keynote presentations or Cappuccino apps. Whatever you do, make it look like it’s just a crappy sketch you’re willing to have ripped to shreds. That way, people will rip it to shreds.

There are few document “artifacts” that need to hang around.

If you think about what you’re going to do with an application after you’ve written it, there’s selling it, supporting it, maintaining it, and extending it. Support people might need some high-level architecture knowledge so that they can work out what component a problem is in, or how to diagnose a particular failure.

Such an architecture document can be a good aid for planning new features or bug fixes, because you can quickly see where you need to modify the app to get the desired behaviour (clearly you then need to dive into the code to get a better idea, but you know where to jump). Similarly, architecture rationale documentation (including the threat model, API/library use justification) can be handy so that you don’t need to go through the same debates/research when fixing a bug or adding a feature. Threat models particularly can take a lot of time and expertise to construct from scratch.

Sales people will need to know which features have been delivered, which features are on the roadmap, and can probably get answers to specific questions from engineers if they get a grilling from an awkward customer. Only in a very limited set of circumstances will sales staff need to give customers security documentation, test coverage information, or anything other than the user manual.

Notice that in each of these cases, one of the main aspects of keeping the document around is so that you can keep it up to date. There’s no point having the 1.0 feature list when you’re selling version 2.5 – in fact it would be positively detrimental. So obviously the fewer documents you keep around, the fewer you need to keep up to date. There’s some trade-off involved (natürlich) – if you need something you didn’t hang on to, you’re going to have to regenerate it.