NSConference MINI videos available

During WWDC week I talked at NSConference MINI, a one-day conference organised by Scotty and the MDN. The videos are now available: free to attendees, while non-attendees can buy all ten for $50.

My own talk was on extending the Clang static analyser to perform your own tests on your code. I’m pleased with the amount I managed to get in, and I like how well the talk fit the general software-engineering theme of the conference. There’s stuff on bit-level manipulation, eXtreme Programming, continuous integration, product management and more. I’d thoroughly recommend downloading the whole shebang.

On detecting God Classes

Opinion on Twitter was divided when I suggested the following static analyser behaviour: report on any class that conforms to too many protocols.

Firstly, a warning: “too many” is highly contextual. Almost all objects implement NSObject and you couldn’t do much without it, so it gets a bye. Other protocols, like NSCoding and NSCopying, are little bits of functionality that don’t really dilute a class by their presence. It’s probably harmless for a class to implement those in addition to other protocols. Still others are so commonly implemented together (like UITableViewDataSource and UITableViewDelegate, or WebView’s four delegate protocols) that they probably shouldn’t independently count against a class’s “protocol weight”. On the other hand, a class that conforms to UITableViewDelegate, UIAlertViewDelegate and MKMapViewDelegate might be trying to do too much – of which more after the next paragraph.

Secondly, a truism: the goal of a static analyser is to ignore code that the developer’s happy with, and to warn about code the developer isn’t happy with. If your coding style permits a class to conform to any number of protocols, and you’re happy with that, then you shouldn’t implement this analyser rule. If you would be happy with a maximum of 2, 4, or 1,024 protocols, you would benefit from a rule with that number. As I said in my NSConf MINI talk, the analyser’s results are not as cleanly definable as compiler errors (which indicate code that doesn’t conform to the language definition) or warnings (which indicate code that is very probably incorrect). The analyser is more of a code style and API use checker. Conforming to protocols is use of the API that can be checked.

OK, let’s get on with it. A class that conforms to too many protocols has too many responsibilities – it is a “God Class”. Now how can this come about? A developer who has heard about model-view-controller (MVC) will try to divide classes into three high-level groups, like this:

MVC high-level architecture

The problem comes when the developer fails to take that diagram for what it is: a 50,000-foot overview of the MVC architecture. Many Mac and iOS developers will use Core Data, and will end up with a model composed of multiple different entities. If a particular piece of the app’s workflow needs to operate on multiple entities, they may break that out into “business logic” objects that can be called by the controller. Almost all Mac and iOS developers use the standard controls and views, meaning they have no choice but to break up the view into multiple objects.

But where does that leave the controller? Without any motivation to divide things up, everything else is stuffed into a single object (likely an NSDocument or UIViewController subclass). This is bad. What happens if you need to display the same table in two different places? You have to go through that class, picking out the bits that are relevant, finding out where they’re tied to the bits that aren’t and untangling them. Ugh.
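This kind of God Class usually announces itself right at the top of its header. Here’s a made-up illustration of the sort of interface the analyser rule would flag; the class name and the particular mix of protocols are invented, but the shape will be familiar:

#import <UIKit/UIKit.h>
#import <MapKit/MapKit.h>

// One controller doing table datasource and delegate duty, alert handling
// and map delegate work, as well as running the workflow for its screen.
@interface TourPlannerViewController : UIViewController
        <UITableViewDataSource, UITableViewDelegate,
         UIAlertViewDelegate, MKMapViewDelegate> {
    // ...every outlet, collection and piece of workflow state the screen needs...
}
@end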

Cocoa developers will probably already be using multiple controller objects if they’re using Cocoa Bindings. Each NSArrayController or similar receives its object or objects, usually from the main controller object, and then gets on with the job of marshalling the interaction between the bound view and the observed model objects. So, if we take the proposed changes so far, our MVC diagram looks like this:

MVC - Slightly closer look

The point of my protocol-checking code is to go the remaining distance, and abstract out the other data sources into their own objects. What we’re left with is a controller that looks after the view’s use case, ensuring that logic actions take place when they ought, that steps in the workflow only become available when their preconditions are met, and so on. Everything related to performing the logic is down in those dynamic model objects, and everything to do with data presentation is in its own controller objects. Well, maybe not everything – a button doesn’t exactly have a complicated API. But if you need a table of employees for this view controller and a table of employees for that view controller, you just take the same table datasource object in both places. You don’t have two different datasource implementations in two view controllers (or even the same one pasted twice). This makes the diagram look like this:

MVC - more separation
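In code, “take the same table datasource object in both places” might look something like the following sketch. EmployeeDataSource and its one-line cell layout are invented for illustration; the point is that the datasource conforms to UITableViewDataSource so that any view controller can hand it to its table view without implementing the methods itself:

#import <UIKit/UIKit.h>

@interface EmployeeDataSource : NSObject <UITableViewDataSource> {
    NSArray *employees;
}
- (id)initWithEmployees: (NSArray *)theEmployees;
@end

@implementation EmployeeDataSource

- (id)initWithEmployees: (NSArray *)theEmployees
{
    if ((self = [super init])) {
        employees = [theEmployees copy];
    }
    return self;
}

- (void)dealloc
{
    [employees release];
    [super dealloc];
}

- (NSInteger)tableView: (UITableView *)tableView numberOfRowsInSection: (NSInteger)section
{
    return [employees count];
}

- (UITableViewCell *)tableView: (UITableView *)tableView cellForRowAtIndexPath: (NSIndexPath *)indexPath
{
    static NSString *identifier = @"EmployeeCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier: identifier];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle: UITableViewCellStyleDefault
                                       reuseIdentifier: identifier] autorelease];
    }
    // Assumes each employee object has a name property; purely illustrative.
    cell.textLabel.text = [[employees objectAtIndex: [indexPath row]] valueForKey: @"name"];
    return cell;
}

@end

Each view controller that needs an employee table creates one of these and sets it as the table view’s dataSource; neither controller has to conform to UITableViewDataSource itself.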

So to summarise, a class that conforms to too many protocols probably has too many responsibilities. The usual reason for this is that controller objects take on responsibility for managing workflow, providing data to views and handling delegate responsibilities for the views. This leads to code that is not reusable except through the disdainful medium of copy-pasting, as it is harder to define a clean interface between these various concerns. A tool that reports on the existence of such God Classes alerts developers to their presence, so that they can take steps to fix them.

Using Aspect-Oriented Programming for Security Engineering

This paper by Kotrappa Sirbi and Prakash Jayanth Kulkarni (link goes to HTML abstract, full text PDF is free) discusses implementation of an application’s security requirements in Java using Aspect-Oriented Programming (AOP).

We have AOP for Objective-C (of sorts), but as hardly anyone has used it I think it’s worth taking a paragraph or two out to explain. If you’ve ever written a non-trivial Cocoa[ Touch] application, you’ll have found that even when you have a good object-oriented design, you have code that addresses different concerns mixed together. A method that performs some useful task (deleting a document, for example) also calls some logging code, checks for authorisation, reports errors, and maybe some other things. Let’s define the main business concerns of the application as, well, business concerns, and all of these other things like logging and access control as cross-cutting concerns.

AOP is an attempt to reduce the coupling between business and cross-cutting code by introducing aspects. An aspect is a particular cross-cutting concern that must be implemented in an application, such as logging. Rather than calling the logging code from the business code, we’ll define join points, which are locations in the business code where it’s useful to insert cross-cutting code. These join points are usually method calls, but could also be exception throw points or anywhere else that program control changes. We don’t necessarily need logging at every join point, so we also define predicates (known in AOP jargon as pointcuts) that say which join points are relevant to logging. Whenever one of the matching join points is reached, the application runs the logging code.

This isn’t just useful for adding code. Your aspect code can also decide whether the business code actually gets run at all, and can even inspect and modify the return value of the business code. That’s where it gets useful for security purposes. You don’t need to put your access control checks (for instance) inside the business code; you can define them as modifications to join points in the business code. If you need to change the way access control works (say, going from a single-user scheme to a directory service, or from password checks to access tokens) you can just change the implementation of the aspect rather than diving through all of the app code.
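For a flavour of what this can look like in Objective-C, here’s a minimal sketch that uses method swizzling as a poor man’s join point. This isn’t the AOP framework alluded to above, and DocumentController, -deleteDocument: and the authorisation check are all invented for the example; the point is that the logging and access control live in the category, not in the business method:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// The business class: -deleteDocument: contains only business logic.
@interface DocumentController : NSObject
- (BOOL)deleteDocument: (id)document;
@end

@implementation DocumentController
- (BOOL)deleteDocument: (id)document
{
    // ...remove the document from the model, delete its file, and so on...
    return YES;
}
@end

// The "aspect": logging and access-control advice wrapped around the
// -deleteDocument: join point.
@interface DocumentController (AuditAspect)
- (BOOL)audit_deleteDocument: (id)document;
- (BOOL)currentUserMayDeleteDocuments;
@end

@implementation DocumentController (AuditAspect)

+ (void)load
{
    // Exchange the implementations so that every call to -deleteDocument:
    // runs the aspect code first.
    Method original = class_getInstanceMethod(self, @selector(deleteDocument:));
    Method wrapper = class_getInstanceMethod(self, @selector(audit_deleteDocument:));
    method_exchangeImplementations(original, wrapper);
}

- (BOOL)audit_deleteDocument: (id)document
{
    NSLog(@"AUDIT: delete requested for %@", document);    // logging concern
    if (![self currentUserMayDeleteDocuments]) {            // access-control concern
        NSLog(@"AUDIT: delete refused");
        return NO;                                          // veto the business code
    }
    // Because the implementations were exchanged, this invokes the original method.
    BOOL result = [self audit_deleteDocument: document];
    NSLog(@"AUDIT: delete %@", result ? @"succeeded" : @"failed");
    return result;
}

- (BOOL)currentUserMayDeleteDocuments
{
    return YES; // stand-in for a real authorisation check
}

@end

Switching from password checks to access tokens then means changing only the category; the business code is untouched.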

Of course, that doesn’t mean you can just implement the business logic then add security stuff later, like icing a cake or sprinkling fairy dust. You still need to design the business objects such that the security control decisions and the join points occur at the same places. However, AOP is useful for separating concerns, and for improving maintainability of non-core app behaviour.

Template class for unit testing Core Data entities

Some time ago, in a blog far, far away, I wrote about unit-testing Core Data. Essentially, your test case class should create a temporary, in-memory Core Data stack in -setUp, and clean it up in -tearDown. Your test methods can access the temporary context, model and persistent store to investigate the behaviour of objects under test.

The thing is, with any non-trivial Core Data app you’re going to end up with multiple entities, each with its own suite of tests. Wouldn’t it be nice if Xcode could set that stuff up automatically?

xcode-managedtest.png

Oh, right, it can :-). The template header is class.h:

//
//  «FILENAME»
//  «PROJECTNAME»
//
//  Created by «FULLUSERNAME» on «DATE».
//  Copyright «YEAR» «ORGANIZATIONNAME». All rights reserved.
//

#import <SenTestingKit/SenTestingKit.h>
#import <CoreData/CoreData.h>

@interface «FILEBASENAMEASIDENTIFIER» : SenTestCase {
    NSPersistentStoreCoordinator *coord;
    NSManagedObjectContext *ctx;
    NSManagedObjectModel *model;
    NSPersistentStore *store;
}

@end

And the implementation, class.m:

//
//  «FILENAME»
//  «PROJECTNAME»
//
//  Created by «FULLUSERNAME» on «DATE».
//  Copyright «YEAR» «ORGANIZATIONNAME». All rights reserved.
//

«OPTIONALHEADERIMPORTLINE»

@implementation «FILEBASENAMEASIDENTIFIER»

- (void)setUp
{
    model = [[NSManagedObjectModel mergedModelFromBundles: nil] retain];
    coord = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel: model];
    store = [coord addPersistentStoreWithType: NSInMemoryStoreType
                                configuration: nil
                                          URL: nil
                                      options: nil 
                                        error: NULL];
    ctx = [[NSManagedObjectContext alloc] init];
    [ctx setPersistentStoreCoordinator: coord];
}

- (void)tearDown
{
    [ctx release];
    ctx = nil;
    NSError *error = nil;
    STAssertTrue([coord removePersistentStore: store error: &error], 
                 @"couldn't remove persistent store: %@", error);
    store = nil;
    [coord release];
    coord = nil;
    [model release];
    model = nil;
}


@end

Put those in a folder called /Library/Application Support/Developer/Shared/Xcode/File Templates/Thaes Ofereode/Objective-C test case class (Core Data).pbfiletemplate, along with a TemplateInfo.plist file that looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>CounterpartTemplateFile</key>
	<string>class.h</string>
	<key>Description</key>
	<string>A subclass of SenTestCase, with optional header. Automatically sets up a managed object context suitable for unit-testing Core Data entities.</string>
	<key>MainTemplateFile</key>
	<string>class.m</string>
</dict>
</plist>

Xcode will automatically pick up the template the next time you use the template chooser.
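A test case created from this template then only needs its actual test methods, since the fixture is already in place. A hypothetical example, assuming an Employee entity whose name attribute is mandatory:

- (void)testEmployeeCannotBeSavedWithoutAName
{
    NSManagedObject *employee = [NSEntityDescription insertNewObjectForEntityForName: @"Employee"
                                                              inManagedObjectContext: ctx];
    NSError *error = nil;
    STAssertFalse([ctx save: &error], @"saving should fail while the employee has no name");

    [employee setValue: @"Alice" forKey: @"name"];
    error = nil;
    STAssertTrue([ctx save: &error], @"couldn't save a valid employee: %@", error);
}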

Configuring CruiseControl.rb in under an hour

One of the changes I decided to make straight after NSConf MINI yesterday was to enable continuous integration for my projects. I had used CI before based on BuildBot, but that had left me less than impressed:

  • It was really hard to set up
  • Its dependency system was less than optimal, leading it to do things like running the integration tests even after the build had failed
  • It had really bad memory consumption. The official line was that it didn’t leak, but used a massive working set. Now I don’t have a dedicated system for CI so can’t really spare all my RAM :)

In his talk yesterday, Gordon Murrison mentioned that Open Planet Software use CruiseControl.rb, and in conversation he assured me that it wasn’t as bad as all that. I decided to set it up today, and it took me pretty much an hour to go from downloading the source to having a working continuous integration setup. If I had known the stuff below, it could’ve been quicker.

The downloads page for CC.rb says that you need an older version of Ruby than Snow Leopard ships with, but I found that it actually works with the stock Ruby, so no problem there.

Before going any further, I decided to create a separate user account for automated builds. This is to provide some validation of the source code – so we know that the product actually can be built from the artefacts in SCM, and doesn’t depend on some magic configuration that happens to be part of my developer box. In fact, it turns out that for me there was some magic configuration in place – the Rehearsals build depends on the BGHUDAppKit IBPlugin, which of course wasn’t installed for my new cruisecontrol user.

So with the build environment set up, I can point CC.rb at my project. That’s done with this command line:

./cruise add Rehearsals --source-control svn --repository http://svn.thaesofereode.info/rehearsals/branches/branch-to-watch

Note that you have to specify the path all the way to the actual branch you want it to build (or trunk), not the top level of the repository. Now you need to tell it how to actually build the project. So edit ~/.cruise/projects/Rehearsals/cruise_config.rb and add a line like this:

project.build_command = 'xcodebuild -configuration Debug -target Test\ Cases build'

That tells Xcode to build the “Test Cases” target in Debug configuration, which in my case is what I need to get my OCUnit tests to run. While you’re in that file, set the project’s email settings, and create config/site_config.rb in the CC.rb distribution folder to set up the mail server (Gmail in my case). Now it should just be a case of running:

./cruise start

and watching my build succeed. Except it didn’t :(. My unit test target is injected into the app, which means that in order for the tests to even launch, my cruisecontrol user needed a window server connection, so I used fast user switching to log it in behind my real user. OK, so now my unit tests run automatically whenever I check in some source on that branch, and I can see the status at http://localhost:3333/.

That’s as far as I’ve got for now. Of course I’d like to have the cruisecontrol user automatically log in and run CC.rb whenever the system starts up; I’ll create a launch agent to do that. I also have a gcov-instrumented build configuration, and it would be instructive for CC.rb to report automatically on code coverage when the tests are run (though that report shouldn’t affect the result of the build). But I think I’ve done enough for one day; it’s time to go back to writing tests :).

Update: Thanks to Simon Whitaker for finding a guide to running CC.rb under Apache using Passenger. I’m not sure how that would work in my case, where I need a window server connection, but I’m sure there will be projects for which this is a better way to get the thing running automatically.

On improved tool support for Cocoa developers

I started writing some tweets that were clearly taking up too much room. They started like this:

My own thoughts: tool support is very important to good software engineering. 3.3.1 is not a big inhibitor to novel tools. /cc @rentzsch

then this:

There’s still huge advances to make in automating design, bug-hunting/squashing and traceability/accountability, for instance.

(The train of thought was initiated by the Dog Spanner’s [c4 release]; post.)

In terms of security tools, the Cocoa community needs to catch up with where Microsoft are before we need to start wondering whether Apple might be holding us back. Yes, I have started working on this; I expect to have something to show for it at NSConference MINI. However, I don’t mind whether it’s you or me who gets the first release: the important thing is that the tools should be available for all of us. So I don’t mind sharing my impression of where the important software security engineering tools for Mac and iPhone OS developers will be in the next few years.

Requirements comprehension

My first NSConference talk was on understanding security requirements, and it’s the focus of Chapter 1 of Professional Cocoa Application Security. The problem is, most of you aren’t actually engineers of security requirements; you’re engineers of beautiful applications. Where do you dump all of that security stuff while you’re focussing on making the beautiful app? It’s got to be somewhere that it’s still accessible, somewhere that it stays up to date, and it’s got to be available when it’s relevant. In other words, this information needs to be only just out of your way. A Pages document doesn’t really cut it.

Now over in the Windows world, they have the Microsoft Threat Modeling Tool, which makes it easy to capture and organise the security requirements, but stops short of providing any traceability or integration with the rest of the engineering process. It’d be great to know how each security requirement impacts each class, or the data model, etc.

Bug-finding

The Clang analyser is just the start of what static analysis can do. Many parts of Cocoa applications are data-driven, and good analysis tools should be able to inspect the relationship between the code and the data. Other examples: currently, if you want to ensure your UI is hooked up properly, you manually write tests that inspect the outlets, actions and bindings you set up in the XIB. If you want to ensure your data model is correct, you manually write tests to inspect your entity descriptions and relationships. Ugh. Code-level analysis tools can already reverse-engineer test conditions from the functions and methods in an app; they ought to be able to use the rest of the app too. And they ought to make use of the security model described above.
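To make that concrete, here’s the kind of hand-written model-inspection test I mean, sketched against a hypothetical Employee entity with a to-one department relationship. It’s exactly the sort of boilerplate a smarter analysis tool could generate, or make unnecessary:

- (void)testEmployeeHasAToOneDepartmentRelationship
{
    NSManagedObjectModel *model = [NSManagedObjectModel mergedModelFromBundles: nil];
    NSEntityDescription *employee = [[model entitiesByName] objectForKey: @"Employee"];
    STAssertNotNil(employee, @"the model should define an Employee entity");

    NSRelationshipDescription *department = [[employee relationshipsByName] objectForKey: @"department"];
    STAssertNotNil(department, @"Employee should have a department relationship");
    STAssertFalse([department isToMany], @"an employee belongs to exactly one department");
}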

I have recently got interested in another LLVM project called KLEE, a symbolic execution tool. Current security testing practices largely involve “fuzzing”, or choosing certain malformed/random input to give to an app and seeing what it does. KLEE can take this a step further by (in effect) testing any possible input, and reporting on the outcomes for various conditions. It can even generate automated tests to make it easy to see what effect your fixes are having. Fuzzing will soon become obsolete, but we Mac people don’t even have a good and conventional tool for that yet.
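For anyone who hasn’t seen one, a fuzz harness can be as simple as the following toy sketch, which batters a made-up DocumentParser class with random input at the unit level and logs anything that blows up; KLEE-style symbolic execution explores the same input space systematically rather than by chance:

#import <Foundation/Foundation.h>
#include <stdlib.h>

// Stand-in for whatever input-handling code you want to exercise.
@interface DocumentParser : NSObject
- (void)parseData: (NSData *)data;
@end

@implementation DocumentParser
- (void)parseData: (NSData *)data
{
    // real parsing code would go here
}
@end

int main(int argc, char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    DocumentParser *parser = [[DocumentParser alloc] init];
    for (NSUInteger run = 0; run < 10000; run++) {
        // Build a buffer of random length filled with random bytes...
        NSMutableData *input = [NSMutableData dataWithLength: arc4random() % 1024];
        unsigned char *bytes = [input mutableBytes];
        for (NSUInteger i = 0; i < [input length]; i++) {
            bytes[i] = arc4random() % 256;
        }
        // ...and see whether the parser survives it.
        @try {
            [parser parseData: input];
        }
        @catch (NSException *exception) {
            NSLog(@"run %lu raised %@", (unsigned long)run, exception);
        }
    }
    [parser release];
    [pool drain];
    return 0;
}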

Bug analysis

Once you do have fuzz tests or KLEE output, you start to get crash reports. But what are the security issues? Apple’s CrashWrangler tool can take a stab at analysing the crash logs to see whether a buffer overflow might potentially lead to remote code execution, but again this is just the tip of the iceberg. Expect KLEE-style tools to be able to report on deviations from expected behaviour and security issues without having to wait for a crash, just as soon as we can tell the tool what the expected behaviour is. And that’s an interesting problem in itself, because really the specification of what you want the computer to do is your application’s source code, and yet we’re trying to determine whether or not that is correct.

Safe execution

Perhaps the bitterest pill to swallow for long-time Objective-C programmers: some time soon you will be developing for a managed environment. It might not be as high-level as the .Net runtime (indeed my money is on the LLVM intermediate representation, as hardware-based managed runtimes have been and gone), but the game has been up for C arrays, memory dereferencing and monolithic process privileges for years. Just as garbage collectors have obsoleted many (but of course not all) memory allocation problems, so environment-enforced buffer safety can obsolete buffer overruns, enforced privilege checking can obsolete escalation problems, and so on. We’re starting to see this kind of safety retrofitted to compiled code using stack guards and the like, but by the time the transition is complete (if it ever is), expect your application’s host to be unrecognisable to the app as an armv7 or x86_64, even if the same name is still used.

Why do we annoy our users?

I assume that, with my audience being mainly Mac users, you are not familiar with Microsoft Security Assessment Tool, or MSAT. It’s basically a free tool for CIOs, CSOs and the like to perform security analyses. It presents two questionnaires, the first asking questions about your company’s IT infrastructure (“do you offer wireless access?”), the second asking about the company’s current security posture (“do you use WPA encryption?”). The end result is a report comparing the company’s risk exposure to the countermeasures in place, highlighting areas of weakness or overinvestment. The MSAT app itself isn’t too annoying.

Mostly. One bit is. Some of the questions are accompanied by information about the relevant threats, and industry practices that can help mitigate them. Information such as this:

In order to reduce the ability to 'brute-force' the credentials for privileged accounts, the passwords for such accounts should be changed regularly.

So, how does changing a password reduce the likelihood of a brute-force attack succeeding? Well, let’s think about it. The attacker has to choose a potential password to test. Obviously the attacker does not know your password a priori, or the attack wouldn’t be brute-force; so the guess is independent of your password. You don’t know what the attacker has, hasn’t, or will next test—all you know is that the attacker will exhaust all possible guesses given enough time. So your password is independent of the guess distribution.

Your password, and the attacker’s guess at your password, are independent. The probability that the attacker’s next guess is correct is the same even if you change your password first. Password expiration policies cannot possibly mitigate brute-force attacks.
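For the probabilistically inclined, here’s the same argument as a quick sketch in symbols, assuming the attacker’s k-th guess G_k is drawn from some distribution q_k over candidate passwords, independently of your password P:

\Pr[G_k = P] = \sum_{p} q_k(p)\,\Pr[P = p]

Changing your password just re-draws P from the same distribution you always pick passwords from, so every term on the right, and therefore the chance that the k-th guess hits, stays exactly the same.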

So why do we enforce password expiration policies? Actually, that’s a very good question. Let’s say an attacker does gain your password.

OK, "an attacker does gain your password."

The window of opportunity to exploit this condition depends on the time for which the password is valid, right? Wrong: as soon as the attacker gains the password, he can install a back door, create another account or take other steps to ensure continued access. Changing the password post facto will defeat an attacker who isn’t thinking straight, but ultimately a more comprehensive response should be initiated.

So password expiration policies annoy our users, and don’t help anyone.