NSConference MINI videos available

During WWDC week I talked at NSConference MINI, a one-day conference organised by Scotty and the MDN. The videos are now available: free to attendees, or $50 for all 10 for non-attendees.

My own talk was on extending the Clang static analyser to perform your own tests on your code. I’m pleased with the amount I managed to get in, and I like how the talk fitted in with the general software-engineering theme of the conference. There’s stuff on bit-level manipulation, eXtreme Programming, continuous integration, product management and more. I’d fully recommend downloading the whole shebang.

Posted in code-level, NSConf, software-engineering, tool-support | Leave a comment

On Trashing

Back in the 1980s and 1990s, people who wanted to clandestinely gain information about a company or organisation would go trashing.[*] That just meant diving in the bins to find information about the company structure – who worked there, who reported to whom, what orders or projects were currently in progress etc.

You’d think that these days trashing had been thwarted by the invention of the shredder, but no. While many companies do indeed destroy or shred confidential information, this is not universal. Even where shredding is common, it’s left up to the staff to decide what goes in the bin and what goes in the shredder; the staff do not always get this right (they’re likely to think about whether a document is secret to them, rather than about its impact on the company). Nor do they always want to think about which bin to put a worthless sheet of paper in.

Even better: in those places that do shred secret papers, they helpfully collect all of the secrets in big bins marked “To Shred” to help the trashers :). They then collect all of these bins into a big hopper, and leave that around (sometimes outside the building, in a covered or open yard) for the destruction company to come and pick up.

So if an attacker can get entry to the building, he just roots around in the “To Shred” bins. If someone asks, he tells them he put a printout there in the morning but now thinks he needs it again. Even if he can’t get in, he just dives into the hopper outside and gets access to all those juicy secrets (with none of the banana peelings and teabags associated with the non-secret bin).

But attackers who don’t like getting their hands dirty can gain some of the same information using technological means. LinkedIn will helpfully provide a list of employees – including their positions, so the public can find out something of the reporting structure. Some of those employees will be looking for new opportunities – these are great people to phone for more information! So are ex-employees, something LinkedIn will also help you out with.

But the fun doesn’t stop there. Once our attacker has the names, he now goes over to Twitter and Facebook. There he can find people griping about work…or describing what the organisation is up to, to put it another way.

All of the above information about 21st-century trashing comes from real experience with an office I was invited into in the last 12 months. Of course, I will not name the organisation in charge of that office (or their data destruction company). The conclusion is that trashing is alive and well, and that those who participate need no longer root around in, well, in the trash. How does your organisation deal with the problem?

[*] for me, it was mainly the 1990s. I was the perfect size in the 1980s for trashing, but still finding my way around a Dragon 32.

Posted in Business, Data Leakage, Policy, Twitter | Leave a comment

On detecting God Classes

Opinion on Twitter was divided when I suggested the following static analyser behaviour: report on any class that conforms to too many protocols.

Firstly, a warning: “too many” is highly contextual. Almost all objects implement NSObject and you couldn’t do much without it, so it gets a bye. Other protocols, like NSCoding and NSCopying, are little bits of functionality that don’t really dilute a class by their presence. It’s probably harmless for a class to implement those in addition to other protocols. Still others are so commonly implemented together (like UITableViewDataSource and UITableViewDelegate, or WebView‘s four delegate protocols) that they probably shouldn’t independently count against a class’s “protocol weight”. On the other hand, a class that conforms to UITableViewDelegate, UIAlertViewDelegate and MKMapViewDelegate might be trying to do too much – of which more after the next paragraph.

Secondly, a truism: the goal of a static analyser is to ignore code that the developer’s happy with, and to warn about code the developer isn’t happy with. If your coding style permits a class to conform to any number of protocols, and you’re happy with that, then you shouldn’t implement this analyser rule. If you would be happy with a maximum of 2, 4, or 1,024 protocols, you would benefit from a rule with that number. As I said in my NSConf MINI talk, the analyser’s results are not as cleanly definable as compiler errors (which indicate code that doesn’t conform to the language definition) or warnings (which indicate code that is very probably incorrect). The analyser is more of a code style and API use checker. Conforming to protocols is use of the API that can be checked.
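
As a rough illustration of the kind of rule I mean – not the Clang-based implementation from the talk, just a runtime sketch you could drop into a debug build to get a feel for the numbers – something like the function below would do. It’s pre-ARC, like the rest of the code on this blog, and the threshold and the whitelist of “harmless” protocols are assumptions you’d tune to your own style:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

static void ReportProtocolHeavyClasses(unsigned threshold)
{
    // Protocols that shouldn't count against a class's "protocol weight".
    NSSet *harmless = [NSSet setWithObjects: @"NSObject", @"NSCoding", @"NSCopying", nil];
    int classCount = objc_getClassList(NULL, 0);
    Class *classes = (Class *)malloc(sizeof(Class) * classCount);
    classCount = objc_getClassList(classes, classCount);
    for (int i = 0; i < classCount; i++) {
        unsigned int protocolCount = 0;
        // Only protocols this class adopts directly, which is what we care about.
        Protocol **protocols = class_copyProtocolList(classes[i], &protocolCount);
        unsigned int weight = 0;
        for (unsigned int j = 0; j < protocolCount; j++) {
            NSString *name = [NSString stringWithUTF8String: protocol_getName(protocols[j])];
            if (![harmless containsObject: name]) weight++;
        }
        if (weight > threshold) {
            NSLog(@"%s conforms to %u non-trivial protocols - possible God Class",
                  class_getName(classes[i]), weight);
        }
        free(protocols);
    }
    free(classes);
}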

OK, let’s get on with it. A class that conforms to too many protocols has too many responsibilities – it is a “God Class”. Now how can this come about? A developer who has heard about model-view-controller (MVC) will try to divide classes into three high-level groups, like this:

MVC high-level architecture

The problem comes when the developer fails to take that diagram for what it is: a 50,000-foot overview of the MVC architecture. Many Mac and iOS developers will use Core Data, and will end up with a model composed of multiple different entities. If a particular piece of the app’s workflow needs to operate on multiple entities, they may break that out into “business logic” objects that can be called by the controller. Almost all Mac and iOS developers use the standard controls and views, meaning they have no choice but to break up the view into multiple objects.

But where does that leave the controller? Without any motivation to divide things up, everything else is stuffed into a single object (likely an NSDocument or UIViewController subclass). This is bad. What happens if you need to display the same table in two different places? You have to go through that class, picking out the bits that are relevant, finding out where they’re tied to the bits that aren’t and untangling them. Ugh.

Cocoa developers will probably already be using multiple controller objects if they’re using Cocoa Bindings. Each NSArrayController or similar receives its object or objects, usually from the main controller object, and then gets on with the job of marshalling the interaction between the bound view and the observed model objects. So, if we take the proposed changes so far, our MVC diagram looks like this:

MVC - Slightly closer look

The point of my protocol-checking code is to go the remaining distance, and abstract out the other data sources into their own objects. What we’re left with is a controller that looks after the view’s use case, ensuring that logic actions take place when they ought, that steps in the workflow only become available when their preconditions are met, and so on. Everything related to performing the logic is down in those dynamic model objects, and everything to do with data presentation is in its own controller objects. Well, maybe not everything – a button doesn’t exactly have a complicated API. But if you need a table of employees for this view controller and a table of employees for that view controller, you just take the same table datasource object in both places. You don’t have two different datasource implementations in two view controllers (or even the same one pasted twice). This makes the diagram look like this:

MVC - more separation
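
As a concrete sketch of one of those extracted datasource objects – the class, entity and key names here are hypothetical, not from any real project – here’s an employee table datasource that two different view controllers could share:

#import <UIKit/UIKit.h>

// Hypothetical reusable datasource: owns the "table of employees" responsibility.
@interface EmployeeDataSource : NSObject <UITableViewDataSource>
{
    NSArray *employees;
}
- (id)initWithEmployees: (NSArray *)someEmployees;
@end

@implementation EmployeeDataSource

- (id)initWithEmployees: (NSArray *)someEmployees
{
    if ((self = [super init])) {
        employees = [someEmployees copy];
    }
    return self;
}

- (void)dealloc
{
    [employees release];
    [super dealloc];
}

- (NSInteger)tableView: (UITableView *)tableView numberOfRowsInSection: (NSInteger)section
{
    return [employees count];
}

- (UITableViewCell *)tableView: (UITableView *)tableView
         cellForRowAtIndexPath: (NSIndexPath *)indexPath
{
    static NSString *identifier = @"EmployeeCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier: identifier];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle: UITableViewCellStyleDefault
                                       reuseIdentifier: identifier] autorelease];
    }
    // Assumes the employee objects expose a "name" key; adjust for your model.
    cell.textLabel.text = [[employees objectAtIndex: indexPath.row] valueForKey: @"name"];
    return cell;
}

@end

Either view controller just sets it as its table view’s dataSource (remembering to retain it somewhere, because a table view doesn’t retain its dataSource), and keeps its own responsibilities down to workflow.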

So to summarise, a class that conforms to too many protocols probably has too many responsibilities. The usual reason for this is that controller objects take on responsibility for managing workflow, providing data to views and handling delegate responsibilities for the views. This leads to code that is not reusable except through the disdainful medium of copy-pasting, as it is harder to define a clean interface between these various concerns. By producing a tool that reports on the existence of such God classes, developers can be alerted to their presence and take steps to fix them.

Posted in code-level, iPad, iPhone, Mac, software-engineering, tool-support | Leave a comment

On Fitts’ Law and Security

…eh? Don’t worry, read on and all shall be explained.

I’ve said in multiple talks and podcasts before that one key to good security is good user interface design. If users are comfortable performing their tasks, and your application is designed such that the easiest way to use it is to do the correct thing, then your users will make fewer mistakes. Not only will they be satisfied with the experience, they’ll be less inclined to stray away from the default security settings you provide (your app is secure by default, right?) and less likely to damage their own data.

Let’s look at a concrete example. Matt Gemmell has already done a pretty convincing job of destroying this UI paradigm:

The dreaded add/remove buttons

The button on the left adds something new, which is an unintrusive, undoable operation, and one that users might want to do often. The button on the right destroys data (or configuration settings or whatever), which if unintended represents an accidental loss of data integrity or possibly service availability.

Having an undo feature here is not necessarily a good mitigation, because a user flustered at having just lost a load of work or important data cannot be guaranteed to think of using it. Especially as undo is one of those features that has different meaning in different apps, and is always hidden (there’s no “you can undo this” message shown to users). Having an upfront confirmation of deletion is not useful either, and Matt’s video has a good discussion of why.

Desktop UI

Anyway, this is a 19×19 px button (or two of them), right? Who’s going to miss that? This is where we start using the old cognometrics and physiology. If a user needs to add a new thing, he must:

  1. Think about what he wants to do (add a new thing)
  2. Think about how to do it (click the plus button, but you can easily make this stage longer by adding contextual menus, a New menu item and a keyboard shortcut, forcing him to choose)
  3. Move his hand to the mouse (if it isn’t there already)
  4. Locate the mouse pointer
  5. Move the pointer over the button
  6. Click the mouse button

Then there’s the cognitive effort of checking that the feedback is consistent with the expected outcome. Anyway, Fitts’ Law comes in at step 5. It tells us, in a very precise mathematical way, that it’s quicker to move the pointer to a big target than a small one. So while a precise user might take time to click in this region:

Fastidious clicking action

A user in a hurry (one who cares more about getting the task done than on positioning a pointing device) can speed up the operation by clicking in this region:

Hurried clicking action

The second region includes around a third of the remove button’s area; the user has a ~50% chance of getting the intended result, and a ~25% chance of doing something bad.
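
For the record, the “very precise mathematical way” is Fitts’ law’s movement-time prediction; in the commonly used Shannon formulation it looks like this (a sketch – a and b are constants you’d have to fit empirically for a given device and user):

% Fitts' law, Shannon formulation
% MT: time taken to move to the target
% D:  distance from the starting point to the centre of the target
% W:  width of the target along the axis of motion
% a, b: empirically fitted constants
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)

The larger the target width W relative to the distance D, the lower the index of difficulty and the quicker the movement – which is exactly why the hurried user’s click region sprawls across both buttons.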

Why are the click areas elliptical? Think about moving your hand – specifically moving the mouse in your hand (of course this discussion does not apply to trackpads, trackballs etc, but a similar one does). You can move your wrist left/right easily, covering large (in mousing terms) distances. Moving the mouse forward/back needs to be done with the fingers, a more fiddly task that covers less distance in the same time. If you consider arm motions the same applies – swinging your radius and ulna at the elbow to get (roughly) left/right motions is easier than pulling your whole arm at the shoulder to get forward/back. Therefore putting the remove button at one side of the add button is, from an ergonomic perspective, the absolute worst place for it to go.

Touch UI

Except on touch devices. We don’t really need to worry about the time spent moving our fingers to the right place on a touchscreen – most people develop a pretty good unconscious sense of proprioception (knowing where the bits of our bodies are in space) at a very early age and can readily put a digit where they want it. The problem is that we don’t want it in the same place the device does.

We spend most of our lives pointing at things, not stabbing them with the fleshy part of a phalange. Therefore when people go to tap a target on the screen, the biggest area of finger contact will be slightly below (and to one side, depending on handedness) the intended target. And of course they may not accurately judge the target’s location in the first place. In the image below, the user probably intended to tap on “Presentations”, but would the tap register there or in “LinkedIn Profile”? Or neither?

A comparatively thin finger

The easiest way to see this for yourself is to move from an iPhone (which automatically corrects for the user’s intended tap location) to an Android device (which doesn’t, so the tap occurs where the user actually touched the screen), or vice versa. Try typing on one for a while, then moving over to type on the other. Compare the sorts of typos you make on each device (obviously independent of the auto-correct facilities, which are good on both operating systems), and you’ll find that after switching devices you spend a lot of time hitting keys to the side of or below your target keys.

Conclusion

Putting any GUI controls next to each other invites the possibility that users will tap the wrong one. When a frequently-used control has a dangerous control as a near neighbour, there is a real chance of accidental security problems. Making dangerous controls hard to activate accidentally provides not only a superior user experience, but a superior security experience.

Posted in iPad, iPhone, Mac, threatmodel, UI, user-error | 1 Comment

Using Aspect-Oriented Programming for Security Engineering

This paper by Kotrappa Sirbi and Prakash Jayanth Kulkarni (link goes to HTML abstract, full text PDF is free) discusses implementation of an application’s security requirements in Java using Aspect-Oriented Programming (AOP).

We have AOP for Objective-C (of sorts), but as hardly anyone has used it I think it’s worth taking a paragraph or two out to explain. If you’ve ever written a non-trivial Cocoa[ Touch] application, you’ll have found that even when you have a good object-oriented design, you have code that addresses different concerns mixed together. A method that performs some useful task (deleting a document, for example) also calls some logging code, checks for authorisation, reports errors, and maybe some other things. Let’s define the main business concerns of the application as, well, business concerns, and all of these other things like logging and access control as cross-cutting concerns.

AOP is an attempt to reduce the coupling between business and cross-cutting code by introducing aspects. An aspect is a particular cross-cutting concern that must be implemented in an application, such as logging. Rather than calling the logging code from the business code, we’ll define join points, which are locations in the business code where it’s useful to insert cross-cutting code. These join points are usually method calls, but could also be exception throw points or anywhere else that program control changes. We don’t necessarily need logging at every join point, so we also define predicates that say which join points are useful for logging. Whenever one of the matching join points is met, the application will run the logging code.

This isn’t just useful for adding code. Your aspect code can also define whether the business code actually gets run at all, and can even inspect and modify the return value of the business code. That’s where it gets useful for security purposes. You don’t need to put your access control (for instance) checks inside the business code, you can define them as modifications to join points in the business code. If you need to change the way access control works (say going from a single-user to directory service scheme, or from password checks to access tokens) you can just change the implementation of the aspect rather than diving through all of the app code.
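
Objective-C doesn’t have a mainstream AspectJ equivalent, but the dynamic runtime lets you sketch the same idea. In the example below – entirely hypothetical: DocumentStore, AccessPolicy and the selectors are stand-ins for your own classes – a category wraps the -deleteDocument: join point with an authorisation check by exchanging method implementations. It’s crude compared with real pointcut predicates, but it keeps the access control decision out of the business code:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// Hypothetical business class and policy object, standing in for your own.
@interface DocumentStore : NSObject
- (void)deleteDocument: (id)document;
@end

@interface AccessPolicy : NSObject
+ (id)sharedPolicy;
- (BOOL)currentUserMayDelete: (id)document;
@end

// The aspect: authorisation advice wrapped around the -deleteDocument: join point.
@interface DocumentStore (AccessControlAspect)
- (void)aspect_deleteDocument: (id)document;
@end

@implementation DocumentStore (AccessControlAspect)

+ (void)load
{
    // Exchange the business method with the advised version once, at load time.
    Method original = class_getInstanceMethod(self, @selector(deleteDocument:));
    Method advised  = class_getInstanceMethod(self, @selector(aspect_deleteDocument:));
    method_exchangeImplementations(original, advised);
}

- (void)aspect_deleteDocument: (id)document
{
    // Cross-cutting concern: the access control decision lives here,
    // not in the business code.
    if (![[AccessPolicy sharedPolicy] currentUserMayDelete: document]) {
        NSLog(@"Access denied: attempt to delete %@", document);
        return;
    }
    // The implementations have been exchanged, so this call actually runs
    // the original -deleteDocument: business logic.
    [self aspect_deleteDocument: document];
}

@end

Switching from password checks to access tokens then means changing AccessPolicy and this category, not every business method that needs protecting.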

Of course, that doesn’t mean you can just implement the business logic then add security stuff later, like icing a cake or sprinkling fairy dust. You still need to design the business objects such that the security control decisions and the join points occur at the same places. However, AOP is useful for separating concerns, and for improving maintainability of non-core app behaviour.

Posted in code-level, software-engineering, tool-support | Leave a comment

A solution in need of a problem

I don’t usually do product reviews, in fact I have been asked a few times to accept a free product in return for a review and have turned them all down. This is just such an outré product that I have decided to write a review, despite not being approached by the company and having no connection. The product is the 3M Privacy Filter Gold. Here is one I was given at InfoSec, on my MacBook:

3M Privacy Filter Gold - front view

You’ll probably notice that the screen has some somewhat unsightly plastic tabs around the edge. They are holding in the main feature of this product, which is the thing giving the screen that slightly coppery colour. It’s actually a sheet of plastic which I assume is etched with a blazed diffraction grating, because at more acute viewing angles it makes the screen look like this:

3M Privacy Filter Gold - side view

OK, so you can still see the plastic tabs, but now it’s hard to make out the text. And that’s the goal of the privacy filter gold: it’s to stop shoulder surfers. By reducing the usable viewing angle of your screen. Hold on a moment, while I count the number of times I’ve been told about a business or government agency that leaked sensitive data through shoulder surfing.

0.

And the number of times I’ve heard (or discovered, through risk analysis) that it’s an important risk for an organisation to address? About the same. OK, so this product is distinctive, and gimmicky, and evidently does what it’s designed to do. But I don’t see the problem that led to this product being developed. It might be useful if you want to browse porn in Starbucks or goof off on Facebook while you’re at work, except that someone stood behind you can still see the screen. OK, you could browse porn while sat on the London Underground – except you won’t be able to get a network signal.

If anti-shoulder-surfing is important to you, you may want to take these issues into account. When used in strong sunlight, the privacy filter gold makes the screen look like this (note: availability of handsome security expert holding iPhone is strictly limited):

3M Privacy Filter Gold - in the sunshine

The MacBook isn’t great in strong sunlight anyway, but with the filter over the top it becomes positively unusable. All of that dust is actually on the filter (although it was fresh out of its packaging when the picture was taken), and it causes additional scattering through the grating, leading to a “gold dust” effect on the screen. And yes, the characters “3M GPF13.3W” are indeed etched into the filter at the top-right, near that thing that yes, is indeed a notch you can put your finger in for extracting the filter from the plastic tabs.

That’s one issue; the other is price. Retail price for these filters is around £50, varying slightly depending on screen size. I’m really not sure that’s a price worth paying, considering that I have no idea what the benefit of the filter would be.

Posted in Data Leakage | Leave a comment

Template class for unit testing Core Data entities

Some time ago, in a blog far, far away, I wrote about unit-testing Core Data. Essentially, your test case class should create a temporary, in-memory Core Data stack in -setUp, and clean it up in -tearDown. Your test methods can access the temporary context, model and persistent store to investigate the behaviour of objects under test.

The thing is, with any non-trivial Core Data app you’re going to end up with multiple entities, each with its own suite of tests. Wouldn’t it be nice if Xcode could set that stuff up automatically?

xcode-managedtest.png

Oh, right, it can :-). The template header is class.h:

//
//  «FILENAME»
//  «PROJECTNAME»
//
//  Created by «FULLUSERNAME» on «DATE».
//  Copyright «YEAR» «ORGANIZATIONNAME». All rights reserved.
//

#import <SenTestingKit/SenTestingKit.h>

@interface «FILEBASENAMEASIDENTIFIER» : SenTestCase {
    NSPersistentStoreCoordinator *coord;
    NSManagedObjectContext *ctx;
    NSManagedObjectModel *model;
    NSPersistentStore *store;
}

@end

And the implementation, class.m:

//
//  «FILENAME»
//  «PROJECTNAME»
//
//  Created by «FULLUSERNAME» on «DATE».
//  Copyright «YEAR» «ORGANIZATIONNAME». All rights reserved.
//

«OPTIONALHEADERIMPORTLINE»

@implementation «FILEBASENAMEASIDENTIFIER»

- (void)setUp
{
    model = [[NSManagedObjectModel mergedModelFromBundles: nil] retain];
    coord = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel: model];
    store = [coord addPersistentStoreWithType: NSInMemoryStoreType
                                configuration: nil
                                          URL: nil
                                      options: nil 
                                        error: NULL];
    ctx = [[NSManagedObjectContext alloc] init];
    [ctx setPersistentStoreCoordinator: coord];
}

- (void)tearDown
{
    [ctx release];
    ctx = nil;
    NSError *error = nil;
    STAssertTrue([coord removePersistentStore: store error: &error], 
                 @"couldn't remove persistent store: %@", error);
    store = nil;
    [coord release];
    coord = nil;
    [model release];
    model = nil;
}


@end

Put those in a folder called /Library/Application Support/Developer/Shared/Xcode/File Templates/Thaes Ofereode/Objective-C test case class (Core Data).pbfiletemplate, along with a TemplateInfo.plist file that looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>CounterpartTemplateFile</key>
	<string>class.h</string>
	<key>Description</key>
	<string>A subclass of SenTestCase, with optional header. Automatically sets up a managed object context suitable for unit-testing Core Data entities.</string>
	<key>MainTemplateFile</key>
	<string>class.m</string>
</dict>
</plist>

Xcode will automatically pick up the template the next time you use the template chooser.
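
For completeness, here’s the sort of test method you might write in a class generated from this template (it uses the ctx ivar set up in -setUp, and assumes a hypothetical Employee entity with a mandatory name attribute in your model):

// Remember to #import <CoreData/CoreData.h> in the test class.
- (void)testEmployeeMustHaveAName
{
    NSManagedObject *employee = [NSEntityDescription insertNewObjectForEntityForName: @"Employee"
                                                              inManagedObjectContext: ctx];
    NSError *error = nil;
    // Validation should fail while the mandatory attribute is missing...
    STAssertFalse([ctx save: &error], @"saved an employee with no name");
    // ...and succeed once it has been set.
    [employee setValue: @"Alice Appleseed" forKey: @"name"];
    STAssertTrue([ctx save: &error], @"couldn't save a valid employee: %@", error);
}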

Posted in CoreData, software-engineering, tool-support | Leave a comment

Configuring CruiseControl.rb in under an hour

One of the changes I decided to make straight after NSConf MINI yesterday was to enable continuous integration for my projects. I had used CI before based on BuildBot, but that had left me less than impressed:

  • It was really hard to set up
  • Its dependency system was less than optimal, leading it to do things like running the integration tests even after the build had failed
  • It had really bad memory consumption. The official line was that it didn’t leak, but used a massive working set. Now I don’t have a dedicated system for CI so can’t really spare all my RAM :)

In his talk yesterday, Gordon Murrison mentioned that Open Planet Software use CruiseControl.rb, and in conversation he assured me that it wasn’t as bad as all that. I decided to set it up today, and it took me pretty much an hour to go from downloading the source to having a working continuous integration setup. If I had known the stuff below, it could’ve been quicker.

The downloads page for CC.rb says that you need an older version of Ruby than Snow Leopard ships with, but I found that it actually works with the stock Ruby, so no problem there.

Before going any further, I decided to create a separate user account for automated builds. This is to provide some validation of the source code – so we know that the product actually can be built from the artefacts in SCM, and doesn’t depend on some magic configuration that happens to be part of my developer box. In fact, it turns out that for me there was some magic configuration in place – the Rehearsals build depends on the BGHUDAppKit IBPlugin, which of course wasn’t installed for my new cruisecontrol user.

So with the build environment set up, I can point CC.rb at my project. That’s done with this command line:

./cruise add Rehearsals --source-control svn --repository http://svn.thaesofereode.info/rehearsals/branches/branch-to-watch

Note that you have to specify the path all the way to the actual branch you want it to build (or trunk), not the top level of the repository. Now you need to tell it how to actually build the project. So edit ~/.cruise/projects/Rehearsals/cruise_config.rb and add a line like this:

project.build_command = 'xcodebuild -configuration Debug -target Test\ Cases build'

That tells Xcode to build the “Test Cases” target in Debug configuration, which in my case is what I need to get my OCUnit tests to run. While you’re in that file, set the project’s email settings, and create config/site_config.rb in the CC.rb distribution folder to set up the mail server (Gmail in my case). Now it should just be a case of running:

./cruise start

and watching my build succeed. But it wasn’t :(. My unit test target is injected into the app, which means that in order for the tests to even launch my cruisecontrol user needed a window server connection, so I used fast-user switching to log it in behind my real user. OK, so now my unit tests are automatically run whenever I check in some source on that branch, and I can see the status at http://localhost:3333/.

That’s as far as I’ve got for now. Of course I’d like to have the cruisecontrol user automatically log in and run CC.rb whenever the system starts up; I’ll create a launch agent to do that (a sketch is below). But I also have a gcov-instrumented build configuration, and it would be instructive for CC.rb to automatically report on code coverage when the tests are run (though that report shouldn’t affect the result of the build). But I think I’ve done enough for one day; it’s time to go back to writing tests :).
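
When I do get round to that launch agent, I expect it to look something like the plist below, saved in the cruisecontrol user’s ~/Library/LaunchAgents (so it only runs once that user’s session – and therefore a window server connection – exists). The label and paths are illustrative guesses rather than anything canonical:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>info.thaesofereode.cruisecontrol</string>
	<key>ProgramArguments</key>
	<array>
		<string>/Users/cruisecontrol/cruisecontrol.rb/cruise</string>
		<string>start</string>
	</array>
	<key>WorkingDirectory</key>
	<string>/Users/cruisecontrol/cruisecontrol.rb</string>
	<key>RunAtLoad</key>
	<true/>
	<key>KeepAlive</key>
	<true/>
</dict>
</plist>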

Update: Thanks Simon Whitaker for finding a guide to run CC.rb under Apache using Passenger. I’m not sure how that would work in my case where I need a WindowServer connection, but I’m sure that there will be projects where this is a better way to get the thing running automatically.

Posted in software-engineering, tool-support | 3 Comments

On the extension of code signing

One of the public releases Apple has made this WWDC week is that of Safari 5, the latest version of their web browser. Safari 5 is the first version of the software to provide a public extensions API, and there are already numerous extensions to provide custom functionality.

The documentation for developing extensions is zero-cost, but you have to sign up with Apple’s Safari developer program in order to get access. One of the things we see advertised before signing up is the ability to manage and download extension certificates using the Apple Extension Certificate Utility. Yes, that’s correct, all Safari extensions will be signed, and the certificates will be provided by Apple.

For those of you who have provisioned apps using the iTunes Connect and Provisioning Portal, you will find the Safari extension certificate dance to be a little easier. There’s no messing around with UDIDs, for a start, and extensions can be delivered from your own web site rather than having to go through Apple’s storefront. Anyway, the point is that earlier statement – despite being entirely JavaScript, HTML and CSS (so no “executable code” in the traditional sense of machine-language libraries or programs), all Safari extensions are signed using a certificate generated by Apple.

That allows Apple to exert tight control over the extensions that users will have access to. In addition to the technology-level security features (extensions are sandboxed, and being JavaScript they run entirely in a managed environment where buffer overflows and stack smashes are nonexistent), Apple effectively have the same “kill switch” control over the market that they claim over the iPhone distribution channel. Yes, even without the store in place. If a Safari extension is discovered to be malicious, Apple could not stop the vendor from distributing it, but by revoking the extension’s certificate they could stop Safari from loading it on consumers’ computers.

But what possible malicious software could you run in a sandboxed browser extension? Quite a lot, as it happens. Adware and spyware are obvious examples. An extension could automatically “Like” Facebook pages when it detects that you’re logged in, post Twitter updates on your behalf, and so on. Having the kill switch in place (and making developers reveal their identities to Apple) will undoubtedly come in useful.

I think Apple are playing their hand here; not only do they expect more code to require signing in the future, but they expect to control who holds the certificates that allow signed code to reach customers. Expect more APIs added to the Mac platform to feature the same distribution requirements. Expect new APIs to replace existing APIs, again with the same requirement that Apple decide who is or isn’t allowed to play.

Posted in Authorization, Browser, Codesign, Crypto, Mac, Windows | Comments Off on On the extension of code signing

On NSNull as an anti-pattern

All this talk about type-safe collections may leave you thinking: but what about NSNull? Let’s say you have an array that only accepts objects conforming to MyProtocol. You can’t add +[NSNull null] to it, because it doesn’t implement the protocol. So haven’t I just broken mutable arrays?

Let’s be clear: NSNull is a nasty hack. The original inventors of Foundation wanted to provide a variadic initialiser and factory for collection classes, but rather than doing +[NSArray arrayWithCount: (unsigned)count objects: (id)firstObject, ...] they created +[NSArray arrayWithObjects: (id)firstObject, ...]. That meant they needed a special value to flag the end of the list, and they chose nil. As a result, you couldn’t put nil into an array, because such an array could not be constructed using the +arrayWithObjects: style. So they decided to provide a new “nothing” placeholder, and created NSNull.

NSNull makes client code warty. If you were permitted to put nil into a collection, you could just do this:

for (id <MyProtocol>foo in myFoos) {
  [foo doSomethingInteresting];
}

but if you use NSNull, you get to write this (or a close variant):

for (id <MyProtocol>foo in myFoos) {
  if ([foo conformsToProtocol: @protocol(MyProtocol)]) {
    [foo doSomethingInteresting];
  }
}

Nasty. You’ve actually got to perform the test that the protocol conformance is supposed to address, or something that gets you the same outcome.

I’d prefer to use a different pattern, common in languages where nil or its equivalent cannot be messaged, known as the Null Object Pattern. To be clear, all you do is implement a class that conforms to the protocol (or extends the superclass, if that’s what you’re up to) but doesn’t do anything interesting. If it’s interrogated for data, it just returns 0, NO or whatever is relevant. If it’s asked to do work, it just returns. In short, it can be used in the same way as a real instance but does nothing, just as a placeholder ought to behave. So we might do this:

@interface NullMyProtocolConformer: NSObject <MyProtocol> { }
@end

@implementation NullMyProtocolConformer

- (void)doSomethingInteresting { }

@end

Now we can go back to the first version of our loop iteration, and keep our type-conformance tests in our collection classes. Anywhere you might want to put NSNull, you just stuff in an instance of NullMyProtocolConformer instead.
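
For completeness, putting the placeholder into the (hypothetical) myFoos collection from earlier looks like this, and the straightforward loop from the first listing needs no conformance check:

// Wherever you would have inserted [NSNull null], insert the null object instead.
id <MyProtocol> placeholder = [[[NullMyProtocolConformer alloc] init] autorelease];
[myFoos addObject: placeholder];

// The null object silently does nothing when its turn comes around.
for (id <MyProtocol> foo in myFoos) {
    [foo doSomethingInteresting];
}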

Posted in code-level, iPad, iPhone, Mac | 7 Comments