On type safety and making it harder to write buggy code

Objective-C’s duck typing system is both a blessing and a curse. A blessing, in that it’s amazingly flexible. A curse, in that such flexibility can lead to some awkward problems.

Something that typically happens in dealing with data from a property list, JSON or other similar format is that you perform some operation on an array of strings, only to find out that one of those strings was actually a dictionary. Boom – unrecognised selector sent to instance of NSCFDictionary. Unfortunately, in this case the magic smoke is escaping a long way from the gun barrel – we get to see what the problem is but not what caused it. The stack trace in the bug report only tells us what tried to use the collection that may have been broken a long time ago.

The easiest way to deal with bugs is to have the compiler catch them and refuse to emit any executable until they’re fixed. We can’t quite do that in Objective-C, at least not for the situation described here. That would require adding generics or a similar construction to the language definition, and providing classes to support such a construction. However we can do something which gets us significantly better runtime error diagnosis, using the language’s introspection facilities.

Imagine a mutable collection that knew exactly what kinds of objects it was supposed to accept. If a new object is added that is of that kind, then fine. If a new object of a different kind is added, then boom – invalid argument exception, crash. Only this time, the application crashes where we broke it not some time later. Actually, don’t imagine such a collection, read this one. Here’s the interface:

//
//  GLTypesafeMutableArray.h
//  GLTypesafeMutableArray
//
//  Created by Graham Lee on 24/05/2010.
//  Copyright 2010 Thaes Ofereode. All rights reserved.
//

#import <Cocoa/Cocoa.h>

@class Protocol;

/**
 * Provides a type-safe mutable array collection.
 * @throws NSInvalidArgumentException if the type safety is violated.
 */
@interface GLTypesafeMutableArray : NSMutableArray {
@private
    Class elementClass;
    Protocol *elementProtocol;
    CFMutableArrayRef realContent;
}

/**
 * The designated initialiser. Returns a type-safe mutable array instance.
 * @param class Objects added to the array must be an instance of this Class.
 *              Can be Nil, in which case class membership is not tested.
 * @param protocol Objects added to the array must conform to this Protocol.
 *                 Can be nil, in which case protocol conformance is not tested.
 * @note It is impossible to set this object's parameters after initialisation.
 *       Therefore calling -init will throw an exception; this initialiser must
 *       be used.
 */
- (id)initWithElementClass: (Class)class elementProtocol: (Protocol *)protocol;

/**
 * The class of which all added elements must be a kind, or Nil.
 */
@property (nonatomic, readonly) Class elementClass;

/**
 * The protocol to which all added elements must conform, or nil.
 */
@property (nonatomic, readonly) Protocol *elementProtocol;

@end

Notice that the class doesn’t allow you to build a type-safe array then set its invariants, nor can you change the element class or protocol after construction. This choice is deliberate: imagine if you could create an array to accept strings, add strings, then change it to accept arrays. Not only could you then have two different kinds in the array, but the array’s API couldn’t tell you about both kinds. Also notice that the added elements can either be required to be of a particular class (or its subclasses), or to conform to a particular protocol, or both. In theory it’s always better to define the protocol than the class; in practice, most Objective-C code – including Cocoa – is light in its use of protocols.

The implementation is then pretty simple: we just provide the properties, the initialiser and the NSMutableArray primitive methods. The storage is simply a CFMutableArrayRef.

//
//  GLTypesafeMutableArray.m
//  GLTypesafeMutableArray
//
//  Created by Graham Lee on 24/05/2010.
//  Copyright 2010 Thaes Ofereode. All rights reserved.
//

#import "GLTypesafeMutableArray.h"
#import <objc/Protocol.h>
#import <objc/runtime.h>

@implementation GLTypesafeMutableArray

@synthesize elementClass;
@synthesize elementProtocol;

- (id)init {
    @throw [NSException exceptionWithName: NSInvalidArgumentException
                                   reason: @"call initWithClass:protocol: instead"
                                 userInfo: nil];
}

- (id)initWithElementClass: (Class)class elementProtocol: (Protocol *)protocol {
    if (self = [super init]) {
        elementClass = class;
        elementProtocol = protocol;
        realContent = CFArrayCreateMutable(NULL,
                                           0,
                                           &kCFTypeArrayCallBacks);
    }
    return self;
}

- (void)dealloc {
    CFRelease(realContent);
    [super dealloc];
}

- (NSUInteger)count {
    return CFArrayGetCount(realContent);
}

- (void)insertObject:(id)anObject atIndex:(NSUInteger)index {
    if (elementClass != Nil) {
        if (![anObject isKindOfClass: elementClass]) {
            @throw [NSException exceptionWithName: NSInvalidArgumentException
                                           reason: [NSString stringWithFormat: @"Added object is not a kind of %@",
                                                    NSStringFromClass(elementClass)]
                                         userInfo: nil];
        }
    }
    if (elementProtocol != nil) {
        if (![anObject conformsToProtocol: elementProtocol]) {
            @throw [NSException exceptionWithName: NSInvalidArgumentException
                                           reason: [NSString stringWithFormat: @"Added object does not conform to %s",
                                                    protocol_getName(elementProtocol)]
                                         userInfo: nil];
        }
    }
    CFArrayInsertValueAtIndex(realContent,
                              index,
                              (const void *)anObject);
}

- (void)removeObjectAtIndex:(NSUInteger)index {
    CFArrayRemoveValueAtIndex(realContent, index);
}

- (id)objectAtIndex:(NSUInteger)index {
    return (id)CFArrayGetValueAtIndex(realContent, index);
}

@end

Of course, this class isn’t quite production-ready: it won’t play nicely with toll-free bridging[*], isn’t GC-ready, and doesn’t supply any versions of the convenience constructors. That last point is a bit of a straw man though because the whole class is a convenience constructor in that it’s a realisation of the Builder pattern. If you need an array of strings, you can take one of these, tell it to only accept strings then add all your objects. Take a copy at the end and what you have is a read-only array that definitely only contains strings.
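As a sketch of that Builder-style usage – hypothetical code, relying only on the class above plus standard Cocoa – building a strings-only array might look like this:

```objc
#import "GLTypesafeMutableArray.h"

// Build a collection that will only ever accept strings.
GLTypesafeMutableArray *builder = [[GLTypesafeMutableArray alloc]
    initWithElementClass: [NSString class] elementProtocol: nil];

// NSMutableArray implements -addObject: in terms of the primitive
// -insertObject:atIndex:, so the type check above applies here too.
[builder addObject: @"first finding"];
[builder addObject: @"second finding"];
// [builder addObject: [NSDictionary dictionary]]; // would throw NSInvalidArgumentException

// -copy on a mutable array returns an immutable NSArray: a read-only
// collection known to contain only strings.
NSArray *strings = [builder copy];
[builder release];
```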

So what we’ve found here is that we can use the facilities provided by the Objective-C runtime to move our applications’ problems earlier in time, moving bug discovery from when we try to use the buggy object to when we try to create the buggy object. Bugs that are discovered earlier are easier to track down and fix, and are therefore cheaper to deal with.

[*]OK, so it is compatible with toll-free bridging. Thanks to mike and mike for making me look that up… it turns out that Core Foundation just does normal ObjC message dispatch if it gets something that isn’t a CFTypeRef. Sadly, when I found that out I discovered that I had already read and commented on the post about bridging internals…

Posted in code-level, iPad, iPhone, Mac | 5 Comments

Careful how you define your properties

Spot the vulnerability in this Objective-C class interface:

@interface SomeParser : NSObject {
@private
    NSString *content;
}
@property (nonatomic, retain) NSString *content;
- (void)beginParsing;
//...
@end

Any idea? Let’s have a look at a use of this class in action:

SomeParser *parser = [[SomeParser alloc] init];
NSMutableString *myMutableString = [self prepareContent];
parser.content = myMutableString;
[parser beginParsing];
[self modifyContent];

The SomeParser class retains an object that might be mutable. This can be a problem if the parser only functions correctly when its input is invariant. While it’s possible to stop the class’s API from mutating the data – perhaps using the State pattern to change the behaviour of the setters – if the ivar objects are mutable then the class cannot stop other code from making changes. Perhaps the string gets truncated while it’s being parsed, or valid data is replaced with invalid data while the parser is reading it.

If a class needs an instance variable to remain unmodified during the object’s lifetime (or during some lengthy operation), it should take a copy of that object. It’s easy to forget this in cases like strings and collections, where the type of the ivar is immutable but mutable subclasses exist. So to fix this parser:

@property (nonatomic, copy) NSString *content;

You could also make the property readonly and provide an -initWithContent: constructor, which takes a copy that will be worked on.
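To see the difference between the two attributes, consider this sketch (hypothetical, but using SomeParser as declared above):

```objc
NSMutableString *myContent = [NSMutableString stringWithString: @"valid data"];
SomeParser *parser = [[SomeParser alloc] init];
parser.content = myContent;

// Mutate the string after handing it to the parser.
[myContent setString: @"trash"];

// With (retain): parser.content and myContent are the same object,
// so the parser now sees @"trash".
// With (copy): the setter stored an immutable snapshot at assignment
// time, so the parser still sees @"valid data".
```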

But with collection class properties these fixes may not be sufficient. Sure, you definitely get an immutable collection, but is it holding references to mutable elements? You need to check whether the collection class you’re using supports shallow or deep copying – that is, whether copying the collection retains its elements or copies them. If you don’t have deep copying but need it, then you’ll end up having to implement a -deepCopy method yourself.
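One hypothetical shape for such a method is a category on NSArray – a sketch only, which assumes every element adopts NSCopying:

```objc
@interface NSArray (GLDeepCopy)
- (NSArray *)deepCopy;
@end

@implementation NSArray (GLDeepCopy)
- (NSArray *)deepCopy {
    NSMutableArray *copies = [[NSMutableArray alloc] initWithCapacity: [self count]];
    for (id element in self) {
        // Each element must conform to NSCopying; nested collections
        // would themselves need deep-copying recursively here.
        id elementCopy = [element copy];
        [copies addObject: elementCopy];
        [elementCopy release];
    }
    NSArray *result = [copies copy];
    [copies release];
    return result; // follows the -copy convention: the caller owns the result
}
@end
```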

Note that the above discussion applies not only to collection classes, but to any object that has other objects as ivars and which is either itself mutable or might have mutable ivars. The general rule is easy to express: if you don’t want your properties to change, then take copies of them. The specifics vary from case to case and, as ever, the devil’s in the detail.

Posted in iPad, iPhone, Mac, Vulnerability | 2 Comments

Why OS X (almost) doesn’t need root any more

Note: this post was originally written for the Mac Developer Network.

In the beginning, there was the super-user. And the super-user was root.

When it comes to doling out responsibility for privileged work in an operating system, there are two easy ways out. Single-user operating systems just do whatever they’re told by whoever has access, so anyone can install or remove software or edit configuration. AmigaDOS, Classic Mac OS and MS-DOS all took this approach.

The next-simplest approach is to add multiple users, and let one of them do everything while all the others can do nothing. This is the approach taken by all UNIX systems since time immemorial – the root user can edit all files, set access rights for files and devices, start network services on low-numbered ports…and everyone else can’t.

The super-user approach has obvious advantages in a multi-user environment over the model with no privilege mechanism – only users who know how to log in as root can manage the computer. In fact it has advantages in a single-user environment as well: that one user can choose to restrict her own privileges to the times when she needs them, by using a non-privileged account the rest of the time.

It’s still a limited mechanism, in that it’s all-or-nothing. You either have the permission to do everything, or you don’t. Certain aspects like the ability to edit files can be delegated, but basically you’re either root or you’re useless. If you manage to get root – by intention or by malicious exploitation – you can do anything on the computer. If you exploit a root-running network service you can get it to load a kernel extension: not because network services need to load kernel extensions, but because there is nothing to stop root from doing so.

And that’s how pretty much all UNIX systems, including Mac OS X, work. Before getting up in arms about how Apple disabled root in OS X, remember this: they didn’t disable root, they disabled the account’s password. You can’t log in to a default OS X installation as root (though you can on Mac OS X Server). All of the admin facilities on Mac OS X are implemented by providing access to the monolithic root account – running a software update, configuring Sharing services, setting the FileVault master password all involve gaining root privilege.

The way these administrative features typically work is to use Authorization Services, and the principle of least privilege. I devoted a whole chapter to that in Professional Cocoa Application Security so won’t go into too much detail here; the high-level view is that there are two components: one runs as the regular user and the other as root. The unprivileged part performs an authorisation test and then, at its own discretion, decides whether to call the privileged helper. The privileged part might independently test whether the user application really did pass the authorisation test. The main issue is that the privileged part still has full root access.

So Authorization Services gives us discretionary access control, but there’s also a useful mandatory test relevant to the super-user. You see, traditional UNIX tests for whether a user is root by doing this:

if (process.p_euid == 0) {
    /* effective UID is 0: the process is the super-user, allow anything */
}

Well, Mac OS X does do something similar in parts, but it actually has a more flexible test in places. There’s a kernel authorisation framework called kauth – again, there’s a chapter in PCAS on this so I don’t intend to cover too much detail. It basically allows the kernel to defer security policy decisions to callbacks provided by kernel extensions; one such policy question is “should I give this process root?”. Where the kernel uses this test, super-user access is based not on the effective UID of the calling process, but on whatever the policy engine decides. Hmm… maybe the policy engine could use Authorization Services? If the application is an installer, and it has the installer right, and it’s trying to get root access to the filesystem, then it’s allowed.

Apple could then do away with monolithic root privileges completely, allowing the authorisation policy database to control who has privileged access for what tasks with which applications. The advantage is that if a privileged process ever gets compromised, the consequences for the rest of the OS are reduced.

Posted in Authorization, Mac, PCAS | Comments Off on Why OS X (almost) doesn’t need root any more

On improved tool support for Cocoa developers

I started writing some tweets, that were clearly taking up too much room. They started like this:

My own thoughts: tool support is very important to good software engineering. 3.3.1 is not a big inhibitor to novel tools. /cc @rentzsch

then this:

There’s still huge advances to make in automating design, bug-hunting/squashing and traceability/accountability, for instance.

(The train of thought was initiated by the Dog Spanner’s c4 release post.)

In terms of security tools, the Cocoa community needs to catch up with where Microsoft are before we need to start wondering whether Apple might be holding us back. Yes, I have started working on this, I expect to have something to show for it at NSConference MINI. However, I don’t mind whether it’s you or me who gets the first release, the important thing is that the tools should be available for all of us. So I don’t mind sharing my impression of where the important software security engineering tools for Mac and iPhone OS developers will be in the next few years.

Requirements comprehension

My first NSConference talk was on understanding security requirements, and it’s the focus of Chapter 1 of Professional Cocoa Application Security. The problem is, most of you aren’t actually engineers of security requirements, you’re engineers of beautiful applications. Where do you dump all of that security stuff while you’re focussing on making the beautiful app? It’s got to be somewhere that it’s still accessible, somewhere that it stays up to date, and it’s got to be available when it’s relevant. In other words, this information needs to be only just out of your way. A Pages document doesn’t really cut it.

Now over in the Windows world, they have the Microsoft Threat Modeling Tool, which makes it easy to capture and organise security requirements, but stops short of providing any traceability or integration with the rest of the engineering process. It’d be great to know how each security requirement impacts each class, or the data model, and so on.

Bug-finding

The Clang analyser is just the start of what static analysis can do. Many parts of Cocoa applications are data-driven, and good analysis tools should be able to inspect the relationship between the code and the data. Other examples: currently, if you want to ensure your UI is hooked up properly, you manually write tests that inspect the outlets, actions and bindings you set up in the XIB. If you want to ensure your data model is correct, you manually write tests to inspect your entity descriptions and relationships. Ugh. Code-level analysis can already reverse-engineer test conditions from the functions and methods in an app; such tools ought to be able to use the rest of the app too. And they ought to make use of the security model, described above.

I have recently got interested in another LLVM project called KLEE, a symbolic execution tool. Current security testing practices largely involve “fuzzing”, or choosing certain malformed/random input to give to an app and seeing what it does. KLEE can take this a step further by (in effect) testing any possible input, and reporting on the outcomes for various conditions. It can even generate automated tests to make it easy to see what effect your fixes are having. Fuzzing will soon become obsolete, but we Mac people don’t even have a good and conventional tool for that yet.

Bug analysis

Once you do have fuzz tests or KLEE output, you start to get crash reports. But what are the security issues? Apple’s CrashWrangler tool can take a stab at analysing the crash logs to see whether a buffer overflow might potentially lead to remote code execution, but again this is just the tip of the iceberg. Expect KLEE-style tools to be able to report on deviations from expected behaviour and security issues without having to wait for a crash, just as soon as we can tell the tool what the expected behaviour is. And that’s an interesting problem in itself, because really the specification of what you want the computer to do is your application’s source code, and yet we’re trying to determine whether or not that is correct.

Safe execution

Perhaps the bitterest pill to swallow for long-time Objective-C programmers: some time soon you will be developing for a managed environment. It might not be as high-level as the .Net runtime (indeed my money is on the LLVM intermediate representation, as hardware-based managed runtimes have been and gone), but the game has been up for C arrays, memory dereferencing and monolithic process privileges for years. Just as garbage collectors have obsoleted many (but of course not all) memory allocation problems, so environment-enforced buffer safety can obsolete buffer overruns, enforced privilege checking can obsolete escalation problems, and so on. We’re starting to see this kind of safety retrofitted to compiled code using stack guards and the like, but by the time the transition is complete (if it ever is), expect your application’s host to be unrecognisable to the app as an armv7 or x86_64, even if the same name is still used.

Posted in PCAS, threatmodel, tool-support | 1 Comment

LLVM projects you may not be aware of

All Mac and iPhone OS developers must by now be familiar with LLVM, the Low-Level Virtual Machine compiler that Apple has backed in preference to GCC (presumably at least partially because GCC 4.5 is now a GPLv3 project, in addition to technical problems with improving the older compiler). You’ll also be familiar with Clang, the modular C/ObjC/C++ lexer/parser that can be used as an LLVM front-end, or as a library for providing static analysis, refactoring and other code comprehension facilities. And of course MacRuby uses LLVM’s optimisation libraries.

The LLVM umbrella also covers a number of other projects that Mac/iPhone developers may not yet have heard about, but which nonetheless are pretty cool. This post is just a little tour of some of those. There are other projects that have made use of LLVM code, but which aren’t part of the compiler project – they are not the subject of this post.

libc++ is an implementation of the C++ standard library, targeting 100% compatibility with the (draft) C++0x standard.

KLEE looks very cool. It’s a “symbolic execution tool”, capable of automatically generating unit tests for software with high degrees of coverage (well over 90%). Additionally, given information about an application’s constraints and requirements, it can automatically discover bugs, generating failing tests to demonstrate the bug and become part of the test suite. There’s a paper describing KLEE, including a walkthrough of discovering a bug in tr, and tutorials on its use.

vmkit is a substrate layer for running bytecode. It takes high-level bytecode (currently JVM bytecode or IL, the bytecode of the .Net runtime) and translates it to IR, the LLVM intermediate representation. In doing so it can make use of LLVM’s optimisations and make better decisions regarding garbage collection.

Posted in C++, Java, objc | 27 Comments

On localisation and security

Hot on the heels of Uli’s post on the problems of translation, I present another problem you might encounter while localising your code. This is a genuine bug (now fixed, of course) in code I have worked on in the past, only the data has been changed to protect the innocent.

We had a crash in the following line:

NSString *message = [NSString stringWithFormat:
	NSLocalizedString(@"%@ problems found", @"Discovery message"),
	problem];

Doesn’t appear to be anything wrong with that, does there? Well, as I say, it was a crasher. The app only crashed in one language though…for purposes of this argument, we’ll assume it was English. Let’s have a look at English.lproj/Localizable.strings:

/* Discovery message */
"%@ problems found" = "%@ found in %@";

Erm, that’s not so good. It would appear that at runtime, the variadic method +[NSString stringWithFormat: (NSString *)fmt, ...] is expecting two arguments to follow fmt, but is only passed one, so it ends up reading its way off the end of the stack. That’s a classic format string vulnerability, but with a twist: none of our usual tools (by which I mean the various -Wformat flags and the static analyser) can detect this problem, because the format string is not contained in the code.

This problem should act as a reminder to ensure that the permissions on your app’s resources are correct, not just on the binary—an attacker can cause serious fun just by manipulating a text file. It should also suggest that you audit your translators’ work carefully, to ensure that these problems don’t arise in your app even without tampering.
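One cheap audit along those lines is to compare the number of format specifiers in each key with the number in its translation. A rough, hypothetical sketch – it only counts %@, where a real check would parse every printf-style specifier, and it assumes the .strings file parses as a plist dictionary (which .strings files do) – might look like this:

```objc
static NSUInteger GLCountOfPlaceholders(NSString *format) {
    // Count occurrences of %@ – naïve, but it would catch the bug above.
    return [[format componentsSeparatedByString: @"%@"] count] - 1;
}

// .strings files are (old-style) property lists, so NSDictionary can read them.
NSDictionary *strings = [NSDictionary dictionaryWithContentsOfFile:
    @"English.lproj/Localizable.strings"];
for (NSString *key in strings) {
    NSString *translation = [strings objectForKey: key];
    if (GLCountOfPlaceholders(key) != GLCountOfPlaceholders(translation)) {
        NSLog(@"Specifier count mismatch: \"%@\" -> \"%@\"", key, translation);
    }
}
```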

Posted in buffer-overflow, l10n, Mac, Vulnerability | 2 Comments

Which vendor “is least secure”?

The people over at Intego have a blog post, Which big vendor is least secure? They discuss that because Microsoft have upped their game, malware authors have started to target other products, notably those produced by Adobe and Apple.

That doesn’t really address the question though: which big vendor is least secure (or more precisely, which big vendor creates the least secure products)? It’s an interesting question, and one that’s so hard to answer, people usually get it wrong.

The usual metrics for vendor software security are:

  • Number of vulnerability reports/advisories last year
  • Speed of addressing reported vulnerabilities

Both are just proxies for the question we really want to know the answer to: “what risk does this product expose its users to?” Each has drawbacks when used as such a proxy.

The count of previously-reported vulnerabilities seems to correlate with a company’s development practices – if they were any good at threat modelling, they wouldn’t have released software with those vulnerabilities in, right? Well, maybe. But maybe they did do some analysis, discovered the vulnerability, and decided to accept it. Perhaps the vulnerability reports were actually the result of their improved secure development lifecycle, and some new technique, tool or consultant has patched up a whole bunch of issues. Essentially all we know is what problems have been addressed and who found them, and we can tell something about the risk that users were exposed to while those vulnerabilities were present. Actually, we can’t tell too much about that, unless we can find evidence that it was exploited (or not, which is harder). We really know nothing about the remaining risk profile of the application – have 1% or 100% of its vulnerabilities been addressed?

The only time we really know something about the present risk is in the face of zero-day vulnerabilities, because we know that a problem exists and has yet to be addressed. But reports of zero-days are comparatively rare, because the people who find them usually have no motivation to report them. It’s only once the zero-day gets exploited, and the exploit gets discovered and reported that we know the problem existed in the first place.

The speed of addressing vulnerabilities appears to tell us something about the vendor’s ability to react to security issues. Well, you might think it does; it actually tells you a lot more about the vendor’s perception of their customers’ appetite for installing updates. Look at enterprise-focussed vendors like Sophos and Microsoft, and you’ll find that most security patches are distributed on a regular schedule so that sysadmins know when to expect them and can plan their testing and deployment accordingly. Both companies have issued out-of-band updates, but only in extreme circumstances.

Compare that model with Apple’s, a company that is clearly focussed on the consumer market. Apple typically have an ad hoc (or at least opaque) update schedule, with security and non-security content alike bundled into infrequent patch releases. Security content is simultaneously released for some earlier operating systems in a separate update. Standalone security updates are occasionally seen on the Mac, rarely (if ever) on the iPhone.

I don’t really use any Adobe software so had to research their security update schedule specifically for this post. In short, it looks like they have very frequent security updates, but without any public schedule. Using Adobe Reader is an exercise in unexpected update installation.

Of course, we can see when the updates come out, but that doesn’t directly mean we know how long they take to fix problems – for that we need to know when problems were reported. Microsoft’s monthly updates don’t necessarily address bugs that were reported within the last month, they might be working on a huge backlog.

Where we can compare vendors is in situations where they all ship the same component with the same vulnerabilities, and must provide the same update. The more reactive companies (who don’t think their users mind installing updates) will release the fixes first. In the case of Apple we can compare their fixes of shared components, like the open source UNIX tools or Java, with other vendors – Linux distributors and Oracle, mainly. It’s this comparison that Apple frequently loses, taking longer to release the same patch than Oracle, Red Hat, Canonical and friends.

So ultimately what we’d like to know is “which vendor exposes its customers to most risk?”, for which we’d need an honest, accurate and comprehensive risk analysis from each vendor or an independent source. Of course, few customers are going to want to wade through a full risk analysis of an operating system.

Posted in Business, Responsibility, threatmodel, Vulnerability | 2 Comments

Why passwords aren’t always the right answer.

I realised something yesterday. I don’t know my master password.

Users of Mac OS X can use FileVault, a data protection feature that replaces the user’s home folder with an encrypted disk image. Encrypted disk images are protected by AES-128 or AES-256 encryption, but to get at the private key you need to supply one of two pieces of information. The first is the user’s login password, and the second is a private key for a recovery certificate. That private key is stored in a dedicated keychain, which is itself protected by… the master password. More information on the mechanism is available in both Professional Cocoa Application Security and Enterprise Mac.

Anyway, so this password is very useful – any FileVault-enabled home folder can be opened by the holder of the master password. Even if the user has forgotten his login password, has left the company or is being awkward, you can get at the encrypted content. It’s also hardly ever used. In fact, I’ve never used my own master password since I set it – and as a consequence have forgotten it.

There are a few different ways for users to recall passwords – by recital, by muscle memory or by revision. So when you enter the password, you either remember what the characters in the password are, know where your hands need to be to type it, or look at the piece of paper or keychain where you wrote it down. Discounting the revision option (the keychain is off the menu, because if you forget your login password you can’t decrypt your login keychain in order to view the recorded password), the only way to reinforce a password in your memory is to use it. And you never use the FileVault master password.

I submit that, for a rarely-used authentication step, a password is a particularly bad choice to protect FileVault recovery. Of course you don’t want attackers to be able to use the recovery mechanism, but you also want that, when you really need to recover your encrypted data, the OS doesn’t keep you out.

Posted in Encryption, Keychain, Mac, password | 3 Comments

WWDC dates announced

The entirety of Twitter has imploded after noticing that Apple has announced the dates for WWDC, this year June 7-11. That’s too short notice for me to go, and having only recently started working again after a few months concentrating solely on Professional Cocoa Application Security, I can’t scrape together the few thousand pounds needed to reserve flights, hotel and ticket at a month’s notice.

I hope that those of you who are going have a great time. The conference looks decidedly thin on Mac content this year, but while I still class myself as more of a Mac developer than an iP* developer, that shouldn’t be too much of a problem. The main value in WWDC is in the social/networking side first, the labs second, and the lecture content third – so as long as you can find an engineer in the labs who remembers how a Mac works, you’ll probably still have a great week and learn a lot.

Posted in carbon, conference, nextstep | 2 Comments

The difference between NSTableView and UITableView

A number of times, I’ve chased myself down rat holes in iPhone projects because I’ve created a design or implementation that assumes UITableView and NSTableView are similar objects. They aren’t.

The main problem I come across is related to how the cells are treated in Cocoa and in Cocoa touch. An AppKit table comprises columns, each of which uses a cell to display its content. A cell contains the drawing and event-handling stuff of a view, but nothing to do with its place in the view hierarchy or responder chain. It’s essentially a light-weight view. For each row in the table, NSTableColumn takes its cell, configures it for the content in that row and then draws the cell at its location in the column. No matter how many rows there are, a single cell is used.

UIKit works differently. Of course a UITableView only has one column, but it also displays views rather than cells. This is good, but leads to the key distinction that always trips me up: you can’t use the same view more than once in a table view. Of course, sections in a UITableView will often have more than one row, but each row that is visible on-screen needs its own instance of UITableViewCell (which is a subclass of UIView, and therefore a view in the traditional sense rather than a cell). If you try to re-use the same instance multiple times, the table view will configure each row but only the last one it prepared will be drawn.

So what’s this -reuseIdentifier stuff? That’s related to caching views for scrolling. Imagine a table view with 10 rows, of which 4 can be seen on screen at once. Each row uses the same type of cell in this example. When the table view first becomes visible there will be 4 UITableViewCell instances in use, displaying rows 0-3. Now you start to scroll the view. UITableView finds it needs an extra cell to display row 4, which is now partially on-screen while row 0 starts to slide off. When row 0 disappears completely, the table view could just delete its cell – but rather than do that, it adds it to a queue of reusable cells. When row 5 starts to appear, the table view can re-use the object it already created for row 0, because it’s the same type of cell as the one for row 5 and is currently unused.
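The usual -tableView:cellForRowAtIndexPath: implementation expresses that dance with the reuse queue (the row text here is just a placeholder):

```objc
- (UITableViewCell *)tableView: (UITableView *)tableView
         cellForRowAtIndexPath: (NSIndexPath *)indexPath {
    static NSString *identifier = @"RowCell";
    // Try to recycle a cell that has scrolled off-screen…
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier: identifier];
    if (cell == nil) {
        // …and only create a new one when the reuse queue is empty.
        cell = [[[UITableViewCell alloc] initWithStyle: UITableViewCellStyleDefault
                                       reuseIdentifier: identifier] autorelease];
    }
    cell.textLabel.text = [NSString stringWithFormat: @"Row %lu",
                           (unsigned long)[indexPath row]];
    return cell;
}
```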

So, that’s that really. Note to self: don’t treat UIKit like it’s just AppKit, or you’ll end up wasting a day of coding.

Posted in cocoa, iPad, iPhone, objc | 3 Comments