The code you wrote six months ago

We have this trope in programming that you should hate the code you wrote six months ago. This is a figurative way of saying that you should be constantly learning and assimilating new ideas, so that you can look at what you were doing earlier this year and have new ways of doing it.

It would be more accurate, though less visceral, to say “you should be proud that the code you wrote six months ago was the best you could do with the knowledge you then had, and should be able to see ways to improve upon it with the learning you’ve accomplished since then”. If you actually hate the code, well, that suggests that you think anyone who doesn’t have the knowledge you have now is an idiot. That kind of mentality is actually deleterious to learning, because you’re not going to listen to anyone for whom you have Set the Bozo Bit, including your younger self.

I wrote a lot about learning and teaching in APPropriate Behaviour, and thinking about that motivates me to scale this question up a bit. Never mind my code, how can we ensure that any programmer working today can look at the code I was writing six months ago and identify points for improvement? How can we ensure that I can look at the code any other programmer was working on six months ago, and identify points for improvement?

My suggestion is that programmers should know (or, given the existence of the internet, know how to use the index of) the problems that have already come before, how we solved them, and why particular solutions were taken. Reflecting on my own career, I find a lot of problems I introduced by not knowing things that had already been solved: it wasn’t until about 2008 that I really understood automated testing, a topic that was already being discussed back in 1968. Object-oriented analysis didn’t really click for me until later, even though Alan Kay and a lot of other really clever people had been working on it for decades. We’ll leave discussion of parallel programming aside for the moment.

So perhaps I’m talking about building, disseminating and updating a shared body of knowledge. The building part has already been done, but I’m not sure I’ve ever met anyone who’s read the whole SWEBOK or referred to any part of it in their own writing or presentations, so we’ll call the dissemination part a failure.

Actually, as I said, we only really need an index, not the whole BOK itself: these do exist for various parts of the programming endeavour. Well, maybe not indices so much as catalogues: summaries of the state of the art, occasionally with helpful references back to the primary material. Some of them are even considered “standards”, in that they are the go-to places for the information they catalogue:

  • If you want an algorithm, you probably want The Art of Computer Programming or Numerical Recipes. Difficulties: you probably won’t understand what’s written in there (the latter book in particular assumes a bunch of degree-level maths).
  • If you want idioms for your language, look for a catalogue called “Effective <name of your language>”. Difficulty: some people will disagree with the content here just to be contrary.
  • If you want a pattern, well! Have we got a catalogue for you! In fact, have we got more catalogues than distinct patterns! There’s the Gang of Four book, the PLoP series, and more. If you want a catalogue that looks like it’s about patterns but is actually comprised of random internet commentators trying to prove they know more than Alistair Cockburn, you could try out the Portland Pattern Repository. Difficulty: you probably won’t know what you’re looking for until you’ve already read it—and a load of other stuff.

I’ve already discussed how conference talks are a double-edged sword when it comes to knowledge sharing: they reach a small fraction of the practitioners, take information from an even smaller fraction, and typically set up a subculture with its own values distinct from programming in the large. The same goes for company-internal knowledge sharing programs. I know a few companies that run such programs (we do where I work, and Etsy publish the talks from theirs). They’re great for promoting research, learning and sharing within the company, but you’re always aware that you’re not necessarily discovering things from without.

So I consider this one of the great unsolved problems in programming at the moment. In fact, let me express it as two distinct questions:

  1. How do I make sure that I am not reinventing wheels, solving problems that no longer need solving or making mistakes that have already been fixed?
  2. A new and (for the sake of this discussion) inexperienced programmer joins my team. How do I help this person understand the problems that have already been solved, the mistakes that have already been made, and the wheels that have already been invented?

Solve this, and there are only two things left to do: fix concurrency, name things, and improve bounds checking.

Separating user interface from work

Here’s a design I’ve had knocking around my head for a while; between a discussion we had a few weeks ago at work and Saul Mora’s excellent design patterns talk at QCon, I’ve now built it.

A quick heads-up: currently the logic is all built into a side project app I’ve been working on so I don’t have a single project download I can point to. The post here should explain all of the relevant code, which is made available under the terms of the MIT Licence. A reusable component is forthcoming.


Remove the Massive View Controller from our applications’ architectures. Push Cocoa, Cocoa Touch, or other frameworks to the edges of our codebase, responsible only for working with the UI. Separate the concerns of user interaction, work scheduling and the actual work.

There are maintainability reasons for doing so. We separate unrelated work into different classes, localising the responsibilities and removing coupling between them. The same code can be used in multiple contexts, because the UI frameworks are decoupled from the work that they’re doing. This is not only a benefit for cross-platform work but for re-using the same logic in different places in a single user interface.

We also notice performance optimisations that become possible. With a clear delineation between the user interface code and the work, it’s much easier to understand which parts of the application must be run on the user interface thread and which can be done in other contexts.


Implement the Message Bus pattern from server applications. In response to a user event, the user interface creates a command object and sends it to a command bus. The command bus picks an appropriate handler, passes it the command and schedules it. The user interface, the work done and the scheduling of that work are therefore all decoupled.

IKBCommand Class Diagram


At application launch, the application accesses the shared IKBCommandBus object:

@interface IKBCommandBus : NSObject

+ (instancetype)applicationCommandBus;
- (void)execute: (id <IKBCommand>)command;
- (void)registerCommandHandler: (Class)handlerClass;

@end

and registers command handlers. Command handlers know what commands they can process, and can tell the bus whether they will accept a given command. Handlers can also be loaded later, for example in Mac applications or server processes when a new dynamic bundle is loaded.
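As a sketch, registration at launch might look like this (IKBSaveNoteCommandHandler is an illustrative handler class of my own invention, not part of the design above):

```objc
// App delegate, at launch: hand the bus every handler class we know about.
// IKBSaveNoteCommandHandler is an illustrative name, not a real class.
- (BOOL)application: (UIApplication *)application
    didFinishLaunchingWithOptions: (NSDictionary *)launchOptions
{
  IKBCommandBus *bus = [IKBCommandBus applicationCommandBus];
  [bus registerCommandHandler: [IKBSaveNoteCommandHandler class]];
  return YES;
}
```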

Once the application is running, the command bus can be used by user interface controllers. These controllers are typically UIViewControllers in an iOS app, NSViewControllers, NSDocuments or other objects in a Cocoa app, or maybe something else in other contexts. A controller receives an action related to a user interface event, and creates a specific IKBCommand.

@protocol IKBCommand <NSObject, NSCoding>

@property (nonatomic, readonly) NSUUID *identifier;

@end

Commands represent requests to do specific work, so the controller needs to configure the properties of the command it created based on user input such as the current state of text fields and so on. This is done on the user interface thread to ensure that the controller accesses its UI objects correctly.
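For example, a controller’s action method might create and configure a command like this (IKBSaveNoteCommand and its noteText property are illustrative names, not part of the code above):

```objc
// Runs on the UI thread in response to a user event. The command is
// configured from UI state here; its execution can then happen anywhere.
- (IBAction)saveNote: (id)sender
{
  IKBSaveNoteCommand *command = [[IKBSaveNoteCommand alloc] init];
  command.noteText = self.textView.text;
  [[IKBCommandBus applicationCommandBus] execute: command];
  [command release];
}
```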

The controller then tells the application’s command bus to execute the command. This does not need to be done on the user interface thread. The bus looks up the correct handler:

@interface IKBCommandHandler : NSObject

+ (BOOL)canHandleCommand: (id <IKBCommand>)command;
- (void)executeCommand: (id <IKBCommand>)command;

@end

Then the bus schedules the handler’s executeCommand: method.

Implementation and Discussion

The Command protocol includes a unique identifier and conformance to the NSCoding protocol. This supports the Event Sourcing pattern, in which changes to the application can be stored directly as a sequence of events. Rather than storing the current state in a database, the app could just replay all events it has received when it starts up.

This opens up possibilities including journaling (the app can replay messages it received but didn’t get a chance to complete due to some outage) and syncing (the app can retrieve a set of events from a remote source and play those it hasn’t already seen). An extension to the implementation provided here is that the event source acts as a natural undo stack, if commands can express how to revert their work. In fact, even if an event can’t be reversed, you can “undo” it by removing it from the event store and replaying the whole log back into the application from scratch.
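A minimal sketch of that replay, assuming the app had archived each command to a log file with NSKeyedArchiver as it arrived (the eventLogPath variable is illustrative):

```objc
// On startup: unarchive the logged commands (each conforms to NSCoding)
// and put them back through the bus in the order they were received.
NSArray *eventLog = [NSKeyedUnarchiver unarchiveObjectWithFile: eventLogPath];
for (id <IKBCommand> command in eventLog)
{
  [[IKBCommandBus applicationCommandBus] execute: command];
}
```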

When a command is received by the bus, it looks through the handlers that have been registered to find one that can handle the command. Then it schedules that handler on a queue.

@implementation IKBCommandBus
{
  NSOperationQueue *_queue;
  NSSet *_handlers;
}

static IKBCommandBus *_defaultBus;

+ (void)initialize
{
  if (self == [IKBCommandBus class])
  {
    _defaultBus = [self new];
  }
}

+ (instancetype)applicationCommandBus
{
  return _defaultBus;
}

- (id)init
{
  self = [super init];
  if (self)
  {
    _queue = [NSOperationQueue new];
    _handlers = [[NSSet set] retain];
  }
  return self;
}

- (void)registerCommandHandler: (Class)handlerClass
{
  _handlers = [[[_handlers autorelease] setByAddingObject: handlerClass] retain];
}

- (void)execute: (id <IKBCommand>)command
{
  IKBCommandHandler *handler = nil;
  for (Class handlerClass in _handlers)
  {
    if ([handlerClass canHandleCommand: command])
    {
      // Take the first handler that claims the command.
      handler = [handlerClass new];
      break;
    }
  }
  NSAssert(handler != nil, @"No handler defined for command %@", command);
  NSInvocationOperation *executeOperation = [[NSInvocationOperation alloc] initWithTarget: handler selector: @selector(executeCommand:) object: command];
  [_queue addOperation: executeOperation];
  [executeOperation release];
  [handler release];
}

- (void)dealloc
{
  [_queue release];
  [_handlers release];
  [super dealloc];
}

@end

Updating the UI

By the time a command is actually causing code to be run it’s far away from the UI, running a command handler’s method in an operation queue. The application can use the Observer pattern (for example Key Value Observing, or Cocoa Bindings) to update the user interface when command handlers change the data model.
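A sketch of the KVO approach, assuming a controller observing a “text” property on a model object (the note and textView names are illustrative):

```objc
// Registered earlier, e.g. in -viewDidLoad:
//   [self.note addObserver: self forKeyPath: @"text"
//                  options: NSKeyValueObservingOptionNew context: NULL];
- (void)observeValueForKeyPath: (NSString *)keyPath
                      ofObject: (id)object
                        change: (NSDictionary *)change
                       context: (void *)context
{
  // The handler may have changed the model on the bus's queue, so hop
  // back to the main queue before touching any UIKit object.
  dispatch_async(dispatch_get_main_queue(), ^{
    self.textView.text = [object valueForKeyPath: keyPath];
  });
}
```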

On protocols that aren’t

There’s a common assumption when dealing with Objective-C protocols or Java interfaces (or abstract classes, I suppose): that you’re abstracting away the implementation of an object leaving just its interface. “Oh, don’t mind how I quack, all you need to know is that I do quack”. This assumption is unwarranted.

All protocols/interfaces actually tell you is that there is a list of messages, and you’ll be given an object that responds to those messages. Assuming that you can use it in the same way as any other object that responds to the same messages is a mistake. This is something I’ve written about before, but it’s time to give a concrete example.

I’ve been debugging some multi-threaded code recently, so I’ve been writing my own lock classes to try and inspect how the different threads interact. They implement the NSLocking protocol, as all the Foundation lock classes do:

The NSLocking protocol declares the elementary methods adopted by classes that define lock objects. A lock object is used to coordinate the actions of multiple threads of execution within a single application. By using a lock object, an application can protect critical sections of code from being executed simultaneously by separate threads, thus protecting shared data and other shared resources from corruption.

It defines two methods, -lock and -unlock. So, given I have an object that conforms to NSLocking, I can call -lock and -unlock just as I would on any other object that conforms to the protocol, right?

Wrong. With an NSRecursiveLock (which conforms to the protocol), I can call -lock multiple times in a row without calling -unlock. Given an NSLock (which also conforms to the protocol), I can’t. With NSLock I can’t call -unlock on a different thread than I called -lock; I could write a class where that is permitted.
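To make that concrete, here’s a sketch: both objects are “an NSLocking object”, but only one of them survives this usage.

```objc
NSRecursiveLock *recursiveLock = [NSRecursiveLock new];
[recursiveLock lock];
[recursiveLock lock];   // fine: the owning thread may re-acquire
[recursiveLock unlock];
[recursiveLock unlock];

NSLock *plainLock = [NSLock new];
[plainLock lock];
[plainLock lock];       // deadlock: NSLock is not re-entrant
```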

As I argued in the above-linked post, protocols are actually a poor approximation for what we want to use them as: agreements on how to interact with an object via its messaging interface.

Shell scripts and Xcode

Back in 2009 at the first NSConf, Scotty asked some of the speakers for an Xcode Quick Tip. I’m still using mine today.

When your target needs a “Run Shell Script” build phase, don’t write the script into the box in Xcode’s build phases view. Instead, create the shell script as an external file and call that from the Xcode build phase. It’s easier to version control, and you can take advantage of the capabilities of external editors—particularly where your “shell script” is actually in Perl, Ruby or some similar language.
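Concretely, the entire contents of the build phase box becomes a single line delegating to the external script (the path here is illustrative):

```shell
"${SRCROOT}/Scripts/build-phase.sh"
```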

Objective-C, dependencies, linking

In the most recent episode of Edge Cases, Wolf and Andrew discuss dependency management, specifically as it pertains to Objective-C applications that import libraries using the CocoaPods tool.

In one app I worked on a few years ago, two different libraries each tried to include (as part of the libraries themselves, not as dependencies) the Reachability classes from Apple’s sample code. The result was duplicate symbol definitions, because my executable was trying to link both (identical) definitions of the classes. Removing one of the source files from the build fixed it, but how could we avoid getting into that situation in the first place?

One way explored in the podcast is to namespace the classes. So Mister Framework could rename their Reachability to MRFReachability, Framework-O-Tron could rename theirs to FOTReachability. Now we have exactly the same code included twice, under two different names. They don’t conflict, but they are identical so our binary is bigger than it needs to be.

It’d be great if they both encoded their dependency on a common class but didn’t try to include it themselves, so we could just fetch that class and use it in both places. CocoaPods’s dependency resolution allows for that, and will work well when both frameworks want exactly the same Reachability class. However, we hit a problem again when they want different versions of a library that share the same symbol names.

Imagine that the two frameworks were written using different versions of libosethegame. The developers changed the interface when they went from v1.0 to v1.1, and Framework-O-Tron is still based on the older version of the interface. So just selecting the newer version won’t work. Of course, neither does just selecting the older version. Can we have both versions of libosethegame, used by the two different frameworks, without ending up back with the symbol collision error?

At least in principle, yes we can. The dynamic loader, dyld (also described in the podcast) supports two-level namespace for dynamic libraries. Rather than linking against the osethegame library with -losethegame, you could deploy both libosethegame.1.0.0.dylib and libosethegame.1.1.0.dylib. One framework links with -losethegame.1.0, the other links with -losethegame.1.1. Both are deployed, and the fact that they were linked with different names means that the two-level namespace resolves the correct symbol from the correct version of the library, and all is well.
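A sketch of that deployment, with illustrative file and source names (linking directly against each dylib file rather than via -l search paths, to keep the example short):

```shell
# Build the two library versions, each recording its own install name:
clang -dynamiclib -install_name libosethegame.1.0.dylib \
      -o libosethegame.1.0.0.dylib losethegame-1.0.c
clang -dynamiclib -install_name libosethegame.1.1.dylib \
      -o libosethegame.1.1.0.dylib losethegame-1.1.c

# Link each framework against the version it was built for; the recorded
# install names differ, so the two-level namespace keeps their symbols apart:
clang -dynamiclib -o FrameworkOTron.dylib frameworkotron.m libosethegame.1.0.0.dylib
clang -dynamiclib -o MisterFramework.dylib misterframework.m libosethegame.1.1.0.dylib
```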

Of course, if you’ve got dynamic libraries, and the library vendor is willing to do a little work, they can just ship one version that supports all previous behaviour, looking at which version of the library the client was linked against to decide what behaviour to provide. Despite Mac OS X providing a versioned framework bundle layout, Apple has never (to my knowledge) shipped different versions of the system frameworks. Instead, the developers use the Mach-O load headers for an executable to find the linked version of their library, and supply behaviour equivalent to that version.

The above two paragraphs do rather depend on being able to use the dynamic linker. We can’t, on iOS, at the moment.

Can Objective-C be given safe categories?

That was the subject of this lunchtime’s vague thinking out loud. The problems with categories are well-known: you can override the methods already declared on a class, or the methods provided in another category (and therefore another category can replace your implementations too). Your best protection is to use ugly wartifying prefixes in the hope that your bewarted method names don’t collide with everybody else’s bewarted method names.

A particular problem with categories, and one that’s been observed in the wild, is when you add a method in a category that is, at some later time, added to the original implementation of the class itself. Other consumers of the class (including the framework it’s part of) may be expecting to work with the first-party implementation, not your substitution. If the first-party method has a different binary interface to yours (e.g. one of you returns a primitive value and the other a struct), as happened to a lot of people with NSArray around the end of the 1990s, prepare to start crashinating.

Later implementations of similar features in other languages have avoided this problem by refusing to add methods that already exist, and by ensuring that even if multiple extensions define the same method they can all coexist and the client code expresses exactly which one it’s referring to. Can we add any of this safety to Objective-C?

Partially. We could design a function for adding a collection of methods from a “category” to a class at runtime, that only adds them if the class doesn’t already implement them. class_addCategory() shows what this might look like, but it only supports non-struct-returning instance methods.

If class_addCategory(target, source, NO) succeeds, then the methods you were trying to add did not exist on the target class before you called the function. However, you cannot be sure that they weren’t being added while your call was in progress, and you can’t know later that they weren’t clobbered by someone else at some point between successfully adding the methods and using them. Also, if class_addCategory() fails, you may find the only reasonable course of action is to not use the methods you were trying to add: the only thing you know about their implementation is that it either doesn’t exist or isn’t the one you were expecting. This is at odds with a hypothetical purist notion of Object-Oriented Programming where you send messages to objects and don’t care what happens as a result.
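The heart of such a function can be sketched with the Objective-C runtime API; class_addMethod already refuses to add a selector that the class itself defines, which is exactly the property we need (the function name here is mine, not the one linked above):

```objc
#import <objc/runtime.h>
#import <stdlib.h>

// Copy source's instance methods onto target, skipping any selector that
// target itself already implements. Returns NO if anything was skipped.
BOOL IKBAddMethodsFromClass(Class target, Class source)
{
  unsigned int methodCount = 0;
  Method *methods = class_copyMethodList(source, &methodCount);
  BOOL addedAll = YES;
  for (unsigned int i = 0; i < methodCount; i++)
  {
    // class_addMethod returns NO when target already defines the selector.
    addedAll &= class_addMethod(target,
                                method_getName(methods[i]),
                                method_getImplementation(methods[i]),
                                method_getTypeEncoding(methods[i]));
  }
  free(methods);
  return addedAll;
}
```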

There are plenty of ways to work around the limitations of categories: composition is the most likely to succeed (more likely than subclassing, which suffers the same collision problem as a later version of the superclass might try to define a method with the same name as one you’ve chosen, which you’re now clobbering). It doesn’t let you replace methods on a class—a tool that like most in the programmer’s utility belt is both occasionally useful and occasionally abuseful.


I should point out that I’m not a fan of taking away the potentially dangerous tools. Many people who see the possibility for a language feature to be abused argue that it should never be used or that languages that don’t offer it should be preferred. This is continuum-fallacy nonsense, to which I do not subscribe. Use whatever language features help you to produce a working, comprehensible, valuable software system: put in whatever protections you want to guard against existing or likely problems.

Coupling in a Cocoa[ Touch] App

This is one of my occasional “problem looking for a solution” posts. It’d be great to discuss this on G+ or somewhere. I don’t think, at the outset of writing this post, that the last sentence is going to solve the problems identified in the body.

I’ve built applications in a few different technologies, and I think that the Cocoa and Cocoa Touch apps I’ve seen have been the most tightly coupled to their host frameworks, with the possible exception of Delphi. I include both my own code and code I’ve seen from other people; I’m mainly talking about my own work, of course. I’ve got a few ideas on why that might be.

The coupling issue in detail

Ignore Foundation for a moment. A small amount of Foundation is framework-ish: the run loop and associated event sources and sinks. The rest is really a library of data types and abstracted operating system facilities.

But now look at AppKit or UIKit. Actually, imagine not looking at them. Consider removing all of the code in your application that directly uses an AppKit or UIKit class. Those custom views, gone. View controller subclasses? Boom. Managed documents? Buh-bye.

OK, now let’s try the same with Core Data. Imagine it disappeared from the next version of the OS. OK, so your managed objects disappear, but how much of the rest of your app uses them directly? Now how about AVFoundation, or GameKit, and so on?

I recently took a look at some of my code that I’d written very recently, and found that less code survived these excisions than I might like. To be sure, it’s a greater amount than from an application I wrote a couple of years ago, but all the same my Objective-C code is tightly coupled to the Apple frameworks. The logic and features of the application do not stand on their own but are interwoven with the things that make it a Mac, or iPhone app.

Example symptoms

I think Colin Campbell said it best:

iOS architecture, where MVC stands for Massive View Controller

A more verbose explanation is that these coupled applications violate the Single Responsibility Principle. There are multiple things each class is doing, and this means multiple reasons that I might need to change any of these classes. Multiple reasons to change means more likelihood that I’ll introduce a bug into this class, and in turn more different aspects of the app that could break as a result.

Particularly problematic is that changes in the way the app needs to interact with the frameworks could upset the application behaviour: i.e. I could break what the app does by changing how it presents a UI or how it stores objects. Such modifications could come about when new framework classes are introduced, or new features added to existing classes.

What’s interesting is that I’ve worked on applications using other frameworks, and done a better job: I can point at an Eclipse RCP app where the framework dependencies all lie at the plugin interface boundaries. Is there something specific to Cocoa or Cocoa Touch that leads to applications being more tightly coupled? Let’s look at some possibilities.

Sample code

People always malign sample code. When you’re just setting out, it assumes you know too much. When you’re expert, it takes too many shortcuts and doesn’t display proper [choose whichever rule of code organisation is currently hot]. Sample code is like the avocado of the developer documentation world: it’s only ripe for a few brief minutes between being completely inedible and soft brown mush.

Yes, sample code is often poorly-decoupled, with all of the application logic going in the app delegate or the view controller. But I don’t think I can blame my own class design on that. I remember a time when I did defend a big-ass app delegate by saying it’s how Apple do it in their code, but that was nearly a decade ago. I don’t think I’ve looked to the sample code as a way to design an app for years.

But sample code is often poorly-decoupled regardless of its source. In researching this post, I had a look at the sample code in the Eclipse RCP book; the one I used to learn the framework for the aforementioned loosely-coupled app. Nope, that code is still all “put the business logic in the view manager” in the same way Apple’s and Microsoft’s is.

Design of the frameworks and history

I wonder whether there’s anything specific about the way that Apple’s frameworks are designed that leads to tight coupling. One obvious aspect of the issue is that many of the frameworks were designed in the 1990s.

Documentation on object-oriented programming from that time tells us that subclassing was the hotness. NSTableView was designed such that each of its parts would be “fully subclassable”, according to the release notes. Enterprise Objects Framework (and as a result, Core Data) was designed such that even your data has to be in a subclass of a framework class.

Fast forward to now and we know that subclassing is extremely tight coupling. Trying to create a subclass of a type you do control is tricky enough, but when it’s someone else’s you have no idea when they’re going to add methods that clash with yours or change behaviour you’re relying on. But those problems, real though they are, are not what I’m worried about here: subclassing is vendor lock-in. Every time you make it harder to extract your code from the vendor’s framework, you make it harder to leave that framework. More on that story later.

So subclassing couples your code to the superclass, and makes it share the responsibility of the superclass and whatever it is you want it to do. That becomes worse when the superclass itself has multiple responsibilities:

  • views are responsible for drawing, geometry and event handling.
  • documents are responsible for loading and saving, managing windows, handling user interaction for load, save and print requests, managing the in-memory data representing the document, serialising document access, synchronising user interface access with background work and printing.
  • pretty much everything in an AppKit app is responsible in some way for scripting support, which is a cross-cutting concern.

You can see Apple’s frameworks extricating themselves from the need to subclass everything, in some places. The delegate pattern, which was largely used either to supply data or let the application respond to events from long-running tasks, is now also used as a decorator to provide custom layout decisions, as with the 10.4 additions to the NSTableViewDelegate, UITableViewDelegate and more obviously the UICollectionViewDelegateFlowLayout. Delegate classes are still coupled to the protocol interface, but are removed from the class hierarchy giving us more options to adapt our application code onto the interface.

Similarly, the classes that offer an API based on supplying a completion handler require that the calling code be coupled to the framework interface, but allow anything to happen on completion. As previously mentioned you sometimes get other problems related to the callback design, but at least now the only coupling to the framework is at the entry point.

No practice at decoupling

This is basically the main point, isn’t it? A lack of discipline comes from a lack of practice. But why should I (and, I weakly argue, other programmers whose code I’ve read) be out of practice?

No push to decouple

Here’s what I think the reason could be. Remind me of the last application you saw that wasn’t either a UIKit app or an AppKit app. Back in the dim and distant past there might be more variety: the code in your WebObjects Objective-C server might also be used in an AppKit+EOF client app. But most of us are writing iOS software that will only run on iOS, so the effort in decoupling an iOS app from UIKit has intellectual benefits but no measurable tangible benefits.

The previous statement probably shouldn’t be taken as absolute. Yes, there are other ways to do persistence than Core Data. Nonetheless, it’s common to find iOS apps with managed object contexts passed to every view controller: imagine converting that ball of mud to use BNRPersistence.

What about portability? There are two options for writing cross-platform Objective-C software: frameworks that are API compatible with UIKit or AppKit, or not. While it is possible to build an application that has an Objective-C core and uses, say, Qt or Win32 to provide a UI, in practice I’ve never seen that.

There also aren’t any “alternative” Objective-C application frameworks on the platforms ObjC programmers do support. In the same way that you might want to use the same Java code in a SWT app, a Swing app, and a Spring MVC server; you’re not going to want to port your AppKit app to SomeoneElsesKit. You’ll probably not want to move away from UIKit to $something_else in the near future in the same way it might be appropriate to move from Struts to Spring or from Eclipse RCP to NetBeans Platform.

Of course, this is an argument with some circular features. Because it’s hard to move the code I write away from UIKit, if something else does come along the likelihood is I’d be disinclined to take advantage of it. I should probably change that: vendor lock-in in a rapidly changing industry like software is no laughing matter. Think of the poor people who are stuck with their AWT applications, or who decided to build TNT applications on NeWS because it was so much more powerful than X11.


This is the last sentence, and it solves nothing; I did tell you that would happen.

A two-dimensional dictionary


A thing I made has just been open-sourced by my employers at Agant: the AGTTwoDimensionalDictionary works a bit like a normal dictionary, except that the keys are CGPoints, meaning we can find all the objects within a given rectangle.
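By way of illustration only (these method names are my guesses at what such an interface looks like, not the real AGTTwoDimensionalDictionary header; see the project for that):

```objc
// Store objects against points, then query by rectangle rather than
// testing every object individually. Method names are illustrative.
AGTTwoDimensionalDictionary *mapIndex = [AGTTwoDimensionalDictionary new];
[mapIndex setObject: buildingImage forKey: CGPointMake(120.0, 340.0)];

// While panning: only the objects whose keys fall inside the visible
// rectangle need considering for drawing.
NSArray *visibleObjects = [mapIndex objectsInRegion: CGRectMake(0, 0, 320, 480)];
```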


A lot of time on developing Discworld: The Ankh-Morpork Map was spent on performance optimisation: there’s a lot of stuff to get moving around a whole city. As described by Dave Addey, the buildings on the map were traced and rendered into separate images so that we could let characters go behind them. This means that there are a few thousand of those little images, and whenever you’re panning the map around the app has to decide which images are visible, put them in the correct place (in three dimensions; remember people can be in front of or behind the buildings) and draw everything.

A first pass involved creating a set containing all of the objects, then looping over them to find out which were within the screen region. This was too slow. Implementing this 2-D index instead made it take about 20% of the original time for only a few tens of kilobytes more memory, so that’s where we are now. It’s also why the data type doesn’t currently do any rebalancing of its tree: it had already become fast enough for the app it was built for. This is a key part of performance work: know which battles are worth fighting. About one month of full-time development went into optimising this app, and it would’ve been more if we hadn’t been measuring where the most benefit could be gained. By the time we started releasing betas, every code change was measured in Instruments before being accepted.

Anyway, we’ve open-sourced it so it can be fast enough for your app, too.


There’s a data structure called the multidimensional binary tree or k-d tree, and this dictionary is backed by that data structure. I couldn’t find an implementation of that structure I could use in an iOS app, so cracked open the Objective-C++ and built this one.

Objective-C++? Yes. There are two reasons for using C++ in this context: one is that the structure actually does get accessed often enough in the Discworld app that dynamic dispatch all the way down adds a significant time penalty. The other is that the structure contains enough objects that having a spare isa pointer per node adds a significant memory penalty.
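To make the shape of the thing concrete, here’s a hypothetical plain-C++ sketch of a 2-d tree — an illustration of the technique, not the AGTTwoDimensionalDictionary source. Each node splits on x or y alternately with depth, and a rectangle query only descends into the subtrees that the rectangle can actually reach:

```cpp
#include <memory>
#include <vector>

// Toy 2-d tree: nodes alternate splitting on x and y with depth.
struct Point { double x, y; };

struct Node {
    Point pt;
    int value;  // payload stored under this key
    std::unique_ptr<Node> left, right;
};

// Insert a key/value pair; even depths compare x, odd depths compare y.
void insert(std::unique_ptr<Node>& node, Point pt, int value, int depth = 0) {
    if (!node) {
        node.reset(new Node{pt, value, nullptr, nullptr});
        return;
    }
    bool goLeft = (depth % 2 == 0) ? (pt.x < node->pt.x) : (pt.y < node->pt.y);
    insert(goLeft ? node->left : node->right, pt, value, depth + 1);
}

// Collect every value whose key lies inside [xmin,xmax] x [ymin,ymax],
// pruning any subtree the query rectangle cannot intersect.
void search(const Node* node, double xmin, double xmax,
            double ymin, double ymax, int depth, std::vector<int>& out) {
    if (!node) return;
    if (node->pt.x >= xmin && node->pt.x <= xmax &&
        node->pt.y >= ymin && node->pt.y <= ymax)
        out.push_back(node->value);
    double split = (depth % 2 == 0) ? node->pt.x : node->pt.y;
    double lo    = (depth % 2 == 0) ? xmin : ymin;
    double hi    = (depth % 2 == 0) ? xmax : ymax;
    if (lo < split)  search(node->left.get(),  xmin, xmax, ymin, ymax, depth + 1, out);
    if (hi >= split) search(node->right.get(), xmin, xmax, ymin, ymax, depth + 1, out);
}
```

The pruning in `search` is the whole point: the naive set-and-loop approach visits every object, while this visits only the spine of the tree plus whatever falls near the query rectangle. Note also that insertion order determines the tree’s shape — which is why a production version would eventually want the rebalancing mentioned above.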

But then there’s also a good reason for using Objective-C: it’s an Objective-C app. My controller objects shouldn’t have to be written in a different language just to use some data structure. Therefore I reach for the only application of ObjC++ that should even be permitted to compile: an implementation in one language that exposes an interface in the other. Even the unit tests are written in straight Objective-C, because that’s how the class is supposed to be used.

The Liskov Citation Principle

In her keynote speech at QCon London 2013 on The Power of Abstraction, Barbara Liskov referred to several papers contemporary with her work on abstract data types. I’ve collected these references and found links to free copies of the articles where available.

  • Dijkstra 1968, Go To statement considered harmful
  • Wirth 1971, Program development by stepwise refinement
  • Parnas 1971, Information distribution aspects of design methodology
  • Liskov 1972, A design methodology for reliable software systems
  • Schuman and Jorrand 1970, Definition mechanisms in extensible programming languages (not apparently available online for free)
  • Balzer 1967, Dataless Programming
  • Dahl and Hoare 1972, Hierarchical program structures (not apparently available online for free)
  • Morris 1973, Protection in programming languages
  • Liskov and Zilles 1974, Programming with abstract data types
  • Liskov 1987, Data abstraction and hierarchy

How to handle Xcode in your meta-build system’s iOS or Mac app target

OK, I’ve said before in APPropriate Behaviour that I dislike build systems that build other build systems:

Some build procedures get so complicated that they spawn another build system that configures the build environment for the target system before building. An archetypal example is GNU autotools – which actually has a three-level build system. Typically the developers will run `autoconf`, a tool that examines the project to find out what questions the subsequent step should ask and generates a script called `configure`. The user downloads the source package and runs `configure`, which inspects the compilation environment and uses a collection of macros to create a Makefile. The Makefile can then compile the source code to (finally!) create the product.

As argued by Poul-Henning Kamp, this is a bad architecture that adds layers of cruft to work around code that has not been written to be portable to the environments where it will be used. Software written to be built with tools like these is hard to read, because you must read multiple languages just to understand how one line of code works.

One problem that arises in any cross-platform development is that assumptions about “the other platforms” (being the ones you didn’t originally write the software on) are sometimes made based on one of the following sources of information:

  • none
  • a superficial inspection of the other platform
  • analogy to the “primary” platform

An example of the third case: I used to work on the Mac version of a multi-platform product, certain core parts of which were implemented by cross-platform libraries. One of these libraries just needed a little configuration for each platform: tell it what file extension to use for shared libraries, and give it the path to the Registry.

What cost me a morning today was an example of the second case: assuming that all Macs are like the one you tried. Let me show you what I mean. Here’s the contents of /Developer on my Mac:

$ ls /Developer/

Wait, where’s Xcode? Oh right, they moved it for the App Store builds, didn’t they?

$ ls /Applications/Xcode.app
ls: /Applications/Xcode.app: No such file or directory

Since Xcode 2.5, Xcode has been relocatable and can live anywhere on the filesystem. Even if it is in one of the usual places, that might not be the version a developer wants to use. I keep a few different Xcodes around: usually the current one, the last one I knew everything worked on, and a developer preview release when there is one. I then also tend to forget to throw old Xcodes away, so I’ve got 4 different versions at the moment.

But surely this is all evil chaos from those crazy precious Mac-using weirdos! How can you possibly cope with all of that confusion? Enter xcode-select:

$ xcode-select -print-path

Xcode-select is in /usr/bin, so you don’t have the bootstrapping problem of trying to find the tool that lets you find the thing. That means that you can always rely on it being in one place for your scripts or other build tools. You can use it in a shell script:

XCODE_DEVELOPER_DIR=`/usr/bin/xcode-select -print-path`

or in a CMake file:

exec_program(/usr/bin/xcode-select ARGS -print-path OUTPUT_VARIABLE XCODE_DEVELOPER_DIR)

or in whatever other tool you’re using. The path is manually chosen by the developer (using the -switch option), so if for some reason it doesn’t work out (like the developer has deleted that version of Xcode without updating xcode-select), then you can fall back to looking in default locations.
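Putting that together, the fallback might look something like this shell function — a sketch, and the candidate locations listed are illustrative guesses rather than an exhaustive survey of where Xcode has ever lived:

```shell
# Prefer xcode-select's answer; fall back to historical default
# locations only if the recorded path no longer exists.
# The candidate list below is illustrative, not exhaustive.
find_developer_dir() {
    dir=`/usr/bin/xcode-select -print-path 2>/dev/null`
    if [ ! -d "$dir" ]; then
        for candidate in /Applications/Xcode.app/Contents/Developer /Developer; do
            if [ -d "$candidate" ]; then
                dir="$candidate"
                break
            fi
        done
    fi
    echo "$dir"
}
```

Checking that the returned directory actually exists matters because xcode-select happily reports a path the developer has since deleted; the function degrades gracefully rather than handing your build system a dangling path.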

Please do use xcode-select as a first choice for finding Xcode or the developer folder on any Mac system, particularly if your project uses a build generator. It’s more robust to changes—either from Apple or from the users of that Mac—than relying on the developer tools being installed to their default location.