Coupling in a Cocoa[ Touch] App

This is one of my occasional “problem looking for a solution” posts. It’d be great to discuss this over on G+ or somewhere. I don’t think, at the outset of writing this post, that the last sentence is going to solve the problems identified in the body.

I’ve built applications in a few different technologies, and I think that the Cocoa and Cocoa Touch apps I’ve seen have been the most tightly coupled to their host frameworks, with the possible exception of Delphi apps. I include both my own code and code I’ve seen from other people, though I’m mainly talking about my work, of course. I’ve got a few ideas on why that might be.

The coupling issue in detail

Ignore Foundation for a moment. A small amount of Foundation is framework-ish: the run loop and associated event sources and sinks. The rest is really a library of data types and abstracted operating system facilities.

But now look at AppKit or UIKit. Actually, imagine not looking at them. Consider removing all of the code in your application that directly uses an AppKit or UIKit class. Those custom views, gone. View controller subclasses? Boom. Managed documents? Buh-bye.

OK, now let’s try the same with Core Data. Imagine it disappeared from the next version of the OS. OK, so your managed objects disappear, but how much of the rest of your app uses them directly? Now how about AVFoundation, or GameKit, and so on?

I took a look at some code I’d written very recently, and found that less of it survived these excisions than I might like. To be sure, more survived than would have from an application I wrote a couple of years ago, but all the same my Objective-C code is tightly coupled to the Apple frameworks. The logic and features of the application do not stand on their own but are interwoven with the things that make it a Mac or iPhone app.

Example symptoms

I think Colin Campbell said it best:

iOS architecture, where MVC stands for Massive View Controller

A more verbose explanation is that these coupled applications violate the Single Responsibility Principle. Each class is doing multiple things, which means there are multiple reasons I might need to change it. Multiple reasons to change mean a greater likelihood that I’ll introduce a bug into the class, and in turn more aspects of the app that could break as a result.

Particularly problematic is that changes in the way the app needs to interact with the frameworks could upset the application behaviour: i.e. I could break what the app does by changing how it presents a UI or how it stores objects. Such modifications could come about when new framework classes are introduced, or new features added to existing classes.

What’s interesting is that I’ve worked on applications using other frameworks, and done a better job: I can point at an Eclipse RCP app where the framework dependencies all lie at the plugin interface boundaries. Is there something specific to Cocoa or Cocoa Touch that leads to applications being more tightly coupled? Let’s look at some possibilities.

Sample code

People always malign sample code. When you’re just setting out, it assumes you know too much. When you’re expert, it takes too many shortcuts and doesn’t display proper [choose whichever rule of code organisation is currently hot]. Sample code is like the avocado of the developer documentation world: it’s only ripe for a few brief minutes between being completely inedible and soft brown mush.

Yes, sample code is often poorly-decoupled, with all of the application logic going in the app delegate or the view controller. But I don’t think I can blame my own class design on that. I remember a time when I did defend a big-ass app delegate by saying it’s how Apple do it in their code, but that was nearly a decade ago. I don’t think I’ve looked to the sample code as a way to design an app for years.

But sample code is often poorly-decoupled regardless of its source. In researching this post, I had a look at the sample code in the Eclipse RCP book; the one I used to learn the framework for the aforementioned loosely-coupled app. Nope, that code is still all “put the business logic in the view manager” in the same way Apple’s and Microsoft’s is.

Design of the frameworks and history

I wonder whether there’s anything specific about the way that Apple’s frameworks are designed that leads to tight coupling. One obvious thing about the design of many of the frameworks is that it happened in the 1990s.

Documentation on object-oriented programming from that time tells us that subclassing was the hotness. NSTableView was designed such that each of its parts would be “fully subclassable”, according to the release notes. Enterprise Objects Framework (and as a result, Core Data) was designed such that even your data has to be in a subclass of a framework class.

Fast forward to now and we know that subclassing is extremely tight coupling. Trying to create a subclass of a type you do control is tricky enough, but when it’s someone else’s you have no idea when they’re going to add methods that clash with yours or change behaviour you’re relying on. But those problems, real though they are, are not what I’m worried about here: subclassing is vendor lock-in. Every time you make it harder to extract your code from the vendor’s framework, you make it harder to leave that framework. More on that story later.
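
To make that clash concrete, here’s a hypothetical sketch; every name in it is invented:

#import <Foundation/Foundation.h>

// Hypothetical sketch; all names invented. VNDView stands in for a framework
// class you don't control.
@interface VNDView : NSObject // version 1 of VendorKit: no -refresh anywhere
@end
@implementation VNDView
@end

@interface MyWidgetView : VNDView
- (void)refresh; // our own method, added while the superclass had none
@end
@implementation MyWidgetView
- (void)refresh
{
    // re-fetch the model and mark ourselves as needing display
}
@end

// If VendorKit 2.0 adds its own -refresh to VNDView and starts calling it
// during layout, this override begins running at moments we never designed
// for, and nothing at compile time warns us.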

So subclassing couples your code to the superclass, and makes it share the responsibility of the superclass and whatever it is you want it to do. That becomes worse when the superclass itself has multiple responsibilities:

  • views are responsible for drawing, geometry and event handling.
  • documents are responsible for loading and saving, managing windows, handling user interaction for load, save and print requests, managing the in-memory data representing the document, serialising document access, synchronising user interface access with background work and printing.
  • pretty much everything in an AppKit app is responsible in some way for scripting support which is a cross-cutting concern.

In some places you can see Apple’s frameworks extricating themselves from the need to subclass everything. The delegate pattern, which was largely used either to supply data or to let the application respond to events from long-running tasks, is now also used as a decorator to provide custom layout decisions: witness the 10.4 additions to NSTableViewDelegate, UITableViewDelegate, and more obviously UICollectionViewDelegateFlowLayout. Delegate classes are still coupled to the protocol interface, but they are removed from the class hierarchy, giving us more options to adapt our application code onto the interface.
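
For instance, here’s a minimal sketch of a layout decision supplied through UICollectionViewDelegateFlowLayout; PhotoGridSizer is a name invented for the example:

#import <UIKit/UIKit.h>

// A sketch: the layout decision lives in a plain object conforming to the
// delegate protocol; no view subclass required. PhotoGridSizer is invented.
@interface PhotoGridSizer : NSObject <UICollectionViewDelegateFlowLayout>
@end

@implementation PhotoGridSizer
- (CGSize)collectionView:(UICollectionView *)collectionView
                  layout:(UICollectionViewLayout *)collectionViewLayout
  sizeForItemAtIndexPath:(NSIndexPath *)indexPath
{
    // The framework asks for the decision rather than requiring an override.
    return (indexPath.item % 3 == 0) ? CGSizeMake(200.0, 200.0)
                                     : CGSizeMake(96.0, 96.0);
}
@end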

Similarly, the classes that offer an API based on supplying a completion handler require that the calling code be coupled to the framework interface, but allow anything to happen on completion. As previously mentioned you sometimes get other problems related to the callback design, but at least now the only coupling to the framework is at the entry point.
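
As a sketch of what that looks like, with SlideshowModel and -noteSlideshowFinished invented to stand in for pure application code:

#import <UIKit/UIKit.h>

// Sketch: UIKit coupling confined to the entry point. SlideshowModel and
// its method are invented names for plain application logic.
@interface SlideshowModel : NSObject
- (void)noteSlideshowFinished;
@end

@interface SlideshowViewController : UIViewController
@property (nonatomic, strong) SlideshowModel *model;
@end

@implementation SlideshowViewController
- (IBAction)doneTapped:(id)sender
{
    [self dismissViewControllerAnimated:YES completion:^{
        // Anything can happen in here; none of it needs to know about UIKit.
        [self.model noteSlideshowFinished];
    }];
}
@end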

No practice at decoupling

This is basically the main point, isn’t it? A lack of discipline comes from a lack of practice. But why should I (and, I weakly argue, other programmers whose code I’ve read) be out of practice?

No push to decouple

Here’s what I think the reason could be. Remind me of the last application you saw that wasn’t either a UIKit app or an AppKit app. Back in the dim and distant past there was more variety: the code in your WebObjects Objective-C server might also be used in an AppKit+EOF client app. But most of us are writing iOS software that will only ever run on iOS, so the effort of decoupling an iOS app from UIKit has intellectual benefits but no measurable, tangible ones.

The previous statement probably shouldn’t be taken as absolute. Yes, there are other ways to do persistence than Core Data. Nonetheless, it’s common to find iOS apps with managed object contexts passed to every view controller: imagine converting that ball of mud to use BNRPersistence.
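
One way to start untangling it, sketched here with invented names, is to hide persistence behind an app-defined protocol so that the controllers never see a managed object context:

#import <Foundation/Foundation.h>
#import <CoreData/CoreData.h>

// Sketch with invented names: view controllers depend on this protocol, not
// on Core Data, so converting to BNRPersistence means writing one new
// conformer rather than touching every controller.
@class Ticket;

@protocol TicketStore <NSObject>
- (NSArray *)openTickets;
- (void)saveTicket:(Ticket *)ticket;
@end

// The managed object context stays inside one adapter.
@interface CoreDataTicketStore : NSObject <TicketStore>
- (instancetype)initWithManagedObjectContext:(NSManagedObjectContext *)context;
@end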

What about portability? There are two options for writing cross-platform Objective-C software: use a framework that’s API-compatible with UIKit or AppKit, or don’t. While it’s possible to build an application that has an Objective-C core and uses, say, Qt or Win32 to provide the UI, in practice I’ve never seen that done.

There also aren’t any “alternative” Objective-C application frameworks on the platforms ObjC programmers do support. Whereas you might want to use the same Java code in an SWT app, a Swing app, and a Spring MVC server, you’re not going to want to port your AppKit app to SomeoneElsesKit. Nor are you likely to move away from UIKit to $something_else in the near future in the way it might be appropriate to move from Struts to Spring, or from Eclipse RCP to NetBeans Platform.

Of course, this is an argument with some circular features. Because it’s hard to move the code I write away from UIKit, if something else does come along the likelihood is I’d be disinclined to take advantage of it. I should probably change that: vendor lock-in in a rapidly changing industry like software is no laughing matter. Think of the poor people who are stuck with their AWT applications, or who decided to build TNT applications on NeWS because it was so much more powerful than X11.


This is the last sentence, and it solves nothing; I did tell you that would happen.

A two-dimensional dictionary


A thing I made has just been open-sourced by my employers at Agant: the AGTTwoDimensionalDictionary works a bit like a normal dictionary, except that the keys are CGPoints, meaning we can find all the objects within a given rectangle.
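
Here’s roughly what using it looks like. A caveat: the selector names below are my assumptions for illustration, not the published API, so check the project’s header before copying anything.

// Usage sketch only: these selector names are guesses at the API; consult
// the real AGTTwoDimensionalDictionary header.
id tavern = @"The Mended Drum";
id guildhall = @"Assassins' Guild";

AGTTwoDimensionalDictionary *map = [[AGTTwoDimensionalDictionary alloc] init];
[map setObject:tavern forPoint:CGPointMake(310.0, 420.0)];
[map setObject:guildhall forPoint:CGPointMake(512.0, 96.0)];

// The win over NSDictionary: fetch everything inside a rectangle in one go.
NSArray *onScreen = [map objectsInRegion:CGRectMake(280.0, 380.0, 300.0, 300.0)];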


A lot of the time spent developing Discworld: The Ankh-Morpork Map went on performance optimisation: there’s a lot of stuff to get moving around a whole city. As described by Dave Addey, the buildings on the map were traced and rendered into separate images so that we could let characters go behind them. This means that there are a few thousand of those little images, and whenever you’re panning the map around, the app has to decide which images are visible, put them in the correct place (in three dimensions; remember people can be in front of or behind the buildings) and draw everything.

A first pass involved creating a set containing all of the objects, then looping over them to find out which were within the screen region. This was too slow. Implementing this 2-d index instead made the query take about 20% of the original time for only a few tens of kilobytes more memory, so that’s where we are now. It’s also why the data type doesn’t currently do any rebalancing of its tree: it had already become fast enough for the app it was built for. This is a key part of performance work: know which battles are worth fighting. About one month of full-time development went into optimising this app, and it would’ve been more if we hadn’t been measuring where the most benefit could be gained. By the time we started releasing betas, every code change was measured in Instruments before being accepted.

Anyway, we’ve open-sourced it so it can be fast enough for your app, too.


There’s a data structure called the multidimensional binary tree, or k-d tree, and this dictionary is backed by it. I couldn’t find an implementation of that structure I could use in an iOS app, so I cracked open the Objective-C++ and built this one.
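
For the curious, the general shape of the technique looks like this sketch (a 2-d range search in the k-d style, not Agant’s actual implementation):

#import <UIKit/UIKit.h>

// Sketch of a 2-d tree range search: even depths split on x, odd depths on
// y, so whole subtrees can be skipped when they fall outside the rectangle.
typedef struct Node2D {
    CGPoint point;
    const void *object;           // the stored value, bridged for ARC
    struct Node2D *left, *right;  // left holds smaller keys on this axis
} Node2D;

static void SearchRegion(const Node2D *node, CGRect region, int depth,
                         NSMutableArray *results)
{
    if (node == NULL) return;
    BOOL splitOnX = (depth % 2 == 0);
    CGFloat key = splitOnX ? node->point.x : node->point.y;
    CGFloat lo  = splitOnX ? CGRectGetMinX(region) : CGRectGetMinY(region);
    CGFloat hi  = splitOnX ? CGRectGetMaxX(region) : CGRectGetMaxY(region);
    if (CGRectContainsPoint(region, node->point))
        [results addObject:(__bridge id)node->object];
    if (lo <= key) SearchRegion(node->left, region, depth + 1, results);
    if (key <= hi) SearchRegion(node->right, region, depth + 1, results);
}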

Objective-C++? Yes. There are two reasons for using C++ in this context: one is that the structure actually does get accessed often enough in the Discworld app that dynamic dispatch all the way down adds a significant time penalty. The other is that the structure contains enough objects that having a spare isa pointer per node adds a significant memory penalty.

But then there’s also a good reason for using Objective-C: it’s an Objective-C app. My controller objects shouldn’t have to be written in a different language just to use some data structure. Therefore I reach for the only application of ObjC++ that should even be permitted to compile: an implementation in one language that exposes an interface in the other. Even the unit tests are written in straight Objective-C, because that’s how the class is supposed to be used.
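
The arrangement, sketched with invented names, looks like this: a header in pure Objective-C, and the C++ confined behind an opaque pointer in the .mm file.

// TwoDee.h: pure Objective-C, so callers and tests never see any C++.
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

@interface TwoDee : NSObject
- (void)setObject:(id)object forPoint:(CGPoint)point;
- (NSArray *)objectsInRect:(CGRect)rect;
@end

// TwoDee.mm: compiled as Objective-C++. The C++ tree hides behind an opaque
// pointer, so nothing C++-ish leaks into the header.
@implementation TwoDee {
    void *_tree; // really a C++ tree type, but the header needn't know that
}
// ...methods bridge between ObjC arguments and the C++ structure...
@end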

“You could simply do X” costs more

Someone always says it. “Could you just add this?” or “I don’t think it would be too hard to…” or if somebody else “changes these two simple things”, someone might create a completely bug-compatible, scale-compatible implementation of this other, undocumented service…wait, what?

Many of us are naturally optimistic people. We believe that the problems that befall others, or that we’ve experienced before, will not happen this time. That despite the last project suffering from “the code getting messier and messier”, we’ll do it right this time.

Optimism’s great. It tricks us into trying to solve difficult problems. It convinces us that the solution is “just around the corner”, so we should persevere. The problems start to arise when we realise that everyone else is optimistic, too—and that optimism is contagious. If you’re asked to give a drive-by estimate on how hard something is, or how long it takes, you’ll give an answer that probably doesn’t take into account all the problems that might arise. But now two of you believe in this optimistic estimate: after all, you’re a smart person, you’re trusted to give good estimates.

We need to be careful when talking to people who aren’t developers to make it clear that there’s no such thing as “simply” in most software systems. That “simply” adding a field brings with it all sorts of baggage: placing the field in an aesthetically pleasing fashion across multiple localised user interfaces, localising the field, building the user experience of interacting with the field and so on. That using the value from the field could turn it from a complicated problem into a complex problem, particularly if the field is just selecting between multiple implementations of what may even be multiple interfaces. That just adding this field brings not only work, but additional risk. That these are just the problems we could think of up front; there are often more that get uncovered as we begin to shave the yak.

But clearly we also need to bear in mind the problems we’ve faced and continue to face when talking to each other. We should remember that the last thing we tried to simply do ended up chasing a rabbit down a hole. If I don’t think that I can “simply” do something without unexpected complexity and risk, I should not expect that others can “simply” do it either.

The Liskov Citation Principle

In her keynote speech at QCon London 2013 on The Power of Abstraction, Barbara Liskov referred to several papers contemporary with her work on abstract data types. I’ve collected these references and found links to free copies of the articles where available.

Dijkstra 1968 Go To statement considered harmful

Wirth 1971 Program development by stepwise refinement

Parnas 1971 Information distribution aspects of design methodology

Liskov 1972 A design methodology for reliable software systems

Schuman and Jorrand 1970 Definition mechanisms in extensible programming languages
Not apparently available online for free

Balzer 1967 Dataless Programming

Dahl and Hoare 1972 Hierarchical program structures
Not apparently available online for free

Morris 1973 Protection in programming languages

Liskov and Zilles 1974 Programming with abstract data types

Liskov 1987 Data abstraction and hierarchy

When all you have is a NailFactory…

…every problem looks like it can be solved by configuring a different nail.

We have an obsession with tools in the software industry. We’ve built tools for building software, tools for testing software, tools for recording how the software is broken, tools for recording when we fixed software. Tools for migrating data from the no-longer-cool tools into the cool tools. Tools for measuring how much other tools have been used.

Let’s call this Tool-Driven Development, and let’s give Tool-Driven Development the following manifesto (a real manifesto that outlines intended behaviour, not a green paper):

Given an observed lack of consideration toward attribute x, we Tool-Driven Developers commit to supplying a tool that automates the application of attribute x.

So, if your developers aren’t thinking about testing, we’ll make a tool to make the tests they don’t write run quicker! If your developers aren’t doing performance analysis, we’ve got all sorts of tools for getting the very reports they don’t know that they need!

This fascination with creating tools is a natural consequence of assuming that everyone[*] is like me. I’ve found this problem that I need to solve, surely everyone needs to solve this problem so I’ll write a tool. Then I can tell people how to use this tool and the problem will be solved for everyone!

[*]Obviously not everyone, just everyone who gets it. Those clueless [dinosaurs clinging to the old tools|hipsters jumping on the new bandwagons] don’t get it, and I’m not talking about them.

No. Well, not yet. We’ve skipped two important steps out of a three-step enlightenment scheme:

  1. Awareness. Tell me what the unknown that I don’t know is.
  2. Education. Tell me why this thing that I now know about is a big deal, what I’m missing out on, what the drawbacks are, and why solving it would be beneficial.
  3. Training. Now that I know this thing exists, and that I should do something about it, and what that something is, now is the time to show me the tools and how I can use them to solve my new problem.

One of the talks at QCon London was by Damian Conway, on dead languages. It covered these three steps almost in reverse, to make the point that the tools we use constrain our mental models of the problems we’re trying to solve. Training: here’s a language, this is how it works, this is a code problem solved in that language. Education: the language has these features, which let us write our code in this way with these limitations. Awareness: there are ways to write code, and hence to solve problems in software, that aren’t the way you’re currently doing it.

A lot of what I’ve worked on has covered awareness without going further. The App Makers’ Privacy Pledge raises awareness that privacy in mobile apps is a problem, without discussing the details of the problem or the mechanics of a solution. APPropriate Behaviour contains arguments expressing that programmers should be aware of the social scope in which their programming activities sit.

While I appreciate and even accept the charge of intellectual foreplay, I think a problem looking for a solution is more useful than a solution looking for a problem. Still, with some of us doing the awareness thing and others doing the training thing, a scheme by which we can empower ourselves and future developers is clear: let’s team up and meet in the middle.

A note on notes

I’ve always had a way to take notes, but have never settled into a particular scheme. This post, more for my benefit than for yours, is an attempt to dig through this history and decide what I want to do about it.

At the high level, the relevant questions are what I want to do with the contents now and how I intend to work with them in the future. Most of the notes I take don’t have a long-term future; my work from my first degree has long since been destroyed. I referred to the notes during the degree, which gives an upper bound on their lifetime of four years; realistically it was more like two years from creation to the exam where I needed them.

Said notes were taken on A4 ruled paper with a cartridge pen and a propelling pencil. Being able to use text (including maths symbols and the like) and diagrams interchangeably is a supremely useful capability. It even helps with code, especially where UI or geometry is involved.

I no longer do this, but my strategy then was to take rapid notes in lectures and classes, and produce fair copies later. This meant absorbing more from the notes as I re-read them and put them into different words, and let me add cross references to textbooks or other materials.

I’ve used pen-and-paper note-taking at other times. Particularly in classrooms or at conferences, it’s much faster than typing. At various phases of my career I’ve also kept log books, either for my own benefit or for other people’s. That’s not something I do currently. The weapons of choice in this sphere are now fountain pen, propelling pencil and Moleskine.

Evernote is my note shoebox of choice, and my destination for typed notes (in fact this draft was built up in Evernote on an iPhone, rather than in a blog editor). I don’t just use Macs and iOS, so an iCloud-based note shoebox wouldn’t work for me.

I sometimes put notes handwritten in books or on whiteboards in there too, but don’t really worry about tagging because I usually search chronologically. My handwriting is so poor that Evernote’s transcription doesn’t work at all, which is probably something that keeps me away from search. When it comes to symbols and the like, I’m more likely to put LaTeX markup in the text than draw equation images or use the extended characters palette.

When I was at O2 I had a dalliance with the Bamboo stylus and Penultimate. I still use those for drawing, but never for writing, as the poor sensitivity makes my narrow handwriting look even worse. I haven’t tried anything with a dedicated stylus sensor, like the Jot stylus or the Galaxy S Pen. Again, these get dumped into Evernote. I don’t tend to change colours or pens; I tried Paper by 53 but don’t use it much in practice.

Mind maps or outlines: sometimes. I only ever do these in software, never on paper.

I think the summary is that handwritten notes are fastest and allow the biggest variation in formatting and content. Sticking the resulting notes in Evernote makes it easy to go back through them, but I should try to recover the discipline of writing up a fair copy. It helps cement the content in my mind and gives me a chance to add external references and citations that I would otherwise miss out on.

The trick with paper-based notes is to always have a notebook and pen to hand; I don’t often carry things around with me, so I’d either have to get into the habit of wearing a manbag or leave notebooks around wherever I’m likely to want to write something.

How to version a Mach-O library

Yes, it’s the next instalment of “cross-platform programming for people who don’t use Macs very much”. You want to give your dynamic library a version number, probably of the format major.minor.patchlevel. Regardless of marketing concerns, this helps with dependency management if you choose a version convention such that binary-compatible revisions of the libraries can be easily discovered. What could possibly go wrong?

The linker will treat your version number in the following way (from the APSL-licensed ld64/ld/Options.cpp) if you’re building a 32-bit library:

// Parses number of form X[.Y[.Z]] into a uint32_t where the nibbles are xxxx.yy.zz
uint32_t Options::parseVersionNumber32(const char* versionString)
{
	uint32_t x = 0;
	uint32_t y = 0;
	uint32_t z = 0;
	char* end;
	x = strtoul(versionString, &end, 10);
	if ( *end == '.' ) {
		y = strtoul(&end[1], &end, 10);
		if ( *end == '.' ) {
			z = strtoul(&end[1], &end, 10);
		}
	}
	if ( (*end != '\0') || (x > 0xffff) || (y > 0xff) || (z > 0xff) )
		throwf("malformed 32-bit x.y.z version number: %s", versionString);

	return (x << 16) | ( y << 8 ) | z;
}

and like this if you’re building a 64-bit library (I’ve corrected an obvious typo in the comment here):

// Parses number of form A[.B[.C[.D[.E]]]] into a uint64_t where the bits are a24.b10.c10.d10.e10
uint64_t Options::parseVersionNumber64(const char* versionString)
{
	uint64_t a = 0;
	uint64_t b = 0;
	uint64_t c = 0;
	uint64_t d = 0;
	uint64_t e = 0;
	char* end;
	a = strtoul(versionString, &end, 10);
	if ( *end == '.' ) {
		b = strtoul(&end[1], &end, 10);
		if ( *end == '.' ) {
			c = strtoul(&end[1], &end, 10);
			if ( *end == '.' ) {
				d = strtoul(&end[1], &end, 10);
				if ( *end == '.' ) {
					e = strtoul(&end[1], &end, 10);
				}
			}
		}
	}
	if ( (*end != '\0') || (a > 0xFFFFFF) || (b > 0x3FF) || (c > 0x3FF) || (d > 0x3FF)  || (e > 0x3FF) )
		throwf("malformed 64-bit a.b.c.d.e version number: %s", versionString);

	return (a << 40) | ( b << 30 ) | ( c << 20 ) | ( d << 10 ) | e;
}

The specific choice of bit widths in both variants is weird (why would you have more major versions than patchlevel versions?) and the move from 32-bit to 64-bit makes no sense to me at all. Nonetheless, there’s a general rule:

Don’t use your SCM revision number in your version numbering scheme.

The rule of thumb is that the major version must stay below 65536, each minor component below 256, and you get at most two minor components: those limits fit both the 32-bit and 64-bit formats. Supplying a version number that doesn’t fit in the bitfields defined here is a linker error, and you will not go into (address) space today.
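
As a sanity check, here’s the 32-bit packing from the listing above wrapped in a helper; a sketch, but the bit layout matches the ld64 source:

#include <assert.h>
#include <stdint.h>

// Packs major.minor.patch exactly as ld64's parseVersionNumber32 does:
// 16 bits of major, 8 of minor, 8 of patch.
static uint32_t pack_version32(uint32_t major, uint32_t minor, uint32_t patch)
{
    assert(major <= 0xffff && minor <= 0xff && patch <= 0xff);
    return (major << 16) | (minor << 8) | patch;
}

// pack_version32(1, 2, 3) == 0x00010203. Feed an SVN revision like 65536 in
// as the major version and the assert fails, just as the linker would.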