Rubbies and sores

I imagine many of you are familiar with the difference between Ruby (a beautiful language representing the best pragmatic balance between Smalltalk’s elegance and C’s ubiquity) and Rubby (a horrendous mishmash of abominations in the style of all scripting languages, glommed together by finding nearly-compatible corner cases).

I also make the same distinction between Open Source (an attempt to get the same exploitation of labour as Free Software but without the principles) and Open Sores (the disturbingly wobbly house of cards that arises when collections of developers, none of whom feels empowered to make big changes, individually attach small pieces of mud to the collectively-constructed big ball).

Neither is fair, but both are useful shorthands.

Inside-Out Apps

This article is based on a talk I gave at mdevcon 2014. The talk also included a specific example to demonstrate the approach, but was otherwise a presentation of the following argument.

You probably read this blog because you write apps. Which is kind of cool, because I have been known to use apps. I’d be interested to hear what yours is about.

Not so fast! Let me make this clear first: I only buy page-based apps, they must use Core Data, and I automatically give one star to any app that uses storyboards.

OK, that didn’t sound like it made any sense. Nobody actually chooses which apps to buy based on what technologies or frameworks are used, they choose which apps to buy based on what problems those apps solve. On the experience they derive from using the software.

When we build our apps, the problem we’re solving and the experience we’re providing need to be uppermost in our thoughts. You’re probably already used to doing this in designing an application: Apple’s Human Interface Guidelines describe the creation of an App Definition Statement to guide thinking about what goes into an app and how people will use it:

An app definition statement is a concise, concrete declaration of an app’s main purpose and its intended audience.

Create an app definition statement early in your development effort to help you turn an idea and a list of features into a coherent product that people want to own. Throughout development, use the definition statement to decide if potential features and behaviors make sense.

My suggestion is that you should use this idea to guide your app’s architecture and your class design too. Start from the problem, then work through solving that problem to building your application. I have two reasons: the first will help you, and the second will help me to help you.

The first reason is to promote a decoupling between the problem you’re trying to solve, the design you present for interacting with that solution, and the technologies you choose to implement the solution and its design. Your problem is not “I need a Master-Detail application”, so your solution need not be a Master-Detail application, and it may not make sense to present it that way. Or if it does now, it might not next week.

You see, designers are fickle beasts, and for all their feel-good bloviation about psychology and user experience, most are actually just operating on a combination of trend and whimsy. Last week’s refresh button is this week’s pull gesture is next week’s interaction-free event. Yesterday’s tab bar is today’s hamburger menu is tomorrow’s swipe-in drawer. Last decade’s mouse is this decade’s finger is next decade’s eye motion. Unless your problem is Corinthian leather, that’ll be gone soon. Whatever you’re doing for iOS 7 will change for iOS 8.

So it’s best to decouple your solution from your design, and the easiest way to do that is to solve the problem first and then design a presentation for it. Think about it. If you try to design a train timetable, then you’ll end up with a timetable that happens to contain train details. If you try to solve the problem “how do I know at what time to be on which platform to catch the train to London?”, then you might end up with a timetable, but you might not. And however the design of the solution changes, the problem itself will not: just as the problem of knowing where to catch a train has not changed in over a century.

The same problem that affects design-driven development also affects technology-driven development. This month you want to use Core Data. Then next month, you wish you hadn’t. The following month, you kind of want to again, then later you realise you needed a document database after all and go down that route. If you solve the problem without depending on particular libraries, then changing libraries is no big deal, and neither is changing how you deal with those libraries.

It’s starting with the technology that leads to Massive View Controller. If you start by knowing that you need to glue some data to some view via a View Controller, then that’s what you end up with.

This problem is exacerbated, I believe, by a religious adherence to Model-View-Controller. My job here is not to destroy MVC, I am neither an iconoclast nor a sacrificer of sacred cattle. But when you get overly attached to MVC, then you look at every class you create and ask the question “is this a model, a view, or a controller?”. Because this question makes no sense, the answer doesn’t either: anything that isn’t evidently data or evidently graphics gets put into the amorphous “controller” collection, which eventually sucks your entire codebase into its innards like a black hole collapsing under its own weight.

Let’s stick with this “everything is MVC” difficulty for another paragraph, and possibly a bulleted list thereafter. Here are some helpful answers to the “which layer does this go in” questions:

  • does my Core Data stack belong in the model, the view, or the controller? No. Core Data is a persistence service, which your app can call on to save or retrieve data. Often the data will come from the model, but saving and retrieving that data is not itself part of your model.
  • does my networking code belong in the model, the view, or the controller? No. Networking is a communications service, which your app can call on to send or retrieve data. Often the data will come from the model, but sending and retrieving that data is not itself part of your model.
  • is Core Graphics part of the model, the view, or the controller? No. Core Graphics is a display primitive that helps objects represent themselves on the display. Often those objects will be views, but the means by which they represent themselves are part of an external service.

So building an app in a solution-first approach can help focus on what the app does, removing any unfortunate coupling between that and what the app looks like or what the app uses. That’s the bit that helps you. Now, about the other reason for doing this, the reason that makes it easier for me to help you.

When I come to look at your code, and this happens fairly often, I need to work out quickly what it does and what it should do, so that I can work out why there’s a difference between those two things and what I need to do about it. If your app is organised in such a way that I can see how each class contributes to the problem being solved, then I can readily tell where I go for everything I need. If, on the other hand, your project looks like this:

MVC organisation

Then the only thing I can tell is that your app is entirely interchangeable with every other app that claims to be nothing more than MVC. This includes every Rails app, ever. Here’s the thing. I know what MVC is, and how it works. I know what UIKit is, and why Apple thinks everything is a view controller. I get those things, your app doesn’t need to tell me those things again. It needs to reflect not the similarities, which I’ve seen every time I’ve launched Project Builder since around 2000, but the differences, which are the things that make your app special.

OK, so that’s the theory. We should start from the problem, and move to the solution, then adapt the solution onto the presentation and other technology we need to use to get a product we can sell this week. When the technologies and the presentations change, we can adapt onto those new things, to get the product we can sell next week, without having to worry about whether we broke solving the problem. But what’s the practice? How do we do that?

Start with the model.

Remember that, in Apple’s words:

model objects represent knowledge and expertise related to a specific problem domain

so solving the problem first means modelling the problem first. Now you can do this without regard to any particular libraries or technology, although it helps to pick a programming language so that you can actually write something down. In fact, you can start here:

Command line tool

A Foundation command-line tool has everything you need to solve your problem (in fact it contains a few more things than that, to make up for erstwhile deficiencies in the link editor, but we’ll ignore those things). It lets you make objects, and it lets you use all those boring things that were solved by computer scientists back in the 1760s like strings, collections and memory allocation.

So with a combination of the object-oriented subset of Objective-C, the bits of Foundation that really are foundational, and unit tests to drive the design of the solution, we can solve whatever problem it is that the app needs to solve. There’s just one difficulty, and that is that the problem is only solved for people who know how to send messages to objects. Now we can worry about those fast-moving targets of presentation and technology choice, knowing that the core of our app is a stable, well-tested collection of objects that solve our customers’ problem. We expose aspects of the solution by adapting them onto our choice of user interface, and similarly any other technology dependencies we need to introduce are held at arm’s length. We test that we integrate with them correctly, but not that using them ends up solving the problem.

If something must go, then we drop it, without worrying whether we’ve broken our ability to solve the problem. The libraries and frameworks are just services that we can switch between as we see fit. They help us solve our problem, which is to help everyone else to solve their problem.

And yes, when you come to build the user interface, then model-view-controller will be important. But only in adapting the solution onto the user interface, not as a strategy for stuffing an infinite number of coats onto three coat hooks.

References

None of the above is new, it’s just how Object-Oriented Programming is supposed to work. In the first part of my MVC series, I investigated Thing-Model-View-Editor and the progress from the “Thing” (a problem in the real world that must be solved) to a “Model” (a representation of that problem and its solution in the computer). That article relied on sources from Trygve Reenskaug, who described (in 1979) the process by which he moved from a Thing to a Model and then to a user interface.

In his 1992 book Object-Oriented Software Engineering: A Use-Case Driven Approach, Ivar Jacobson describes a formalised version of the same progression, based on documented use cases. Some teams replaced use cases with user stories, which look a lot like Reenskaug’s user goals:

An example: To get better control over my finances, I would need to set up a budget; to keep account of all income and expenditure; and to keep a running comparison between budget and accounts.

Alistair Cockburn described the Ports and Adapters Architecture (earlier known as the hexagonal architecture) in which the application’s use cases (i.e. the ways in which it solves problems) are at the core, and everything else is kept at a distance through adapters which can easily be replaced.

When single responsibility isn’t possible

This post was motivated by Rob Rix’s bug report on NSObject, “Split NSObject protocol into logical sub-protocols”. He notes that NSObject has multiple responsibilities[*]: hashing, equality checking, sending messages, introspecting and so on.

What that bug report didn’t look at was the rest of NSObject’s functionality that isn’t in the NSObject protocol. The class itself defines method signature lookups, message forwarding and archiving features. Yet more features are added via categories: scripting support (Mac only), Key-Value Coding and Key-Value Observing are all added in this way.

I wondered whether this many responsibilities in the root class were common, and decided to look at other object libraries. Pretty much all Objective-C object libraries work this way: the Object classes from ObjPak, NeXTSTEP and ICPak101 (no link, sadly) all have similarly rambling collections of functionality.

[*] By extension, all subclasses of NSObject and NSProxy (which _also_ conforms to the NSObject protocol) do, too.

Another environment I’ve worked a lot in is Java. The interface for java.lang.Object is mercifully brief: it borrows NSObject’s ridiculous implementation of a copy method that doesn’t work by default. It actually has most of the same responsibilities, though notably neither introspection nor message-sending: the run-time type checking in Java is separated into the java.lang.reflect package. Interestingly it also adds a notification-based system for concurrency to the root class’s feature set.

C#’s System.Object is similar to Java’s, though without the concurrency thing. Unlike the Java/Foundation root classes, its copy operation (MemberwiseClone()) actually works, creating a shallow copy of the target object.

Things get a bit different when looking at Ruby’s system. The Object class exposes all sorts of functionality: in addition to introspection, it offers the kind of modifications to classes that ObjC programmers would do with runtime functions. It offers methods for “freezing” objects (marking them read-only), “tainting” them (marking them as containing potentially-dangerous data), “untrusting” them (which stops them working on objects that are trusted) and then all the things you might find on NSObject. But there’s a wrinkle. Object isn’t really a root class: it’s just the conventional root for Ruby classes. It is itself a subclass of BasicObject, which is about the simplest root class of any of the systems looked at so far. It can do equality comparison and message forwarding (which Objective-C supports via the runtime, and NSObject has API for), and it can run blocks of code within the context of the receiving object.
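
A hedged sketch of that contrast, on a Ruby of roughly this vintage (tainting has since been removed from the language); the counts are indicative rather than exact:

puts Object.instance_methods.count        # dozens of methods
puts BasicObject.instance_methods.count   # a handful: ==, !, __send__, equal? and friends

greeting = "hello"
greeting.freeze                           # mark the string read-only
begin
  greeting << " world"
rescue RuntimeError => e
  puts e.class                            # RuntimeError (FrozenError on newer Rubies)
end

puts greeting.tainted? if greeting.respond_to?(:tainted?)   # => false until something taints it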

C++ provides the least behaviour of all: its classes share no common root, and the only things generated for you are simple members such as default constructors, which the compiler supplies when they are referenced but not defined.

It’s useful to realise that even supposedly simple rules like “single responsibility principle” are situated in the context of the software system. Programmers will expect an object with a “single” responsibility to additionally adopt all the responsibilities of the base class, which in something like Foundation can be numerous.

As the Kaiser Chiefs might say: Ruby ruby ruby n00bie

Imagine someone took the training wheels off of Objective-C. That’s how I currently feel.

Bike with Training Wheels: image credit Break

I’ve actually had a long—erm, not quite “love-hate”, more “‘sup?-meh”—relationship with Ruby. I’ve long wanted to tinker but never really had a project where I could make it fit; I did learn a little about Rails a couple of years back but didn’t then get to put it into practice. Recently I’ve been able to do some Real Work™ with Ruby, and wanted to share the experience.

Bear in mind that when I say I’ve been working with Ruby, I mean that I’ve been writing Objective-C in Ruby. This becomes clear when we see one of the problems I’ve been facing: I couldn’t work out how to indicate that a variable exposes some interface, until I realised I didn’t need to. Ruby takes the idea of duck typing much further than Objective-C does: using Ruby is much more like Smalltalk in that you don’t care what an object is, you care what it does. Currently no tools really support that way of working (and so Stockholm Syndrome-wielding developers will tell you that you don’t need such tools; just vi and a set of tests); the first warning I get when I’ve made a mistake is usually an exception backtrace. Something I had to learn quite quickly is that Ruby and Objective-C have different ideas of nil: Ruby behaves as the gods intend and lets you put nil into collections; but Objective-C behaves as the gods intend and lets you treat nil as a null object.
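
To make that concrete, here’s a minimal sketch; the Track and Playlist classes and their #play protocol are invented for illustration:

class Track
  def initialize(title)
    @title = title
  end

  def play
    puts "Playing #{@title}"
  end
end

class Playlist
  def initialize(tracks)
    @tracks = tracks
  end

  def play_all
    @tracks.each(&:play)    # duck typing: anything that responds to #play will do
  end
end

# nil is a perfectly good object and goes happily into collections...
tracks = [Track.new("Ruby"), nil, Track.new("Oh My God")]
puts tracks.length          # => 3

# ...but unlike Objective-C's nil it is not a silent null object: the first
# you hear of the mistake is an exception backtrace at run time.
begin
  Playlist.new(tracks).play_all
rescue NoMethodError => e
  puts e.message            # undefined method `play' for nil
end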

The problems I’ve been facing have largely involved learning how things are conventionally done. One example is that a library I was using took a particular parameter and treated it as a constant. Apparently Matz is a big fan of Fortran, but only early hipster Fortran before they sold out and added implicit none (around the time they fired their bass player and started playing the bigger venues). So Ruby provides its own implicit convention: constants have to be named starting with an uppercase letter. Otherwise you get told this:

wrong constant name parameter-value

Erm, that’s it. Not “you should try calling it ParameterValue“, or “constants must start with a capital letter”. Not even “this is not a good name for a constant”; who else interpreted that as “you gave the name of the wrong constant”? I think I’ve been spoiled by the improvements to the clang diagnostics over the last couple of years, but I found some of Ruby’s messages confusing and unhelpful. This is often the case with software that relies on convention: once you know the conventions you can go really fast, but when you don’t know them you feel like you’re being ignored or that it’s being obtuse.[*]
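
For what it’s worth, the message comes straight from the interpreter’s validation of constant names and is easy to reproduce; this is my guess at the sort of thing the library was doing internally, not its actual code:

begin
  Object.const_set("parameter-value", 42)   # not a valid constant name
rescue NameError => e
  puts e.message                            # wrong constant name parameter-value
end

Object.const_set("ParameterValue", 42)      # fine: starts with a capital letter
puts ParameterValue                         # => 42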

[*] When I asked for help on this issue I was told “I suggest you pick of[sic] a good Ruby book or watch some Ruby tutorials on YouTube”; you’ll be pleased to know that the interpreter wasn’t the only ignorant or obtuse tool I had to deal with.

These are very neophyte problems though, and once I got past them I found that I was able to make good progress with the language. I was using LightTable and RubyMine for editing, and found that I could work really quickly with a combination of those editors and irb. Having an interactive environment or a REPL is amazing for trying out little things, particularly when you’re new at a language and don’t know what’s going to work. It’s a bit cumbersome for more involved tests, but the general execute-test cycle is much faster than with Objective-C.
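
The sort of throwaway question I mean, answered in irb faster than a compile-link round trip (the snippets themselves are arbitrary):

"2014-03-08".split("-").map(&:to_i)   # => [2014, 3, 8]
(1..10).reduce(:+)                    # => 55
[].respond_to?(:each)                 # => true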

Speaking of tests, I know that if you ask four Ruby developers how to write unit tests you’ll get six different answers and at least eighteen of them will have moved on to Node.JS. I’ve been using MiniTest, as it’s part of the standard library and so involved the least configuration to get going.
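
A minimal sketch of what that looks like, assuming a minitest recent enough to spell its test case class Minitest::Test (earlier versions call it MiniTest::Unit::TestCase), with a made-up Greeter class standing in for the real model:

require 'minitest/autorun'

class Greeter
  def greet(name)
    "Hello, #{name}!"
  end
end

class GreeterTest < Minitest::Test
  def test_greets_by_name
    assert_equal "Hello, Ruby!", Greeter.new.greet("Ruby")
  end
end

Run the file with ruby and the autorun hook takes care of discovering and reporting the tests.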

I also took the opportunity to install MacRuby and have a go at building a Mac app, using Cocoa Bindings on the UI side to work with controllers and models that I’d written in Ruby. This isn’t the first exposure I’ve had to a bridged environment: I’ve done a lot of Perl-Cocoa with CamelBones, the PerlObjCBridge and ObjectiveFramework. MacRuby isn’t like those bridges though, in that (as I understand it) MacRuby builds Ruby’s object model on top of NSObject and the Objective-C runtime so Ruby objects actually are ObjC objects. It means there’s less manual gluing: e.g. in Perl you might do:

my $string = NSString->alloc->initWithCString_encoding_("Hello", NSUTF8StringEncoding);

In MacRuby that becomes:

string = "Hello"

That’s not to say there’s no boilerplate. I found that by-reference return values (the NSError ** style of out parameter) need the creation of a Pointer object on the Ruby side to house the object reference, which looks like this:

error = Pointer.new(:object)
saveResult = string.writeToFile path, atomically: false, encoding: NSUTF8StringEncoding, error: error
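
Indexing the Pointer then dereferences it, so checking for failure looks something like this (my sketch, based on my understanding of MacRuby’s Pointer rather than anything the original code did):

unless saveResult
  NSLog("couldn't write the file: %@", error[0].localizedDescription)   # error[0] is the NSError the method handed back
end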

For a long time, I’ve thought that there would be mileage in suggesting programmers use a different language than Objective-C for building applications in Cocoa, relying on ObjC as the systems language. Ruby could be that thing. The object models are very similar, so there isn’t a great deal of mind-twisting going on in exposing Objective-C classes and objects in Ruby. There’s a lot less “stuff you do that shuts the compiler up”, and though ObjC has seen a reduction in that itself of late, it still relies on C and all of its idiosyncrasies. Whether it’s actually better for some developers, and if so for whom, would need study.

Summarising, Ruby feels a lot like Objective-C without the stabilisers. You can work with objects and methods in a very similar way. The fast turnaround afforded by having an interactive shell and no compile-link waiting means you can go very quickly. The fact that you don’t get the same up-front analysis and reporting of problems means you can easily drive into a wall at full tilt. But at least you did so while you were having fun.