Choosing the correct openings and closures

Plenty of programmers will have heard of the Open-Closed Principle of object-oriented design. It is, after all, one of the five SOLID principles. You may not, however, have seen the principle as originally stated. You’ve probably heard this formulation by Robert C. Martin, or a variation on the theme:

Modules that conform to the open-closed principle have two primary attributes.

  1. They are “Open For Extension”.
    This means that the behavior of the module can be extended. That we can make the module behave in new and different ways as the requirements of the application change, or to meet the needs of new applications.

  2. They are “Closed for Modification”.
    The source code of such a module is inviolate. No one is allowed to make source code changes to it.

Source: “The Open-Closed Principle”, the Engineering Notebook, Robert C. Martin, 1996

OK, so how can we add stuff to a module or make it behave “in different ways” if we’re not allowed to make source code changes? Martin’s solution involves abstract classes (he was writing for a C++ journal; read “interfaces” or “protocols” as appropriate to your circumstances). Your code makes use of the abstract idea of, say, a view. If views need to work in a different way for some reason (say someone needs to throw away a third-party licensed display server and replace it with something written in-house), then you don’t edit the view you’ve already provided; you replace that class with your new one.
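Martin’s abstract-class arrangement can be sketched in a few lines. This is an illustrative reconstruction, not code from his paper: the names `View`, `LicensedView`, `InHouseView` and `render_screen` are all invented for the example.

```python
# A sketch of the abstract-class form of the Open-Closed Principle.
# Clients depend only on the abstraction; implementations are swapped,
# never edited. All names here are invented for illustration.
from abc import ABC, abstractmethod

class View(ABC):
    """The abstraction clients depend on: closed to modification."""
    @abstractmethod
    def draw(self) -> str: ...

class LicensedView(View):
    """The original, third-party-backed implementation."""
    def draw(self) -> str:
        return "drawn by licensed display server"

class InHouseView(View):
    """A replacement written in-house: the system is extended, not edited."""
    def draw(self) -> str:
        return "drawn by in-house renderer"

def render_screen(view: View) -> str:
    # Client code never changes; it only knows about the abstract View.
    return view.draw()
```

Swapping the display server means constructing `render_screen(InHouseView())` instead of `render_screen(LicensedView())`; no line of the client is touched.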

The Open-Closed Principle was originally stated by Bertrand Meyer in the first edition of his book, Object-Oriented Software Construction. Here’s what he had to say:

A satisfactory modular decomposition technique must satisfy one more requirement: it should yield modules that are both open and closed.

  • A module will be said to be open if it is still available for extension. For example, it should be possible to add fields to the data structures it contains, or new elements to the set of functions it performs.
  • A module will be said to be closed if [it] is available for use by other modules. This assumes that the module has been given a well-defined, stable description (the interface in the sense of information hiding). In the case of a programming language module, a closed module is one that may be compiled and stored in a library, for others to use. In the case of a design or specification module, closing a module simply means having it approved by management, adding it to the project’s official repository of accepted software items (often called the project baseline), and publishing its interface for the benefit of other module designers.

Source: Object-Oriented Software Construction, Bertrand Meyer, 1988 (p.23)

Essentially, a “closed” module for Meyer is one that’s baked: it has been released, people are using it, and there will be no more changes to its data structures or functionality. He doesn’t go as far as Martin later did in declaring the source inviolate, but the effect is much the same. Yet if a module has been closed, how can it still be open? “Aha,” we hear Meyer say, “that’s the beauty of inheritance. Inheritance lets you borrow the implementation of a parent type, so you can open a new module that has all the behaviour of the old.” There’s no abstract supertype involved, everything’s concrete, but we still get this idea of letting old clients carry on as they were while new programmers get to use the new shiny.
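Meyer’s inheritance-based version needs no abstract supertype at all; everything is concrete. A minimal sketch, with invented names (`Account` standing in for the closed, released module and `InterestAccount` for the new, open one):

```python
# Meyer-style open-closed via plain inheritance: the closed module ships
# unchanged, and a new module borrows its implementation wholesale.
# Both class names are invented for illustration.
class Account:
    """Closed: released and in the project baseline; clients depend on it."""
    def __init__(self, balance=0.0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class InterestAccount(Account):
    """Open: a new module with all the behaviour of the old, plus more."""
    def add_interest(self, rate):
        self.deposit(self.balance * rate)
```

Old clients keep constructing `Account` exactly as before; new code gets `InterestAccount` without a single edit to the closed class.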

Both of these programmers were suggesting the “closedness” of a module as a workaround to limitations in their compilers: if you add fields to a module’s data structure, you previously needed to recompile clients of that module. Compilers no longer have that restriction: in [and I can’t believe I’m about to say this in 2013] modern languages like Objective-C and Java you can add fields with aplomb and old clients will carry on working. Similarly with methods: while there are limitations in C++ on how you can add member functions to classes without needing a recompile, other languages let you add methods without breaking old clients. Indeed in Ruby you can add new methods and even replace existing ones on the fly, and in Smalltalk-derived languages you can do it via the runtime library.
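Python behaves like the Smalltalk-derived languages here, which makes the point easy to demonstrate: methods can be added to, or replaced on, a class after its clients already hold instances of it. The class and method names below are invented for the example.

```python
# Adding and replacing methods at run time, with no recompile and no
# restart: existing instances see the changes immediately.
class Widget:
    def describe(self):
        return "a widget"

w = Widget()                # a "client" already holding an instance

# Add a brand-new method to the class after "release":
Widget.colour = lambda self: "blue"

# Replace an existing method:
Widget.describe = lambda self: "a shiny widget"

# The pre-existing instance picks up both changes.
assert w.colour() == "blue"
assert w.describe() == "a shiny widget"
```

The same trick in Objective-C goes through the runtime library (`class_addMethod` and friends) rather than plain assignment, but the principle is identical.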

But without the closed part of the open-closed principle, there’s not much point to the open part. It’s no good saying “you should be able to add stuff”; of course you can. That’s what the 103 keys on your keyboard that aren’t backspace or delete are for. This is where we have to remember that the compiler isn’t the only reader of the code: you and other people are.

In this age where we don’t have to close modules to avoid recompiles, we should say that modules should be closed to cognitive overload. Don’t make behavioural changes that break a programmer’s mental model of what your module does. And certainly don’t make people try to keep two or more mental models of what the same class does (yes, NSTableView cell-based and view-based modes, I am looking at you).

There’s already a design principle supposed to address this. The Single Responsibility Principle says not to have a module doing more than one thing. Our new version of the Open-Closed Principle needs to say that a module is open to providing new capabilities for the one thing it does do, but closed to making programmers think about differences in the way that thing is done. If you need to change the implementation in a way that clients of the module have to care about, stop pretending it’s the same module.

NIMBY Objects

Members of comfortable societies such as English towns have expectations of the services they will receive. They want their rubbish disposed of before it builds up too much, for example. They don’t so much care how it’s dealt with, they just want to put the rubbish out there and have it taken away. They want electricity supplied to their houses, it doesn’t so much matter how as long as the electrons flow out of the sockets and into their devices.

Some people do care about the implementation, in that they want to be far enough away from it not to have to pay it any mind. These people are known as NIMBYs, after the phrase Not In My Back Yard. Think what it will do to traffic/children walking to school/the skyline/property prices etc. to have this thing I intend to use near my house!

A NIMBY wants to have their rubbish taken away, but does not want to be able to see the landfill or recycling centre during their daily business. A NIMBY wants to use electricity, but does not want to see a power station or wind turbine on their landscape.

What does this have to do with software? Modules in applications (which could be—and often are—objects) should be NIMBYs. They should want to make use of other services, but not care where the work is done except that it’s nowhere near them. The specific where I’m talking about is the execution context. The user interface needs information from the data model but doesn’t want the information to be retrieved in its context, by which I mean the UI thread. The UI doesn’t want to wait while the information is fetched from the model: that’s the equivalent of residential traffic being slowed down by the passage of the rubbish truck. Drive the trucks somewhere else, but Not In My Back Yard.

There are two ramifications to this principle of software NIMBYism. Firstly, different work should be done in different places. It doesn’t matter whether that’s on other threads in the same process, scheduled on work queues, done in different processes or even on different machines, just don’t do it anywhere near me. This is for all the usual good reasons we’ve been breaking work into multiple processes for forty years, but a particularly relevant one right now is that it’s easier to make fast-ish processors more numerous than it is to make one processor faster. If you have two unrelated pieces of work to do, you can put them on different cores. Or on different computers on the same network. Or on different computers on different networks. Or maybe on the same core.

The second is that this execution context should never appear in API. Module one doesn’t care where module two’s code is executed, and vice versa. That means you should never have to pass a thread, an operation queue, process ID or any other identifier of a work context between modules. If an object needs its code to run in a particular context, that object should arrange it.
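Both points can be sketched in one small class. The names (`ModelFetcher`, `fetch`) are invented for illustration; the shape is what matters: the object arranges its own worker internally, and nothing resembling a thread or queue appears in its API.

```python
# A NIMBY object sketch: the caller hands over a request and never sees
# the execution context. All names are invented for illustration.
import queue
import threading

class ModelFetcher:
    """Does its work Somewhere Else; the context lives entirely in here."""
    def __init__(self):
        self._inbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def fetch(self, key, callback):
        # No thread, queue, or process ID in the signature.
        self._inbox.put((key, callback))

    def _run(self):
        while True:
            key, callback = self._inbox.get()
            result = f"data for {key}"   # stand-in for a slow model query
            callback(result)

# Usage: the UI hands over a request and gets on with its life.
results = []
done = threading.Event()
ModelFetcher().fetch("user/42", lambda r: (results.append(r), done.set()))
done.wait(5)
```

One honest caveat about this sketch: the callback fires on the fetcher’s worker. A fuller NIMBY version would marshal the callback back onto the caller’s own context before invoking it, again without that context ever appearing in the `fetch` signature.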

Why do this? Objects are supposed to be a technique for encapsulation, and we can use that technique to encapsulate execution context in addition to code and data. This has benefits because Threading Is Hard. If a particular task in an application is buggy, and that task is the sole responsibility of a single class, then we know where to look to understand the buggy behaviour. On the other hand, if the task is spread across multiple classes, discovering the problem becomes much more difficult.

NIMBY Objects apply the Single Responsibility Principle to concurrent programming. If you want to understand surprising behaviour in some work, you don’t have to ask “where are all the places that schedule work in this context?”, or “what other places in this code have been given a reference to this context?” You look at the one class that puts work on that context.

The encapsulation offered by OOP also makes for simple substitution of a class’s innards, if nothing outside the class cares about how it works. This has benefits because Threading Is Hard. There have been numerous different approaches to multiprocessing over the years, and different libraries to support the existing ones: whatever you’re doing now will be replaced by something else soon.

NIMBY Objects apply the Open-Closed Principle to concurrent programming. You can easily replace your thread with a queue, your IPC with RPC, or your queue with a Disruptor if only one thing is scheduling the work. Replace that one thing. If you pass your multiprocessing innards around your application, then you have multiple things to fix or replace.
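Because the context never left the object, the substitution really is a one-class change. A sketch, with an invented `Thumbnailer` class: yesterday its innards might have been a hand-rolled thread and queue like the earlier example; today they are a pool, and no caller can tell.

```python
# Swapping concurrency innards inside a NIMBY object: only this class
# touches its own context, so only this class changes. Names invented.
import threading
from concurrent.futures import ThreadPoolExecutor

class Thumbnailer:
    def __init__(self):
        # Previously a private thread and queue; now a pool. Clients
        # cannot tell, because the context never appeared in the API.
        self._pool = ThreadPoolExecutor(max_workers=2)

    def thumbnail(self, image_name, callback):
        self._pool.submit(lambda: callback(f"thumb({image_name})"))

# Usage looks exactly as it did before the swap.
thumbs = []
done = threading.Event()
Thumbnailer().thumbnail("cat.png", lambda s: (thumbs.append(s), done.set()))
done.wait(5)
```

Replacing `ThreadPoolExecutor` with a process pool, a remote work queue, or a Disruptor-style ring buffer would follow the same pattern: edit the constructor, leave every caller alone.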

There are existing examples of patterns that fit the NIMBY Object description. The Actor model as implemented in Erlang’s processes and many other libraries (and for which a crude approximation was described in this very blog) is perhaps the canonical example.

Android’s AsyncTask lets you describe the work that needs doing while it worries about where it needs to be done. So does IKBCommandBus, which has been described in this very blog. Android also supports a kind of “get off my lawn” cry to enforce NIMBYism: exceptions are raised for doing (slow) network operations in the UI context.

There are plenty of non-NIMBY APIs out there too, which paint you into particular concurrency corners. Consider -[NSNotificationCenter addObserverForName:object:queue:usingBlock:] and ignore any “write ALL THE BLOCKS” euphoria going through your mind (though this is far from the worst offence in block-based API design). Notification Centers are for decoupling the source and sink of work, so you don’t readily know where the notification is coming from. So there’s now some unknown number of external actors defecating all over your back yard by adding operations to your queue. Good news: they’re all being funnelled through one object. Bad news: it’s a global singleton. And you can’t reorganise the lawn because the kids are on it: any attempt to use a different parallelism model is going to have to be adapted to accept work from the operation queue.
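The contrast can be reconstructed in miniature. This is a toy, with invented names: the real Foundation API takes the observer’s operation queue as a parameter, while the `NimbyObserver` below keeps any context private to itself, so the centre’s registration API stays context-free.

```python
# Toy notification centre vs. a NIMBY observer. The centre's API takes a
# plain callable and nothing else; if the observer wants handling done off
# the poster's thread, it arranges that itself. All names invented.
import queue
import threading

class NotificationCenter:
    def __init__(self):
        self._observers = {}

    def add_observer(self, name, handler):
        # Note what is *not* here: no queue parameter.
        self._observers.setdefault(name, []).append(handler)

    def post(self, name, payload):
        for handler in self._observers.get(name, []):
            handler(payload)

class NimbyObserver:
    """Keeps its work context in its own back yard."""
    def __init__(self, center, name):
        self._inbox = queue.Queue()
        self.seen = []
        self._handled = threading.Event()
        center.add_observer(name, self._inbox.put)
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        while True:
            self.seen.append(self._inbox.get())
            self._handled.set()

center = NotificationCenter()
observer = NimbyObserver(center, "saveCompleted")
center.post("saveCompleted", {"path": "doc.txt"})
observer._handled.wait(5)
```

Swapping the observer’s thread-and-queue for anything else is now the observer’s private business; the centre, and every poster, is untouched.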

By combining a couple of time-honoured principles from OOP and applying them to execution contexts we come up with NIMBY Objects, objects that don’t care where you do your work as long as you don’t bother them with it. In return, they won’t bother you with details of where they do their work.

Dogma-driven development

You can find plenty of dogmatic positions in software development, in blogs, in podcasts, in books, and even in academic articles. “You should (always/never) write tests before writing code.” “Pair programming is a (good/bad) use of time.” “(X/not X) considered harmful.” “The opening brace should go on the (same/next) line.”

Let us ignore, for the moment, that only a maximum of 50% of these commandments can actually be beneficial. Let us skip past the fact that demonstrating which is the correct position to take is fraught with problems. Instead we shall consider this question: dogmatic rules in software engineering are useful to whom?

The Dreyfus model of skill acquisition tells us that novices at any skill, not just programming, understand the skill in only a superficial way. Their recollection of rules is non-situational; in other words they will try to apply any rule they know at any time. Their ability to recognise previously encountered scenarios is small-scale, not holistic. They make decisions by analysis.

The Dreyfus brothers placed competence a couple of rungs above the novice (with the advanced beginner in between). Competents have moved to a situational recollection of the rules. They know which do or do not apply in their circumstances. Those who are competent can become proficient, when their recognition becomes holistic. In other words, the proficient see the whole problem, rather than a few small disjointed parts of it.

After proficiency comes expertise. The expert is no longer analytical but intuitive, they have internalised their craft and can feel the correct way to approach a problem.

“Rules” of software development mean nothing to the experts or the proficient, who are able to see their problem for what it is and come up with a context-appropriate solution. They can be confusing to novices, who may be willing to accept the rule as a truth of our work but unable to see in which contexts it applies. Only the competent programmers have a need to work according to rules, and the situational awareness to understand when the rules apply.

But competent programmers are also proficient programmers waiting to happen. Rather than being given more arbitrary rules to follow, they can benefit from being shown the big picture, from being led to understand their work more holistically than as a set of distinct parts to which rules can be mechanistically – or dogmatically – applied.

Pronouncements like coding standards and methodological commandments can be useful, but not on their own. By themselves they help competent programmers to remain competent. They can be situated and evaluated, to help novices progress to competence. They can be presented as a small part of a bigger picture, to help competents progress to proficiency. As isolated documents they are significantly less valuable.

Dogmatism considered harmful.