When Object-Oriented Programming Isn’t

A problem I was investigating today led me to a two-line Ruby method much like this:

class App
  # ...
  def write_file_if_configured
    file_writer = FileWriter.new(@configuration.options)
    file_writer.write if file_writer.can_write?
  end
end

This method definitely looks nice and object-oriented, and satisfies many code quality rules: it’s shorter than 10 lines, contains no branches, no Boolean parameters (unless there are any hiding in that options object?), indeed no parameters at all.

It also conforms to the Law of Demeter: it calls a method on one of its fields, it creates a local object and calls methods on that object, and it doesn’t send messages to any other object.

Nonetheless, there’s significant Feature Envy in this method. The method is much more interested in the FileWriter than in its own class, only passing along some data. Moreover, it’s not merely using the writer, it’s creating it too.

That means that there’s no way for this method to take advantage of polymorphism. The behaviour, all invoked on the second line, can only be performed by an instance of FileWriter, the class (effectively a global constant) named on the first line.

FileWriter has no independent existence, and therefore is not really an object. It’s a procedure (one that uses @configuration.options to discover whether a file can be written, and write that file if so), that’s been split into three procedure calls that must be invoked in sequence. There’s no encapsulation, because nothing is hidden: creation and use are all right there in one place. The App class is not open to extension because the extension point (the choice of writer object) is tightly coupled to the behaviour.

Unsurprisingly that makes this code harder to “reason about” (a fancy phrase meaning “grok”) than it could otherwise be, and that with no additional benefit coming from encapsulation or polymorphism. Once again, the failure of object-oriented programming is discovered to be that it hasn’t been tried.
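For contrast, here’s a hedged sketch (not the original code; the factory parameter and NullWriter are my illustrative names) of what the method looks like with the writer injected rather than constructed in place:

```ruby
# A hedged sketch, not the original code: App receives a writer
# factory instead of naming FileWriter directly, so any object that
# responds to #can_write? and #write can be swapped in.
class App
  def initialize(configuration, writer_factory)
    @configuration = configuration
    @writer_factory = writer_factory
  end

  def write_file_if_configured
    file_writer = @writer_factory.call(@configuration.options)
    file_writer.write if file_writer.can_write?
  end
end

# Illustrative stand-in: a writer that declines to write anything.
class NullWriter
  def can_write?
    false
  end

  def write
    raise "never reached: can_write? is false"
  end
end

config = Struct.new(:options).new({})
app = App.new(config, ->(_options) { NullWriter.new })
app.write_file_if_configured  # does nothing; no FileWriter coupling remains
```

Now the choice of writer is an extension point: tests, decorators or entirely different writers can be substituted without touching App.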

If Object-Oriented Programming were announced today

Here’s an idea: the current backlash against OOP is actually because people aren’t doing OOP, they’re doing whatever they were doing before OOP. But they’re calling it OOP, because the people who were promoting OOP wanted them to believe that they were already doing OOP.

Why is that? Because the people who were promoting OOP wanted to sell their things. They were doing this in the 1980s to 1990s, when you could still expect developers to spend thousands of dollars on tools and libraries. “Here’s a thing that’s completely unlike what you’re already doing” is not as good a sales pitch as “ride the latest wave without boiling the ocean”. Object-Oriented principles were then hidden by the “Object Technology” companies – the StepStones, NeXTs, OTIs, OMGs – who wanted to make OOP accessible in order to sell their Object Technology.

That’s not the world of 2017. Nowadays, developer environments, libraries, deployment platforms and so on are largely open source and free or very cheap, so rather than make an idea seem accessible, people try to make it seem important. We see that with the current wave (the third wave? I’m not sure) of functional programming evangelism, where people are definitely trying to show that they are “functionaller than thou” rather than that you already know this stuff. Throw up a GitHub repo or an npm package that uses the words monad, pointfree or homoiconic without any indication of shame, and you’re functionalling right. Demonstrating that it’s already possible to curry methods in Objective-C is not the right way to functional.

If OOP were introduced into this world, you’d probably have a more dogmatic, “purer” representation of it than OOP as popularly practised today. You might even have OOP in Java.

One thing that makes me think that is that it has already happened, multiple times. As previously explored here, protocol-oriented programming is just polymorphism under another name, and polymorphism will be found in a big-letters heading in any OOP book (in Barbara Liskov’s “Program Development in Java” it’s chapter 8, “Polymorphic Abstractions”; in Bertrand Meyer’s “Touch of Class”, section 16.2, “Polymorphism”).

Similarly, what are microservices other than independent programs that maintain their own data and the operations on those data, performing those operations in response to receiving messages? What is a router at the front of a microservice but the message dispatch handler, picking a method implementation by examining the content of a selector?
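To make the analogy concrete, here’s a toy Ruby sketch (the service and the paths are invented for illustration) of a router doing selector-based dispatch:

```ruby
# Hypothetical microservice: owns its data and the operations on it.
class OrdersService
  def initialize
    @orders = ["order-1", "order-2"]
  end

  def list
    @orders
  end

  def count
    @orders.length
  end
end

# The "router": turns a request path into a selector and dispatches,
# much like an object-oriented message send.
def route(service, path)
  selector = path.delete_prefix("/")
  service.public_send(selector)
end

route(OrdersService.new, "/count")  # => 2
```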

Prototypical object-oriented programming

Some people think that the notion of classes is intrinsic to object-oriented programming. Bertrand Meyer even wrote a textbook about OOP called Touch of Class. But back in the 1980s, Alan Borning and others were trying to teach object-oriented programming using the Smalltalk system, ostensibly designed to make simulation in computer programs accessible to children. What they found was that classes are hard.

You’re not allowed to think about how your thing works before you’ve gone a level of abstraction up and told the computer all about the essence of thing-ness, what it is that’s common to all things and sets them apart from other ideas. And while you’re at it, you could well need to think about the metaclass, the essence of essence-of-thing-ness.

So Borning asked the reasonable question: why not just get rid of classes? Rather than say what all things are like, let me describe the thing I want to think about.

But what happens when I need a different thing? Two options present themselves: both represent the idea that this thing is like that thing, except for some specific properties. One option is that I just create a clone of the first object. I now have two identical things, I make the changes that distinguish the second from the first, and now I can use my two, distinct things.

The disadvantage of that is that there’s no link between those two objects, so I have nowhere to put any shared behaviour. Imagine that I’m writing the HR software for a Silicon Valley startup. Initially there’s just one employee, the founder, and rather than think about the concept of Employee-ness and create the class of all employees, I just represent the founder as an object and get on with writing the application. Now the company hires a second employee, and being a Silicon Valley startup they hire someone who’s almost identical to the founder with just a couple of differences. Rather than duplicating the founder and changing the relevant properties, I create a new object that just contains the specific attributes that make this employee different, and link it to the founder object by saying that the founder is the prototype of the other employee.

Any message received by employee #2, if not understood, is delegated to the original employee, the founder. Later, I add a new feature to the Silicon Valley HR application: an employee can issue a statement apologising if anybody got offended. By putting this feature on the first employee, the other employee(s) also get that behaviour.
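That delegation scheme fits in a short Ruby sketch, using method_missing as a stand-in for “message not understood”; all the names (ProtoObject, Pat, Sam) are invented for illustration:

```ruby
# A minimal sketch of prototype delegation: an object answers messages
# from its own slots, and delegates anything else to its prototype.
class ProtoObject
  def initialize(prototype = nil, **slots)
    @prototype = prototype
    @slots = slots
  end

  # Add or replace a behaviour slot on this object alone.
  def define(name, &body)
    @slots[name] = body
    self
  end

  def method_missing(name, *args)
    if @slots.key?(name)
      slot = @slots[name]
      slot.respond_to?(:call) ? slot.call(*args) : slot
    elsif @prototype
      @prototype.public_send(name, *args)  # delegate up the prototype chain
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    @slots.key?(name) || !!@prototype&.respond_to?(name) || super
  end
end

founder = ProtoObject.new(name: "Pat")
employee2 = ProtoObject.new(founder, name: "Sam")  # differs only in name

# A feature added to the prototype is picked up by its delegates:
founder.define(:apologise) { "We're sorry if anybody got offended." }

employee2.name       # => "Sam"
employee2.apologise  # => "We're sorry if anybody got offended."
```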

This simplified approach to behavioural inheritance in object-oriented programming has been implemented a few times, most notably in Self and JavaScript. It’s worth exploring, if you haven’t already.

The package management paradox

There has been no need to build a new package management system since CPAN, and yet npm is the best one.
Wait, what?

Every time a new programming language or framework is released, people seem to decide that:

  1. It needs its own package manager.

  2. Simple algorithms need to be rewritten from scratch in “pure” $language/framework and distributed as packages in this package manager.

This is not actually true. Many programming languages – particularly many of the trendy ones – have a way to call C functions, and a way to expose their own routines as C functions. Even C++ has this feature. This means that you don’t need any new packaging system: if you can deploy packages that expose C functions (whatever the implementation language), then you can use existing code, and you don’t need to rewrite everything.
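Ruby’s standard library Fiddle shows the idea: no Ruby-specific package is needed to call a C function that’s already in the process. This sketch assumes a typical Linux or macOS Ruby build, where the C library’s symbols are visible to the process:

```ruby
require 'fiddle'

# Look up the C library's strlen among the symbols already loaded
# into this process; no new packaging system involved.
strlen = Fiddle::Function.new(
  Fiddle::Handle::DEFAULT['strlen'],
  [Fiddle::TYPE_VOIDP],
  Fiddle::TYPE_SIZE_T
)

strlen.call('hello')  # => 5
```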

So there hasn’t been a need for a packaging system since at least CPAN, maybe earlier.

On the other hand, npm is the best packaging system ever because people actually consume existing code with it. It’s huge, there are tons of libraries, and so people actually think about whether this thing they’re doing needs new code or the adoption of existing code. It’s the realisation of the OO dream, in which folks like Brad Cox said we’d have data sheets of available components and we’d pull the components we need and bind them together in our applications.

Developers who use npm are just gluing components together into applications, and that’s great for software.

Minimum Viable Controller

The book “NeXTstep Programming Step One: Object-Oriented Applications” by Garfinkel and Mahoney said this about Controllers in 1993:

A good rule of thumb is to place as little code in your controller as necessary. If it is possible to create a second controller that is only used for a particular function, do so – the less complicated you make your application’s objects, the easier they are to debug.

Later, on the same page (p131):

Before you start coding, it’s a good idea to sit down and think about your problem.

Both of these pieces of advice still apply. Neither has been universally internalised 24 years later.

Dogmatic paradigmatism

First, you put all of your faith in structured programming, and you got burned. You found it hard to associate the operations in your software with the data upon which they act, and to make sure that the expectations made on the data in one place are satisfied when that data has been modified in that other place, or over there in yet another place. Clearly structured programming is broken.

Then, you put all of your faith in object-oriented programming, and you got burned. You found it hard to follow the flow of a program when it jumps in and out of different classes, and to see which parts were coupled to what. Clearly object-oriented programming is broken.

Then, you put all of your faith in functional programming, and you got burned. You found it hard to represent real business processes in terms of immutable data structures and pure functions, and to express changes to the operating environment without using side effects. Clearly functional programming is broken.

Or maybe it’s you. Maybe, rather than relying on faith to make these conceptual thought frameworks do what you need from them, you could have thought about the concepts.

*-Oriented Programming

Much is written about various paradigms or orientations of programming: Object- (nee Message-) Oriented, Functional, Structured, Dataflow, Logic, and probably others. These are often presented as camps or tribes with which to identify. A Smalltalk programmer will tell you that they are an Object-Oriented programmer, and furthermore those Johnny-come-latelies with their Java are certainly not members of the same group. A Haskell programmer will tell you that they are a functional programmer, and that that is the only way to make working software (though look closely; their Haskell is running on top of a large body of successful, imperative C code).

Notice the identification of paradigms with individual programming languages. Can you really not be object-oriented if you use F#, or is structured programming off-limits to an Objective-C coder? Why is the way that I think so tightly coupled to the tool that I choose to express my thought?

Of course, tools are important, and they do have a bearing on the way we think. But that’s at a fairly low, mechanical level, and programming is supposed to be about abstraction and high-level problem solving. You’re familiar with artists working with particular tools: there are watercolour painters and there are oil painters (and there are others too). Now imagine if the watercolour painting community (there is, of course, no such thing) decreed that it’s impossible to represent a landscape using oil paints and the oil painting community declared that watercolours are “the wrong tools for the job” of painting portraits.

This makes no sense. Oil paints and watercolour paints define how the paint interacts with the canvas, the brush, and the paint that’s already been applied. They don’t affect how the painter sees their subject, or thinks about the shapes involved and the interaction of light, shadow, reflection, and colour. They affect the presentation of those thoughts, but that’s at a mechanical low level.

Programming languages define how the code interacts with the hardware, the libraries, and the code that’s already been applied. They don’t affect how the programmer sees their problem, or thinks about the factors involved. They affect the presentation of those thoughts, but that’s at a mechanical low level.

Function-oriented Objects

Given a Cartesian representation of the point (x,y), find its distance from the origin and angle from the x axis.

I’m going to approach this problem using the principles of functional programming. There’s clearly a function that can take us from the coordinates to the displacement, and one that can take us from the coordinates to the angle. Ignoring the implementation for the moment, they look like this:

Point_radius :: float, float -> float
Point_angle  :: float, float -> float

This solution has its problems. I have two interchangeable arguments (both the x and y ordinates are floats) used in independent signatures: how do I make it clear that these are the same thing? How do I ensure that they’re used in the same way?

One tool in the arsenal of the functional programmer is pattern matching. I could create a single entry point with an enumeration that selects the needed operation. Now it’s clear that the operations are related, and there’s a single way to interpret the arguments, guaranteeing consistency.

Point :: float, float, Selector -> float

Good for now, but how extensible is this? What if I need to add an operation that returns a different type (for example a description that returns a string), or one that needs other arguments (for example the projection on to a different vector)? To provide that generality, I’ll use a different tool from the functional toolbox: the higher-order function. Rewrite Point so that instead of returning a float, it returns a function that captures the coordinates and takes any required additional arguments to return a value of the correct type. To avoid cluttering this example with irrelevant details, I’ll give that function a shorthand named type: Method.

Point :: float, float, Selector -> Method

You may want to perform multiple operations on values that represent the same point. Using a final functional programming weapon, partial application, we can capture the coordinates and let you request different operations on the same encapsulated data.

Point :: float, float -> Selector -> Method

Now it’s clear to see that the Point function is a constructor of some type that encapsulates the coordinates representing a given two-dimensional Cartesian point. That type is a function that, upon being given a Selector representing some operation, returns a Method capable of implementing that operation. The function implements message sending, and Points are just objects!
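That final signature translates almost mechanically into Ruby closures. A sketch, with only the radius and angle selectors implemented:

```ruby
# Point captures the coordinates; applying the result to a selector
# yields a "Method": a closure that performs the requested operation.
Point = lambda do |x, y|
  lambda do |selector|
    case selector
    when :radius then lambda { Math.sqrt(x * x + y * y) }
    when :angle  then lambda { Math.atan2(y, x) }
    else raise NoMethodError, "Point does not understand #{selector}"
    end
  end
end

a_point = Point.call(3.0, 4.0)      # partial application: coordinates captured
a_point.call(:radius).call          # => 5.0
a_point.call(:angle).call.round(2)  # => 0.93
```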

Imagine that we wanted to represent points in a different way, maybe with polar coordinates. We could provide a different function, Point', which captures those:

Point' :: float, float -> Selector -> Method

This function has the same signature as our original function: it too encapsulates the constructor’s arguments (call them instance variables) and returns methods in response to selectors. In other words, Point and Point' are polymorphic: if they have methods for the distance and angle selectors, they can be used interchangeably.
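A sketch of that interchangeability in Ruby closures; both constructors are reduced to their essentials, and the client cares only that the selectors are understood:

```ruby
# Two representations with the same shape: constructor -> selector -> method.
CartesianPoint = lambda do |x, y|
  lambda do |selector|
    { radius: lambda { Math.sqrt(x * x + y * y) },
      angle:  lambda { Math.atan2(y, x) } }.fetch(selector)
  end
end

PolarPoint = lambda do |r, theta|
  lambda do |selector|
    { radius: lambda { r },
      angle:  lambda { theta } }.fetch(selector)
  end
end

# A client that neither knows nor cares which representation it was given:
describe = lambda do |point|
  format("r=%.1f, angle=%.2f", point.call(:radius).call, point.call(:angle).call)
end

describe.call(CartesianPoint.call(3.0, 4.0))           # => "r=5.0, angle=0.93"
describe.call(PolarPoint.call(5.0, Math.atan2(4, 3)))  # => "r=5.0, angle=0.93"
```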

Object-oriented Functions

Write a compiler that takes source code in some language and creates an executable. If it encounters malformed source code, it should report an error and not produce an executable.

Thinking about this with my object-oriented head, I might have a Compiler object with some method #compile(source:String) that returns an optional Executable. If it doesn’t work, then use the #getErrors():List<Error> method to find out what went wrong.

That approach will work (as with most software problems there are infinite ways to skin the same cat), but it’s got some weird design features. What will the getErrors() method do if it’s called before the compile() method? If compile() is called multiple times, do earlier errors get kept or discarded? There’s some odd and unclear temporal coupling here.

To clean that up, use the object-oriented design principle “Tell, don’t ask”. Rather than requesting a list of errors from the compiler, have it tell an error-reporting object about the problems as they occur. How will it know what error reporter to use? That can be passed in, in accordance with another OO principle: dependency inversion.

Compiler#compile(source:String, reporter:ErrorConsumer): Optional<Executable>
ErrorConsumer#reportError(error:Error): void

Now it’s clear that the reporter will receive errors related to the invocation of #compile() that it was passed to, and there’s no need for a separate accessor for the errors. This clears up confusion as to what the stored state represents, because there isn’t any.

Another object-oriented tool is the Single Responsibility Principle, which invites us to design objects that have exactly one reason to change. A compiler does not have exactly one reason to change: you might need to target different hardware, change the language syntax, adopt a different executable format. Separating these concerns will yield more cohesive objects.

Tokeniser#tokenise(source:String, reporter:ErrorConsumer): Optional<TokenisedSource>
Compiler#compile(source:TokenisedSource, reporter:ErrorConsumer): Optional<AssemblyProgram>
Assembler#assemble(program:AssemblyProgram, reporter:ErrorConsumer): Optional<BinaryObject>
Linker#link(objects:Array<BinaryObject>, reporter:ErrorConsumer): Optional<Executable>
ErrorConsumer#reportError(error:Error): void

Every class in this system is named Verber, and has a single method, #verb. None of them has any (evident) internal state; they each map their arguments onto return values (with the exception of ErrorConsumer, which is an I/O sink). They’re just stateless functions. Another function is needed to plug them together:

Binder<T,U,V>#bind(T->Optional<U>, U->Optional<V>): (T->Optional<V>)

And now we’ve got a compiler constructed out of functions constructed out of objects.
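A toy Ruby sketch of the Binder, with nil standing in for Optional; the tokenise and compile stages here are placeholders for illustration, not a real compiler:

```ruby
# Binder composes two stages that each return a result or nil,
# threading the error reporter through both.
binder = lambda do |first, second|
  lambda do |input, reporter|
    intermediate = first.call(input, reporter)
    intermediate.nil? ? nil : second.call(intermediate, reporter)
  end
end

tokenise = lambda do |source, reporter|
  if source.strip.empty?
    reporter.call("empty source")
    nil  # failure: no tokens, error already reported
  else
    source.split
  end
end

compile = lambda do |tokens, reporter|
  tokens.map(&:upcase)  # stand-in for real code generation
end

pipeline = binder.call(tokenise, compile)
errors = []
pipeline.call("mov r0 r1", errors.method(:push))  # => ["MOV", "R0", "R1"]
pipeline.call("", errors.method(:push))           # => nil; errors == ["empty source"]
```

The second stage never runs when the first fails, and no stage keeps any state between invocations.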

Brain-oriented programming

Those examples were very abstract, not making any use of specific programming languages. That’s because software design is not coupled to programming languages, and paradigmatic approaches to programming are constrained ways to think about software design. They’re abstract enough to be separable from the nuts and bolts of the implementation language you choose (whether you’ve already chosen it or not).

Those functions in the Point example could be built using the blocks available in Smalltalk, Ruby or other supposedly object-oriented languages, in which case you’d have objects built out of functions that are themselves built out of objects (which are, of course, built out of functions…). The objects and classes in the Compiler example can easily be closures in a supposedly functional programming language. In fact, closures and blocks are not really too dissimilar.

What conclusions can be derived from all of this? Clearly different programming paradigms are far from exclusive, so the first lesson is that you don’t have to let your choice of programming language dictate your choice of problem-solving approach (nor do you really have to do it the other way around). Additionally, where the approaches try to solve the same problem, the specific techniques they comprise are complementary or even identical. Both functional and object-oriented programming are about organisation, decomposition and comprehensibility, and using either or even both can help to further those aims.

Ultimately your choice of tools isn’t going to affect your ability to think by much. Whether they help you express your thoughts is a different matter, but while expression is an important part of our work it’s only a small part.

Further Reading

The ideas here were primarily motivated by Uday Reddy’s Objects as Closures: Abstract Semantics of Object-Oriented Languages (weird embedded PDF reader link). In the real-life version of this presentation I also talked a bit about Theorems for Free (actual PDF link) by Philip Wadler, which isn’t so related but is nonetheless very interesting :).

Protocol-Oriented Programming in Objective-C

Hi, this is a guest post from Crusty. I’ve been doing a tour of the blogosphere, discussing Protocol-Oriented Programming. This time, Graham was kind enough to hand over the keyboard for SICPers and let me write a post here.

Back when Graham was talking about Object-Oriented Programming in Functional Programming in Swift, he mentioned that the number of methods you need to define on an object (or “procedural data type”) is actually surprisingly low. A set, Graham argued, is a single method that reports whether an object is a member of the set or not. An array has two methods: the count of objects it contains and the object at a particular index. We’re only talking about immutable objects here, but mutability can be modelled in immutable objects anyway.

Looking at the header or documentation for your favourite collections library, you probably think at this point that we’re missing something. Quite a few things, in fact. Your array data type probably has a few more than two methods, so how can those be the only ones you need?

Everything you might want to do to an array can be done in terms of those two methods that return the count and the object at an index. Any other operation can be built on those two operations (well, those two operations and a way to create some new objects). What’s more, any other array operation can be built using those operations without knowing how they’re implemented. That means we’re free to define them in a completely abstract way, using a protocol:

@protocol AnArray <NSObject>

- (NSUInteger)count;
- (id)objectAtIndex:(NSUInteger)index;

@end

Now, without telling you anything about how that object works, I can create another object that “decorates” this array by using its methods. It can only call methods that are in the protocol, i.e. count and objectAtIndex:, but that’s sufficient. Hey, we’re doing protocol-oriented programming! As I said, we also need a way to create a new array sometimes, so for argument’s sake let’s say that there’s a concrete version of AnArray called MyArray that the decorator knows how to create.

@interface ArrayDecorator : NSObject <AnArray>

- (instancetype)initWithArray:(id <AnArray>)anArray;

- (id)firstObject;
- (id <AnArray>)map:(SEL)aSelector;

@end


@implementation ArrayDecorator
{
    id <AnArray> _array;
}

- (instancetype)initWithArray:(id <AnArray>)anArray
{
    self = [super init];
    if (!self) return nil;
    _array = anArray;
    return self;
}

- (NSUInteger)count { return [_array count]; }
- (id)objectAtIndex:(NSUInteger)index { return [_array objectAtIndex:index]; }

- (id)firstObject
{
    return [self count] ? [self objectAtIndex:0] : nil;
}

- (id <AnArray>)map:(SEL)aSelector
{
    void **buffer = malloc([self count] * sizeof(id));
    for (NSUInteger i = 0; i < [self count]; i++) {
        buffer[i] = (__bridge void *)[[self objectAtIndex:i] performSelector:aSelector];
    }
    id <AnArray> result = [[ArrayDecorator alloc] initWithArray:
        [[MyArray alloc] initWithObjects:(id *)buffer count:[self count]]];
    free(buffer);
    return result;
}

@end

We’ve managed to create useful additional methods like firstObject and map:, that are only implemented in terms of the underlying array’s count and objectAtIndex:. It doesn’t matter how those methods are implemented, as long as they are.

Now, I know what you’re thinking. This protocol-oriented programming is fine and all, but now I have to remember to decorate every array I might create in order to get all of these useful additional methods. Wouldn’t it be great if there were some way to give all implementors of the AnArray protocol default implementations of map: and firstObject?

There is indeed a way to do that. This is what OOP’s inheritance feature is for. It’s been kind of abused, in that it’s overloaded to mean subtyping, which gets people into horrible knots over whether squares are rectangles or rectangles are squares (the answer, by the way, is yes). All inheritance really means is “if you send me a message I don’t have a method for, I’ll check to see if my parent class has that method”. In that case, this array decoration can be rewritten like this:

@interface SomeArray : NSObject

- (NSUInteger)count;
- (id)objectAtIndex:(NSUInteger)index;
- (id)firstObject;
- (SomeArray *)map:(SEL)aSelector;

@end


@implementation SomeArray

- (NSUInteger)count { [self doesNotRecognizeSelector:_cmd]; return 0; }
- (id)objectAtIndex:(NSUInteger)index { [self doesNotRecognizeSelector:_cmd]; return nil; }

- (id)firstObject
{
    return [self count] ? [self objectAtIndex:0] : nil;
}

- (SomeArray *)map:(SEL)aSelector
{
    void **buffer = malloc([self count] * sizeof(id));
    for (NSUInteger i = 0; i < [self count]; i++) {
        buffer[i] = (__bridge void *)[[self objectAtIndex:i] performSelector:aSelector];
    }
    SomeArray *result = [[MyArray alloc] initWithObjects:(id *)buffer count:[self count]];
    free(buffer);
    return result;
}

@end

What’s changed? The methods that were previously defined on the AnArray protocol have been subsumed into the interface for SomeArray (which is what the word “protocol” used to mean in Smalltalk programming: the list of messages you could rely on an object responding to). Array implementations that are subclasses of SomeArray need merely implement count and objectAtIndex: (as before) and they automatically get all of the other methods, which are all implemented in terms of those two.

This looks familiar. Let me check the documentation for NSArray (this is from the Foundation 1.0 documentation on NeXTSTEP 3, by the way. They don’t call me Crusty for nothing.):

The NSArray class declares the programmatic interface to an object that manages an immutable array of objects. NSArray’s two primitive methods–count and objectAtIndex:–provide the basis for all the other methods in its interface. The count method returns the number of elements in the array. objectAtIndex: gives you access to the array elements by index, with index values starting at 0.

Interesting: all of its methods are implemented in terms of two “primitive” methods, called count and objectAtIndex:. The documentation also mentions that NSArray is a “class cluster”, and has this to say on class clusters:

Your subclass must override inherited primitives, but having done so can be sure that all derived methods that it inherits will operate properly.

Yes, that’s correct, protocol-oriented programming is a Crusty concept. We cornered it in the haunted fairground, removed its mask, and discovered that it was just abstract methods in object-oriented programming all along.

Functional Programming in Object-Oriented Programming in Functional Programming in Swift

The objects that I’ve been building up over the last few posts have arbitrarily broad behaviours. They can respond to any selector drawn from the set of all possible strings.

As with all art, beauty is produced by imposing constraints. An important class (pardon the pun) of objects only has a meaningful response to one selector (value or, if it takes an additional argument, value:), which causes the object to do the one thing it knows how to do and return a result.

In many object-oriented programming circles, including the Smalltalk world, these objects are called blocks. They’re useful because they enable multiple uses of the same algorithm to be extracted into a single place, with the block customising the significant activity that happens in the algorithm. As an example, iterating an array to square numbers and iterating an array to load images from paths both involve iterating an array. The algorithm that iterates an array can be expressed in a form that accepts a block that does the work: squaring some numbers or loading some images.
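Ruby, mentioned earlier as one of the supposedly object-oriented languages with blocks, makes the extraction direct: the iteration algorithm lives in map, and the block supplies the varying step. A small sketch:

```ruby
numbers = [1, 2, 3, 4]

# The iteration algorithm lives in map; the block supplies the step.
squares = numbers.map { |n| n * n }  # => [1, 4, 9, 16]

# Same algorithm, different block (a stand-in for loading images):
paths = ["cat.png", "dog.png"]
names = paths.map { |path| path.sub(".png", "") }  # => ["cat", "dog"]
```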

Without going in to the details of the implementation (which is a collection of classes in the Objective-Swift system), here’s how a block could be used:

let myArray = newObject(NSArray(Integer(1, proto: o), Integer(2, proto: o),
  Integer(3, proto: o), Integer(4, proto: o)))
let evens = (myArray,"filter:") ✍ OneArgBlock({ obj in
  return (ℹ︎(obj)! % 2) == 0 ? True : False
  }, proto: o)
ℹ︎(evens→"count") // 2
📓((evens,"objectAtIndex:") ✍ Integer(0, proto: o)) // "2"
📓((evens,"objectAtIndex:") ✍ Integer(1, proto: o)) // "4"

The NSArray→filter: selector takes a block, and for each element in the array it sends the value: message with the element as the argument. In fact, it passes another block to the result of this block. Both True and False respond to the ifTrue: message. True→ifTrue: executes its argument, False→ifTrue: doesn’t.

Now we already have a name for the concept of an object that does exactly one thing, returning some result based on its input parameters. That’s a function. Yes, we finally got there.


So just as Object-Oriented Programming was a restricted application of a subset of Functional Programming, we can see Functional Programming as a restricted application of a subset of Object-Oriented Programming.

And that’s pretty much the point of this series of posts. These two ways of thinking about computer programs get you to the same place, as long as you apply the thinking. Neither is a silver bullet, neither is subsumed nor obsoleted by the other.


Now imagine a block that takes a selector and returns another block…