The next phase in technological convergence will be harder than the last, because it can’t be solved with technology. Last time the devices converged, some phone makers just needed to buy a photoelectric detector, a lens, and licenses for some MP3 patents.

But how can the various tab-sized computers I carry – my bank cards, SIMs, passport, building door card, transport cards, office ID – be integrated, when they mean different things to different people? Technologically, it’s really bad to keep them separate, because you can’t just hold a bag of RFID tokens up to a reader and expect the right thing to happen. Socially, it’s really bad to converge them: I can’t imagine my bank, employer, train conductor and government all agreeing to the same terms on identifying me, nor would one company be an acceptable clearing house for all of that. So rather than N distinct tokens, we’d end up with N tokens you can use Anywhere™*.

More on Layers

I was told yesterday that entity-relationship diagrams can be OK as high-level descriptions of database schemata, but are not appropriate for designing a database: too much information is missing for them to model the problem.

Could the same be true of layer diagrams? Perhaps they’re OK as stratospheric guides to your software, but not useful for designing that software as too much information is missing.

I don’t think layer diagrams even manage the stratospheric part, though. They tend to be pretty much interchangeable between systems, so I can’t really tell which system I’m looking at from the layer cake. Add the difficulty that I probably can’t tell how the layers communicate, and certainly can’t tell how the subsystems within the layers are composed, and all I can tell is that you like drawing boxes – but not too many boxes.

The trouble with layers

In describing Inside-Out Apps I expressed my distrust of the “everything is MVC” school of design.

[…]when you get overly attached to MVC, then you look at every class you create and ask the question “is this a model, a view, or a controller?”. Because this question makes no sense, the answer doesn’t either: anything that isn’t evidently data or evidently graphics gets put into the amorphous “controller” collection, which eventually sucks your entire codebase into its innards like a black hole collapsing under its own weight.

Let me extract a general rule: layers are bad. It’s not just that I can’t distinguish your app’s architecture diagram from a sandwich’s architecture diagram, or a trifle’s. The problem is in the boxes.

As Possibly Alan, IDK and I discussed recently, the problem with box-and-arrow diagrams is that the boxes are really just collections of arrows zoomed out. Combine that with something my colleague Uri told me, that “a package in the dependency diagram is just the smallest thing that contains a cycle”, and you end up with your layer cake diagram looking like this:

Layer Cake

Three big wastelands of “anything goes”, with some vague idea that the one at the top shouldn’t talk to the one at the bottom. Or maybe it’s that any of them can talk down as much as they like but never up. Either way it’s not clear how any of the dependencies in this system are controlled. Is it “anything goes” within a layer? If two related classes belong in different layers, are they supposed to talk to each other or not? Can data types from one layer be passed to another without adaptation? Just what is the layer boundary?

Contractually-obligated testing

About a billion years ago, Bertrand Meyer (he of Open-Closed Principle fame) introduced a programming language called Eiffel. It had a feature called Design by Contract, which let you define constraints that your program had to adhere to during execution. It’s a bit like convincing a C compiler to emit checks for rules like integer underflow everywhere in your code, except that you get to write your own rules.

To see what that’s like, let’s use a little Objective-C (I suppose I could use Eiffel, as EiffelStudio is in Homebrew, but I didn’t). Here’s my untested, un-contractual Objective-C Stack class.

@interface Stack : NSObject

- (void)push:(id)object;
- (id)pop;

@property (nonatomic, readonly) NSInteger count;

@end

static const int kMaximumStackSize = 4;

@implementation Stack
{
    __strong id buffer[4];
    NSInteger _count;
}

- (void)push:(id)object
{
    buffer[_count++] = object;
}

- (id)pop
{
    id object = buffer[--_count];
    buffer[_count] = nil;
    return object;
}

@end

Seems pretty legit. But I’ll write out the contract: the rules to which this class will adhere, provided its users do too. Firstly, some invariants: the count will never go below 0 or above the maximum number of objects. Objective-C doesn’t have Eiffel’s syntax for this, so it looks just a little bit messy.

@interface Stack : ContractObject

- (void)push:(id)object;
- (id)pop;

@property (nonatomic, readonly) NSInteger count;

@end

static const int kMaximumStackSize = 4;

@implementation Stack
{
    __strong id buffer[4];
    NSInteger _count;
}

- (NSDictionary *)contract
{
    NSPredicate *countBoundaries = [NSPredicate predicateWithFormat: @"count BETWEEN %@",
                                    @[@0, @(kMaximumStackSize)]];
    NSMutableDictionary *contract = [@{@"invariant" : countBoundaries} mutableCopy];
    [contract addEntriesFromDictionary:[super contract]];
    return contract;
}

- (void)in_push:(id)object
{
    buffer[_count++] = object;
}

- (id)in_pop
{
    id object = buffer[--_count];
    buffer[_count] = nil;
    return object;
}

@end

I said the count must never go outside of this range. In fact, the invariant need only hold before and after calls to public methods: it’s allowed to be broken during their execution. If you’re wondering how this interacts with threading: confine ALL the things! Anyway, let’s see whether the contract is adhered to.

int main(int argc, char *argv[]) {
    @autoreleasepool {
        Stack *stack = [Stack new];
        for (int i = 0; i < 10; i++) {
            [stack push:@(i)];
            NSLog(@"stack size: %ld", (long)[stack count]);
        }
    }
}

2014-08-11 22:41:48.074 ContractStack[2295:507] stack size: 1
2014-08-11 22:41:48.076 ContractStack[2295:507] stack size: 2
2014-08-11 22:41:48.076 ContractStack[2295:507] stack size: 3
2014-08-11 22:41:48.076 ContractStack[2295:507] stack size: 4
2014-08-11 22:41:48.076 ContractStack[2295:507] *** Assertion failure in -[Stack forwardInvocation:], ContractStack.m:40
2014-08-11 22:41:48.077 ContractStack[2295:507] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException',
 reason: 'invariant count BETWEEN {0, 4} violated after call to push:'

Erm, oops. OK, this looks pretty useful. I’ll add another clause: the caller isn’t allowed to call -pop unless there are objects on the stack.

- (NSDictionary *)contract
{
    NSPredicate *countBoundaries = [NSPredicate predicateWithFormat: @"count BETWEEN %@",
                                    @[@0, @(kMaximumStackSize)]];
    NSPredicate *containsObjects = [NSPredicate predicateWithFormat: @"count > 0"];
    NSMutableDictionary *contract = [@{@"invariant" : countBoundaries,
             @"pre_pop" : containsObjects} mutableCopy];
    [contract addEntriesFromDictionary:[super contract]];
    return contract;
}

So I’m not allowed to hold it wrong in this way, either?

int main(int argc, char *argv[]) {
    @autoreleasepool {
        Stack *stack = [Stack new];
        id foo = [stack pop];
    }
}

2014-08-11 22:46:12.473 ContractStack[2386:507] *** Assertion failure in -[Stack forwardInvocation:], ContractStack.m:35
2014-08-11 22:46:12.475 ContractStack[2386:507] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException',
 reason: 'precondition count > 0 violated before call to pop'

No, good. Having a contract is a bit like having unit tests, except that the unit tests are always running whenever your object is being used. Try out Eiffel; it’s pleasant to have real syntax for this, though really the Objective-C version isn’t so bad.
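
The forwarding code below also looks for post_ keys, so postconditions go in the same dictionary. As a sketch – this isn’t a clause the stack example above actually needs – requiring that the stack is non-empty after a call to -push: would look like this:

- (NSDictionary *)contract
{
    NSPredicate *countBoundaries = [NSPredicate predicateWithFormat: @"count BETWEEN %@",
                                    @[@0, @(kMaximumStackSize)]];
    NSPredicate *containsObjects = [NSPredicate predicateWithFormat: @"count > 0"];
    NSMutableDictionary *contract = [@{@"invariant" : countBoundaries,
             @"pre_pop" : containsObjects,
             @"post_push:" : containsObjects} mutableCopy];
    [contract addEntriesFromDictionary:[super contract]];
    return contract;
}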

Finally, the contract is implemented by some simple message interception (try doing that in your favourite modern programming language, non-Rubyists!).

@interface ContractObject : NSObject
- (NSDictionary *)contract;
@end

static SEL internalSelector(SEL aSelector);

@implementation ContractObject

- (NSDictionary *)contract { return @{}; }

// The public selector isn't directly implemented, so the runtime asks for a
// method signature before forwarding: answer with the signature of the
// in_-prefixed implementation.
- (NSMethodSignature *)methodSignatureForSelector:(SEL)aSelector
{
    NSMethodSignature *sig = [super methodSignatureForSelector:aSelector];
    if (!sig) {
        sig = [super methodSignatureForSelector:internalSelector(aSelector)];
    }
    return sig;
}

- (void)forwardInvocation:(NSInvocation *)inv
{
    SEL realSelector = internalSelector([inv selector]);
    if ([self respondsToSelector:realSelector]) {
        NSDictionary *contract = [self contract];
        NSPredicate *alwaysTrue = [NSPredicate predicateWithValue:YES];
        NSString *calledSelectorName = NSStringFromSelector([inv selector]);
        // Redirect the invocation to the real, in_-prefixed implementation.
        inv.selector = realSelector;
        // Check the invariant and any precondition before the real method runs…
        NSPredicate *invariant = contract[@"invariant"]?:alwaysTrue;
        NSAssert([invariant evaluateWithObject:self],
            @"invariant %@ violated before call to %@", invariant, calledSelectorName);
        NSString *preconditionKey = [@"pre_" stringByAppendingString:calledSelectorName];
        NSPredicate *precondition = contract[preconditionKey]?:alwaysTrue;
        NSAssert([precondition evaluateWithObject:self],
            @"precondition %@ violated before call to %@", precondition, calledSelectorName);
        [inv invoke];
        // …then any postcondition, and the invariant again, after it returns.
        NSString *postconditionKey = [@"post_" stringByAppendingString:calledSelectorName];
        NSPredicate *postcondition = contract[postconditionKey]?:alwaysTrue;
        NSAssert([postcondition evaluateWithObject:self],
            @"postcondition %@ violated after call to %@", postcondition, calledSelectorName);
        NSAssert([invariant evaluateWithObject:self],
            @"invariant %@ violated after call to %@", invariant, calledSelectorName);
    }
}

@end

SEL internalSelector(SEL aSelector)
{
    return NSSelectorFromString([@"in_" stringByAppendingString:NSStringFromSelector(aSelector)]);
}

Things I believe

The task of producing software is one of choosing and creating constraints, rules and abstractions inside a system which provides very few a priori. Typically we select a large collection of pre-existing constraints, rules and abstractions upon which to base our own: models of computation, programming and deployment environments. Everything from the size of the register file to the way text is represented on the display is theoretically up for grabs, but we impose limits on that freedom when we create a new product.

None of these limitations is essential. Many are conventional, and have become so embedded in the cultural practice of making software that it would be expensive or impractical to choose alternative options. Still others have so much rhetoric surrounding them that the emotional cost of change is too great to bear.

So what are these restrictions? Here’s a list of mine. I accept that they don’t all apply to you. I accept that many of them have alternatives. Indeed I believe that all of them have alternatives, and that enumerating them is the first thing that lets me treat them as assumptions to be challenged.

  1. Computers use the same memory for programs and data. I know the alternatives exist but wouldn’t know how to start using them.
  2. Memory is a big blob of uniform storage. As above, except I know this one isn’t true; I just ignore that detail.
  3. Memory and bus wires can be in one of two states.
  4. There probably is a free-form hierarchical database available.
  5. There is a thing called a stack and a thing called a heap, and the difference between the two is important.
  6. There is no point trying to do a better job at multiprocessing than the operating system.
  7. There is an operating system.
  8. The operating system, file system, indeed any first system on which my thing is a second system; those first systems are basically interchangeable.
  9. I can buy a faster thing (except in mobile, where I can’t).
  10. Whatever processor you gave me behaves correctly.
  11. Whatever compiler you gave me behaves correctly.
  12. Whatever library you gave me probably behaves correctly.
  13. Text is a poor way to represent a computer program but is the best we have.
  14. The way to write a computer program is to tell the computer what to do.
  15. The goal of the industry for the last few decades has been the DynaBook.
  16. I still do not need a degree in computer science.
  17. I should know what my software does before I give it to the people who need to use it.
  18. The universal runtime environment is the C system.
  19. Processors today are basically like faster versions of the MC68000.
  20. Platform vendors no longer see lock-in as a goal, but do see it as a convenient side-effect.
  21. You will look after drawing pictures, playing videos, and making sounds for me.
  22. Types are optional.

On too much and too little

In the following text, remember that words like me or I are to be construed in the broadest possible terms.

It’s easy to be comfortable with my current level of knowledge. Or perhaps it’s not the value, but the derivative of the value: the amount of investment I’m putting into learning a thing. Anyway, it’s easy to tell stories about why the way I’m doing it is the right, or at least a good, way to do it.

Take, for example, object-oriented design. We have words to describe insufficient object-oriented design. Spaghetti Code, or a Big Ball of Mud. Obviously these are things that I never succumb to, but other people do. So clearly (actually, not clearly at all, but that’s beside the point) there is some threshold level of design or analysis practice that represents an acceptable minimum. Whatever that value is, it’s less than the amount that I do.

Interestingly there are also words to describe the over-application of object-oriented design. Architecture Astronauts, for example, are clearly people who do too much architecture (in the same way that NASA astronauts got carried away with flying and overdid it, I suppose). It’s so cold up in space that you’ll catch a fever, resulting in Death by UML Fever. Clearly I am only ever responsible for tropospheric architecture, thus we conclude that there is some acceptable maximum threshold for analysis and design too.

The really convenient thing is that my current work lies between these two limits. In fact, I’m comfortable in saying that it always has.

But wait. I also know that I’m supposed to hate the code that I wrote six months ago, probably because I wasn’t doing enough of whatever it is that I’m doing enough of now. But I don’t remember thinking six months ago that I was below the threshold for doing acceptable amounts of the stuff that I’m supposed to be doing. Could it be, perhaps, that the goalposts have conveniently moved in that time?

Of course they have. What’s acceptable to me now may not be in the future, either because I’ve learned to do more of it or because I’ve learned that I was overdoing it. The trick is not so much in recognising that, but in recognising that others who are doing more or less than me are not wrong; they could in fact be me at a different point on my timeline, but with the benefit that they exist now, so I can share my experiences with them and we can work things out together. Or they could be someone with a completely different set of experiences, which is even more exciting as I’ll have more stories to swap.

When it comes to techniques and devices for writing software, I tend to prefer overdoing things and then finding out which bits I don’t really need after all, rather than under-application. That’s obviously a much larger cognitive and conceptual burden, but it stems from the fact that I don’t think we really have any clear ideas on what works and what doesn’t. Not much in making software is ever shown to be wrong, but plenty of it is shown to be out of fashion.

Let me conclude by telling my own story of object-oriented design. It took me ages to learn object-oriented thinking. I learned the technology alright, and could make tools that used the Objective-C language and Foundation and AppKit, but didn’t really work out how to split my stuff up into objects. Not just for a while, but for years. A little while after that Death by UML Fever article was written, my employer sent me to Sun to attend their Object-Oriented Analysis and Design Using UML course.

That course in itself was a huge turning point. But just as beneficial was the few months afterward in which I would architecturamalise all the things, and my then-manager wisely left me to it. The office furniture was all covered with whiteboard material, and there soon wasn’t a bookshelf or cupboard in my area of the office that wasn’t covered with sequence diagrams, package diagrams, class diagrams, or whatever other diagrams. I probably would’ve covered the external walls, too, if it wasn’t for Enterprise Architect. You probably have Opinions™ about both of the words in that product’s name. In fact I also used OmniGraffle, and dia (my laptop at the time was an iBook G4 running some flavour of Linux).

That period of UMLphoria gave me the first few hundred hours of deliberate practice. It let me see things that had been useful, and that had either helped me understand the problem or communicate about it with my peers. It also let me see the things that hadn’t been useful, that I’d constructed but then had no further purpose for. It let me not only dial back, but work out which things to dial back on.

I can’t imagine being able to replace that experience with reading web articles and Stack Overflow questions. Sure, there are plenty of opinions on things like OOA/D and UML on the web. Some of those opinions are even by people who have tried it. But going through that volume of material and sifting the experience-led advice from the iconoclasm or marketing fluff, deciding which viewpoints were relevant to my position: that’s all really hard. Harder, perhaps, than diving in and working slowly for a few months while I over-practice a skill.

Inside-Out Apps

This article is based on a talk I gave at mdevcon 2014. The talk also included a specific example to demonstrate the approach, but was otherwise a presentation of the following argument.

You probably read this blog because you write apps. Which is kind of cool, because I have been known to use apps. I’d be interested to hear what yours is about.

Not so fast! Let me make this clear first: I only buy page-based apps, they must use Core Data, and I automatically give one star to any app that uses storyboards.

OK, that didn’t sound like it made any sense. Nobody actually chooses which apps to buy based on what technologies or frameworks are used; they choose which apps to buy based on what problems those apps solve. On the experience they derive from using the software.

When we build our apps, the problem we’re solving and the experience we’re providing need to be uppermost in our thoughts. You’re probably already used to doing this when designing an application: Apple’s Human Interface Guidelines describe the creation of an App Definition Statement to guide thinking about what goes into an app and how people will use it:

An app definition statement is a concise, concrete declaration of an app’s main purpose and its intended audience.

Create an app definition statement early in your development effort to help you turn an idea and a list of features into a coherent product that people want to own. Throughout development, use the definition statement to decide if potential features and behaviors make sense.

My suggestion is that you should use this idea to guide your app’s architecture and your class design too. Start from the problem, then work through solving that problem to building your application. I have two reasons: the first will help you, and the second will help me to help you.

The first reason is to promote a decoupling between the problem you’re trying to solve, the design you present for interacting with that solution, and the technologies you choose to implement the solution and its design. Your problem is not “I need a Master-Detail application”, which means that your solution need not be one either, and it may not make sense to present it that way. Or if it does now, it might not next week.

You see, designers are fickle beasts, and for all their feel-good bloviation about psychology and user experience, most are actually just operating on a combination of trend and whimsy. Last week’s refresh button is this week’s pull gesture is next week’s interaction-free event. Yesterday’s tab bar is today’s hamburger menu is tomorrow’s swipe-in drawer. Last decade’s mouse is this decade’s finger is next decade’s eye motion. Unless your problem is Corinthian leather, that’ll be gone soon. Whatever you’re doing for iOS 7 will change for iOS 8.

So it’s best to decouple your solution from your design, and the easiest way to do that is to solve the problem first and then design a presentation for it. Think about it. If you try to design a train timetable, then you’ll end up with a timetable that happens to contain train details. If you try to solve the problem “how do I know at what time to be on which platform to catch the train to London?”, then you might end up with a timetable, but you might not. And however the design of the solution changes, the problem itself will not: just as the problem of knowing where to catch a train has not changed in over a century.
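
To make that concrete, here’s a sketch – the class and message names are mine, invented for illustration – of a model that answers the traveller’s question directly, and leaves the timetable (or whatever replaces it) as a separate presentation decision:

// A hypothetical problem-level model: it answers "when and where do I catch
// the train to London?", and says nothing about how the answer gets shown.
@interface Departure : NSObject

@property (nonatomic, strong, readonly) NSDate *time;
@property (nonatomic, copy, readonly) NSString *platform;

@end

@interface JourneyPlanner : NSObject

- (Departure *)nextDepartureTo:(NSString *)destination after:(NSDate *)earliestTime;

@end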

The same problem that affects design-driven development also affects technology-driven development. This month you want to use Core Data. Then next month, you wish you hadn’t. The following month, you kind of want to again, then later you realise you needed a document database after all and go down that route. Solve the problem without depending on particular libraries, then changing libraries is no big deal, and neither is changing how you deal with those libraries.

It’s starting with the technology that leads to Massive View Controller. If you start by knowing that you need to glue some data to some view via a View Controller, then that’s what you end up with.

This problem is exacerbated, I believe, by a religious adherence to Model-View-Controller. My job here is not to destroy MVC, I am neither an iconoclast nor a sacrificer of sacred cattle. But when you get overly attached to MVC, then you look at every class you create and ask the question “is this a model, a view, or a controller?”. Because this question makes no sense, the answer doesn’t either: anything that isn’t evidently data or evidently graphics gets put into the amorphous “controller” collection, which eventually sucks your entire codebase into its innards like a black hole collapsing under its own weight.

Let’s stick with this “everything is MVC” difficulty for another paragraph, and possibly a bulleted list thereafter. Here are some helpful answers to the “which layer does this go in” questions:

  • does my Core Data stack belong in the model, the view, or the controller? No. Core Data is a persistence service, which your app can call on to save or retrieve data. Often the data will come from the model, but saving and retrieving that data is not itself part of your model (there’s a sketch of what that looks like after this list).
  • does my networking code belong in the model, the view, or the controller? No. Networking is a communications service, which your app can call on to send or retrieve data. Often the data will come from the model, but sending and retrieving that data is not itself part of your model.
  • is Core Graphics part of the model, the view, or the controller? No. Core Graphics is a display primitive that helps objects represent themselves on the display. Often those objects will be views, but the means by which they represent themselves are part of an external service.
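
Here’s a sketch of what “calling on a service” can look like in code – the protocol and class names are mine, not from the talk. The model depends on an abstract store, and Core Data (or whatever replaces it) lives behind that boundary:

@class Trade;

// A hypothetical boundary: the model sees only this protocol, never Core Data.
@protocol TradeStore <NSObject>

- (void)saveTrade:(Trade *)trade;
- (NSArray *)tradesMatchingSymbol:(NSString *)symbol;

@end

// The model calls on the service; whether it's backed by Core Data, a web
// service or a flat file can change without touching the model at all.
@interface Portfolio : NSObject

- (instancetype)initWithTradeStore:(id <TradeStore>)store;
- (void)recordTrade:(Trade *)trade;

@end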

So building an app in a solution-first approach can help focus on what the app does, removing any unfortunate coupling between that and what the app looks like or what the app uses. That’s the bit that helps you. Now, about the other reason for doing this, the reason that makes it easier for me to help you.

When I come to look at your code, and this happens fairly often, I need to work out quickly what it does and what it should do, so that I can work out why there’s a difference between those two things and what I need to do about it. If your app is organised in such a way that I can see how each class contributes to the problem being solved, then I can readily tell where I go for everything I need. If, on the other hand, your project looks like this:

MVC organisation

Then the only thing I can tell is that your app is entirely interchangeable with every other app that claims to be nothing more than MVC. This includes every Rails app, ever. Here’s the thing. I know what MVC is, and how it works. I know what UIKit is, and why Apple thinks everything is a view controller. I get those things, your app doesn’t need to tell me those things again. It needs to reflect not the similarities, which I’ve seen every time I’ve launched Project Builder since around 2000, but the differences, which are the things that make your app special.

OK, so that’s the theory. We should start from the problem, and move to the solution, then adapt the solution onto the presentation and other technology we need to use to get a product we can sell this week. When the technologies and the presentations change, we can adapt onto those new things, to get the product we can sell next week, without having to worry about whether we broke solving the problem. But what’s the practice? How do we do that?

Start with the model.

Remember that, in Apple’s words:

model objects represent knowledge and expertise related to a specific problem domain

so solving the problem first means modelling the problem first. Now you can do this without regard to any particular libraries or technology, although it helps to pick a programming language so that you can actually write something down. In fact, you can start here:

Command line tool

A Foundation command-line tool has everything you need to solve your problem (in fact it contains a few more things than that, to make up for erstwhile deficiencies in the link editor, but we’ll ignore those things). It lets you make objects, and it lets you use all those boring things that were solved by computer scientists back in the 1960s, like strings, collections and memory allocation.
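
Here’s a sketch of that starting point – the Budget object is hypothetical, a nod to the user goal quoted in the references below – in which the whole “app” is a plain model object exercised from main(), with nothing but Foundation in sight:

#import <Foundation/Foundation.h>

// A problem-domain object: no view controllers, no storyboards, just the solution.
@interface Budget : NSObject

- (instancetype)initWithAmount:(NSDecimalNumber *)amount;
- (NSDecimalNumber *)remainingAfterSpending:(NSDecimalNumber *)expense;

@end

@implementation Budget
{
    NSDecimalNumber *_amount;
}

- (instancetype)initWithAmount:(NSDecimalNumber *)amount
{
    if ((self = [super init])) {
        _amount = amount;
    }
    return self;
}

- (NSDecimalNumber *)remainingAfterSpending:(NSDecimalNumber *)expense
{
    return [_amount decimalNumberBySubtracting:expense];
}

@end

int main(int argc, char *argv[]) {
    @autoreleasepool {
        Budget *budget = [[Budget alloc] initWithAmount:
                          [NSDecimalNumber decimalNumberWithString:@"100"]];
        NSLog(@"left over: %@", [budget remainingAfterSpending:
                          [NSDecimalNumber decimalNumberWithString:@"19.99"]]);
    }
    return 0;
}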

So with a combination of the subset of Objective-C, the bits of Foundation that should really be in Foundation, and unit tests to drive the design of the solution, we can solve whatever problem it is that the app needs to solve. There’s just one difficulty, and that is that the problem is only solved for people who know how to send messages to objects. Now we can worry about those fast-moving targets of presentation and technology choice, knowing that the core of our app is a stable, well-tested collection of objects that solve our customers’ problem. We expose aspects of the solution by adapting them onto our choice of user interface, and similarly any other technology dependencies we need to introduce are held at arm’s length. We test that we integrate with them correctly, but not that using them ends up solving the problem.

If something must go, then we drop it, without worrying whether we’ve broken our ability to solve the problem. The libraries and frameworks are just services that we can switch between as we see fit. They help us solve our problem, which is to help everyone else to solve their problem.

And yes, when you come to build the user interface, then model-view-controller will be important. But only in adapting the solution onto the user interface, not as a strategy for stuffing an infinite number of coats onto three coat hooks.
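
And to sketch what “adapting the solution onto the user interface” can mean, using that hypothetical Budget object from above: the view controller shrinks to a thin adapter, translating UI events into messages the model already understands and the model’s answers into something the view can display.

#import <UIKit/UIKit.h>

// A sketch of MVC-as-adapter: the view controller knows how to present the
// model's answers, but nothing about how the model arrives at them.
@interface BudgetViewController : UIViewController

@property (nonatomic, strong) Budget *budget;
@property (nonatomic, weak) IBOutlet UITextField *expenseField;
@property (nonatomic, weak) IBOutlet UILabel *remainingLabel;

- (IBAction)spend:(id)sender;

@end

@implementation BudgetViewController

- (IBAction)spend:(id)sender
{
    NSDecimalNumber *expense = [NSDecimalNumber decimalNumberWithString:self.expenseField.text];
    self.remainingLabel.text = [[self.budget remainingAfterSpending:expense] stringValue];
}

@end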

References

None of the above is new, it’s just how Object-Oriented Programming is supposed to work. In the first part of my MVC series, I investigated Thing-Model-View-Editor and the progress from the “Thing” (a problem in the real world that must be solved) to a “Model” (a representation of that problem and its solution in the computer). That article relied on sources from Trygve Reenskaug, who described (in 1979) the process by which he moved from a Thing to a Model and then to a user interface.

In his 1992 book Object-Oriented Software Engineering: A Use-Case Driven Approach, Ivar Jacobson describes a formalised version of the same motion, based on documented use cases. Some teams replaced use cases with user stories, which look a lot like Reenskaug’s user goals:

An example: To get better control over my finances, I would need to set up a budget; to keep account of all income and expenditure; and to keep a running comparison between budget and accounts.

Alistair Cockburn described the Ports and Adapters Architecture (earlier known as the hexagonal architecture), in which the application’s use cases (i.e. the ways in which it solves problems) are at the core, and everything else is kept at a distance through adapters which can easily be replaced.

Garbage-collected Objective-C

When was a garbage collector added to Objective-C? If you follow Apple’s work with the language, you might be inclined to believe that it was in 2008 when AutoZone was added as part of Objective-C 2.0 (the AutoZone collector has since been deprecated by Apple, and I’m not sure whether anyone else ever adopted it).

With a slightly wider knowledge of the language’s history, you can push this date back a bit. The GNUstep project—a Free Software reimplementation of Apple’s (formerly NeXT’s) APIs—has been using the Boehm–Demers–Weiser collector for a while. How long? I can’t tell exactly, but a keyword search in the project’s version control logs makes me think that most of the work to support it was done by one person in mid-2002:

r13976 | nico | 2002-06-26 15:34:16 +0100 (Wed, 26 Jun 2002) | 3 lines

Do not add -lobjc_gc -lgc flags when compiling with gc=yes – should now
be added automatically by gnustep-make


r13971 | nico | 2002-06-25 18:28:56 +0100 (Tue, 25 Jun 2002) | 2 lines

Tidyup for gc=yes with old compilers


r13970 | nico | 2002-06-25 18:23:05 +0100 (Tue, 25 Jun 2002) | 2 lines

Tidied code to compile with gc=yes and older compilers


r13969 | nico | 2002-06-25 13:36:11 +0100 (Tue, 25 Jun 2002) | 3 lines

Tidied some indentation; a couple of insignificant changes to have it compile
under gc


r13968 | nico | 2002-06-25 13:15:04 +0100 (Tue, 25 Jun 2002) | 2 lines

Tidied code which wouldn’t compile with gc=yes and gcc < 3.x


r13967 | nico | 2002-06-25 13:13:19 +0100 (Tue, 25 Jun 2002) | 2 lines

Tidied code which was not compiling with the garbage collector


r13966 | nico | 2002-06-25 13:12:17 +0100 (Tue, 25 Jun 2002) | 2 lines

Tidied code which was not compiling with gc=yes

That was, until fairly recently, the earliest example I knew about. Then I discovered a conference talk by Paulo Ferreira:

Reclaiming storage in an object oriented platform supporting extended C++ and Objective-C applications

This is a paper presented at “1991 International Workshop on Object Orientation in Operating Systems”. 1991. That is—obviously—11 years before GNUstep’s GC work and 17 years before Apple released AutoZone.

Comandos

The context in which this work was being done is a platform called Comandos. I’d never heard of that before—and I thought I knew Objective-C!

Judging from the report linked above, Comandos is a platform for distributed and parallel object-oriented software, based on UNIX but supporting multiple variants. The fact that it was created in 1986 means that both the languages supported—Objective-C and C++—were new at the time. Indeed the project was contemporary with the development of NeXTSTEP, which was publicly released to developers in 1988.

The 1994 summary report doesn’t mention Objective-C: just C++, Eiffel and a bespoke language called Guide. It’s possible that the platform supported ObjC simply because they used gcc which picked up ObjC support during the life of Comandos; however this seems unlikely as there would be significant work in making Objective-C objects work with their platform’s distributed messaging interface and persistence subsystem.

Why ObjC should be one of two languages mentioned (along with C++) in the 1991 paper on garbage collection, but zero of three mentioned (C++, Eiffel, Guide) in 1994 will have to remain a mystery for now. Looking into the references for Ferreira’s paper, I can find one mention of Objective-C as the inspiration for their own, custom C-based message dispatch system, but no indication that they actually used Objective-C.

The Garbage Collector

I’m not really an expert at garbage collectors. In fact, I have no idea what I’m doing. I appreciate them when they’re around, and leak or crash things occasionally when they’re not.

To my uneducated eye, the description of the Ferreira 1991 garbage collector and Apple’s description of their collector (no link I’m afraid, it was session 940 at WWDC 2008) look quite different. AutoZone is conservative (like B-D-W) but only works on Objective-C objects. Ferreira’s collector operates, like B-D-W, on any memory block including new C++ instances and C heap allocations. Apple’s collector is supposed to avoid blocking wherever it can, a constraint not mentioned in the Ferreira paper.

All of Comandos, GNUstep and Cocoa (Apple’s Objective-C framework) have systems for distributed objects that complicate collection: does some remote process have a handle on memory in my address space? The proxy system used by Cocoa and GNUstep makes it easy to answer this question. Comandos used a different technique, where objects local to a process were “volatile” and objects shared between processes were “persistent”. Persistent objects were subject to a different lifecycle management process, so the Ferreira GC didn’t interact with them.

As an aside, Apple’s garbage collector also needed to provide a “mixed mode”—support for code that could be loaded into either a garbage-collected or manually managed process.

Conclusions

Memory management is hard. Making programmers do it themselves leads to all sorts of problems. Doing it automatically is also hard, and many different approaches have been tried over the last few decades. Interestingly, Apple has (for the moment) settled on a “none of the above” approach, using a compiler-inserted reference counting system based on the manual ownership tracking previously implemented by the frameworks.

What interests me most about this paper on Objective-C garbage collection is not so much its technical content (which it’s actually rather light on, containing only conversational overviews of the algorithms and no information about results), but the fact that it existed at all and I, as someone who considers himself an experienced Objective-C programmer, did not know anything about it or its project.

That’s why I started this blog [Ed: referring to the blog these posts are imported from] by discussing it. A necessary prerequisite to deciding whether the literature has something useful to tell us is knowing about its existence. I’m really surprised that it took so long for me to find out about something that’s almost directly related to my everyday work. Mind you, maybe I shouldn’t feel too bad: the author of AutoZone told me he hadn’t heard of it, either.

At the old/new interface: jQuery in WebObjects

It turns out to be really easy to incorporate jQuery into an Objective-C WebObjects app (targeting GNUstep Web). In fact, it doesn’t really touch the Objective-C source at all. I defined a WOJavaScript object that loads jQuery itself from the application’s web server resources folder, so it can be reused across multiple components:

jquery_script:WOJavaScript {scriptFile="jquery-2.0.2.js"}

Then in the components where it’s used, any field that needs uniquely identifying should have a CSS identifier, which can be bound via WebObjects’s id binding. In this example, a text field for entering an email address in a form will only be enabled if the user has checked a “please contact me” checkbox.

email_field:WOTextField {value=email; id="emailField"}
contact_boolean:WOCheckBox {checked=shouldContact; id="shouldContact"}

The script itself can reside in the component’s HTML template, or in a WOJavaScript that loads it from the app’s resources folder or returns JavaScript that’s been prepared by the Objective-C code.

    <script>
function toggleEmail() {
    var emailField = $("#emailField");
    var isChecked = $("#shouldContact").prop("checked");
    emailField.prop("disabled", !isChecked);
    if (!isChecked) {
        emailField.val("");
    }
}
$(document).ready(function() {
    toggleEmail();
    $("#shouldContact").click(toggleEmail);
});
    </script>
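
The last of those options – JavaScript prepared by the Objective-C code – might look something like this sketch. I’m assuming GSW’s WOJavaScript honours the scriptString binding the same way WebObjects’ does, and the toggleScript method name is mine:

ready_script:WOJavaScript {scriptString=toggleScript}

and then in the component’s Objective-C class:

- (NSString *)toggleScript
{
    // Computed in Objective-C, so server-side values could be interpolated
    // into the script before it reaches the browser.
    return @"$(document).ready(function() { toggleEmail(); $(\"#shouldContact\").click(toggleEmail); });";
}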

I’m a complete newbie at jQuery, but even so that was easier than expected. I suppose the lesson to learn is that old technology isn’t necessarily incapable technology. People like replacing their web backend frameworks every year or so; whether there’s a reason (beyond caprice) warrants investigation.

HATEOAS app structure explained through some flimsy analogy

You are in a tall, narrow view. A vibrant, neon sign overhead tells you that this is the entrance to “Stocks” – below it is one of those scrolling news tickers you might see on Times Square. In front of you lies a panel of buttons. Before you can do anything, an automatic droid introduces itself as “Searchfield”. It invites you to tell it the name of a stock you’d like to know the price of, and tells you that it will then GET the price.

You can see:

Searchfield
A button marked “Price”
A button marked “Market Cap”
A box marked “PUT trades here”

There is an exit going Back. Additionally you can go Home.