By your _cmd

This post is a write-up of a talk I gave at Alt Tech Talks: London on the Objective-C runtime. Seriously though, you should’ve been there.

The Objective-C runtime?

That’s the name of the library of C functions that implement the nuts and bolts of Objective-C. Objects could just be represented as C structures, and methods could just be implemented as C functions. In fact they sort of are, but with some extra capabilities. These structures and functions are wrapped in this collection of runtime functions that allows Objective-C programs to create, inspect and modify classes, objects and methods on the fly.

It’s the Objective-C runtime library that works out which methods get executed, too. The [object doSomething] syntax does not directly resolve a method and call it. Instead, a message is sent to the object (which gets called the receiver in this context). The runtime library gives objects the opportunity to look at the message and decide how to respond to it. Alan Kay repeatedly said that message-passing is the important part of Smalltalk (from which Objective-C derives), not objects:

I’m sorry that I long ago coined the term “objects” for this topic because it gets many people to focus on the lesser idea.

The big idea is “messaging” – that is what the kernal[sic] of Smalltalk/Squeak is all about (and it’s something that was never quite completed in our Xerox PARC phase). The Japanese have a small word – ma – for “that which is in between” – perhaps the nearest English equivalent is “interstitial”. The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be.

Indeed in one article describing the Smalltalk virtual machine the programming technique is called the message-passing or messaging paradigm. “Object-Oriented” is used to describe the memory management system.

Throughout this talk and post I’m talking about the ObjC runtime, but there are many Objective-C runtimes. They all support an object’s introspective and message-receiving capabilities, but they have different features and work in different ways (for example, Apple’s runtime sends messages in one step, but the GNU runtime looks up messages and then invokes the discovered function in two steps). All of the discussion below relates to Apple’s most modern runtime library (the one that is delivered as part of OS X since 10.5 and iOS).

In the talk, I decided to examine a few specific areas of the runtime library’s behaviour. I looked for things that I wanted to understand better, and came up with questions I wanted to answer as part of the talk.

Dynamic class creation

Can I implement Key-Value Observing?

While I was preparing the talk, a post called KVO considered harmful started to get a lot of coverage. That post raises a lot of valid criticisms of the Key-Value Observing API, but rather than throw away the Observer pattern I wanted to explore a new implementation.

The observed (pardon the pun) behaviour of KVO is to privately subclass the observed object’s class, so that it can customise the object’s behaviour to call the KVO callback. That’s done through a function called objc_duplicateClass; unfortunately, the documentation tells us that we should not call this function ourselves.

It’s still possible to implement an Observer pattern that uses the same secret-subclass behaviour, by allocating and registering a “class pair”. What’s a class pair? Well, each class in Objective-C is really two classes: the class object defines the instance methods, and the “metaclass” defines the class methods. So each class is really a singleton instance of its metaclass.

The ObserverPattern implementation shows how this works. When you add an observer to an object, the receiver first works out whether it’s an instance of the observable class. If it needs to create that class, it does so: adding our own implementations of -dealloc to clean up after ourselves, and -class so that, like KVO observable objects, the generated class name doesn’t appear when you ask an observed object its type.

Having created the class, the code goes on to add a setter for the conventional Key-Value Coding selector name for the property: this setter grabs the old and new values of the property and invokes the callback which was supplied as a block object. Because we can, the block is dispatched asynchronously.

Notice that the -addObserverForKey:withBlock: method uses object_setClass() to replace the receiver’s class with the newly-constructed class. The main effect of this is to change the way messages are resolved onto methods, but you need to be careful that the original and replaced class have the same instance variable layout too. Instance variables are also looked up via the runtime, and changing the class could alter where the runtime thinks the bytes are for any given variable.

We have a little extra hurdle to overcome in storing the collection of observer tokens, because there’s nowhere to put them. Adding an instance variable to the ObserverPattern[…] class would not work, as instances of that class are never actually allocated. The objects involved have the instance variables of their initial class, which won’t include space for the observers.

The Objective-C runtime provides for this situation by giving us associated objects. Any object can have what is, conceptually, a dictionary of other objects maintained by the runtime. Observed objects can store and retrieve their observer tokens via associated references, and no extra instance variables are needed.

A little problem in the ObserverPattern implementation will become clear if you run it enough times. The observation callbacks are sent asynchronously, and can be delivered out of sequence. That means the observer can’t actually tell what the final state of the observed key is, because the “new value” received in the callback might have already been replaced. I left this fun issue in to demonstrate that KVO’s synchronous implementation is a feature, not a bug.

Creating objects

What are those extra bytes for?

When you create an Objective-C object, the runtime lets you allocate some extra storage at the end of the space reserved for its instance variables. What’s the point of that? All you can do is get a pointer to the start of the space (using object_getIndexedIvars)…hmm, indexed ivars. Well, I suppose an array is a pretty obvious use of indexed ivars…

Let’s build NSArray! There are two things to see in SimpleArray: the most obvious is the use of the class cluster pattern. The reason is that the object returned from +alloc—where we’d normally allocate space for the object—cannot know how big it’s going to be. We need to use the arguments to -initWithObjects:count: to know how many objects there are in the array. So +alloc returns a placeholder, which is then able to allocate and return the real array object.

One obvious question to ask is why we’d do this at all. Why not just use calloc() to grab an appropriately-sized buffer in which to store the object pointers? The answer is to do with a low-level performance concern called locality of reference. We know from the design of the array class that pretty much every time the array pointer is used, the buffer pointer will be used too. Putting them next to each other in RAM means we don’t have to chase a pointer off to some other allocation just to find another pointer.

Message dispatch

Just how does message forwarding work?

One of the powerful features of Objective-C is that an object doesn’t have to implement a method when it’s compiled to be able to respond to messages with that selector name. It can lazily resolve the methods, or it can forward them to another object, or it can raise an error, or it can do something else. But something about this feature was bugging me: message forwarding (which happens in the runtime) calls -forwardInvocation:, passing it an NSInvocation object. But NSInvocation is defined in Foundation: does the runtime library need to “know” about Foundation to work?

I tracked down what was going on and found that no, it does not need to know about Foundation. The runtime lets applications define the forwarding function that gets called when objc_msgSend() can’t find the implementation for a selector. On startup, CoreFoundation[+] injects the forwarding function that does -forwardInvocation:. So presumably my application can do its own thing, right?

Let’s build Ruby! OK, not all of Ruby. But Ruby has a #method_missing function that gets called when an object receives a message it doesn’t understand, which is much more similar to Smalltalk’s approach than to Objective-C’s. Using objc_setForwardHandler, it’s possible to implement methodMissing: in our Objective-C classes.

Conclusion

The Objective-C runtime is a powerful way to add a lot of dynamic behaviour to an application for very little work. Some developers don’t use it much beyond swizzling methods for debugging, but it has facilities that make it a powerful tool for real application code too.


[+]CoreFoundation and Foundation are really siblings, and they each expose pieces of the other’s implementation, but one has a C interface and the other an Objective-C interface. Various Objective-C classes are actually part of CoreFoundation, including NSInvocation and the related NSMethodSignature class. NSObject is not in either of these libraries: it’s now defined in the runtime itself, so that the runtime’s memory management functions know about -retain, -release and so on[++]. On the other hand, most of the *behaviour* of NSObject is implemented by categories higher up. And, of course, this is all implementation detail and the locations of these classes could be (and are) moved between versions of the frameworks.

[++]Other languages like Smalltalk and Ruby have a simple base class that does nothing except know how to be an object, called ProtoObject or BaseObject. You could imagine the runtime supplying—and being coupled to—ProtoObject, and (Core)Foundation supplying NSObject and NSProxy as subclasses of ProtoObject.


The Ignoble Programmer

Two programmers are taking a break from their work, relaxing on a bench in the park across from their office. As they discuss their weekend plans, a group of people jog past, each carrying their laptop in a yoke around their neck and furiously typing as they go.

“Oh, there goes the Smalltalk team,” says the senior of the two programmers on the bench. “They have to do everything at run-time.”

I love jokes. And not just because they’re sometimes funny, though that helps: I certainly find I enjoy a conversation and can relax more when at least two of the people involved are having fun. When only one person is joking, it gets awkward (particularly if everyone else is from HR). But a little levity can go a long way toward disarming an unpleasant truth so that it can be discussed openly. Political leaders through the ages have taken advantage of this by appointing jesters and fools to keep them aware of intrigues in the courts: even the authors of the American Bill of Rights remembered the satirist before the shooter.

I also like jokes because of the thought that goes into constructing a good (or deliberately bad) one. There’s a certain kind of creativity that goes into identifying an apparently absurd connection, exactly because of the absurdity. Being able to construct a joke, and being practised at constructing jokes, means being able to see new contexts and applications for existing ideas. Welcome to the birthplace of reuse and exploring the bounds of a construct’s application: welcome to the real home of software architecture.

But there’s a problem, or at least an opportunity (or maybe just a few thousand consulting dollars to be made and a book to be written). That problem is this: everyone else puts way more effort into their jokes than programmers do. Take this one, from the scientists:

Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon

They didn’t just joke about doing a brain scan of a dead fish, they did a brain scan of a dead fish. And published the (serendipitous and unexpected) results. But they didn’t just angle for a laugh, they had a real point. The subtitle of their paper:

An Argument For Proper Multiple Comparisons Correction

And isn’t it fun that some microbiologists demonstrated that beards are significant vectors for microbial infections?

Both of these examples were lifted from the Annals of Improbable Research’s Ig Nobel Prizes, awarded for “achievements that first make people laugh, and then make them think”. The Ig Nobels have been awarded every year since 1991, and in that time only one computer science award has been granted. That award was given to the developer behind PawSense, a utility that detects and blocks typing caused by your cat walking across your keyboard.

Jokes that first make you laugh, and then make you think, are absolutely the best jokes you can make about my work. If I conclude “you’re right, that is absurd, but what if…” then you’ve done it right. Jokes that are thought-terminating statements can make us laugh, and maybe make us feel good about what we’re doing, but cannot make us any better at it because they don’t give us the impetus to reflect on our craft. Rather, they make us feel smug about knowing better than the poor sap who’s the butt of the joke. They confirm that we’ve nothing to learn, which is never the correct outlook.

We need more Ig Nobel-quality achievements in computing. Disarming the absurd and the downright broken in programming and presenting them as jokes can first make us laugh, and then make us think.

N.B. My complete connection to the Annals of Improbable Research is that I helped out on the AV desk at a couple of their talks. At their talk in Oxford in 2006 I was inducted into the Luxuriant and Flowing Hair Club for Scientists.


Agile application security

There’s a post by clever security guy Jim Bird on Appsec’s Agile Problem: how can security experts participate in fast-moving agile (or Agile™) projects without either falling behind or dragging the work to a halt?

I’ve been the Appsec person on such projects, so hopefully I’m in a position to provide at least a slight answer :-).

On the team where I did this work, projects began with the elevator pitch, application definition statement, whatever you want to call it. “We want to build X to let Ys do Z”. That, often with a straw man box-and-line system diagram, is enough to begin a conversation between the developers and other stakeholders (including deployment people, marketing people, legal people) about the security posture.

How will people interact with this system? How will it interact with our other systems? What data will be touched or created? How will that be used? What regulations, policies or other constraints are relevant? How will customers be made aware of relevant protections? How can they communicate with us to identify problems or to raise their concerns? How will we answer them?

Even this one conversation has achieved a lot: everybody is aware of the project and of its security risks. People who will make and support the system once it’s live know the concerns of all involved, and that’s enough to remove a lot of anxiety over the security of the system. It also introduces awareness while we’re working of what we should be watching out for. A lot of the suggestions made at this point will, for better or worse, be boilerplate: the system must make no more use of personal information than existing systems. There must be an issue tracker that customers can confidentially participate in.

But all talk and no trouser will not make a secure system. As we go through the iterations, acceptance tests (whether manually run, automated, or results of analysis tools) determine whether the agreed risk profile is being satisfied.

Should there be any large deviations from the straw man design, the external stakeholders are notified and we track any changes to the risk/threat model arising from the new information. Regular informal lunch sessions give them the opportunity to tell our team about changes in the rest of the company, the legal landscape, the risk appetite and so on.

Ultimately this is all about culture. The developers need to trust the security experts to make their expertise available and help out with making it relevant to their problems. The security people need to trust the developers to be trying to do the right thing, and to be humble enough to seek help where needed.

This cultural outlook enables quick reaction to security problems detected in the field. Where the implementors are trusted, the team can operate a “break glass in emergency” mode where solving problems and escalation can occur simultaneously. Yes, it’s appropriate to do some root cause analysis and design issues out of the system so they won’t recur. But it’s also appropriate to address problems in the field quickly and professionally. There’s a time to write a memo to the shipyard suggesting they use thicker steel next time, and there’s a time to put your finger over the hole.

If there’s a problem with agile application security, then, it’s a problem of trust: security professionals, testers, developers and other interested parties[*] must be able to work together on a self-organising team, and that means they must exercise knowledge where they have it and humility where they need it.

[*] This usually includes lawyers. You may scoff at the idea of agile lawyers, but I have worked with some very pragmatic, responsive, kind and trustworthy legal experts.


How to answer questions the smart way

You may have read how to ask questions the smart way by Eric S. Raymond. You may have even quoted it when faced with a question you thought was badly-formed. I want you to take a look at a section near the end of the article.

How to answer questions in a helpful way is the part I’m talking about. It’s a useful section. It reminds us that questions are part of a dialogue, which is a two-way process. Sometimes questions seem bad, but then giving bad answers is certainly no way to make up for that. What else should we know about answering questions?

The person who asked the question has had different experiences than you. The fact that you do not understand why the question should be asked does not mean that the question should not be asked. “Why would you even want to do that?” is not an answer.

Answer at a level appropriate to the question. If the question shows a familiarity with the basics, there’s no need to mansplain trivial details in the answer. On the other hand, if the question shows little familiarity with the basics, then an answer that relies on advanced knowledge is just pointless willy-waving.

The shared values that pervade your culture are learned, not innate. Not everyone has learned them yet, and they are not necessarily even good, valuable or correct. This is a point that Raymond misses with quotes like this:

You shouldn’t be offended by this; by hacker standards, your respondent is showing you a rough kind of respect simply by not ignoring you. You should instead be thankful for this grandmotherly kindness.

What this says is: this is how we’ve always treated outsiders, so this is how you should expect to be treated. Fuck that. You’re better than that. Give a respectful, courteous answer, or don’t answer. It’s really that simple. We can make a culture of respect and courtesy normative, by being respectful and courteous. We can make a culture of inclusion by not being exclusive.

I’m not saying that I’m any form of authority on answering questions. I’m far from perfect, and by exploring the flaws I know I perceive in myself and making them explicit I make them conscious, with the aim of detecting and correcting them in the future.


Story points: because I don’t know what I’m doing

The scenario

[Int. developer’s office. Developer sits at a desk that faces the wall. Two of the monitors on Developer’s desk are on stands, if you look closely you see that the third is balanced on the box set of The Art of Computer Programming, which is still in its shrink-wrap. Developer notices you and identifies an opportunity to opine about why the world is wrong, as ever.]

Every so often, people who deal with the real world instead of the computer world ask us developers annoying questions about how our work interacts with so-called reality. You’re probably thinking the same thing I do: who cares, right? I’m right in the middle of a totally cool abstraction layer on top of the operating system’s abstraction layer that abstracts their abstraction so I can interface it to my abstraction and abstract all the abstracts, what’s that got to do with reality and customers and my employer and stuff?

Ugh, damn, turning up my headphones and staring pointedly at the screen hasn’t helped, they’re still asking this question. OK, what is it?

Apparently they want to know when some feature will be done. Look, I’m a programmer, I’m absolutely the worst person to ask about time. OK, I believe that you might want to know whether this development effort is going to deliver value to the customers any time soon, and whether we’re still going to be ahead financially when we’re done, or whether it’d be better to take on some other work. And really I’d love to answer this question, except for one thing:

I have absolutely no idea what I’m doing. Seriously, don’t you remember all the other times that I gave you estimates and they were way off? The problem isn’t some systematic error in the way I think about how long it’ll take me to do stuff, it’s that while I can build abstractions on top of other abstractions I’m not so great at going the other way. Give me a short description of a task, I’ll try and work out what’s involved but I’m likely to miss something that will become important when I go to do it. It’s these missed details that add time, and I don’t know how many of those there will be until I get started.

The proposed solution

[Developer appears to have a brainwave]

Wait, remember how my superpower is adding layers of abstraction? Well your problem of estimation looks quite a lot like a nail to me, so I’ll apply my hammer! Let’s add a layer of abstraction on top of time!

Now you wanted to know how long it’ll take to finish some feature. Well I’ll tell you, but I won’t tell you in units of hours or days, I’ll use BTUs (Bullshit Time Units) instead. So this thing I’m working on will be about five BTUs. What do you mean, that doesn’t tell you when I’ll be done? It’s simple, duh! Just wait a couple of months, and measure how many BTUs we actually managed to complete. Now you know how many BTUs per day we can do, and you know how long everything takes!

[Developer puts their headphones back in, and turns to face the monitor. The curtain closes on the scene, and the Humble(-ish) Narrator takes the stage.]

The observed problem

Did you notice that the BTU doesn’t actually solve the stated problem? If it’s possible to track BTU completion over time until we know how many BTUs get completed in an iteration, then we are making the assumption that there is a linear relationship between BTUs and units of time. Just as there are 40 (or 90, if you picked the wrong recruiter) hours to the work week, so there are N BTUs to the work week. A BTU is worth x hours, and we just need to measure for a bit until we find the value of x.

But Developer’s problem was not a failure to understand how many hours there are in an hour. Developer’s problem was a failure to know what work is outstanding. An inability to foresee what work needs to be done cannot be corrected by any change to the way in which work to be done is mapped onto time. It is, to wear out even further an already tired saw, an unknown unknown.

What to do about it

We’re kind of stuck, really. We can’t tell how long something will take until we do it, not because we’re bad at estimating how long it’ll take to do something but because we’re bad at knowing what it is we need to do.

The little bit there about “until we do it” is, I think, what we need to focus on. I can’t tell you how long something I haven’t done will take, but I can probably tell you what problems are outstanding on the thing I’m doing now. I can tell you whether it’s ready now, or whether I think it’ll be ready “soon” or “not soon”.

So here’s the opportunity: we’ll keep whatever we’ve already got ready for immediate release. We’ll share information about which of the acceptance tests are passing, and if we were to release right now you’d know what customers will get from that. Whatever the thing we’re working on now is, we’ll be in a position to decide whether to switch away if we can do some more valuable work instead.


Updating my ObjC web app on git push

I look at SignUp.woa running on my Ubuntu server, and it looks like this.

Sign up for the APPropriate Behaviour print edition!

That title text doesn’t quite look right.

$ open -a TextWrangler Main.wo/Main.html
$ make
$ make check
$ git add -A
$ git commit -m "Use new main page heading proposed by marketing"
$ git push server master

I refresh my browser.

APPropriate Behaviour print edition: sign up here!

The gsw-hooks README explains how this works. My thanks to Dan North for guiding me around a problem I had between my keyboard and my chair.


Automated tests with the GNUstep test framework

Setup

Of course, it’d be rude not to use a temperature converter as the sample project in a testing blog post. The only permitted alternative is a flawed bank account model.

I’ll create a folder for my project, then inside it a folder for the tests:

$ mkdir -p TemperatureConverter/test
$ cd TemperatureConverter

The test runner, gnustep-tests, is a shell script that looks for tests in subfolders of the current folder. If I run it now, nothing will happen because there aren’t any tests. I’ll tell it that the test folder will contain tests by creating an empty marker file that the script looks for.

$ touch test/TestInfo

Of course, there still aren’t any tests, so I should give it something to build and run. The test fixture files themselves can be Objective-C or Objective-C++ source files.

$ cat > converter.m
#include "Testing.h"

int main(int argc, char **argv)
{
}
^D

Now the test runner has something to do, though not very much. Any of the invocations below will cause the runner to find this new file, compile and run it. It’ll also look for test successes and failures, but of course there aren’t any yet. Still, these invocations show how large test suites could be split up to let developers only run the parts relevant to their immediate work.

$ gnustep-tests #tests everything
$ gnustep-tests test #tests everything in the test/ folder
$ gnustep-tests test/converter.m #just the tests in the specified file

The first test

Following the standard practice of red-green-refactor, I’ll write the test that I want to be able to write and watch it fail. This is it:

#include "Testing.h"

int main(int argc, char **argv)
{
  TemperatureConverter *converter = [TemperatureConverter new];
  float minusFortyF = [converter convertToFahrenheit:-40.0];
  PASS(minusFortyF == -40.0, "Minus forty is the same on both scales");
  return 0;
}

The output from that:

$ gnustep-tests
Checking for presence of test subdirectories ...
--- Running tests in test ---

test/converter.m:
Failed build:     

      1 Failed build


Unfortunately we could not even compile all the test programs.
This means that the test could not be run properly, and you need
to try to figure out why and fix it or ask for help.
Please see /home/leeg/GNUstep/TemperatureConverter/tests.log for more detail.

Unsurprisingly, it doesn’t work. Perhaps I should write some code. This can go at the top of the converter.m test file for now.

#import <Foundation/Foundation.h>

@interface TemperatureConverter : NSObject
- (float)convertToFahrenheit:(float)celsius;
@end

@implementation TemperatureConverter
- (float)convertToFahrenheit:(float)celsius;
{
  return -10.0; //WAT
}
@end

I’m reasonably confident that’s correct. I’ll try it.

$ gnustep-tests
Checking for presence of test subdirectories ...
--- Running tests in test ---

test/converter.m:
Failed test:     converter.m:19 ... Minus forty is the same on both scales

      1 Failed test


One or more tests failed.  None of them should have.
Please submit a patch to fix the problem or send a bug report to
the package maintainer.
Please see /home/leeg/GNUstep/TemperatureConverter/tests.log for more detail.

Oops, I seem to have a typo which should be easy enough to correct. Here’s proof that it now works:

$ gnustep-tests
Checking for presence of test subdirectories ...
--- Running tests in test ---

      1 Passed test

All OK!

Second point, first set

If every temperature in Celsius were equivalent to -40F, then the two scales would not be measuring the same thing. It’s time to discover whether this class is useful for a larger range of inputs.

All of the tests I’m about to add are related to the same feature, so it makes sense to document these tests as a group. The suite calls these groups “sets”, and it works like this:

int main(int argc, char **argv)
{
  TemperatureConverter *converter = [TemperatureConverter new];
  START_SET("celsius to fahrenheit");
  float minusFortyF = [converter convertToFahrenheit:-40.0];
  PASS(minusFortyF == -40.0, "Minus forty is the same on both scales");
  float freezingPoint = [converter convertToFahrenheit:0.0];
  PASS(freezingPoint == 32.0, "Water freezes at 32F");
  float boilingPoint = [converter convertToFahrenheit:100.0];
  PASS(boilingPoint == 212.0, "Water boils at 212F");
  END_SET("celsius to fahrenheit");
  return 0;
}

Now at this point I could build a look-up table to map inputs onto outputs in my converter method, or I could choose a linear equation.

- (float)convertToFahrenheit:(float)celsius;
{
  return (9.0/5.0)*celsius + 32.0;
}

Even tests have aspirations

Aside from documentation, test sets have some useful properties. Imagine I’m going to add a feature to the app: the ability to convert from Fahrenheit to Celsius. This is the killer feature, clearly, but I still need to tread carefully.

While I’m developing this feature, I want to integrate it with everything that’s in production so that I know I’m not breaking everything else. I want to make sure my existing tests don’t start failing as a result of this work. However, I’m not exposing it for public use until it’s ready, so I don’t mind so much if tests for the new feature fail: I’d like them to pass, but it’s not going to break the world for anyone else if they don’t.

Test sets in the GNUstep test suite can be hopeful, which represents this middle ground. Failures of tests in hopeful sets are still reported, but as “dashed hopes” rather than failures. You can easily separate out the case “everything that should work does work” from broken code under development.

  START_SET("fahrenheit to celsius");
  testHopeful = YES;
  float minusFortyC = [converter convertToCelsius:-40.0];
  PASS(minusFortyC == -40.0, "Minus forty is the same on both scales");
  END_SET("fahrenheit to celsius");

The report of dashed hopes looks like this:

$ gnustep-tests 
Checking for presence of test subdirectories ...
--- Running tests in test ---

      3 Passed tests
      1 Dashed hope

All OK!

But we were hoping that even more tests might have passed if
someone had added support for them to the package.  If you
would like to help, please contact the package maintainer.

Promotion to production

OK, well one feature in my temperature converter is working so it’s time to integrate it into my app. How do I tell the gnustep-tests script where to find my class if I remove it from the test file?

I move the classes under test into a library target (a shared library, static library or framework) rather than an application target, then arrange for the tests to link that library and use its headers. How you do that depends on your build system and the arrangement of your source code. In the GNUstep world it’s conventional to define a target called “check”, so that developers can type make check to run the tests. I also add an optional argument for choosing a subset of the tests, so the three examples of running the suite at the beginning of this post become:

$ make check
$ make check suite=test
$ make check suite=test/converter.m

I also arrange for the app to link the same library and use its headers, so the tests and the application use the same logic compiled with the same tools and settings.

Here’s how I arranged for the TemperatureConverter to be in its own library, using gnustep-make. Firstly, I broke the class out of test/converter.m and into a pair of files at the top level, TemperatureConverter.[hm]. Then I created this GNUmakefile at the same level:

include $(GNUSTEP_MAKEFILES)/common.make

LIBRARY_NAME=TemperatureConverter
TemperatureConverter_OBJC_FILES=TemperatureConverter.m
TemperatureConverter_HEADER_FILES=TemperatureConverter.h

-include GNUmakefile.preamble

include $(GNUSTEP_MAKEFILES)/library.make

-include GNUmakefile.postamble

Now my tests can’t find the headers or the library, so the suite no longer builds. In GNUmakefile.postamble I’ll create the “check” target described above to run the test suite with the correct environment. GNUmakefile.postamble is included (if present) after all of gnustep-make’s rules, so it’s a good place to define custom targets while ensuring that your main target (the library in this case) is still the default.

TOP_DIR := $(CURDIR)

check::
    @(\
    ADDITIONAL_INCLUDE_DIRS="-I$(TOP_DIR)";\
    ADDITIONAL_LIB_DIRS="-L$(TOP_DIR)/$(GNUSTEP_OBJ_DIR)";\
    ADDITIONAL_OBJC_LIBS=-lTemperatureConverter;\
    LD_LIBRARY_PATH="$(TOP_DIR)/$(GNUSTEP_OBJ_DIR):${LD_LIBRARY_PATH}";\
    export ADDITIONAL_INCLUDE_DIRS;\
    export ADDITIONAL_LIB_DIRS;\
    export ADDITIONAL_OBJC_LIBS;\
    export LD_LIBRARY_PATH;\
    gnustep-tests $(suite);\
    if grep -q "Failed test" tests.sum; then exit 1; fi\
    )

The change to LD_LIBRARY_PATH is required so that the tests can load the freshly built version of the library. It must come first in the library path so that the tests are definitely exercising the code in the latest build, not some other version that might be installed elsewhere on the system. The last line fails the build if any tests failed, meaning this check can be used as part of a continuous integration system.
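The failure check relies on grep exiting 0 when the pattern matches and non-zero otherwise. A minimal shell sketch of the same idiom, using an inline summary in place of a real tests.sum:

```shell
# grep -q exits 0 on a match, so a summary mentioning
# "Failed test" should fail the build.
summary="3 Passed tests
1 Failed test"

if printf '%s\n' "$summary" | grep -q "Failed test"; then
  result=1   # at least one test failed: fail the build
else
  result=0   # clean run
fi
echo "$result"
```

Here the summary contains a failure, so the sketch prints 1; with a clean summary it would print 0.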

More information

The GNUstep test framework is part of gnustep-make, and documentation can be found in its README. Nicola Pero has some useful tutorials about the rest of the make system, having written most of it himself.


Happy 19th birthday, Cocoa!

On October 19th, 1994 NeXT Computer, Inc. (later NeXT Software, Inc.) published a specification for OpenStep, a cross-platform interface for application programming, based on their existing Objective-C frameworks and the Display PostScript graphics system.

A little bit of history

First there came message-passing object oriented programming, in the form of Smalltalk. Well, not first; I mean, first there was Simula 67, and there were even things before that, but every story has to start somewhere. In 1983 Brad Cox added Smalltalk messaging to the C language to create the Object-Oriented Pre-Compiler. Through his work with Tom Love at Productivity Products International, this eventually became Objective-C.

Object-Oriented Programming: an Evolutionary Approach

PPI (later Stepstone) may have had larger customers than NeXT, but none with a bigger impact on the software industry. In 1988 NeXT released the first version of its UNIX platform, NEXTSTEP. Its application programming interface combined the “application kit” of Objective-C objects representing windows, menus and views with Adobe’s Display PostScript to provide a high-fidelity (I mean, if you like grey, I suppose) WYSIWYG app environment.

NeXTSTEP Programming Step One: Object-Oriented Applications

N.B.: my reason for picking the Garfinkel and Mahoney book will become clear later. It happens to be the book I learned to make apps from, too.

Certain limitations in the NEXTSTEP APIs became clear. I will not exhaustively list them nor attempt to put them into any sort of priority, suffice it to say that significant changes became necessary. When the Enterprise Objects Framework came along, NeXT also introduced the Foundation Kit, a “small set of base utility classes” designed to promote common conventions, portability and enhanced localisation through Unicode support. Hitherto, applications had used C strings and arrays.

It was time to let app developers make use of the Foundation Kit. For this (and undoubtedly other reasons), the application kit was rereleased as the App Kit, documented in the specification we see above.

The release of OpenStep

OpenStep was not merely an excuse to do application kit properly, it was also part of NeXT’s new strategy to license its software and tools to other platform vendors rather than limiting it to the few tens of thousands of its own customers. Based on the portable Foundation Kit, NeXT made OpenStep for its own platform (now called OPENSTEP) and for Windows NT, under the name OpenStep Enterprise. Sun Microsystems licensed it for SPARC Solaris, too.

What happened, um, NeXT

The first thing to notice about the next release of OpenStep is that book cover designers seem to have discovered acid circa 1997.

Rhapsody Developer's Guide

Everyone’s probably aware of NeXT’s reverse takeover of Apple at the end of 1996. The first version of OpenStep to be released by Apple was Rhapsody, a developer preview of their next-generation operating system. This was eventually turned into a product: Mac OS X Server 1.0. Apple also released one other OpenStep product: a y2k-compliance patch to NeXT’s platform in late 1999.

It’s kind of tempting to tell the rest of the story as if the end was clear, but at the time it really wasn’t. With Rhapsody itself it wasn’t clear whether Apple would promote Objective-C for OpenStep (now called “Yellow Box”) applications, or whether they would favour Yellow Box for Java. The “Blue Box” environment for running Mac apps was just a virtual machine with an older version of the Macintosh system installed; there wasn’t a way to port Mac software natively to Rhapsody. It wasn’t even clear whether (or if so, when) the OpenStep software would become a consumer platform, or whether it was destined to be a server for traditional Mac workgroups.

That would come later, with Mac OS X, when the Carbon API was introduced. Between Rhapsody and Mac OS X, Apple introduced this transition framework so that “Classic” software could be ported to the new platform. They also dropped one third of the OpenStep-specified libraries from the system, as Adobe’s Display PostScript was replaced with Quartz and Core Graphics. Again, the reasons are many and complicated, though I’m sure someone noticed that if they released Mac OS X with the DPS software then their bill for Adobe licences would increase by a factor of about 1,000. The coloured box naming scheme was dropped as Apple re-used the name of a children’s programming product of theirs, later spun off as Stagecast Creator: Cocoa.

So it pretty much seemed at the time like Apple were happy to release everything they had: UNIX, Classic Mac, Carbon, Cocoa-ObjC and Cocoa-Java. Throw all of that at the wall and some of it was bound to stick.

Building Cocoa Applications

Over time, some parts indeed stuck while others slid off to make some sort of yucky mess at the bottom of the wall (you know, it is possible to take an analogy too far). Casualties included Cocoa-Java, the Classic runtime and the Carbon APIs. We end up in a situation where the current Mac platform (and, by slight extension, iOS) is a direct, and very close, descendant of the OpenStep platform specified on this day in 1994.

Happy birthday, Cocoa!


Conflicts in my mental model of Objective-C

My worldview as it relates to the writing of software in Objective-C contains many items that are at odds with one another. I either need to resolve them or to live with the cognitive dissonance, gradually becoming more insane as the conflicting items hurl one another at my cortex.

Of the programming environments I’ve worked with, I believe that Objective-C and its frameworks are the most pleasant. On the other hand, I think that Objective-C was a hack, and that the frameworks are not without their design mistakes, regressions and inconsistencies.

I believe that Objective-C programmers are correct to side with Alan Kay in saying that the designers of C++ and Java missed out on the crucial part of object-oriented programming, which is message passing. However I also believe that ObjC missed out on a crucial part of object-oriented programming, which is the compiler as an object. Decades spent optimising the compile-link-debug-edit cycle have been spent on solving the wrong problem. On which topic, I feel conflicted by the fact that we’ve got this Smalltalk-like dynamic language support but can have our products canned for picking the same selector name as some internal secret stuff in someone else’s code.

I feel disappointed that in the last decade, we’ve just got tools that can do the same thing but in more places. On the other hand, I don’t think it’s Apple’s responsibility to break the world; their mission should be to make existing workflows faster, with new excitement being optional or third-party. It is both amazing and slightly saddening that if you defrosted a cryogenically-preserved NeXT application programmer, they would just need to learn reference counting, blocks and a little new syntax and style before they’d be up to speed with iOS apps (and maybe protocols, depending on when you threw them in the cooler).

Ah, yes, Apple. The problem with a single vendor driving the whole community around a language or other technology is that the successes or failures of the technology inevitably get caught up in the marketing messages of that vendor, and the values and attitudes ascribed to that vendor. The problem with a community-driven technology is that it can take you longer than the life of the Sun just to agree how lambdas should work. It’d be healthy for there to be other popular platforms for ObjC programming, except for the inconsistencies and conflicts that would produce. It’s great that GNUstep, Cocotron and Apportable exist and are as mature as they are, but “popular” is not quite the correct adjective for them.

Fundamentally I fear a world in which programmers think JavaScript is acceptable. Partly because JavaScript, but mostly because when a language is introduced and people avoid it for ages, then just because some CEO says all future websites must use it they start using it, that’s not healthy. Objective-C was introduced and people avoided it for ages, then just because some CEO said all future apps must use it they started using it.

I feel like I ought to do something about some of that. I haven’t, and perhaps that makes me the guy who comes up to a bunch of developers, says “I’ve got a great idea” and expects them to make it.


Reading List

I was asked “what books do you consider essential for app making”? Here’s the list. Most of these are not about specific technologies, which are fleeting and teach low-level detail. Those that are tech-specific also contain a good deal of what and why, in addition to the coverage of how.

This post is far from exhaustive.

I would recommend that any engineer who has not yet read it should read Code Complete 2. Then I would ask them for the top three things they agreed with and the top three they disagreed with, as critical thinking is the hallmark of a good engineer :-).

Other books I have enjoyed and learned from and I believe others would too:

  • Steve Krug, “Don’t make me think!”
  • Martin Fowler, “Refactoring”
  • Michael Feathers, “Working Effectively with Legacy Code”
  • Bruce Tate, “Seven languages in seven weeks”
  • Jez Humble and David Farley, “Continuous Delivery”
  • Hunt and Thomas, “The Pragmatic Programmer”
  • Gerald Weinberg, “The psychology of computer programming”
  • David Rice, “Geekonomics”
  • Robert M. Pirsig, “Zen and the art of motorcycle maintenance”
  • Alan Cooper, “About Face 3”
  • Jeff Johnson, “Designing with the mind in mind”
  • Fred Brooks, “the design of design”
  • Kent Beck, “Test-Driven Development”
  • Mike Cohn, “User stories applied”
  • Jef Raskin, “The humane interface”

Most app makers are probably doing object-oriented programming. The books that explain the philosophy of this and why it’s important are Meyer’s “Object-Oriented Software Construction” and Cox’s “Object-Oriented Programming: An Evolutionary Approach”.
