Preparing for Computing’s Big One-Oh-Oh

However you slice the pie, we’re between two and three decades away from the centenary celebration for applied computing (which is of course significantly after theoretical or hypothetical advances made by the likes of Lovelace, Turing and others). You might count the anniversary of Colossus in 2043, the ENIAC in 2046, or maybe something earlier (and arguably not actually applied) like the Z3 or ABC (both 2041). Whichever one you pick, it’s not far off.

That means that the time to start organising the handover from the first century’s programmers to the second is now, or perhaps a little earlier. You can see the period from the 1940s to around 1980 as a time of discovery, when people invented new ways of building and applying computers because they could, and because there were no old ways yet. The three and a half decades since then—a period longer than my life—have been a period of rediscovery, in which a small number of practices have become entrenched and people occasionally find existing, but forgotten, tools and techniques to add to their arsenal, and incrementally advance the entrenched ones.

My suggestion is that the next few decades be a period of uncovery, in which we purposefully seek out those things that have been tried, and tell the stories of how they are:

  • successful because they work;
  • successful because they are well-marketed;
  • successful because they were already deployed before the problems were understood;
  • abandoned because they don’t work;
  • abandoned because they are hard;
  • abandoned because they are misunderstood;
  • abandoned because something else failed while we were trying them.

I imagine a multi-volume book✽, one that is to the art of computer programming as The Art Of Computer Programming is to the mechanics of executing algorithms on a machine. Such a book✽ would be mostly a guide, partly a history, with some, all or more of the following properties:

  • not tied to any platform, technology or other fleeting artefact, though with examples where appropriate (perhaps in a platform invented for the purpose, as MIX, Smalltalk, BBC BASIC and Oberon all were)
  • informed both by academic inquiry and practical experience
  • more accessible than the Software Engineering Body of Knowledge
  • as accepting of multiple dissenting views as Ward’s Wiki
  • at least as honest about our failures as The Mythical Man-Month
  • at least as proud of our successes as The Clean Coder
  • more popular than The Celestial Homecare Omnibus

As TAOCP is a survey of algorithms, so this book✽ would be a survey of techniques, practices and modes of thought. As this century’s programmer can go to TAOCP to compare algorithms and data structures for solving small-scale problems then use selected algorithms and data structures in their own work, so next century’s applier of computing could go to this book✽ to compare techniques and ways of reasoning about problems in computing, then use selected techniques and ways of reasoning in their own work. Few people would read such a thing from cover to cover. But many would have it to hand, and would be able to get on with the work of invention without having to rewrite all of Doug Engelbart’s work before they could get to the new stuff.

It's dangerous to go alone! Take this.

✽: don’t get hung up on the idea that a book is a collection of quires of some pigmented flat organic matter bound into a codex, though.

Intuitive is the Enemy of Good

In the previous instalment, I discussed an interview in which Alan Kay maligned growth-restricted user interfaces. Here’s the quote again:

There is the desire of a consumer society to have no learning curves. This tends to result in very dumbed-down products that are easy to get started on, but are generally worthless and/or debilitating. We can contrast this with technologies that do have learning curves, but pay off well and allow users to become experts (for example, musical instruments, writing, bicycles, etc. and to a lesser extent automobiles).

This is nowhere more evident than in the world of the mobile app. Any one app comprises a very small number of very focussed, very easy to use features. This has a couple of different effects. One is that my phone as a whole is an incredibly broad, incredibly shallow experience. For example, one goal I want help with from technology is:

As an obese programmer, I want to understand how I can improve my lifestyle in order to live longer and be healthier.

Is there an app for that? No; I have six apps that kind of, between them, provide an OK but pretty disjointed experience that gets me some dissatisfying part of the way toward my goal. I can tell three of these apps how much I run, but I have to remember which subset can feed information to the others and which cannot. I can tell a couple of them how much I ate, but if I do it in one of them then another won’t count it correctly. Putting enough software to fulfil my goal into one app would presumably break the cardinal rule of making every feature available within two gestures of the app’s launch screen. Therefore every feature is instead hidden behind the myriad externalised gestures required to navigate my home screens and their folders to get to the disparate subsets of utility.

The second observable effect is that there is a lot of wasted potential in both the device, and the person operating that device. You have never met an expert iPhone user, for the simple reason that someone who’s been using an iPhone for six years is no more capable than someone who has spent a week with their new device diligently investigating. There is no continued novelty, there are no undiscovered experiences. There is no expertise. Welcome to the land of the perpetual beginner.

Thankfully, marketing provided us with a thought-terminating cliché, to help us in our discomfort with this situation. They gave us the cars and trucks analogy. Don’t worry that you can’t do everything you’d expect with this device. You shouldn’t expect to do absolutely everything with this device. Notice the sleight of brain?

Let us pause for a paragraph to notice that even if making the most simple, dumbed-down (wait, sorry, intuitive) experience were our goal, we use techniques that keep it beyond our grasp. An A/B test will tell you whether this version is incrementally “better” than that version, but will not tell you whether the peak you are approaching is the tallest mountain in the range. Just as with evolution, valley crossing is hard without a monumental shake-up or an interminable period of neutral drift.

Desktop environments haven’t usually done much better. The learning path for most WIMP interfaces can be listed thus:

  1. cannot use mouse.
  2. can use mouse, cannot remember command locations.
  3. can remember command locations.
  4. can remember keyboard shortcuts.
  5. ???
  6. programming.

A near-perfect example of this would be emacs. You start off with a straightforward modeless editor window, but you don’t know how to save, quit, load a file, or anything. So you find yourself some cheat-sheet, and pretty soon you know those things, and start to find other things like swapping buffers, opening multiple windows, and navigating around a buffer. Then you want to compose a couple of commands, and suddenly you need to learn LISP. Many people will cap out at level 4, or even somewhere between 3 and 4 (which is where I am with most IDEs unless I use them day-in, day-out for months).

The lost magic is in level 5. Tools that do a good job of enabling improvement without requiring that you adopt a skill you don’t identify with (i.e. programming, learning the innards of a computer) invite greater investment over time, rewarding you with greater results. Photoshop gets this right. Automator gets it right. AppleScript gets it wrong; that’s just programming (in fact it’s all the hard bits from Smalltalk with none of the easy or welcoming bits). Yahoo! Pipes gets it right but markets it wrong. Quartz Composer nearly gets it right. Excel is, well, a bit of a boundary case.

The really sneaky bit is that level 5 is programming, just with none of the trappings associated with the legacy way professionals program. No code (usually), no expressing your complex graphical problem as text, no expectation that you understand git, no philosophical wrangling over whether squares are rectangles or not. It’s programming, but with a closer affinity with the problem domain than bashing out semicolons and braces. Level 5 is where we can enable people to get the most out of their computers, without making them think that they’re computering.

How much programming language is enough?

Many programmers have opinions on programming languages. Maybe, if I present an opinion on programming languages, I can pass myself off as a programmer.

An old debate in psychology and anthropology is that of nature vs nurture, the discussion over which characteristics of humans and their personalities are innate and which are learned or otherwise transferred.

We can imagine two extremists in this debate turning their attention to programming languages. On the one hand, you might imagine that if the ability to write a computer program is somehow innate, then there is a way of expressing programming concepts that is closely attuned to that innate representation. Find this expression, and everyone will be able to program as fast as they can think. Although there’ll still be arguments over bracket placement, and Dijkstra will still tell you it’s rubbish.

On the other hand, you might imagine that the mind is a blank slate, onto which can be writ any one (or more?) of diverse patterns. Then the way in which you will best express a computer program is dependent on all of your experiences and interactions, with the idea of a “best” way therefore being highly situated.

We will leave this debate behind. It seems that programming shares some brain with learning other languages, and when it comes to deciding whether language is innate or learned we’re still on shaky ground. It seems unlikely on ethical grounds that Nim Chimpsky will ever be joined by Charles Babboonage, anyway.

So, having decided that there’s still an open question, there must exist somewhere into which I can insert my uninvited opinion. I have recently been thinking that a lot of the ceremony and complexity surrounding much of modern programming has little to do with it being difficult to represent a problem to a computer, and everything to do with there being unnecessary baggage in the tools and languages themselves. That is to say that, contrary to Fred Brooks’s opinion, we are overwhelmed with Incidental Complexity in our art, and that the mark of expertise in programming is being able to put up with all the nonsense programming makes you do.

From this premise, it seems clear that less complex programming languages are desirable. I therefore look admiringly at tools like Self, Io and Scheme, which all strive for a minimum number of distinct parts.

However, Clemens Szyperski from Microsoft puts forward a different argument in this talk. He works on the most successful development environment. In the talk, Szyperski suggests that experienced programmers make use of, and seek out, more features in a programming language to express ideas concisely, using different features for different tasks. Beginners, on the other hand, benefit from simpler languages where there is less to impede progress. So, what now? Does the “less is more” principle only apply to novice programmers?

Maybe the experienced programmers Szyperski identified are not experts. There’s an idea that many programmers are expert beginners, which would seem to fit Szyperski’s model. The beginner is characterised by a microscopic, non-holistic view of their work. They are able to memorise and apply heuristic rules that help them to make progress.

The expert beginner is someone who has simply learned more rules. To the expert beginner, there is a greater number of heuristics to choose from. You can imagine that if each rule is associated with a different piece of programming language grammar, then it’d be easier to remember the (supposed) causality behind “this situation calls for that language feature”.

That leaves us with some interesting open questions. What would a programming tool suitable for experts (or the proficient) look like? Do we have any? Alan Kay is fond of saying that we’re stuck with novice-friendly user experiences that don’t permit learning or acquiring expertise:

There is the desire of a consumer society to have no learning curves. This tends to result in very dumbed-down products that are easy to get started on, but are generally worthless and/or debilitating. We can contrast this with technologies that do have learning curves, but pay off well and allow users to become experts (for example, musical instruments, writing, bicycles, etc. and to a lesser extent automobiles).

Perhaps, while you could never argue that common programming languages don’t have learning curves, they are still “generally worthless and/or debilitating”. Perhaps it’s true that expertise at programming means expertise at jumping through the hoops presented by the programming language, not expertise at telling a computer how to solve a problem in the real world.

On too much and too little

In the following text, remember that words like me or I are to be construed in the broadest possible terms.

It’s easy to be comfortable with my current level of knowledge. Or perhaps it’s not the value, but the derivative of the value: the amount of investment I’m putting into learning a thing. Anyway, it’s easy to tell stories about why the way I’m doing it is the right, or at least a good, way to do it.

Take, for example, object-oriented design. We have words to describe insufficient object-oriented design. Spaghetti Code, or a Big Ball of Mud. Obviously these are things that I never succumb to, but other people do. So clearly (actually, not clearly at all, but that’s beside the point) there is some threshold level of design or analysis practice that represents an acceptable minimum. Whatever that value is, it’s less than the amount that I do.

Interestingly there are also words to describe the over-application of object-oriented design. Architecture Astronauts, for example, are clearly people who do too much architecture (in the same way that NASA astronauts got carried away with flying and overdid it, I suppose). It’s so cold up in space that you’ll catch a fever, resulting in Death by UML Fever. Clearly I am only ever responsible for tropospheric architecture, thus we conclude that there is some acceptable maximum threshold for analysis and design too.

The really convenient thing is that my current work lies between these two limits. In fact, I’m comfortable in saying that it always has.

But wait. I also know that I’m supposed to hate the code that I wrote six months ago, probably because I wasn’t doing enough of whatever it is that I’m doing enough of now. But I don’t remember thinking six months ago that I was below the threshold for doing acceptable amounts of the stuff that I’m supposed to be doing. Could it be, perhaps, that the goalposts have conveniently moved in that time?

Of course they have. What’s acceptable to me now may not be in the future, either because I’ve learned to do more of it or because I’ve learned that I was overdoing it. The trick is not so much in recognising that, but in recognising that others who are doing more or less than me are not wrong, they could in fact be me at a different point on my timeline but with the benefit that they exist now so I can share my experiences with them and work things out together. Or they could be someone with a completely different set of experiences, which is even more exciting as I’ll have more stories to swap.

When it comes to techniques and devices for writing software, I tend to prefer overdoing things and then finding out which bits I don’t really need after all, rather than under-application. That’s obviously a much larger cognitive and conceptual burden, but it stems from the fact that I don’t think we really have any clear ideas on what works and what doesn’t. Not much in making software is ever shown to be wrong, but plenty of it is shown to be out of fashion.

Let me conclude by telling my own story of object-oriented design. It took me ages to learn object-oriented thinking. I learned the technology alright, and could make tools that used the Objective-C language and Foundation and AppKit, but didn’t really work out how to split my stuff up into objects. Not just for a while, but for years. A little while after that Death by UML Fever article was written, my employer sent me to Sun to attend their Object-Oriented Analysis and Design Using UML course.

That course in itself was a huge turning point. But just as beneficial were the few months afterward in which I would architecturamalise all the things, and my then-manager wisely left me to it. The office furniture was all covered with whiteboard material, and there soon wasn’t a bookshelf or cupboard in my area of the office that wasn’t covered with sequence diagrams, package diagrams, class diagrams, or whatever other diagrams. I probably would’ve covered the external walls, too, if it wasn’t for Enterprise Architect. You probably have opinions(TM) of both of the words in that product’s name. In fact I also used OmniGraffle, and dia (my laptop at the time was an iBook G4 running some flavour of Linux).

That period of UMLphoria gave me the first few hundred hours of deliberate practice. It let me see things that had been useful, and that had either helped me understand the problem or communicate about it with my peers. It also let me see the things that hadn’t been useful, that I’d constructed but then had no further purpose for. It let me not only dial back, but work out which things to dial back on.

I can’t imagine being able to replace that experience with reading web articles and Stack Overflow questions. Sure, there are plenty of opinions on things like OOA/D and UML on the web. Some of those opinions are even by people who have tried it. But going through that volume of material and sifting the experience-led advice from the iconoclasm or marketing fluff, deciding which viewpoints were relevant to my position: that’s all really hard. Harder, perhaps, than diving in and working slowly for a few months while I over-practice a skill.

ClassBrowser: warts and all

I previously gave a sneak peek of ClassBrowser, a dynamic execution environment for Objective-C. It’s not anything like ready for general use (in fact it can’t really do ObjC very well at all), but it’s at the point where you can kick the tyres and contribute pull requests. Here’s what you need to know:

Have a lot of fun!

ClassBrowser is distributed under the terms of the University of Illinois/NCSA licence (because it is based partially on code distributed with clang, which is itself under that licence).

A sneaky preview of ClassBrowser

Let me start with a few admissions. Firstly, I have been computering for a good long time now, and I still don’t really understand compilers. Secondly, work on my GNUstep Web side-project has tailed off for a while, because I decided I wanted to try something out to learn about the compiler before carrying on with that work. This post will mostly be about that something.

My final admission: if you saw my presentation on the ObjC runtime you will have seen an app called “ClassBrowser” where I showed that some of the Foundation classes are really in the CoreFoundation library. Well, there were two halves to the ClassBrowser window, and I only showed you the top half that looked like the Smalltalk class browser. I’m sorry.

So what’s the bottom half?

This is what the bottom half gives me. It lets me go from this:

ClassBrowser before

via this:

Who are you calling a doIt?

to this:

ClassBrowser after

What just happened?

You just saw some C source being compiled to LLVM bitcode, which is then compiled just-in-time to native code and executed, all inside that browser app.
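As a rough illustration of the just-in-time half of that pipeline, here is a minimal sketch using LLVM’s C API with MCJIT. This is not ClassBrowser’s code: it skips the clang front-end step by starting from a hand-written IR string, and the function name doIt and overall shape are assumptions made purely for illustration.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <llvm-c/Core.h>
#include <llvm-c/ExecutionEngine.h>
#include <llvm-c/IRReader.h>
#include <llvm-c/Target.h>

int main(void)
{
  /* A trivial function in LLVM IR, standing in for what clang would emit. */
  const char *ir = "define i32 @doIt() {\n"
                   "entry:\n"
                   "  ret i32 42\n"
                   "}\n";

  LLVMLinkInMCJIT();
  LLVMInitializeNativeTarget();
  LLVMInitializeNativeAsmPrinter();

  char *error = NULL;
  LLVMMemoryBufferRef buffer =
    LLVMCreateMemoryBufferWithMemoryRangeCopy(ir, strlen(ir), "doIt");
  LLVMModuleRef module = NULL;
  if (LLVMParseIRInContext(LLVMGetGlobalContext(), buffer, &module, &error)) {
    fprintf(stderr, "parse error: %s\n", error);
    return 1;
  }

  LLVMExecutionEngineRef engine = NULL;
  if (LLVMCreateExecutionEngineForModule(&engine, module, &error)) {
    fprintf(stderr, "JIT error: %s\n", error);
    return 1;
  }

  /* JIT-compile the module, look up the native code for doIt, and call it. */
  int (*doIt)(void) = (int (*)(void))(intptr_t)LLVMGetFunctionAddress(engine, "doIt");
  printf("doIt() returned %d\n", doIt());

  LLVMDisposeExecutionEngine(engine);
  return 0;
}

Building that needs the LLVM development libraries (llvm-config can supply the compiler and linker flags). ClassBrowser’s extra trick is to get the IR from clang’s libraries at runtime, so that the C you type into the bottom half of the window is what gets compiled and run.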

Why?

Well why not? Less facetiously:

  • I’m a fan of Smalltalk. I want to build a thing that’s sort of a Smalltalk, except that rather than being the Smalltalk language on the Objective-C runtime (like F-Script or objective-smalltalk), it’ll be the Objective-C(++) language on the Objective-C runtime. So really, a different way of writing Objective-C.
  • I want to know more about how clang and LLVM work, and this is as good a way as any.
  • I think, when it actually gets off the ground, this will be a faster way of writing test-first Objective-C than anything Xcode can do. I like Xcode 5, I think it’s the better Xcode 4 that I always wanted, but there are gains to be had by going in a completely different direction. I just think that whoever strikes out in such a direction should not release their research project as a new version of Xcode :-).

Where can I get me a ClassBrowser?

You can’t, yet. There’s some housekeeping to be done before a first release: replacing some “research” hacks with good old-fashioned tested code, and ensuring GNUstep-GUI compatibility. Once that’s done, a rough-and-nearly-ready first open source release will ensue.

Then there’s more work to be done before it’s anything like useful. In particular, while it’s possible to use it to run Objective-C, doing so is far from pleasant. I’ve had some great advice from the LLVM IRC channel on how to address that and will try to get it to happen soon.

Updating my ObjC web app on git push

I look at SignUp.woa running on my Ubuntu server, and it looks like this.

Sign up for the APPropriate Behaviour print edition!

That title text doesn’t quite look right.

$ open -a TextWrangler Main.wo/Main.html
$ make
$ make check
$ git add -A
$ git commit -m "Use new main page heading proposed by marketing"
$ git push server master

I refresh my browser.

APPropriate Behaviour print edition: sign up here!

The gsw-hooks README explains how this works. My thanks to Dan North for guiding me around a problem I had between my keyboard and my chair.

Automated tests with the GNUstep test framework

Setup

Of course, it’d be rude not to use a temperature converter as the sample project in a testing blog post. The only permitted alternative is a flawed bank account model.

I’ll create a folder for my project, then inside it a folder for the tests:

$ mkdir -p TemperatureConverter/test
$ cd TemperatureConverter

The test runner, gnustep-tests, is a shell script that looks for tests in subfolders of the current folder. If I run it now, nothing will happen because there aren’t any tests. I’ll tell it that the test folder will contain tests by creating an empty marker file that the script looks for.

$ touch test/TestInfo

Of course, there still aren’t any tests, so I should give it something to build and run. The test fixture files themselves can be Objective-C or Objective-C++ source files.

$ cat > converter.m
#include "Testing.h"

int main(int argc, char **argv)
{
}
^D

Now the test runner has something to do, though not very much. Any of the invocations below will cause the runner to find this new file, compile and run it. It’ll also look for test successes and failures, but of course there aren’t any yet. Still, these invocations show how large test suites could be split up to let developers only run the parts relevant to their immediate work.

$ gnustep-tests #tests everything
$ gnustep-tests test #tests everything in the test/ folder
$ gnustep-tests test/converter.m #just the tests in the specified file

The first test

Following the standard practice of red-green-refactor, I’ll write the test that I want to be able to write and watch it fail. This is it:

#include "Testing.h"

int main(int argc, char **argv)
{
  TemperatureConverter *converter = [TemperatureConverter new];
  float minusFortyF = [converter convertToFahrenheit:-40.0];
  PASS(minusFortyF == -40.0, "Minus forty is the same on both scales");
  return 0;
}

The output from that:

$ gnustep-tests
Checking for presence of test subdirectories ...
--- Running tests in test ---

test/converter.m:
Failed build:     

      1 Failed build


Unfortunately we could not even compile all the test programs.
This means that the test could not be run properly, and you need
to try to figure out why and fix it or ask for help.
Please see /home/leeg/GNUstep/TemperatureConverter/tests.log for more detail.

Unsurprisingly, it doesn’t work. Perhaps I should write some code. This can go at the top of the converter.m test file for now.

#import <Foundation/Foundation.h>

@interface TemperatureConverter : NSObject
- (float)convertToFahrenheit:(float)celsius;
@end

@implementation TemperatureConverter
- (float)convertToFahrenheit:(float)celsius;
{
  return -10.0; //WAT
}
@end

I’m reasonably confident that’s correct. I’ll try it.

$ gnustep-tests
Checking for presence of test subdirectories ...
--- Running tests in test ---

test/converter.m:
Failed test:     converter.m:19 ... Minus forty is the same on both scales

      1 Failed test


One or more tests failed.  None of them should have.
Please submit a patch to fix the problem or send a bug report to
the package maintainer.
Please see /home/leeg/GNUstep/TemperatureConverter/tests.log for more detail.

Oops, I seem to have a typo, which should be easy enough to correct.
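A minimal fix that keeps faith with test-driving is to hard-code the value the one existing test expects; the general formula can wait until more tests force it. (The corrected listing is my assumption of the smallest change that makes the test pass.)

- (float)convertToFahrenheit:(float)celsius;
{
  return -40.0; // still hard-coded; just enough to satisfy the only test so far
}

With that change in place, here’s proof that it now works: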

$ gnustep-tests
Checking for presence of test subdirectories ...
--- Running tests in test ---

      1 Passed test

All OK!

Second point, first set

If every temperature in Celsius were equivalent to -40F, then the two scales would not be measuring the same thing. It’s time to discover whether this class is useful for a larger range of inputs.

All of the tests I’m about to add are related to the same feature, so it makes sense to document these tests as a group. The framework calls these groups “sets”, and they work like this:

int main(int argc, char **argv)
{
  TemperatureConverter *converter = [TemperatureConverter new];
  START_SET("celsius to fahrenheit");
  float minusFortyF = [converter convertToFahrenheit:-40.0];
  PASS(minusFortyF == -40.0, "Minus forty is the same on both scales");
  float freezingPoint = [converter convertToFahrenheit:0.0];
  PASS(freezingPoint == 32.0, "Water freezes at 32F");
  float boilingPoint = [converter convertToFahrenheit:100.0];
  PASS(boilingPoint == 212.0, "Water boils at 212F");
  END_SET("celsius to fahrenheit");
  return 0;
}

Now at this point I could build a look-up table to map inputs onto outputs in my converter method, or I could choose a linear equation.

- (float)convertToFahrenheit:(float)celsius;
{
  return (9.0/5.0)*celsius + 32.0;
}
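A quick sanity check of that formula against the three test points: (9/5)*(-40) + 32 = -72 + 32 = -40, (9/5)*0 + 32 = 32, and (9/5)*100 + 32 = 180 + 32 = 212, so all three PASS assertions should now succeed.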

Even tests have aspirations

Aside from documentation, test sets have some useful properties. Imagine I’m going to add a feature to the app: the ability to convert from Fahrenheit to Celsius. This is the killer feature, clearly, but I still need to tread carefully.

While I’m developing this feature, I want to integrate it with everything that’s in production so that I know I’m not breaking everything else. I want to make sure my existing tests don’t start failing as a result of this work. However, I’m not exposing it for public use until it’s ready, so I don’t mind so much if tests for the new feature fail: I’d like them to pass, but it’s not going to break the world for anyone else if they don’t.

Test sets in the GNUstep test suite can be hopeful, which represents this middle ground. Failures of tests in hopeful sets are still reported, but as “dashed hopes” rather than failures. You can easily separate out the case “everything that should work does work” from broken code under development.

  START_SET("fahrenheit to celsius");
  testHopeful = YES;
  float minusFortyC = [converter convertToCelsius:-40.0];
  PASS(minusFortyC == -40.0, "Minus forty is the same on both scales");
  END_SET("fahrenheit to celsius");

The report of dashed hopes looks like this:

$ gnustep-tests 
Checking for presence of test subdirectories ...
--- Running tests in test ---

      3 Passed tests
      1 Dashed hope

All OK!

But we were hoping that even more tests might have passed if
someone had added support for them to the package.  If you
would like to help, please contact the package maintainer.

Promotion to production

OK, well one feature in my temperature converter is working so it’s time to integrate it into my app. How do I tell the gnustep-tests script where to find my class if I remove it from the test file?

I move the classes under test not into an application target, but a library target (a shared library, static library or framework). Then I arrange for the tests to link that library and use its headers. How you do that depends on your build system and the arrangement of your source code. In the GNUstep world it’s conventional to define a target called “check” so developers can write make check to run the tests. I also add an optional argument to choose a subset of tests, so the three examples of running the suite at the beginning of this post become:

$ make check
$ make check suite=test
$ make check suite=test/converter.m

I also arrange for the app to link the same library and use its headers, so the tests and the application use the same logic compiled with the same tools and settings.

Here’s how I arranged for the TemperatureConverter to be in its own library, using gnustep-make. Firstly, I broke the class out of test/converter.m and into a pair of files at the top level, TemperatureConverter.[hm]. Then I created this GNUmakefile at the same level:

include $(GNUSTEP_MAKEFILES)/common.make

LIBRARY_NAME=TemperatureConverter
TemperatureConverter_OBJC_FILES=TemperatureConverter.m
TemperatureConverter_HEADER_FILES=TemperatureConverter.h

-include GNUmakefile.preamble

include $(GNUSTEP_MAKEFILES)/library.make

-include GNUmakefile.postamble

Now my tests can’t find the headers or the library, so they don’t build again. In GNUmakefile.postamble I’ll create the “check” target described above to run the test suite with the correct environment. GNUmakefile.postamble is included (if present) after all of GNUstep-make’s rules, so it’s a good place to define custom targets while ensuring that your main target (the library in this case) is still the default.

TOP_DIR := $(CURDIR)

check::
    @(\
    ADDITIONAL_INCLUDE_DIRS="-I$(TOP_DIR)";\
    ADDITIONAL_LIB_DIRS="-L$(TOP_DIR)/$(GNUSTEP_OBJ_DIR)";\
    ADDITIONAL_OBJC_LIBS=-lTemperatureConverter;\
    LD_LIBRARY_PATH="$(TOP_DIR)/$(GNUSTEP_OBJ_DIR):${LD_LIBRARY_PATH}";\
    export ADDITIONAL_INCLUDE_DIRS;\
    export ADDITIONAL_LIB_DIRS;\
    export ADDITIONAL_OBJC_LIBS;\
    export LD_LIBRARY_PATH;\
    gnustep-tests $(suite);\
    grep -q "Failed test" tests.sum; if [ $$? -eq 0 ]; then exit 1; fi\
    )

The change to LD_LIBRARY_PATH is required to ensure that the tests can load the just-built version of the library. This must come first in the library path so that the tests are definitely investigating the code in the latest version of the library, not some other version that might be installed elsewhere in the system. The last line fails the build if any tests failed (meaning we can use this check as part of a continuous integration system).

More information

The GNUstep test framework is part of gnustep-make, and documentation can be found in its README. Nicola Pero has some useful tutorials about the rest of the make system, having written most of it himself.

Conflicts in my mental model of Objective-C

My worldview as it relates to the writing of software in Objective-C contains many items that are at odds with one another. I either need to resolve them or to live with the cognitive dissonance, gradually becoming more insane as the conflicting items hurl one another at my cortex.

Of the programming environments I’ve worked with, I believe that Objective-C and its frameworks are the most pleasant. On the other hand, I think that Objective-C was a hack, and that the frameworks are not without their design mistakes, regressions and inconsistencies.

I believe that Objective-C programmers are correct to side with Alan Kay in saying that the designers of C++ and Java missed out on the crucial part of object-oriented programming, which is message passing. However I also believe that ObjC missed out on a crucial part of object-oriented programming, which is the compiler as an object. Decades spent optimising the compile-link-debug-edit cycle have been spent on solving the wrong problem. On which topic, I feel conflicted by the fact that we’ve got this Smalltalk-like dynamic language support but can have our products canned for picking the same selector name as some internal secret stuff in someone else’s code.

I feel disappointed that in the last decade, we’ve just got tools that can do the same thing but in more places. On the other hand, I don’t think it’s Apple’s responsibility to break the world; their mission should be to make existing workflows faster, with new excitement being optional or third-party. It is both amazing and slightly saddening that if you defrosted a cryogenically-preserved NeXT application programmer, they would just need to learn reference counting, blocks and a little new syntax and style before they’d be up to speed with iOS apps (and maybe protocols, depending on when you threw them in the cooler).

Ah, yes, Apple. The problem with a single vendor driving the whole community around a language or other technology is that the successes or failures of the technology inevitably get caught up in the marketing messages of that vendor, and the values and attitudes ascribed to that vendor. The problem with a community-driven technology is that it can take you longer than the life of the Sun just to agree how lambdas should work. It’d be healthy for there to be other popular platforms for ObjC programming, except for the inconsistencies and conflicts that would produce. It’s great that GNUstep, Cocotron and Apportable exist and are as mature as they are, but “popular” is not quite the correct adjective for them.

Fundamentally I fear a world in which programmers think JavaScript is acceptable. Partly because JavaScript, but mostly because when a language is introduced and people avoid it for ages, then just because some CEO says all future websites must use it they start using it, that’s not healthy. Objective-C was introduced and people avoided it for ages, then just because some CEO said all future apps must use it they started using it.

I feel like I ought to do something about some of that. I haven’t, and perhaps that makes me the guy who comes up to a bunch of developers, says “I’ve got a great idea” and expects them to make it.

At the old/new interface: jQuery in WebObjects

It turns out to be really easy to incorporate jQuery into an Objective-C WebObjects app (targeting GNUstep Web). In fact, it doesn’t really touch the Objective-C source at all. I defined a WOJavaScript element that loads jQuery itself from the application’s web server resources folder, so it can be reused across multiple components:

jquery_script:WOJavaScript {scriptFile="jquery-2.0.2.js"}

Then in the components where it’s used, any field that needs uniquely identifying should have an HTML id, which can be set via WebObjects’s id binding. In this example, a text field for entering an email address in a form will only be enabled if the user has checked a “please contact me” checkbox.

email_field:WOTextField {value=email; id="emailField"}
contact_boolean:WOCheckBox {checked=shouldContact; id="shouldContact"}

The script itself can reside in the component’s HTML template, or in a WOJavaScript that loads a script from the app’s resources folder, or one that returns JavaScript prepared by the Objective-C code.

    <script>
function toggleEmail() {
    var emailField = $("#emailField");
    var isChecked = $("#shouldContact").prop("checked");
    emailField.prop("disabled", !isChecked);
    if (!isChecked) {
        emailField.val("");
    }
}
$(document).ready(function() {
    toggleEmail();
    $("#shouldContact").click(toggleEmail);
});
    </script>
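For that last option (JavaScript prepared by the Objective-C side), a minimal sketch would bind WOJavaScript’s scriptString binding to a component method. The scriptString binding is the one Apple’s WOJavaScript documents; I’m assuming GNUstep Web supports the same, and the contactScript accessor is a hypothetical name for illustration:

contact_script:WOJavaScript {scriptString=contactScript}

and, in the component’s Objective-C class:

- (NSString *)contactScript
{
  // shouldContact is assumed to be the same instance variable bound to the
  // checkbox above; the server-side value seeds the checkbox state on load.
  return [NSString stringWithFormat:
    @"$(document).ready(function() { $(\"#shouldContact\").prop(\"checked\", %@); toggleEmail(); });",
    shouldContact ? @"true" : @"false"];
}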

I’m a complete newbie at jQuery, but even so that was easier than expected. I suppose the lesson to learn is that old technology isn’t necessarily incapable technology. People like replacing their web backend frameworks every year or so; whether there’s a reason (beyond caprice) warrants investigation.