How to handle Xcode in your meta-build system’s iOS or Mac app target

OK, I’ve said before in APPropriate Behaviour that I dislike build systems that build other build systems:

Some build procedures get so complicated that they spawn another build system that configures the build environment for the target system before building. An archetypal example is GNU autotools – which actually has a three-level build system. Typically the developers will run `autoconf`, a tool that examines the project to find out what questions the subsequent step should ask and generates a script called `configure`. The user downloads the source package and runs `configure`, which inspects the compilation environment and uses a collection of macros to create a Makefile. The Makefile can then compile the source code to (finally!) create the product.

As argued by Poul-Henning Kamp, this is a bad architecture that adds layers of cruft to work around code that has not been written to be portable to the environments where it will be used. Software written to be built with tools like these is hard to read, because you must read multiple languages just to understand how one line of code works.

One problem that arises in any cross-platform development is that assumptions about “the other platforms” (being the ones you didn’t originally write the software on) are sometimes made based on one of the following sources of information:

  • none
  • a superficial inspection of the other platform
  • analogy to the “primary” platform

An example of the third case: I used to work on the Mac version of a multi-platform product, certain core parts of which were implemented by cross-platform libraries. One of these libraries just needed a little configuration for each platform: tell it what file extension to use for shared libraries, and give it the path to the Registry.

What cost me a morning today was an example of the second case: assuming that all Macs are like the one you tried. Let me show you what I mean. Here’s the contents of /Developer on my Mac:

$ ls /Developer/
WebObjects

Wait, where’s Xcode? Oh right, they moved it for the App Store builds, didn’t they?

$ ls /Applications/Xcode.app
ls: /Applications/Xcode.app: No such file or directory

WHAAAAA?

OMFG!

Since version 2.5, Xcode has been relocatable and can live anywhere on the filesystem. Even if it is in one of the usual places, that might not be the version a developer wants to use. I keep a few different Xcodes around: usually the current one, the last one I knew everything worked on, and a developer preview release when there is one. I also tend to forget to throw old Xcodes away, so I’ve got four different versions at the moment.

But surely this is all evil chaos from those crazy precious Mac-using weirdos! How can you possibly cope with all of that confusion? Enter xcode-select:

$ xcode-select -print-path
/Applications/Xcode4.6.app/Contents/Developer

Xcode-select is in /usr/bin, so you don’t have the bootstrapping problem of trying to find the tool that lets you find the thing. That means that you can always rely on it being in one place for your scripts or other build tools. You can use it in a shell script:

XCODE_DEVELOPER_DIR=`/usr/bin/xcode-select -print-path`

or in a CMake file:

exec_program(/usr/bin/xcode-select ARGS -print-path OUTPUT_VARIABLE XCODE_DEVELOPER_DIR)

or in whatever other tool you’re using. The path is manually chosen by the developer (using the -switch option), so if for some reason it doesn’t work out (like the developer has deleted that version of Xcode without updating xcode-select), then you can fall back to looking in default locations.
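To make that concrete, here is a minimal sketch of the query-then-fall-back approach as a shell script. It is only a sketch: the candidate paths are just the usual install locations mentioned above, and your project may need a different list or a different failure behaviour.

XCODE_DEVELOPER_DIR=`/usr/bin/xcode-select -print-path`
# If the developer has deleted that Xcode (or never ran xcode-select -switch),
# the reported directory may not exist, so fall back to the usual locations.
if [ ! -d "$XCODE_DEVELOPER_DIR" ]; then
    for candidate in "/Applications/Xcode.app/Contents/Developer" "/Developer"; do
        if [ -d "$candidate" ]; then
            XCODE_DEVELOPER_DIR="$candidate"
            break
        fi
    done
fi
echo "Using developer directory: $XCODE_DEVELOPER_DIR"

The same shape (ask xcode-select first, then check the answer and fall back) translates directly into CMake or whatever other build generator you’re driving.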

Please do use xcode-select as a first choice for finding Xcode or the developer folder on any Mac system, particularly if your project uses a build generator. It’s more robust against changes—whether from Apple or from the users of that Mac—than assuming the developer tools are installed in their default location.

I just updated Appropriate Behaviour

The new release of Appropriate Behaviour—the book about things programmers should do that aren’t programming—is now up. The most obvious, and most awesome, change in this update is a fabulous new cover, designed by Sebastian Hermida of leanpubcovers.com. Should you be in the market for a cover page, I’d strongly recommend him.

Other changes in this release include additions to the (ends of the) chapters on coding practices and learning, and I’ve added part of a new chapter on requirements engineering. As ever, discussion of the book is welcome in its Glassboard, details of which are in the introduction.

I’ve found it really interesting in researching this book that I can go back decades and find information that has either been forgotten, or was seemingly ignored even at the time of publication. I think it’s quite clear that there’s a gulf between software as practiced by people who make software, and software as researched by academics; it’s therefore not surprising to see journal articles that apparently never got read by commercial sector developers.

What is more interesting is the extent to which “mainstream” programming books, including ones that apparently made a big splash at the time of their publication, no longer seem relevant. They’ve either been completely dropped from our consciousness (hands up everyone who’s read Peopleware in the last five years), or have been adapted into a one-sentence précis that’s become part of the mythology of programming. A thought experiment by way of an example of this mythologising: quote any sentence from The Mythical Man Month except the one about adding people to a late project. What was the rest of the book about? Is anything else in it relevant to what we do today? Do we know that even that sentence is relevant, or does it just sound plausible?

I’ve been having lots of fun discovering these forgotten entries in our history and bringing some of them into a modern story about programming. But Appropriate Behaviour is not a history book; if anything, it’s a book on social anthropology. The lesson to learn from this post is that it’s not the first anthropological study of programmers; I’d argue that 1971’s The Psychology of Computer Programming is more anthropology than it is psychology. It’s very different from Appropriate Behaviour but they both tread the same ground, analysing the problems faced by a programmer that aren’t directly related to telling a computer what to do.

I imagine the history book would be fun to write, though for the moment I present this, which I hope is also fun to read.

Happy Birthday, Objective-C!

OK, I have to admit that I actually missed the party. Brad Cox first described his “Object-Oriented pre-compiler”, OOPC, in the January 1983 issue of ACM SIGPLAN Notices.

This describes the Object Oriented Pre-Compiler, OOPC, a language and a run-time library for producing C programs that operate by the run-time conventions of Smalltalk 80 in a UNIX environment. These languages offer Object Oriented Programming in which data, and the programs which may access it, are designed, built and maintained as inseparable units called objects.

Notice that the abstract has to explain what OOP is: these were early days at least as far as the commercial software industry viewed objects. Reading the OOPC paper, you can tell that this is the start of what became known as Objective-C. It has a special syntax for sending Smalltalk-style messages to objects identified by pointers to structures, though not the syntax you’ll be used to:

someObject = {|Object, "new"|};
{|myArray, "addObject:", someObject|};

The infix notation [myArray addObject:someObject]; came later, but by 1986 Cox had published the first edition of Object-Oriented Programming: An Evolutionary Approach and co-founded Productivity Products International (later Stepstone) to capitalise on the Objective-C language. I’ve talked about the version of ObjC described in this book in this post, and the business context of this in Software ICs and a component marketplace.

It’s this version of Objective-C, not OOPC, that NeXT licensed from PPI as the basis of the Nextstep API (as distinct from the NEXTSTEP operating system: UNIX is case sensitive, you know). They built the language into a fork of the GNU Compiler Collection, and due to the nature of copyleft this meant they had to make their adaptations available, so GCC on other platforms gained Objective-C too.

Along the way, NeXT added some features to the language: compiler-generated static instances of string classes, for example. They added protocols: I recorded an episode of NSBrief with Saul Mora discussing how protocols were originally used to support distributed objects, but became important design tools. This transformation was particularly accelerated by Java’s adoption of protocols as interfaces. At some (as far as I can tell, not well documented) point in its life, Stepstone sold the rights to ObjC to NeXT, then licensed it back so they could continue supporting their own compiler.

There isn’t a great deal of change to Objective-C from 1994 for about a decade, despite or perhaps due to the change of stewardship in 1996/1997 as NeXT was purchased by Apple. Then, in about 2003, Apple introduced language-level support for exceptions and critical sections. In 2007, “Objective-C 2.0” was released, adding a collection enumeration syntax, properties, garbage collection and some changes to the runtime library. Blocks—a system for supporting closures that had been present in Smalltalk but missing from Objective-C—were added in a later release that briefly enjoyed the name “Objective-C 2.1”, though I don’t think that survived into the public release. To my knowledge 2.0 is the only version designation any Apple release of Objective-C has had.

Eventually, Apple observed that the autozone garbage collector was inappropriate for the kind of software they wanted Objective-C programmers to be making, and incorporated reference-counted memory management from their (NeXT’s, initially) object libraries into the language to enable Automatic Reference Counting.

And that’s where we are now! But what about Dr. Cox? Stepstone’s business was not the Objective-C language itself, but software components, including ICPak101, ICPak201 and the TaskMaster environment for building applications out of objects. It turned out that the way they wanted to sell object frameworks (viz. in a profitable way) was not the way people wanted to buy object frameworks (viz. not at all). Cox turned his attention to Digital Rights Management, and warming up the marketplace to accept pay-per-use licensing of digital artefacts. He’s since worked on teaching object-oriented programming, enterprise architecture and other things; his blog is still active.

So, Objective-C, I belatedly raise my glass to you. You’re nearly as old as I am, and that’s never likely to change. But we’ve both grown over that time, and it’s been fun growing up with you.

Does the history of making software exist?

A repeated theme in the construction of APPropriate Behaviour has been that I’ve tried to position certain terms or concepts in their historical context, and found it difficult, or even impossible, to do so with sufficient rigour. There’s an extent to which I don’t want the book to become historiographical, so I’ve avoided going too deep into that angle, but I’ve discovered that either no-one else has done that work, or, if they have, I can’t find it.

What often happens is that I can find a history, or even many histories, but that these aren’t reliable. I already wrote in the last post on this blog about the difficulties in interpreting references to the 1968 NATO conference; well, today I read another two sources offering another two descriptions of the conference and how it kicked off the software crisis. Articles like the one linked in the above post help to defuse some of the myths and partisan histories, but only in very specific domains such as the term “software crisis”.

Occasionally I discover a history that has been completely falsified, such as the great sequence of research papers that “prove” how some programmers are ten (or 25, or 1000) times more productive than others or those that “prove” bugs cost 100x more to fix in maintenance. Again, it’s possible to find specific deconstructions of these memes (mainly by reading Laurent Bossavit), but having discovered the emperor is naked, we have no replacement garments with which to clothe him.

There are a very few subjects where I think the primary and secondary literature necessary to construct a history exist, but that I lack the expertise or, frankly, the patience to pursue it. For example you could write a history of the phrase “software engineering”, and how it was introduced to suggest a professionalism beyond the craft discipline that went before it, only to become a symbol of lumbering lethargy among adherents of the craft discipline that came after it. Such a thing might take a trained historian armed with a good set of library cards a few years to complete (the book The Computer Boys Take Over covers part of this story, though it is written for the lay reader and not the software builder). But what of more technical ideas? Where is the history of “Object-Oriented”? Does that phrase mean the same thing in 2013 as in 1983? Does it even mean the same thing to different people in 2013?

Of course there is no such thing as an objective history. A history is an interpretation of a collection of sources, which are themselves interpretations drawn from biased or otherwise limited fonts of knowledge. The thing about a good history is that it lets you see what’s behind the curtain. The sources used will all be listed, so you can decide whether they lead you to the same conclusions as the author. It concerns me that we either don’t have, or I don’t have access to, resources allowing us to situate what we’re trying to do today in the narrative of everything that has gone before and will go hence. That we operate in a field full of hype and innuendo, and lack the tools to detect Humpty Dumptyism and other revisionist rhetoric.

With all that said, are the histories of the software industry out there? I don’t mean the collectors like the museums, who do an important job but not the one I’m interested in here. I mean the histories that help us understand our own work. Do degrees in computer science, to the extent they consider “real world” software making at all, teach the history of the discipline? Not the “assemblers were invented in 1949 and the first binary tree was coded in 19xy” history, but the rise and fall of various techniques, fads, disciplines, and so on? Or have I just volunteered for another crazy project?

I hope not; I haven’t got a good track record of remembering my library cards. Answers on a tweet, please.

An observation designed to aid the reading of books on software

Wherever a book on writing software describes the 1968 NATO conference in Garmisch on Software Engineering, consider whether the clarity of the argument can be improved by adding the following parenthetical clause:

[…], a straw man version of an otherwise real conference that took place in 1968, […]

Usually it can. The proceedings of the conference, which were written post facto by the editors and typists locking themselves in a hotel room with tapes of the sessions and typewriters in various states of repair, are available at Brian Randell’s website along with reflections on their creation. Does the report actually contain the fact presented in whichever book you’re reading now?

Probably not. The article “Crisis, What Crisis?” Reconsidering the Software Crisis of the 1960s and the Origins of Software Engineering investigates the position of the 1968 report in the rhetoric of the software industry and reliance by secondary authors on its content. The conclusion is that the report was largely ignored for about a decade, when it suddenly became the thing that kickstarted the software crisis and software engineering.

It would only be a little satirical to say “the software crisis was invented circa 1980 by Edsger Dijkstra, who postulated its origins in the NATO conference of 1968, a straw man conference” etc.

Anyone Can Write A Manifesto And You Can Too!™

Over a small number of years, I have helped to write some software. During this time I have come to value:

That is, while the things on the right are sometimes the means, the thing on the left is always the end.

Talking about talking

I recently gave a talk to my colleagues about giving talks. Here is an annotated collection of the notes I made in preparation.

- What do you want the audience to get out of the talk?

As you’re constructing your talk, ensure that you’re actually satisfying your mission. If you want to inspire people, make sure you’re not just promoting your own knowledge, business or ability. If you want people to learn things, make sure your talk is appropriate to the experience level of the audience.

- Find out about the audience
    - likely skill level
    - range of experiences
    - interest in technical, business or other issues
- Don't assume that because something's obvious to you, it's obvious to everyone else

A big stumbling block for the novice speakers I talk to is assuming that because you know something, it’s not worth talking about, as there are people out there who know far more. But your own experiences and interpretations are different from everyone else’s, so it’s very likely that you’ll have something new to contribute—as long as your talk is personal, and not just a restatement of readily available documentation.

- Decide what it is you're going to say
    - are you trying to inspire or persuade the audience?
        - decide your conclusion
            - if you're worried about timing, give yourself a couple of different exit points
        - how do you want to start?
            - the conclusion's a good place to start

This is a tough place to start though. The idea is that you’re challenging the audience by telling them something that sounds implausible, so they get mentally engaged. If the leap required is too big then you’ll either turn people off, or they’ll still be thinking about the challenge after you’ve started talking.

            - outline the problem that your solution solves

This is the Jobs approach. Start by saying the current world sucks. Explain what a better world would look like. Make it obvious that your proposal leads to the better world. Tell them the thing you propose is available in the foyer as soon as the talk finishes. It’s based on setting up one or more distinctions between the world as it is (or as you tell the audience they currently perceive it) and as it could be (or as you tell the audience they should want it), then showing that those distinctions have actually been resolved. Nancy Duarte did a good talk on this topic.

        - what's the flow between the problem and the conclusion?
        - notice this isn't "tell them what you're about to tell them…"
    - are you trying to educate the audience?
        - you can't in under an hour; aim for awareness or persuasion
        - put additional relevant content on your blog and refer to it
        - you still need a flow

In this case, the audience’s problem is “we don’t know how to do [x]”, the better world is one where they do know how to do [x], and the solution is your content. Don’t try to cram all the code into your talk because it’s distracting, only relevant in a few cases, and hard to parse while keeping up with the presentation. Instead, give people the key parts of the solution so that when they come across [x], they’ll remember some things from your presentation which will help them piece together the full solution.

We had a discussion about WWDC talks at this point in the “live” version of these notes. WWDC seems to provide a counterexample to this rule about not educating people in a talk, with graphics-poor, code-rich presentations. Those sessions have two goals: giving developers who aren’t in the labs something to do, and being available on video afterward. The live presentation is, frankly, overwhelming and often confusing, but it isn’t the main use of the talk. You’re expected to watch it again later, to refer to the documentation, and to ask people about the content in the labs.

[To be honest I also expect there’s an extent to which a lot of the people presenting at WWDC are both strongly pressured by the additional workload of the conference and are uncomfortable with speaking publicly, and the format they’ve settled on is one that works in that context and doesn’t make too many compromises or create too much additional stress. That’s just speculation on my part though.]

    - entertainment
        - doesn't need to be jokes, a compelling talk is entertaining
        - in fact be careful of jokes unless you know the crowd
        - certainly don't lead the laughter

I put this in as a homage to Thorsten Heins, who ruined an otherwise reasonably competent and well-executed presentation by laughing at and even applauding his own jokes. If you want to try a joke, think twice. I only do it in arenas where I’m comfortable I know the people. If you still want to do it, and it falls flat, move on.

        - if you want a set piece, make it relevant to the talk and practice the bejeezus out of it
- engage the audience
    - "hands up if" exercises are light forms of interaction but make people engage with the talk

As discussed earlier, making people too introspective will distract them from your talk. But these days you have to stop people from diving back into their laptops and working during talks, so you need to provide some form of engagement. You also have to deal with the fact that various sub-sections of your audience may be jet lagged, full of lunch or hungover. If they don’t have a part to play in your talk they’ll sleep through it.

    - make eye contact with every part of the audience
        - you will see people asleep or working; don't worry
        - don't forget the back of the room

It’s easy for introverts in particular to “protect” ourselves from the audience by avoiding looking at them. The problem is that it then doesn’t feel like we’re talking to everyone out there. You shouldn’t aim to make eye contact with every individual attendee, because that doesn’t scale—but you definitely should talk to each “part” of the audience. Talking to someone at the back of the room when you start your talk helps you pitch your voice correctly.

    - motion, nervous or otherwise, gives people a reason to be concentrating on you rather than the screen/their phones
- slides
    - too much text and you lose people while they read along
    - again, relevant content
    - animation where relevant helps, where irrelevant distracts
    - progressive disclosure and hiding

So much has been said about building good slides that I don’t want to add much. Make sure your own notes are separate from your slides, and that everything on-screen now is germane to what you’re saying now.

- q&a
    - there will be an awkward question. Your goal is to handle it gracefully, not to avoid it coming up
    - continuum fallacy

Most of the “well actually” questions you’ll get in conference Q&As are not actually questions. They’re the phrase “I know more than you” dressed up with some rhetorical sugar to appear more question-like. These are poisonous: the audience gains nothing from them, you gain nothing from them, the person “asking” gains nothing from them. Nonetheless we can’t ban people who ask these questions from conferences, so we just have to cope with them. “That’s an interesting point, we should talk about it later” is an OK response—especially if you need to catch the train as soon as you’re off the stage.

The ones that are questions frequently represent instances of the continuum fallacy: you said X is true; well, actually, I’ve found an edge case where X isn’t true; therefore X is never true. No. Politely point out the fallacious reasoning and move on to another question. The biggest mistake I make in handling Q&A is letting people argue the toss over questions like these. Again, the rest of the audience is learning nothing from a pointless to and fro, and there are more of them than the two of you having the discussion.

    - set expectations on questions at the beginning, e.g. "I'll take questions at the end" or "interruptions welcome". "No questions" is hard to get the conference organisers to agree with (though would probably help a lot of nervous speakers)—filibustering sometimes works but is apparently rude :)
    - you're effectively chairing a discussion you're also involved in, so don't be afraid of setting topic boundaries. The q&a has to be valuable to the audience.