Coding. Standards.

I just realised that this month marks the 10th anniversary of my first payment for writing software (on, of all the weird things to be writing software on in 2002, a NeXTstation)! What have I learned from those ten years? What advice would I give to someone who wants to do this stuff for at least 10 years?

Programming is the easy bit.

Well, comparatively. There are hard bits in programming, and every few years a new paradigm comes along that means you have to unlearn whatever you were doing and learn a new way of doing it instead. So learning programming is never done, but programming is still easier than:

  • estimating. The one project I’ve worked on that finished on its planned completion date only did so by accident.
  • getting any kind of agreement out of two or more people.
  • accepting that the other person isn’t a dick, but has different goals and problems than you.
  • objectively evaluating your own work.
  • objectively assessing someone else’s evaluation of your work.
  • stopping programming when you’re done.

You always need to be learning.

You can’t compete on price in the software market, because there’s always some student somewhere who’s willing to do the same work for free. It was Mike who first taught me that. You have to compete on quality, which means you need to strive to improve your own quality. Other people are improving too, so you need to run just to stand still.

There are various ways to learn, and they’re not mutually exclusive. A combination of books/articles, experimentation, and discussion with peers is valuable. If your town has an Appsterdam or a CocoaHeads, get along and say hello.

You probably don’t want to be doing this in 10 years.

I was actually a UNIX programmer a decade ago (well, I was mainly a student). Then I was a barman, then sysadmin, then a Linux server application programmer, then a Mac app programmer, then a contract Mac programmer, then a Java app programmer, then a security consultant, then an iOS app programmer. There’s only a small probability that I’ll be an iOS app programmer in 10 years.

This world moves really quickly. Ten years ago, the iPod was a new and relatively risky proposition. Macs used PowerPC CPUs. Windows XP was the new hotness, and .NET was just about to appear – meanwhile Mac OS X was a sluggish amalgam of NeXT, Java and legacy code. Java, by the way, was run by a now-defunct company called Sun Microsystems, which was trying to work out how to survive the dot-com crash.

Speaking of the dot-com crash, it seems highly likely that within the next decade we’ll see the dot-app crash. App downloads are worth $0.18 each, but an app costs $200k – apparently it’s hard work. At those prices you need over a million downloads just to recoup the cost of building the thing. That means you’ve got to either get yourself into the long tail value-wise (i.e. have a very good app that people will pay for), or you’ve got to find a million users for version 1.0.

For everyone else, the market isn’t worth staying in long-term. The market will tire of brochureware apps, only a few high-value brands will be able to support unprofitable vanity apps, and VCs will realise that throwing their money after an app with no profit strategy is the same as throwing their money after a website with no profit strategy.

It’s likely that at least one of the companies that’s big in the current software world – Microsoft, Apple, Oracle, Google and the like – will be big in the software world of 2022. It’s also likely that there’ll be some newcomers that change things completely: Facebook and Twitter didn’t exist ten years ago, and neither did Android, Inc. Sometimes companies that seem to be in an interminable tailspin – like Apple – turn themselves around and become successful.

Learn more than one thing

This is related, in part, to what came above: the thing you’re using right now may not exist, or may be hard to get work in, in a few years’ time. On the other hand, some things seem to outlive the cockroaches: C – and by extension, languages that can link somewhat seamlessly with C like C++, Fortran and so on – have been going on forever. It can be hard to predict which of these camps your favourite tech sits in, so learning more than one thing keeps you employable.

More than that, if your technology of choice comes from a single supplier (e.g. Microsoft, Apple, Embarcadero) then diversification just makes good business sense. This particularly applies in the age of the app store where that sole supplier can also be your sole vendor – you don’t want to sign your entire business’s value over to one other company.

Learning another thing makes you better at the first thing

This is another reason why diversifying your technology portfolio is beneficial. Many of the changes I’ve made recently in the way I write object-oriented software come from talking to Clojure programmers.

The more different things you know, the more connections you’ll be able to make between them. The more you’ll be able to critically analyse one technology, beyond what the vendor tells you. And the more you’ll be able to understand other new things and incorporate them into your Weltanschauung.

Conclusion

My summary could be “learn whatever you can: you never know which bits you need”. Or it could be “don’t rely on your supplier to solve all of your problems”.

I think it’s actually going to be: analyse everything. Reflect on your work: what went well? What didn’t? Could you have done things better? If you don’t think you could have, then you’re probably wrong: what would you need to know to identify the bit that actually could’ve gone better?

But know when to stop, too. Analysis paralysis is as much of a problem as going in blind. At some point, you need to suck it up and move on. Trading these two things against each other is the real difficulty in software engineering.

Posted in advancement of the self, Business, code-level, OOP, software-engineering

Objective-C literals and subscripts

If you’re using clang from the LLVM project’s website instead of sticking with Apple’s release, you get support for Objective-C literals and object subscripting. I thought I’d take the BrowseOverflow app and apply this new syntax to it. Notice that the code below doesn’t match what’s in github, which still works with currently-released versions of Xcode and their compilers.

Indexed/Keyed subscripting: we’ve seen this before

Using the syntax described above, you can subscript into an object using something that looks like the traditional C square bracket notation. If you use an integer, you get indexed subscripting:

        Answer *thisAnswer = question.answers[indexPath.row];

If you use an object, you get keyed subscripting:

            NSMutableDictionary *userInfo = [NSMutableDictionary dictionaryWithCapacity: 1];
            if (localError != nil) {
                userInfo[NSUnderlyingErrorKey] = localError;
            }

This is something that’s been available in many languages before. Smalltalk (from which Objective-C derives) had a well-defined subscripting syntax across all objects, using at: and at:put:. C++ permits classes to supply the operator[]() method to use C-style index subscripting just as Objective-C does.
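Your own classes can join in, too: the compiler rewrites subscript expressions into messages with well-known selectors, so implementing those methods is all it takes. Here’s a minimal sketch (the Shelf class and its ivar are invented for illustration):

#import <Foundation/Foundation.h>

@interface Shelf : NSObject
{
    NSMutableArray *slots;
}
- (id)objectAtIndexedSubscript: (NSUInteger)index;
- (void)setObject: (id)anObject atIndexedSubscript: (NSUInteger)index;
@end

@implementation Shelf
- (id)init
{
    if ((self = [super init])) {
        slots = [[NSMutableArray alloc] init];
    }
    return self;
}
// shelf[2] compiles to [shelf objectAtIndexedSubscript: 2]
- (id)objectAtIndexedSubscript: (NSUInteger)index
{
    return [slots objectAtIndex: index];
}
// shelf[2] = thing compiles to [shelf setObject: thing atIndexedSubscript: 2]
- (void)setObject: (id)anObject atIndexedSubscript: (NSUInteger)index
{
    // pad with NSNull so assignment beyond the current end works
    while ([slots count] <= index) [slots addObject: [NSNull null]];
    [slots replaceObjectAtIndex: index withObject: anObject];
}
@end

Keyed subscripting works the same way, through -objectForKeyedSubscript: and -setObject:forKeyedSubscript:.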

An aside in defence of operator overloading

If you look at traditional Objective-C syntax, you can see that (roughly speaking, and anyone who points out the edge case isn’t my friend) there are the things in square brackets that are objects and messages, and the things without square brackets are primitive types. There are two different worlds, and never the twain shall meet.

But actually, we’ve learned that encapsulation is good, and that allowing people to concisely express their intent is better than making them deal with our implementation details. Therefore we want to integrate our data types in the language. We want to tell people “add this thing to the other thing”, not “you need to call this function with these parameters which will add the things”.

Providing custom implementations of the standard operators is the best way of doing that. Yes, it hides what’s happening: that’s the point. Yes, it can be abused: the entire software industry is based on a foundation that makes it possible to write bad software. If you want to take away tools that can be used to introduce bugs, you need to take away everyone’s compilers and interpreters.

An aside on the aside about Objective-C operator overloading

So far, Objective-C objects can provide custom implementations of two C operators: the field access operator ., used for type-safe property access, and the subscript operator [], used as we saw before I digressed.

The reason subscript overloading works in ObjC is that it’s illegal to apply arithmetic operations to object pointers. In C, foo[bar] is just a fancy way of writing *(foo+bar) (which is why bar[foo] also works). If you’re not allowed to apply the [] operator to an id, then you know something else must be happening: i.e. you know that the object subscript behaviour is required.

Well, the fact that you can’t do pointer arithmetic on an id means that you could also, for example, check for illegal use of the + operator and call -objectByAddingObject:.
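If the *(foo+bar) equivalence looks suspicious, it’s easy to check in plain C:

#include <assert.h>

int main(void)
{
    char word[] = "subscript";
    // word[2] is *(word + 2), which is the same expression as 2[word]
    assert(word[2] == 2[word]);
    return 0;
}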

Back to the point: boxing done well.

Many languages make some attempt to “box” primitive types like numbers in high-level value types like number objects. This often causes problems: for example in Java, which has both automatic boxing and method overloading, I could do this:

public void foo(int x) { … }
public void foo(java.lang.Integer x) {…}

foo(3);

It’s not clear to a reader whether 3 refers to the primitive type or to the object type, and therefore which method will get called. (Java’s overload resolution rules prefer the exact primitive match, but you have to know the rules to work that out.)

The same problem could have been encountered in ObjC: does foo[3] refer to indexed subscripting or to keyed subscripting using an NSNumber instance?

Thankfully, whoever designed the Objective-C boxing behaviour decided it must always be explicit. You can get a number like this:

- (void)browseOverflowViewControllerTests_viewDidAppear: (BOOL)animated {
    NSNumber *parameter = @(animated);
    objc_setAssociatedObject(self, viewDidAppearKey, parameter, OBJC_ASSOCIATION_RETAIN);
}

but otherwise numbers will always be treated as numbers, not as objects.
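The same explicitness runs through the rest of the new literal syntax: numbers, arrays and dictionaries all announce themselves with the @ sigil. A few examples (the values are invented):

NSNumber *answer = @42;                 // [NSNumber numberWithInt: 42]
NSNumber *animated = @YES;              // [NSNumber numberWithBool: YES]
NSArray *tags = @[ @"ios", @"clang" ];  // an arrayWithObjects: call in disguise
NSDictionary *question = @{ @"body": @"Why?", @"score": @3 };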

When it all gets a bit much

You’ve got two seconds, the house is on fire, what does this line do?

    NSString *questionBody = [parsedObject[@"questions"] lastObject][@"body"];

It can be a bit hard to read that, and to pick out which brackets go with messages and which go with subscripts. I can imagine the sort of people who like to issue pronouncements on whether to use the dot operator to access properties will love the new subscripting syntax.
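When it does get a bit much, nothing stops you unpacking the expression into named temporaries. A sketch, assuming – as the one-liner implies – that parsedObject maps @"questions" to an array of dictionaries:

NSArray *questions = parsedObject[@"questions"];
NSDictionary *lastQuestion = [questions lastObject];
NSString *questionBody = lastQuestion[@"body"];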

Posted in code-level, OOP

Supporting both ARC and MRC build settings

Let’s face it, people don’t read `README`s. If you write library code that people are going to use in their own projects, you can’t rely on that bit at the bottom of the documentation that tells people to set -fobjc-arc on your files when they drop them into their project. You can rely on all the issues that get reported about memory leaks :-).

The actual solution

Your project should build a library (static by necessity on iPhone, there are other options on the Mac) so developers can just add that one library target as a build dependency, and drop the headers into their own projects.

The result is that now your memory management is hidden behind the object boundary and the naming conventions of your methods. You should probably still be using manual reference counting if you want people who’ve already written apps to be able to link against your code without problems, because there are still apps out there that target versions of iOS that can’t link ARCified objects. Regardless, whether an app is ARCified or not it will be able to link your library.

The other solution

Sometimes you find code that developers are supposed to integrate by dropping the source files into their targets. This is worse than providing a static library: now you’ve made the developer care about the internals of your code – the compiler flags you need to set become something they have to deal with in their target’s build settings. This includes the setting for whether automatic reference counting is enabled.

…unless you support both possibilities. I’ve used the macros defined below to use the same code with both automatic and manual reference counting compiler settings. This code included Core Foundation bridged objects, so this isn’t just “the trivial case” (whatever that is).

#if __has_feature(objc_arc)
# define FZARelease(obj)
# define FZAAutorelease(obj) (obj)
# define FZARetain(obj) (obj)
#else
# define FZARelease(obj) [(obj) release]
# define FZAAutorelease(obj) [(obj) autorelease]
# define FZARetain(obj) [(obj) retain]
#endif
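Here’s how the macros read at a call site – a hypothetical accessor (firstName and lastName are invented ivars). Under MRC the returned string is autoreleased, as the method name promises; under ARC the macro melts away and the compiler takes over:

- (NSString *)fullName
{
    NSString *name = [[NSString alloc] initWithFormat: @"%@ %@", firstName, lastName];
    return FZAAutorelease(name);
}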

Objective-C garbage collection

I haven’t had a need to test how that interacts with garbage collection, or build code that works in all three environments. However, if you already wrote your code to support (rather than require) GC, and you don’t rely on CFMakeCollectable, this collection of macros at least won’t make anything worse.

Posted in Uncategorized

App security consultancy from your favourite boffin

I’m very excited to soon be joining the ranks of Agant Ltd, working on some great apps with an awesome team of people. I’ll be bringing with me my favourite title, Smartphone Security Boffin. Any development team can benefit from a security boffin, but I’m also very excited to be in product development with the people who make some of the best products on the market.

There’s another side to all of this: once again, security boffinry can be at your disposal. I’ll be available on a consultancy basis to help out with your application security and privacy issues. If you’re unsure how to do SSL right in your iOS app, are having difficulty getting your Mac software to behave in the App Store sandbox, or need help to identify the security pain points in your application’s design or code, I can lend a hand.

Of course, it’s not just about technology. The best way to help your developers get security right is to help your developers to help themselves. When I’ve done security training for developers before I’ve seen those flashes of enlightenment when delegates realise how the issues relate to their own work; the hasty notes of class or method names to look into back at the desk; the excited discussions in the breaks. Security training for iOS app developers is great for the people, great for the product – and, of course, something I can help out with.

To check availability and book some time (which will be no earlier than July), drop me a line on the twitters or at graham@agant.com.

Posted in Business, ssl, threatmodel

Class clusters, placeholder objects, value-oriented programming, and all that good stuff.

Have you ever seen this exception in your crash log?

2012-05-29 17:55:37.240 Untitled 2[5084:707] *** Terminating app due to uncaught exception ‘NSInvalidArgumentException’, reason: ‘*** -length only defined for abstract class. Define -[NSPlaceholderString length]!’

What’s that NSPlaceholderString class?

Leaving aside NSMutableString for a moment[*], there’s no way for a developer who’s got an instance of an NSString to modify that string. In this model a string instance represents the value of that string: the word “hello” is always going to be “hello”. You can build a sentence that includes the word “hello” in a sequence – e.g. “hello, world”. You can build a different sentence, e.g. “goodbye, world”. You haven’t changed the value of the word “hello” to “goodbye”, you’ve changed the value of the sentence to include a word with a different value.

OK, let’s take that to an extreme. If any string that a developer gets back from NSString‘s API is immutable, then that should include the string she gets back from +allocWithZone:, right? So any extra data passed in an -initWith… method can’t be used to change the string object we just allocated.

That’s OK, because -init… methods are allowed to return a different object, preserving this “don’t change the value” principle. Imagine the C string initialiser for NSString looking like this (I doubt it does – I think it internally converts the string to UTF-16 – but it’ll do as an example):

-(id)initWithCString: (char *)cString encoding: (NSStringEncoding)encoding
{
  NSCString *otherString = [NSCString allocWithZone: [self zone]];
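  // self was only ever a placeholder: release it and return the real string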
  [self release];
  otherString->length = strlen(cString);
  otherString->bytes = malloc(otherString->length + 1);
  strlcpy(otherString->bytes, cString, otherString->length + 1);
  otherString->storedEncoding = encoding;
  return otherString;
}

This doesn’t violate the no-modification contract, because it only changes an object that’s being built and that the end developer hasn’t seen yet. Once the developer gets to look at this string – when it’s returned from the initialiser – it’ll be immutable.

So this means that the string which was returned from +allocWithZone: represents a particular value of a string: the string that has yet to be assigned a value. Indeed, it’s a placeholder string. But any string that has yet to be assigned a value can be represented by the same placeholder, because they all mean the same thing. That means we can save some memory by creating a Flyweight instance of the placeholder. Even if multiple call sites in multiple threads all get the same instance of our placeholder string, there’s no danger of them tripping over each other because they’ll all then get different strings as they tell the placeholder what values they need to represent.
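You can catch the Flyweight red-handed. The sharing is an implementation detail of Foundation, so observe it rather than rely on it:

id one = [NSString alloc];
id two = [NSString alloc];
// on current Foundation implementations this logs 1: both allocations
// return the same shared placeholder instance
NSLog(@"same object? %d", one == two);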

In fact, if code in two different threads needs to represent the same value, it’s safe to give them both references to the same object. Neither can change that object and spoil things for the other.

This pattern of keeping objects immutable in the eyes of client code, providing transformations that result in new objects rather than modifying existing objects, makes a raft of thread safety problems disappear and reduces the complexity of class APIs. I’ll be using it more often in my object models.
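Foundation’s string API is full of this pattern: transformations hand back new values and leave the receiver alone. For example:

NSString *word = @"hello";
NSString *shouted = [word uppercaseString]; // a brand new string, @"HELLO"
// word is unchanged, so anything else holding a reference to it is unaffected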

[*] To be honest, I’d like to leave it aside forever. It satisfies the Law of Demeter, but there’s a whole class of concurrency problems that only exist because “a mutable string isa string”.

Posted in code-level, Foundation, software-engineering

Is privacy a security feature?

I’ve spoken a lot about privacy recently: mainly because it’s an important problem. Important enough to hit the headlines; important enough for trade associations and independent developers alike to make a priority. Whether it’s talks at conferences, or guiding people on designing or implementing their apps, there’s been a lot of privacy involved. But is it really on-topic for a security boffin?

The “yes” camp: Microsoft

In Michael Howard and David LeBlanc’s book, “Writing Secure Code, Second Edition”, there’s a whole chapter on privacy:

Most privacy threats are information disclosure threats. When performing threat analysis, you should look at all such threats as potential privacy violations.

In this view, a privacy problem is a consequence of confidentiality being disrupted. You model your application, taking into account what data it protects, what value the customers put on that data, and how important it is to protect the confidentiality. Personally-identifying information is modelled in exactly this way.

Privacy automatically falls out of this modelling technique: if people can get access to confidential data, then you have a privacy violation (that also looks like a security vulnerability because it appears in your threat model).

The “no” camp: Oh, it’s Microsoft again

A different viewpoint is expressed in another book by Michael Howard (with Steve Lipner this time): “The Security Development Lifecycle”.

Many people see privacy and security as different views of the same issue. However, privacy can be seen as a way of complying with policy and security as a way of enforcing policy. […] Privacy’s focus is compliance with regulatory requirements[…], corporate policy, and customer expectations.

So in this model, privacy is a statement of intent, and security is a tool to ensure your software follows through on your intent. It’s the difference between design and implementation: privacy is about ensuring you build the right thing, and security helps you build the thing right. The two have nothing to say about each other, except that if you didn’t get the security right you can’t make any claim about whether the policy expressed in the privacy requirements will successfully be met in deployment.

The “who cares?” camp: me

The argument above seems to be a question of semantics, and trying to apportion responsibility for different aspects of development to different roles. In fact, everyone involved in making a product has the same goal – to make a great product – and such niggling is distracting from that goal.

Most of my professional work fits into one of a few categories:

  • Learning stuff
  • Making stuff
  • Helping other people make better stuff
  • Making other people better at making stuff than I am

So if, in the process of helping someone with their security, I should be able to help with their app’s privacy too, should I really keep quiet until we’ve solved some quibbling point of semantics?

Posted in Privacy, software-engineering

Thoughts on Tech Conferences

This post is being, um, posted from the venue for GOTO Copenhagen 2012. It’s the end result of a few months of reflection on what I get out of conferences, what I want to get out of conferences, what I put into (and want to put into) conferences and the position of tech conferences in our industry. I’ve also been discussing things a lot with my friends and peers; I’ve tried to attribute specific quotes where I remember who said them but let it be known that many people have contributed to the paragraphs below in many different ways. I’ll make it clear at the outset that I’m talking about my experience at independent commercial and non-profit tech conferences, not scientific conferences (of which I have little experience) or first-party events like WWDC (which are straightforward marketing exercises).

Conference speakers

My favourite quote on this subject is courtesy of Mike; I remember him saying it in his MDevcon keynote this year but I’m also fairly sure he’s said the same thing earlier:

The talks at a conference are only there so that you can claim the ticket cost as an expense.

We’re in a knowledge economy; but knowledge itself is not of any value unless it’s applied. That means it’s not the people who tell other people what’s going on who are doing the most important work; that’s being done by the people who take this raw knowledge, synthesise it into a Weltanschauung – a model of how the world works – and then make things according to that model. Using an analogy with the economy of physical things, when we think of the sculpture of David in Florence we think of Michelangelo, the sculptor, not of the quarry workers who extracted the marble from the ground. Yes, their work was important and the sculpture wouldn’t exist without the rock, but the most important and valuable contribution comes from the sculptor. So it is in the software world. Speakers are the quarry workers; the marble hewers, providing chunks of rough knowledge-stuff to the real artisans – the delegates – who select, combine and discard such knowledge-stuff to create the valuable sculptures: the applications.

Michelangelo's David, source: Wikimedia Commons.

Conference speakers who believe that the value structure is the other way around are deluding themselves. Your talk is put on at the conference to let people count the conference as a work expense, and to inspire further discussion and research among the delegates on the topic you’re talking about. It’s not there so that you can promote your consultancy/book/product, or produce tweet-worthy quotes, or show off how clever you are. Those things run the gamut from “fringe benefits” to “deleterious side effects”.

As an aside, the first time I presented at a Voices That Matter conference I was worried due to the name; it sounds like the thing that matters at this conference is the speakers’ voices. In fact I suspect there is some of that, as many of the presenters have books published by the conference hosts, but it’s a pretty good conference covering a diverse range of topics, with plenty of opportunities to talk to fellow delegates. And IIRC all attendees got an “I am one of the voices that matter” sticker.

Anyway, back to the topic at hand: thus do we discover a problem. Producing a quality conference talk is itself knowledge work that requires careful preparation, distillation and combination of even more raw knowledge-stuff. It takes me (an experienced speaker who usually gets good, but not rave, reviews) about three days to produce a new one-hour talk, a roughly 25:1 ratio of preparation:delivery. That’s about a day of deciding what to say and what to leave out, a day of designing and producing materials like slides, handouts and sample code, and a day of practising and editing. Of course, that’s on top of whatever research it was that led me to believe I could give the talk in the first place.

The problem I alluded to at the start of the last paragraph is this: there’s a conflict between acknowledging that the talks are the bricks-and-mortar of the conference rather than the end product, and wanting some return on the time invested. How that conflict’s resolved depends on the personal values of the individual; I won’t try to speak for any of my peers here because I don’t know their minds.

The conference echo chamber

That’s not my phrase; I’ve heard it a lot and can track my most recent recollection to @secwhat’s post Conference Angst. Each industry’s conferences have a kind of accepted worldview that is repeated and reinforced in the conference sessions, and that only permits limited scrutiny or questioning – except for one specific variety which I’m coming onto later. As examples, the groupthink in indie Mac/iOS conferences is “developers only need developer features that have been blessed by Apple”. There’s recently been significant backlash against the RubyMotion framework, as there usually is when a new third-party abstraction for iOS appears. But isn’t abstraction a good thing in software engineering? The truth is, of course, that there are more things in heaven and earth than are dreamt of in Apple’s philosophy.

The information security groupthink is that information security is working. Shocking though it may sound, that’s far from obvious, evident or even demonstrable. Show me the blind test where similar projects were run with different levels of info sec engagement and where the outcome was significantly different. Demonstrate how any company’s risk profile has changed since last year. Also, show me an example of security practitioners being ahead of the curve, predicting and preparing for a new development in the field: where were the talks on hacktivism before Anonymous or Wikileaks?

One reason that the same views are repeated over multiple conferences is that the same circuit of speakers travels to all of the conferences. I’m guilty of perpetuating that myself, being (albeit unintentionally in one year) one of the speakers in the iOS circuit. And when I’ve travelled to Seattle or Atlanta or Copenhagen or Aberystwyth, I’ve always recognised at least a few names in the speaker line-up. [While I mentioned Aberystwyth here, both iOSDevUK and NSConf take steps to address the circuit problem. iOSDevUK had a number of first-time speakers and a “bar camp” where people could contribute their own talks. NSConf has the blitz talks which are an accessible way to get a large number of off-circuit speakers, and on one occasion ran a whole day of attendee-contributed sessions called NSConf Mini. When you give people who don’t normally present the opportunity to do so, someone will step up.]

I mentioned before that the echo chamber only permits limited scrutiny, and that comes in the form of the “knowing troll” talk. Indeed at GOTO there’s a track on the final day called “Iconoclasm”, which is populated solely with this form of talk. Where the echo chamber currently resounds to the sound of <X>, it’s permitted to deliver an “<X> sucks” talk. This will usually present a straw man version of <X> and list its failings or shortcomings. That’s allowed because it actually reinforces <X> – real-world examples are rarely anything like as bad as the straw man version, therefore <X> isn’t really that bad. This form of talk is often a last-session-of-the-day entry and doesn’t really lead people to challenge their beliefs. What happens later is that when everyone moves on to the next big thing, the “<X> sucks” talks will become the main body of the conference and “<X+1> sucks” will be the new troll talk.

Conferences

Weirdly, while the word conference means a bringing together of people to talk, coming from the same root as “confer”, many conferences are designed around a one-way flow of words from the speakers to the delegates. Here’s the thing with that. As I said in my keynote talk at MDevcon, we learn from each other by telling and listening to stories. Terry Pratchett, Jack Cohen and Ian Stewart even went as far as to reclassify humans as pan narrans, the storytelling chimpanzee. Now if you’ve got M speakers and 10M<N<100M delegates, then putting a sequence of speakers up and listening to their stories gets you a total of M stories. Letting the N delegates each share their stories, and then letting each of the N-1 other delegates share the stories that the first N stories reminded them of, and so on, would probably lead to a total of N! stories if you had the time to host that. But where that does happen, it’s usually an adjunct to the “big top” show which is the speaker series. [And remember: if you’ve got C conferences, you don’t have C*M speakers, you have M+ε speakers.]

There’s one particular form of wider participation that never works well, and that’s to follow a speaker session with Q&A. Listen carefully to the questions asked at the next Q&A you’re in, and you’ll find that many are not questions, but rhetorical statements crafted to make the “asker” appear knowledgeable. Some of those questions that are questions are rhetorical land mines with the intent of putting the speaker on the back foot, again to make the asker seem intellectually talented. Few of these questions will actually be of collective value to the plenus, so there’s not much point in holding the Q&A in front of everyone.

Speaker talks are only one way to run a session, though. Panels, workshops and debates all invite more collaboration than speaker sessions. They’re also much more difficult to moderate and organise, so are rarely seen: many conferences have optional days that are called “workshops” but in reality are short training courses run by an invited speaker. In the iOS development world, lab sessions are escaping the confines of WWDC and being seen at more independent conferences. These are like one-on-one or few-on-few problem solving workshops, which are well focussed and highly collaborative but don’t involve many people (except at Voices that Matter, where they ran the usability workshops on the stage in front of the audience). A related idea being run at GOTO right now, which I need to explore, is a whole track of pair programming sessions. The session host chooses a technology and a problem, and invites delegates onto the stage to work through the challenge with the host in a pair-programming format. That’s a really interesting way to attract wider participation; I’ll wait until I’ve seen it in action before reaching an opinion on whether it works.

There’s another issue, that requires a bit more setup to explain. Here’s a Venn diagram for any industry with a conference scene; the areas are indicative rather than quantitative but they show the relation between:

  • the population of all practitioners;
  • the subsection of that population that attends conferences; and
  • the subsection of that population that speaks at conferences.

Conference Venn diagram

So basically conferences scale really badly. Even once we’ve got past the fact that conferences are geared up to engage the participation of only a handful of their attendees, the next limiting factor is that most people in [whatever industry you’re in] aren’t attending conferences. For the stories told at a conference (in whatever fashion) to have the biggest impact on their industry, they have to break the confines of the conference. This would traditionally, in many conferences, involve either publishing the proceedings (I’ve not heard of this happening in indie tech conferences since the NATO conferences of 1968-9, although Keith Duncan is one of a couple of people to mention to me the more general idea of a peer-reviewed industry journal) or the session videos (which is much more common).

To generate the biggest impact, the stories involved must be inspiring and challenging so that the people who watched them, even those who didn’t attend the conference, feel motivated to reflect on and change the way they work, and to share their experiences (perhaps at the same conference, maybe elsewhere).

Before moving on to a summary of everything I’ve said so far, I’ll make one more point about the groups drawn on the Venn diagram. Speakers tend to be specialists (or, as Marcus put it in his NSConf talk, subject matter experts) in one or two fields; that’s not surprising given the amount of research effort that goes into a talk (described above). Additionally, some speakers are asked to conferences because they have published a book on the topic the convenor wishes them to speak on; that’s an even longer project of focussed research. This in itself is a problem, because a lot of the people having difficulty with their work are likely to be neophytes, but apparently we’re not listening to them. We listen to self-selected experts opining on why everyone needs to take security/TDD/whatever seriously and why that involves retaining the experts’ consultancy service: we never listen to the people who can tell us that after a month of trying this Objective-C stuff still doesn’t make sense. These are the people who can give us insight into how to improve our practice, because these are the people reminding the experts (and indeed the journeymen) of the problems they had when they’d been at this for a month. They tell us about the issues everyone has, and give us ideas on how we can fix it for all (future) participants.

Conference goers, then, get the benefit of a small handful of specialists: in other words they have a range of experience to call on (vicariously) that is both broad and deep. Speakers of course have the same opportunity, though don’t always get to take full advantage of the rest of a conference due to preparation, equipment tests, post-talk question sessions and the like. The “non-goers” entry in the diagram represents a vast range of skills and experiences, so it’s hard to find any one thing to say about them. Some will be “distance delegates”, attending every conference by purchasing the videos, transcripts or other materials. Some will absorb information by other means, including meet-ups, books, blogs etc. And some will be lone coders who never interact with anyone in their field.

Imagine for a moment that your goal in life is to apply the Boy Scout Rule (which I’m going to attribute again to @ddribin because I can’t remember who he got it from; Uncle Bob probably) to your whole industry. Your impact on $thing_you_do will be to leave the whole field, the whole practice a bit better than it was when you got here. (If that really is your goal, then skip the imagination part for a bit.)

It seems to me that the best people to learn from are the conference delegates (who have seen a wide section of the industry in considerable depth) and the best people to transfer that knowledge to are, well, everybody.

Summary of the current position

Conferences are good. I don’t want people to think I’m hating on conferences. They’re enjoyable events, there are plenty of good ones, there’s an opportunity to learn things, and to see fresh perspectives on many aspects of our industry. They’re also more popular than ever, with new events appearing (and selling out rapidly) every year. However, these perspectives often have an introspective, echo chamber quality. We’re often listening to a small subset of the conference delegates, and if you integrate over multiple conferences you find the subset gets relatively smaller because it’s the same people presenting all the time. Most delegates will not get the benefit of listening to all of the other delegates, which means they’re missing out on engaging with some of the broadest experience in the industry. Most of the practitioners in your corner of the industry probably don’t attend any conferences anyway; there aren’t enough seats for that to work.

The ideal tech conference

OK, I am very clearly lying here: this isn’t the ideal tech conference, it’s my ideal tech conference. In my world, those are the same thing. PerfectConf features a much more diverse portfolio of speakers. In the main this is achieved exactly the way that Appsterdam does it; by offering the chance to speak to anyone who’ll take it, by looking for things that are interesting to hear about rather than accomplished or expert speakers to say it, and by giving novice speakers the chance to train with the experts before they go in front of the stage. Partly this diversity is achieved by allowing people who aren’t comfortable with speaking the opportunity to host a different kind of session, for example a debate or a workshop.

In addition to engaging session hosts who would otherwise be apprehensive about presenting, we get to hear about the successes and tribulations encountered by the whole cohort of delegates. At least one session would be a plenary debate, focussed on a problem that the industry is currently facing. This session has the modest aim of discovering a solution to the problem to move the industry as a whole forward. Another way in which diversity is introduced into the conference is by listening to people outside of our own sector. If infosec is having trouble getting budget for its activities, perhaps it ought to invite more CFOs or comptrollers to its conferences to discuss that. If iPhone app developers find it hard to incorporate concurrency into their application designs, they could do worse than to listen to an Erlang or Occam expert. Above all, the echo chamber would be avoided; session hosts would be asked to challenge the perceived industry status quo.

I’ve long thought that if a talk of mine doesn’t annoy at least one member of the audience then I haven’t said anything useful; a former manager of mine said “if we both think the same way about everything then one of us is redundant”. This way of thinking would be codified into the conference. Essentially, what I’m talking about is the death of the thought leader (or “rock star”). Rather than having one subject matter expert opining on how everyone should think about security, UX, marketing, or whatever, PerfectConf encourages the community to work together like a slime mould, allowing the collective motion of all of the members to explore all opportunities and options and select the best one by communicating freely across the colony.

Slime mold solving a maze (photo: Nature)

Finally, PerfectConf proceedings are published as soon as practical; not just the speaker sessions but the debates too. Where the plenus reaches a consensus, the consensus decision becomes available for all those people who couldn’t make it to the conference to discover, consider, and potentially adopt or react to. Unfortunately I’m not a conference organiser.

Posted in advancement of the self, books, Business, NSConf, Talk, WWDC

BrowseOverflow as a Code Kata

This article was originally posted over at InformIT.

My goal in writing Test-Driven iOS Development was to take readers from not knowing how to write a test for their iOS apps, to understanding the TDD workflow and how it could work for them. That mirrored the journey that I had taken in learning about test-driven development, and that had led me to wanting to write a book to share what I’d learned with my peers.

This has an interesting effect on the structure of the book. Not all of the sample code from the BrowseOverflow app is shown (though it’s all available on GitHub). This isn’t an accident of the editorial process. It’s a feature: for any test shown in the book, there are numerous different ways you could write application code that would all be just as good. Anything that causes the test to pass, and doesn’t make the other tests fail, is fine.

Just as the many-worlds model in quantum theory says that there are many branches of the universe that are created every time a decision is made, so there’s a “many-BrowseOverflows” model of Test-Driven iOS Development. Every time a test is written, there are many different solutions to passing the test, leading to there being multiple potential BrowseOverflows. The code that you see on GitHub is just one possible BrowseOverflow, but any other code that satisfies the same tests is one of the other possible BrowseOverflows.

This means that you can treat the book like a kata: the Japanese martial art technique of improving a practice by repeating it over and over. The first time you read Test-Driven iOS Development, you can choose to follow the example code very closely. Where the code isn’t given in the book, you might choose to look at the source code to understand how I solved the problems posed by the tests. But the end of the book is not the end of the journey: you can go back, taking the tests but implementing all of the app code yourself. You can do this as many times as you want, trying to find new ways to write code that produces a BrowseOverflow app.

Finally, when you’re more confident with the red-green-refactor way of working, you can write a BrowseOverflow app that’s entirely your own creation. You can define what the app should do, create tests that express those requirements and write the code to implement it however you like. This is a great way to test out new ways of working, for example different test frameworks like GHUnit or CATCH. It also lets you try out different ways of writing the application code: you could write the same app but trying to use more blocks, or try to use the smallest number of properties in any class, or any other challenge you want to set yourself. Because you know what the software should be capable of, you’re free to focus on whatever skill you’re trying to exercise.

Posted in advancement of the self, books, code-level, TDD, TDiOSD

Using GNUstep libraries with Xcode

I was recently asked about building projects that use GNUstep from Xcode. The fact is, it’s incredibly easy.

By default, GNUstep on Mac OS X installs its libraries to /usr/local/lib and its frameworks to /Library/Frameworks. Therefore if you want to include GNUstep-base additions, you just hit the + button in your target’s “link binary with libraries” section and find the libgnustep-baseadd.dylib entry under Mac OS X 10.7. If you wanted to use GNUstepWeb, you’d look for WebObjects.framework in the same list.

You can get access to the GNUstep base additions in your code by including <GNUstepBase/GNUstep.h>.
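As an example of what the additions buy you, that header defines manual retain/release conveniences such as ASSIGN() and DESTROY(). A sketch (the Person class is invented, and assumes a manual reference counting target):

#import <Foundation/Foundation.h>
#import <GNUstepBase/GNUstep.h>

@interface Person : NSObject
{
    NSString *name;
}
- (void)setName: (NSString *)aName;
@end

@implementation Person
- (void)setName: (NSString *)aName
{
    ASSIGN(name, aName);   // retains the new value, releases the old one
}

- (void)dealloc
{
    DESTROY(name);         // releases the ivar and sets it to nil
    [super dealloc];
}
@end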

Notice that if you manage a GNUstep project using Xcode, you’ll only be able to build it on a Mac (unless you go to the bother of writing a build tool with GNUstep’s unfortunately-named XCode.framework – which ironically doesn’t currently work on Mac OS X). If you need to target non-Mac platforms, your options are to build Cocotron-style cross-compilers and add them to your Xcode project, or to create an Xcode project with an External Build System target and manage your build via make anyway.

An interesting extension would be to define a new filesystem layout for GNUstep-make that deposited all of the frameworks and libraries into a .sdk folder that could be used in Xcode as an additional SDK.

Posted in code-level, gnustep, tool-support

Building a unit test target with GNUstep make

Just a quick note on how I build my test tools (they run separately, either by manual invocation or via CI) when I’m working in GNUstep.

Firstly, you’ll need Catch. Then given test files that look like this:

test_class.mm
#define CATCH_CONFIG_MAIN
#include "catch.hpp"
#import <Foundation/Foundation.h>

TEST_CASE("Using foundation", "I should be able to use Foundation classes from a test tool")
{
  NSArray *array = [NSArray array];
  REQUIRE([array count] == 0);
}

(only one of your files should define CATCH_CONFIG_MAIN as you must only have one main function).

Then I define a tests target in my Makefile:

GNUmakefile

include $(GNUSTEP_MAKEFILES)/common.make

#… stuff to define my app target

TEST_TOOL_NAME=tests
tests_OBJCC_FILES = test_class.mm other_tests.mm blah.mm
tests_OBJCCFLAGS="-ICatch/include"

-include Makefile.preamble

#… include target definitions for the app target
include $(GNUSTEP_MAKEFILES)/test-tool.make

-include Makefile.postamble

And there it is. Now whenever I make my app I also get obj/tests which’ll run the test suite for me.

Posted in TDD