On running out of words

John Gruber’s subscription to Wiktionary expired:

At just 20 percent of unit sales, Apple isn’t even close to a monopoly. At 92 percent profit share, they have a market dominance that rivals any actual monopoly the tech industry has ever seen. We don’t even have a term for this situation, it’s so unusual.

We do have a term: monopoly will do just fine. Gruber says that Apple “isn’t even close to a monopoly”, but you don’t need to have all or even most of the unit sales in a market in order to be able to act monopolistically. An entity (or a cabal) only needs a big enough share of the sales to be able to set prices independently of the other competitors in the market. (Working at big telecoms companies teaches you the specifics of market economics, but then so did those economics classes I took at university.)

That 92% profit on 20% sales is indicative, rather than contraindicative, of a monopoly. And there’s another word we could use, too: monopsony. Let’s say that you’ve made an iOS app, and now you want to sell it. Do you create a storefront on your website to do that? Do you contact Sears and see how many boxes they want? Speak to some third-party distributor? No: you can only sell to Apple; they are the only buyer for iOS apps.

The thing it’s important to remember about monopolies or monopsonies is that they are not inherently bad: badness happens when an entity uses its dominant position in a market to set prices or other terms that are not considered fair, and that’s a pretty woolly situation. When the one buyer in your market decides that your contribution is “amateur hour” (sucks to be a hobbyist, I guess), or that your content is “over the line”, and doesn’t want to buy your product, you have no other buyers to sell it to: is that fair?

This is an argument that relies too much on legal detail and nuance for a novice like me to answer, so I’ll spare you my “amateur hour” pontification. I would imagine that a legal system that did explore this question would consider analogous environments, like the software market of the 1990s. Back then, Microsoft bundled a web browser and a media player with their operating systems and used their market power (which let them act as a monopoly even though competitors existed) as an operating system vendor to make it hard to sell competing browsers or media players. It might be an interesting thought experiment to compare that situation with today’s.

Apple noticed there are programmers outside the valley

If my summary sounds cynical, it’s because I’m cynical of the old Apple way, where they only hired engineers who wanted to relocate to within the shadow of whatever the big thing in Silicon Valley is (Stanford now, but probably HP when Apple was younger). I’m excited that they’ll get to hire from a broader range of applicants as they stagger, eyes blinking, into the wide world outside Cupertino.

Standing at the Crossroads

A while back I wrote Conflicts in my Mental Model of Objective-C, in which I listed a few small scale dichotomies or cognitive dissonances that plagued my notion of my work. I just worked out what the overall picture is, the jigsaw into which all of these pieces can be assembled.

And I do mean just. It’s about 1AM on Christmas Eve, but this picture hit me so hard that I couldn’t stop thinking about it until I’d written it down and got it out of my head. If it doesn’t explain everything, it shows me the shape of the solution at least.

A tale of two Apples

I believe that everything I wrote in the Conflicts post can be understood in terms of two different and (of course) opposed models of Apple. I also believe that while the two models appear irreconcilable, the opposition is accidental, not essential: by removing the supposed conflict between them, everything I thought was a problem can be resolved.

It was the best of iPads

The iPad is, I would argue (and accept that this is a subjective argument) the most tasteful application of computing technology to the world of many computer users. I would further argue, and this is perhaps on firmer ground, that the reason the iPad is so tasteful is because Apple spend a lot more time and resources on worrying about questions of taste than many of their competitors and others in the industry.

There are many visions that have combined to produce the iPad, but interestingly the one that I think is clearest is John Sculley’s Knowledge Navigator. In the concept videos for Knowledge Navigator, a tablet computer with natural language speech comprehension and a multi-touch screen is able to use the many hyperlinked documents available on the Web to answer a wide range of questions, make information available to its users and even help them to plan their schedules.

This is what we have now. This is the iPad, with Siri and a host of third-party applications. Apple even used to use the slogan “there’s an app for that”. Do you have a problem? You can probably solve it with iOS and a trip to the app store.

It was the worst of iPads

OK, what do you do if your problem isn’t solved on the app store, or the available solutions aren’t satisfactory?

Well first, you’d better get yourself another computer because while the iPad is generally designed for solving problems it isn’t designed for solving general problems. You might be able to find some code editors on the iPad, but you sure aren’t going to use them to write an iPad app without external assistance.

OK, so you’ve got your computer, and you’ve learned how to do the stuff that makes iPad apps. Now you just pay a recurring fee to be allowed to put that stuff onto your iPad. And what if you want to share that with your friends? Only if it meets Apple’s approval.

If the iPad is the Knowledge Navigator, it is not the Dynabook. A Dynabook is a computer that you can use to solve your problems on, but it’s also one on which you can create solutions to your own problems.

The promise of the Dynabook is that if you understand what your problem is, you can model that problem on the Dynabook. You model it with objects, either your own or ones supplied for you. You can change and create these objects until they model the problem you have, at which point you can use them to compute a solution.

The irony is that we have all the parts needed in a Dynabook, all in the iPad. A computer so simple even children can use it? Check. Objects? Check. Repositories of objects created by other people so we don’t have to rewrite our own basic objects all the time? We call that CocoaPods (or RubyGems, or whatever your poison is in your part of the world). But we just can’t put all of these things together on that computer itself.

That would be distasteful. That might let people do things that make the iPad look bad. That might mean iPads providing experiences that haven’t been vetted by the mothership.

Does using an iPad ever make you wonder how iPads work? What they can do? What you can make them do? You can answer these questions, but not using an iPad. Your Knowledge Navigator does not know the route to that particular destination.

This is what is truly meant when it is said that the iPad is not upgradeable. Forget swapping out memory chips or radio transmitters. Those are just lumps of sand inside a box made of melted sand and refined rock. The iPad is not upgradeable because you are stuck with the default experience: the out-of-the-box facilities plus those that have been approved from on high. It might be good, but it might not be good enough.

Notice that this is not an “everyone must program” position. That would be a very bad experience. The position is rather “everyone must have the facility, should they be so inclined, to make their computer better for them than the manufacturers did”.

Conclusions?

I think that the Apple described above is not at the intersection of technology and the liberal arts. It is at the border, a self-appointed barrier controlling what may flow between the two.

I believe that the two visions can be reconciled, and that a thing can be both the Knowledge Navigator and the Dynabook. I don’t believe you have to disable some experiences to provide others. I believe that Apple the champions of tasteful computing can be applauded at every turn while Apple the high priests of the church of computing can be fought tooth and nail.

Enablers? Yes, please. Arbiters? No, thanks.

Conflicts in my mental model of Objective-C

My worldview as it relates to the writing of software in Objective-C contains many items that are at odds with one another. I either need to resolve them or to live with the cognitive dissonance, gradually becoming more insane as the conflicting items hurl one another at my cortex.

Of the programming environments I’ve worked with, I believe that Objective-C and its frameworks are the most pleasant. On the other hand, I think that Objective-C was a hack, and that the frameworks are not without their design mistakes, regressions and inconsistencies.

I believe that Objective-C programmers are correct to side with Alan Kay in saying that the designers of C++ and Java missed out on the crucial part of object-oriented programming, which is message passing. However I also believe that ObjC missed out on a crucial part of object-oriented programming, which is the compiler as an object. Decades spent optimising the compile-link-debug-edit cycle have been spent on solving the wrong problem. On which topic, I feel conflicted by the fact that we’ve got this Smalltalk-like dynamic language support but can have our products canned for picking the same selector name as some internal secret stuff in someone else’s code.
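
As a concrete illustration of that last conflict, here is a minimal sketch (the category and method name are hypothetical): Objective-C selectors share one flat namespace per class, so an innocently chosen name can match a private framework selector, can replace its implementation at runtime (the behaviour is formally undefined), and can trip the automated private-API checks during review.

#import <UIKit/UIKit.h>

@interface UIView (GLConvenience) // hypothetical category on a framework class
- (void)recycle; // hypothetical name; imagine a private UIKit method already uses this selector
@end

@implementation UIView (GLConvenience)
- (void)recycle
{
    // if the framework privately implements -recycle, this category can silently
    // displace it, and tools that scan binaries for private selectors can flag the app
    [self removeFromSuperview];
}
@end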

I feel disappointed that in the last decade, we’ve just got tools that can do the same thing but in more places. On the other hand, I don’t think it’s Apple’s responsibility to break the world; their mission should be to make existing workflows faster, with new excitement being optional or third-party. It is both amazing and slightly saddening that if you defrosted a cryogenically-preserved NeXT application programmer, they would just need to learn reference counting, blocks and a little new syntax and style before they’d be up to speed with iOS apps (and maybe protocols, depending on when you threw them in the cooler).

Ah, yes, Apple. The problem with a single vendor driving the whole community around a language or other technology is that the successes or failures of the technology inevitably get caught up in the marketing messages of that vendor, and the values and attitudes ascribed to that vendor. The problem with a community-driven technology is that it can take you longer than the life of the Sun just to agree how lambdas should work. It’d be healthy for there to be other popular platforms for ObjC programming, except for the inconsistencies and conflicts that would produce. It’s great that GNUstep, Cocotron and Apportable exist and are as mature as they are, but “popular” is not quite the correct adjective for them.

Fundamentally I fear a world in which programmers think JavaScript is acceptable. Partly because JavaScript, but mostly because when a language is introduced and people avoid it for ages, then just because some CEO says all future websites must use it they start using it, that’s not healthy. Objective-C was introduced and people avoided it for ages, then just because some CEO said all future apps must use it they started using it.

I feel like I ought to do something about some of that. I haven’t, and perhaps that makes me the guy who comes up to a bunch of developers, says “I’ve got a great idea” and expects them to make it.

Coupling in a Cocoa[ Touch] App

This is one of my occasional “problem looking for a solution” posts. It’d be great to discuss this over on App.net or G+ or somewhere. I don’t think, at the outset of writing this post, that the last sentence is going to solve the problems identified in the body.

I’ve built applications in a few different technologies, and I think that the Cocoa and Cocoa Touch apps I’ve seen have been the most tightly coupled to their host frameworks, with the possible exception of Delphi. I include both my own code and code I’ve seen from other people—I’m mainly talking about my work, of course. I’ve got a few ideas on why that might be.

The coupling issue in detail

Ignore Foundation for a moment. A small amount of Foundation is framework-ish: the run loop and associated event sources and sinks. The rest is really a library of data types and abstracted operating system facilities.

But now look at AppKit or UIKit. Actually, imagine not looking at them. Consider removing all of the code in your application that directly uses an AppKit or UIKit class. Those custom views, gone. View controller subclasses? Boom. Managed documents? Buh-bye.

OK, now let’s try the same with Core Data. Imagine it disappeared from the next version of the OS. OK, so your managed objects disappear, but how much of the rest of your app uses them directly? Now how about AVFoundation, or GameKit, and so on?

I took a look at some code that I’d written very recently, and found that less of it survived these excisions than I might like. To be sure, it’s a greater amount than from an application I wrote a couple of years ago, but all the same my Objective-C code is tightly coupled to the Apple frameworks. The logic and features of the application do not stand on their own but are interwoven with the things that make it a Mac or iPhone app.

Example symptoms

I think Colin Campbell said it best:

iOS architecture, where MVC stands for Massive View Controller

A more verbose explanation is that these coupled applications violate the Single Responsibility Principle. There are multiple things each class is doing, and this means multiple reasons that I might need to change any of these classes. Multiple reasons to change means more likelihood that I’ll introduce a bug into this class, and in turn more different aspects of the app that could break as a result.

Particularly problematic is that changes in the way the app needs to interact with the frameworks could upset the application behaviour: i.e. I could break what the app does by changing how it presents a UI or how it stores objects. Such modifications could come about when new framework classes are introduced, or new features added to existing classes.
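
To make the symptom concrete, here is a compressed sketch of the kind of class I mean (all of the names are hypothetical): one view controller owning storage, a business rule and presentation, so a change to the Core Data usage, to the calculation, or to the UI all land in the same place.

#import <UIKit/UIKit.h>
#import <CoreData/CoreData.h>

@interface GLAccountViewController : UIViewController
@property (nonatomic, strong) NSManagedObjectContext *context; // storage concern
@property (nonatomic, weak) IBOutlet UILabel *balanceLabel;    // presentation concern
@end

@implementation GLAccountViewController
- (void)viewDidLoad
{
    [super viewDidLoad];
    // fetching, the business rule and the formatting all live in a UIKit subclass
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Account"];
    NSArray *accounts = [self.context executeFetchRequest:request error:NULL];
    double balance = [[accounts valueForKeyPath:@"@sum.balance"] doubleValue];
    self.balanceLabel.text = [NSString stringWithFormat:@"Balance: %.2f", balance];
}
@end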

What’s interesting is that I’ve worked on applications using other frameworks, and done a better job: I can point at an Eclipse RCP app where the framework dependencies all lie at the plugin interface boundaries. Is there something specific to Cocoa or Cocoa Touch that leads to applications being more tightly coupled? Let’s look at some possibilities.

Sample code

People always malign sample code. When you’re just setting out, it assumes you know too much. When you’re expert, it takes too many shortcuts and doesn’t display proper [choose whichever rule of code organisation is currently hot]. Sample code is like the avocado of the developer documentation world: it’s only ripe for a few brief minutes between being completely inedible and soft brown mush.

Yes, sample code is often poorly-decoupled, with all of the application logic going in the app delegate or the view controller. But I don’t think I can blame my own class design on that. I remember a time when I did defend a big-ass app delegate by saying it’s how Apple do it in their code, but that was nearly a decade ago. I don’t think I’ve looked to the sample code as a way to design an app for years.

But sample code is often poorly-decoupled regardless of its source. In researching this post, I had a look at the sample code in the Eclipse RCP book; the one I used to learn the framework for the aforementioned loosely-coupled app. Nope, that code is still all “put the business logic in the view manager” in the same way Apple’s and Microsoft’s is.

Design of the frameworks and history

I wonder whether there’s anything specific about the way that Apple’s frameworks are designed that leads to tight coupling. One thing that’s obvious about how many of the frameworks were designed is that they were designed in the 1990s.

Documentation on object-oriented programming from that time tells us that subclassing was the hotness. NSTableView was designed such that each of its parts would be “fully subclassable”, according to the release notes. Enterprise Objects Framework (and as a result, Core Data) was designed such that even your data has to be in a subclass of a framework class.

Fast forward to now and we know that subclassing is extremely tight coupling. Trying to create a subclass of a type you do control is tricky enough, but when it’s someone else’s you have no idea when they’re going to add methods that clash with yours or change behaviour you’re relying on. But those problems, real though they are, are not what I’m worried about here: subclassing is vendor lock-in. Every time you make it harder to extract your code from the vendor’s framework, you make it harder to leave that framework. More on that story later.

So subclassing couples your code to the superclass, and makes it share the responsibility of the superclass as well as whatever it is you want it to do. That becomes worse when the superclass itself has multiple responsibilities (the document case is sketched after the list):

  • views are responsible for drawing, geometry and event handling.
  • documents are responsible for loading and saving, managing windows, handling user interaction for load, save and print requests, managing the in-memory data representing the document, serialising document access, synchronising user interface access with background work and printing.
  • pretty much everything in an AppKit app is responsible in some way for scripting support, which is a cross-cutting concern.
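
As promised, here is a compressed sketch of the document case (the class and model are hypothetical): an NSDocument subclass that is at once the framework’s document machinery and the owner of the app’s model, so either kind of change lands in the same class.

#import <Cocoa/Cocoa.h>

@interface GLRecipeDocument : NSDocument
@property (nonatomic, copy) NSArray *recipes; // the app's own model, living inside a framework subclass
@end

@implementation GLRecipeDocument
- (NSData *)dataOfType:(NSString *)typeName error:(NSError **)outError
{
    // framework responsibility (saving) entangled with knowledge of the model
    return [NSKeyedArchiver archivedDataWithRootObject:self.recipes];
}

- (BOOL)readFromData:(NSData *)data ofType:(NSString *)typeName error:(NSError **)outError
{
    // framework responsibility (loading), likewise
    self.recipes = [NSKeyedUnarchiver unarchiveObjectWithData:data];
    return YES;
}
@end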

You can see Apple’s frameworks extricating themselves from the need to subclass everything, in some places. The delegate pattern, which was largely used either to supply data or to let the application respond to events from long-running tasks, is now also used as a decorator to provide custom layout decisions, as with the 10.4 additions to NSTableViewDelegate, UITableViewDelegate and, more obviously, UICollectionViewDelegateFlowLayout. Delegate classes are still coupled to the protocol interface, but are removed from the class hierarchy, giving us more options to adapt our application code onto the interface.
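
A minimal sketch of that adaptor role (the class name is hypothetical): the layout decision lives in a small object coupled only to the delegate protocol, rather than in a view or view controller subclass.

#import <UIKit/UIKit.h>

@interface GLRowHeights : NSObject <UITableViewDelegate>
@end

@implementation GLRowHeights
// a custom layout decision supplied by composition rather than by subclassing the view
- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    return (indexPath.row == 0) ? 88.0 : 44.0;
}
@end

// usage, somewhere that owns both objects (UITableView does not retain its delegate):
//   self.rowHeights = [[GLRowHeights alloc] init];
//   self.tableView.delegate = self.rowHeights;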

Similarly, the classes that offer an API based on supplying a completion handler require that the calling code be coupled to the framework interface, but allow anything to happen on completion. As previously mentioned you sometimes get other problems related to the callback design, but at least now the only coupling to the framework is at the entry point.
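
For example (a sketch; the URL is a placeholder), NSURLConnection’s block-based method couples the caller to Foundation only at the call site; everything that happens on completion is the application’s own code.

NSURLRequest *request = [NSURLRequest requestWithURL:
    [NSURL URLWithString:@"https://example.com/data.json"]]; // placeholder URL
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // the framework's involvement ends here; what happens next is app code
    if (data) {
        NSLog(@"received %lu bytes", (unsigned long)[data length]);
        // ...hand the bytes to a plain model object of the app's own
    }
}];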

No practice at decoupling

This is basically the main point, isn’t it? A lack of discipline comes from a lack of practice. But why should I (and, I weakly argue, other programmers whose code I’ve read) be out of practice?

No push to decouple

Here’s what I think the reason could be. Remind me of the last application you saw that wasn’t either a UIKit app or an AppKit app. Back in the dim and distant past there might have been more variety: the code in your WebObjects Objective-C server might also be used in an AppKit+EOF client app. But most of us are writing iOS software that will only run on iOS, so the effort of decoupling an iOS app from UIKit has intellectual benefits but no measurable tangible benefits.

The previous statement probably shouldn’t be taken as absolute. Yes, there are other ways to do persistence than Core Data. Nonetheless, it’s common to find iOS apps with managed object contexts passed to every view controller: imagine converting that ball of mud to use BNRPersistence.
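
One way to keep that conversion imaginable (a sketch; the protocol and classes are hypothetical, not a prescription) is to hide storage behind a small protocol owned by the app, so that view controllers depend on “something that stores recipes” rather than on NSManagedObjectContext, and a move to BNRPersistence or anything else stays behind one seam.

#import <UIKit/UIKit.h>
#import <CoreData/CoreData.h>

@protocol GLRecipeStore <NSObject>
- (NSArray *)allRecipes;
- (void)addRecipe:(id)recipe;
@end

// one adopter wraps Core Data; a BNRPersistence- or SQLite-backed class could
// adopt the same protocol without any view controller noticing the change
@interface GLCoreDataRecipeStore : NSObject <GLRecipeStore>
- (instancetype)initWithContext:(NSManagedObjectContext *)context;
@end

@interface GLRecipeListViewController : UITableViewController
@property (nonatomic, strong) id <GLRecipeStore> store; // the only persistence dependency
@end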

What about portability? There are two options for writing cross-platform Objective-C software: frameworks that are API compatible with UIKit or AppKit, or not. While it is possible to build an application that has an Objective-C core and uses, say, Qt or Win32 to provide a UI, in practice I’ve never seen that.

There also aren’t any “alternative” Objective-C application frameworks on the platforms ObjC programmers do support. Whereas you might want to use the same Java code in an SWT app, a Swing app, and a Spring MVC server, you’re not going to want to port your AppKit app to SomeoneElsesKit. You’ll probably not want to move away from UIKit to $something_else in the near future in the same way it might be appropriate to move from Struts to Spring or from Eclipse RCP to NetBeans Platform.

Of course, this is an argument with some circular features. Because it’s hard to move the code I write away from UIKit, if something else does come along the likelihood is I’d be disinclined to take advantage of it. I should probably change that: vendor lock-in in a rapidly changing industry like software is no laughing matter. Think of the poor people who are stuck with their AWT applications, or who decided to build TNT applications on NeWS because it was so much more powerful than X11.

Conclusion

This is the last sentence, and it solves nothing; I did tell you that would happen.

Happy Birthday, Objective-C!

OK, I have to admit that I actually missed the party. Brad Cox first described his “Object-Oriented pre-compiler”, OOPC, in the January 1983 issue of ACM SIGPLAN Notices.

This describes the Object Oriented Pre-Compiler, OOPC, a language and a run-time library for producing C programs that operate by the run-time conventions of Smalltalk 80 in a UNIX environment. These languages offer Object Oriented Programming in which data, and the programs which may access it, are designed, built and maintained as inseparable units called objects.

Notice that the abstract has to explain what OOP is: these were early days at least as far as the commercial software industry viewed objects. Reading the OOPC paper, you can tell that this is the start of what became known as Objective-C. It has a special syntax for sending Smalltalk-style messages to objects identified by pointers to structures, though not the syntax you’ll be used to:

someObject = {|Object, "new"|};
{|myArray, "addObject:", someObject|};

The infix notation [myArray addObject:someObject]; came later, but by 1986 Cox had published the first edition of Object-Oriented Programming: An Evolutionary Approach and co-founded Productivity Products International (later Stepstone) to capitalise on the Objective-C language. I’ve talked about the version of ObjC described in this book in this post, and the business context of this in Software ICs and a component marketplace.

It’s this version of Objective-C, not OOPC, that NeXT licensed from PPI as the basis of the Nextstep API (as distinct from the NEXTSTEP operating system: UNIX is case sensitive, you know). They built the language into a fork of the GNU Compiler Collection, and due to the nature of copyleft this meant they had to make their adaptations available, so GCC on other platforms gained Objective-C too.

Along the way, NeXT added some features to the language: compiler-generated static instances of string classes, for example. They added protocols: I recorded an episode of NSBrief with Saul Mora discussing how protocols were originally used to support distributed objects, but became important design tools. This transformation was particularly accelerated by Java’s adoption of protocols as interfaces. At some (as far as I can tell, not well documented) point in its life, Stepstone sold the rights to ObjC to NeXT, then licensed it back so they could continue supporting their own compiler.

There isn’t a great deal of change to Objective-C from 1994 for about a decade, despite or perhaps due to the change of stewardship in 1996/1997 as NeXT was purchased by Apple. Then, in about 2003, Apple introduced language-level support for exceptions and critical sections. In 2007, “Objective-C 2.0” was released, adding a collection enumeration syntax, properties, garbage collection and some changes to the runtime library. Blocks—a system for supporting closures that had been present in Smalltalk but missing from Objective-C—were added in a later release that briefly enjoyed the name “Objective-C 2.1”, though I don’t think that survived into the public release. To my knowledge 2.0 is the only version designation any Apple release of Objective-C has had.

Eventually, Apple observed that the AutoZone garbage collector was inappropriate for the kind of software they wanted Objective-C programmers to be making, and incorporated reference-counted memory management from their (NeXT’s, initially) object libraries into the language to enable Automatic Reference Counting.

And that’s where we are now! But what about Dr. Cox? Stepstone’s business was not the Objective-C language itself, but software components, including ICPak101, ICPak201 and the TaskMaster environment for building applications out of objects. It turned out that the way they wanted to sell object frameworks (viz. in a profitable way) was not the way people wanted to buy object frameworks (viz. not at all). Cox turned his attention to Digital Rights Management, and warming up the marketplace to accept pay-per-use licensing of digital artefacts. He’s since worked on teaching object-oriented programming, enterprise architecture and other things; his blog is still active.

So, Objective-C, I belatedly raise my glass to you. You’re nearly as old as I am, and that’s never likely to change. But we’ve both grown over that time, and it’s been fun growing up with you.

On free apps

This post is sort-of a follow-on to @daveaddey’s post on the average app; although in reality it’s a follow-on to the response that comes out every time a post on app store revenue is written.

Events go like this:

  1. Some statistic about app store revenue.
  2. “Your numbers include free apps. You shouldn’t include free apps.”

Yes, you should. The revenue that comes from the app store is indeed shared across all apps, free and paid. Free apps contribute significantly to the long-tail price distribution of apps on the store, and to the consumer perception that apps shouldn’t cost much. Some of them generate revenue through in-app purchase: revenue that probably is counted in Apple’s “we’ve given $5bn to developers” number. Some of them will generate revenue through iAd: it’s not clear whether that’s included in the $5bn, but it’s certainly money paid by Apple to developers. Similarly, it’s not clear whether money paid through Newsstand subscriptions (again, in free apps) counts: it probably does. Some of them have not always been and will not always be free; again, these apps make money directly from Apple.

A lot of free apps come from companies that do other things, but feel a marketing need to be on the app store: Amazon, O2, facebook and others go down this route. In these cases there is absolutely no money to be made from the app directly, though there are possibly many sorts of collateral benefits. It costs Amazon money to write the Windowshop app, but they bank on users buying more things from them than they would if the app didn’t exist.

On the other hand, some free apps are just written by developers who want to put a free app on the store so that it can act as their portfolio when they try to get work as iOS app developers.

So yes, other business models exist. But when talking about money made from the app store, you have to include all of those products that don’t make money on the app store. Ignoring the odd outlier is fine statistics. Ignoring large quantities of data that make your conclusions look bad is not science; it’s witchcraft.

On community

This is a post that had been boiling for a while; I talked a little about the topic when I was in Appsterdam earlier this year, and had a few more thoughts which were completely supplanted and rearranged by watching iOSDevUK. I threw away my earlier draft; you’re about to read something different. Where you see “we”, “us” or “our community” you should probably take it to mean Cocoa programmers, though read on to find out why “us” doesn’t always make sense.

Acknowledgements

So many people have contributed to this, by saying things that I agree with, by saying things that I disagree with, by organising conferences, or in other ways. I’ve tried to cite where appropriate but I’ve probably missed someone somewhere. Sorry :-(.

Introduction

This article is more the presentation of a problem and some thoughts about it than an attempt to argue in favour of a particular solution. I’ll investigate what it means to be in “the Cocoa programming community”, beginning with whether or not Apple is in a community of its own devising. I’ll ask whether there’s room for more collaboration in the community, and whether the community of Cocoa programmers encompasses all Cocoa programmers. Finally, I’ll notice that these are questions as yet unanswered, and explore what the solutions and non-solutions might be.

On Apple and the community

This is the bit that I’d done most work on already, as it was the topic of my Appsterdam talk. The summary of that talk is pretty much the same as Dave’s working-with-Apple pro tip in his iOSDevUK talk. As his was more succinct, I’ll use that version:

Apple is people too. Don’t be a dick.

(I’m a fan of people not being dicks.)

The thing is that, as Scotty said, the community wins when all of its members win. But he also said that Apple isn’t in the community, so doesn’t that exclude them from this relationship?

Well, no. If we look at the community that most of the people reading this post – and most of the people at iOSDevUK – consider themselves a member of, it’s the community of iOS app makers. It happens that all of these people depend on the same thing: on iOS. Being nice to Apple and helping them just makes good business sense. If you’re not helping Apple to win, they might decide to help you lose.

On a related subject: for Apple to win, it’s not necessary for anyone else to lose. In fact, I’m not the first person to say this. I’m stealing from a man who was, at the time this quote was coined, freshly CEO after having been a management consultant at Apple:

We have to let go of this notion that for Apple to win, Microsoft has to lose. We have to embrace a notion that for Apple to win, Apple has to do a really good job. And if others are going to help us that’s great, because we need all the help we can get, and if we screw up and we don’t do a good job, it’s not somebody else’s fault, it’s our fault.

So Microsoft, Windows 8 and Windows Phone 8 don’t have to lose. Google and Android don’t have to lose. Enterprise Java programmers don’t have to lose. Your competitors don’t have to lose. The team in Apple that make that thing that just crashed don’t have to lose.

On that last note, Apple is the biggest company in the world and you’re supplying one or a handful of the 600,000 or so different replaceable components that help them make a trivial fraction of their income. So if the choice you give them is “do what I need or I’ll stop working with you”, they’ll pick option 2. “Fix Radar or GTFO”? It’s cheaper and easier for Apple to GTFO.

That’s not to say the best strategy is always to do whatever Apple want. Well, actually it probably is in the short term, but Apple is real people and real people benefit from constructive feedback too.

Just who is “them”, anyway?

Around the time that I started to be a proper software writing person, there was a strong division in Mac development. The side I was in (and I was young, opinionated, easily led, and was definitely in this faction) was the Yellow Box. We knew that the correct way to write software for the Mac was to use the Foundation and AppKit APIs via the Objective-C or Java languages.

We also knew that the other people, the Blue Boxers who were using libraries compatible with Mac OS 8 and the C or C++ languages, were grey-bearded dinosaurs who didn’t get it.

This sounds crazy now, right? Should I also point out that I wrote a Carbon app, just to make it sound a little crazier?

That’s because it is crazy. Somehow those of us who had chosen a different programming language knew that we were better at writing software; much better than those clowns who just made the most successful office suite ever, the most successful picture editing app ever, or the most successful video player ever. Because we’d taken advice on how to write software from a company that was 90 days away from bankruptcy and had proven incapable of executing on software development, we were awesome and the people who were making the shittons of money on the most popular software of all time were clueless idiots.

But what about the people who were writing Mac software with wxWindows (which included myself), or REALbasic, or the PerlObjCBridge (which also included me)? Where did those fit in this dichotomy? Or the people over on Windows (me again) or Solaris (yup, me here)?

The definition of “us” and “them” is meaningless. It needs to be, in order to remain fluid enough that a new “them” can always be found. Looking through my little corner of history, I can see a few other distinctions that have come and gone over time. CodeWarrior vs Project Builder. Mach-O vs CFM. iPhone vs Android. Windows vs Mac. UNIX vs VMS. BSD vs System V. KDE vs GNOME. Java vs Objective-C. Browser vs native. BitKeeper vs Monotone. Dots vs brackets.

Let’s look in more detail at the Windows vs Mac distinction. If you cast your mind back, you’ll recall that around 2000 it was much easier to make money on Windows. People who were in the Mac camp made hand-waving references to technical superiority, or better user interfaces, or breaking the Microsoft hegemony, or not needing to be super-rich. Many of those Mac developers are now iPhone developers. In the iOS vs Android distinction, iOS developers readily point to the larger amount of money that’s available in making iOS apps…wait.

O(community)

The community contribution fraction

As Scotty said, an important role in a community is that of the reader/consumer/learner, the people who take and use the information that’s shared through the community. Indeed in any community this is likely to be the largest share of the community’s population; the people who produce and share the information are making use of it too.

The thing is, that means that there are many people who are making use of those great ideas, synthesising them, and making even new and better ideas. And we’re not finding out about them. Essentially there is more knowledge than there is opportunity to share knowledge.

It’d be great to have some way to make it super-easy for everyone who was involved in “the community” to contribute, even if it’s just to add a single thought or idea to the pool. As Scotty said, there’s no way you can force people to contribute, and that’s not even desirable as it’s a great way to put people off talking to you ever again.

So you can’t hold a gun up to people and force them to tell you a fact about Objective-C. You can ensure everyone knows what forms of contribution take place; perhaps they’ll find something that’s easier than they thought or something they’ll enjoy. Perhaps they’ll give it a go, and enjoy it.

Face to face

Conferences are definitely not that simple way for everybody to contribute. Conferences are great, though as I’ve said before there aren’t enough seats for them to have a wide direct impact on the community. Tech conferences will never be a base for broad participation, both due to finite size (even WWDC comprises less than one percent of registered developers on the platforms) and limited scope for contribution – particularly the bias toward contributors with “prior”.

One “fix” to scale up the conference is to run it all year long. This gives people who don’t like the idea of being trapped in a convention with the same 200 people for a week the option to dip in and out as they see fit. It gives far more opportunity for contribution – because there are many more occasions on which contribution is needed. On the other hand, part of the point of a conference is that the attendees are all at the same place at the same time, so there’s definitely some trade-off to be had.

Conferences and Appsterdams alike lead to face-to-face collaboration; the most awesomest flavour of collaboration there is. In return, they require (like Cocoaheads, NSCoder or whatever you call your pub/café meet) that you have the ability to get to the venue. This can call for anything from a walk down the street via a couple of ten-hour flights to relocating yourself and your family.

Smaller-scale chances for face to face interaction exist: one-on-few training courses and one-on-one mentoring and apprenticeships. These are nearly, but not quite, one-way flows of information and ideas from the trainer or sensei to the students or proteges. There are opportunities to make mentoring a small part of your professional life so it doesn’t seem to require a huge time investment.

Training courses, on the other hand, do. Investment by the trainer, who must develop a course, teach it, respond to feedback, react to technology changes and so on. Investment by the trainees, who must spend an amount of time and money attending the course, then doing any follow-up exercises or exams. They’re great ways to quickly get up to speed with a technology by immersing yourself in it, but no-one is ever going to answer the question “how can I easily contribute to my community?” with “run a training course”.

Teaching at a distance

A lower barrier to entry is found by decoupling the information from the person presenting the information. For as long as there has been tech there have been tech books; it’s easy (if you have $10-$50) to have a book automatically delivered to your house or reader and start absorbing its facts. For published books, there’s a high probability that the content has been proofread and technically reviewed and therefore says something a bit accurate in a recognisable language.

On the other hand, there are very few “timeless” books about technology. Publisher schedules introduce some delay between finishing a manuscript and having something to sell, further reducing any potential shelf life. If you’re in the world of Apple development and planning to say anything about, for example, Objective-C or Xcode, you’re looking at a book that will last a couple of months before being out of date.

Writing a book, then, takes a long time, which already might be a blocker to contribution for a lot of people. There’s also the limitation on who will even be invited to contribute: the finite number of publishers out there will preferentially select for established community members and people who have demonstrated an ability to write. It’s easier to market books that way.

The way to avoid all of that hassle is to write a blog (hello!). You get to write things without having to be selected by some commissioning editor. Conversely, you aren’t slowed down by the hassles of having people help you make the thing you write better, either—unless you choose to seek that help.

You then need to find somebody to read your blog. This is hard.

Stats for this blog: most pages have only ever been read a couple of hundred times.

If someone else already has an audience, you can take advantage of that. Jeff Atwood previously wrote about using stack overflow as a blog, where you’d get great reach because they bring their audience. Of course, another thing you can do on stack overflow is answer questions from other people: so that quick answer you contribute is actually solving someone’s problem.

This is, in my opinion, the hallowed middle ground between books (slow, static, hard to get into, with a wide reach) and blogs (fast, reactive, easy to pick up, hard to get discovered). Self-publishing a book is a lot like spending ages writing a long blog post. On the other hand, contributing to a community resource like a Q and A site or a wiki means only writing the bit of the book that you’re best placed to contribute. It also means sharing the work of ensuring correctness and value among the whole contributor base.

Our community / People with ideas ≪ 1

Whatever your definition of “the community” (the iOS developer community, the object-oriented programming community, the developer community), there are many more people who aren’t in that community. But they still have things to say that could be interesting and help us see what we do in different ways.

I’m not so sure that there are people out there doing what we do who don’t even passively engage with the rest of the community. Maybe there are, maybe there are lots. But I’m sure most people have at least read a book, or done a search that ended up at a mailing list post or blog entry. Very few people will never have used community-supplied resources; although it’s possible that there are programmers out there who’ve learned everything they know from first party documentation.

What I am sure of is that if you’re an Objective-C developer building mobile apps and you only listen to other Objective-C developers building mobile apps, you’re missing out on the information and ideas you could be taking from everyone else. Dave Addey told us to go and visit museums and art galleries to get inspiration, but that’s not all there is to it. Talk to someone doing Objective-C in a different context. Talk to someone doing Java, or Clojure. Talk to business people, or artists, or musicians. Break out of the echo chamber, and find out whether what other people are doing could be applied to what you’re doing.

Conclusions

As promised, there aren’t really any conclusions here. It’s more a collection of my own thoughts dumped out from brain to MarsEdit in order to let me make sense of them, and to stop me having to think about them at bedtime.

What’s clear is that there are a load of different ways for people to contribute to a community. Consumption of other people’s thoughts, advice and ideas is itself a very beneficial service as it’s how new ideas get synthesised, how new practices are formed and how the community collectively improves its output. It would be even better if what those people were doing were also made available and shared with the rest of us, to achieve an exponential growth in experience and advancement across the whole community.

But that’s not guaranteed to happen. The best thing to do is not to try driving people to contribute, but to give them so many opportunities to do so that, at some point, someone in the community will be in the position that sharing something is really easy and they choose to do so.

Other techniques to improve the number of ideas you get from the community are to be less adversarial in your definition of community, and more broad in your inclusion. The “community of people making iOS apps with Objective-C” is small; the “community of people making things” is universal.

On the Mac App Store

I’ve just come off iDeveloper.TV Live with Scotty and John, where we were talking about the Mac app store. I had some material prepared about the security side of the app store that we didn’t get on to – here’s a quick write up.

There’s a lot of discussion on twitter and the macsb mailing list, and doubtless elsewhere, about the encryption paperwork that Apple are making us fill in. It’s not Apple’s fault, it’s the U.S. Department of Commerce. You see, back in the cold war (and, frankly, ever since) the government have been of the opinion that encryption is a weapon (because it hides data from their agents) and so are powerful computers (because they can do encryption that’s expensive to crack). So the Bureau of Industry and Security developed the Export Administration Regulations to control the flow of such heinous weapons through the commercial sector.

Section 5, part 2 covers computer equipment and software. Specific provision is made for encryption; in the documentation we find that “Items may be controlled as encryption items even if the encryption is actually performed by the operating system, an external library, a third-party product or a cryptographic processor. If an item uses encryption functionality, whether or not the code that performs the encryption is included with the item, then BIS evaluates the item based on the encryption functionality it uses.”

So there you go. If you’re exporting software from the U.S. (and you are, if you’re selling via Apple’s app store) then you need to fill in the export notification.

Other Mac App Store security things, in “oh God is it that late already” format:

  • Receipt validation. No different really from existing licensing frameworks. All you can do is make the tests hard to find in the binary. I had an idea about a specific way to do that, but want to test it before I release it. As you’ve no doubt found, anti-cracking measures aren’t easy. (A minimal sketch of the basic check follows this list.)
  • Users. The user base for the MAS will be wider, and less tech-savvy, than the users existing micro-ISVs are selling to. Make sure your intent with regard to user data, particularly the consequences of your app’s workflow, is clear.
  • Similarly, be clear about the content of updates. Clearer than Apple are: “contains various fixes and improvements” WTF?
  • As we’ve found with the iOS store, it’s harder to push an update out than when you host the download yourself. Getting security right (or, pragmatically, not too wrong) the first time avoids emergency update submissions.
  • Permissions. Your app needs to run entirely as the current user, who may not be an admin. If you’re a developer, you’re probably running as an admin. Test with a non-admin account. Better, do all of your development in a non-admin account. Add yourself to the _developer group so you can still use gdb/Instruments properly.
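
As trailed in the first bullet, here is a minimal sketch of the basic shape of the receipt check (not a real implementation: genuine validation has to parse the PKCS#7 receipt, verify Apple’s certificate chain and compare the bundle identifier, version and machine GUID, and the tests should be far harder to find than this).

#import <Cocoa/Cocoa.h>
#include <stdlib.h>

static BOOL GLReceiptLooksValid(NSString *receiptPath)
{
    // placeholder: the cryptographic and payload checks belong here
    return [[NSFileManager defaultManager] fileExistsAtPath:receiptPath];
}

int main(int argc, const char *argv[])
{
    @autoreleasepool {
        NSString *receiptPath = [[[NSBundle mainBundle] bundlePath]
            stringByAppendingPathComponent:@"Contents/_MASReceipt/receipt"];
        if (!GLReceiptLooksValid(receiptPath)) {
            exit(173); // the exit status Apple documents for an invalid receipt; the store then re-delivers it
        }
    }
    return NSApplicationMain(argc, argv);
}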

Rumors of your runtime’s death are greatly exaggerated

This is supposed to be the week in which Apple killed Java and Flash on the Mac, but it isn’t. In fact, looking at recent history, Flash could be about to enter its healthiest period on the platform, but the story regarding Java is more complicated.

Since releasing Mac OS X back in 2001, Apple has maintained its own distributions of both the Flash and Java runtimes on the platform. This contrasts with the situation on Windows, where Adobe and Sun/Oracle respectively distribute and maintain the runtimes. (Those with long memories will remember the problems regarding Microsoft’s JRE, which was eventually killed by court injunction.) In both cases, Apple has received occasional chastisement when its supported version of the runtime lagged behind the upstream release.

In the case of Flash, this was always related to security issues. Apple’s version would lack a couple of patches that Adobe had made available. However, because Adobe maintains the official runtime for Mac OS X, it’s super-easy to grab the latest version and stay up to date. If Apple stops maintaining its sporadically-updated distribution of the Flash runtime, then everyone who needs it will be forced into grabbing a new version from Adobe (who are free to remind users to install updates as other third-party vendors do). Everyone who doesn’t need it doesn’t have the runtime installed, thus reducing the attack surface of their browsers. Win-win.

Java, as I mentioned, is a more complicated kettle of bananas. Apple isn’t redistributing the upstream JRE in this case; they’re building their own from upstream sources. While other runtimes exist, none integrates as well with the OS, due to the effort Apple put in early on to support Yellow Box for Java and Aqua Swing. This means that you can’t just go somewhere else to get your fix of JRE – it won’t work the same.

There isn’t a big market of Java apps on the Mac platform, so there isn’t a big vendor who might pick up the slack and provide an integrated JRE. Oracle support OpenOffice on the platform, but that isn’t a big earner. There’s IBM with their Lotus products – I’ll come onto that shortly. That just leaves the few Java client apps that do matter on the Mac – IDEs. As a number of Java developers have stated loudly this week, there are many Java developers currently using Macs. These people could either switch to a different OS, or look to maintain support for their IDEs on the Mac, which means supporting the JRE and the GUI libraries used by those IDEs.

By far the people in the best position to achieve that are those working on Eclipse. The Eclipse IDE (and the applications built on its Rich Client Platform, including IBM’s stuff mentioned earlier) uses a GUI library called SWT. This uses native code to draw its widgets using real Cocoa (in recent versions anyway – there’s a Carbon port too), so SWT already works with native drawing in whatever JRE people bring along. The SoyLatte port of OpenJDK can already run Eclipse Helios. Eclipse works with Apache Harmony too, though the releases lag behind quite a bit.

So the conclusion is that your runtime isn’t dead, in fact its support is equivalent to that found on the Windows platform. However, if you’re using Java you might experience a brief period in the wilderness unless/until the community support effort catches up with its own requirements – which aren’t the same as Apple’s.