On Mental Health

This post has been a while in the writing, I suppose waiting for the perfect time to publish it. The two things that happened today to make me finally commit it to electrons were the news about Robin Williams, and reading Robert Bloch’s That Hell-Bound Train. Explaining the story’s relevance would spoil it, but it’s relevant. And short.

I didn’t leave Big Nerd Ranch because I disliked the job. I loved it. I loved working with clever people, and teaching clever people, and building things with clever people, and dicking around on Campfire posting meme images with clever people, and alternately sweating and freezing in Atlanta with clever people, and speaking stilted Dutch with clever people.

I left because I spent whole days staring at Xcode without doing anything. Because I knew that if I wrote code and pushed it, it would be found lacking, and they’d realise that I only play a programmer on TV, even though I also knew that they were friendly, kind, supportive people who would never be judgemental.

When I left I was open and honest with my colleagues and my manager, and asked them to be open and honest with each other. I felt like I was letting them down, and wanted to do the least possible burning of bridges. I finished working on the same day that I voiced my problems, then I went to bed and had a big cry.

Which sounds bad, but was actually a release of sorts. I don’t remember the previous time I’d cried, or really done anything that expresses emotion. I think it may have been whenever British Mac podcast episode 35 was on, playing the final scene of Blackadder Goes Forth. Which apparently was back in 2006.

Anyway, I then took a couple of months away from any sort of work. On the first day post-Ranch I made an appointment to see a doctor, who happened to have available time that same day. He listened to a story much like the above, and to a description of myself much like the below, and diagnosed depression.

You may have seen that demo of Microsoft’s hyper-lapse videos where you know that there’s loads going on and tons of motion to react to, but the image is eerily calm and stable. Yup. There’s lots going on, but in here everything’s muffled and has no effect.

It’s not like coasting, though. It’s like revving the engine with the clutch disengaged. I never stop thinking. That can get in the way of thinking about things that I actually want to think about, because I’m already thinking about something else. It means getting distracted in conversations, because I’m already thinking about something else. It means not getting to sleep until I stop thinking.

It also means writing a lot. You may have got rid of a song stuck in your head (an earworm) by playing that song through, or by singing it to yourself. I get rid of brainworms by writing them down. I use anything: from a Moleskine notebook and fountain pen on an Edwardian writing slope to Evernote on a phone.

Now, you may be thinking—or you may not; it may just be that I think you’re thinking it, because while I’m still extraverted, I’m also keen to avoid situations where I think other people might be judging me, since I’ll do it on their behalf. So it’s quite possible I’m making this up—you may be thinking that it’s a bit weird that I keep telling jokes if I’m supposed to be emotionally disengaged. Jokes are easy: you just need to think of two things and invent a connection between them. Or tell the truth, but in a more obvious way than the truth usually lets on. You have to think about what somebody else is thinking, and make them think something else. Thinking about thinking has become a bit of a specialty.

Having diagnosed me, the doctor presented two choices: either antidepressant medication, or cognitive behavioural therapy. I chose the latter. It feels pretty weird, like you’re out to second-guess yourself. Every time you have a bad (they say “toxic”, which seems apt: I have a clear mental image of a sort of blue-black inky goop in the folds of my brain that stops it working) thought you’re supposed to write it down, write down the problems with the reasoning that led to it, and write down a better interpretation of the same events. It feels like what it is—to psychology what the census is to anthropology. Complex science distilled into a form anyone can fill in at home.

This post has been significantly more introspective than most of this blog, which is usually about us programmers collectively. Honestly I don’t know what the message to readers is, here. It’s not good as awareness; you probably all know that this problem exists. It’s not good as education; I’m hardly the expert here and don’t know what I’m talking about. I think I just wanted to talk about this so that we all know that we can talk about this. Or maybe to say that programmers should be careful about describing settings as crazy or text as insane because they don’t know who they’re talking to. Maybe it was just to stop thinking about it.

Depending on the self-interest of strangers

The title is borrowed from an economics article by Art Carden, which is of no further relevance to this post. Interesting read though, yes?

I’m enjoying the discussion in the iOS Developer Community™ about dependency of app makers on third-party libraries.

My main sources for what I will (glibly, and with a lot of simplification) summarise as the “anti-dependency” argument are a talk by Marcus Zarra (a later version of which he gave at NSConference than the one I saw), and a blog post by Justin Williams.

This is perhaps not an argument totally against the use of libraries, but one in favour of caution and conservatism. As I understand it, the position(s) on this side can be summarised as an attempt to mitigate the following risks:

  • if I incorporate someone else’s library, then I’m outsourcing my understanding of the problem to them. If they don’t understand it in the same way that I do, then I might not end up with a desired solution.
  • if I incorporate someone else’s library, then I’m outsourcing my understanding of the code to them. If it turns out I need something different, I may not be able to make that happen.
  • incorporating someone else’s library may mean bringing in a load of code that doesn’t actually solve my problem, but that increases the cognitive load of understanding my product.

I can certainly empathise with the idea that bringing in code to solve a problem can be a liability. A large app I was involved in writing a while back used a few open source libraries, and all but one of them needed patching either to fix problems or to extend their capabilities for our novel setting. The one (known) bug that came close to ending up in production was due to the interaction between one of these libraries and the operating system.

But then there’s all of my code in my software that’s also a liability. The difference between my code and someone else’s code, to a very crude level of approximation that won’t stand up to intellectual rigour but is good enough for a first pass, is that my code cost me time to write. Other than that, it’s all liability. And let’s not accidentally give a free pass to platform-vendor libraries, which are written by the same squishy, error-prone human-meat that produces both the first- and third-party code.

The reason I find this discussion of interest is that at the beginning of OOP’s incursion into commercial programming, the key benefit of the technique was supposedly that we could all stop solving the same problems and borrow or buy other people’s solutions. Here are Brad Cox and Bill Hunt, from the August 1986 issue of Byte:

Encapsulation means that code suppliers can build, test, and document solutions to difficult user interface problems and store them in libraries as reusable software components that depend only loosely on the applications that use them. Encapsulation lets consumers assemble generic components directly into their applications, and inheritance lets them define new application-specific components by inheriting most of the work from generic components in the library.

Building programs by reusing generic components will seem strange if you think of programming as the act of assembling the raw statements and expressions of a programming language. The integrated circuit seemed just as strange to designers who built circuits from discrete electronic components. What is truly revolutionary about object-oriented programming is that it helps programmers reuse existing code, just as the silicon chip helps circuit builders reuse the work of chip designers. To emphasise this parallel we call reusable classes Software-ICs.

In the “software-IC” formulation of OOP, CocoaPods, Ruby Gems etc. are the end-game of the technology. We take the work of the giants who came before us and use it to stand on their shoulders. Our output is not merely applications, which are fleeting in utility, but new objects from which still more applications can be built.

I look forward to seeing this discussion play out and finding out whether it moves us collectively in a new direction. I offer zero or more of the following potential conclusions to this post:

  • Cox and contemporaries were wrong, and overplayed the potential for re-use to help sell OOP and their companies’ products.
  • The anti-library sentiment is wrong, and downplays the potential for re-use to help sell billable hours.
  • Libraries just have an image problem, and we can define some trustworthiness metric (based, perhaps, on documentation or automated test coverage) that raises the bar and increases confidence.
  • Libraries inherently work to stop us understanding low-level stuff that actually, sooner or later, we’ll need to know about whether we like it or not.
  • Everyone’s free to do what they want, and while the dinosaurs are reinventing their wheels, the mammals can outcompete them by moving the craft forward.
  • Everyone’s free to do what they want, and while the library-importers’ castles are sinking into the swamps due to their architectural deficiencies, the self-inventors can outcompete them by building the most appropriate structures for the tasks at hand.

Agile application security

There’s a post by clever security guy Jim Bird on Appsec’s Agile Problem: how can security experts participate in fast-moving agile (or Agile™) projects without either falling behind or dragging the work to a halt?

I’ve been the Appsec person on such projects, so hopefully I’m in a position to provide at least a slight answer :-).

On the team where I did this work, projects began with the elevator pitch, application definition statement, whatever you want to call it. “We want to build X to let Ys do Z”. That, often with a straw man box-and-line system diagram, is enough to begin a conversation between the developers and other stakeholders (including deployment people, marketing people, legal people) about the security posture.

How will people interact with this system? How will it interact with our other systems? What data will be touched or created? How will that be used? What regulations, policies or other constraints are relevant? How will customers be made aware of relevant protections? How can they communicate with us to identify problems or to raise their concerns? How will we answer them?

Even this one conversation has achieved a lot: everybody is aware of the project and of its security risks. People who will make and support the system once it’s live know the concerns of all involved, and that’s enough to remove a lot of anxiety over the security of the system. It also makes us aware, while we’re working, of what we should be watching out for. A lot of the suggestions made at this point will, for better or worse, be boilerplate: the system must make no more use of personal information than existing systems. There must be an issue tracker that customers can confidentially participate in.

But all talk and no trouser will not make a secure system. As we go through the iterations, acceptance tests (whether manually run, automated, or results of analysis tools) determine whether the agreed risk profile is being satisfied.
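
As a minimal sketch of the sort of automated acceptance test I mean, here is one written with XCTest; the APIClient class and its baseURL property are hypothetical stand-ins for whichever networking component a team has agreed must only ever speak TLS.

    // A sketch of an automated security acceptance test.
    // APIClient and its baseURL property are hypothetical; substitute
    // whatever component your agreed risk profile actually constrains.
    #import <XCTest/XCTest.h>
    #import "APIClient.h"

    @interface TransportSecurityAcceptanceTests : XCTestCase
    @end

    @implementation TransportSecurityAcceptanceTests

    - (void)testAPITrafficUsesTLS
    {
        // Agreed risk: customer data must never travel in cleartext.
        APIClient *client = [[APIClient alloc] init];
        XCTAssertEqualObjects(client.baseURL.scheme, @"https",
                              @"All API traffic must use TLS");
    }

    @end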

Should there be any large deviations from the straw man design, the external stakeholders are notified and we track any changes to the risk/threat model arising from the new information. Regular informal lunch sessions give them the opportunity to tell our team about changes in the rest of the company, the legal landscape, the risk appetite and so on.

Ultimately this is all about culture. The developers need to trust the security experts to make their expertise available and help out with making it relevant to their problems. The security people need to trust the developers to be trying to do the right thing, and to be humble enough to seek help where needed.

This cultural outlook enables quick reaction to security problems detected in the field. Where the implementors are trusted, the team can operate a “break glass in emergency” mode where solving problems and escalation can occur simultaneously. Yes, it’s appropriate to do some root cause analysis and design issues out of the system so they won’t recur. But it’s also appropriate to address problems in the field quickly and professionally. There’s a time to write a memo to the shipyard suggesting they use thicker steel next time, and there’s a time to put your finger over the hole.

If there’s a problem with agile application security, then, it’s a problem of trust: security professionals, testers, developers and other interested parties[*] must be able to work together on a self-organising team, and that means they must exercise knowledge where they have it and humility where they need it.

[*] This usually includes lawyers. You may scoff at the idea of agile lawyers, but I have worked with some very pragmatic, responsive, kind and trustworthy legal experts.

Happy 19th birthday, Cocoa!

On October 19th, 1994 NeXT Computer, Inc. (later NeXT Software, Inc.) published a specification for OpenStep, a cross-platform interface for application programming, based on their existing Objective-C frameworks and the Display PostScript graphics system.

A little bit of history

First there came message-passing object oriented programming, in the form of Smalltalk. Well, not first, I mean first there was Simula 67, and there were even things before that but every story has to start somewhere. In 1983 Brad Cox added Smalltalk messaging to the C language to create the Object-Oriented pre-compiler. In his work with Tom Love at Productivity Products International, this eventually became Objective-C.

Object-Oriented Programming: an Evolutionary Approach

If PPI (later Stepstone) had any larger customers than NeXT, they had none that would have a bigger impact on the software industry. In 1988 NeXT released the first version of their UNIX platform, NEXTSTEP. Its application programming interface combined the “application kit” of Objective-C objects representing windows, menus, and views with Adobe’s Display PostScript to provide a high-fidelity (I mean, if you like grey, I suppose) WYSIWYG app environment.

NeXTSTEP Programming Step One: Object-Oriented Applications

N.B.: my reason for picking the Garfinkel and Mahoney book will become clear later. It happens to be the book I learned to make apps from, too.

Certain limitations in the NEXTSTEP APIs became clear. I will not exhaustively list them nor attempt to put them into any sort of priority order; suffice it to say that significant changes became necessary. When the Enterprise Objects Framework came along, NeXT also introduced the Foundation Kit, a “small set of base utility classes” designed to promote common conventions, portability and enhanced localisation through Unicode support. Hitherto, applications had used C strings and arrays.

It was time to let app developers make use of the Foundation Kit. For this (and undoubtedly other reasons), the application kit was rereleased as the App Kit, documented in the specification we see above.

The release of OpenStep

OpenStep was not merely an excuse to do application kit properly, it was also part of NeXT’s new strategy to license its software and tools to other platform vendors rather than limiting it to the few tens of thousands of its own customers. Based on the portable Foundation Kit, NeXT made OpenStep for its own platform (now called OPENSTEP) and for Windows NT, under the name OpenStep Enterprise. Sun Microsystems licensed it for SPARC Solaris, too.

What happened, um, NeXT

The first thing to notice about the next release of OpenStep is that book cover designers seem to have discovered acid circa 1997.

Rhapsody Developer's Guide

Everyone’s probably aware of NeXT’s inverse takeover of Apple at the end of 1996. The first version of OpenStep to be released by Apple was Rhapsody, a developer preview of their next-generation operating system. This was eventually turned into a product: Mac OS X Server 1.0. Apple actually also released another OpenStep product: a Y2K-compliant patch to NeXT’s platform in late 1999.

It’s kind of tempting to tell the rest of the story as if the end was clear, but at the time it really wasn’t. With Rhapsody itself it wasn’t clear whether Apple would promote Objective-C for OpenStep (now called “Yellow Box”) applications, or whether they would favour Yellow Box for Java. The “Blue Box” environment for running Mac apps was just a virtual machine with an older version of the Macintosh system installed; there wasn’t a way to port Mac software natively to Rhapsody. It wasn’t even clear whether (and if so, when) the OpenStep software would become a consumer platform, or whether it was destined to be a server for traditional Mac workgroups.

That would come later, with Mac OS X, when the Carbon API was introduced. Between Rhapsody and Mac OS X, Apple introduced this transition framework so that “Classic” software could be ported to the new platform. They also dropped one third of the OpenStep-specified libraries from the system, as Adobe’s Display PostScript was replaced with Quartz and Core Graphics. Again, the reasons are many and complicated, though I’m sure someone noticed that if they released Mac OS X with the DPS software then their bill for Adobe licences would increase by a factor of about 1,000. The coloured-box naming scheme was dropped as Apple re-used the name of their Stagecast Creator software: Cocoa.

So it pretty much seemed at the time like Apple were happy to release everything they had: UNIX, Classic Mac, Carbon, Cocoa-ObjC and Cocoa-Java. Throw all of that at the wall and some of it was bound to stick.

Building Cocoa Applications

Over time, some parts indeed stuck while others slid off to make some sort of yucky mess at the bottom of the wall (you know, it is possible to take an analogy too far). Casualties included Cocoa-Java, the Classic runtime and the Carbon APIs. We end in a situation where the current Mac platform (and, by slight extension, iOS) is a direct, and very close, descendent of the OpenStep platform specified on this day in 1994.

Happy birthday, Cocoa!

Reading List

I was asked “what books do you consider essential for app making”? Here’s the list. Most of these are not about specific technologies, which are fleeting and teach low-level detail. Those that are tech-specific also contain a good deal of what and why, in addition to the coverage of how.

This post is far from exhaustive.

I would recommend that any engineer who has not yet read it should read Code Complete 2. Then I would ask them the top three things they agreed with and top three they disagreed with, as critical thinking is the hallmark of a good engineer :-).

Other books I have enjoyed and learned from and I believe others would too:

  • Steve Krug, “Don’t make me think!”
  • Martin Fowler, “Refactoring” and Michael Feathers, “Working Effectively with Legacy Code”
  • Bruce Tate, “Seven languages in seven weeks”
  • Jez Humble and David Farley, “Continuous Delivery”
  • Hunt and Thomas, “The Pragmatic Programmer”
  • Gerald Weinberg, “The psychology of computer programming”
  • David Rice, “Geekonomics”
  • Robert M. Pirsig, “Zen and the art of motorcycle maintenance”
  • Alan Cooper, “About Face 3”
  • Jeff Johnson, “Designing with the mind in mind”
  • Fred Brooks, “The design of design”
  • Kent Beck, “Test-Driven Development”
  • Mike Cohn, “User stories applied”
  • Jef Raskin, “The humane interface”

Most app makers are probably doing object-oriented programming. The books that explain the philosophy of this and why it’s important are Meyer’s “Object-oriented software construction” and Cox’s “Object-oriented programming: an evolutionary approach”.

NIMBY Objects

Members of comfortable societies such as English towns have expectations of the services they will receive. They want their rubbish disposed of before it builds up too much, for example. They don’t so much care how it’s dealt with, they just want to put the rubbish out there and have it taken away. They want electricity supplied to their houses, it doesn’t so much matter how as long as the electrons flow out of the sockets and into their devices.

Some people do care about the implementation, in that they want it to be far enough away that they never have to pay it any mind. These people are known as NIMBYs, after the phrase Not In My Back Yard. Think what it will do to traffic/children walking to school/the skyline/property prices etc. to have this thing I intend to use near my house!

A NIMBY wants to have their rubbish taken away, but does not want to be able to see the landfill or recycling centre during their daily business. A NIMBY wants to use electricity, but does not want to see a power station or wind turbine on their landscape.

What does this have to do with software? Modules in applications (which could be—and often are—objects) should be NIMBYs. They should want to make use of other services, but not care where the work is done except that it’s nowhere near them. The specific where I’m talking about is the execution context. The user interface needs information from the data model but doesn’t want the information to be retrieved in its context, by which I mean the UI thread. The UI doesn’t want to wait while the information is fetched from the model: that’s the equivalent of residential traffic being slowed down by the passage of the rubbish truck. Drive the trucks somewhere else, but Not In My Back Yard.

There are two ramifications to this principle of software NIMBYism. Firstly, different work should be done in different places. It doesn’t matter whether that’s on other threads in the same process, scheduled on work queues, done in different processes or even on different machines, just don’t do it anywhere near me. This is for all the usual good reasons we’ve been breaking work into multiple processes for forty years, but a particularly relevant one right now is that it’s easier to make fast-ish processors more numerous than it is to make one processor faster. If you have two unrelated pieces of work to do, you can put them on different cores. Or on different computers on the same network. Or on different computers on different networks. Or maybe on the same core.

The second is that this execution context should never appear in API. Module one doesn’t care where module two’s code is executed, and vice versa. That means you should never have to pass a thread, an operation queue, process ID or any other identifier of a work context between modules. If an object needs its code to run in a particular context, that object should arrange it.
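
To make that concrete, here is a minimal sketch of a NIMBY object in Objective-C. The FavouritesStore class is hypothetical: it owns a private dispatch queue for its slow work and hands results back on the main queue, so no thread or queue ever appears in its API.

    // A hypothetical data-model class that owns its own execution context.
    // Callers never see the queue; they just get results on the main queue.
    #import <Foundation/Foundation.h>

    @interface FavouritesStore : NSObject
    - (void)fetchFavouritesWithCompletion:(void (^)(NSArray *favourites))completion;
    @end

    @implementation FavouritesStore
    {
        dispatch_queue_t _workQueue; // private: never exposed in the API
    }

    - (instancetype)init
    {
        if ((self = [super init])) {
            _workQueue = dispatch_queue_create("FavouritesStore.work",
                                               DISPATCH_QUEUE_SERIAL);
        }
        return self;
    }

    - (void)fetchFavouritesWithCompletion:(void (^)(NSArray *))completion
    {
        dispatch_async(_workQueue, ^{
            // The slow work happens here, Not In the caller's Back Yard.
            NSArray *favourites = [self loadFavouritesFromDisk];
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(favourites);
            });
        });
    }

    - (NSArray *)loadFavouritesFromDisk
    {
        // Stand-in for the real (slow) work.
        return @[];
    }

    @end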

Why do this? Objects are supposed to be a technique for encapsulation, and we can use that technique to encapsulate execution context in addition to code and data. This has benefits because Threading Is Hard. If a particular task in an application is buggy, and that task is the sole responsibility of a single class, then we know where to look to understand the buggy behaviour. On the other hand, if the task is spread across multiple classes, discovering the problem becomes much more difficult.

NIMBY Objects apply the Single Responsibility Principle to concurrent programming. If you want to understand surprising behaviour in some work, you don’t have to ask “where are all the places that schedule work in this context?”, or “what other places in this code have been given a reference to this context?” You look at the one class that puts work on that context.

The encapsulation offered by OOP also makes for simple substitution of a class’s innards, if nothing outside the class cares about how it works. This has benefits because Threading Is Hard. There have been numerous different approaches to multiprocessing over the years, and different libraries to support the existing ones: whatever you’re doing now will be replaced by something else soon.

NIMBY Objects apply the Open-Closed Principle to concurrent programming. You can easily replace your thread with a queue, your IPC with RPC, or your queue with a Disruptor if only one thing is scheduling the work. Replace that one thing. If you pass your multiprocessing innards around your application, then you have multiple things to fix or replace.
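
Continuing the hypothetical FavouritesStore sketch above: because the queue never leaked into its API, its internals could swap GCD for an NSOperationQueue (here _workOperationQueue, assumed to be an NSOperationQueue ivar created in init) and no caller would notice.

    // Same public API as before; only the private scheduling mechanism
    // changes, so only this one class needs editing.
    - (void)fetchFavouritesWithCompletion:(void (^)(NSArray *))completion
    {
        [_workOperationQueue addOperationWithBlock:^{
            NSArray *favourites = [self loadFavouritesFromDisk];
            [[NSOperationQueue mainQueue] addOperationWithBlock:^{
                completion(favourites);
            }];
        }];
    }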

There are existing examples of patterns that fit the NIMBY Object description. The Actor model as implemented in Erlang’s processes and many other libraries (and for which a crude approximation was described in this very blog) is perhaps the canonical example.

Android’s AsyncTask lets you describe the work that needs doing while it worries about where it needs to be done. So does IKBCommandBus, which has been described in this very blog. Android also supports a kind of “get off my lawn” cry to enforce NIMBYism: exceptions are raised for doing (slow) network operations in the UI context.

There are plenty of non-NIMBY APIs out there too, which paint you into particular concurrency corners. Consider -[NSNotificationCenter addObserverForName:object:queue:usingBlock:] and ignore any “write ALL THE BLOCKS” euphoria going through your mind (though this is far from the worst offence in block-based API design). Notification Centers are for decoupling the source and sink of work, so you don’t readily know where the notification is coming from. So there’s now some unknown number of external actors defecating all over your back yard by adding operations to your queue. Good news: they’re all being funnelled through one object. Bad news: it’s a global singleton. And you can’t reorganise the lawn because the kids are on it: any attempt to use a different parallelism model is going to have to be adapted to accept work from the operation queue.
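
Here is a sketch of how that coupling arises; the block-based notification API is real, while the observer, queue and notification name are hypothetical.

    // The caller must hand its own NSOperationQueue to the notification
    // centre, so the execution context leaks into the API. Whoever posts
    // the notification is now scheduling work onto this queue.
    NSOperationQueue *backYard = [[NSOperationQueue alloc] init];

    id token = [[NSNotificationCenter defaultCenter]
        addObserverForName:@"SomethingInterestingHappened"
                    object:nil
                     queue:backYard
                usingBlock:^(NSNotification *note) {
        // Work arrives here from who-knows-where; swapping backYard for a
        // different parallelism model means reworking this observation.
        NSLog(@"handled %@", note.name);
    }];
    // Later: [[NSNotificationCenter defaultCenter] removeObserver:token];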

By combining a couple of time-honoured principles from OOP and applying them to execution contexts we come up with NIMBY Objects, objects that don’t care where you do your work as long as you don’t bother them with it. In return, they won’t bother you with details of where they do their work.

Dogma-driven development

You can find plenty of dogmatic positions in software development, in blogs, in podcasts, in books, and even in academic articles. “You should (always/never) write tests before writing code.” “Pair programming is a (good/bad) use of time.” “(X/not X) considered harmful.” “The opening brace should go on the (same/next) line.”

Let us ignore, for the moment, that only a maximum of 50% of these commandments can actually be beneficial. Let us skip past the fact that demonstrating which is the correct position to take is fraught with problems. Instead we shall consider this question: dogmatic rules in software engineering are useful to whom?

The Dreyfus model of skill acquisition tells us that novices at any skill, not just programming, understand the skill in only a superficial way. Their recollection of rules is non-situational; in other words they will try to apply any rule they know at any time. Their ability to recognise previously-encountered scenarios is small-scale, not holistic. They make decisions by analysis.

The Dreyfus brothers proposed that the first level up from novice was competence. Competents have moved to a situational recollection of the rules. They know which do or do not apply in their circumstances. Those who are competent can become proficient, when their recognition becomes holistic. In other words, the proficient see the whole problem, rather than a few small disjointed parts of it.

After proficiency comes expertise. The expert is no longer analytical but intuitive, they have internalised their craft and can feel the correct way to approach a problem.

“Rules” of software development mean nothing to the experts or the proficient, who are able to see their problem for what it is and come up with a context-appropriate solution. They can be confusing to novices, who may be willing to accept the rule as a truth of our work but unable to see in which contexts it applies. Only the competent programmers have a need to work according to rules, and the situational awareness to understand when the rules apply.

But competent programmers are also proficient programmers waiting to happen. Rather than being given more arbitrary rules to follow, they can benefit from being shown the big picture, from being led to understand their work more holistically than as a set of distinct parts to which rules can be mechanistically – or dogmatically – applied.

Pronouncements like coding standards and methodological commandments can be useful, but not on their own. By themselves they help competent programmers to remain competent. They can be situated and evaluated, to help novices progress to competence. They can be presented as a small part of a bigger picture, to help competents progress to proficiency. As isolated documents they are significantly less valuable.

Dogmatism considered harmful.

Compatibility

Solaris 10, scheduled to be supported until January 2021, can still run BSD binaries built for Solaris 1 (a retroactive name for SunOS 4.1), released in 1991. I wonder for how long the apps we wrote for our iPhones back in 2008 – the ones we had to pay $99 even to run on our own devices – will last.

On the design of iOS 7 and iconographoclasm

As I write this, the WWDC keynote presentation has been over for a little more than half a day. That, apparently, is plenty of time in which to evaluate a new version of an operating system based on a few slides, a short demonstration, and maybe a little bit of playing with an early developer preview.

What I see is a lot less texture than an iOS 6 screen, with flatter, simpler icons and a very thin Sans Serif typeface. That’s because I’m looking at a Windows RT screen, though. What’s that? iOS 7 looks the same? They’ve gone Metro-style? Surely this heralds the end of days. Dogs and cats lying with each other, fire and brimstone raining from the sky. It would be time to give all the money back to the shareholders, except that it costs too much to bring all that cash back into the US.

Oh no, wait. It’s just a phone, not the apocalypse. Also it’s only been about half a day with a preview of a phone. If first impressions of a graphics set were enough to form a decision on what an artifact will be like to carry around with me all day, every day for the next couple of years, the iPhone would have been dead a long time ago. Remember the HTC Desire, with its bright, saturated background and parallax scrolling? Or the Samsung Galaxy series, with their bigger screens that are easier to see across a brightly-lit and packed showroom? How about the Nokia Lumia, with its block colours and large animations? Those are the kings of the short-term attention grab. Coincidentally they’re the things that many journalists, trying to find an angle on day one of a device’s release, latched on to as obvious iPhone killers. Well, those and anything else released in the last few years.

What no-one has investigated is how the new iOS interface is to use all the time, because no-one has had all the time yet. I don’t know whether the changes Apple have made to the design are better, but I do know (largely because they mentioned it at the beginning of the talk) that they thought about them. The icons on the home screen, for example, are a lot more simplistic than they previously were. Is that because they were designed by some cretin wielding a box of Crayola, or did they find some benefit to simpler icons? Perhaps given a large number of apps, many users are hunting and pecking for app icons rather than relying on muscle memory to locate the apps they need. If this is true, a simpler shape could perhaps be recognised more quickly as the pages of apps scroll under the thumb.

As I say, I don’t know if the changes are for the better or not, but I do know they weren’t the result of whimsy. Though if Apple are chagrined at the noise of a million designers angrily dribbbling over their keyboards, they only have themselves to blame. It’s my belief that this evaluation of computer products based on what they look like rather than on their long-term use has its origin with Apple, specifically with the couple of years of Apple Design Awards that preceded the iPhone app store’s launch. It’s these awards that heralded the “Delicious generation” of app design that has informed Mac and iOS ISVs (and Apple) to date. It’s these awards that valued showy, graphically-rich apps without knowing whether they were useful: giving design awards to software that people could not, at time of award, yet use at all.

Now it turns out that many of these winners did indeed represent useful apps that you could go back to. I was a long-term user of Delicious Library 2 right up until the day Delicious Library 3 was launched. That benefit was not directly evident on the granting of the ADA though: what Apple were saying was “we would like to see you develop apps that look like this” rather than “we recognise this as the type of app our customers have derived lots of enjoyment from”. It’s my belief that this informed the aesthetics and values of the ISV “community”, including the values with which they appraised new software. If Apple are suffering a backlash now from people who don’t immediately love the new design of iOS 7, it is their own petard by which they have been hoisted.

An entirely unwarranted comparison between software engineering and astronomy

Back in the early days of astronomy, the problem of the stars that wander from fixed positions in the sky needed solving. Many astronomers, not the first of which was Ptolemy, proposed that these “planetai” could be modeled as following little curves—epicycles—through their larger motions. As it was found that these epicycles continued to fail, smaller and smaller epicycles were added. It was not until astronomers realised that they were not at the centre of the universe that they realised this was an over-complicated and unnecessary model.

Here, in the early days of making software, the problem of the software that wanders from budget, quality and time expectations needed solving. Many programmers, not the first of which was Boehm, proposed that these “projects” could be modeled as following little curves—spirals—through their larger motions. As it was found that these spirals continued to fail, smaller and smaller iterations were added.