Structure and Interpretation of Computer Programmers

I make it easier and faster for you to write high-quality software.

Tuesday, March 30, 2010

Rehearsals in beta!

I have a new application, Rehearsals, an online practice diary for musicians. If that sounds like the kind of thing you’re interested in, and you have Mac OS X 10.6 or newer, then please download the beta release and test it out. There’s absolutely no charge, and if you submit feedback to support <at> rehearsalsapp <dot> com you’ll be eligible for a free licence for version 1.0 once that’s released. There are no limitations on the beta version, so please do download and start using it!

You can follow @rehearsals_app for updates to the beta programme (new releases are automatically downloaded using Sparkle, if you enable it in the app).

posted by Graham Lee at 12:22  

Tuesday, December 15, 2009

Consulting versus micro-ISV development

Reflexions on the software business really is an interesting read. Let me borrow Adrian’s summary of his own post:

Now, here’s an insider tip: if your objective is living a nightmare, tearing yourself apart and swear never touching a keyboard again, choose [consulting]. If your objective is enjoying a healthy life, making money and living long and prosper, choose [your own products].

As the author himself allows, the arguments presented either way are grossly oversimplified. In fact I think there is a very simple axiom underlying what he says, which if untrue moves the balance away from writing your own products and into consulting, contracting or even salaried work. Let me start by introducing some features missed out of the original article. They may, depending on your point of view, be pros or cons. They may also apply to more than one of the roles.

A consultant:

  • builds up relationships with many people and organisations
  • is constantly learning
  • works on numerous different products
  • is often the saviour of projects and businesses
  • gets to choose what the next project is
  • has had the risks identified and managed by his client
  • can focus on two things: writing software, and convincing people to pay him to write software
  • renegotiates when the client’s requirements change

A μISV developer:

  • is in sales, marketing, support, product management, engineering, testing, graphics, legal, finance, IT and HR until she can afford to outsource or employ
  • has no income until version 1.0 is out
  • cannot choose when to put down work on the next version in order to start on the next product
  • can work on nothing else
  • works largely alone
  • must constantly find new ways to sell the same few products
  • must pay for her own training and development

A salaried developer:

  • may only work on what the managers want
  • has a legal minimum level of job security
  • can rely on a number of other people to help out
  • can look to other staff to do tasks unrelated to his mission
  • gets paid holiday, sick and parental leave
  • can agree a personal development plan with the higher-ups
  • owns none of the work he creates

I think the axiom underpinning Adrian Kosmaczewski’s article is: happiness ∝ creative freedom. Does that apply to you? Take the list of things I’ve defined above, and the list of things in the original article, and put them not into “μISV vs. consultant” but “excited vs. anxious vs. apathetic”. Now, this is more likely to say something about your personality than about whether one job is better than another. Do you enjoy risks? Would you accept a bigger risk in order to get more freedom? More money? Would you trade the other way? Do you see each non-software-developing activity as necessary, fun, an imposition, or something else?

So thank you, Adrian, for making me think, and for setting out some of the stalls of two potential careers in software. Unfortunately I don’t think your conclusion is as true as you believe it to be.

posted by Graham Lee at 18:42  

Thursday, August 27, 2009

Indie app milestones part one

In the precious and scarce spare time I have around my regular contracting endeavours, I’ve been working on my first indie app. It reached an important stage in development today: the first time somebody who doesn’t know what I’m up to could look at the UI and instinctively know what the app is for. That’s not to say that the app is all shiny and delicious; it’s entirely fabricated from standard controls. Standard controls I (personally) don’t mind so much. However, the GUI will need quite a bit more work before the app is at its most intuitive and before I post any teaser screenshots. Still, let’s see how I got here.

The app is very much a “scratching my own itch” endeavour. I tooled around with a few ideas for apps while sat in a coffee shop, but one of them jumped out as something I’d use frequently. If I’ll use it, then hopefully somebody else will!

So I know what this app is, but what does it do? Something I’d bumped into before in software engineering was the concept of a User Story: a testable, brief description of something which will add value to the app. I broke out the index cards and wrote a single sentence on each, describing something the user will be able to do once the user story is added to the app. I’ve got no idea whether I have been complete, exhaustive or accurate in defining these user stories. If I need to change, add or remove any user stories, I can easily do that when I decide it’s necessary; I don’t need to know a complete five-year roadmap for the application right now.

As an aside, people working on larger teams than my one-man affair may need to estimate how much effort will be needed on their projects and track progress against their estimates. User stories are great for this, because each is small enough to make real progress on in a short time, each represents a discrete and (preferably) independent useful addition to the app, and so the app is ready to ship any time an integer number of these user stories is complete on a branch. All of this means that it shouldn’t be too hard to get the estimate for a user story roughly correct (unlike big up-front planning, which I don’t think I’ve ever seen succeed), that previously completed user stories can help improve estimates on future stories, and that even an error of +/- a few stories means you’ve got something of value to give to the customer.

So, back with me, and I’ve written down an important number of user stories; the number I thought of before I gave up :-). If there are any more they obviously don’t jump out at me as a potential user, so I should find them when other people start looking at the app or as I continue using/testing the thing. I eventually came up with 17 user stories, of which 3 are not directly related to the goal of the app (“the user can purchase the app” being one of them). That’s a lot of user stories!

If anything it’s too many stories. If I developed all of those before I shipped, then I’d spend lots of time on niche features before even finding out how useful the real world finds the basic things. I split the stories into two piles: the ones which are absolutely necessary for a preview release, and the ones which can come later. I don’t yet care how late “later” is; they could be in 1.0, a point release or a paid upgrade. As I haven’t even got to the first beta yet, that’s immaterial; I just know that they don’t need to be done now. There are four stories that do need to be done now.

So, I’ve started implementing these stories. For the first one I went to a small whiteboard and sketched UI mock-ups. In fact, I came up with four. I then set about finding out whether other apps have similar UI and how they’ve presented it, to choose one of these mock-ups. Following advice from the world according to Gemmell I took photos of the whiteboard at each important stage to act as a design log – I’m also keeping screenshots of the app as I go. Then it’s over to Xcode!

So a few iterations of whiteboard/Interface Builder/Xcode later and I have two of my four “must-have” stories completed, and already somebody who has seen the app knows what it’s about. With any luck (and the next time I snatch any spare time) it won’t take long to have the four stories complete, at which point I can start the private beta to find out where to go next. Oh, and what is the app? I’ll tell you soon…

posted by Graham Lee at 00:44  

Friday, April 17, 2009

NSConference: the aftermath

So, that’s that then, the first ever NSConference is over. But what a conference! Every session was informative, edumacational and above all enjoyable, including the final session where (and I hate to crow about this) the “American” team, who had a working and well-constructed Core Data based app, were soundly thrashed by the “European” team who had a nob joke and a flashlight app. Seriously, we finally found a reason for doing an iPhone flashlight! Top banana. I met loads of cool people, got to present with some top Cocoa developers (why Scotty got me in from the second division I’ll never know, but I’m very grateful) and really did have a good time talking with everyone and learning new Cocoa skills.

It seems that my presentation and my Xcode top tip[*] went down really well, so thanks to all the attendees for being a great audience, asking thoughtful and challenging questions and being really supportive. It’s been a couple of years since I’ve spoken to a sizable conference crowd, and I felt like everyone was on my side and wanted the talk – and indeed the whole conference – to be a success.

So yes, thanks to Scotty and Tim, Dave and Ben, and to all the speakers and attendees for such a fantastic conference. I’m already looking forward to next year’s conference, and slightly saddened by having to come back to the real world over the weekend. I’ll annotate my Keynote presentation and upload it when I can.

[*] Xcode “Run Shell Script” build phases get stored on one line in the project.pbxproj file, with all the line breaks replaced by \n. That sucks for version control because any changes by two devs result in a conflict over the whole script. So, have your build phase call an external .sh file where you really keep the shell script. Environment variables will still be available, and now you can work with SCM too :-).
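As a sketch of what that looks like (the script path and its contents here are made up for illustration, not taken from any real project), the build phase body shrinks to a single line and the real work lives in a file that diffs and merges cleanly:

    #!/bin/sh
    # Scripts/build-phase.sh - kept in version control next to the source.
    # The "Run Shell Script" build phase body is reduced to the single line:
    #     "${SRCROOT}/Scripts/build-phase.sh"
    # Xcode still exports its build settings as environment variables, so
    # they are available here exactly as they were in the inline script.
    echo "Building ${PRODUCT_NAME} (${CONFIGURATION}) into ${BUILT_PRODUCTS_DIR}"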

posted by Graham Lee at 18:16  

Friday, April 3, 2009

Controlling opportunity

In Code Complete, McConnell outlines the idea of having a change control procedure, to stop the customers from changing the requirements whenever they see fit. In fact, one feature of the process is that it should be heavy enough to dissuade customers from registering changes.

The Rational Unified Process goes for the slightly more neutral term Change Request Management, but the documentation seems to imply the same opinion, that it is the ability to make change requests which must be limited. The issue is that many requests for change in software projects are beneficial, and accepting the change request is not symptomatic of project failure. The most straightforward example is a bug report – this is a change request (please fix this defect) which converts broken software into working software. Similarly, larger changes such as new requirements could convert a broken business case into a working business case; ultimately turning a failed project into a revenue-generator.

In my opinion the various agile methodologies don’t address this issue, either assuming that with the customer involved throughout, no large change would ever be necessary, or that the iterations are short enough for changes to be automatically catered for. I’m not convinced; perhaps after the sixth sprint of your content publishing app the customer decides to open a pet store instead.

I humbly suggest that project managers replace the word “control” in their change documentation with “opportunity” – let’s accept that we’re looking for ways to make better products, not that we need excuses never to edit existing Word files. OMG baseline be damned!

posted by Graham Lee at 18:12  

Monday, February 23, 2009

Cocoa: Model, View, Chuvmey

Chuvmey is a Klingon word meaning “leftovers” – it was the only way I could think of to keep the MVC abbreviation while impressing upon you, my gentle reader, the idea that what is often considered the Controller layer actually becomes a “Stuff” layer. Before explaining this idea, I’ll point out that my thought processes were set in motion by listening to the latest Mac Developer Roundtable (iTunes link) podcast on code re-use.

My thesis is that the View layer contains Controller-ey stuff, and so does the Model layer, so the bit in between becomes full of multiple things: the traditional OpenStep-style “glue” or “shuttle” code, which is what the NeXT documentation meant by Controller; dynamic aspects of the model, which could be part of the Model layer; view customisation, which could really be part of the View layer; and anything which either doesn’t fit anywhere else or which we don’t notice could fit elsewhere. Let me explain.

The traditional source for the MVC paradigm is Smalltalk, and indeed How to use Model-View-Controller is a somewhat legendary paper on the use of MVC in the Smalltalk environment. What we notice here is that the Controller is defined as:

The controller interprets the mouse and keyboard inputs from the user, commanding the model and/or the view to change as appropriate.

We can throw this view out straight away when talking about Cocoa, as keyboard and mouse events are handled by NSResponder, which is the superclass of NSView. That’s right: the Smalltalk Controller and View are really wrapped together in the AppKit, both being part of the View. Many NSView subclasses handle events in some reasonable manner, allowing delegates to decorate this at key points in the interaction; some of the handlers, such as NSText, are fairly complex. Often those decorators are written as Controller code (though not always; the Core Animation -animator proxies are really controller decorators, but all of the custom animations are implemented in NSView subclasses). Then there’s the target-action mechanism for triggering events; the action methods typically live in the Controller. But should they?
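As a minimal sketch of that point (every class, protocol and method name below is invented for illustration, not taken from a real project), here is the Smalltalk Controller’s job being done inside the View layer: the subclass receives -mouseDown: through its NSResponder inheritance and offers a delegate a single decoration point.

    #import <Cocoa/Cocoa.h>

    @class GLDoodleView;

    @protocol GLDoodleViewDelegate
    - (void)doodleView:(GLDoodleView *)view didReceiveClickAtPoint:(NSPoint)aPoint;
    @end

    // A View-layer object doing what the Smalltalk paper calls Controller work.
    @interface GLDoodleView : NSView
    {
        id <GLDoodleViewDelegate> delegate;
    }
    @property (assign) id <GLDoodleViewDelegate> delegate;
    @end

    @implementation GLDoodleView
    @synthesize delegate;

    // -mouseDown: arrives via NSResponder, NSView's superclass: the raw input
    // is interpreted here in the View, and the delegate merely decorates it.
    - (void)mouseDown:(NSEvent *)event
    {
        NSPoint where = [self convertPoint:[event locationInWindow] fromView:nil];
        [delegate doodleView:self didReceiveClickAtPoint:where];
        [self setNeedsDisplay:YES];
    }

    @end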

Going back to that Smalltalk paper, let’s look at the Model:

The model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller).

If the behaviour – i.e. the use cases – is implemented in the Model, then where does that leave the Controller? Incidentally, I agree with and try to use this behaviour-and-data definition of the Model, unlike paradigms such as Presentation-Abstraction-Control where the Abstraction layer really only deals with entities, with the dynamic behaviour being in services encapsulated in the Control layer. All of the user interaction is in the View, and all of the user workflow is in the Model. So what’s left?

There are basically two things left for our application to do, but they’re both implementations of the same pattern – Adaptor. On the one hand, there’s preparing the Model objects to be suitable for presentation by the View. In Cocoa Bindings, Apple even use the class names – NSObjectController and so on – as a hint as to which layer this belongs in. I include in this “presentation adaptor” part of the Controller all those traditional data preparation schemes such as UITableView data sources. The other is adapting the actions etc. of the View onto the Model – i.e. isolating the Model from the AppKit, UIKit, WebObjects or whatever environment it happens to be running in. Even if you’re only writing Mac applications, that can be a useful isolation; let’s say I’m writing a Recipe application (for whatever reason – I’m not, BTW, for any managers who read this drivel). Views such as NSButton or NSTextField are suitable for any old Cocoa application, and Models such as GLRecipe are suitable for any old Recipe application. But as soon as they need to know about each other, the classes are restricted to the intersection of those sets – Cocoa Recipe applications. The question of whether I write a WebObjects Recipes app in the future depends on business drivers, so I could presumably come up with some likelihood that I’m going to need to cross that bridge (actually, the bridge has been deprecated, chortle). But other environments for the Model to exist in don’t need to be new products – the unit test framework counts. And isn’t AppleScript really a View which drives the Model through some form of Adaptor? What about Automator…?
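To make the two adaptors concrete, here is a rough sketch of what such a Controller might look like. GLRecipe is the only name borrowed from above; the controller class, its outlets and the -scaleToServings: method are all invented for illustration, so treat this as one possible shape rather than anybody’s actual code.

    #import <Cocoa/Cocoa.h>
    #import "GLRecipe.h"   // the Model class named above; its API here is assumed

    // The Controller as Adaptor: it knows about both AppKit and GLRecipe,
    // so that neither has to know about the other.
    @interface GLRecipeListController : NSObject
    {
        NSMutableArray *recipes;            // of GLRecipe
        IBOutlet NSTableView *tableView;
    }
    - (IBAction)scaleSelectedRecipe:(id)sender;
    @end

    @implementation GLRecipeListController

    // Presentation adaptor: prepare Model objects for display by the View.
    - (NSInteger)numberOfRowsInTableView:(NSTableView *)aTableView
    {
        return [recipes count];
    }

    - (id)tableView:(NSTableView *)aTableView
        objectValueForTableColumn:(NSTableColumn *)aColumn
        row:(NSInteger)rowIndex
    {
        GLRecipe *recipe = [recipes objectAtIndex:rowIndex];
        return [recipe valueForKey:[aColumn identifier]];   // e.g. "name", "servings"
    }

    // Action adaptor: translate a View action into a Model operation, so
    // GLRecipe never has to import anything from AppKit.
    - (IBAction)scaleSelectedRecipe:(id)sender
    {
        NSInteger row = [tableView selectedRow];
        if (row < 0) return;
        [[recipes objectAtIndex:row] scaleToServings:[sender integerValue]];
        [tableView reloadData];
    }

    @end

The same GLRecipe could then sit behind a unit test, an AppleScript interface or a WebObjects front end, with only the adaptor changing.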

So let me finish by recapping what I think the Controller layer is. It’s definitely an adaptor between Views and Models. But depending on who you ask and what software you’re looking at, it could also be a decorator for some custom view behaviour, and maybe a service for managing the dynamic state of some model entities. To what extent that matters depends on whether it gets in the way of effectively writing the software you need to write.

posted by Graham Lee at 22:45  

Saturday, January 3, 2009

Quote of the year (so far)

From David Thornley via StackOverflow:

“Best practices” is the most impressive way to spell “mediocrity” I’ve ever seen.

I couldn’t agree more. Oh, wait, I could. thud There it goes.

posted by Graham Lee at 01:15  

Tuesday, December 2, 2008

You keep using that word. I do not think it means what you think it means.

In doing a little audience research for my spot at MacDev 2009, I’ve discovered that the word “security” to many developers has a particular meaning. It seems to be consistent with “hacker-proof”, and as it could take most of my hour to set the record straight in a presentation context, here instead is my diatribe in written form. Also in condensed form; another benefit of the blog is that I tend to want to wrap things up quickly as the hour approaches midnight.

Security has a much wider scope than keeping bad people out. A system (any system, assume I’m talking software but I could equally be discussing a business process or a building or something) also needs to ensure that the “good” people can use it, and it might need to respond predictably, or to demonstrate or prove that the data are unchanged aside from the known actions of the users. These are all aspects of security that don’t fit the usual forbiddance definition.

You may have noticed that these aspects can come into conflict, too. Imagine that with a new version of OS X, your iMac no longer merely takes a username and password to log a user in, but instead requires that an Apple-approved security guard – who, BTW, you’re paying for – verifies your identity in an hour-long process before permitting you use of the computer. In the first, “hacker-proof” sense of security, this is a better system, right? We’ve now set a much higher bar for the bad guys to leap before they can use the computer, so it’s More Secure™. Although, actually, it’s likely that for most users this behaviour would just get on their wick really quickly as they discover that checking Twitter becomes a slow, boring and expensive process. So in fact by over-investing in one aspect of security (the access control, also sometimes known as identification and authorisation) my solution reduces the availability of the computer, and therefore the security is actually counter-productive. Whether it’s worse than nothing at all is debatable, but it’s certainly a suboptimal solution.

And I haven’t even begun to consider the extra vulnerabilities that are inherent in this new, ludicrous access control mechanism. It certainly looks to be more rigorous on the face of things, but exactly how does that guard identify the users? Can I impersonate the guard? Can I bribe her? If she’s asleep or I attack her, can I use the system anyway? Come to that, if she’s asleep then can the user gain access? Can I subvert the approval process at Apple to get my own agent employed as one of the guards? What looked to be a fairly simple case of a straw-man overzealous security solution actually turns out to be a nightmare of potential vulnerabilities and reduced effectiveness.

Now I’ve clearly shown that having a heavyweight identification and authorisation process with a manned guard post is useless overkill as far as security goes. This would seem like a convincing argument for removing the passport control booths at airports and replacing them with a simple and cheap username-and-password entry system, wouldn’t it? Wouldn’t it?

What I hope that short discussion shows is that there is no such thing as a “most secure” application; there are applications which are “secure enough” for the context in which they are used, and there are those which are not. But the same solution presented in different environments or for different uses will push the various trade-offs in desirable or undesirable directions, so that a system or process which is considered “secure” in one context could be entirely ineffective or unusable in another.

posted by Graham Lee at 00:45  

Tuesday, November 4, 2008

More on MacDev

Today is the day I start preparing my talk for MacDev 2009. Over the coming weeks I’ll likely write some full posts on the things I decide not to cover in the talk (it’s only an hour, after all), and perhaps some teasers on things I will be covering (though the latter are more likely to be tweeted).

I’m already getting excited about the conference, not only because it’ll be great to talk to so many fellow Mac developers, but also because of the wealth of other sessions which are going to be given. All of them look really interesting, though I’m particularly looking forward to Bill Dudney’s Core Animation talk and Drew McCormack’s session on performance techniques. I’m also going to see if I can get the time to come early to the user interface pre-conference workshop run by Mike Lee; talking to everyone else at that workshop and learning from Mike should both be great ways to catch up on the latest thoughts on UI design.

By the way, if you’re planning on going to the conference (and you may have guessed that I recommend doing so), register early because the tickets are currently a ton cheaper. Can’t argue with that :-).

posted by Graham Lee at 14:39  

Sunday, September 28, 2008

MacDev 2009!

It’s a long way off, but now is a good time to start thinking about the MacDev ’09 conference, organised by the inimitable Scotty of the Mac Developer Network. This looks like being Europe’s closest answer to WWDC, but without all those annoying “we call this Interface Builder, and we call this Xcode” sessions. Oh, and a certain Sophist Mac software engineer will be talking about building a secure Cocoa application.

posted by Graham Lee at 20:27  