Structure and Interpretation of Computer Programmers

I make it easier and faster for you to write high-quality software.

Tuesday, December 15, 2009

Consulting versus micro-ISV development

Reflexions on the software business really is an interesting read. Let me borrow Adrian’s summary of his own post:

Now, here’s an insider tip: if your objective is living a nightmare, tearing yourself apart and swear never touching a keyboard again, choose [consulting]. If your objective is enjoying a healthy life, making money and living long and prosper, choose [your own products].

As the author himself allows, the arguments presented either way are grossly oversimplified. In fact I think there is a very simple axiom underlying what he says, which, if untrue, tips the balance away from writing your own products and towards consulting, contracting or even salaried work. Let me start by introducing some features left out of the original article. They may, depending on your point of view, be pros or cons. They may also apply to more than one of the roles.

A consultant:

  • builds up relationships with many people and organisations
  • is constantly learning
  • works on numerous different products
  • is often the saviour of projects and businesses
  • gets to choose what the next project is
  • has had the risks identified and managed by his client
  • can focus on two things: writing software, and convincing people to pay him to write software
  • renegotiates when the client’s requirements change

A μISV developer:

  • is in sales, marketing, support, product management, engineering, testing, graphics, legal, finance, IT and HR until she can afford to outsource or employ
  • has no income until version 1.0 is out
  • cannot choose when to put down the current version to work on the next product
  • can work on nothing else
  • works largely alone
  • must constantly find new ways to sell the same few products
  • must pay for her own training and development

A salaried developer:

  • may only work on what the managers want
  • has a legal minimum level of security
  • can rely on a number of other people to help out
  • can look to other staff to do tasks unrelated to his mission
  • gets paid holiday, sick and parental leave
  • can agree a personal development plan with the higher-ups
  • owns none of the work he creates

I think the axiom underpinning Adrian Kosmaczewski’s article is: happiness ∝ creative freedom. Does that apply to you? Take the list of things I’ve defined above, and the list of things in the original article, and put them not into “μISV vs. consultant” but “excited vs. anxious vs. apathetic”. Now, this is more likely to say something about your personality than about whether one job is better than another. Do you enjoy risks? Would you accept a bigger risk in order to get more freedom? More money? Would you trade the other way? Do you see each non-software-developing activity as necessary, fun, an imposition, or something else?

So thank you, Adrian, for making me think, and for setting out some of the stalls of two potential careers in software. Unfortunately I don’t think your conclusion is as true as you do.

posted by Graham Lee at 18:42  

Saturday, January 3, 2009

Quote of the year (so far)

From David Thornley via StackOverflow:

“Best practices” is the most impressive way to spell “mediocrity” I’ve ever seen.

I couldn’t agree more. Oh, wait, I could. thud There it goes.

posted by Graham Lee at 01:15  

Thursday, May 22, 2008

Managers: Don’t bend it that far, you’ll break it!

Go on then, what’s wrong with the words we already have? I think they’re perfectly cromulent; it’s very hard to get into a situation where the existing English vocabulary is insufficient to articulate one’s thoughts. I expect that linguists and lexicographers have some form of statistic measuring the coverage in a particular domain of a language’s expression; I also expect that most modern languages have four or five nines of coverage in the business domain.

So why bugger about with it? Why do managers (and by extension, everyone trying to brown-nose their way into the management) have to monetise that which can readily be sold[1]? Why productise that which can also be sold? Why incentivise me when you could just make me happy? Why do we need to touch base, when we could meet (or, on the other hand, we could not meet)? Do our prospectives really see the value-add proposition, or are there people who want to buy our shit?

Into the mire which is CorpSpeak treads the sceadugenga that is TechRepublic, Grahames yrre bær. The first word in their UML in a Nutshell review is “Takeaway”. Right, well, I don’t think they’re about to give us a number 27 with egg-fried rice. (As a noun, that meaning appears only in the Draft Additions to the OED from March 2007.) Nor is there likely to be some connection with golf. All right, let’s read on.

UML lets you capture, document, and communicate information about an application and its design, so it’s an essential tool for modeling O-O systems. Find out what’s covered in O’Reilly’s UML in a Nutshell and see if it belongs in your library.

Ah, that would be a précis, unless I’m very much mistaken. Maybe even a synopsis. Where did you get the idea this was a takeaway? I can’t even work out what the newspeak meaning for takeaway might be. Had I not seen the linked review, I would have thought it meant the “if you take away one idea from this article, make it this” part of the article. In other words, if you’re so stupid that you can only remember one sentence from a whole page, we’ll even tell you which sentence you should concentrate on. This use[2] doesn’t fit with that retroactive definition though, because the conclusion which can be drawn from the above-quoted paragraph is that one might want to read the whole article. I would much rather believe that management types in a hurry would remember the subsequent sentence as their only recollection of the article.

UML in a Nutshell: A Desktop Quick Reference is not misnamed.

[1]You may argue that the word should be spelled “monetize”, as the word most probably came from American English, but it doesn’t matter because it doesn’t bloody exist. Interestingly, the verb sell originated in the Old English verb sellan, meaning to give, with no suggestion of barter or trade.

[2]Language usage is the only place I’ll admit the existence of the word usage.

posted by Graham Lee at 23:22  

Monday, May 5, 2008

Social and political requirements gathering

I was originally going to talk about API: Design Matters and Cocoa, but, and I believe the title of this post may give this away, I’m not going to now. That’s made its way into OmniFocus though, so I’ll do it sooner or later. No, today I’m more likely to talk about The Cathedral and the Bazaar, even though that doesn’t seem to fit the context of requirements gathering.

So I’ve been reading a few papers on Requirements Engineering today, most notably Goguen’s The Dry and the Wet. One of the more interesting and subtle conclusions to draw from such a source (or at least, it’s subtle if you’re a physics graduate who drifted into Software Engineering without remembering to stop being a physicist) is the amount of political influence in requirements engineering. Given that it costs a couple of orders of magnitude more to mend a broken requirement in maintenance than in requirements-gathering (Boehm knew this back in 1976), you’d think that analysts would certainly leave their own convictions at the door, and would try to avoid the "write software that management would like to buy" trap too.

There are, roughly speaking, three approaches to requirements elicitation. Firstly, the dry, unitary approach, where you assume that, like a sculpture in a block of marble, there is a single "ideal" system waiting to be discovered and documented. Then there’s the postmodern approach, in which any kind of interaction between actors and other actors, or actors and the system, is determined entirely by the instantaneous feelings of the actors and is neither static nor repeatable. The key benefit brought by this postmodern approach is that you get to throw out any idea that the requirements can be baselined, frozen, or in any other way rendered static to please the management.

[That’s where my oblique CatB reference comes in – the Unitary analysis model is similar to ESR’s cathedral, and is pretty much as much of a straw man in that ‘purely’ Unitary requirements are seldom seen in the real world; and the postmodern model is similar to ESR’s bazaar, and is similarly infrequent in its pure form. The only examples I can think of where postmodern requirements engineering would be at all applicable are in social collaboration tools such as Facebook or Git.]

Most real requirements engineering work takes place in the third, intermediate realm; that which acknowledges that there is a plurality among the stakeholders identified in the project (i.e. that the end-user has different goals from his manager, and she has different goals from the CEO), and models the interactions between them in defining the requirements. Now, in this realm software engineering goes all quantum; there aren’t any requirements until you look for them, and the value of the requirements is modified by the act of observation. A requirement is generated by the interaction between the stakeholders and the analyst; it isn’t an intrinsic property of the system under interaction.

And this is where the political stuff comes in. Depending on your interaction model, you’ll get different requirements for the same system. For instance, if you’re of the opinion that the manager-charge interaction takes on a Marxist or divisive role, you’ll get different requirements than if you use an anarchic model. That’s probably why Facebook and Lotus Notes are completely different applications, even though they really solve the same problem.

Well, in fact, Notes and Facebook solve different problems, which brings us back to a point I raised in the second paragraph. Facebook solves the "I want to keep in contact with a bunch of people" problem, while Notes solves the "we want to sell a CSCW solution to IT managers" problem. Which is itself a manifestation of the political problem described over the last few paragraphs, in that it represents a distortion of the interaction between actors in the target environment. Of course, even when that interaction is modelled correctly (or at least with sufficient accuracy and precision), it’s only valid as long as the social structure of the target environment doesn’t change – or some other customer with a similar social structure comes along ;-)

This is where I think that the Indie approach common in Mac application development has a big advantage. Many of the Indie Mac shops are writing software for themselves and perhaps a small band of friends, so the only distortion of the customer model which could occur would be if the developer had a false opinion of their own capabilities. There’s also the possibility to put too many “developer-user” features in, but as long as there’s competition pushing down the complexity of everybody’s apps, that will probably be mitigated.

posted by Graham Lee at 20:36  

Friday, January 18, 2008

Project: Autonomous Revolutionary Goldfish

I was going to write, am still going to write, about how silly project names get bandied about in the software industry. But in researching this post (sorry blogosphere, I’ve let you down) I found that the software-generated Gantt chart was patented by Fujitsu in the US in 1998, which to me just explains everything that is wrong with the way the US patent system is applied to software. For reference, Microsoft Project was written in 1987 (although it is not strictly prior art for the patent: in my experience, Project does everything in its power to prevent the user from creating a Gantt chart).

Anyway, why is it that people care more about the fact that they’re going to be using Leopard, Longhorn, Cairo, Barcelona or Niagara than about what any one of those is? As discussed in [1], naming software projects (though really I’m talking about projects in the general sense of collections of tasks in order to complete a particular goal) in the same way you might name your pet leads to an unhealthy psychological attachment to the project, causing it to develop its own (perceived) personality and vitality which can cause the project to continue long after it ought to have been killed. For every Cheetah, there’s a Star Trek that didn’t quite make it. And why should open source projects like Firefox or Ubuntu GNU/Linux need "code names" if their innards are supposed to be on public display?

I’ve decided that I know best, of course. My opinion is that, despite what people may say about project names being convenient shorthand to assist discussion, naming your project in an obtuse way splits us into the two groups which humanity adores: those of us who know, and those of you who don’t. The circumstance I use to justify this is simple: if project names are mnemonics, why aren’t the projects named in a mnemonic fashion? In what way does Rhapsody describe "port of OPENSTEP/Mach to PowerPC with the Platinum look and feel"? Such cultish behaviour of course leads directly to the point made in the citation; because we don’t want to be the people in the know of something not worth knowing, we tend to keep our dubiously-named workflow in existence for far longer than could be dispassionately justified.

Of course, if I told you the name of the project I’m working on, you wouldn’t have any idea what I’m working on ;-).

[1]Pulling the Plug: Software Project Management and the Problem of Project Escalation, Mark Keil. MIS Quarterly, Vol. 19, No. 4 (Dec., 1995), pp. 421-447

posted by Graham Lee at 00:01  

Friday, November 23, 2007

Post #100!

And to celebrate, we look at the differences between managers and humans, er, programmers.

posted by Graham Lee at 00:02  
