Programming, maths and the other things

Sarah Mei argues that programming is not math; it is, she says, language. I don’t think it’s hard to see the truth in the first part, though due to geopolitical influences on my personality I’d make the incrementally longer statement that programming is not maths.

But there’s maths in programming

Let’s agree to leave aside the situations in which we use programming to solve mathematics problems, such as geometry or financial modelling. These are situations in which the maths is intrinsic to the problem domain, and worrying about the amount of maths involved could confuse two different sources of maths.

Nonetheless, one may argue that the computer is simulating a mathematical structure, and that therefore you need to understand the mathematical model of the structure in order to get the computer to do the correct thing. I can model the computer’s behaviour using the lambda calculus, and I’ve got a mathematically-rich model. I can model the computer’s behaviour as a sequence of operations applied to an infinite paper tape, and I’ve got a different model. These two models can be interchanged, even if they surface different aspects of the real situation being modelled.

It’s the second of the two models that leads to the conclusion that capability at advanced maths is not intrinsic to success at programming. If what the computer’s doing can be understood in terms of relatively simple operations like changing the position of a tape head and tallying numbers, then you could in principle not only understand a program but even replicate it yourself without a deep knowledge of mathematics. Indeed that principle provides the foundation for one argument on the nature of software as intellectual property: a computer program is nothing more than a sequence of instructions that could be followed by someone with a pencil and paper, and therefore cannot represent a patentable invention.

Maths, while not intrinsic, may be important to programming

It may be, but it probably isn’t. Indeed it’s likely that many thousands of programmers every day ignore key results in the mathematical investigation of programming, and still manage to produce software that (at least sort-of) works.

Take the assertion as an example. Here’s a feature of many programming environments that has its roots in the predicate systems used to reason about computer programs. The maths is simple enough: a Hoare triple {P} S {Q} says that if precondition P is true before executing statement S, then postcondition Q will hold after its execution.
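
To make that concrete, here’s a minimal sketch of a Hoare triple written out as runtime assertions. The function and its banking-flavoured predicates are invented for this example rather than taken from anywhere in particular:

#import <Foundation/Foundation.h>

// Invented example: P and Q are written as assertions around the statement S.
NSInteger creditAccount(NSInteger balance, NSInteger amount)
{
    NSCAssert(balance >= 0 && amount >= 0, @"precondition P violated");  // P
    NSInteger newBalance = balance + amount;                             // S
    NSCAssert(newBalance >= balance, @"postcondition Q violated");       // Q
    return newBalance;
}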

From this, and knowledge of what your program should be doing, you can prove the correctness of your program. Because these triples compose, if you can prove the outcome of every statement in a subroutine, then you can make an overall statement about the outcome of that subroutine. Similarly, you can compose the statements you can make about subroutines to prove the behaviour of a program composed out of those subroutines. Such is the basis of techniques like design by contract, and even more formal techniques like proof-carrying code and model-carrying code.

…which are, by and large, not used. You can write an assertion without having the mathematical knowledge described above, and you can write a program without any assertions.

Here’s what Tony Hoare said about what programmers know of their programs, back in 1969 (pronoun choice is original):

At present, the method which a programmer uses to convince himself of the correctness of his program is to try it out in particular cases and to modify it if the results produced do not correspond to his intentions. After he has found a reasonably wide variety of example cases on which the program seems to work, he believes that it will always work.

Since then, we’ve definitely (though not consistently) adopted tools to automate the trying out of programs (or routines in programs) on particular cases, and largely not adopted tools for automatic proof construction and checking. Such blazing progress in only 45 years!

There are more things in heaven and earth, Horatio

Here’s a potted, incomplete list of things required of someone or a group of people making software. The list is presented in the order in which I remembered them, and doesn’t reflect any intrinsic dependencies.

  • know of a problem that needs solving
  • think of a solution to the problem
  • presume or demonstrate that the solution can be made out of software
  • understand whether the solution will be useful and valuable
  • understand the constraints within which the solution will operate
  • understand and evaluate the changes to the system that arise from the solution existing
  • design an implementation of the solution to fit the constraints (maths is optional here, but index cards and arrows on whiteboards will often work)
  • build that solution (maths is optional here)
  • demonstrate that the implementation does indeed solve the problem (maths is optional here)
  • convince people that they in fact need this problem solved in this particular way
  • explain to people how to use this implementation
  • react to changes that occur in the implementation’s environment
  • pay for all of the above (some maths helps here)

The point is that actual programming is a very small part of everything that’s going on. Even if maths were intrinsic to parts of that activity, it would still be a tiny contribution to the overall situation. Programming is maths in the same way that cooking is thermodynamics.

Posted in philosophy after a fashion

Intellectual property and software: the nuclear option

There are many problems that arise from thinking about the ownership of software and its design. Organisations like the Free Software Foundation and Open Source Initiative take advantage of the protections of copyright of source code – presumed to be a creative work analogous to a written poem or a painting on canvas – to impose terms on how programs derived from the source code can be used.

Similar controls are not, apparently, appropriate for many proprietary software companies, who choose neither to publish their source code nor to control its use through similar licences. Some choose to patent their programs – analogous to a machine or manufactured product – controlling not how they are used but the freedom of competitors to release similar products.

There is a lot of discomfort in the industry with the idea that patents should apply to software. On the other hand, there is also distaste when a competitor duplicates a developer’s software, exactly the thing patents are supposed to protect against.

With neither copyright nor patent systems being sufficient, many proprietary software companies turn to trade secrets. Rather than selling their software, they license it to customers on the understanding that they are not allowed to disassemble or otherwise reverse-engineer its working. They then argue that because they have taken reasonable steps to protect their program’s function from publication, it should be considered a trade secret – analogous to their customer list or the additives in the oil used by KFC.

…and some discussions on software ownership end there. Software is a form of intellectual property, they argue, and we already have three ways to cope with that legally: patents, copyright, and trade secrets. A nice story, except that we can quickly think of some more.

If copyright is how works of art are protected, then we have to acknowledge that not all works of art are considered equal. Some are given special protection as trade marks: exclusive signs of the work or place of operation of a particular organisation. Certain features of a product’s design are considered similarly as the trade dress of that product. Currently the functionality of a product cannot be considered a trademark or trade dress, but what would be the ramifications of moving in that direction?

We also have academic priority. Like the patent system, the academic journal system is supposed to encourage dissemination of new results (like the patent system, arguments abound over whether or not it achieves this aim). Unlike the patent system, first movers are awarded not a monopoly, but recognition. What would the software industry look like if companies had to disclose which parts of their products they had thought of themselves, and which they had taken from Xerox, or VisiCorp, or some other earlier creator? Might that discourage Sherlocking and me-too products?

There’s also the way that nuclear proliferation is controlled. It’s not too hard to find out how to build nuclear reactors or atomic weapons, particularly as so much of the work was done by the American government and has been released into the public domain. What is hard is actually building a nuclear reactor or atomic weapon. While the knowledge is unrestricted, its application is closely controlled, overseen by an international agency that is part of the United Nations. This has its parallels with the patent system, where centralisation into a government office is seen as one of the problems.

The point of this post is not to suggest that any one of the above analogues is a great fit for the problem of ownership and competition in the world of software. The point is to suggest that perhaps not all of the available options have been explored, and that accepting the current state of the world “because we’ve exhausted all the possibilities” would be to give up early.

Posted in economics, IANAL

On Mental Health

This post has been a while in the writing, I suppose waiting for the perfect time to publish it. The two things that happened today to make me finally commit it to electrons were the news about Robin Williams, and reading Robert Bloch’s That Hell-Bound Train. Explaining the story’s relevance would spoil it, but it’s relevant. And short.

I didn’t leave Big Nerd Ranch because I disliked the job. I loved it. I loved working with clever people, and teaching clever people, and building things with clever people, and dicking around on Campfire posting meme images with clever people, and alternately sweating and freezing in Atlanta with clever people, and speaking stilted Dutch with clever people.

I left because I spent whole days staring at Xcode without doing anything. Because I knew that if I wrote code and pushed it, what I would do would be found lacking, and they’d realise that I only play a programmer on TV, even though I also knew that they were friendly, kind, supportive people who would never be judgemental.

When I left I was open and honest with my colleagues and my manager, and asked them to be open and honest with each other. I felt like I was letting them down, and wanted to do the least possible burning of bridges. I finished working on the same day that I voiced my problems, then I went to bed and had a big cry.

Which sounds bad, but was actually a release of sorts. I don’t remember the previous time I’d cried, or really done anything that expresses emotion. I think it may have been whenever British Mac podcast episode 35 was on, playing the final scene of Blackadder Goes Forth. Which apparently was back in 2006.

Anyway, I then took a couple of months away from any sort of work. On the first day post-Ranch I made an appointment to see a doctor, who happened to have available time that same day. He listened to a story much like the above, and to a description of myself much like the below, and diagnosed depression.

You may have seen that demo of Microsoft’s hyper-lapse videos where you know that there’s loads going on and tons of motion things to react to, but the image is eerily calm and stable. Yup. There’s lots going on, but in here everything’s muffled and has no effect.

It’s not like coasting, though. It’s like revving the engine with the clutch disengaged. I never stop thinking. That can get in the way of thinking about things that I actually want to think about, because I’m already thinking about something else. It means getting distracted in conversations, because I’m already thinking about something else. It means not getting to sleep until I stop thinking.

It also means writing a lot. You may have got rid of a song stuck in your head (an earworm) by playing that song through, or by singing it to yourself. I get rid of brainworms by writing them down. I use anything: from a Moleskine notebook and fountain pen on an Edwardian writing slope to Evernote on a phone.

Now, you may be thinking—or you may not. It may just be that I think you’re thinking it. While I’m still extraverted, I’m also keen to avoid being in situations where I think other people might be judging me, because I’ll do it on their behalves. It’s likely that I’m making this up—that it’s a bit weird that I keep telling jokes if I’m supposed to be emotionally disengaged. Jokes are easy: you just need to think of two things and invent a connection between them. Or tell the truth, but in a more obvious way than the truth usually lets on. You have to think about what somebody else is thinking, and make them think something else. Thinking about thinking has become a bit of a specialty.

Having diagnosed me, the doctor presented two choices: either antidepressant medication, or cognitive behavioural therapy. I chose the latter. It feels pretty weird, like you’re out to second-guess yourself. Every time you have a bad (they say “toxic”, which seems apt: I have a clear mental image of a sort of blue-black inky goop in the folds of my brain that stops it working) thought you’re supposed to write it down, write down the problems with the reasoning that led to it, and write down a better interpretation of the same events. It feels like what it is—to psychology what the census is to anthropology. Complex science distilled into a form anyone can fill in at home.

This post has been significantly more introspective than most of this blog, which is usually about us programmers collectively. Honestly I don’t know what the message to readers is, here. It’s not good as awareness; you probably all know that this problem exists. It’s not good as education; I’m hardly the expert here and don’t know what I’m talking about. I think I just wanted to talk about this so that we all know that we can talk about this. Or maybe to say that programmers should be careful about describing settings as crazy or text as insane because they don’t know who they’re talking to. Maybe it was just to stop thinking about it.

Posted in Uncategorized

Contractually-obligated testing

About a billion years ago, Bertrand Meyer (he of Open-Closed Principle fame) introduced a programming language called Eiffel. It had a feature called Design by Contract, which let you define constraints that your program had to adhere to in execution. It’s a bit like convincing a C compiler to emit checks for rules like integer underflow everywhere in your code, except that you get to write your own rules.

To see what that’s like, here’s a little Objective-C (I suppose I could have used Eiffel, as Eiffel Studio is in homebrew, but I didn’t). This is my untested, un-contractual Objective-C Stack class.

@interface Stack : NSObject

- (void)push:(id)object;
- (id)pop;

@property (nonatomic, readonly) NSInteger count;

@end

static const int kMaximumStackSize = 4;

@implementation Stack
{
    __strong id buffer[4];
    NSInteger _count;
}

- (void)push:(id)object
{
    buffer[_count++] = object;
}

- (id)pop
{
    id object = buffer[--_count];
    buffer[_count] = nil;
    return object;
}

@end

Seems pretty legit. But I’ll write out the contract: the rules to which this class will adhere, provided its users do too. Firstly, some invariants: the count will never go below 0 or above the maximum number of objects. Objective-C doesn’t have any syntax for this, unlike Eiffel, so it looks just a little bit messy.

@interface Stack : ContractObject

- (void)push:(id)object;
- (id)pop;

@property (nonatomic, readonly) NSInteger count;

@end

static const int kMaximumStackSize = 4;

@implementation Stack
{
    __strong id buffer[4];
    NSInteger _count;
}

- (NSDictionary *)contract
{
    NSPredicate *countBoundaries = [NSPredicate predicateWithFormat: @"count BETWEEN %@",
                                    @[@0, @(kMaximumStackSize)]];
    NSMutableDictionary *contract = [@{@"invariant" : countBoundaries} mutableCopy];
    [contract addEntriesFromDictionary:[super contract]];
    return contract;
}

- (void)in_push:(id)object
{
    buffer[_count++] = object;
}

- (id)in_pop
{
    id object = buffer[--_count];
    buffer[_count] = nil;
    return object;
}

@end

I said the count must never go outside of this range. In fact, the invariant need only hold before and after calls to public methods: it’s allowed to be broken during execution. If you’re wondering how this interacts with threading: confine ALL the things! Anyway, let’s see whether the contract is adhered to.

int main(int argc, char *argv[]) {
    @autoreleasepool {
        Stack *stack = [Stack new];
        for (int i = 0; i < 10; i++) {
            [stack push:@(i)];
            NSLog(@"stack size: %ld", (long)[stack count]);
        }
    }
}

2014-08-11 22:41:48.074 ContractStack[2295:507] stack size: 1
2014-08-11 22:41:48.076 ContractStack[2295:507] stack size: 2
2014-08-11 22:41:48.076 ContractStack[2295:507] stack size: 3
2014-08-11 22:41:48.076 ContractStack[2295:507] stack size: 4
2014-08-11 22:41:48.076 ContractStack[2295:507] *** Assertion failure in -[Stack forwardInvocation:], ContractStack.m:40
2014-08-11 22:41:48.077 ContractStack[2295:507] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException',
 reason: 'invariant count BETWEEN {0, 4} violated after call to push:'

Erm, oops. OK, this looks pretty useful. I’ll add another clause: the caller isn’t allowed to call -pop unless there are objects on the stack.

- (NSDictionary *)contract
{
    NSPredicate *countBoundaries = [NSPredicate predicateWithFormat: @"count BETWEEN %@",
                                    @[@0, @(kMaximumStackSize)]];
    NSPredicate *containsObjects = [NSPredicate predicateWithFormat: @"count > 0"];
    NSMutableDictionary *contract = [@{@"invariant" : countBoundaries,
             @"pre_pop" : containsObjects} mutableCopy];
    [contract addEntriesFromDictionary:[super contract]];
    return contract;
}

So I’m not allowed to hold it wrong in this way, either?

int main(int argc, char *argv[]) {
    @autoreleasepool {
        Stack *stack = [Stack new];
        id foo = [stack pop];
    }
}

2014-08-11 22:46:12.473 ContractStack[2386:507] *** Assertion failure in -[Stack forwardInvocation:], ContractStack.m:35
2014-08-11 22:46:12.475 ContractStack[2386:507] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException',
 reason: 'precondition count > 0 violated before call to pop'

No, good. Having a contract is a bit like having unit tests, except that the unit tests are always running whenever your object is being used. Try out Eiffel; it’s pleasant to have real syntax for this, though really the Objective-C version isn’t so bad.
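
For comparison, here’s roughly what the one-shot unit-test version of that precondition check might look like. This is a sketch using XCTest; the test class and the imported header name are my own inventions, and it assumes Foundation assertions are enabled (as in a debug build) so that a contract violation raises an exception:

#import <XCTest/XCTest.h>
#import "ContractStack.h" // hypothetical header exposing the Stack class above

// A unit test exercises the precondition once, at test time;
// the contract checks it on every call.
@interface ContractStackTests : XCTestCase
@end

@implementation ContractStackTests

- (void)testPoppingAnEmptyStackViolatesThePrecondition
{
    Stack *stack = [Stack new];
    XCTAssertThrows([stack pop], @"pop with count == 0 should break the contract");
}

@end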

Finally, the contract is implemented by some simple message interception (try doing that in your favourite modern programming language, non-Rubyists!).

@interface ContractObject : NSObject
- (NSDictionary *)contract;
@end

static SEL internalSelector(SEL aSelector);

@implementation ContractObject

- (NSDictionary *)contract { return @{}; }

- (NSMethodSignature *)methodSignatureForSelector:(SEL)aSelector
{
    NSMethodSignature *sig = [super methodSignatureForSelector:aSelector];
    if (!sig) {
        // The public selector isn't implemented; use the signature of its
        // in_-prefixed counterpart so that forwarding can proceed.
        sig = [super methodSignatureForSelector:internalSelector(aSelector)];
    }
    return sig;
}

- (void)forwardInvocation:(NSInvocation *)inv
{
    // The public selector (e.g. push:) has no implementation of its own, so the
    // runtime forwards the invocation here; map it onto the in_-prefixed method
    // that does the real work.
    SEL realSelector = internalSelector([inv selector]);
    if ([self respondsToSelector:realSelector]) {
        NSDictionary *contract = [self contract];
        NSPredicate *alwaysTrue = [NSPredicate predicateWithValue:YES];
        NSString *calledSelectorName = NSStringFromSelector([inv selector]);
        inv.selector = realSelector;
        // Check the invariant and any precondition before invoking the method...
        NSPredicate *invariant = contract[@"invariant"]?:alwaysTrue;
        NSAssert([invariant evaluateWithObject:self],
            @"invariant %@ violated before call to %@", invariant, calledSelectorName);
        NSString *preconditionKey = [@"pre_" stringByAppendingString:calledSelectorName];
        NSPredicate *precondition = contract[preconditionKey]?:alwaysTrue;
        NSAssert([precondition evaluateWithObject:self],
            @"precondition %@ violated before call to %@", precondition, calledSelectorName);
        [inv invoke];
        // ...and the postcondition and the invariant again after it returns.
        NSString *postconditionKey = [@"post_" stringByAppendingString:calledSelectorName];
        NSPredicate *postcondition = contract[postconditionKey]?:alwaysTrue;
        NSAssert([postcondition evaluateWithObject:self],
            @"postcondition %@ violated after call to %@", postcondition, calledSelectorName);
        NSAssert([invariant evaluateWithObject:self],
            @"invariant %@ violated after call to %@", invariant, calledSelectorName);
    }
}

@end

SEL internalSelector(SEL aSelector)
{
    return NSSelectorFromString([@"in_" stringByAppendingString:NSStringFromSelector(aSelector)]);
}
Posted in architecture of sorts, code-level, OOP, TDD

The Wealth of Applications

Adam Smith’s Inquiry into the Nature and Causes of the Wealth of Nations opens by discussing the division of labour: how people are able to get more done when they each pick a small part of the work to be done and focus on that, trading their results with others to gain access to the common wealth. He goes on to explain that while what people are really trading is labour, it’s hard to think about that, so the more comprehensible stand-in of money is used instead.

Smith’s example is a pin factory, where one person might draw out metal into wire, another cut it to length, a third sharpen one end, a fourth flatten the opposite end. He says that there is a

great increase in the quantity of work, which, in consequence of the division of labour, the same number of people are capable of performing

Great, so dividing labour must be a good thing, right? That’s why a totally post-Smith industry like producing software has such specialisations as:

Oh, this argument isn’t going the way I want. I was kind of hoping to show that software development, as much a product of Western economic systems as one could expect to find, was consistent with Western economic thinking on the division of labour. Instead, it looks like generalists are prized.

[Aside: there really are a few limited areas of specialisation, but then generalists co-opt their words to gain by association a share of whatever cachet the specialists enjoy. For example, did you know that writing UNIX applications makes you an embedded programmer? Another specialisation is kernel programming, but it’s easy to find sentiment that it’s just programming too.]

Maybe the conditions for division of labour just haven’t arisen in programming. Returning to Smith, we learn that such division “is owing to three different circumstances”:

  • the increase of dexterity in every particular [worker]

This seems to be true for programmers. There is an idea that programmers are not fungible resources, and that the particular skills and experiences of any individual programmer can be more or less relevant to the problems your team is trying to solve. If all programmers truly were generalists, then they would basically be interchangeable (though some might be faster, or produce fewer defects, than others).

Indeed Harlan Mills proposed that software teams be built around the idea that different people are suited to different roles. The Chief Programmer Team unsurprisingly included a chief programmer, analogous to a surgeon in a medical team. As needed, other specialists – including a librarian, technical writers, testers, and “language lawyers” (those with expertise in a particular programming language) – could be added to support the chief programmer and perform tasks related to their specialities.

  • the saving of time which is commonly lost in passing from one species of work to another

Programmers believe in this, too. We call it, after DeMarco and Lister, flow.

Which leaves only one place in which to look for a difficulty in applying the division of labour, so let’s try that one.

  • to the invention of a great number of machines which facilitate and abridge labour

So we could probably divide up the labour of computing if only we could invent machines that could do the computing for us.

Hold on. Isn’t inventing machines what we do? Yet they don’t seem to be “facilitating and abridging labour”, at least not for us. Why is that?

At the technical level, there are two things holding back the tools we create:

  • they’re not always very good. Building your software on top of a tool as a way to save labour is perceived as a lottery: the price is the labour involved in working with or around the tool, and the pay-off comes if that turns out to be less work than not using the tool at all.

  • they only really turn computers into other computers, and as such aren’t as great a benefit as one might expect. We’re trying to get to “the computer you have can solve the problem you have”, and we only really have tooling for “the computer you have can be treated the same way as any other computer”. We’re still trying to get to “any computer (including the one you have) can solve the problem you have”.

This last point shows that the criterion of facilitating and abridging labour has been only partially met. The goal of moving closer to the end state, where our automated machines help with solving actual problems, seems to have come up a few times (mostly around the 1980s) with no lasting impact: in object-oriented programming, artificial intelligence and computer-aided software engineering; specifically Upper CASE.

Why not close that gap, particularly if there is (or at least has been) both academic and research interest in doing so? We have to leave the technology behind now, and look at the economics of software production. Let’s say that computing technology is in the middle of a revolution analogous to the industrial revolution (partly because Brad Cox already went there so I don’t have to overthink the analogy).

The interesting question is: when will we leave the revolution? The industrial revolution was over when mechanisation was no longer the new, exciting, transformative thing. When the change was over, the revolution was over. It was over when people accepted that machines existed, and factored them into the costs of doing their businesses like staff and materials. When you could no longer count on making a machine as a profitable endeavour in itself: it had to do something. It had to pay for itself.

During the industrial revolution, you could get venture capital just for building a cool-looking machine. You could make money from a machine just by showing it off to people. The bear didn’t have to dance well: it was sufficient that it could dance at all.

We’re at this stage now, in the computer revolution. You can get venture capital just for building a cool-looking website. Finding customers who need that thing can come later, if at all. So we haven’t exited the revolution, and won’t while it’s still possible to get money just for building the thing with no customers.

How is that situation supported? In Smith’s world, the transparent relationship between supply and demand drove the efficiency requirements and thus the division of labour and the creation of machines. You know how many pins you need, and you know that your profits will be greater if you can buy cheaper pins. The pin factory owners know how many employees they have and what wages they need, and know that if they can increase the number of pins made per person-time, they can sell pins for less money and make more profit.

How do supply and demand work in software? Supply exists in bucketloads. If you want software written, you can always find some student who wants to improve their skills or even a professional programmer willing to augment their GitHub profile.

[This leads to another deviation from Adam Smith’s world, but one that can be left for later: he writes

In the advanced state of society, therefore, they are all very poor people who follow as a trade, what other people pursue as a pastime.

This is evidently not true in software, where many people will work for free and others are very well-paid.]

There’s plenty of supply, but the demand question is trickier to answer. In “the age of supply, not demand”, Aurel Kleinerman (via Bob Cringely) suggests that supply is driving demand: that people (this seemingly limitless pool of software labour) are building things, then showing them around and seeing whether they can find some people who want those things. I think this is true, but also over-simplified.

It’s not that there’s no demand, it’s that the demand is confused. People don’t know what could be demanded, they don’t know what we’ll give them and whether it’ll meet their demand, and even if it does, they don’t know whether it’ll be better or not. This comic strip demonstrates the situation, but tries to support the unreasonable position that the customer is at fault over this.

Just as using a library is a gamble for developers, so is paying for software a gamble for customers. You are hoping that paying for someone to think about the software will cost you less over some amount of time than paying someone to think about the problem that the software is supposed to solve.

But how much thinking is enough? You can’t buy software by the bushel or hogshead. You can buy machines by the ton, but they’re not valued by weight; they’re valued by what they do for you. So, let’s think about that. Where is the value of software? How do I prove that thinking about this is cheaper, or more efficient, than thinking about that? What is efficient thinking, anyway?

Could it be that knowledge work just isn’t amenable to the division of labour? Clearly not: I am not a lawyer. Someone who is a lawyer may not be a property lawyer. A lawyer who is a property lawyer may only know UK property law. A lawyer who is a UK property lawyer might only know laws applicable to the sale of private dwellings. And so on: knowledge work certainly is divisible.

Could it be that there’s no drive for increased efficiency? Maybe so. In fact, that seems to be how the consumer economy works: invent things that people want so that they need money so that they need to work so that there are jobs in making the things that people want. If it got too easy to do middle-class work, then there’d be less for the middle class to do, which would mean less middle class spending, which would mean less demand for consumer goods, which would… Perhaps knowledge work needs to be inefficient to support its own economy.

If that’s true, will there ever be a drive toward division of labour in the field of software? Maybe, when the revolution is over and the interesting things to create for their own sake lie elsewhere. When computers are no longer novel technology, but are simply a substrate of industry. When the costs and benefits are sufficiently well-understood that business owners can know what they need, when it’s done, and how much it should cost.

That’ll probably be preceded by another software-led recession, as VCs realise that the startups they’re funding are no longer interesting for their own sake and move on to the (initially much smaller) nascent field of the next revolution. Along with that will come the changes in regulation, liability and insurance as businesses accept that software should “just work” and policy adjusts to support that. With stability comes the drive to reduce costs and increase quality and rate of output, and with that comes the division of labour predicted by Adam Smith.

Posted in economics, futurology

One decade in

The first working week of August 2014 comes ten years after the first working week of August 2004. You knew that. The first working week of August 2004 was the first week since completing my degree that I worked for a living: the start of a sequence of (paid) events that led me here.

Obviously it’s not the start of the sequence at all, but I’ve already covered that story. It’s not even when I first learned Objective-C: that was about a year earlier. However, stories are easier to tell if they begin once upon a time, rather than in the middle of a collection of events, the connections between which are subtle and hard to examine.

It would be nice to give a recommendation to people who are in the position now that I was ten years ago, but it’s unlikely that the same things that worked back in 2004 are still applicable. Should you want to try, then my suggestion is this: bet your whole career on some apparently minuscule niche, and hope against hope that the only vendor supporting it creates a whole new industry within about four years so that your seemingly poor decision cashes out.

As an aside, you can draw clear lines around the things I was using back then that I’m still using now. Some of the lines are fuzzy: I’m still using “UNIX”, though that doesn’t mean the same thing (nor did it mean then anything that would’ve been recognisable to a user from 1994, 1984 or 1974).

It would perhaps be less nice to give a list of lessons that I claim to have learned over those ten years. Those would, of course, be lessons that I derive now from my recollection of that time, and would mostly serve to add to the corpus of folklore that permeates our field.

Which brings me on to the one thing I unequivocally do know after ten years in [IT, computers, whatever you want to call it]: that I still don’t know a lot. I definitely know more about programming computers than I did then, but that’s only an infinitesimal part of the fundamental interconnectedness of all things.

Posted in whatevs

PADDs, not the iPad

Alan Kay says that Xerox PARC bought its way into the future by paying lots of money for each computer. Today, you can (almost) buy your way into the future of mobile computers by paying small amounts of money for lots of computers. This is the story of the other things that need to happen to get into the future.

I own rather a few portable computers.

[Photo: Macs, iPads, Androids and more.]

Such a photo certainly puts me deep into a long tail of interwebbedness. However, let me say this: within a couple of decades, this photo will look like a perfectly reasonable number of portable computers to find in one office room. Faster networks will enable more computers in a single location, and that will be an interesting source of future applications.

Here’s the distant future. The people in this picture are doing something that today’s people do not do so readily.

[Image: PADD]

One of them is giving his PADD to the other. Is that because PADDs are super-expensive, so everybody has to share them? No: it’s because they’re so cheap, it’s easy to give one to someone who needs the information on it and to print/replicate/fax/magic a new one.

We can’t do that today. Partly it’s because the devices are pretty expensive. But their value isn’t just associated with their monetary worth: they’ve got so many personalisations, account settings and stored credentials on them that the idea of giving an unlocked device to someone else is unconscionable to many people. The trick is not merely to make them cheap, but to make them disposable.

Disposable pad computers also solve another problem: that of how to display multiple views simultaneously on the pad screen.

[Image: PADDs]

You can go from rearranging metaphorical documents on a metaphorical desktop, back to actually arranging documents on a desktop. Rather than fighting with admittedly ingenious split-screen UI, you can just put multiple screens side by side.

The cheapest tablet computer I’m aware of that’s for sale near me is around £30, but that’s still too expensive. When they’re effectively free, and when it’s as easy to give them away as it is to use them, then we’ll really be living in the future.

Just as it started, this post ends with Xerox PARC, and the quotation that inspired it:

Pads are intended to be “scrap computers” (analogous to scrap paper) that can be grabbed and used anywhere; they have no individualized identity or importance.

Posted in futurology, UI

The reasonable effectiveness of developer tools

In goals upon goals upon goals, I suggested that a fixation on developer tools is misplaced. This is not to say that developer tools are unhelpful, nor that they can’t have a significant impact on our work.

Consider the following, over-restricted, definition of what a programmer does:

A programmer’s responsibility is to turn a computer into a solution to somebody’s problem.

We have plenty of tools designed to stop you having to consider the details of this computer when doing that: assemblers, compilers, device drivers, hardware abstraction layers, virtual machines, memory managers and so on. Then we have tools to speed up aspects of working in those abstractions: build systems, IDEs and the like. And tools that help make sure you moved in the correct direction: testing tools, analysers and the like.

Whether we have tools that help you move from an abstract view of your computer to even an abstract view of your problem depends strongly on your problem domain, and the social norms of programmers in that space. Science is fairly well-supplied, for example, with both commercial and open source tools.

But many developers will be less lucky, or less aware of the tools at their disposal. Having been taken from “your computer…” to “any computer…” by any of a near-infinite collection of generic developer tools, they will then get to “…can solve this problem” by building their own representations of the aspects of the problem. In this sense, programming is still done the way we did it in the 1970s, by deciding what our problem is and how we can model bits of it in a computer.

It’s here, in the bit where we try to work out whether we’re building a useful thing that really solves the problems real people really have, that there are still difficulties, unnecessary costs and incidental complexity. Therefore it’s here where judicious selection and use of tools can be of benefit, as their goals support our goals of supporting our users’ goals.

And that’s why I think that developer tools are great, even while warning against fixating upon them. Fixate on the things that need to be done, then discover (or create) tools to make them faster, better and redundant.

Posted in software-engineering, tool-support

Goals upon goals upon goals

As I read Ed Finkler’s piece on losing excitement in technology, I found myself recognising pieces of my own story. The prospect of a new language or framework no longer seems like a new toy, an excuse to stay up all night studying it, using it and learning its secrets as I would have done a few years ago. Instead I find myself asking what new problems are introduced, whether they’re worth accepting over the devils we know are in our existing tools and how many developer-decades are soon to be lost in reimplementing libraries that have been in CPAN for decades in a new language and delivered via a new packaging system.

Because, as I alluded to when not really talking about Swift and as more eloquently described by Matt Gemmell, our problems do not, for the most part, come from our tools and will not be solved by adding more tools. Indeed a developer’s fixation on their tools will allow the surrounding business, market, legal and social problems to go unchecked. No change of programming language will turn customers who don’t want to pay $0.99 into customers who do want to pay $99. Implicitly unwrapped optionals might catch the occasional bug in development but they just aren’t worth a 100x increase in value to people on the sharp end of our creations.

What will be of benefit to them? I think we’re going to have to go through the software equivalent of the consolidation that the consumer goods industry has already seen in hardware. Remember the introduction to the iPhone?

An iPod, a phone, an internet mobile communicator, these are NOT three separate devices!

Actually it’s none of those things. Well, it is, in that it’s all of those and more. Fundamentally, it’s a honking great handheld control unit with hundreds of buttons on, just like Sony used to make for their TV remotes. One of those buttons will turn it into a phone, one will turn it into an iPod, one will turn it into a web browser, another lets you add other buttons that do other things. They don’t all do their things in the same way, and just to keep things interesting the buttons will arbitrarily change location and design and behaviour every so often. The Sony remotes didn’t do that.

The core experience of an iPhone, or Android phone, or Windows phone—the thing you see when you switch it on and wait long enough—is a launcher. It’s the Program Manager from Windows 3.0, packaged up in a shiny interactive box. Both Program Manager and today’s replacement offer the same promise: you’ve got a feeling that you left a button around here somewhere that probably starts you doing the thing you need to do.

So we still need to make good on the promise that you have just one device, by making that one device act like just one device (preferably one which works properly and does what people expect, which is what I spend a lot of time thinking about and working on). How we get there will be the interesting problem for at least another decade. How our tools support that journey will be a fun sideshow, certainly important but hardly the focus. The tools support the goals that support our goals that support the world’s goals.

Posted in futurology, philosophy after a fashion

Intra-curricular activities

I’m apparently fascinated by the idea of defining curricula for learning programming. I’ve written about how we need to be careful what we try to pay forward from the way we learned in the past, and I’ve talked about how we do need to pay it forward so that the second hundred years see faster progress than the first hundred years.

I’m a fan (with reservations, as seen below) of the book series as a form of curriculum. Take something like Kent Beck’s signature series, which covers a decent subset of both technical and social approaches in software development in breadth and in depth. You could probably imagine developers who would benefit from reading some or all of the books in the series. In fact, you may be one.

Coping with people approaching the curriculum from different skill levels and areas of experience is hard. Not just for the book series: it’s hard in general. Universities take the simplifying approach of assuming that everybody wants to learn the same stuff, and teaching that stuff. And to some extent that’s easy for them, because the backgrounds of prospective students are relatively uniform. Even so, my University course organised incoming students into two groups: those who had studied complex numbers at A-level and those who had not. The difference was simply that the group who had not were given a couple of lectures on complex numbers; from the fourth week onwards it was assumed that they too knew the topic.

Now consider selling a programming book to the public. Part of the proposal process with all of the publishers I’ve worked with has been describing the target audience. Is this a book for people who have never programmed before? For people who have programmed a little, but never used this particular tool or technique? People who have programmed a lot but never used this tool? Is this thing similar to what they have used before, or very different? For people who are somewhat familiar with the tool? For experts (and how is that defined)? Is it for readers comfortable with maths? For readers with no maths background?

Every “no” in answer to one of those questions is an opportunity to improve the experience for a subset of the potential audience by tailoring it to that subset. It’s also an opportunity to exclude a subset of the audience by making the content less relevant to them.

[I’ll digress here to explain how I worked that out for my books: whether it’s selfishness or a failure of empathy, I wrote books that I wanted to read but that didn’t exist. Therefore the expected reader’s experience is something similar to mine, back when I filled in the proposal form.]

Clearly no single publication will cover the whole phase space of potential readers and be any good. The interesting question is how much of it is worth covering with multiple publications; whether the idea of series-as-curriculum pulls toward the general as much as scope-limiting each book pulls toward the specific. Should the curriculum take readers on a straight line from novice to master? Should it “fan in” from multiple introductions? Should it “fan out” in multiple directions of interest and enquiry? Would a non-linear curriculum be inclusive, or off-puttingly confusing? Should the questions really be answered by substituting the different question “how many people would buy that”?

Posted in academia, advancement of the self, books, edjercashun, learning