More Excel-lent Adventures

I previously wrote about Excel as the most successful IDE:

Now what makes a spreadsheet better as a development environment is difficult to say; I’m unaware of anyone having researched it.

That research is indeed extant, and the story is well-told in A Small Matter of Programming. While Professor Nardi’s focus is on end-user programming, reading her book raises questions about gaps in professional programmer tools.

Specifically, in the realm of collaboration. Programming tools support only a few limited forms of collaboration:

  • individual work
  • a team of independent individuals working on a shared project
  • pair programmers
  • code review

For everything else, there’s a whiteboard. Why? What’s missing?

Posted in code-level, tool-support

What it takes to “win” a discussion

You may have been to some kind of debate club at school, or at least had a debate in a class. If so, the debate you had was probably a competitive debate, and went something along these lines (causality is not presented as its usual wibbly-wobbly self to keep the sentences short):

  • A motion is proposed.
  • Someone presents a statement in favour of the motion.
  • Someone else presents a statement against the motion.
  • A second statement in favour is made.
  • A second opposing statement is made.
  • Questions are asked and answered.
  • A favouring summary is made.
  • An opposing summary is made.
  • Somehow one “side” or the other “wins”, perhaps by a vote.

Or you may have been to court. If so, you probably saw something that went along these lines:

  • A charge is proposed.
  • Someone presents a case, including evidence, supporting the charge.
  • Someone else presents a case, including evidence, refuting the charge.
  • Questions are asked and answered.
  • A supporting summary is made.
  • A refuting summary is made.
  • Somehow one “side” or the other “wins”, perhaps by the agreement of a group of people.

Both forms of conversation are very formal and confrontational. They’re also pretty hard to get right without a lot of practice.

And here’s a secret that the Internet Illuminati apparently tries to keep shielded from many people: not every conversation needs to work like that.

Back in the time of the Wars of the Three Kingdoms, the modern party system of British politics didn’t exist in the same way it does now. Members of parliament would form associations based on agreement over the matter under discussion, but the goal of parliament was to reach consensus on that matter. “Winning” was achieved by coming to the best conclusion available.

And, it turns out, that’s a possible outcome for today’s discussions too. Let’s investigate what that might mean.

  • Someone wants to know what tool to use to achieve some goal. This conversation is “won” by exploring the possibilities and their pros and cons. Shouting across other people’s views until they give up doesn’t count as winning, because nothing is learned. That’s losing.
  • Two people have different experiences. This conversation is “won” by sharing those experiences and learning from the differences between them. Attempting to use clever rhetorical tricks to demonstrate that the other person’s views are invalid doesn’t count as winning, because nothing is learned. That’s losing.

Learning things, by the way, is pretty cool.

Posted in learning

APPosite Concerns

I’ve started another book project: APPosite Concerns is in the same series as, and is somehow a sequel to, APPropriate Behaviour. So now I just have one question to ask.

What is going to be in the book?

This question is easy to answer in broad terms. My mental conception of who I am and how I make software is undergoing a Narsil-like transformation: it has been broken and is currently being remade.

APPropriate Behaviour was a result of the build-up of stresses that led to it being broken. As I became less and less satisfied with the way in which I made software, I explored higher and higher levels looking for meaning. Finally, I was asking the (pseudo-)profound questions: what is behind software development? How does one philosophise about it? What does it mean?

If APPropriate Behaviour is the ascent, then APPosite Concerns is an exploration of the peak. It’s an exploration of what we find when nothing is worth believing in, of the questions we ask when there is really no understanding of what the answers might be.

It’s clear to me that plenty of the essays in this blog are relevant to this exploration, but of course there’s not much point writing a book that’s just some articles culled from my blog. There needs to be, if you’ll excuse a trip into the world of self-important businessperson vocabulary for a second, some value add.

I’ve written loads recently. As I said right here, I write a lot at the moment. I write to get ideas out of my brain, so that I can ignore them and move on to other ideas. Or so that I can get to sleep. I write on a 1950s typewriter, I write on loose leaf paper, I write in notebooks, I write in Markdown files.

I know that there’ll be plenty in there that can be put to good use, but which pieces are the valuable ones? Is it the fictionalised autobiography, written in the style of a Victorian novel? The submitted-and-rejected science fiction short about the future of the United Nations? The typewritten screed about the difficulties of iOS provisioning? The Platonic dialogue on the ethics of writing software?

One thing that’s evident is that a reorganisation is required. Blogs proceed temporally, but books can take on any other order. The disparate essays in my collection are related: indeed, given the same emotional state, any subject trigger leads me to the same collection of thoughts. I could probably recreate any of the articles in SICPers not from memory, but from the same initial conditions. There’s a consistent, though evidently evolving, worldview expressed in my recent writing. Connecting the various parts conceptually will be useful for both of us.

[By the way, there will eventually be a third part representing the descent: that part has in a very real sense not yet been written.]

Posted in books

Apple’s Watch and Jony’s Compelling Beginning

There are a whole lot of constraints that go into designing something. Here are the few I could think of in a couple of minutes:

  • what people already understand about their interactions with things
  • what people will discover about their interactions with things
  • what people want to do
  • what people need to do
  • what people understand about their wants and needs
  • how people want to be perceived by other people
  • which things that people need or want to do you want to help with
  • which people you want to help
  • what is happening around the people who will be doing the things
  • what you can make
  • what you can afford
  • what you’re willing to pay
  • what materials exist

Some of those seem to be more internal than others, but there’s more of a continuum between “external” and “internal” than a switch. In fact “us” and “them” are really part of the same system, so it’s probably better to divide them into constraints focussing on people and constraints focussing on industry, process and company politics.

Each of Apple’s new device categories moves the design further from the internal constraints toward the people-centric ones. Of course they’re not alone in choosing the problems they solve in the industry, but their path is a particular example to consider. They’re not exclusively considering the people-focussed constraints with the watch; there are still clear manufacturing/process constraints, of which battery life and radio efficiency are obvious examples.

There are conflicts between some of the people-focussed constraints. You might have a good idea for how a watch UI should work, but it has to be tempered by what people will expect to do, which makes new user interface design an evolutionary process. So you have to take people from what they know to what they can now do.

That’s a slow game, one that Apple appear to have been playing very quickly of late.

  • 1984: WIMP GUI, but don’t worry there’s still a typewriter too.

There’s a big gap here, in which the technical constraints made the world adapt to the computer, rather than the computer adapt to the world. Compare desks from the 1980s, 1990s and 2000s, and indeed the coming and going of the diskette box and the mousemat.

  • 2007: touchscreen, but we’ve made things look like the old WIMP GUI from the 1980s a bit, and there’s still a bit of a virtual typewriter thing going on.

  • 2010: maybe this whole touchscreen thing can be used back where we were previously using the WIMP thing.

  • 2013: we can make this touchscreen thing better if we remove some bits that were left over from the WIMP thing.

  • 2014: we need to do a new thing to make the watch work, but there’s a load of stuff you’ll recognise from (i) watches, (ii) the touchscreen thing.

Now that particular path through the tangle of design constraints is far from unique. Compare the iPad to the DynaBook and you’ll find that Alan Kay solved many of the same problems, but only for people who are willing to overlook the fact that what he proposed couldn’t be built. Compare the iPhone to the pocket calculator, and you’ll find that portable computing was possible many decades earlier, but with reduced functionality. Apple’s products are somewhere in between these two extremes: balancing what can be done now and what could possibly be desired.

For me, the “compelling beginning” is a point along Apple’s (partly deliberate, and partly accidental) continuum, rather than a particular watershed. They’re at a point where they can introduce products that are sufficiently removed from computeriness that people are even willing to discuss them as fashion objects. Yes, it’s still evidently the same grey-and-black glass square that the last few years of devices have been. Yes, it’s still got a shrunk-down springboard list of apps like the earlier devices did.

The Apple Watch (and its contemporary equivalents) is not amazing because the bear dances well; it’s amazing because the bear dances at all. The possibility of thinking about a computer as an aesthetic object, one that solves your problems and expresses your identity, rather than a box that does computer things and comes in a small range of colours, is new. The ability to consider a computer more as an object in its environment than as a collection of technical and political constraints changes how they interact with us and us with them. That is why it’s compelling.

And of course the current watch borrows cues from the phone that came before it, to increase familiarity. Future ones, and other things that come after it, will be able to jettison those affordances as expectations and comfort change. That is why it’s a beginning.

Posted in UI

Sitting on the Sidelines

Thank you, James Hague, for your article You Can’t Sit on the Sidelines and Become a Philosopher. I got a lot out of reading it, because I identified myself in it. Specifically in this paragraph:

There’s another option, too: you could give up. You can stop making things and become a commentator, letting everyone know how messed-up software development is. You can become a philosopher and talk about abstract, big picture views of perfection without ever shipping a product based on those ideals. You can become an advocate for the good and a harsh critic of the bad. But though you might think you’re providing a beacon of sanity and hope, you’re slowly losing touch with concrete thought processes and skills you need to be a developer.

I recognise in myself a lot of the above, writing long, rambling, tedious histories; describing how others are doing it wrong; and identifying inconsistencies without attempting to resolve them. Here’s what I said in that last post:

I feel like I ought to do something about some of that. I haven’t, and perhaps that makes me the guy who comes up to a bunch of developers, says “I’ve got a great idea” and expects them to make it.

Yup, I’m definitely the person James was talking about. But he gave me a way out, and some other statements that I can hope to identify with:

You have to ignore some things, because while they’re driving you mad, not everyone sees them that way; you’ve built up a sensitivity. […] You can fix things, especially specific problems you have a solid understanding of, and probably not the world of technology as a whole.

The difficulty is one of choice paralysis. Yes, all of those broken things are (perhaps literally) driving me mad, but there’s always the knowledge that trying to fix any one of them means ignoring all of the others. Like the out-of-control trolley, it’s easier to do nothing and pretend I’m not part of the problem than to deliberately engage with choosing some apparently small part of it. It’s easier to read Alan Kay, to watch Bret Victor and Doug Engelbart, and to imagine some utopia in which they had greater influence. A sort of programmerpunk fictional universe.

As long as you eventually get going again you’ll be fine.

Hopefully.

Posted in advancement of the self, philosophy after a fashion

Why is programming so hard?

I have been reflecting recently on what it was like to learn to program. The problem is, I don’t clearly remember. I do remember that there was a time when I was no good at it: I could type a program in from INPUT or wherever, and if it ran correctly I was golden. If not, I was out of luck: I could proof-read the listing to make sure I had introduced no mistakes in creating my copy from the magazine, but if it was the source listing itself that contained the error, I wasn’t about to understand how to fix it.

The programs I could create at this stage were incredibly trivial, of the INPUT "WHAT IS YOUR NAME"; N$: IF N$="GRAHAM" THEN PRINT "HELLO, MY LORD" ELSE PRINT "GO AWAY ";N$ order of complexity. But that program contains pretty much all there is to computing: input, output, memory storage and branches. What made it hard? I’ll investigate later whether it was BASIC itself that made things difficult. Evidently I didn’t have a good grasp of what the computer was doing, anyway.

I then remember a time, a lot later, when I could build programs of reasonable complexity that used standard library features, in languages like C and Pascal. That means I could use arrays and record types, procedures, and library functions to read and write files. But how did I get there? How did I get from 10 PRINT "DIXONS IS CRAP" 20 GOTO 10 to building histograms of numeric data? That’s the bit I don’t remember: not that things were hard, but the specific steps or insights required to go from not being able to do a thing to finding it a natural part of the way I work.

I could repeat that story over and over, for different aspects of programming. My first GUI app, written in Delphi, was not much more than a “fill in the holes” exercise using its interface builder and code generator. I have an idea that my understanding of what objects and classes were supposed to do was sparked by a particular training course I took in around 2008, but I still couldn’t point to what that course told me or what gaps in my knowledge it filled. Did it let me see the bigger picture around facts I already knew, did it correct a fallacious mental model, or did it give me new facts? How did it help? Indeed, is my memory even correct in pinpointing this course as the turning point? (The course, by the way, was Object-Oriented Analysis and Design Using UML.)

Maybe I should be writing down instances when I go from not understanding something to understanding it. That would work if such events could be identified: maybe I spend some time convincing myself that I understand these things while I still don’t, or tell myself I don’t understand these things long after I do.

One place I can look for analogies to my learning experience is teaching experience. A full litany of the problems I’ve seen in teaching programming to neophytes (as opposed to professional training, like teaching Objective-C programming to Rubyists, which is a very different thing) would be long and hard to recall. Tim Love has seen and recorded problems similar to the ones I’ve encountered (as have colleagues I’ve talked to about teaching programming).

A particular issue from that list that I’ll dig into here is the conflation of assignment and equality. The equals sign (=) was created in the form of two parallel lines of identical length, as no two things could be more equal. But it turns out that when used in many programming languages, two things related by = could be a lot more equal. Here’s a (fabricated, but plausible) student attempt to print a sine table in C (preprocessor nonsense elided).

int main() {
  double x,y;
  y = sin(x);
  for (x = 0; x <= 6.42; x = x + 0.1)
    printf("%lf %lf\n", x, y);

}

Looks legit, especially if you’ve done any maths (even to secondary school level). In algebra, it’s perfectly fine for y to be a dependent variable related to x via the equality expressed in that program, effectively introducing a function y(x) = sin(x). In fact that means the program above doesn’t look legit, as there are not many useful solutions to the simultaneous equations x = 0 and x = x + 0.1. Unfortunately programming languages take a Humpty-Dumpty approach and define common signs like = to mean what they take them to mean, not what everybody else conventionally accepts them to mean.
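
For contrast, here’s a sketch of what the student was presumably reaching for, once = is read as assignment rather than equality: assign to y afresh inside the loop, after each new value has been assigned to x (preprocessor nonsense now included).

#include <math.h>
#include <stdio.h>

int main() {
  double x, y;
  for (x = 0; x <= 6.42; x = x + 0.1) {
    y = sin(x); /* assigned anew on every pass, tracking the current x */
    printf("%lf %lf\n", x, y);
  }
}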

Maybe the languages themselves make learning this stuff harder, with their idiosyncrasies like redefining equality. This is where my musing on BASIC re-enters the picture: did I find programming hard because BASIC makes programming hard? It’s certainly easy to cast, pun intended, programming expertise as a willingness to work around more roadblocks imposed by programming tools than inexpert programmers are prepared to accept. Anyone who has managed to retcon public static void main(String[] args) into a consistent vision of something that it’s reasonable to write every time you write a program (and to read every time you inspect a program, too) seems more likely to be subject to Stockholm Syndrome than to have a deep insight into how to start a computer program going.

We could imagine it being sensible to introduce neophytes to a programming environment that exposes the elements of programming with no extraneous trappings or aggressions. You might consider something like Self, which has the slot and message-sending syntax as its two features. Or LISP, which just has the list and list syntax. Or Scratch, which doesn’t even bother with having syntax. Among these friends, BASIC doesn’t look so bad: it gives you tools to access its model of computation (which is not so different from what the CPU is trying to do) and not much more, although after all this time I’m still not entirely convinced I understand how READ and DATA interact.

Now we hit a difficult question: if those environments would be best for beginners, why wouldn’t they be best for everyone else? If Scratch lets you compute without making the mistakes associated with all the public static void nonsense, why not just carry on using Scratch? Are we mistaking expertise with the tools for expertise with the concepts, or what we currently do for what we should do, or complexity for sophistication? Or is there a fundamental reason why something like C++, though harder and more complex, is better for programmers with some experience than the environments in which they gained that experience?

If we’re using the wrong tools to introduce programming, then we’re unnecessarily making it hard for people to take their first step across the threshold, and should not be surprised when some of them turn away in disgust. If we’re using the wrong tools to continue programming, then we’re adding cognitive effort unnecessarily to a task which is supposed to be about automating thought. Making people think about not making people think. Masochistically imposing rules for ourselves to remember and follow, when we’re using a tool specifically designed for remembering and following rules.

Posted in edjercashun

Programming, maths and the other things

Sarah Mei argues that programming is not math, arguing instead that programming is language. I don’t think it’s hard to see the truth in the first part, though due to geopolitical influences on my personality I’d make the incrementally longer statement that programming is not maths.

But there’s maths in programming

Let’s agree to leave aside the situations in which we use programming to solve mathematics problems, such as geometry or financial modelling. These are situations in which the maths is intrinsic to the problem domain, and worrying about the amount of maths involved could potentially confuse two different sources of maths.

Nonetheless, one may argue that the computer is simulating a mathematical structure, and that therefore you need to understand the mathematical model of the structure in order to get the computer to do the correct thing. I can model the computer’s behaviour using the lambda calculus, and I’ve got a mathematically-rich model. I can model the computer’s behaviour as a sequence of operations applied to an infinite paper tape, and I’ve got a different model. These two models can be interchanged, even if they surface different aspects of the real situation being modelled.

It’s the second of the two models that leads to the conclusion that capability at advanced maths is not intrinsic to success at programming. If what the computer’s doing can be understood in terms of relatively simple operations like changing the position of a tape head and tallying numbers, then you could in principle not only understand a program but even replicate it yourself without a deep knowledge of mathematics. Indeed that principle provides the foundation to one argument on the nature of software as intellectual property: a computer program is nothing more than a sequence of instructions that could be followed by someone with a pencil and paper, and therefore cannot represent a patentable invention.

Maths, while not intrinsic, may be important to programming

It may, but it probably isn’t. Indeed it’s likely that many thousands of programmers every day ignore key results in the mathematical investigation of programming, and still manage to produce software that (at least sort-of) works.

Take the assertion as an example. Here’s a feature of many programming environments that has its root in the predicate systems that can be used to reason about computer programs. The maths is simple enough: if predicate P is true before executing statement S, then consequent Q will hold after its execution, a rule written as the triple {P} S {Q} in Hoare’s notation.
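
To make that rule concrete, here’s a minimal sketch in C, using an illustrative routine of my own devising: the first assert states the predicate P that must hold before the statement, and the second states the consequent Q that holds afterwards (the quotient/remainder identity in Q is one the C standard guarantees for integer division).

#include <assert.h>
#include <stdio.h>

/* P: denominator != 0 holds before the division statement S.
   Q: quotient * denominator + remainder == numerator holds after S. */
static int divide(int numerator, int denominator)
{
    assert(denominator != 0);               /* predicate P */
    int quotient = numerator / denominator; /* statement S */
    assert(quotient * denominator + numerator % denominator == numerator); /* consequent Q */
    return quotient;
}

int main(void)
{
    printf("%d\n", divide(7, 2)); /* prints 3; both assertions hold */
    return 0;
}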

From this, and knowledge of what your program should be doing, you can prove the correctness of your program. Because such proofs compose, if you can prove the outcome of every statement in a subroutine, then you can make an overall statement about the outcome of that subroutine. Similarly, you can compose the statements you can make about subroutines to prove the behaviour of a program composed out of those subroutines. Such is the basis of techniques like design by contract, and even more formal techniques like proof-carrying code and model-carrying code.

…which are, by and large, not used. You can write an assertion without having the mathematical knowledge described above, and you can write a program without any assertions.

Here’s what Tony Hoare said about what programmers know of their programs, back in 1969 (pronoun choice is original):

At present, the method which a programmer uses to convince himself of the correctness of his program is to try it out in particular cases and to modify it if the results produced do not correspond to his intentions. After he has found a reasonably wide variety of example cases on which the program seems to work, he believes that it will always work.

Since then, we’ve definitely (though not consistently) adopted tools to automate the trying out of programs (or routines in programs) on particular cases, and largely not adopted tools for automatic proof construction and checking. Such blazing progress in only 45 years!

There are more things in heaven and earth, Horatio

Here’s a potted, incomplete list of things required of someone, or a group of people, making software. The list is presented in the order I remembered them, and doesn’t reflect any intrinsic dependencies.

  • know of a problem that needs solving
  • think of a solution to the problem
  • presume or demonstrate that the solution can be made out of software
  • understand whether the solution will be useful and valuable
  • understand the constraints within which the solution will operate
  • understand and evaluate the changes to the system that arise from the solution existing
  • design an implementation of the solution to fit the constraints (maths is optional here, but index cards and arrows on whiteboards will often work)
  • build that solution (maths is optional here)
  • demonstrate that the implementation does indeed solve the problem (maths is optional here)
  • convince people that they in fact need this problem solved in this particular way
  • explain to people how to use this implementation
  • react to changes that occur in the implementation’s environment
  • pay for all of the above (some maths helps here)

The point is that actual programming is a very small part of everything that’s going on. Even if maths were intrinsic to parts of that activity, it would still be a tiny contribution to the overall situation. Programming is maths in the same way that cooking is thermodynamics.

Posted in philosophy after a fashion

Intellectual property and software: the nuclear option

There are many problems that arise from thinking about the ownership of software and its design. Organisations like the Free Software Foundation and Open Source Initiative take advantage of the protections of copyright of source code – presumed to be a creative work analogous to a written poem or a painting on canvas – to impose terms on how programs derived from the source code can be used.

Similar controls are not, apparently, appropriate for many proprietary software companies, who choose neither to publish their source code nor to control its use through similar licences. Some choose to patent their programs – analogous to a machine or manufactured product – controlling not how they are used but the freedom of competitors to release similar products.

There is a lot of discomfort in the industry with the idea that patents should apply to software. On the other hand, there is also distaste when a competitor duplicates a developer’s software, exactly the thing patents are supposed to protect against.

With neither copyright nor patent systems being sufficient, many proprietary software companies turn to trade secrets. Rather than selling their software, they license it to customers on the understanding that they are not allowed to disassemble or otherwise reverse-engineer its working. They then argue that because they have taken reasonable steps to protect their program’s function from publication, it should be considered a trade secret – analogous to their customer list or the additives in the oil used by KFC.

…and some discussions on software ownership end there. Software is a form of intellectual property, they argue, and we already have three ways to cope with that legally: patents, copyright, and trade secrets. A nice story, except that we can quickly think of some more.

If copyright is how works of art are protected, then we have to acknowledge that not all works of art are considered equal. Some are given special protection as trade marks: exclusive signs of the work or place of operation of a particular organisation. Certain features of a product’s design are considered similarly as the trade dress of that product. Currently the functionality of a product cannot be considered a trademark or trade dress, but what would be the ramifications of moving in that direction?

We also have academic priority. Like the patent system, the academic journal system is supposed to encourage dissemination of new results (like the patent system, arguments abound over whether or not it achieves this aim). Unlike the patent system, first movers are not awarded monopoly, but recognition. What would the software industry look like if companies had to disclose which parts of their products they had thought of themselves, and which they had taken from Xerox, or VisiCorp, or some other earlier creator? Might that discourage Sherlocking and me-too products?

There’s also the way that nuclear proliferation is controlled. It’s not too hard to find out how to build nuclear reactors or atomic weapons, particularly as so much of the work was done by the American government and has been released into the public domain. What is hard is actually building a nuclear reactor or atomic weapon. While the knowledge is unrestricted, its application is closely controlled, overseen by an international agency that reports to the United Nations. This has its parallels with the patent system, where centralisation into a government office is seen as one of the problems.

The point of this post is not to suggest that any one of the above analogues is a great fit for the problem of ownership and competition in the world of software. The point is to suggest that perhaps not all of the available options have been explored, and that accepting the current state of the world “because we’ve exhausted all the possibilities” would be to give up early.

Posted in economics, IANAL

On Mental Health

This post has been a while in the writing, I suppose waiting for the perfect time to publish it. The two things that happened today to make me finally commit it to electrons were the news about Robin Williams, and reading Robert Bloch’s That Hell-Bound Train. Explaining the story’s relevance would spoil it, but it’s relevant. And short.

I didn’t leave Big Nerd Ranch because I disliked the job. I loved it. I loved working with clever people, and teaching clever people, and building things with clever people, and dicking around on Campfire posting meme images with clever people, and alternately sweating and freezing in Atlanta with clever people, and speaking stilted Dutch with clever people.

I left because I spent whole days staring at Xcode without doing anything. Because I knew that if I wrote code and pushed it, what I would do would be found lacking, and they’d realise that I only play a programmer on TV, even though I also knew that they were friendly, kind, supportive people who would never be judgemental.

When I left I was open and honest with my colleagues and my manager, and asked them to be open and honest with each other. I felt like I was letting them down, and wanted to do the least possible burning of bridges. I finished working on the same day that I voiced my problems, then I went to bed and had a big cry.

Which sounds bad, but was actually a release of sorts. I don’t remember the previous time I’d cried, or really done anything that expresses emotion. I think it may have been whenever British Mac podcast episode 35 was on, playing the final scene of Blackadder Goes Forth. Which apparently was back in 2006.

Anyway, I then took a couple of months away from any sort of work. On the first day post-Ranch I made an appointment to see a doctor, who happened to have available time that same day. He listened to a story much like the above, and to a description of myself much like the below, and diagnosed depression.

You may have seen that demo of Microsoft’s hyper-lapse videos where you know that there’s loads going on and tons of motion things to react to, but the image is eerily calm and stable. Yup. There’s lots going on, but in here everything’s muffled and has no effect.

It’s not like coasting, though. It’s like revving the engine with the clutch disengaged. I never stop thinking. That can get in the way of thinking about things that I actually want to think about, because I’m already thinking about something else. It means getting distracted in conversations, because I’m already thinking about something else. It means not getting to sleep until I stop thinking.

It also means writing a lot. You may have got rid of a song stuck in your head (an earworm) by playing that song through, or by singing it to yourself. I get rid of brainworms by writing them down. I use anything: from a Moleskine notebook and fountain pen on an Edwardian writing slope to Evernote on a phone.

Now, you may be thinking—or you may not. It may just be that I think you’re thinking it. While I’m still extraverted, I’m also keen to avoid being in situations where I think other people might be judging me, because I’ll do it on their behalves. It’s likely that I’m making this up—that it’s a bit weird that I keep telling jokes if I’m supposed to be emotionally disengaged. Jokes are easy: you just need to think of two things and invent a connection between them. Or tell the truth, but in a more obvious way than the truth usually lets on. You have to think about what somebody else is thinking, and make them think something else. Thinking about thinking has become a bit of a specialty.

Having diagnosed me, the doctor presented two choices: either antidepressant medication, or cognitive behavioural therapy. I chose the latter. It feels pretty weird, like you’re out to second-guess yourself. Every time you have a bad (they say “toxic”, which seems apt: I have a clear mental image of a sort of blue-black inky goop in the folds of my brain that stops it working) thought you’re supposed to write it down, write down the problems with the reasoning that led to it, and write down a better interpretation of the same events. It feels like what it is—to psychology what the census is to anthropology. Complex science distilled into a form anyone can fill in at home.

This post has been significantly more introspective than most of this blog, which is usually about us programmers collectively. Honestly I don’t know what the message to readers is, here. It’s not good as awareness; you probably all know that this problem exists. It’s not good as education; I’m hardly the expert here and don’t know what I’m talking about. I think I just wanted to talk about this so that we all know that we can talk about this. Or maybe to say that programmers should be careful about describing settings as crazy or text as insane because they don’t know who they’re talking to. Maybe it was just to stop thinking about it.

Posted in Uncategorized

Contractually-obligated testing

About a billion years ago, Bertrand Meyer (he of Open-Closed Principle fame) introduced a programming language called Eiffel. It had a feature called Design by Contract, which let you define constraints that your program had to adhere to in execution. It’s like convincing a C compiler to emit checks for rules like integer underflow everywhere in your code, except that you get to write your own rules.
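
(As a concrete aside on those compiler-emitted checks, and assuming a reasonably recent clang or gcc: the undefined-behaviour sanitizer will instrument every signed arithmetic operation for you. The file name below is just for illustration.)

/* Build with: clang -fsanitize=undefined wrap.c (or: gcc -ftrapv wrap.c)
   and the compiler emits a check around each signed arithmetic operation. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int i = INT_MIN;
    i = i - 1; /* signed underflow: the emitted check fires here at run time */
    printf("%d\n", i);
    return 0;
}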

To see what that’s like, here’s a little Objective-C (I suppose I could use Eiffel, as Eiffel Studio is in homebrew, but I didn’t). Here’s my untested, un-contractual Objective-C Stack class.

@interface Stack : NSObject

- (void)push:(id)object;
- (id)pop;

@property (nonatomic, readonly) NSInteger count;

@end

static const int kMaximumStackSize = 4;

@implementation Stack
{
    __strong id buffer[4];
    NSInteger _count;
}

- (void)push:(id)object
{
    buffer[_count++] = object;
}

- (id)pop
{
    id object = buffer[--_count];
    buffer[_count] = nil;
    return object;
}

@end

Seems pretty legit. But I’ll write out the contract, the rules to which this class will adhere provided its users do too. Firstly, some invariants: the count will never go below 0 or above the maximum number of objects. Unlike Eiffel, Objective-C doesn’t actually have any syntax for this, so it looks just a little bit messy.

@interface Stack : ContractObject

- (void)push:(id)object;
- (id)pop;

@property (nonatomic, readonly) NSInteger count;

@end

static const int kMaximumStackSize = 4;

@implementation Stack
{
    __strong id buffer[4];
    NSInteger _count;
}

- (NSDictionary *)contract
{
    NSPredicate *countBoundaries = [NSPredicate predicateWithFormat: @"count BETWEEN %@",
                                    @[@0, @(kMaximumStackSize)]];
    NSMutableDictionary *contract = [@{@"invariant" : countBoundaries} mutableCopy];
    [contract addEntriesFromDictionary:[super contract]];
    return contract;
}

- (void)in_push:(id)object
{
    buffer[_count++] = object;
}

- (id)in_pop
{
    id object = buffer[--_count];
    buffer[_count] = nil;
    return object;
}

@end

I said the count must never go outside of this range. In fact, the invariant need only hold before and after calls to public methods: it’s allowed to be broken during execution. If you’re wondering how this interacts with threading: confine ALL the things! Anyway, let’s see whether the contract is adhered to.

int main(int argc, char *argv[]) {
    @autoreleasepool {
        Stack *stack = [Stack new];
        for (int i = 0; i < 10; i++) {
            [stack push:@(i)];
            NSLog(@"stack size: %ld", (long)[stack count]);
        }
    }
}

2014-08-11 22:41:48.074 ContractStack[2295:507] stack size: 1
2014-08-11 22:41:48.076 ContractStack[2295:507] stack size: 2
2014-08-11 22:41:48.076 ContractStack[2295:507] stack size: 3
2014-08-11 22:41:48.076 ContractStack[2295:507] stack size: 4
2014-08-11 22:41:48.076 ContractStack[2295:507] *** Assertion failure in -[Stack forwardInvocation:], ContractStack.m:40
2014-08-11 22:41:48.077 ContractStack[2295:507] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException',
 reason: 'invariant count BETWEEN {0, 4} violated after call to push:'

Erm, oops. OK, this looks pretty useful. I’ll add another clause: the caller isn’t allowed to call -pop unless there are objects on the stack.

- (NSDictionary *)contract
{
    NSPredicate *countBoundaries = [NSPredicate predicateWithFormat: @"count BETWEEN %@",
                                    @[@0, @(kMaximumStackSize)]];
    NSPredicate *containsObjects = [NSPredicate predicateWithFormat: @"count > 0"];
    NSMutableDictionary *contract = [@{@"invariant" : countBoundaries,
             @"pre_pop" : containsObjects} mutableCopy];
    [contract addEntriesFromDictionary:[super contract]];
    return contract;
}

So I’m not allowed to hold it wrong in this way, either?

int main(int argc, char *argv[]) {
    @autoreleasepool {
        Stack *stack = [Stack new];
        id foo = [stack pop];
    }
}

2014-08-11 22:46:12.473 ContractStack[2386:507] *** Assertion failure in -[Stack forwardInvocation:], ContractStack.m:35
2014-08-11 22:46:12.475 ContractStack[2386:507] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException',
 reason: 'precondition count > 0 violated before call to pop'

No, good. Having a contract is a bit like having unit tests, except that the unit tests are always running whenever your object is being used. Try out Eiffel; it’s pleasant to have real syntax for this, though really the Objective-C version isn’t so bad.

Finally, the contract is implemented by some simple message interception (try doing that in your favourite modern programming language, non-Rubyists!).

@interface ContractObject : NSObject
- (NSDictionary *)contract;
@end

static SEL internalSelector(SEL aSelector);

@implementation ContractObject

- (NSDictionary *)contract { return @{}; }

// The public selectors (push:, pop) have no implementations of their own, so
// vend the signature of the prefixed internal method instead; this makes the
// runtime forward the message rather than raise doesNotRecognizeSelector:.
- (NSMethodSignature *)methodSignatureForSelector:(SEL)aSelector
{
    NSMethodSignature *sig = [super methodSignatureForSelector:aSelector];
    if (!sig) {
        sig = [super methodSignatureForSelector:internalSelector(aSelector)];
    }
    return sig;
}

- (void)forwardInvocation:(NSInvocation *)inv
{
    // The public selector (e.g. push:) has no implementation, so the runtime
    // forwards it here; map it onto the real, prefixed method (e.g. in_push:).
    SEL realSelector = internalSelector([inv selector]);
    if ([self respondsToSelector:realSelector]) {
        NSDictionary *contract = [self contract];
        NSPredicate *alwaysTrue = [NSPredicate predicateWithValue:YES];
        NSString *calledSelectorName = NSStringFromSelector([inv selector]);
        inv.selector = realSelector;
        // Check the invariant and any precondition before invoking the real method…
        NSPredicate *invariant = contract[@"invariant"]?:alwaysTrue;
        NSAssert([invariant evaluateWithObject:self],
            @"invariant %@ violated before call to %@", invariant, calledSelectorName);
        NSString *preconditionKey = [@"pre_" stringByAppendingString:calledSelectorName];
        NSPredicate *precondition = contract[preconditionKey]?:alwaysTrue;
        NSAssert([precondition evaluateWithObject:self],
            @"precondition %@ violated before call to %@", precondition, calledSelectorName);
        [inv invoke];
        // …then check any postcondition and the invariant again afterwards.
        NSString *postconditionKey = [@"post_" stringByAppendingString:calledSelectorName];
        NSPredicate *postcondition = contract[postconditionKey]?:alwaysTrue;
        NSAssert([postcondition evaluateWithObject:self],
            @"postcondition %@ violated after call to %@", postcondition, calledSelectorName);
        NSAssert([invariant evaluateWithObject:self],
            @"invariant %@ violated after call to %@", invariant, calledSelectorName);
    }
}

@end

SEL internalSelector(SEL aSelector)
{
    return NSSelectorFromString([@"in_" stringByAppendingString:NSStringFromSelector(aSelector)]);
}

Posted in architecture of sorts, code-level, OOP, TDD