The trouble with layers

In describing Inside-Out Apps I expressed my distrust of the “everything is MVC” school of design.

[…]when you get overly attached to MVC, then you look at every class you create and ask the question “is this a model, a view, or a controller?”. Because this question makes no sense, the answer doesn’t either: anything that isn’t evidently data or evidently graphics gets put into the amorphous “controller” collection, which eventually sucks your entire codebase into its innards like a black hole collapsing under its own weight.

Let me extract a general rule: layers are bad. It’s not just that I can’t distinguish your app’s architecture diagram from a sandwich’s architecture diagram, or a trifle’s. The problem is in the boxes.

As Possibly Alan, IDK and I discussed recently, the problem with box-and-arrow diagrams is that the boxes are really just collections of arrows zoomed out. Combine that with something my colleague Uri told me, that “a package in the dependency diagram is just the smallest thing that contains a cycle”, and you end up with your layer cake diagram looking like this:

[Figure: “Layer Cake” — a three-layer diagram]

Three big wastelands of “anything goes”, with some vague idea that the one at the top shouldn’t talk to the one at the bottom. Or maybe it’s that any of them can talk down as much as they like but never up. Either way it’s not clear how any of the dependencies in this system are controlled. Is it “anything goes” within a layer? If two related classes belong in different layers, are they supposed to talk to each other or not? Can data types from one layer be passed to another without adaptation? Just what is the layer boundary?
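
To make the boundary question concrete, here is a minimal sketch in C (every name in it is invented for illustration): a record type belonging to the “data layer” is handed straight to a “UI layer” function. Whether render_customer should be allowed to see customer_row at all, or only some adapted type, is exactly the question the layer cake never answers.

#include <stdio.h>

/* "Data layer": a record as it comes out of persistence. */
struct customer_row {
  int id;
  char name[64];
  char postcode[16];
};

/* "UI layer": renders a customer on screen. Should this function be allowed
   to see customer_row at all, or only some adapted view type? The layer
   diagram doesn't say. */
void render_customer(const struct customer_row *row) {
  printf("Customer %d: %s (%s)\n", row->id, row->name, row->postcode);
}

int main() {
  struct customer_row alice = { 1, "Alice", "OX1 1AA" };
  render_customer(&alice); /* a data type crosses the layer boundary unadapted */
  return 0;
}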

Posted in architecture of sorts | Comments Off on The trouble with layers

Yes, you may delete tests

A frequently-presented objection to the concept of writing automated tests is that it ossifies the implementation of the system under test. “If I’ve got all the tests you’re proposing,” I hear, “then I won’t be able to make any changes without breaking the tests.” There are two aspects to this problem.

Tests should make assertions about the externally-observable behaviour of the system under test. If you have tests that assert that a particular private method does its thing in a particular way, then yes, you are baking in the assumption that this private method works in a particular way. And yes, that makes it harder to change things. But it isn’t necessary.
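
To make that concrete, here is a minimal sketch (in C, with invented names) of the two kinds of test: the first bakes in how the component currently works, the second cares only that it works. Only the second kind survives swapping the algorithm for another one.

#include <assert.h>
#include <stddef.h>

/* A hypothetical component: sorts ints and, purely for the sake of this
   example, counts how many comparisons it performed. */
static long comparisons;

static void sort_ints(int *v, size_t n) {
  comparisons = 0;
  for (size_t i = 1; i < n; i++) {          /* insertion sort */
    int key = v[i];
    size_t j = i;
    while (j > 0 && (comparisons++, v[j - 1] > key)) {
      v[j] = v[j - 1];
      j--;
    }
    v[j] = key;
  }
}

/* Implementation-coupled test: asserts the exact comparison count, which is
   only true of this particular algorithm on this particular input. Replace
   insertion sort with anything else and this fails, even though callers
   notice no difference. */
static void test_makes_three_comparisons(void) {
  int v[] = { 3, 1, 2 };
  sort_ints(v, 3);
  assert(comparisons == 3);
}

/* Behaviour test: asserts only the externally-observable result. */
static void test_sorts_ascending(void) {
  int v[] = { 3, 1, 2 };
  sort_ints(v, 3);
  assert(v[0] == 1 && v[1] == 2 && v[2] == 3);
}

int main() {
  test_makes_three_comparisons();
  test_sorts_ascending();
  return 0;
}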

Any test that is no longer useful is no longer useful: delete it. Those tests that helped you to build the thing were useful while you were building the thing: if they’re not helping you now then just get rid of them.

When you’re building a component, you’re probably thinking quite deeply about the algorithm it will implement, and how it will communicate with its collaborators. Thinking about those will mean writing tests about them, and while those tests are in the suite they’ll require that the algorithm be implemented in a particular way, and that the module talk to its collaborators in a particular way. Do you no longer care about those details, as long as it does the right thing? Delete the tests, and just keep the ones that make sure it does the right thing.

Posted in TDD | Comments Off on Yes, you may delete tests

The laser physics of software

I’ve worked in a few different places where there have been high-powered lasers, the sort that would make short work of slicing through Sean Connery in a Bond movie. With high-powered lasers comes mandatory laser safety training. At least, it does in the UK.

The first time you receive laser safety training it comes as a bit of a surprise. Then, the second (and later) times, you get to watch the same surprise in everybody else who’s watching it for the first time.

The surprise comes because you kind of expect to hear about cooked retinas, skin burns and all sorts of unpleasant nastiness. Which of course you do, but those aren’t the likely forms of laser accident. The surprising bit is the list of “popular” (not popular) ways to get injured by a laser:

  1. You electrocute yourself on the power supply.
  2. You ignite the liquid oxygen that’s built up around the liquid nitrogen cooling system, probably using the same power supply.
  3. You drop the laser (or indeed the power supply) on your foot.

So to programming. While we’re all donning our protective goggles of shiny new type systems and lack of mutable state, the “popular” (not popular) problems (we don’t know what we’re doing, we don’t know what we should be doing, we’re bad at treating people well) are not being addressed.

Posted in whatevs | Leave a comment

Full-stack

That moment where you’re looking back through your notes to see that you’ve:

  • modelled charge carrier behaviour in semiconductors
  • built a processor from discrete logic components
  • patched kernels
  • patched operating system tools
  • written filesystems
  • written device drivers
  • contributed to a foundation library
  • fixed compiler bugs
  • performance-tuned databases
  • built native apps
  • built web apps
  • tested the above
  • taught other developers
  • mentored other developers
  • sold the above skills

and a thought occurs: someone who knows both PHP and JavaScript is called a full-stack developer.

Someone who tells you that programmers are rational actors who are above marketing is lying. Everything in the field is marketing, including the idea of rationality. Performing such marketing becomes easier when the recipients don’t think it exists, or don’t think they’re the sort of people who would fall for it.

I’m reminded of the audience reaction to this serial entrepreneur describing his recently-acquired startup’s technology (skip to 10:24 if the timestamped URL doesn’t do that automatically for you). He presents these four points:

  • DOS: 1 Analogy Unit
  • Mac + HI Toolbox: 5 Analogy Units
  • DOS + Windows: 7 Analogy Units
  • Mach OS + OpenStep: 20 Analogy Units

Listen to the applause for the unjustified number 20.

Posted in whatevs | Comments Off on Full-stack

More Excel-lent Adventures

I previously wrote about Excel as the most successful IDE:

Now what makes a spreadsheet better as a development environment is difficult to say; I’m unaware of anyone having researched it.

That research is indeed extant, and the story is well-told in A Small Matter of Programming. While Professor Nardi’s focus is on end-user programming, reading her book raises questions about gaps in professional programmer tools.

Specifically, in the realm of collaboration. Programming tools support a few limited forms of collaboration:

  • individual work
  • a team of independent individuals working on a shared project
  • pair programmers
  • code review

For everything else, there’s a whiteboard. Why? What’s missing?

Posted in code-level, tool-support | Leave a comment

What it takes to “win” a discussion

You may have been to some kind of debate club at school, or at least had a debate in a class. If so, the debate you had was probably a competitive debate, and went something along these lines (causality is not presented as its usual wibbly-wobbly self to keep the sentences short):

  • A motion is proposed.
  • Someone presents a statement in favour of the motion.
  • Someone else presents a statement against the motion.
  • A second statement in favour is made.
  • A second opposing statement is made.
  • Questions are asked and answered.
  • A favouring summary is made.
  • An opposing summary is made.
  • Somehow one “side” or the other “wins”, perhaps by a vote.

Or you may have been to court. If so, you probably saw something that went along these lines:

  • A charge is proposed.
  • Someone presents a case, including evidence, supporting the charge.
  • Someone else presents a case, including evidence, refuting the charge.
  • Questions are asked and answered.
  • A supporting summary is made.
  • A refuting summary is made.
  • Somehow one “side” or the other “wins”, perhaps by the agreement of a group of people.

Both forms of conversation are very formal and confrontational. They’re also pretty hard to get right without a lot of practice.

And here’s a secret that the Internet Illuminati apparently tries to keep shielded from many people: not every conversation needs to work like that.

Back in the time of the Wars of the Three Kingdoms, the modern party system of British politics didn’t exist in the same way it does now. Members of parliament would form associations based on agreement about the matter under discussion, but the goal of parliament was to reach consensus on that matter. “Winning” was achieved by coming to the best conclusion available.

And, it turns out, that’s a possible outcome for today’s discussions too. Let’s investigate what that might mean.

  • Someone wants to know what tool to use to achieve some goal. This conversation is “won” by exploring the possibilities and their pros and cons. Shouting across other people’s views until they give up doesn’t count as winning, because nothing is learned. That’s losing.
  • Two people have different experiences. That conversation is “won” by hearing both accounts and working out why they differ. Attempting to use clever rhetorical tricks to demonstrate that the other person’s views are invalid doesn’t count as winning, because nothing is learned. That’s losing.

Learning things, by the way, is pretty cool.

Posted in learning | Leave a comment

APPosite Concerns

I’ve started another book project: APPosite Concerns is in the same series as, and is somehow a sequel to, APPropriate Behaviour. So now I just have one question to ask.

What is going to be in the book?

This question is easy to answer in broad terms. My mental conception of who I am and how I make software is undergoing a Narsil-like transformation: it has been broken and is currently being remade.

APPropriate Behaviour was a result of the build-up of stresses that led to it being broken. As I became less and less satisfied with the way in which I made software, I explored higher and higher levels looking for meaning. Finally, I was asking the (pseudo-)profound questions: what is behind software development? How does one philosophise about it? What does it mean?

If APPropriate Behaviour is the ascent, then APPosite Concerns is an exploration of the peak. It’s an exploration of what we find when nothing is worth believing in, of the questions we ask when there is really no understanding of what the answers might be.

It’s clear to me that plenty of the essays in this blog are relevant to this exploration, but of course there’s not much point writing a book that’s just some articles culled from my blog. There needs to be, if you’ll excuse a trip into the world of self-important businessperson vocabulary for a second, some value add.

I’ve written loads recently. As I said right here, I write a lot at the moment. I write to get ideas out of my brain, so that I can ignore them and move on to other ideas. Or so that I can get to sleep. I write on a 1950s typewriter, I write on loose leaf paper, I write in notebooks, I write in Markdown files.

I know that there’ll be plenty in there that can be put to good use, but which pieces are the valuable ones? Is it the fictionalised autobiography, written in the style of a Victorian novel? The submitted-and-rejected science fiction short about the future of the United Nations? The typewritten screed about the difficulties of iOS provisioning? The Platonic dialogue on the ethics of writing software?

One thing that’s evident is that a reorganisation is required. Blogs proceed temporally, but books can take on any other order. The disparate essays from my collection are related: indeed given the same emotional state, any given subject trigger leads me to the same collection of thoughts. I could probably recreate any of the articles in SICPers not from memory, but from the same initial conditions. There’s a consistent, though evidently evolving, worldview expressed in my recent writing. Connecting the various parts conceptually will be useful for both of us.

[By the way, there will eventually be a third part representing the descent: that part has in a very real sense not yet been written.]

Posted in books | Leave a comment

Apple’s Watch and Jony’s Compelling Beginning

There are a whole lot of constraints that go into designing something. Here are the few I could think of in a couple of minutes:

  • what people already understand about their interactions with things
  • what people will discover about their interactions with things
  • what people want to do
  • what people need to do
  • what people understand about their wants and needs
  • how people want to be perceived by other people
  • which things that people need or want to do you want to help with
  • which people you want to help
  • what is happening around the people who will be doing the things
  • what you can make
  • what you can afford
  • what you’re willing to pay
  • what materials exist

Some of those seem to be more internal than others, but there’s more of a continuum between “external” and “internal” than a switch. In fact “us” and “them” are really part of the same system, so it’s probably better to divide them into constraints focussing on people and constraints focussing on industry, process and company politics.

Each of Apple’s new device categories moves the designs further from the limitations of the internal constraints toward the people-centric ones. Of course they’re not alone in choosing the problems they solve in the industry, but their path is a particular example to consider. They’re not exclusively considering the people-focussed constraints with the watch; there are still clear manufacturing and process constraints, battery life and radio efficiency being obvious examples.

There are conflicts between some of the people-focussed constraints. You might have a good idea for how a watch UI should work, but it has to be tempered by what people will expect to do, which makes new user interface design an evolutionary process. So you have to take people from what they know to what they can now do.

That’s a slow game, that Apple appear to have been playing very quickly of late.

  • 1984: WIMP GUI, but don’t worry there’s still a typewriter too.

There’s a big gap here, in which the technical constraints made the world adapt to the computer, rather than the computer adapt to the world. Compare desks from the 1980s, 1990s and 2000s, and indeed the coming and going of the diskette box and the mousemat.

  • 2007: touchscreen, but we’ve made things look like the old WIMP GUI from the 1980s a bit, and there’s still a bit of a virtual typewriter thing going on.

  • 2010: maybe this whole touchscreen thing can be used back where we were previously using the WIMP thing.

  • 2013: we can make this touchscreen thing better if we remove some bits that were left over from the WIMP thing.

  • 2014: we need to do a new thing to make the watch work, but there’s a load of stuff you’ll recognise from (i) watches, (ii) the touchscreen thing.

Now that particular path through the tangle of design constraints is far from unique. Compare the iPad to the DynaBook and you’ll find that Alan Kay solved many of the same problems, but only for people who are willing to overlook the fact that what he proposed couldn’t be built. Compare the iPhone to the pocket calculator, and you’ll find that portable computing was possible many decades earlier, but with reduced functionality. Apple’s products are somewhere in between these two extremes: balancing what can be done now with what could possibly be desired.

For me, the “compelling beginning” is a point along Apple’s (partly deliberate, and partly accidental) continuum, rather than a particular watershed. They’re at a point where they can introduce products that are sufficiently removed from computeriness that people are even willing to discuss them as fashion objects. Yes, it’s still evidently the same grey-and-black glass square that the last few years of devices have been. Yes, it’s still got a shrunk-down springboard list of apps like the earlier devices did.

The Apple Watch (and its contemporary equivalents) are not amazing because the bear dances well; they’re amazing because the bear dances at all. The possibility of thinking about a computer as an aesthetic object, one that solves your problems and expresses your identity, rather than a box that does computer things and comes in a small range of colours, is new. The ability to consider a computer more as an object in its environment than as a collection of technical and political constraints changes how they interact with us and us with them. That is why it’s compelling.

And of course the current watch borrows cues from the phone that came before it, to increase familiarity. Future ones, and other things that come after it, will be able to jettison those affordances as expectations and comfort change. That is why it’s a beginning.

Posted in UI | Leave a comment

Sitting on the Sidelines

Thank you, James Hague, for your article You Can’t Sit on the Sidelines and Become a Philosopher. I got a lot out of reading it, because I identified myself in it. Specifically in this paragraph:

There’s another option, too: you could give up. You can stop making things and become a commentator, letting everyone know how messed-up software development is. You can become a philosopher and talk about abstract, big picture views of perfection without ever shipping a product based on those ideals. You can become an advocate for the good and a harsh critic of the bad. But though you might think you’re providing a beacon of sanity and hope, you’re slowly losing touch with concrete thought processes and skills you need to be a developer.

I recognise in myself a lot of the above, writing long, rambling, tedious histories; describing how others are doing it wrong; and identifying inconsistencies without attempting to resolve them. Here’s what I said in that last post:

I feel like I ought to do something about some of that. I haven’t, and perhaps that makes me the guy who comes up to a bunch of developers, says “I’ve got a great idea” and expects them to make it.

Yup, I’m definitely the person James was talking about. But he gave me a way out, and some other statements that I can hope to identify with:

You have to ignore some things, because while they’re driving you mad, not everyone sees them that way; you’ve built up a sensitivity. […] You can fix things, especially specific problems you have a solid understanding of, and probably not the world of technology as a whole.

The difficulty is one of choice paralysis. Yes, all of those broken things are (perhaps literally) driving me mad, but there’s always the knowledge that trying to fix any one of them means ignoring all of the others. Like the out-of-control trolley, it’s easier to do nothing and pretend I’m not part of the problem than to deliberately engage with choosing some apparently small part of it. It’s easier to read Alan Kay, to watch Bret Victor and Doug Engelbart and to imagine some utopia where they had greater influence. A sort of programmerpunk fictional universe.

As long as you eventually get going again you’ll be fine.

Hopefully.

Posted in advancement of the self, philosophy after a fashion | Leave a comment

Why is programming so hard?

I have been reflecting recently on what it was like to learn to program. The problem is, I don’t clearly remember: I do remember that there was a time when I was no good at it. I could type a program in from INPUT or wherever, and if it ran correctly I was golden. If not, I was out of luck: I could proof-read the listing to make sure I had introduced no mistakes in creating my copy from the magazine, but if it was the source listing itself that contained the error, I wasn’t about to understand how to fix it.

The programs I could create at this stage were incredibly trivial, of the INPUT "WHAT IS YOUR NAME"; N$: IF N$="GRAHAM" THEN PRINT "HELLO, MY LORD" ELSE PRINT "GO AWAY ";N$ order of complexity. But that program contains pretty much all there is to computing: input, output, memory storage and branches. What made it hard? I’ll investigate later whether it was BASIC itself that made things difficult. Evidently I didn’t have a good grasp of what the computer was doing, anyway.

I then remember a time, a lot later, when I could build programs of reasonable complexity that used standard library features, in languages like C and Pascal. That means I could use arrays and record types, procedures, and library functions to read and write files. But how did I get there? How did I get from 10 PRINT "DIXONS IS CRAP" 20 GOTO 10 to building histograms of numeric data? That’s the bit I don’t remember: not that things were hard, but the specific steps or insights required to go from not being able to do a thing to finding it a natural part of the way I work.

I could repeat that story over and over, for different aspects of programming. My first GUI app, written in Delphi, was not much more than a “fill in the holes” exercise using its interface builder and code generator. I have an idea that my understanding of what objects and classes were supposed to do was sparked by a particular training course I took in around 2008, but I still couldn’t point to what that course told me or what gaps in my knowledge it filled. Did it let me see the bigger picture around facts I already knew, did it correct a fallacious mental model, or did it give me new facts? How did it help? Indeed, is my memory even correct in pinpointing this course as the turning point? (The course, by the way, was Object-Oriented Analysis and Design Using UML.)

Maybe I should be writing down instances when I go from not understanding something to understanding it. That would work if such events can be identified: maybe I spend some time convincing myself that I do understand these things while I still don’t, or tell myself I don’t understand these things long after I do.

One place I can look for analogies to my learning experience is at teaching experience. A full litany of the problems I’ve seen in teaching programming to neophytes (as opposed to professional training, like teaching Objective-C programming to Rubyists, which is a very different thing) would be long and hard to recall. Tim Love has seen and recorded problems similar to those I’ve encountered (as have colleagues I’ve talked to about teaching programming).

A particular issue from that list that I’ll dig into here is the conflation of assignment and equality. The equals sign (=) was created in the form of two parallel lines of identical length, as no two things could be more equal. But it turns out that, when it’s used in many programming languages, the two things related by = might not be equal at all. Here’s a (fabricated, but plausible) student attempt to print a sine table in C (preprocessor nonsense elided).

int main() {
  double x,y;
  y = sin(x);
  for (x = 0; x <= 6.42; x = x + 0.1)
    printf("%lf %lf\n", x, y);

}

Looks legit, especially if you’ve done any maths (even to secondary school level). In algebra, it’s perfectly fine for y to be a dependent variable related to x via the equality expressed in that program, effectively introducing a function y(x) = sin(x). In fact that means the program above doesn’t look legit, as there are not many useful solutions to the simultaneous equations x = 0 and x = x + 0.1. Unfortunately programming languages take a Humpty-Dumpty approach and define common signs like = to mean what they take them to mean, not what everybody else conventionally accepts them to mean.
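
For contrast, here is a sketch of what the student presumably meant: the assignment is moved inside the loop, so that = is an instruction executed on every iteration rather than a standing mathematical relationship (the previously-elided preprocessor lines are included this time).

#include <stdio.h>
#include <math.h>

int main() {
  double x, y;
  for (x = 0; x <= 6.42; x = x + 0.1) {
    y = sin(x);                    /* recomputed each time round the loop */
    printf("%lf %lf\n", x, y);
  }
  return 0;
}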

Maybe the languages themselves make learning this stuff harder, with their idiosyncrasies like redefining equality. This is where my musing on BASIC enters back into the picture: did I find programming hard because BASIC makes programming hard? It’s certainly easy to cast, pun intended, programming expertise as accepting the necessity to work around more roadblocks imposed by programming tools than inexpert programmers are capable of accepting. Anyone who has managed to retcon public static void main(String[] args) into a consistent vision of something that it’s reasonable to write every time you write a program (and to read every time to inspect a program, too) seems more likely to be subject to Stockholm Syndrome than to have a deep insight into how to start a computer program going.

We could imagine it being sensible to introduce neophytes to a programming environment that exposes the elements of programming with no extraneous trappings or aggressions. You might consider something like Self, which has slots and message-sending syntax as its two features. Or LISP, which just has lists and list syntax. Or Scratch, which doesn’t even bother with having syntax. Among these friends, BASIC doesn’t look so bad: it gives you tools to access its model of computation (which is not so different from what the CPU is trying to do) and not much more, although after all this time I’m still not entirely convinced I understand how READ and DATA interact.

Now we hit a difficult question: if those environments would be best for beginners, why wouldn’t they be best for anyone else? If Scratch lets you compute without making the mistakes associated with all the public static void nonsense, why not just carry on using Scratch? Are we mistaking expertise at the tools for expertise at the concepts, or what we currently do for what we should do, or complexity for sophistication? Or is there a fundamental reason why something like C++, though harder and more complex, is better for programmers with some experience than the environments in which they gained that experience?

If we’re using the wrong tools to introduce programming, then we’re unnecessarily making it hard for people to take their first step across the threshold, and should not be surprised when some of them turn away in disgust. If we’re using the wrong tools to continue programming, then we’re adding cognitive effort unnecessarily to a task which is supposed to be about automating thought. Making people think about not making people think. Masochistically imposing rules for ourselves to remember and follow, when we’re using a tool specifically designed for remembering and following rules.

Posted in edjercashun | 1 Comment