I have some small idea of what I’m doing

I feel partly to blame for the current minor internet shitstorm.

But first, some scene-setting. There have long been associations between the programmer community and particular subcultures, some of which have become, if not monocultures, at least dominant cultures within the world of computering. When I entered the field in the early 2000s, it was the tail end of the cyberpunk subculture: electronic and rock music, long hair on men, short hair on women, often dyed, black band or logo t-shirts, combat trousers or jeans, Doctor Martens 1460 boots. Antisocial work hours, caffeine-fuelled weekend-long hacks, “all your base are belong to us” memes. Obtuse, but workhorse, C and Perl code. Maybe some Scheme if you were in the Free Software Foundation.

Then toward the end of the decade the hipster subculture rose to dominance. Mac laptops. Nice clothes, worn ironically. Especially the bow tie. Dishevelled hair. Fixed-gear bicycles. Turned up trouser cuffs and no socks. Looking to be the technical cofounder, looking down on those who ask them to be the technical cofounder. Coffee, now daytime only, had to be incredibly fussy. The evening drink of choice was Pabst Blue Ribbon. If your software wasn’t in the tech stack of choice—a Ruby on Rails app, deployed to Heroku, edited in TextMate, hosted on Github—then were you crushing any code? Bro, do you even lift?

After a few years of this I noticed that one difference between these two cultures was their approach to knowledge, or more specifically to its lack. It was easy to nerdsnipe a cyberpunk: if they didn’t know something, they would go and find out. Usenet groups had multiple FAQ lists because many people would all try to find the answers to the questions, and wikis weren’t yet popular. In the hipster craze that followed, confidence in one’s own knowledge reigned supreme. You showed that you knew everything you knew, and you showed that everything you didn’t know wasn’t worth knowing.

This came to a head in my little Apple-centric niche of the computering field in 2015, when that whole community had chased monad tutorials and half-digested chapters on category theory into every corner of the conference and mailing list ecosystem. People gave talks not to share their knowledge, but to share that they were the people who knew the knowledge. Attendees turned up to product development conferences expecting to learn how a new programming language made it easier to develop products, and came away confused about endofunctors.

I should be clear here that not everybody in the field was like that, and there are plenty of people who can make difficult maths accessible. There are plenty of people who can make computering accessible without difficult maths. Those people were still present.

But still I determined that something we didn’t have enough of, something that had been present in the cyberpunk-esque culture that came before (for all its other faults), was a willingness to say “I have no idea what I’m doing”. Not in a “har har look at me get this wrong” way, but in a “this is interesting, let’s find out more about it” way. An “I’m not the right person to ask, let’s bring in an expert” way. A “to the library!” way.

So after a bit of writing about learning things I didn’t know, I took my (then) decade of experience and position of incredibly-minor celebritydom in that niche little bit of computering, and submitted a talk called “I have no idea what I’m doing” to AltConf 2015. I think it may even have been a very last minute submission, with another speaker pulling out sick. The talk was a collection of anecdotes about things I didn’t understand when the problem came my way, and how I dealt with that. Particularly, given that Swift was a year old at the time, I admitted I had less than a year of Swift experience and knew less about it than I did about Objective-C. I even used the dog picture. The talk was recorded, but unfortunately no longer seems to be available.

My hope in delivering this talk was partly that the people in the room would learn a little about problem solving, but also that they’d learn a lot about how an experienced person can say “here are the limits of my knowledge, I can’t help you with that problem. At least, I can’t yet, but it might be fun/interesting/remunerative to discover more about it.” How it’s OK to not know what you’re doing, if you have a plan or can make one.

In retrospect, I think that what happened was simply that 2015 was too many generations in the software industry after all of the great forcing functions that led to the way computering was by then done. The Agile folks had worked out that we don’t know what the customer will want at the end of the project, so we should optimise our work for not knowing, but they’d done that at the turn of the millennium. The dot bomb had exploded at the same time, so the Lean Startup folks had worked out that there’s no money in the things the customer doesn’t want, and that you have to discard all of those very quickly.

Everything had shifted left, but it had done so at least a decade earlier. Now those things, Agile and Lean Startup, were the way you did computering, and you could be expert in them. There was certification. They were no longer “because that thing before wasn’t great”, they were “because this is how we do it”. There was another round of venture capitalists in town, and the money taps were starting to turn back on. There was no great need to find out that you were wrong, so it became a cultural taboo to admit it.

Anyway, if we believe DHH, I overshot. Apparently we went from “it’s professional to own up to the limits of your knowledge” to “it’s a badge of honour to not know programming as a programmer.” To be honest, I find that the weak part of the post, mostly because I don’t recognise it and he doesn’t supply evidence. The rest—that we are beings capable of learning and growth and we should not revel in ignorance—is the same as what I was trying to say with my dog-meme talk in 2015.

But now the dominant non-monoculture is the “if you’re not with us you’re against us” variant. The “come on internet, you know what to do” quote tweet. Saying that you may have things to learn, and should not still be at the same level of copy-and-paste code years into your job, is now the same as saying you must memorise all algorithms and programming language quirks to call yourself a real programmer. And how very DARE he say that, what does he know about programmers anyway?

Discussions of the DHH post seem to be predicated on the idea that it’s a personal attack on people who haven’t reached DHH-level success. If it’s an attack at all, it attacks a straw-man identity, and in fact it is worded more like this non-attack: you have more potential to live up to; find it in yourself to surpass your current limits. But scrape the surface (by showing that DHH didn’t say the things that are claimed to be “ruining it for everyone”), and it seems there’s a certain amount of hating the messenger, not the message, going on.

I’m not sure of the cause of this, but I suspect it may be that, having learned not to punch down, folks are looking up for targets. DHH is successful, and has said things that people didn’t like in the past, so it’ll be OK to not like what he says this time. And the headline is something not to like, therefore the article must just expand on why I was correct not to read it.

I’ve certainly disagreed with DHH before. When he did the “TDD is dead” thing, I went into that from a position of disagreement. But I also knew that he has experience of being successful as a programmer, and will have reflections and knowledge that are beyond my understanding. So I listened to the discussions, and I learned what each of the people involved thought. It was an interesting and educational experience. I gained a bit more of an idea of what I’m doing.

Posted in advancement of the self, edjercashun | Leave a comment

An Imagined History of Agile Software Development

Having benefited from the imagined history of Object-Oriented Programming, it’s time to turn our flawed retelling toolset to Agile. This history is as inaccurate and biased as it is illuminating.

In the beginning, there was no software. This was considered to be a crisis, because there had been computers since at least the 1940s but nearly half a century later nobody had written any software that could run on them. It had all been cancelled due to cost overruns, schedule overruns, poor quality, or all three.

This was embarrassing for Emperor Carhoare I, whose mighty imperial domain was known as Software Engineering. What was most embarrassing was that every time the crisis came to a head, a hologram of Dijkstra would appear and repeat some pre-recorded variation of “I told you so, nobody is clever enough to write any software”.

In frustration, software managers marched their developers off waterfalls to their doom. If you ever see a software product with a copyright date before, say, 2001, it is a work of fiction, placed by The Great Cunningham to test our faith. I know this because a very expensive consultant told me that it is so.

Eventually the situation got so bad that some people decided to do something about it. They went on a skiing holiday, which was preferable to the original suggestion that they do something about this software problem. But eventually they ended up talking about software anyway, and it turned out that two of them actually had managed to get a software project going. Their approach was extreme: they wrote the software instead of producing interim reports about how little software had been written.

With nothing to show but a few photographs of mountains, the rest of the skiing group wrote up a little document saying that maybe everybody else writing software might want to try just writing the software, instead of writing reports about how little software had been written. This was explosive. People just couldn’t believe that the solution to writing software was this easy. It must be a trick. They turned to the Dijkstra projection for guidance, but it never manifested again. Maybe he had failed to foresee this crisis? Maybe The Blessed Cunningham was a mule who existed outside psychohistory?

There were two major problems with this “just fucking do it” approach to writing software. The first problem was that it left no breathing room for managers to commission, schedule, and ignore reports on how little software was getting written. Some people got together and wrote the Project Managers’ Declaration of Interdependence. This document uses a lot of words to say “we are totally cool and we get this whole Agile thing you’re talking about, and if you let us onto your projects to commission status reports and track deliverables we’ll definitely be able to pay our bills”.

The second problem, related to the first, is that there wasn’t anything to sell. How can you Agile up your software tools if the point is that tools aren’t as important as you thought? How can you sell books on how important this little nuance at the edge of Agile is, if the whole idea fits on a postcard?

Enter certification. We care more about the people than the process, and if you pay for our training and our CPDs you can prove to everybody that you’ve understood the process for not caring about process. Obviously this is training and certification for the aforementioned co-dependent—sorry, interdependent—project managers. There is certification for developers, but this stuff works best if they’re not actually organised, so you won’t find many developers with the certification. Way better to let them divide themselves over which language is best, or which text editor, or which whitespace character.

And…that’s it. We’re up to date. No particularly fresh theoretical insight in two decades, we just have a lot of developers treated as fungible velocity sources on projects managed top-down to indirect metrics and reports. Seems like we could benefit from some agility.

Posted in agile | Leave a comment

Episode 44: We Would Know What They Thought When They Did It

We would know what they thought when they did it: a call for a history of ideas in computing.

Leave a comment

Second Brain

The idea of a second brain really hit home. Steven and I were refactoring some code on our Amiga podcast last night, and every time we moved something between files we had to remember which header files needed including. Neither of us was familiar enough with the libraries to know this, so people in the chat had to keep helping us.

But these are things we’ve already done, so we ought to be able to recall that stuff, with or without support. And when I say with support, I mean with what that post is calling a “second brain”, i.e. with an external, indexed cache of my brain. I shouldn’t need to reconstruct from scratch information I’ve already come across, but neither should I need to remember it all.

There are three problems I can see on the path to second brain adoption. The first, and the one that immediately made itself felt, is having a single interface for all my notes. When I read this article I thought that blogging about it would be a good way to crystallise my thoughts on the topic (it’s working!). So I saved the URL to Pinboard, I wrote a task in OmniFocus to write a blog post, and then when I was ready I fired up MarsEdit to write the blog that would end up on my WordPress.

Remembering that all these bits are in all these systems is itself a brain overload task. And there’s more: I have information in Zotero about academic papers I’ve read, I have two paper notebooks on the go (one for computing projects and one for political projects), I have a Freewrite and a Remarkable, which each have their own sync services, I have abandoned collections of notes in Evernote and Zim…

I have a history of productivity porn addiction that doesn’t translate to productivity. I get excited by new tools, adopt them a bit, but don’t internalise the practice. So then I have another disjoint collection of some notes, maybe in a bullet journal, a Filofax, Livescribe notes, Apple Notes, wherever…making the problem of second brain even harder to manage because now there are more places.

So step one is to centralise these things. That’s not a great task to try to do in a Big Bang, so I’ll do it piecemeal. Evernote is the closest thing I already use to the second brain concept, so today I’ve stopped using my paper notebooks, writing those notes in Evernote instead. I also moved this draft to Evernote and worked on it there.

The second problem is implicit in that last paragraph: migration. Do I sit and scan all my notebooks, with my shocking and OCR-resistant handwriting, into Evernote? Do I paste all my summaries of research articles out of Zotero and into Evernote? No, doing so will take a long time and be very boring. What I’ll do instead is to move toward integration from this moment on. If I need something and I think I already have it, I’ll move it into Evernote. If I don’t have it, I’ll make it in Evernote. It will take a while to reap the benefit, but eventually I’ll have a single place to search when I want to look for things I already know.

And that’s the third of my three problems. Being diligent about searching the second brain. You have to change your approach to solving knowledge problems to be “do I already know this?” The usual, for me at least, is “what do I do to know this?” Now I’m good at that, with lots of experience at finding, appraising, and synthesising information, so doing it from scratch every time is mostly a waste of time rather than a fool’s errand. But it’s time I don’t need to waste.

I think that the fact I haven’t internalised these three aspects of the second brain is due to the generation of computing in which I really invested in computers. Most of the computers I learned to computer on could only do one thing at a time, practically if not absolutely. They didn’t have much storage, and that storage was slow. So that meant having different tools for different purposes. You would switch from the place where you recorded dance moves to the place where you captured information on Intuition data types, and rely on first brain for indexing. You wouldn’t even have all of it in the computer: I was 23 when I got my first digital camera, and 25 before I had an MP3 player. I did my whole undergraduate degree using paper notes and books from libraries and book stores. First brain needed to track where any information was physically, in addition to where it was logically.

What I’m saying is I’m a dinosaur.

Posted in advancement of the self, tool-support, whatevs | Leave a comment

Episode 43: what we DO know about software engineering

This episode follows from episode 42: what I have yet to learn.

Leave a comment

Scripting confusion

LaTeX (and TeX, for that matter) syntax is relatively consistent, and uses a lot of backslashes.

Bourne shell syntax is somewhat inconsistent, and also uses backslashes.

Regular expression syntax I seem almost perversely disinclined to remember, and definitely sometimes often uses maybe backslashes.

Therefore, when I need to use a regular expression to work on a LaTeX document, I do it interactively in the shell until it works, then paste that into a shell script. Now I need never remember, or even uncover, how to write that unholy combination of regex, shell, and LaTeX ever again.

For example I have defined a LaTeX macro \todo{I need more examples here.}, that draws the text “TODO: I need more examples here.” with a box around it in the document. I review outstanding sections by searching for lines on which todo items appear, using this script:

#!/bin/sh

grep -n todo\{ "$1" | sed s/\\\\todo\{\\\(.\*\\\)\}/\\1/

Now I get to blank from my mind the unholy combination of syntaces that led to 15 backslashes on one line, but still use the results.
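For anyone who wants to see the pipeline in action without reconstructing the backslashes, here is a sketch of the whole workflow. The file name, sample text, and the minimal \todo definition are my own stand-ins (the real macro may differ), and I’ve single-quoted the sed expression so the shell passes its backslashes through untouched:

```shell
# Stand-in document: a minimal \todo macro plus some prose and one
# outstanding item on line 4.
cat > sample.tex <<'EOF'
\newcommand{\todo}[1]{\fbox{TODO: #1}}
\section{Introduction}
Some prose.
\todo{I need more examples here.}
EOF

# The same grep/sed pipeline, quoted: print the line number of each
# todo item, keeping only the text inside the braces.
grep -n 'todo{' sample.tex | sed 's/\\todo{\(.*\)}/\1/'
# prints: 4:I need more examples here.
```

The quoted form needs only three backslashes, but of course it tells you nothing about how many you need once the shell gets to eat half of them first.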

Posted in LaTeX | Leave a comment

Episode 42: What I have yet to learn

This episode is about the things I don’t know about software engineering.

1 Comment

I only have 17 years of experience, but every point on this list accords with it. I’ve made my own attempt to catalogue things software developers should know (that are not writing code), but this is a succinct and great summary.

Posted by Graham | Leave a comment

The “return a command” trick

This is a nice trick, but we need a phrase for that thing where you implement extreme late binding of functions by invoking an active function that selects the function you want based on its name. I can imagine the pattern catching on.
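A hypothetical sketch of the trick (the names here are mine, not from the linked post): a single dispatch function looks the command up by name at the moment of the call, so which function runs is bound as late as possible.

```python
# Extreme late binding: the function to run is selected by name at the
# moment of invocation, not when the calling code was written.

def start(job: str) -> str:
    return f"started {job}"

def stop(job: str) -> str:
    return f"stopped {job}"

# The registry of available commands, keyed by name.
COMMANDS = {"start": start, "stop": stop}

def perform(name: str, *args: str) -> str:
    # Look the command up by its name only when asked to perform it.
    return COMMANDS[name](*args)

print(perform("start", "backup"))  # started backup
```

Because the lookup happens inside perform, commands can be registered or replaced at run time without touching any call sites, which is what makes the binding “extreme”.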

Posted by Graham | Leave a comment

The missing principle in agile software development

The biggest missing feature in the manifesto for agile software development and the principles behind it is anyone other than the makers and their customer. We get autonomous, self-organising delivery teams but without the sense of responsibility to a broader society one would expect from autonomous professional agents.

Therefore it’s no surprise to find developers working to turn their colleagues into a below-minimum-wage precariat; to rig elections at scale; or to implement family separation policies or travel bans on religious minorities. A principled agile software developer only needs to ask “how can I deliver this, early and continuously, to the customer?” and “how can we adjust our behaviour to become more effective at this?”: they do not need to ask “is this a good idea?” or “should this be delivered at all?”

Principle Zero ought to read something like this.

We build software that transforms the workplace, leisure, interpersonal interaction, and society at large: we do so in consultation with as broad a representation of those interests as possible and ensure that our software is beneficial to those whose lives are being transformed. Within that context, we follow these principles:

Posted in agile, Responsibility | Leave a comment