Ultimate Programmer Super Stack: Last day!

I already wrote about the Ultimate Programmer Super Stack, a huge bundle of books and courses on a range of technologies: Python, JS, Ruby, Java, HTML, node, Aurelia… and APPropriate Behaviour, my book on everything that goes into being a programmer that isn’t programming.

Today is the last day of the bundle. Check it out here, it won’t be available for long.

Beginner thoughts

Back story: my period of walkabout, in which I went to see the rest of the computing world beyond Apple land, started in November 2014. This was shortly after Swift’s introduction at WWDC 2014. It ended in October 2018, by which time the language had evolved considerably, its position in the community had advanced greatly, and SourceKitService had stopped crashing.

I have previously written on the learning phases I encountered on exposure to Haskell; now, what about Swift? I have the opportunity to reflect on how I react as a beginner, and share that so that we all learn how we (well, I) learn, and maybe discover how we can teach.

About the project

I’m writing a tool that I want, which takes files in one format (RSS) and writes them out in another format (Maildir). You can follow along. The reasons for mentioning this here are twofold:

  • I do not know what I’m doing, but I’m willing to share that.
  • To let you understand the (limited, I think) complexity of the thing I’m trying to build.

Thinks: This should not be that hard

Swift often makes me feel like an idiot. This is because my expectation is too high: I know the platform fairly well. I know the Foundation framework pretty well. I know Xcode pretty well. I understand my problem to some extent. It should just be the programming language that’s different.

And I do different programming languages all the time. What’s another one going to do?

But of course it’s not just the programming language that changed. It changed the conventions for things like naming methods or raising errors, and that means that the framework methods have changed, which means that things I used to know have changed, which means that I do not know as much as I assume. It introduced a new library, which I also don’t know.
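Here’s a minimal sketch of the kind of convention change I mean (the feed URL is made up for illustration): where a Cocoa method used to hand back an NSError by reference, the Swift rendering of the same Foundation call throws, so even a method I “know” reads differently at the call site:

import Foundation

// Objective-C: [NSString stringWithContentsOfURL:encoding:error:&error]
// Swift: the equivalent Foundation initializer throws instead of filling in an error pointer.
let feedURL = URL(string: "https://example.com/feed.xml")!

do {
    let contents = try String(contentsOf: feedURL, encoding: .utf8)
    print("Fetched \(contents.count) characters")
} catch {
    print("Could not load feed: \(error)")
}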

Thinks: That unimportant thing was really frustrating

Two such convention changes are correlated: classes that used to be Foundation and are now standard library (or maybe are Foundation still but have been renamed on being bridged, I’m not sure) are renamed from NSThing to Thing. That means that the name of NSURL is now URL.

That means that if you have a variable that represents a URL, you can’t follow the Cocoa convention of leaving the abbreviation uppercased and calling it URL, because now it’s got the same name as the type URL. So the new convention is to call it url.
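A tiny, hypothetical example of what that looks like in practice:

import Foundation

// NSURL is now spelled URL, so naming the variable URL would collide with the type name.
// let URL = URL(string: "https://example.com")   // won't compile: the constant shadows the type it uses
let url = URL(string: "https://example.com")      // the new convention: lowercase url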

Objectively, that’s not a big deal. Subjectively, this stuff is baked in pretty deep, and changing it is hard.

Thinks: Even learning something is frustrating

The last event to make me get up and walk around a field was actually discovering something new about Swift, which should be the point, but nonetheless made me feel bad.

I have discovered that when it comes to working with optionals, the language syntax means that There Is More Than One Way To Do It. When I learned about if let and guard let, I was confused by the fact that the thing on the right needed to be an optional, not unwrap one: surely if my rvalue is an optional, then my lvalue should be, too?

Then, when I learned about the ?. and subsequently ?? operators, I thought “there’s no way I would ever have thought to type that, or known how to search for those things”. And even though they only make things shorter, not different, I still felt frustration at the fact that I’d gone through typing things out the long way.
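To make that concrete, here’s a minimal sketch of the same optional handled three ways (maybeName is a made-up optional, just for illustration):

// An optional String: the first command-line argument, if there is one.
let maybeName: String? = CommandLine.arguments.dropFirst().first

// 1. if let: the optional goes on the right; the bound name on the left is not optional.
if let name = maybeName {
    print("Hello, \(name)")
}

// 2. Optional chaining with ?. hands back another optional.
let length: Int? = maybeName?.count

// 3. ?? supplies a default when the optional is nil.
let greeting = "Hello, " + (maybeName ?? "whoever you are")
print(greeting, length ?? 0)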

Thinks: There’s More Than One Way Not To Do It

One of the Broken Expectations™ is that I know how to use Strings. Way back when, NeXT apps used char * as their string type. Then Enterprise Objects Framework came along with its Foundation library of data types, including a new-fangled Unicode string class, NSString. Then, well, that was it for absolute ages.

So, when I had a String and I wanted to take the substring to an index, I was familiar with -substringToIndex: and tried to apply that. That method is deprecated, so I didn’t want to use it. OK, well, I can write string[0..<N]. Apparently not: integer subscripting is not allowed, and the error message tells me to read a code comment to understand why. I wish it told me where that code comment was, or just showed it to me, instead!

Eventually I found that there’s a .prefix(N) method; again, this is the sort of thing that makes me think: what’s wrong with me? I’ve been programming for years, I’ve been programming on this platform for years, I should be able to get this.
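For the record, here’s a minimal sketch of where I ended up (the string itself is just an example):

let title = "There Is More Than One Way To Do It"

// Integer subscripting is deliberately unavailable on String, so this won't compile:
// let start = title[0..<5]

// prefix(_:) returns a Substring of at most the first N characters.
let start = title.prefix(5)                               // "There"

// The long way round, via String.Index:
let index = title.index(title.startIndex, offsetBy: 5)
let sameStart = title[..<index]                           // also "There"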

Conclusion: Read a Book

I had expected that my knowledge of the Mac, Xcode, and Cocoa would be sufficient to carry me through picking up the language after a four-year gap, particularly with the occasional observation of a conference talk about the Swift language (I’ve even given one!). I had expected that picking up a project to build a thing would give me a chance to get acquainted.

I was reflecting on my early experiences with writing NeXT and Mac applications in Objective-C. I had my copy of the NeXT Developer Documentation, or Cocoa in a Nutshell, open on the desk, looking at the methods available and thinking “I have this, I want that, can I find one of these that gets me there?” I had expected that auto-complete in Xcode would be my modern equivalent of working that way.

Evidently not. Picking up the new standard library things, and the new operators, will require deliberate learning. I’ve got some videos lined up, but I think my next action is to find a good book to read.

Two Schools

There always seem to be two schools in software, though exactly where the gates are varies. Alan Kay described how Edsger Dijkstra noticed that “the Atlantic has two sides”.

It was basically all about how different the approaches to computing science were in Europe, especially in Holland and in the United States. In the US, here, we were not mathematical enough, and gee, in Holland, if you’re a full professor, you’re actually appointed by the Queen, and there are many other uh important distinctions made between the two cultures. So, uhm, I wrote a rebuttal paper, just called On the fact that most of the software in the world is written on one side of the Atlantic.

Or maybe both schools are on the same continent.

The essence of [the MIT/Stanford approach] can be captured by the phrase the right thing. […] The worse-is-better philosophy is only slightly different […] and I will call the use of this design strategy the New Jersey approach.

Or they could be ways of thinking, school curricula, or whatever.

Maybe both schools have something to teach us. Maybe it’s the same thing.

No True Humpty-Dumpty

Words change meaning.

Technical words change meaning.

Sometimes, you need to check out a specific commit of a word’s meaning from the version control, to add context to a statement.

“I’m talking about Open Source in its early meaning of Free Software without the confusion over Free, not its later meaning as an ethically empty publication of source code.”

“I mean Object-Oriented Programming as the loosely-defined bucket in which I can put all the ills of software that I’m claiming are solved by Haskell, not the earlier sense of modelling business processes in software with loosely-coupled active programs communicating by sending messages.”

“The word Agile here refers to the later sense of Agile where I run a waterfall with frequent checkpoints and get a certification from a project management institute.”

The problem is that doing so acts as a thought-terminating cliche to people who are not open to hearing a potentially valuable statement about Open Source, Object-Oriented Programming, or Agile development.

If your commit is too early in history, then you can easily be dismissed as etymologically fallacious, or as somebody who won’t accept progress and the glorious devaluation of the word you’re trying to use.

If your commit is too recent in history, then you can easily be dismissed as a Humpty-Dumptyist who’s trying to hide behind a highfalutin term that you have no right to use.

What I’ve come to realise is that my technique for dealing with people who use these rhetorical devices can be as simple as this: ignore them. If they do not want to hear, then I do not need to speak.

There is no browser, only Zuul

My short-lived first plan for a career was in Physics. That’s what my first degree was in, but I graduated with the career goal “do something that isn’t a D.Phil. in Physics” in mind. I’d got on quite well with computers as a hobbyist, and with the computing and electronics practicals in my course labs. A job as systems administrator for those very systems (a hotchpotch of NeXTSTEP, Solaris, OpenBSD, and Mac OS X) came up at the time that I graduated, so I applied, got it, and became a computerer.

Along the way, I met people who thought that some people should not be computerers because their backgrounds were not exactly identical to their own. The Google manager who could not believe that, as an applicant for the lowest-grade QA role, I had not encountered the travelling salesman problem. The software engineer at Facebook who was incensed that someone applying to build a web application in PHP and JavaScript did not know that there are eight bits in a byte (never mind that there aren’t, necessarily, eight bits in a byte). And now the random on Twitter who insists that people who don’t fully know browsers aren’t allowed to write web applications.

It’s easy to forget two things: the first is that at some point in the past, you didn’t know what you know now, but you learnt it because you were allowed to participate and were taught. Even if you think you were a self-learner, you had access to playground materials, books, tutorials, online documentation…and someone made those for you and allowed you to use them.

The second is what it was like not to know those things. I remember not understanding how OOP was anything more than putting dots in the name of your function, but now I understand it differently, and don’t know what the thing was that changed.

The second of these things shows how easy it is to be the gatekeeper. If you don’t remember what not understanding something was like, then maybe it came naturally, and if it isn’t coming naturally to someone else well maybe they just don’t get it. But the first shows that you didn’t get it, and yet here you are.

When faced with a gatekeeper, I usually react flippantly. Because of my Physics background, I explain, I know how quantum physics works, and how that enables semiconductors, and how to build a semiconductor transistor, then a NAND gate, then a processor, and basically what I’m saying is I don’t see how anyone who doesn’t know that stuff can claim to know computers at all.

But what I say to everyone else is “this stuff is really interesting, let me show it to you”.

Be the keymaster, not the gatekeeper.

Bottom-up teaching

We’re told that the core idea in computer programming is problem-solving. That one of the benefits of learning about computer programming (one that is not universally accepted) is gaining the skill of problem decomposition.

If you look at real teaching of computing, it seems to have more to do with solution composition than problem decomposition. The latter seems to be background noise: here are the things you can build solutions with; presumably at some point you’ll come across a solution that’s the same size and shape as one of your problem components, though how is left up to you.

I have many books on programming languages. Each lists the features of the language, and gives minimally complex examples of the use of those features. In that sense, Kernighan and Ritchie’s “The C Programming Language” (section 1.3, the for statement) offers as little instruction in solving problems using a computer as Eric Nikitin’s “Into the Realm of Oberon” (section 7.1, the FOR loop) or Dave Thomas’s “Programming Elixir” (section 7.2, Using Head and Tail to Process a List).

A course textbook on bitcoin and blockchain (Narayanan, Bonneau, Felten, Miller and Goldfeder, “Bitcoin and Cryptocurrency Technologies”) starts with Section 1.1, “Cryptographic hash functions”, and builds a cryptocurrency out of them, leaving motivational questions about politics and regulation to Chapter 7.

This strategy is by no means universal: Liskov and Guttag’s “Program Development in Java” starts out by describing abstraction, then looks at techniques for designing abstractions in Java. Adele Goldberg and Alan Kay described teaching Smalltalk by proposing exploratory projects, designing the objects that model the problem under consideration and the way in which they will communicate, then incrementally filling in by designing classes and methods that have the desired properties. C.J. Date’s “An Introduction to Database Systems” answers the question “why databases?” before introducing the relational model, and doesn’t introduce SQL until it can be situated in the context of the relational model.

Both of these approaches, and their associated techniques (the bottom-up approach and solution construction; the top-down approach and problem decomposition) are useful; the former leads to progress and the latter leads to understanding. But both must be taken in concert, because understanding without progress leads to the frustration of an unsolved problem and progress without understanding is merely the illusion of progress.

My guess is that more programmers – indeed whole movements, when we consider the collective state of things like OOP, functional programming, BDD, or agile practices – are in the “bottom-up only” group than in the “top-down only” or “a bit of both” groups. That plenty more copies of Introduction to Programming in [This Week’s Hot Language] have been sold than Techniques for Making Your Problem Amenable to Computation. That the majority of software really does consist of solutions looking for problems.

On books

I’d say that if there’s one easy way to summarise how I work, it’s as an information focus. I’m not great at following a solution all the way to the bitter end so you should never let me be a programmer (ahem): when all that’s left is the second 90% of the effort in fixing the bugs, tidying up edge cases and iterating on the interaction, I’m already bored and looking for the next thing. Where I’m good is where there’s a big problem to solve, and I can draw analogies with things I’ve seen before and come up with the “maybe we should try this” suggestions.

Part of the input for that is the experience of working in lots of different contexts, and studying for a few different subjects. A lot of it comes from reading: my goodreads account lists 870 books and audiobooks that I’ve read and I know it to be an incomplete record. Here are a few that I think have been particularly helpful (professionally speaking, anyway).

  • Douglas Adams, The Hitch-Hikers’ Guide to the Galaxy. Adams is someone who reminds us not to take the trappings of society too seriously, and to question whether what we’re doing is really necessary. Are digital watches really a neat idea? Also an honourable mention to the Dirk Gently novels for introducing the fundamental interconnectedness of all things.
  • Steve Jackson and Ian Livingstone, The Warlock of Firetop Mountain. I can think of at least three software projects that I’ve been able to implement and describe as analogies to the choose your own adventure style of book.
  • David Allen, Getting Things Done, because quite often it feels like there’s too much to do.
  • Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid is a book about looking for the patterns and connections in things.
  • Victor Papanek, Design for the Real World, for reminding us of the people who are going to have to put up with the consequences of the things we create.
  • Donald Broadbent, Perception and Communication, for being the first person to systematically explore that topic.
  • Stephen Hawking, A Brief History of Time, showing us how to make complex topics accessible.
  • Roger Penrose, The Road to Reality, showing us how to make complex topics comprehensively presentable.
  • Douglas Coupland, Microserfs, for poking fun at things I took seriously.
  • Janet Abbate, Recoding Gender, because computering is more accessible to me than to others for no good reason.
  • Joshua Bloch, Effective Java, Second Edition, for showing that part of the inaccessibility is a house of cards of unsuitable models with complex workarounds, and that programmers are people who delight in knowing, not addressing, the workarounds.
  • Michael Feathers, Working Effectively with Legacy Code, the one book every programmer should read.
  • Steve Krug, Don’t make me think!, a book about the necessity of removing exploration and uncertainty from computer interaction.
  • Seymour Papert, Mindstorms, a book about the necessity of introducing exploration and uncertainty into computer interaction.
  • Richard Stallman, Free as in Freedom 2.0, for suggesting that we should let other people choose between the previous two options.
  • Brad Cox, Object-Oriented Programming: An Evolutionary Approach, for succinctly and effortlessly explaining objects a whole decade before everybody else got confused by whether a dog is an animal or a square is a rectangle.
  • Gregor Kiczales, Jim des Rivieres, and Daniel G. Bobrow, The Art of the Metaobject Protocol showed me that OOP is just one way to do OOP, and that functional programming is the same thing.
  • Simson Garfinkel and Michael Mahoney, NEXTSTEP Programming: Step One was where I learnt to create software more worthwhile than a page of BASIC instructions.
  • Gil Amelio, On the Firing Line: My 500 Days at Apple shows that the successful business wouldn’t be here if someone hadn’t managed the unsuccessful business first.

There were probably others.

This is fine

The BBC micro:bit is a tool for introducing young people to programming. It’s a little embedded computer with a few inputs and a matrix of LEDs for output, as well as some control lines. In principle it’s quite easy to use; I made a 1d6 simulator:

from microbit import *
from random import randint

class Die:
    ONE = Image("00000:"
                "00000:"
                "00900:"
                "00000:"
                "00000")
    TWO = Image("00000:"
                "09000:"
                "00000:"
                "00090:"
                "00000")
    THREE = Image("00000:"
                  "09000:"
                  "00900:"
                  "00090:"
                  "00000")
    FOUR = Image("00000:"
                 "09090:"
                 "00000:"
                 "09090:"
                 "00000")
    FIVE = Image("00000:"
                 "09090:"
                 "00900:"
                 "09090:"
                 "00000")
    SIX = Image("00000:"
                "09090:"
                "09090:"
                "09090:"
                "00000")
    ALL = [ONE, TWO, THREE, FOUR, FIVE, SIX]
    @classmethod
    def throw(cls):
        # Pick one of the six faces at random; randint is inclusive at both ends.
        return cls.ALL[randint(0, 5)]


display.show(Die.ALL, delay=100, wait=False, loop=True)

while True:
    if button_a.is_pressed():
        display.show(Die.throw())
    elif button_b.is_pressed():
        display.show(Die.ALL, delay=100, wait=False, loop=True)

Simple (I mean, simple if you know what the word randint means (hint: it means “random integer” as long as you know what an integer is), that programming languages that aren’t Fortran call the first element of an array “element zero”, even though it’s obviously ONE as far as a die is concerned, and you’re OK with the word classmethod).

Here it is in action: woo! I AER PROGRAMMER!!!

And here’s the litany of things I did to get there:

  • I opened the “details.txt” file on the micro:bit, which tells me a load of build numbers and flash dates. I don’t know what I’m supposed to do with that. This is hard.
  • I opened the “microbit.html” file on the micro:bit, which redirects me to microbit.co.uk, which tells me that as part of the BBC’s restructuring of its online content, I will be redirected to microbit.org. I decided this was probably OK, and ended up looking at the same site but with a different URL (whatever one of those is).
  • I clicked on the introductory video and was told that I need an Adobe Flash Player, whatever one of those is.
  • I eventually found enough buttons to find a website that is a python editor that lets me write code for my micro:bit, and a tutorial for writing Python for the micro:bit. I know what a python is, but do not know why I would want a snake for my micro:bit.
  • The tutorial tells me to use a thing called a “mu”, which is apparently not the website python editor but is another python editor. I do not know what a mu is, but I must download it.
  • There are three different download buttons, depending on whether I have a Windows, a Mac, or a Linux, whatever they are.
  • If I have a Windows, then to enable the REPL (whatever that is) I must download a serial driver (whatever that is).
  • The driver site tells me that if I have a Windows 10 (whatever that is), then I should not download the serial driver.
  • If I have a Mac, then the first time (but maybe not other times, that’s not clear) I open the mu I have to right-click it.
  • If I have a Linux, then I must chmod u+x the mu and make sure my user (isn’t that me?) is in the dialout (whatever that is) group (whatever that is).
  • Now I can write my python. To put it on the micro:bit, I must “flash” it. I guess that is because I have an Adobe Flash Player from earlier.
  • The micro:bit shows me helpful messages when I get something wrong. It scrolls the message “id=41 SyntaxError: invalid syntax” when I flash my python. I do not know what this means, but I was told that it’s helpful. Therefore I am stupid.

I am not singling out the BBC, the micro:bit makers, the mu creators, Microsoft, or any of the other individuals or organisations involved with creating this easy-to-use educational environment. This is fine. This is how computers work, nay this is how computers work when we are making it easy to work them. This is our industry.

To quote Freddie Mercury: is this the world we created? We made it on our own. Is this the world we devastated, right to the bone? If there’s a God in the sky looking down what can he think of what we’ve done to the world that we created?

On the extremes of computer science

I didn’t study computer science at school or university, and still manage to work as a programmer.

That is not to say that I don’t need to know some things that are taught on computer science courses. Just this week I’ve had to build a couple of different data structures and understand their running time: very CS.

I’ve needed to know things that aren’t on a CS degree, too. The acceptance criteria for one of my projects are written in French, and none of the CS courses I’ve seen in UK universities include that in the syllabus.

I’m neither arguing for nor against the validity of a CS background in professional software development. I’m arguing against taking either side. You need to know some CS things to write software, you need to know some other things, multiple backgrounds are appropriate and welcome.

“When I had that problem”

A common lie in programming is that every project is new, that no problem has been seen before. This is the reason given for estimates being bad, for plans being bad, for design being bad…for anything other than diving in uninformed being bad.

But I’ve noticed that more and more frequently my discussions about problems (technical problems, organisational problems, personal problems) involve the phrase “when I had that problem”. That somebody (and, as time goes on, that’s more frequently me) has seen this problem or one with many similarities before.

It’s time to stop pretending that your UI fronting a database table is up there among the Hilbert problems as one of the big research questions of the 21st century. We have seen that before, or something like it, and we tried things; some of them worked. They probably weren’t the best possible solutions, but they were solutions.