Let’s talk about self-documenting code

You think your code is self-documenting. That it doesn’t need comments or Doxygen or little diagrams, because it’s clear from the code what it does.

I do not think that that is true.

Even if your reader has at least as much knowledge of the programming language you’ve used as you have, and at least as much knowledge of the libraries you’ve used as you have, there is still no way that your code is self-documenting.

How long have you been doing your job? How long have you been talking to experts in the problem domain, solving similar problems, creating software in this area? The likelihood is, whoever you are, that the new person on your team has never done that, and that your code contains all of the jargon terms and assumptions that go with however-much-experience-you-have experience at solving those problems.

How long were you working on that story, or fixing that bug? How long have you spent researching that specific change that you made? However long it is, everybody else on your team has not spent that long. You are the world expert at that chunk of code, and it’s self-documenting to you as the world expert. But not to anybody else.

We were told about “working software over comprehensive documentation”, and that’s true, but nobody said anything about avoiding sufficient documentation. And nobody else has invested the time that you did to understand the code you just wrote, so the only person for whom your code is self-documenting is you.

Help us other programmer folks out: think about us before you decide to avoid documentation.

All the things

It’s been a long time since I had a side project, or one that didn’t get abandoned very early on. I tend to get sidetracked by other thoughts about computing, or think “while I’m doing this, I’m leaving that unsolved” so nothing gets very far.

In an attempt to address that, to clear all of the different thoughts I have about the matter of computing out of my head, organise them, identify conflicts, and prioritise what I work on, I spent this evening jotting down the big points and a brief abstract about each one. I’m hoping this will cut the Gordian knot by letting me see it all in one place and start to make choices.

The format I chose to represent this braindump is this personal Technology Radar, based on the Thoughtworks build-your-own tool. It seemed like a good place to see everything at once, and look for clusters or trends.

You’ll notice that almost everything in this radar is fairly old tech! That’s mostly a matter of taste, as I enjoy learning about things that were tried, what succeeded or failed, and what can be learnt from that to put to use today. I’m not good at novelty for novelty’s sake.

I expect to get some mileage (for my own benefit, you might like it too) out of expanding on some of the entries in this radar over a few more posts, so I’ve created a techradar category in this blog that you can filter on/out.

On what makes a “good” comment

I have previously discussed the readability of code:

The author must decide who will read the code, and how to convey the important information to those readers. The reader must analyse the code in terms of how it satisfies this goal of conveyance, not whether they enjoyed the indentation strategy or dislike dots on principle.

Source code is not software written in a human-readable notation. It’s an essay, written in executable notation.

Now how does that relate to comments? Comments are a feature of programming languages that allows any other text-based language—executable or otherwise—to be injected into the program. The comment feature has no effect on the computer’s interpretation of the software, but wildly varying effects on the reader’s interpretation. From APPropriate Behaviour:

[There are] problems with using source code as your only source of information about the software. It does indeed tell you exactly what the product does. Given a bit of time studying, you can discover how it does it, too. But will the programming language instructions tell you why the software does what it does? Is that weird if statement there to fix a bug reported by a customer? Maybe it’s there to work around a problem in the APIs? Maybe the original developer just couldn’t work out a different way to solve the problem.

So good documentation should tell you why the code does what it does, and also let you quickly discover how.

We need to combine these two quotes. Yes, the documentation—comments included—needs to express the why and the how, but different readers will have different needs and will not necessarily want these questions answered at the same level.

Take the usual canonical example of a bad comment, also given in APPropriate Behaviour and used for a very similar discussion:

//add one to i
i++;

To practiced developers, this comment is just noise. It says the same thing as the line below it.

The fact is that it says the same thing to novice developers too, but they have not yet learned to read the notation fluently. Because they cannot readily tell that the comment and the code say the same thing, the comment adds value.

Where someone familiar with the (programming) language might say that the comment only reiterates what the software does, and therefore adds no value, a neophyte might look at the function name to decide what it does and look to comments like this to help them comprehend how it does it.

Outside of very limited contexts, though, I would avoid comments like that. I usually assume that a reader will be about as comfortable with the (computer) language used as I am, and either knows the API functions or (like me) knows where to find documentation on them. I use comments sparingly: to discuss trade-offs being made, to record information relied on that isn’t evident in the code itself, or to explain why the code does what it does where that might seem odd without explanation.

Have I ever written a good comment?

As examples, here are some real comments I’ve written on real code, with all the context removed and with reviews added. Of course, as with the rest of the universe, “good” and “bad” are subjective, and really represent conformance with the ideas of comment quality described above and in linked articles.

/* note - answer1.score < answer2.score, but answer1 is accepted so should
 * still be first in the list of answers.
 */

This is bad. You could work this one out with a limited knowledge of the domain, or from the unit tests. This comment adds nothing.

/* NASTY HACK ALERT
 * The UIWebView loads its contents asynchronously. If it's still doing
 * that when the test comes to evaluate its content, the content will seem
 * empty and the test will fail. Any solution to this comes down to "hold
 * the test back for a bit", which I've done explicitly here.
 * http://stackoverflow.com/questions/7255515/why-is-my-uiwebview-empty-in-my-unit-test
 */

This is good. I’ve explained that the code has a surprising shape, but for a reason I understand, and I’ve provided a reference that goes into more detail.

    //Knuth Section 6.2.2 algorithm D.

This is good, if a bit too brief. I’ve cited the reference description (to me, anyway: obviously Knuth got it from somewhere else) of the algorithm. If you want to know why it does what it does, you can go and read the discussion there. If there’s a bug you can compare my implementation with Knuth’s. Of course Knuth wrote more than one book, so I probably should have specified “The Art of Computer Programming” in this comment.

/**
 * The command bus accepts commands from the application and schedules work
 * to fulfil those commands.
 */

This is not what I mean by a comment. It’s API documentation, it happens to be implemented as a comment, but it fills a very particular and better-understood role.

What do other people’s comments look like?

Here are some similarly-annotated comments, from a project I happen to have open (GNUstep-base).

/*
 * If we need space allocated to store a return value,
 * make room for it at the end of the callframe so we
 * only need to do a single malloc.
 */

Explains why the programmer wrote it this way, which is a good thing.

  /* The addition of a constant '8' is a fudge applied simply because
   * some return values write beynd the end of the memory if the buffer
   * is sized exactly ... don't know why.
   */

This comment is good in that it explains what is otherwise a very weird-looking bit of code. It would be better if the author had found the ultimate cause and documented that, though.

/* This class stores objects inline in data beyond the end of the instance.
 * However, when GC is enabled the object data is typed, and all data after
 * the end of the class is ignored by the garbage collector (which would
 * mean that objects in the array could be collected).
 * We therefore do not provide the class when GC is being used.
 */

This is a good comment, too. There’s a reason the implementation can’t be used in particular scenarios, here’s why a different one is selected instead.

/*
 *  Make sure the array is 'sane' so that it can be deallocated
 *  safely by an autorelease pool if the '[anObject retain]' causes
 *  an exception.
 */

This is a bad comment, in my opinion. Let’s leave aside for the moment the important issue of our industry’s relationship with mental illness. What exactly does it mean for an array to be ‘sane’? I can’t tell from this comment. I could look at the code and find out what is done near this comment. However, I could not decide what each part contributes to this particular version of ‘sanity’: in particular, what if anything could I remove before the array was no longer ‘sane’? And why is this particular version of ‘sanity’ required?

What do other people say about comments?

For many people, the go-to (pun intended) guide on coding practice is, or was, Code Complete, 2nd Edition. As with this blog and APPropriate Behaviour, McConnell promotes the view that comments are part of documentation and that documentation is part of programming as a social activity. From the introduction to Chapter 32, Self-Documenting Code:

Like layout, good documentation is a sign of the professional pride a programmer puts into a program.

He talks, as do some of the authors in 97 Things Every Programmer Should Know, about documenting the design decisions, both at overview and detailed level. That is a specific way to address the “why” question, because while the code shows you what it does it doesn’t express the infinitude of things that it does not do. Why does it not do any of them? Good question, someone should answer it.

Section 32.3 is, in a loose way, a Socratic debate on the value of comments. In a sidebar to this is a quote attributed to “B. A. Sheil”, from an entry in the bibliography, The Psychological Study of Programming. This is the source that most directly connects the view on comments I’ve been expressing above and in earlier articles to the wider discourse. The abstract demonstrates that we’re in for an interesting read:

Most innovations in programming languages and methodology are motivated by a belief that they will improve the performance of the programmers who use them. Although such claims are usually advanced informally, there is a growing body of research which attempts to verify them by controlled observation of programmers’ behavior. Surprisingly, these studies have found few clear effects of changes in either programming notation or practice. Less surprisingly, the computing community has paid relatively little attention to these results. This paper reviews the psychological research on programming and argues that its ineffectiveness is the result of both unsophisticated experimental technique and a shallow view of the nature of programming skill.

Here is not only the quote selected by McConnell but the rest of its paragraph, which supplies some necessary context. The emphasis is Sheil’s.

Although the evidence for the utility of comments is equivocal, it is unclear what other pattern of results could have been expected. Clearly, at some level comments have to be useful. To believe otherwise would be to believe that the comprehensibility of a program is independent of how much information the reader might already have about it. However, it is equally clear that a comment is only useful if it tells the reader something she either does not already know or cannot infer immediately from the code. Exactly which propositions about a program should be included in the commentary is therefore a matter of matching the comments to the needs of the expected readers. This makes widely applicable results as to the desirable amount and type of commenting so highly unlikely that behavioral experimentation is of questionable value.

So it turns out that at about the time I was being conceived, so was the opinion on comments (and documentation and code readability in general) to which I subscribe: that you should write for your audience, and your audience probably needs to know more than just what the software is up to.

That Sheil reference also contains a cautionary tale about the “value” of comments:

Weissman found that appropriate comments caused hand simulation to proceed significantly faster, but with significantly more errors.

That’s a reference to Laurence Weissman’s 1974 PhD Thesis.

Meta-writing

Barely 4,000 years ago, documents were written on heavy, clay tablets. The Epic of Gilgamesh, one of the earliest known works of fiction, was written on 11 such tablets with a 12th added later. There was only one thing you could do with these tablets: read. Fast forward to the 21st century and things are very different. The word “tablet” has taken on a new meaning, and documents can be delivered wirelessly, updated as new versions are written. They can also contain rich media and hyperlinked references to other content. And with these new capabilities come new considerations when preparing your documents—or “docs”—for your readers.

The above story seems rambling and pointless, doesn’t it? But change the timescale and the technology, and every single bloody report on mobile technology starts in exactly the same way.

Specifications for interchanging objects

One of the interesting aspects of Smalltalk and similar languages, including Objective-C and Ruby, is that while the object model exposes a hierarchy of classes, consumers of objects in these environments are free to ignore the position of an object in that hierarchy. The hierarchy can be thought of as a convenience: on the one hand for people building objects (“this object does all the same stuff as instances of its parent class, and then some”), and on the other for people consuming objects (“you can treat this object like it’s one of these types further up the hierarchy”).

So you might think that -isKindOfClass: represents a test for “I can use this object like I would use one of these objects”. There are two problems with this. As with any boolean test, they are false positives and false negatives.

A false positive is when an object passes the test, but actually can’t be treated as an instance of the parent type. In a lot of recent object-oriented code this is a rare problem. The idea of the Liskov Substitution Principle, if not its precise intent as originally stated, has become entrenched in the Object-Oriented groupthink.

I’ve worked with code from the 1980s, though, where these false positives exist: an obvious example is “closing off” particular selectors. A parent class defines some interface, then subclasses inherit from that class, overriding selectors to call [self doesNotRecognize:] on features of the parent that aren’t relevant in the subclass. This is still possible today, though done infrequently.
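Here’s a minimal sketch of that pattern. The class names are invented for illustration; -doesNotRecognize: is the real NSObject method, and it raises an exception when called.

@interface ImmutableStack : Stack
@end

@implementation ImmutableStack
- (void)push: (id)anObject
{
  //the parent class supports -push:, but this subclass closes it off
  [self doesNotRecognize: _cmd];
}
@end

An ImmutableStack passes [stack isKindOfClass: [Stack class]], but any client that believed the test and pushed onto it would get an exception: a false positive.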

False negatives occur when an object fails the -isKindOfClass: test but actually could be used in the way your software intends. In Objective-C (though neither in Smalltalk[*] nor Ruby), nil does satisfy client code’s needs in a lot of cases but never passes the hierarchy test. Similarly, you could easily arrange for an object to respond to all the same selectors as another object, and to have the same dynamic behaviour, but to be in an unrelated position in the hierarchy. You can use an OFArray like you can use an NSArray, but it isn’t a kind of NSArray.

[*] There is an implementation of an Objective-C style Null object for Squeak.
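The nil case is easy to make concrete, because messaging nil in Objective-C is safe and returns zero:

NSArray *list = nil;
NSUInteger count = [list count];  //0, no exception: clients that only count and enumerate are satisfied
BOOL isArray = [list isKindOfClass: [NSArray class]];  //NO: the hierarchy test reports a false negative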

Obviously if the test is broken, we should change the test. False negatives can be addressed by testing for protocols (again, in the languages I’ve listed, this only applies to Objective-C and MacRuby). Protocols are unfortunately named in this instance: they basically say “this object responds to any selector in this list”. We could then say that rather than testing for an object being a kind of UIView, we need an object that conforms to the UIDrawing protocol. This protocol doesn’t exist, but we could say that.
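As a sketch, the protocol version of the test would look something like this. -conformsToProtocol: is the real NSObject method; the UIDrawing protocol is, as noted, hypothetical, and mysteryObject and -addDrawableObject: are placeholders.

if ([mysteryObject conformsToProtocol: @protocol(UIDrawing)]) {
  //we don't care where this object sits in the hierarchy,
  //only that it declares the drawing selectors
  [self addDrawableObject: mysteryObject];
}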

Problems exist here. An object that responds to all of the selectors doesn’t necessarily conform to the protocol, so we still have false negatives. The developer of the class might have forgotten to declare the protocol (though not in MacRuby, where protocol tests are evaluated dynamically), or the object could forward unknown selectors to another object which does conform to the protocol.

There’s still a false positive issue too: ironically protocol conformance only tells us what selectors exist, not the protocol in which they should be used. Learning an interface from a protocol is like learning a language from a dictionary, in that you’ve been told what words exist but not what order they should be used in or which ones it’s polite to use in what circumstances.

Consider the table view data source. Its job is to tell the table view how many sections there are, how many rows there are in each section, and what cell to display for each row. An object that conforms to the data source protocol does not necessarily do that. An object that tells the table there are three sections but crashes if you ask how many rows are in any section beyond the first conforms to the protocol, but doesn’t have the correct dynamic behaviour.
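A sketch of such an object (the class is invented; the selectors are the real UITableViewDataSource ones): it conforms to the protocol and responds to every selector in it, but its dynamic behaviour is broken.

#import <UIKit/UIKit.h>

@interface UnreliableDataSource : NSObject <UITableViewDataSource>
@end

@implementation UnreliableDataSource
- (NSInteger)numberOfSectionsInTableView: (UITableView *)tableView
{
  return 3;
}

- (NSInteger)tableView: (UITableView *)tableView numberOfRowsInSection: (NSInteger)section
{
  //only the first section actually works
  NSAssert(section == 0, @"asked for rows in an unimplemented section");
  return 1;
}

- (UITableViewCell *)tableView: (UITableView *)tableView cellForRowAtIndexPath: (NSIndexPath *)indexPath
{
  return [tableView dequeueReusableCellWithIdentifier: @"Cell" forIndexPath: indexPath];
}
@end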

We have tools for verifying the dynamic behaviour of objects. In his 1996 book Superdistribution: Objects as Property on the Electronic Frontier, Brad Cox describes a black box test of an object’s dynamic behaviour, in which test code messages the object then asserts that the object responds in expected ways. This form of test was first implemented in a standard fashion, to my knowledge, in 1998 by Kent Beck as a unit test.

Unit tests are now also a standard part of the developer groupthink, including tests as specification under the name Test-Driven Development. But we still use them in a craft way, as a bespoke specification for our one-of-a-kind classes. What we should really do is to make more use of these tests: substituting dynamic specification tests for our static, error-prone type tests.

A table view does not need something that responds to the data source selectors, it needs something that behaves like a data source. So let’s create some tests that any data source should satisfy, and bundle them up as a specification that can be tested at runtime. Notice that these aren’t quite unit tests in that we’re not testing our data source, we’re testing any data source. We could define some new API to test for satisfactory behaviour:

- (void)setDataSource: (id <UITableViewDataSource>)dataSource {
  NSAssert([Demonstrate that: dataSource satisfies: [Specification for: @protocol(UITableViewDataSource)]],
           @"Data source does not satisfy its specification");
  _dataSource = dataSource;
  [self reloadData];
}

But perhaps with new language and framework support, it could look like this:

- (void)setDataSource: (id @<UITableViewDataSource>)dataSource {
  NSAssert([dataSource satisfiesSpecification: @specification(UITableViewDataSource)],
           @"Data source does not satisfy its specification");
  _dataSource = dataSource;
  [self reloadData];
}

You could imagine that in languages that support design-by-contract, such as Eiffel, the specification of a collaborator could be part of the contract of a class.

In each case, the expression inside the assertion handler would find and run the test specification appropriate for the collaborating object. Yes, this is slower than doing the error-prone type hierarchy or conformance tests. No, that’s not a problem: we want to make it right before making it fast.
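To make that concrete, here is a minimal sketch of such a runtime specification for table view data sources. Every name here is an assumption for illustration, not existing API; the point is that it messages the collaborator and checks its responses, rather than inspecting its type.

#import <UIKit/UIKit.h>

@interface DataSourceSpecification : NSObject
+ (BOOL)isSatisfiedBy: (id <UITableViewDataSource>)dataSource inTableView: (UITableView *)tableView;
@end

@implementation DataSourceSpecification
+ (BOOL)isSatisfiedBy: (id <UITableViewDataSource>)dataSource inTableView: (UITableView *)tableView
{
  NSInteger sections = 1;
  if ([dataSource respondsToSelector: @selector(numberOfSectionsInTableView:)]) {
    sections = [dataSource numberOfSectionsInTableView: tableView];
    if (sections < 0) return NO;
  }
  //every section must report a sensible row count without throwing
  for (NSInteger section = 0; section < sections; section++) {
    @try {
      if ([dataSource tableView: tableView numberOfRowsInSection: section] < 0) return NO;
    }
    @catch (NSException *exception) {
      return NO;
    }
  }
  return YES;
}
@end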

Treating test fixtures as specifications for collaboration between objects, rather than (or in addition to) one-off tests for one-off classes, opens up new routes for collaboration between the developers of the objects. Framework vendors can supply specifications as enhanced documentation. Framework consumers can supply specifications of how they’re using the frameworks as bug reports or support questions: vendors can add those specifications to a regression testing arsenal. Application authors can create specifications to send to contractors or vendors as acceptance tests. Vendors could demonstrate that their code is “a drop-in replacement” for some other code by demonstrating that both pass the same specification.

But finally it frees object-oriented software from the tyranny of the hierarchy. The promise of duck typing has always been tempered by the dangers, because we haven’t been able to show that our duck typed objects actually can quack like ducks until it’s too late.

The Liskov Citation Principle

In her keynote speech at QCon London 2013 on The Power of Abstraction, Barbara Liskov referred to several papers contemporary with her work on abstract data types. I’ve collected these references and found links to free copies of the articles where available.

Dijkstra 1968 Go To statement considered harmful

Wirth 1971 Program development by stepwise refinement

Parnas 1971 Information distribution aspects of design methodology

Liskov 1972 A design methodology for reliable software systems

Schuman and Jorrand 1970 Definition mechanisms in extensible programming languages
Not apparently available online for free

Balzer 1967 Dataless Programming

Dahl and Hoare 1972 Hierarchical program structures
Not apparently available online for free

Morris 1973 Protection in programming languages

Liskov and Zilles 1974 Programming with abstract data types

Liskov 1987 Data abstraction and hierarchy

Does that thing you like doing actually work?

Genuine question. I’ve written before about Test-Driven Development, and I’m sure some of you practice it: can you show evidence that it’s better than (or, for that matter, evidence that it’s worse than) some other practice? Statistically significant evidence?

How about security? Can you be confident that there’s a benefit to spending any money or time on information security countermeasures? On what should it be spent? Which interventions are most successful? Can you prove that?

I am, of course, asking whether there’s any evidence in software engineering. I ask rhetorically, because I believe that there isn’t—or there isn’t a lot that’s in a form useful to practitioners. A succinct summary of this position comes courtesy of Anthony Finkelstein:

For the most part our existing state-of-practice is based on anecdote. It is, at its very best quasi-evidence-based. Few key decisions from the choice of an architecture to the configuration of tools and processes are based on a solid evidential foundation. To be truthful, software engineering is not taught by reference to evidence either. This is unacceptable in a discipline that aspires to engineering science. We must reconstruct software engineering around an evidence-based practice.

Now there is a discipline of Evidence-Based Software Engineering, but herein lies a bootstrapping problem that deserves examination. Evidence-Based [ignore the obvious jokes, it’s a piece of specific jargon that I’m about to explain] practice means summarising the significant results in scientific literature and making them available to practitioners, policymakers and other “users”. The primary tools are the systematic literature review and its statistics-heavy cousin, the meta-analysis.

Wait, systematic literature review? What literature? Here’s the problem with trying to do EBSE in 2012. Much software engineering goes on behind closed doors in what employers treat as proprietary or trade-secret processes. Imagine that a particular project is delayed: most companies won’t publish that result because they don’t want competitors to know that their projects are delayed.

Even for studies, reports and papers that do exist, they’re not necessarily accessible to the likes of us common programmers. Let’s imagine that I got bored and decided to do a systematic literature survey of whether functional programming truly does lead to fewer concurrency issues than object-oriented programming.[*] I’d be able to look at articles in the ACM Digital Library, on the ArXiv pre-print server, and anything that’s in Leamington Spa library (believe me, it isn’t much). I can’t read IEEE publications, the BCS Computer Journal, or many others because I can’t afford to subscribe to them all. And there are probably tons of journals I don’t even know about.

[*]Results of asking about this evidence-based approach to paradigm selection revealed that either I didn’t explain myself very well or people don’t like the idea of evidence mucking up their current anecdotal world views.

So what do we do about this state of affairs? Actually, to be more specific: if our goal is to provide developers with better access to evidence from our field, what do we do?

I don’t think traditional journals can be the answer. If they’re pay-to-read, developers will never see them. If they’re pay-to-write, the people who currently aren’t supplying any useful evidence still won’t.

So we need something lighter weight, free to contribute to and free to consume; and we probably need to accept that it then won’t be subject to formal peer review (in exactly the same way that Wikipedia isn’t).

I’ve argued before that a great place for this work to be done is the Free Software Foundation. They’ve got the components in place: a desire to prove that their software is preferable to commercial alternatives; public development projects with some amount of central governance; volunteer coders willing to gain karma by trying out new things. They (or if not them, Canonical or someone else) could easily become the home of demonstrable quality in software production.

Could the proprietary software developers be convinced to open up on information about what practices do or don’t work for them? I believe so, but it wouldn’t be easy. Iteratively improving practices is a goal for both small companies following Lean Startup and similar techniques, and large enterprises interested in process maturity models like CMMI. Both of these require you to know what metrics are important; to measure, test, improve and iterate on those metrics. This can be done much more quickly if you can combine your results from those of other teams—see what already has or hasn’t worked elsewhere and learn from that.

So that means that everyone will benefit if everyone else is publishing their evidence. But how do you bootstrap that? Who will be first to jump from a culture of silence to a culture of sharing, the people who give others the benefit of their experience before they get anything in return?

I believe that this is the role of the platform companies. These are the companies whose value lies not only in their own software, but in the software created by ISVs on their platforms. If they can help their ISVs to make better software more efficiently, they improve their own position in the market.

I made a web!

That is, I made a C program using the literate programming tool, CWEB. The product it outputs is, almost by definition, self-documenting, so find out about the algorithm and how I built it by reading the PDF. This post is about the process.

Unsurprisingly I found it much more mentally taxing to understand a prose description of a complex algorithm, and to work out how to convert it into C, than to write the C itself. In that, and acknowledging that this little project was a very artificial example, it was very helpful to be able to write long-form comments alongside the code.

That’s not to say that I don’t normally comment my code; I often do when I’m trying something I don’t think I understand. But often I’ll write out a prose description of what I’m trying to do in a notebook, or produce incredibly terse C comments. The literate programming environment encouraged me to marry these two ideas and create long prose that’s worth reading, but attach it to the code I’m writing.

I additionally found it useful to be able to break up code into segments by idea rather than by function/class/method. If I think “oh, I’ll need one of these” I can just start a new section, and then reference it in the place it’ll get used. It inverts my usual process, which is to write out the code I think I’ll need to do a task and then go back and pick out isolated sections with refactoring tools.

As a developer’s tool, it’s pretty neat too, though not perfect. The ctangle tool that generates the C source code inserts both comments referring to the section of the web that the code comes from, and (more usefully) preprocessor #line directives. If you debug the executable (which I needed to…) you’ll get told where in the human-readable source the PC is sitting, not where in the generated C.
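For example, a one-section web might tangle into something like this (the section and line numbers are invented; the /*2:*/…/*:2*/ markers and the #line directive are the forms ctangle really emits):

/*2:*/
#line 8 "example.w"
#include <stdio.h>

int main(void)
{
  printf("hello, web\n");
  return 0;
}
/*:2*/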

The source file, a “web” that contains alternating TeX and C code, is eminently readable (if you know both TeX and C, obviously) and plays well with version control. Because this example was a simple project, I defined everything in one file but CWEB can handle multiple-file projects.

The main issue is that it’d be much better to have an IDE that’s capable of working with web files directly. A split-pane preview of the formatted documentation would be nice, and there are some good TeX word processors out there that would be a good starting point. Code completion, error detection and syntax highlighting in both the C and TeX parts would be really useful. Refactoring support would be…a challenge, but probably doable.

So my efforts with CWEB haven’t exactly put me off, but do make me think that even three decades after being created it’s not in a state to be a day-to-day developer environment. Now if only I knew someone with enough knowledge of the Clang API to think about making a C or ObjC IDE…

Illuminative-C

In addition to being a mildly accomplished software engineer, I’ve done some studying and armchair research in the field of ancient languages and palaeography. What happens if we smoosh those fields together?

In a very slight way, art historian and fellow Oxenafordisc Dr. Janina Ramirez did that in her series on Illuminations: the Private Lives of Medieval Kings (erm, Kings and Ælfgifu). In the series she showed off many manuscripts in the British Library collection, but when she went out in the field she took an iPad. It turns out that the BL isn’t too hot on letting you run around with their thousand-year-old kidskin.

You already know my opinion on our digital heritage. This puts it into stark relief: in one hundred years’ time, barring some epic fire in London (those never happen), the BL and its collection will still be there. Will it still be possible to even launch the iPad app she was using? I very much doubt it.

How about if we put the same effort into storing our source code as the scriptoria did into storing their indentures and gospels? Well, I sharpened a goose feather and had a go at just that (warning: very much draft document impending).

Example 2-1 from PCAS

What you see up there is the first sample code in Professional Cocoa Application Security – Listing 2-1. Ignore the fact that you don’t recognise all the letter shapes: things have changed over the centuries. There were a few contortions required to get the source code to work in manuscript form: let me show you them.

First is that in the hand in which I wrote the source, some of the characters needed for Objective-C source code don’t exist. Like ‘v’. I used the fact that u and v are actually the same letter to get around that. Punctuation was harder: I went for roughly accurate rendering, with a single misplaced comma to suggest that the scribe didn’t really understand punctuation.

When it came to comments, I decided they have the same meaning as the gloss in the Lindisfarne Gospels – rendering the difficult language required by the church^Wcompiler into plain English. I therefore roughly scratched them in smaller text with different ink, letting them flow around the code as if they’d been written later. I also put in a few Old English spellings – though again not consistently[*].

The return value posed some difficulty, because we didn’t borrow 0 from the Middle East until a few centuries after the time this script is mimicking. I realised that if a scribe were to illuminate any part of a C function, it’d probably be the return value because that’s the consistent and – from the perspective of the rest of the code – important part. Thus the 0 is highly decorated, with six legs in the fashion of a bug :-).

Bugs hark back to the days of illuminated manuscripts anyway. Any good scribe would know that a mistake in the text was the fault of Titivillus, not of the scribe. Just as those bugs aren’t my fault. Honest.

[*] Next time you want to get angry at a teenager, remember that the word “ask” was once “acsian”, with the s on the end, and think about which one of you is bastardising our language.