My iPad-drawn graphics in Rethinking OOD at App Builders 2018 were not very good, so here are the ink-and-paper versions. Please have them to hand when viewing the talk (which is the first of a two-parter, though I haven’t pitched part two anywhere yet).
I recently had the chance to give my OOP-in-FP-in-Swift talk again at NSLondon, and was asked how to build inheritance in that object system. It’s a great question; I gave what I hope was a good answer, and it’s worth some more thought and a more coherent response.
Firstly, let’s look at the type signature for an object in this system:
typealias Object = (Selector) -> IMP
Selector is the name of a method, and an
IMP is a function[1] implementing that method. But an
Object is nothing more or less than a function that maps names of methods to implementations of methods. And that’s incredibly powerful, for two reasons.
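To make that concrete, here is a minimal sketch of the idea in Python (chosen here only so the example runs anywhere; selectors become strings, IMPs become plain functions, and the names are my own invention rather than the talk’s Swift code):

```python
# An "object" is nothing but a function from selector to implementation.
def make_point(x, y):
    def imp_x():
        return x

    def imp_y():
        return y

    def obj(selector):
        # Method lookup is an ordinary dictionary lookup: no classes anywhere.
        return {"x": imp_x, "y": imp_y}[selector]

    return obj

p = make_point(3, 4)
print(p("x")())  # send the "x" message, then invoke the returned IMP: prints 3
```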
Reason one: Inheritance is whatever you want it to be.
You are responsible for writing the code to look up methods, which means you get to choose how it works. If you don’t like inheritance at all, then you’re golden: each object knows its own methods and nothing else.
If you like Smalltalk or ObjC or Ruby, then you can create an object called a Class that creates objects that look up methods by asking the class what methods to use. If the class doesn’t know, then it can ask its superclass. If you prefer prototypes, then an object that doesn’t know a method can send itself the proto message to find its prototype, and ask that object what method to use.
If you like multiple inheritance, then give an object a list of classes/prototypes instead of a single one.
If you’ve got some other idea, build it! Maybe you always thought classification should be based on higher-order logic; well here’s your chance.
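As a sketch of one such choice (again in Python with invented names, and a hedged illustration rather than the talk’s actual code), here is a Smalltalk-flavoured scheme in which objects defer method lookup to a class, and the class defers to its superclass:

```python
# A "class" here is just a method dictionary plus an optional superclass;
# looking up a selector walks the superclass chain.
def make_class(methods, superclass=None):
    def lookup(selector):
        if selector in methods:
            return methods[selector]
        if superclass is not None:
            return superclass(selector)  # ask the superclass
        raise AttributeError(selector)
    return lookup

# An instance is a function too: it finds methods via its class and binds
# its own state as the first argument of the implementation.
def make_instance(cls, state):
    def obj(selector):
        imp = cls(selector)
        return lambda *args: imp(state, *args)
    return obj

animal = make_class({"speak": lambda state: "..."})
dog = make_class({"speak": lambda state: state["name"] + " says woof"},
                 superclass=animal)
rex = make_instance(dog, {"name": "Rex"})
print(rex("speak")())  # prints Rex says woof
```

Swapping `make_class` for a prototype-walking or multiple-inheritance lookup changes the policy without touching the objects themselves.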
(By the way, if you want to do this, you would be well-off defining a convention where methods all take a parameter that can be bound to the receiver: call it
self for example. Then when you’re deep in inheritance-land, but you want to send a message to
self, you still have a reference to it.)
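Continuing the Python sketch (names invented for illustration), the convention looks like this: every implementation takes the receiver as an explicit first argument, so a method can keep sending messages to self:

```python
# Convention: every IMP takes the receiver ("self") as its first argument,
# so methods can send further messages to the object they belong to.
def make_counter(n):
    def imp_value(self):
        return n

    def imp_describe(self):
        # From inside a method we can message self like any other object.
        return "counter at " + str(self("value")(self))

    def obj(selector):
        return {"value": imp_value, "describe": imp_describe}[selector]

    return obj

c = make_counter(7)
print(c("describe")(c))  # prints counter at 7
```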
Reason two: Inheritance is whatever you want it to be at each point.
If an object is any arbitrary code that finds a method, then you can build whichever model is most appropriate at the point of use. You can mix and match. The core philosophy at a code level is that objects are just loosely-coupled functions. From a conceptual level that’s incredibly powerful: such loose coupling means that you aren’t forced to make assumptions about how objects are constructed, or glued together. You just use them.
[1] Actually a closure, which is even more useful, but that’s not important right now. ↩
I think the last technical conference I attended was FOSDEM last year, and now I’m sat in the lobby of the Royal Library of Brussels working on a project that I want to take to some folks at this year’s FOSDEM, and checking the mailing lists of some projects I’m interested in to find meetups and relevant talks.
It’s been even longer since I spoke at a conference, I believe it would have been App Builders in 2016. When I left Facebook in 2015 with the view of taking a year out of software, I also ended up taking myself out of community membership and interaction. I felt like I didn’t know who I was, and wasn’t about to find out by continually defining myself in terms of other people.
I would have spoken at dotSwift in 2016, but personal life issues, probably also related to not knowing who I was, stopped me from doing that. In April, I went to Zurich and gave what would be my last talk as a programmer to anyone except immediate colleagues for over 18 months.
During this time, I found that I don’t mind so much what I’m working on: I have positive and negative opinions of all of it. I have lots of strong opinions, and lots of syntheses of other ideas, and need to be a member of a community that can share and critique these opinions, and extend and develop these syntheses.
Which is why, after a bit of social wilderness, I’m reflecting on this week’s first conference and planning my approach to the second.
Advice on presentations, including that given on this blog, is often geared toward the “showbusiness” presentation. We’re usually talking about the big conference talk or product launch, where you can afford to put in the time to make a good, slick performance: a few days of preparation for a half-hour talk is not unheard of.
Not every presentation fits that mould. There are plenty where putting so much time into the presentation would be harmful, but where is the guidance on constructing those presentations?
The minor event
If you spent a few days preparing for a sprint-end demo, or reporting back to your team on some study you did, you’d significantly harm your productivity on the rest of your job. In these contexts, you want to spend a small amount of time building your talk, and you still want to put on a good show: to make your team and your stakeholders feel happy about the work you did, or to make the case persuasively for the tool or technique you studied.
As such, in these cases I still build an outline for my talk outside of the presentation software, and construct my slides to follow the outline. I’ve used OmniOutliner in the past, and use emacs org-mode now; you could use a list of bullets in Markdown, or a pen and a sheet of paper. It doesn’t matter; what matters is that you get what you want to say structured in one place, and the slides that support the presentation done separately.
I try to keep these slides text-free, particularly if the presentation is short, so that people don’t get distracted reading them. If I’m reporting on progress, then screenshots of some progress dashboard make for quickly-constructed slides. My current team shows its burndown every sprint end, that’s a quick screencapture that tells the story of the last two weeks. If there’s some headline figure (26 stories delivered; 80% of MVP scope complete; 2 new people on the team) then a slide containing that number makes for a good backdrop to talking about the subject.
The recurrent deck
The antithesis of the conference keynote presentation style, the recurrent deck is a collection of slides you’ll use over and over again. Your approach to integrating with third-party APIs, your software architecture philosophy, your business goals for 2018…you’ll need to present these over and over again in different contexts.
More to the point, other people will want to present them, too: someone in sales will answer a question about integration by using your integration slides. Your department director will present your team’s goals in the context of her department’s goals. And this works the other way: you can use your CEO’s slide on product strategy to help situate your team goals.
So throw out everything you learned about crafting the slides to fit the story. What you’re doing here is coming up with a reusable visual that can support any story related to the same information. I try to make these slides as information-rich as possible, though still diagrammatic rather than textual, to avoid the presentation failure mode of reading out the slide. My current diagram tool is Lucidchart, as it’s my company’s standard; I’ve used OmniGraffle and dia too. Whatever the tool, follow the house style (e.g. colour schemes, fonts, iconography) so that when you mash up your slides and your CEO’s slides, it still looks like a coherent presentation.
I try to make each slide self-contained, because I or someone else might take one to use in a different presentation: a single idea shouldn’t need a six-slide reveal, and a colleague will find it harder to reuse a slide that isn’t self-explanatory.
A frequent anti-pattern in slide design is to include the “page number” on the slide: not only is that information useless in a presentation, the only likely outcome is a continuity error when you drag a few slides from different sources to throw a talk together and don’t renumber them. Or worse, can’t: I’ve been given slides before that are screenshots of whatever slide was originally built, so the number is part of a bitmap.
Good reusable slide libraries will also be a boon in quickly constructing the minor event presentations: “we did this because that” can be told with one novel part (we did this) and one appeal to the library (because that).
[Note: this post represents the notes made for my talk at iOS Dev UK 2014. As far as I’m aware, the talk isn’t available on the tubes.]
The Principled Programmer
The first thing to be aware of is that this post is not about my
principles. It’s sort-of about your principles, in a way.
Let’s look at two games. You may not have heard of Chaturanga (unless
you practise yoga, but I’m talking about a different Chaturanga), but
it’s the ancient game that eventually evolved into Chess.
You may not have heard of Nard either, but it’s a very different game
that grew up into Backgammon. There’s a creation myth surrounding
these two games, that says they were invented at the same time. Some
leader thousands of years ago wanted two games; one a game of skill
and the other a game of chance.
The thing is, you can lose at chess by chance: if you happen to be
having an off day and miss a key move that you’d often
make. Similarly, you can lose at backgammon through lack of skill: by
choosing to move the wrong pieces.
We were presented with two options: skill (chaturanga) and not-skill
(nard). However, the games do not actually represent pure states of
the two concepts; they’re more like a quantum system, where the
real-world states can be superpositions of the mathematically “pure”
states.
All of this means that we can’t ignore states in-between the two
poles. Such ignorance has a name in the world of critical analysis:
the fallacy of the excluded middle.
That is not the situation we have in bivalent logic, including the
mathematical Boolean formulation frequently used to model what’s going
on in computers. This has the law of the excluded middle, which
says that a proposition must either be true or false.
In this case, the fact is that the two propositions (you are playing a
game of skill, or you are playing a game of chance) do not exactly
match the two real possibilities (you are playing chaturanga, or you
are playing nard). There’s a continuum possibility (you are using some
skill and some chance), but a false dichotomy is proposed by the
presentation in terms of the games.
The existence of a rule allows us to form a bivalent predicate: your
action is consistent with the rule. That statement can either be true
or false, and the middle is excluded.
This means we have the possibility for the same confusion that we had
with the games: compliance with the rule may be bivalent, but what’s
going on in reality is more complicated. We might accidentally exclude
the middle when it actually contains something useful. Obviously that
useful thing would not be in compliance with the rule. So you can
think about a rule like this: a statement is a rule when you can
imagine contraventions of the statement that are of no different value
than observances of the statement. Style guides are like this: you can
imagine a position that contravenes the rules of your style guide
that is of no lesser or greater value: following another style
guide would serve just as well.
Of course, the value of a style guide comes not from the choice of
style, but from the consistency derived from always adhering to the
rule. It doesn’t matter whether you drive on the left or the right of
the road, as long as everybody chooses the same side.
One famous collection of rules in software
engineering is Extreme Programming. Kent Beck described hearing or
reading about various things that were supposed to be good ideas in
programming, so he turned them up to eleven to see what would
happen. Here are some of the rules.
User stories are written. It’s easy to imagine (or recall)
situations in which we write software without writing user stories:
perhaps where we have formal specifications, or tacit understandings
of the requirements. So that’s definitely a rule.
All production code is pair programmed. The converse – not all
production code is pair programmed – poses no problem. We can imagine that the two conditions are different, and that we might want to choose one over another.
Rules serve two useful functions, of which I shall introduce one
now. They’re great for beginners, who can use them to build a scaffold
in which to place their small-scale, disjoint bits of knowledge. Those
bits of knowledge do not yet support each other, but they do not need
to as the rules tell us what we need to apply in each situation.
The software engineering platypus
Having realised that our rules are only letting us see small pieces of
the picture, we try to scale them up to cover wider
situations. There’s not really any problem with doing that. But we can
get into trouble if we take it too far, because we can come up with
rules that are impossible to violate.
A platitude, then, is a statement so broad that its converse cannot be
contemplated, or is absurd. Where a rule can be violated without
hardship, a platitude cannot be violated at all – or at least not
meaningfully.
The problem with platitudes is that because we cannot violate them,
they can excuse any practice. “I write clean code”: OK, but I don’t
believe I know anybody who deliberately writes dirty code. “This
decision was pragmatic”: does that mean any other option would be
dogmatic? But isn’t “always be pragmatic” itself dogma?
Platitudes can easily sweep through a community because it’s
impossible to argue against them. So we have the software
craftsmanship manifesto, which values:
A Community of Professionals. As any interaction
between people who get paid comes under this banner, it’s hard to see
what novelty is supplied here.
Well-Crafted Software. Volunteers please for making shitty software.
The Principled Programmer.
There must be some happy medium, some realm in which the statements we
make are wider in scope, and thus more complex, than rules, but not so
broad that they become meaningless platitudes that justify whatever
we’re doing but fail to guide us to what we should be doing.
I define this as the domain of the principle, and identify a principle
thus: a statement which can be violated, where the possibilities of
violation give us pause for thought and make us wonder about what it
is we value. To contrast this with the statements presented earlier:
- violate a rule: meh, that’s OK, the other options are just as good.
- violate a platitude: no, that’s impossible, or ludicrous.
- violate a principle: ooh, that’s interesting.
Coming up with good principles is hard. The principles behind the
agile manifesto contain some legitimate principles:
Our highest priority is to satisfy the customer through early and
continuous delivery of valuable software. Interesting. I can imagine
that being one of many priorities of which others might be higher:
growing the customer base, improving software quality, supporting what
they’re using now and deferring delivery of new software until it’s
needed. I’ll have to think about that.
Working software is the primary measure of
progress. Interesting. This seems to suggest that paying off technical
debt – exchanging one amount of working software for another amount of
working software over a period of time – is not progress. I’ll have to
think about that.
But then it also contains rules:
Deliver working software frequently, from a couple of weeks to a
couple of months, with a preference to the shorter timescale. We ship
a couple of times a day, and I don’t feel that’s worse.
Business people and
developers must work together daily throughout the project. Is there
anything wrong with every other day?
Build projects around motivated individuals. Give them the
environment and support they need, and trust them to get the job
done. I cannot imagine a situation where I would hire people who do
not want to do the work.
Continuous attention to technical excellence and good
design enhances agility. This is tautological, as technical excellence
and good design can be defined as those things that enable our goals.
Why is it hard? I believe it’s because it’s highly personal, because
what you’re willing to think about and likely to get benefit from
thinking about depends on your own experiences and interests. Indeed
I’m not sure whether I want to define the principle as I have done
above, or whether it’s the questions you ask while thinking about
the validity of those things that are really your principles.
## Nice principles. Now go and turn them into rules
The thing about thinking is that I don’t want to do it when I don’t
need to. My currency is thought, so if I’m still thinking next year
about the things I was considering this year, I’m doing it wrong.
Principles are great for the things that need to challenge
the way we work now. But they should be short-lived. Remember the
beginner use of rules was only one of two important contexts? The
other context is in freeing up cognitive space for people who
previously had principles, and now want to move on to have new
principles. In short-circuiting the complex considerations you
previously had, mentally automating them, you prepare yourself for
the next set of considerations.
Notice that this means that it’s a rule in isolation that doesn’t cause us any problems to violate. It may be that the rule was derived from a principle, so some thought went into its construction. Without that information, all we can see is that there are two possibilities and we’re being told that one of them is acceptable.
The challenge that remains is in communication, because it doesn’t
help for the context of a rule to be misidentified. If you’re a
beginner, and you describe your beginner rule and someone takes it as
an expert rule, they might end up talking about perspectives that
you’re not expecting. Also if you’re an expert and your expert rule is
perceived as a beginner rule, you might end up having to discuss
issues you’ve already considered and resolved.
So by all means, identify your principles. Then leave them behind and
discover new ones.
You would imagine that by now I would have come to realise how long my attention span is and worked to find projects that fit within it, but no. This is one of the changes I need to make soon.
So often I start a project really excited by it, but am really excited by something else before the end. Book projects always work that way, and quite a few software projects. Sometimes even talks, given a long enough lead time between being asked for a topic and actually giving the talk.
The usual result is that I become distracted before the end of the project, which leads to procrastination. That then makes it take longer, which only increases the distraction and disengagement.
What I’m saying is that if I ever say that I’m thinking of starting a PhD, you have my permission to chastise me. Four years is not within my observed boredom limit. Six months is closer to the mark.
Much is written about various paradigms or orientations of programming: Object- (nee Message-) Oriented, Functional, Structured, Dataflow, Logic, and probably others. These are often presented as camps or tribes with which to identify. A Smalltalk programmer will tell you that they are an Object-Oriented programmer, and furthermore those Johnny-come-latelies with their Java are certainly not members of the same group. A Haskell programmer will tell you that they are a functional programmer, and that that is the only way to make working software (though look closely; their Haskell is running on top of a large body of successful, imperative C code).
Notice the identification of paradigms with individual programming languages. Can you really not be object-oriented if you use F#, or is structured programming off-limits to an Objective-C coder? Why is the way that I think so tightly coupled to the tool that I choose to express my thought?
Of course, tools are important, and they do have a bearing on the way we think. But that’s at a fairly low, mechanical level, and programming is supposed to be about abstraction and high-level problem solving. You’re familiar with artists working with particular tools: there are watercolour painters and there are oil painters (and there are others too). Now imagine if the watercolour painting community (there is, of course, no such thing) decreed that it’s impossible to represent a landscape using oil paints and the oil painting community declared that watercolours are “the wrong tools for the job” of painting portraits.
This makes no sense. Oil paints and watercolour paints define how the paint interacts with the canvas, the brush, and the paint that’s already been applied. They don’t affect how the painter sees their subject, or thinks about the shapes involved and the interaction of light, shadow, reflection, and colour. They affect the presentation of those thoughts, but that’s at a mechanical low level.
Programming languages define how the code interacts with the hardware, the libraries, and the code that’s already been applied. They don’t affect how the programmer sees their problem, or thinks about the factors involved. They affect the presentation of those thoughts, but that’s at a mechanical low level.
Given a Cartesian representation of the point (x,y), find its distance from the origin and angle from the x axis.
I’m going to approach this problem using the principles of functional programming. There’s clearly a function that can take us from the coordinates to the displacement, and one that can take us from the coordinates to the angle. Ignoring the implementation for the moment, they look like this:
```
Point_radius :: float, float -> float
Point_angle  :: float, float -> float
```
This solution has its problems. I have two interchangeable arguments (both the x and y ordinates are floats) used in independent signatures; how do I make it clear that these are the same thing? How do I ensure that they’re used in the same way?
One tool in the arsenal of the functional programmer is pattern matching. I could create a single entry point with an enumeration that selects the needed operation. Now it’s clear that the operations are related, and there’s a single way to interpret the arguments, guaranteeing consistency.
Point :: float, float, Selector -> float
Good for now, but how extensible is this? What if I need to add an operation that returns a different type (for example a description that returns a string), or one that needs other arguments (for example the projection on to a different vector)? To provide that generality, I’ll use a different tool from the functional toolbox: the higher-order function. Rewrite
Point so that instead of returning a
float, it returns a function that captures the coordinates and takes any required additional arguments to return a value of the correct type. To avoid cluttering this example with irrelevant details, I’ll give that function a shorthand named type:
Point :: float, float, Selector -> Method
You may want to perform multiple operations on values that represent the same point. Using a final functional programming weapon, partial application, we can capture the coordinates and let you request different operations on the same encapsulated data.
Point :: float, float -> Selector -> Method
Now it’s clear to see that the
Point function is a constructor of some type that encapsulates the coordinates representing a given two-dimensional Cartesian point. That type is a function that, upon being given a
Selector representing some operation, returns a
Method capable of implementing that operation. The function implements message sending, and
Points are just objects!
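A runnable version of that derivation, translated from the pseudo-signatures into Python (the strings "radius" and "angle" stand in for the post’s Selector values; this is an illustrative sketch, not the original code):

```python
import math

# Point captures x and y by closure (partial application); the result maps
# selectors to methods, so the function itself is the object.
def point(x, y):
    def dispatch(selector):
        return {
            "radius": lambda: math.hypot(x, y),
            "angle": lambda: math.atan2(y, x),
        }[selector]
    return dispatch

p = point(3.0, 4.0)
print(p("radius")())  # prints 5.0
print(p("angle")())
```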
Imagine that we wanted to represent points in a different way, maybe with polar coordinates. We could provide a different function,
Point', which captures those:
Point' :: float, float -> Selector -> Method
This function has the same signature as our original function; it too encapsulates the constructor’s arguments (call them instance variables) and returns methods in response to selectors. In other words, Point and
Point' are polymorphic: if they have methods for the distance and angle selectors, they can be used interchangeably.
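In the same illustrative Python sketch, a polar implementation (standing in for Point') has the same shape and answers the same selectors, so callers can’t tell the two representations apart:

```python
import math

def point(x, y):
    return lambda selector: {
        "radius": lambda: math.hypot(x, y),
        "angle": lambda: math.atan2(y, x),
    }[selector]

# Same signature, different instance variables: r and theta are captured.
def point_polar(r, theta):
    return lambda selector: {
        "radius": lambda: r,
        "angle": lambda: theta,
    }[selector]

# Code that only sends radius and angle works with either representation.
for p in (point(0.0, 2.0), point_polar(2.0, math.pi / 2)):
    print(p("radius")(), p("angle")())
```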
Write a compiler that takes source code in some language and creates an executable. If it encounters malformed source code, it should report an error and not produce an executable.
Thinking about this with my object-oriented head, I might have a
Compiler object with some method
#compile(source:String) that returns an optional
Executable. If it doesn’t work, then use the
#getErrors():List<Error> method to find out what went wrong.
That approach will work (as with most software problems there are infinite ways to skin the same cat), but it’s got some weird design features. What will the
getErrors() method do if it’s called before the
compile() method? If
compile() is called multiple times, do earlier errors get kept or discarded? There’s some odd and unclear temporal coupling here.
To clean that up, use the object-oriented design principle “Tell, don’t ask”. Rather than requesting a list of errors from the compiler, have it tell an error-reporting object about the problems as they occur. How will it know what error reporter to use? That can be passed in, in accordance with another OO principle: dependency inversion.
```
Compiler#compile(source:String, reporter:ErrorConsumer): Optional<Executable>
ErrorConsumer#reportError(error:Error): void
```
Now it’s clear that the reporter will receive errors related to the invocation of
#compile() that it was passed to, and there’s no need for a separate accessor for the errors. This clears up confusion as to what the stored state represents, as there isn’t any.
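As a hedged sketch of that shape in Python (reportError becomes report_error, and the “compiler” is an invented stand-in that only checks for empty input), the reporter receives errors during the call, and the compiler keeps no error state between calls:

```python
from typing import Optional

# Stand-in ErrorConsumer: collects what it's told, purely for demonstration.
class CollectingReporter:
    def __init__(self):
        self.errors = []

    def report_error(self, error):
        self.errors.append(error)

# Tell, don't ask: errors go to the reporter as they occur, and the compiler
# itself stores nothing between calls.
def compile_source(source: str, reporter) -> Optional[str]:
    if not source.strip():
        reporter.report_error("empty source")
        return None  # no executable when compilation fails
    return "executable for: " + source

reporter = CollectingReporter()
print(compile_source("main = 0", reporter))  # prints executable for: main = 0
print(compile_source("   ", reporter))       # prints None
print(reporter.errors)                       # prints ['empty source']
```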
Another object-oriented tool is the Single Responsibility Principle, which invites us to design objects that have exactly one reason to change. A compiler does not have exactly one reason to change: you might need to target different hardware, change the language syntax, adopt a different executable format. Separating these concerns will yield more cohesive objects.
```
Tokeniser#tokenise(source:String, reporter:ErrorConsumer): Optional<TokenisedSource>
Compiler#compile(source:TokenisedSource, reporter:ErrorConsumer): Optional<AssemblyProgram>
Assembler#assemble(program:AssemblyProgram, reporter:ErrorConsumer): Optional<BinaryObject>
Linker#link(objects:Array<BinaryObject>, reporter:ErrorConsumer): Optional<Executable>
ErrorConsumer#reportError(error:Error): void
```
Every class in this system is named
Verber, and has a single method,
#verb. None of them has any (evident) internal state, they each map their arguments onto return values (with the exception of
ErrorConsumer, which is an I/O sink). They’re just stateless functions. Another function is needed to plug them together:
Binder<T,U,V>#bind(T->Optional<U>, U->Optional<V>): (T->Optional<V>)
And now we’ve got a compiler constructed out of functions constructed out of objects.
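The Binder can be sketched directly in Python (bind here plays the role of the Binder#bind signature above; the tokenise and compile_tokens stages are invented stand-ins for the pipeline’s real stages):

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")
V = TypeVar("V")

# bind: (T -> Optional[U]) and (U -> Optional[V]) become (T -> Optional[V]).
# If the first stage fails (returns None), the second never runs.
def bind(f: Callable[[T], Optional[U]],
         g: Callable[[U], Optional[V]]) -> Callable[[T], Optional[V]]:
    def bound(value):
        intermediate = f(value)
        return None if intermediate is None else g(intermediate)
    return bound

# Two toy stages of the "compiler" pipeline.
def tokenise(source):
    return source.split() if source.strip() else None  # None signals an error

def compile_tokens(tokens):
    return ("program", tokens)

pipeline = bind(tokenise, compile_tokens)
print(pipeline("let x = 1"))  # prints ('program', ['let', 'x', '=', '1'])
print(pipeline("   "))        # prints None
```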
Those examples were very abstract, not making any use of specific programming languages. That’s because software design is not coupled to programming languages, and paradigmatic approaches to programming are constrained ways to think about software design. They’re abstract enough to be separable from the nuts and bolts of the implementation language you choose (whether you’ve already chosen it or not).
Those functions in the
Point example could be built using the blocks available in Smalltalk, Ruby or other supposedly object-oriented languages, in which case you’d have objects built out of functions that are themselves built out of objects (which are, of course, built out of functions…). The objects and classes in the
Compiler example can easily be closures in a supposedly functional programming language. In fact, closures and blocks are not really too dissimilar.
What conclusions can be derived from all of this? Clearly different programming paradigms are far from exclusive, so the first lesson is that you don’t have to let your choice of programming language dictate your choice of problem solving approach (nor do you really have to do it the other way around). Additionally, where the approaches try to solve the same problem, the specific techniques they comprise are complementary or even identical. Both functional and object-oriented programming are about organisation, decomposition and comprehensibility, and using either or even both can help to further those aims.
Ultimately your choice of tools isn’t going to affect your ability to think by much. Whether they help you express your thoughts is a different matter, but while expression is an important part of our work it’s only a small part.
The ideas here were primarily motivated by Uday Reddy’s Objects as Closures: Abstract Semantics of Object-Oriented Languages (weird embedded PDF reader link). In the real-life version of this presentation I also talked a bit about Theorems for Free (actual PDF link) by Philip Wadler, which isn’t so related but is nonetheless very interesting :).
Take a look at your slides. For each slide, think how you would present the same information if you didn’t have the slide. Practise that, so that you can give the information on the slide without using the slide as an aide memoire. Practise that, until you can introduce that topic, discuss it, and move on to the next without a single reference to the slide. Do the same for each slide.
How will that improve my slides?
It won’t. It will improve your presentation with slides, by turning it into a presentation without slides.
As an optional extra, you could make new slides that support the presentation, but it shouldn’t be necessary.
I enjoyed Jaimee’s discussion of preparing her public talks, and realised that my approach has moved in a different way. I’ve probably talked about this before but I’ve also changed how I go about it. This is my technique, particularly where it diverges from Jaimee’s; synthesis can come later (and will undoubtedly help me!).
I start by thinking up some pithy title: previous talks including “Object-Oriented Programming in Objective-C”, “By your _cmd”, “The Principled Programmer” and “I have no idea what I’m doing” all began there. I often commit—even if only privately—to using a particular title before I have any idea what the talk will be about. I enjoy the creative exercise of fitting the rest of the talk into that constraint!
With a title in place, I brainstorm all of the things I can think of that could potentially fit into that topic. Usually I look back at that brainstorm and discover that it’s rambling, disconnected and mostly boring. Looking through, I search for two or three things that are interesting, particularly if they suggest conflicting ideas or techniques that can be explored, challenged and resolved.
Then it’s time for another outline :). This one explores the selected areas in depth, and it’s from this that I pick the main headlines for the talk, which also shape the introduction and conclusion. With those in mind I write the talk out as an essay, making sure it is consistent, complete and (to the extent I can do this myself) interesting. If it looks OK, then by this point I’ve prepared so much that I can remember the flow of the talk and give it without aids, though I still look for opportunities to support the presentation visually in the slides. In the case of my Principled Programmer talk, I realised the slides weren’t helping at all so did without them.
There are plenty of better presenters than me in the world; Jaimee is one of them. I have merely trial-and-errored my way into a situation where sometimes the same people who see me talk ask me back. I hope that by comparing my method with Jaimee’s and those of other people I can find out how to prepare a better talk.
We have this trope in programming that you should hate the code you wrote six months ago. This is a figurative way of saying that you should be constantly learning and assimilating new ideas, so that you can look at what you were doing earlier this year and have new ways of doing it.
It would be more accurate, though less visceral, to say “you should be proud that the code you wrote six months ago was the best you could do with the knowledge you then had, and should be able to see ways to improve upon it with the learning you’ve accomplished since then”. If you actually hate the code, well, that suggests that you think anyone who doesn’t have the knowledge you have now is an idiot. That kind of mentality is actually deleterious to learning, because you’re not going to listen to anyone for whom you have Set the Bozo Bit, including your younger self.
I wrote a lot about learning and teaching in APPropriate Behaviour, and thinking about that motivates me to scale this question up a bit. Never mind my code, how can we ensure that any programmer working today can look at the code I was writing six months ago and identify points for improvement? How can we ensure that I can look at the code any other programmer was working on six months ago, and identify points for improvement?
My suggestion is that programmers should know (or, given the existence of the internet, know how to use the index of) the problems that have already come before, how we solved them, and why particular solutions were taken. Reflecting back on my own career I find a lot of problems I introduced by not knowing things that had already been solved: it wasn’t until about 2008 that I really understood automated testing, a topic that was already being discussed back in 1968. Object-oriented analysis didn’t really click for me until later, even though Alan Kay and a lot of other really clever people had been working on it for decades. We’ll leave discussion of parallel programming aside for the moment.
So perhaps I’m talking about building, disseminating and updating a shared body of knowledge. The building part has already been done, but I’m not sure I’ve ever met anyone who’s read the whole SWEBOK or referred to any part of it in their own writing or presentations, so we’ll call the dissemination part a failure.
Actually, as I said we only really need an index, not the whole BOK itself: these do exist for various parts of the programming endeavour. Well, maybe not indices so much as catalogues; summaries of the state of the art occasionally with helpful references back to the primary material. Some of them are even considered “standards”, in that they are the go-to places for the information they catalogue:
- If you want an algorithm, you probably want The Art of Computer Programming or Numerical Recipes. Difficulties: you probably won’t understand what’s written in there (the latter book in particular assumes a bunch of degree-level maths).
- If you want idioms for your language, look for a catalogue called “Effective <name of your language>”. Difficulty: some people will disagree with the content here just to be contrary.
- If you want a pattern, well! Have we got a catalogue for you! In fact, have we got more catalogues than distinct patterns! There’s the Gang of Four book, the PLoP series, and more. If you want a catalogue that looks like it’s about patterns but is actually made up of random internet commentators trying to prove they know more than Alistair Cockburn, you could try out the Portland Pattern Repository. Difficulty: you probably won’t know what you’re looking for until you’ve already read it—and a load of other stuff.
I’ve already discussed how conference talks are a double-edged sword when it comes to knowledge sharing: they reach a small fraction of the practitioners, take information from an even smaller fraction, and typically set up a subculture with its own values distinct from programming in the large. The same goes for company-internal knowledge sharing programs. I know a few companies that run such programs (we do where I work, and Etsy publish the talks from theirs). They’re great for promoting research, learning and sharing within the company, but you’re always aware that you’re not necessarily discovering things from without.
So I consider this one of the great unsolved problems in programming at the moment. In fact, let me express it as two distinct questions:
- How do I make sure that I am not reinventing wheels, solving problems that no longer need solving or making mistakes that have already been fixed?
- A new (and, for the sake of this discussion, inexperienced) programmer joins my team. How do I help this person understand the problems that have already been solved, the mistakes that have already been made, and the wheels that have already been invented?
Solve this, and there are only two things left to do: fix concurrency, name things, and improve bounds checking.