Structure and Interpretation of Computer Programmers

I make it easier and faster for you to write high-quality software.

Thursday, December 20, 2018

Research Watch, and Java by Contract

I introduced Java by Contract, a tool for building design-by-contract style invariants, preconditions and postconditions in Java using annotations. It’s MIT licensed, contributions are welcome, and I hope it helps lots of people to introduce stronger correctness checking into their software. And book office hours if you’d like me to help you with that.

Java by Contract came about as part of Research Watch, a new blog series over at The Labrary where I talk about academic work and how we “practitioners” (i.e. people who computer who aren’t in academia) can make use of the results. The first post considers a report on Teaching Quality Object-Oriented Programming to computer science students.

By the way, I will be speaking at Coventry Tech Meetup on 10th January on the topic “Beyond TDD”, and Java by Contract will make an appearance there.

Long-time SICPers readers will remember Programming Literate, a Tumblr discussing results from empirical software engineering. And if you don’t, you’ll probably remember your feeds exploding on July 15, 2013 when I imported all of the posts from there to here. You can think of Research Watch as a reboot of Programming Literate. There’ll be papers new and vintage, empirical and opinionated, on a range of computing topics. If that sounds interesting, subscribe to the Labrary’s RSS feed.

posted by Graham at 10:30  

Wednesday, December 19, 2018

Teaching Quality Object-Oriented Programming

About this paper

Teaching Quality OOP by Yishai A. Feldman, published March 2005 (see the link for full citation).

Notes

One of the points made in my book Object-Oriented Programming the Easy Way is that objects should be specified by their interfaces through contracts, which say what messages the objects respond to, how you use them, and what happens as a result.

While it is up to any one object to decide how it responds to messages, we need to know whether that object represents a useful addition to our system. In other words, we want to know what the object will do in response to what messages. (Page 61)

The book Structure and Interpretation of Computer Programs says the same thing. An abstract data type has a collection of things that can be done, and stuff that happens when you do it.

In general, we can think of data as defined by some collection of selectors and constructors, together with specified conditions that these procedures must fulfill in order to be a valid representation. (§2.1.3)

This seems evident. Knowing that I have an int count(), a boolean contains(Object o) and a void add(Object o) method is insufficient; I need to know how they interact before I can use them. For an array, given:

int x = a.count();
a.add(anObject);
int y = a.count();

you would always expect y - x == 1. For a set, it would depend on the content of the collection before addition; it could be 1 or 0. Knowing what methods are called is insufficient to know what type I am dealing with.
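
The same contrast shows up with the standard collections. Here is a small, runnable illustration of my own (it is not from the paper, and uses a List to stand in for the “array” case, since Java arrays have neither count() nor add()): the identical call sequence satisfies a different contract depending on the type behind it.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CountContracts {
  public static void main(String[] args) {
    // A List always grows by one on add(), even for a duplicate element.
    List<String> list = new ArrayList<>();
    list.add("torpedo");
    int before = list.size();
    list.add("torpedo");
    assert list.size() - before == 1;

    // A Set ignores the duplicate, so the same call sequence grows it by zero.
    Set<String> set = new HashSet<>();
    set.add("torpedo");
    before = set.size();
    set.add("torpedo");
    assert set.size() - before == 0;
  }
}

Run with java -ea CountContracts to have the assertions checked.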

Why is it, then, that programming languages give you types for expressing the methods on an object, but not what they do or how they relate? Why does a Java interface, a Swift protocol, or a C++ abstract class only have the part of the contract related to names, not behaviours?

Famously, the Eiffel language addresses this, and it’s here that we come into contact with the paper that is the topic of this post. Eiffel, and its underlying theory, is well-described in the book Object-Oriented Software Construction by Bertrand Meyer, CTO of Eiffel Software and researcher at ETH Zurich. Feldman wanted to teach his students the theory of “Quality Software” based on two principles that are well-described in OOSC:

  • Design by Contract, because it’s more approachable and usable than formal methods, and more useful than testing; and
  • Command-Query Separation, because it isolates state changes, making it easier to draw conclusions about the behaviour of a software system.

But he didn’t want to teach them Eiffel, because:

it might leave the students with the mistaken and harmful impression that quality programming is confined to one language. (§3)

Additionally, OOSC does not include any exercises, so is not useful as a teaching support book. I will note here that Meyer has also written A Touch of Class, which does have supporting teaching material including exercises, but still uses the Eiffel language and therefore only solves half of the author’s problems.

Feldman taught the theory from OOSC using Eiffel notation, and encouraged students to complete exercises that were in variants of the Java programming language. Tools available at the time read the contract out of special additions to the class’s Javadoc comments, and modified the source code to include assertions at the relevant points in execution.

This led me to wonder about modern Java syntax, and whether it’s possible to make a similar tool using Java’s annotation features so that programmers don’t have to worry whether a source conversion tool has introduced errors, or trace changes to the source code when working back from a failure report to the broken source.

The answer is yes, and so now the Labrary can offer Java by Contract as a tool for designing Java types by contract. It encodes the parts of the contract as names of methods to invoke that return boolean, failing if the answer is false. A rewrite of the Map interface from page 13 of the paper is given below.

public interface Map {
  /**
   * Does the key k appear in the map?
   */
  @Precondition(name = "nonNullK")
  @Postcondition(name = "inMapIffInKeys")
  boolean has(Object k);

  default boolean nonNullK(Object k) {
    return (k != null);
  }
  default boolean inMapIffInKeys(Object k, Boolean result) {
    return (result == this.keys().has(k));
  }

  /**
   * The value of the map at key k, null if undefined.
   */
  @Precondition(name = "nonNullK")
  @Postcondition(name = "nonNullValueIffHasKey")
  Object item(Object k);

  default boolean nonNullValueIffHasKey(Object k, Object ret) {
    return ((ret != null) == this.has(k));
  }

  /**
   * Associate key k with value v.
   */
  @Precondition(name = "nonNullKeyAndValue")
  @Postcondition(name = "hasK")
  @Postcondition(name = "itemForKIsNowV")
  void put(Object k, Object v);

  default boolean nonNullKeyAndValue(Object k, Object v) {
    return ((k != null) && (v != null));
  }
  default boolean hasK(Object k, Object v, Void result) {
    return this.has(k);
  }
  default boolean itemForKIsNowV(Object k, Object v, Void result) {
    return (this.item(k) == v);
  }

  /**
   * Remove key k and associated value from map.
   */
  @Precondition(name = "nonNullK")
  @Postcondition(name = "doesNotHaveK")
  void prune(Object k);

  default boolean doesNotHaveK(Object k, Void result) {
    return !this.has(k);
  }

  /**
   * The set of all keys in the map.
   */
  @Postcondition(name = "nonNullReturn")
  ReadOnlySet keys();

  default boolean nonNullReturn(ReadOnlySet result) {
    return (result != null);
  }
}

This approach lets us write contract conditions that have full access to the internal state of the objects they are implemented on, so internal invariants can be verified in addition to the properties explored through the interface (by specifying them on the implementing class, not on the declaring interface). As a few pages of the paper are dedicated to the author’s (and the class’s) experience with various design by contract tools and their shortcomings, it’s valuable to explore this space and come up with better approaches. I hope that Java by Contract is a useful addition.
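
As a sketch of what that looks like, here is an example of my own (it is not from the Java by Contract documentation, and it assumes the annotations behave the same way on a class’s methods as they do on the interface methods above; the class and its fields are hypothetical): a condition declared on the implementing class can read private state that no interface method exposes.

public class SimpleStack {
  private Object[] slots = new Object[8];
  private int top = 0;

  @Precondition(name = "notNull")
  @Postcondition(name = "topWithinSlots")
  public void push(Object o) {
    if (top == slots.length) {
      slots = java.util.Arrays.copyOf(slots, slots.length * 2);
    }
    slots[top++] = o;
  }

  public boolean notNull(Object o) {
    return (o != null);
  }

  // The condition has full access to the private fields: clients never see
  // slots or top, but the contract can still insist that they stay consistent.
  public boolean topWithinSlots(Object o, Void result) {
    return (top >= 0 && top <= slots.length);
  }
}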

Feldman additionally notes that the Java class library, including the Collections library, violates the Command-Query Separation principle. He even notes that the language itself does: the construct

x++;

is both a command (increment x) and a query (what value did x previously have?) in a single expression. The early exercises in his class instruct students to write CQS-satisfying library objects (hence the Map example above), which the subsequent exercises build on.
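
For contrast, here is a minimal sketch of the command-query-separated version (my own illustration, not one of Feldman’s exercises): the state change and the question become two separate members, and a caller sends whichever one it actually needs.

public final class Counter {
  private int value;

  // Query: answers a question, changes nothing.
  public int value() {
    return value;
  }

  // Command: changes state, answers nothing.
  public void increment() {
    value++;
  }
}

Where x++ asks and acts in a single expression, a caller of Counter has to ask first (int previous = counter.value();) and then act (counter.increment();), which is exactly the isolation of state change that the course is after.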

Feldman left academia the year after this paper was written, and is now at IBM Research. That means the trail of development of this class has run out, and we cannot say how the teaching would have adapted to the subsequent decade of progress in Java.

posted by Graham at 16:35  

Tuesday, December 18, 2018

Cleaner Code

Readers of OOP the easy way will be familiar with the distinction between object-oriented programming and procedural programming. You will have read, in that book, how what gets called OOP in the sentence “OOP has failed” is actually procedural programming: imperative code that you could write in Pascal or C, with the word “class” used to introduce modularity.

Here’s an example of procedural-masquerading-as-OOP, from Robert C. Martin’s blog post FP vs. OO List Processing:

void updateHits(World world){
  nextShot:
  for (shot : world.shots) {
    for (klingon : world.klingons) {
      if (distance(shot, klingon) <= type.proximity) {
        world.shots.remove(shot);
        world.explosions.add(new Explosion(shot));
        klingon.hits.add(new Hit(shot));
        break nextShot;
      }
    }
  }
}

The first clue that this is a procedure, not a method, is that it isn’t attached to an object. The first change on the road to object-orientation is to make this a method. Its parameter is an instance of World, so maybe it wants to live there.

public class World {
  //...

  public void updateHits(){
    nextShot:
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingons) {
        if (distance(shot, klingon) <= type.getProximity()) {
          this.shots.remove(shot);
          this.explosions.add(new Explosion(shot));
          klingon.hits.add(new Hit(shot));
          break nextShot;
        }
      }
    }
  }
}

The next non-object-oriented feature is this free distance procedure floating about in the global namespace. Let’s give the Shot the responsibility of knowing how its proximity fuze works, and the World the knowledge of where the Klingons are.

public class World {
  //...

  private Set<Klingon> klingonsWithin(Region influence) {
    //...
  }

  public void updateHits(){
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingonsWithin(shot.getProximity())) {
        this.shots.remove(shot);
        this.explosions.add(new Explosion(shot));
        klingon.hits.add(new Hit(shot));
      }
    }
  }
}

Cool, we’ve got rid of that spaghetti code label (“That’s the first time I’ve ever been tempted to use one of those” says Martin). Incidentally, we’ve also turned “loop over all shots and all Klingons” into “loop over all shots and nearby Klingons”. The World can maintain an index of the Klingons by location, using a k-dimensional tree for instance; then searching for nearby Klingons is logarithmic in the number of Klingons, not linear.
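
Here is a sketch of that indexing idea, in one dimension for brevity (a k-dimensional tree does the same job in two or three): a sorted index makes locating the ends of a region a logarithmic lookup. The Klingon and Region stand-ins below are mine, not part of Martin’s example.

import java.util.HashSet;
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;

class Klingon {
  final double position;
  Klingon(double position) { this.position = position; }
}

class Region {
  final double min, max;
  Region(double min, double max) { this.min = min; this.max = max; }
}

class KlingonIndex {
  // Klingons keyed by position; a TreeMap keeps the keys sorted.
  private final NavigableMap<Double, Set<Klingon>> byPosition = new TreeMap<>();

  void add(Klingon k) {
    byPosition.computeIfAbsent(k.position, unused -> new HashSet<>()).add(k);
  }

  // Finding the ends of the region is logarithmic in the number of Klingons;
  // the walk between them visits only the nearby ones.
  Set<Klingon> within(Region influence) {
    Set<Klingon> nearby = new HashSet<>();
    byPosition.subMap(influence.min, true, influence.max, true)
              .values()
              .forEach(nearby::addAll);
    return nearby;
  }
}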

By the way, was it weird that a Shot would hit whichever Klingon we found first near it, then disappear, without damaging other Klingons? That’s not how Explosions work, I don’t think. As it stands, we now have a related problem: a Shot will disappear n times if it hits n Klingons. I’ll leave that as it is, carry on tidying up, and make a note to ask someone what should really happen when we’ve discovered the correct abstractions. We may want to make removing a Shot an idempotent operation, so that we can damage multiple Klingons and only end up with a Shot being removed once.

There’s a Law of Demeter violation, in that the World knows how a Klingon copes with being hit. This unreasonably couples the implementations of these two classes, so let’s make it our responsibility to tell the Klingon that it was hit.

public class World {
  //...

  private Set<Klingon> klingonsWithin(Region influence) {
    //...
  }

  public void updateHits(){
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingonsWithin(shot.getProximity())) {
        this.shots.remove(shot);
        this.explosions.add(new Explosion(shot));
        klingon.hit(shot);
      }
    }
  }
}

No, better idea! Let’s make the Shot hit the Klingon. Also, make the Shot responsible for knowing whether it disappeared (how many episodes of Star Trek are there where photon torpedoes get stuck in the hull of a ship?), and whether/how it explodes. Now we will be in a position to deal with the question we had earlier, because we can ask it in the domain language: “when a Shot might hit multiple Klingons, what happens?”. But I have a new question: does a Shot hit a Klingon, or does a Shot explode and the Explosion hit the Klingon? I hope this starship has a business analyst among its complement!

We end up with this World:

public class World {
  //...

  public void updateHits(){
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingonsWithin(shot.getProximity())) {
        shot.hit(klingon);
      }
    }
  }
}

But didn’t I say that the shot understood the workings of its proximity fuze? Maybe it should search the World for nearby targets.

public class World {
  //...

  public void updateHits(){
    for (Shot shot : this.shots) {
      shot.hitNearbyTargets();
    }
  }
}

As described in the book, OOP is not about adding the word “class” to procedural code. It’s a different way of working, in which you think about the entities you need to model to solve your problem, and give them agency. Obviously the idea of “clean code” is subjective, so I leave it to you to decide whether the end state of this method is “cleaner” than the initial state. I’m happy with one fewer loop, no conditions, and no Demeter-breaking coupling. But I’m also happy that the “OO” example is now object-oriented. It’s now looking a lot less like enterprise software, and a lot more like Enterprise software.

posted by Graham at 09:44  

Monday, December 17, 2018

Product teams: our products are not our products

Woah, too many products. Let me explain. No, it will take too long, let me summarise.

Sometimes, people running software organisations call their teams “product teams”, and organise them around particular “products”. I do not believe that this is a good idea, because we typically aren’t making products: we’re solving problems.

The difference is that a product is “done”. If you have a “product team”, they probably have a “definition of done”, and then release software that has satisfied that definition. Even where that’s iterative and incremental, it leads to there being a “product”. The thing that’s live represents as much of the product as has been done.

The implications of there being a “product” that is partially done include optimising for getting more “done”. Particularly, we will prioritise adding new stuff (getting more “done”) over fixing old stuff (shuffling the deckchairs). We will target productish metrics, like number of daily actives and time spent.

Let me propose an alternative: we are not making products, we are solving problems. And, as much out of honesty as job preservation, let me assure you that the problems are very difficult to solve. They are problems in cybernetics, in other words in communication and control in a complex system. The system is composed of three identifiable, interacting subsystems:

  1. The people who had the problem;
  2. The people who are trying to solve the problem;
  3. The software created to present the current understanding of the solution.

In this formulation, we don’t want “amount of product” to be a goal, we want “sufficiency of solution” to be a goal. We accept that the software does not represent the part of the “product” that has been “done”. The software represents our best effort to date at modelling our understanding of the solution as we comprehend it to date.

We therefore accept that adding more stuff (extending the solution) is one approach we could consider, along with fixing old stuff (reflecting new understanding in our work). We accept that introducing the software can itself change the problem, and that more people using it isn’t necessarily a goal: maybe we’ve helped people to understand that they didn’t actually need that problem solved all along.

Now our goals can be more interesting than bushels of software shovelled onto the runtime furnace: they can be about sufficiency of the solution, empowerment of the people who had the problem, and improvements to their quality of life.

posted by Graham at 15:21  

Friday, December 14, 2018

Mapping software engineering tools

Despite the theory that everything can be done in software (and of course, anything that can’t be done could in principle be approximated using numerical methods, or fudged using machine learning), software engineering itself, the business of writing software, seems to be full of tools that are treated as de facto standards but are, nonetheless, only begrudgingly accepted by many teams. What’s going on? Why, if software is eating the world, hasn’t it yet found an appealing taste for the part of the world that makes software?

Let’s take a look at some examples. Jira is very popular among many people. I found a blog post literally called Why I Love Jira. And yet, other people say that Jira is an anti-pattern, a sentiment that gets reasonable levels of community support.

Jenkins is almost certainly the (“market”, though it’s free) leader among continuous delivery tools, a position it has occupied since ousting Hudson, from which it was forked. Again, it’s possible to find people extolling the virtues and people hating on it.

Lastly, for some quantitative input, we can find that according to the Stack Overflow 2018 survey, most respondents (78.9%) love Rust, but most people use JavaScript (69.8%). From this we draw the interesting conclusion that the most popular tool in the programming language realm is not, actually, the one that wins the popularity contest.

So, weird question, why does everybody do this to themselves? And then more specifically, why is your team doing it to yourselves, and what can you do about it?

My hypothesis is that all of these tools succeed because they are highly configurable. I mean, JavaScript is basically a configuration language for Chromium (don’t @ me) to solve/cause your problem. Jira’s workflows are ridiculously configurable, and if Jenkins doesn’t do what you want then you can find a plugin to do it, write a plugin to do it or make a Groovy script that will do it.

This appeals to the desire among software engineers to find generalisations. “Look,” we say, “Jenkins is popular, it can definitely be made to do what we want, so let’s start there and configure it to our needs”.

Let’s take the opposing view for the moment. I’m going to drop the programming language example of JS/Rust, because all programming languages are, roughly speaking, entirely interchangeable. The detail is in the roughness. The argument below still applies, but requires more exposition which will inevitably lead to dissatisfaction that I didn’t cover some weird case. So, for the moment, let’s look at other tools like Jira and Jenkins.

The exact opposing view is that our project is distinct, because it caters to the needs of our customers and their (or these days, probably our) environment, and is understood and worked on by our people with our processes, which is not true for any other project. So rather than pretend that some other tool fits our needs or can be bent into shape, why don’t we build our own?

And, for our examples, building such a tool doesn’t appear to be a big deal. Using the expansive software engineering term “just”, a CD tool is “just” a way to run each step in the deployment pipeline and tell someone when a step fails. A development-tracking tool is “just” a way to list the things the team is or could be working on.

This is more or less a standard “build or buy” question, with just one level of indirection: both building and buying are actually measured in terms of time. How long would it take the team to write a new CD tool, and to maintain it? How long would it take the team to configure Jenkins, and to maintain it?

The answer should be fairly easy to consider. Let’s look at the map:

We are at x, of course. We are a short way from the Path of Parsimony, the happy path along which the generic tools work out of the box. That distance is marked on the map as .

Think about how you would measure for your team. You would consider the expectations of the out-of-the-box tool. You would consider the expectations of your team, and of your project. You would look at how those expectations differ, and try to quantify the result.

This tells you something about the gap between what the tool provides by default and what you need, which will help you quantify the amount of customisation needed (the cost of building a spur out from the Path of Parsimony to x). You can then compare that with the cost of building a tool that supports your position directly (the cost of building your own path, running through x).

But the map also suggests another option: why don’t we move from x closer to the path, and make smaller? Which of our distinct assumptions are incidental and can be abandoned, which are essential and need to be supported, and which are historical and could be revised? Is there a way to change the context so that adopting the popular tool is cheaper?

[Left out of the map but just as important is the related question: has somebody else already charted a different path, and how far are we from that? In other words, is there a different off-the-shelf product which needs less configuration than the one we’ve picked, so the total migration-plus-configuration cost is less than sticking where we are?]

My impression is that these questions tend to get asked once at the start of a project or initiative, then not again until the team is so far away from the Path of Parsimony that they are starting to get tangled and stung by the Weeds of Woe. Teams that change tooling such as their issue trackers or CD pipeline tend to do it once the existing way is already hurting too much, and the route back to the path no longer clear.

posted by Graham at 14:01  

Sunday, December 9, 2018

More speed, lower velocity

I frequently meet software teams who describe themselves as “high velocity” (they even have graphs coming from Jira to prove it), and yet their ability to ship great software, to delight their customers, or even to attract customers, doesn’t meet their expectations. A little bit of sleuthing usually discovers the underlying problem.

Firstly, let’s take a look at that word, “velocity”. I, like Kevlin Henney, have a background in Physics, and therefore I agree with him that Velocity is a vector, and has a direction. But “agile” velocity only measures the amount of stuff done to the system over time, not the direction in which it takes the system. That story may be “5 points” when measured in terms of heft, but is that five points of increasing existing customer satisfaction? Five points of new capability that will be demoed at next month’s trade show? Five points of attractiveness to prospects in the sales funnel?

Or is it five points of making it harder for a flagship customer to get their work done? Five points of adding thirty-five points of technical debt work later? Five points of integrating the lead engineer’s pet technology?

All of these things look the same in this model: they all look like five points. And that means that for a “high-velocity” (but really low-velocity, high-speed) team, the natural inclination is to jump on it, get it done, and get those five points under their belt and onto the burn-down chart. The faster they burn everything down, the better they look.

Some of the presenting symptoms of a high-speed, low-velocity team are listed below. If you recognise these in your team, book yourself in for office hours and we’ll see if we can get you unstuck.

  • “The Business”: othering the rest of the company. The team believes that their responsibility is to build the thing that they were asked for, and “the business” needs to tell them what to build, and to sell it.
  • Work to rule: we build exactly what was asked for, no more, no less. If the tech debt is piling up it’s because “the business” (q.v.) doesn’t give us time to fix it. If we built the wrong thing it’s because “the business” put it at the top of the backlog. If we built the thing wrong it’s because the acceptance criteria weren’t made clear before we started.
  • Nearly done == done: look, we know our rolling average velocity is 20 bushels of software, and we only have 14 furlongs and two femtocandela of software to show at this demo. But look over here! These 12 lumens and 4 millitesla of software are in QA, which is nearly done, so we’ve actually been working really hard. The fact that you can’t use any of that stuff is unimportant.
  • Mini-waterfall: related to work to rule (q.v.), this is the requirement that everyone do their bit of the process in order, so that the software team can optimise for requirements in -> software out and get that sweet velocity up. We don’t want to be doing discovery in engineering, because that means uncertainty, uncertainty means rework, and rework means lower velocity.
  • Punitive estimation: we’re going to rename “ambiguity” to “risk”, and then punish our product owner for giving us risky stories by boosting their estimates to account for the “risk”. Such stories will never get scheduled, because we’ll never be asked to do that one risky thing when we can get ten straightforward things done in what we are saying is the same time.
  • Story per dev: as a team, our goal is to shovel as much software onto the runtime furnace as possible. Therefore we are going to fan out the tasks to every individual. We are each capable of wielding our own shovel, and very rarely do we accidentally hit each other in the face while shovelling.
posted by Graham at 10:37  

Thursday, December 6, 2018

Figurative Programming and Gloom: the [G]raphical [LOOM]

Donald Knuth is pretty cool. One of the books he wrote that I own and have actually read[*] is Literate Programming, in which he describes (among other things) weaving program text and documentation together in a single narrative.

Two of his books that I own and have sort of dipped into here and there are TeX: the Program, and METAFONT: the Program. These are literate programs, created from webs in which human text and computer text are interleaved to tell the story of what the program does.

Human text and computer text, but not images. If you want pictures, you have to carry them around separately. Even though we are highly visual organisms, and many of the programs we produce have significant graphical components, very few programming environments treat images as anything other than external files that can be looked at and maybe previewed. The only programming environment I know of that lets you include images in program source is TempleOS.

I decided to extend the idea of the Literate web to the realm of Figurative Programming. A gloom (graphical loom) web can contain human text, computer text, and image descriptions (e.g. graphviz, plantuml, GLE…) which get included in the human-readable document as figures.

The result is gloom. It’s written in itself, so the easiest way to get started is with the Xcode project at gloomstrap, which can extract the proper gloom sources from the gloom web. Alternatively, you can dive in and read the PDF it made about itself.

Because I built gloomstrap first, gloom is really a retelling of that program in a Figurative Programming web, rather than a program that was designed figuratively. Because of that, I don’t really have experience yet of trying to design a system in gloom. My observation was that the class hierarchy I came up with in building gloomstrap didn’t always lend itself to linear storytelling for inclusion in a web. I expect that were I to have designed it in noweb rather than Xcode, I would have had a different hierarchy or even no classes at all.

Similarly, I didn’t try test-firsting in gloom, and nor did I port the tests that I did write into the web. Instinct tells me that it would be a faff, but I will try it and find out. I think richer expressions of program intention can only be a good thing, and if Figurative Programming is not the way in which that can be done, then at least we will find out something about what to do instead.

[*] Coming up in January’s De Programmatica Ipsum: The Art of _The Art of Computer Programming_, an article about a book that I have _definitely_ read _quite a few bits here and there_ of.

posted by Graham at 16:19  

Monday, December 3, 2018

Two books

A member of a mailing list I’m on recently asked: what two books should be on every engineer’s bookshelf? Here’s my answer.

Many software engineers, the ones described toward the end of Code Complete 2, would benefit most from Donald Knuth’s The Art of Computer Programming and Computers and Typesetting. It is truly astounding that one man has contributed so comprehensively to the art of variable-height monitor configurations.

If, to misquote Bill Hicks, “you’ve got yourself a reader”, then my picks are coloured by the fact that I’ve been trying to rehabilitate Object-Oriented Design for the last few years, by re-introducing a couple of concepts that got put aside over the recent decades:

  1. Object orientation; and
  2. Design.

With that in mind, my two recommendations are the early material from that field that I think shows the biggest divergence in thinking. Readers should be asking themselves “are these two authors really writing about the same topic?”, “where is the user of the software system in this book?”, “who are the users of the software system in this book?”, and “do I really need to choose one or other of these models, why not both or bits of both?”

  1. “Object-Oriented Programming: an evolutionary approach” by Brad Cox (there is another edition with Andrew Novobilski as a co-author). Cox’s model is the npm/CPAN model: programmers make objects (“software ICs”), describe their characteristics in a data sheet, and publish them in a catalogue. Integrators choose likely-looking objects from the catalogue and assemble an application out of them.

  2. “Object-Oriented Software Construction” by Bertrand Meyer. Meyer’s model is the “software engineering” model: work out what the system should do, partition that into “classes” based on where the data should naturally live, and design and build those classes. In designing the classes, pay particular attention to the expectations governing how they communicate: the ma, as Alan Kay called the gaps between the objects.

posted by Graham at 10:40  
