On the continuous history of approximation

The Difference Engine – the Charles Babbage machine, not the steampunk novel – is a device for tabulating successive values of a polynomial by adding up the differences that each term contributes between one input value and the next.

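To make that concrete, here is a minimal sketch in Java (mine, not Babbage’s design): to tabulate p(x) = 2x^2 + 3x + 5 at x = 0, 1, 2, …, seed the initial value and its finite differences, then produce every subsequent entry using nothing but addition. The class name and the example polynomial are illustrative.

public class DifferenceEngine {
    public static void main(String[] args) {
        // For a degree-2 polynomial the second difference is constant.
        // p(0) = 5; p(1) - p(0) = 5; the second difference is 4.
        double value = 5;
        double firstDifference = 5;
        final double secondDifference = 4;
        for (int x = 0; x <= 10; x++) {
            System.out.printf("p(%d) = %.0f%n", x, value);
            value += firstDifference;            // next value of p
            firstDifference += secondDifference; // next first difference
        }
    }
}
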
This sounds like a fairly niche market, but in fact it’s quite useful, because a whole lot of other functions can be approximated by polynomials. The approach, which is based in calculus, generates a Taylor series (or a Maclaurin series, if the approximation is for input values near zero).

Now, it happens that this collection of other functions includes logarithms:

\(\ln(1+x) \approx x - x^2/2 + x^3/3 - x^4/4 + \ldots\)

and exponents:

\(e^x \approx 1 + x + x^2/2! + x^3/3! + x^4/4! + \ldots\)

and so, given a difference engine, you can make tables of logarithms and exponents.

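As an illustration of how little work each approximation takes (this is my own sketch, not the glibc code quoted below; the class and method names are made up), truncating the two series after a handful of terms is already close to the library answers for small x:

public class SeriesApproximations {
    // ln(1+x) ~ x - x^2/2 + x^3/3 - x^4/4 + ...
    static double lnOnePlus(double x, int terms) {
        double sum = 0;
        double power = x;
        for (int n = 1; n <= terms; n++) {
            sum += (n % 2 == 1 ? power : -power) / n;
            power *= x;
        }
        return sum;
    }

    // e^x ~ 1 + x + x^2/2! + x^3/3! + ...
    static double exp(double x, int terms) {
        double sum = 1;
        double term = 1;
        for (int n = 1; n <= terms; n++) {
            term *= x / n; // x^n / n!, built from the previous term
            sum += term;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(lnOnePlus(0.1, 8) + " vs " + Math.log(1.1));
        System.out.println(exp(0.5, 8) + " vs " + Math.exp(0.5));
    }
}

For x = 0.1 the error left over after eight terms of the log series is on the order of x^9/9, so the two printed values agree to roughly nine decimal places.
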
In fact, your computer is probably using exactly this approach to calculate those functions. Here’s how glibc calculates ln(x) for x roughly equal to 1:

  r = x - 1.0;
  r2 = r * r;
  r3 = r * r2;
  y = r3 * (B[1] + r * B[2] + r2 * B[3]
    + r3 * (B[4] + r * B[5] + r2 * B[6]
        + r3 * (B[7] + r * B[8] + r2 * B[9] + r3 * B[10])));
  // some more twiddling that adds terms in r and r*r, then return y

In other words, it works out r so that it is calculating ln(1+r) instead of ln(x). Then it adds together r + a*r^2 + b*r^3 + c*r^4 + d*r^5 + ... + k*r^12: it is doing the Taylor series for ln(1+r)!

Now, given these approximations, we can combine numbers into probabilities (using the sigmoid function, which is in terms of e^x) and find the errors on those probabilities (using the cross entropy, which is in terms of ln(x)). We can build a learning neural network!

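For the record, here are those two formulas as they are usually written, in a small sketch of my own (it leans on the standard library for exp and log, though the series above could stand in for them; the class name and example values are arbitrary):

public class Learning {
    // sigmoid(z) = 1 / (1 + e^-z): squashes any real number into (0, 1)
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // binary cross entropy: -[y ln(p) + (1 - y) ln(1 - p)] for a label y in {0, 1}
    static double crossEntropy(double label, double predicted) {
        return -(label * Math.log(predicted) + (1.0 - label) * Math.log(1.0 - predicted));
    }

    public static void main(String[] args) {
        double p = sigmoid(2.0);                  // about 0.88
        System.out.println(crossEntropy(1.0, p)); // small error: the label agrees
        System.out.println(crossEntropy(0.0, p)); // large error: the label disagrees
    }
}
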
And, more than a century after it was designed, we could still do all of this on the Difference Engine.

Java By Contract: a Worked Example

Java by Contract is an implementation of Design by Contract, as promoted by Bertrand Meyer and the Eiffel Software company, for the Java programming language. The contract is specified using standard Java methods and annotations, making it a more reliable tool than earlier work which used javadoc comments and rewrote the Java source code to include the relevant tests.

Which is all well and good, but how do you use it? Here’s an example.

The problem

There is a whole class of algorithms for approximately finding the roots of a function using an iterative technique. Given, in Java syntax, the abstract type MathFunction that represents a function over the double type:

interface MathFunction {
    double f(double x);
}

Define the abstract interface that exposes such an iterative solution, including the details of its contract.

The solution

The interface is designed using the Command-Query Separation Principle. Given access to the function f(), the interface has a command findRoot(seed1, seed2) which locates a root between those two values, and a query root() which returns that root. Additionally, a boolean query exhaustedIterations() reports whether the solution ran out of iterations before it converged.

Both of the queries have the precondition that the command must previously have successfully run; i.e. you cannot ask what the answer was without requesting that the answer be discovered.

The contract on the command is more interesting. The precondition is that for the two seed values seed1 and seed2, one of them must correspond to a point f(x) > 0 and the other to a point f(x) < 0 (it does not matter which). This guarantees an odd, and therefore non-zero, number of roots[*] of f(x) between the two, and the method will iterate toward one of them. If the precondition does not hold, then an even number of roots (possibly zero) lies between the seed values, so it cannot be guaranteed that a solution exists to be found.

In return for satisfying the precondition, the command guarantees that it either finds a root or exhausts its iteration allowance looking. Another way of putting that: if the method exits early, it is because it has already found a convergent solution.

Many, but not all, of these contract details can be provided as default method implementations in the interface. The remainder must be supplied by the implementing class.

/**
 * Given a mathematical function f over the doubles, and two bounds for a root to that function,
 * find the root using an (unspecified) iterative approach.
 * A root is an input value x such that f(x)=0.
 */
public interface RootFinder {
    /**
     * @return The function that this object is finding a root for.
     */
    MathFunction f();
    /**
     * A root to the function f() is thought to lie between seed1 and seed2. Find it.
     * @param seed1 One boundary for the root to f().
     * @param seed2 Another boundary for the root to f().
     */
    @Precondition(name = "seedGuessesStraddleRoot")
    @Postcondition(name = "earlyExitImpliesConvergence")
    void findRoot(double seed1, double seed2);
    /**
     * @return The root to the function f() that was discovered.
     */
    @Precondition(name = "guessWasCalculated")
    Double root();
    /**
     * @return Whether the iterative solution used the maximum number of iterations.
     */
    @Precondition(name = "guessWasCalculated")
    boolean exhaustedIterations();

    default Boolean guessWasCalculated() {
        return this.root() != null;
    }
    default Boolean seedGuessesStraddleRoot(Double seed1, Double seed2) {
        double r1 = f().f(seed1);
        double r2 = f().f(seed2);
        return ((r1 > 0 && r2 < 0) || (r1 < 0 && r2 > 0));
    }
    Boolean earlyExitImpliesConvergence(Double seed1, Double seed2, Void result);
}

Example usage

There are swaths of algorithms to implement this interface. See, for example, the book Numerical Recipes. Given a particular implementation, we can look for roots of a simple function, for example f(x) = x^2 - 2:

    RootFinder squareRootOfTwo = SecantRootFinder.finderForFunction((double x) -> x*x - 2);
    squareRootOfTwo.findRoot(1.0, 2.0);
    System.out.println(String.format("Root: %f", squareRootOfTwo.root()));
    System.out.println(String.format("The solution did%s converge before hitting the iteration limit",
            squareRootOfTwo.exhaustedIterations()?"n't":""));

This suggests that a root exists at x~=1.414214, and that it converged on the solution before running out of goes. Let’s see if there’s another root between 2 and 3:

Exception in thread "main" online.labrary.javaByContract.ContractViolationException:
  online.labrary.javaByContract.Precondition seedGuessesStraddleRoot had unexpected value false on object
  online.labrary.jbcTests.TestGeneratorTests$SecantRootFinder@29774679
    at javaByContract/online.labrary.javaByContract.ContractEnforcer.invoke(ContractEnforcer.java:92)
    at jdk.proxy1/com.sun.proxy.jdk.proxy1.$Proxy4.findRoot(Unknown Source)
    at javaByContract/online.labrary.rootFinder.RootFinder.main(RootFinder.java:10)

Whoops! I’m holding it wrong: the function doesn’t change sign between x=2 and x=3. I shouldn’t expect the tool to work, and indeed it’s been designed to communicate that expectation by failing a precondition.

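To round the example out, here is a sketch of one possible implementing class. This is mine, not part of Java by Contract, and it uses the bisection method rather than the SecantRootFinder shown above; the iteration limit and tolerance are arbitrary choices.

public class BisectionRootFinder implements RootFinder {
    private static final int MAX_ITERATIONS = 50;
    private static final double TOLERANCE = 1e-9;
    private final MathFunction function;
    private Double foundRoot = null;
    private boolean usedAllIterations = false;

    public BisectionRootFinder(MathFunction function) {
        this.function = function;
    }

    @Override
    public MathFunction f() {
        return function;
    }

    @Override
    public void findRoot(double seed1, double seed2) {
        double lower = Math.min(seed1, seed2);
        double upper = Math.max(seed1, seed2);
        double midpoint = (lower + upper) / 2.0;
        int iterations = 0;
        while (iterations < MAX_ITERATIONS && Math.abs(f().f(midpoint)) > TOLERANCE) {
            // Keep the half-interval whose endpoints still straddle the root.
            if (f().f(lower) * f().f(midpoint) < 0) {
                upper = midpoint;
            } else {
                lower = midpoint;
            }
            midpoint = (lower + upper) / 2.0;
            iterations++;
        }
        usedAllIterations = (iterations == MAX_ITERATIONS);
        foundRoot = midpoint;
    }

    @Override
    public Double root() {
        return foundRoot;
    }

    @Override
    public boolean exhaustedIterations() {
        return usedAllIterations;
    }

    // The postcondition the interface could not supply by default:
    // an early exit implies the answer converged to within tolerance.
    @Override
    public Boolean earlyExitImpliesConvergence(Double seed1, Double seed2, Void result) {
        return exhaustedIterations() || Math.abs(f().f(root())) <= TOLERANCE;
    }
}
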
[*] Nitpick: roots _or singularities_.

Updates to JavaByContract

Some improvements to JavaByContract, the design-by-contract tool for Java:

  • Preconditions, Postconditions and Invariants now appear in the Javadoc for types that use JavaByContract. While this is only a small source change, it’s a huge usability improvement, as programmers using your types can now read the contracts for those types in their documentation.
  • There is Javadoc for the JavaByContract package.
  • The error message on contract violation distinguishes between precondition, postcondition and invariant violation.

I’m speaking generally about moving beyond TDD, using JavaByContract as a specific example, at Coventry Tech Meetup next week. See you there!

The App that Wasn’t (Yet)

One of the early goals written into the mission statement of the Labrary was an eponymous app for organising research notes. I’ve used Mekentosj Springer Readcube Papers for years, and encountered Mendeley and others, and found that they were all more focussed on the minutiae of reference management, rather than the activity of studying and learning from the material you’re collecting in your library. Clearly those are successful apps that have an audience, but is there space for something more lightweight?

I talked to a few people, and the answer was yes. There were people in software engineering, data science, and physics who identified as “light” consumers of academic literature, people who read the primary literature to learn from and find techniques to apply, but do not need or even want the full cognitive weight of bibliographic reference management. They (well, “we”, I wanted it too) wanted to make notes while they were reading papers, and find those notes again. We wanted to keep tags on interesting references to follow up. We wanted to identify the questions we had, and whether they were answered. And we wanted to have enough information—but not more—to help us find the original article again.

My first prototype was as simple as I could make it. There’s a picture below: it’s a ring binder, with topic dividers, and paper notes (at least one separate sheet for each article) which quickly converged on a pro forma layout as shown.

An early prototype of the Labrary app.

I liked it; in fact, I quickly got to a point where I wouldn’t read an article unless I had access to a pad and pen to add a page to my binder. People I showed it to liked it, too. So this seemed like a good time to crack open the software-making tools!

The first software prototype was put together in spare time using GNUstep and Renaissance, and evinced two problems:

  • The UI design led back down the route of “bibliopedantry”, forcing students to put more effort into getting the citation details correct than they wanted to.
  • Renaissance lacked support for some Cocoa controls it would have been helpful to use, so there was a choice to be made to invest more into improving Renaissance or finding a different UI layout tool.

A screenshot of the ill-fated “Library” window in Labrary’s GNUstep prototype.

This experience made me look for other inspiration for ways to organise the user interface so that students get the experience of taking notes, not of fiddling with citation data. I considered writing Labrary as a plugin for the free Calibre e-reader app, so that Labrary could focus on being about study notes and Calibre could focus on being about library management. But ultimately I found the tool that solved the problem best: Apple’s Finder.

The Labrary pro forma note as Finder stationery.

I’ve recreated the pro forma note from the binder as a text file, and set the “Stationery Pad” flag in the Finder. When I open this file, Finder creates a duplicate and opens that instead, in my editor of choice: ready to become a new study note! I put this in a folder with a Zim index file, so I can get the “shoebox” view of all the notes by opening the folder in Zim. It also does full-content searching, so the goal of finding a student’s notes again is achieved.

Zim open on my research notes folder.

I’m glad I created the lo-fi paper prototype. It let me understand what I was trying to achieve, and show very quickly that my software implementation was going in the wrong direction. And I’m always happy to be the person to say “do we need to write this, or can it be built out of other bits?”, as I explored for this project with Zim and Calibre.

Research Watch, and Java by Contract

I introduced Java by Contract, a tool for building design-by-contract style invariants, preconditions and postconditions in Java using annotations. It’s MIT licensed, contributions are welcome, and I hope it helps lots of people to introduce stronger correctness checking into their software. And book office hours if you’d like me to help you with that.

Java by Contract came about as part of Research Watch, a new blog series over at The Labrary where I talk about academic work and how we “practitioners” (i.e. people who computer but aren’t in academia) can make use of the results. The first post considers a report on Teaching Quality Object-Oriented Programming to computer science students.

By the way, I will be speaking at Coventry Tech Meetup on 10th January on the topic “Beyond TDD”, and Java by Contract will make an appearance there.

Long-time SICPers readers will remember Programming Literate, a Tumblr discussing results from empirical software engineering. And if you don’t, you’ll probably remember your feeds exploding on July 15, 2013 when I imported all of the posts from there to here. You can think of Research Watch as a reboot of Programming Literate. There’ll be papers new and vintage, empirical and opinionated, on a range of computing topics. If that sounds interesting, subscribe to the Labrary’s RSS feed.

Cleaner Code

Readers of OOP the easy way will be familiar with the distinction between object-oriented programming and procedural programming. You will have read, in that book, about how what we claim is OOP in the sentence “OOP has failed” is actually procedural programming: imperative code that you could write in Pascal or C, with the word “class” used to introduce modularity.

Here’s an example of procedural-masquerading-as-OOP, from Robert C. Martin’s blog post FP vs. OO List Processing:

void updateHits(World world){
  nextShot:
  for (shot : world.shots) {
    for (klingon : world.klingons) {
      if (distance(shot, klingon) <= type.proximity) {
        world.shots.remove(shot);
        world.explosions.add(new Explosion(shot));
        klingon.hits.add(new Hit(shot));
        break nextShot;
      }
    }
  }
}

The first clue that this is a procedure, not a method, is that it isn’t attached to an object. The first change on the road to object-orientation is to make this a method. Its parameter is an instance of World, so maybe it wants to live there.

public class World {
  //...

  public void updateHits(){
    nextShot:
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingons) {
        if (distance(shot, klingon) <= type.getProximity()) {
          this.shots.remove(shot);
          this.explosions.add(new Explosion(shot));
          klingon.hits.add(new Hit(shot));
          break nextShot;
        }
      }
    }
  }
}

The next non-object-oriented feature is this free distance procedure floating about in the global namespace. Let’s give the Shot the responsibility of knowing how its proximity fuze works, and the World the knowledge of where the Klingons are.

public class World {
  //...

  private Set<Klingon> klingonsWithin(Region influence) {
    //...
  }

  public void updateHits(){
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingonsWithin(shot.getProximity())) {
        this.shots.remove(shot);
        this.explosions.add(new Explosion(shot));
        klingon.hits.add(new Hit(shot));
      }
    }
  }
}

Cool, we’ve got rid of that spaghetti code label (“That’s the first time I’ve ever been tempted to use one of those,” says Martin). Incidentally, we’ve also turned “loop over all shots and all Klingons” into “loop over all shots and nearby Klingons”. If the World maintains an index of the Klingons by location, using a k-dimensional tree for example, then searching for nearby Klingons is logarithmic in the number of Klingons, not linear.

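A sketch of what that query might look like (my filling-in, not code from the original post; Region.contains and Klingon.getPosition are assumed names) keeps the linear scan for now, with the spatial index as a later drop-in optimisation:

public class World {
  //...

  private Set<Klingon> klingonsWithin(Region influence) {
    // Naive version: a linear scan over all Klingons. If this.klingons were
    // indexed by location (in a k-dimensional tree, say), this query could be
    // answered in logarithmic rather than linear time.
    Set<Klingon> nearby = new HashSet<>();
    for (Klingon klingon : this.klingons) {
      if (influence.contains(klingon.getPosition())) {
        nearby.add(klingon);
      }
    }
    return nearby;
  }
}
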
By the way, was it weird that a Shot would hit whichever Klingon we found first near it, then disappear, without damaging other Klingons? That’s not how Explosions work, I don’t think. As it stands, we now have a related problem: a Shot will disappear n times if it hits n Klingons. I’ll leave that as it is, carry on tidying up, and make a note to ask someone what should really happen when we’ve discovered the correct abstractions. We may want to make removing a Shot an idempotent operation, so that we can damage multiple Klingons and only end up with a Shot being removed once.

There’s a Law of Demeter violation, in that the World knows how a Klingon copes with being hit. This unreasonably couples the implementations of these two classes, so let’s make it our responsibility to tell the Klingon that it was hit.

public class World {
  //...

  private Set<Klingon> klingonsWithin(Region influence) {
    //...
  }

  public void updateHits(){
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingonsWithin(shot.getProximity())) {
        this.shots.remove(shot);
        this.explosions.add(new Explosion(shot));
        klingon.hit(shot);
      }
    }
  }
}

No, better idea! Let’s make the Shot hit the Klingon. Also, make the Shot responsible for knowing whether it disappeared (how many episodes of Star Trek are there where photon torpedoes get stuck in the hull of a ship?), and whether/how it explodes. Now we will be in a position to deal with the question we had earlier, because we can ask it in the domain language: “when a Shot might hit multiple Klingons, what happens?”. But I have a new question: does a Shot hit a Klingon, or does a Shot explode and the Explosion hit the Klingon? I hope this starship has a business analyst among its complement!

We end up with this World:

public class World {
  //...

  public void updateHits(){
    for (Shot shot : this.shots) {
      for (Klingon klingon : this.klingonsWithin(shot.getProximity())) {
        shot.hit(klingon);
      }
    }
  }
}

But didn’t I say that the shot understood the workings of its proximity fuze? Maybe it should search the World for nearby targets.

public class World {
  //...

  public void updateHits(){
    for (Shot shot : this.shots) {
      shot.hitNearbyTargets();
    }
  }
}

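For completeness, here is one sketch of where those responsibilities could land. This is my extrapolation, not code from the post: it assumes the World exposes klingonsWithin and an addExplosion method to Shots, and that the World sweeps spent Shots out of its collection after updateHits rather than removing them mid-iteration.

public class Shot {
  //...
  private final World world;
  private final Region proximity;
  private boolean spent = false;

  public Shot(World world, Region proximity) {
    this.world = world;
    this.proximity = proximity;
  }

  public Region getProximity() {
    return proximity;
  }

  // The Shot is responsible for knowing whether it disappeared.
  public boolean isSpent() {
    return spent;
  }

  public void hitNearbyTargets() {
    for (Klingon klingon : world.klingonsWithin(getProximity())) {
      hit(klingon);
    }
  }

  public void hit(Klingon klingon) {
    explode();
    klingon.hit(this); // the Klingon decides how it copes with being hit
  }

  private void explode() {
    // Exploding is idempotent: hitting several Klingons creates one Explosion,
    // and the Shot only disappears once.
    if (!spent) {
      spent = true;
      world.addExplosion(new Explosion(this));
    }
  }
}

The loop and the idempotence check have moved into the Shot, where they arguably belong; the World’s updateHits stays as simple as shown above.
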
As described in the book, OOP is not about adding the word “class” to procedural code. It’s a different way of working, in which you think about the entities you need to model to solve your problem, and give them agency. Obviously the idea of “clean code” is subjective, so I leave it to you to decide whether the end state of this method is “cleaner” than the initial state. I’m happy with one fewer loop, no conditions, and no Demeter-breaking coupling. But I’m also happy that the “OO” example is now object-oriented. It’s now looking a lot less like enterprise software, and a lot more like Enterprise software.

Product teams: our products are not our products

Woah, too many products. Let me explain. No, it will take too long, let me summarise.

Sometimes, people running software organisations call their teams “product teams”, and organise them around particular “products”. I do not believe that this is a good idea. Because we typically aren’t making products, we’re solving problems.

The difference is that a product is “done”. If you have a “product team”, they probably have a “definition of done”, and then release software that has satisfied that definition. Even where that’s iterative and incremental, it leads to there being a “product”. The thing that’s live represents as much of the product as has been done.

The implications of there being a “product” that is partially done include optimising for getting more “done”. Particularly, we will prioritise adding new stuff (getting more “done”) over fixing old stuff (shuffling the deckchairs). We will target productish metrics, like number of daily actives and time spent.

Let me propose an alternative: we are not making products, we are solving problems. And, as much out of honesty as job preservation, let me assure you that the problems are very difficult to solve. They are problems in cybernetics, in other words in communication and control in a complex system. The system is composed of three identifiable, interacting subsystems:

  1. The people who had the problem;
  2. The people who are trying to solve the problem;
  3. The software created to present the current understanding of the solution.

In this formulation, we don’t want “amount of product” to be a goal, we want “sufficiency of solution” to be a goal. We accept that the software does not represent the part of the “product” that has been “done”. The software represents our best effort to date at modelling our understanding of the solution as we comprehend it to date.

We therefore accept that adding more stuff (extending the solution) is one approach we could consider, along with fixing old stuff (reflecting new understanding in our work). We accept that introducing the software can itself change the problem, and that more people using it isn’t necessarily a goal: maybe we’ve helped people to understand that they didn’t actually need that problem solved all along.

Now our goals can be more interesting than bushels of software shovelled onto the runtime furnace: they can be about sufficiency of the solution, empowerment of the people who had the problem, and improvements to their quality of life.

Mapping software engineering tools

Despite the theory that everything can be done in software (and of course, anything that can’t be done could in principle be approximated using numerical methods, or fudged using machine learning), software engineering itself, the business of writing software, seems to be full of tools that are adopted as de facto standards but nonetheless only begrudgingly accepted by many teams. What’s going on? Why, if software is eating the world, hasn’t it yet found an appealing taste for the part of the world that makes software?

Let’s take a look at some examples. Jira is very popular among many people. I found a blog post literally called Why I Love Jira. And yet, other people say that Jira is an anti-pattern, a sentiment that gets reasonable levels of community support.

Jenkins is almost certainly the (“market”, though it’s free) leader among continuous delivery tools, a position it has occupied since ousting Hudson, from which it was forked. Again, it’s possible to find people extolling its virtues and people hating on it.

Lastly, for some quantitative input, we can find that according to the Stack Overflow 2018 survey, most respondents (78.9%) love Rust, but most of them use JavaScript (69.8%). From this we draw the interesting conclusion that the most-used tool in the programming language realm is not, actually, the one that wins the popularity contest.

So, weird question, why does everybody do this to themselves? And then more specifically, why is your team doing it to yourselves, and what can you do about it?

My hypothesis is that all of these tools succeed because they are highly configurable. I mean, JavaScript is basically a configuration language for Chromium (don’t @ me) to solve/cause your problem. Jira’s workflows are ridiculously configurable, and if Jenkins doesn’t do what you want then you can find a plugin to do it, write a plugin to do it or make a Groovy script that will do it.

This appeals to the desire among software engineers to find generalisations. “Look,” we say, “Jenkins is popular, it can definitely be made to do what we want, so let’s start there and configure it to our needs”.

Let’s take the opposing view for the moment. I’m going to drop the programming language example of JS/Rust, because all programming languages are, roughly speaking, entirely interchangeable. The detail is in the roughness. The argument below still applies, but requires more exposition which will inevitably lead to dissatisfaction that I didn’t cover some weird case. So, for the moment, let’s look at other tools like Jira and Jenkins.

The exact opposing view is that our project is distinct, because it caters to the needs of our customers and their (or these days, probably our) environment, and is understood and worked on by our people with our processes, which is not true for any other project. So rather than pretend that some other tool fits our needs or can be bent into shape, why don’t we build our own?

And, for our examples, building such a tool doesn’t appear to be a big deal. Using the expansive software engineering term “just”, a CD tool is “just” a way to run each step in the deployment pipeline and tell someone when a step fails. A development-tracking tool is “just” a way to list the things the team is or could be working on.

This is more or less a standard “build or buy” question, with just one level of indirection: both building and buying are actually measured in terms of time. How long would it take the team to write a new CD tool, and to maintain it? How long would it take the team to configure Jenkins, and to maintain it?

The answer should be fairly easy to consider. Let’s look at the map:

We are at x, of course. We are a short way from the Path of Parsimony, the happy path along which the generic tools work out of the box. That distance is marked on the map.

Think about how you would measure that distance for your team. You would consider the expectations of the out-of-the-box tool. You would consider the expectations of your team, and of your project. You would look at how those expectations differ, and try to quantify the result.

This tells you something about the gap between what the tool provides by default and what you need, which will help you quantify the amount of customisation needed (the cost of building a spur out from the Path of Parsimony to x). You can then compare that with the cost of building a tool that supports your position directly (the cost of building your own path, running through x).

But the map also suggests another option: why don’t we move from x closer to the path, and make that distance smaller? Which of our distinct assumptions are incidental and can be abandoned, which are essential and need to be supported, and which are historical and could be revised? Is there a way to change the context so that adopting the popular tool is cheaper?

[Left out of the map but just as important is the related question: has somebody else already charted a different path, and how far are we from that? In other words, is there a different off-the-shelf product which needs less configuration than the one we’ve picked, so the total migration-plus-configuration cost is less than sticking where we are?]

My impression is that these questions tend to get asked once at the start of a project or initiative, then not again until the team is so far away from the Path of Parsimony that they are starting to get tangled and stung by the Weeds of Woe. Teams that change tooling such as their issue tracker or CD pipeline tend to do it once the existing way is already hurting too much, and the route back to the path is no longer clear.

More speed, lower velocity

I frequently meet software teams who describe themselves as “high velocity”; they even have graphs coming from Jira to prove it. And yet their ability to ship great software, to delight their customers, or even to attract customers, doesn’t meet their expectations. A little bit of sleuthing usually uncovers the underlying problem.

Firstly, let’s take a look at that word, “velocity”. I, like Kevlin Henney, have a background in Physics, and therefore I agree with him that Velocity is a vector, and has a direction. But “agile” velocity only measures amount of stuff done to the system over time, not the direction in which it takes the system. That story may be “5 points” when measured in terms of heft, but is that five points of increasing existing customer satisfaction? Five points of new capability that will be demoed at next month’s trade show? Five points of attractiveness to prospects in the sales funnel?

Or is it five points of making it harder for a flagship customer to get their work done? Five points of adding thirty-five points of technical debt work later? Five points of integrating the lead engineer’s pet technology?

All of these things look the same in this model: they all look like five points. And that means that for a “high-velocity” (but really low-velocity, high-speed) team, the natural inclination is to jump on it, get it done, and get those five points under their belt and onto the burn-down chart. The faster they burn everything down, the better they look.

Some of the presenting symptoms of a high-speed, low-velocity team are listed below. If you recognise these in your team, book yourself in for office hours and we’ll see if we can get you unstuck.

  • “The Business”: othering the rest of the company. The team believes that their responsibility is to build the thing that they were asked for, and “the business” needs to tell them what to build, and to sell it.
  • Work to rule: we build exactly what was asked for, no more, no less. If the tech debt is piling up it’s because “the business” (q.v.) doesn’t give us time to fix it. If we built the wrong thing it’s because “the business” put it at the top of the backlog. If we built the thing wrong it’s because the acceptance criteria weren’t made clear before we started.
  • Nearly done == done: look, we know our rolling average velocity is 20 bushels of software, and we only have 14 furlongs and two femtocandela of software to show at this demo. But look over here! These 12 lumens and 4 millitesla of software are in QA, which is nearly done, so we’ve actually been working really hard. The fact that you can’t use any of that stuff is unimportant.
  • Mini-waterfall: related to work to rule (q.v.), this is the requirement that everyone do their bit of the process in order, so that the software team can optimise for requirements in -> software out and get that sweet velocity up. We don’t want to be doing discovery in engineering, because that means uncertainty, uncertainty means rework, and rework means lower velocity.
  • Punitive estimation: we’re going to rename “ambiguity” to “risk”, and then punish our product owner for giving us risky stories by boosting their estimates to account for the “risk”. Such stories will never get scheduled, because we’ll never be asked to do that one risky thing when we can get ten straightforward things done in what we are saying is the same time.
  • Story per dev: as a team, our goal is to shovel as much software onto the runtime furnace as possible. Therefore we are going to fan out the tasks to every individual. We are each capable of wielding our own shovel, and very rarely do we accidentally hit each other in the face while shovelling.

Figurative Programming and Gloom: the [G]raphical [LOOM]

Donald Knuth is pretty cool. One of the books he wrote that I own and have actually read[*] is Literate Programming, in which he describes (among other things) weaving program text and documentation together in a single narrative.

Two of his books that I own and have sort of dipped into here and there are TeX: the Program, and METAFONT: the Program. These are literate programs, created from webs in which Human text and Computer text are interleaved to tell the story of what the program does.

Human text and computer text, but not images. If you want pictures, you have to carry them around separately. Even though we are highly visual organisms, and many of the programs we produce have significant graphical components, very few programming environments treat images as anything other than external files that can be looked at and maybe previewed. The only programming environment I know of that lets you include images in program source is TempleOS.

I decided to extend the idea of the Literate web to the realm of Figurative Programming. A gloom (graphical loom) web can contain human text, computer text, and image descriptions (e.g. graphviz, plantuml, GLE…) which get included in the human-readable document as figures.

The result is gloom. It’s written in itself, so the easiest way to get started is with the Xcode project at gloomstrap which can extract the proper gloom sources from the gloom web. Alternatively, you can dive in and read the PDF it made about itself.

Because I built gloomstrap first, gloom is really a retelling of that program in a Figurative Programming web, rather than a program that was designed figuratively. Because of that, I don’t really have experience yet of trying to design a system in gloom. My observation was that the class hierarchy I came up with in building gloomstrap didn’t always lend itself to a linear storytelling for inclusion in a web. I expect that were I to have designed it in noweb rather than Xcode, I would have had a different hierarchy or even no classes at all.

Similarly, I didn’t try test-firsting in gloom, and nor did I port the tests that I did write into the web. Instinct tells me that it would be a faff, but I will try it and find out. I think richer expressions of program intention can only be a good thing, and if Figurative Programming is not the way in which that can be done, then at least we will find out something about what to do instead.

[*] Coming up in January’s De Programmatica Ipsum: The Art of _The Art of Computer Programming_, an article about a book that I have _definitely_ read _quite a few bits here and there_ of.