The Primeagen calls it Negative-Space Programming: using assertions to cut off the space of possible programs, leaving only the ones consistent with what you believe about the program’s state at a point.
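For instance, here’s a minimal Python sketch of the idea (the withdraw function and its rules are invented for illustration): each assertion carves away states we believe the program can never be in, so only the programs we intended remain.

```python
def withdraw(balance: int, amount: int) -> int:
    # Cut off the states we believe are impossible: a negative balance
    # or a non-positive withdrawal should never reach this point.
    assert balance >= 0, "balance must never be negative"
    assert amount > 0, "withdrawal amount must be positive"
    assert amount <= balance, "cannot withdraw more than the balance"

    new_balance = balance - amount

    # Whatever the code above did, we believe this must now hold.
    assert new_balance >= 0
    return new_balance
```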
Tony Hoare just called it “logic”, which is why we now call it Hoare Logic. OK, no he didn’t. He called it An axiomatic basis for computer programming, but that doesn’t make the joke work. The point is, this is how people were already thinking about program design in 1969 (and, honestly, for a long time before). I’m glad the Primeagen has discovered it; I just don’t think it needs a new name.
Imagine you know the program’s state (or the relevant attributes of that state) at a point in your program, and you can express that as a predicate, P: a function of the state that returns a Boolean value. In other words, you can assert that P is true at this point. The next thing the computer does is run the next instruction, C. You know the effect that C has on the state, so you can assert that C leaves the program in a new state, described by a new predicate, Q.
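In code, with a deliberately trivial command C (the values and the doubling step are just placeholders), that looks something like this:

```python
x = 10  # some earlier computation left the program in this state

# P: the predicate we believe holds at this point.
assert x > 0

# C: the next instruction the computer runs.
x = x * 2

# Q: the predicate describing the state C leaves behind.
assert x > 0 and x % 2 == 0
```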
Tony Hoare writes this as a Hoare Triple, {P}C{Q}. We can write it out as a formulaic sentence:
Given P, When C, Then Q
Given-when-then looks exactly like a description of a test, because it is. Our predicates might not pin down one specific state for the program to be in, as in a unit test; they might describe whole sets of states, as in a property-based test.
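Here’s a sketch of both flavours in Python (the double function is a stand-in, and the property-based test assumes the Hypothesis library is available):

```python
from hypothesis import given
from hypothesis import strategies as st


def double(x: int) -> int:
    return x * 2


# A unit test: Given one concrete state, When C, Then one concrete state.
def test_doubling_ten():
    x = 10          # Given P: x == 10
    y = double(x)   # When C
    assert y == 20  # Then Q: y == 20


# A property-based test: P and Q describe whole sets of states.
@given(st.integers(min_value=1))
def test_doubling_any_positive(x):
    # Given P: x is any positive integer
    y = double(x)                  # When C
    assert y > 0 and y % 2 == 0    # Then Q: y is positive and even
```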
We can also use the fact that the computer runs program statements in sequence to compose these triples. Given P, when C, then Q; given Q, when D, then R; given R and so on. Now, if we arrange our program so that it only runs C when P is true, and it always runs D next, and so on, we can use the composition rule to say something about the compound statement C;D;etc. If we wrap that compound statement up in a function f(), then we get to say this:
If the caller of f() makes sure that the precondition for calling the function is true before making the call, then f() guarantees that the postcondition is true after the call returns.
This combination of responsibilities and guarantees is a contract. If you start with the contract and work to design an implementation that satisfies the contract, then you are designing by contract. You’re following a discipline of programming, as Dijkstra put it.
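Carrying on with a toy example (f() and its arithmetic are invented for illustration), the contract might look like this in Python: the asserts at the top and bottom are the precondition and postcondition, and the one in the middle is the intermediate predicate that lets the triples compose.

```python
def f(x: int) -> int:
    # Precondition P: the caller is responsible for making this true
    # before calling f().
    assert x >= 0

    y = x + 1       # C: given P (x >= 0), then Q (y >= 1)
    assert y >= 1   # Q: the intermediate state between C and D

    z = y * y       # D: given Q (y >= 1), then R (z >= 1)

    # Postcondition R: f() guarantees this when it returns.
    assert z >= 1
    return z


# The caller holds up its end of the contract...
result = f(4)
# ...and can rely on the postcondition afterwards.
assert result >= 1
```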
Oh, speaking of Dijkstra. We said that you need to know that the computer always runs this part of the program in the sequential way, with the stated precondition; that means it isn’t allowed to jump partway into it with some other, different initial state. If we allowed that, then we’d have a much harder time constructing our P at a point in the program, because it would be whatever we think it is if the program follows its normal path, or whatever it is if the program goes to this point from over here, or whatever it is if the program goes to this point from over there, and so on. And our Q becomes whatever we think it is if the computer runs C after the normal path, or whatever it is if the computer goes to this point from over here then runs C, and so on.
Thinking about all of that additional complexity, and persevering with designing a correct program, could be considered harmful. To avoid the harm, we design programs so that the computer can’t go to arbitrary points from wherever the programmer wants. All to support logically-designed programming, or “negative-space programming” as I suppose we’re meant to call it now.
