Structure and Interpretation of Computer Programmers

I make it easier and faster for you to write high-quality software.

Wednesday, February 27, 2019

Why 80?

80 characters per line is a standard worth sticking to, even today. OK, why?

Well, back up. Let’s examine the axioms. Is 80 characters per line a standard? Not really; it’s a convention. IBM cards (which weren’t just made by IBM or read by IBM machines) were certainly 80 characters wide, as were DEC video terminals, which Macs etc. emulate. Actually, that’s not even true. The DEC VT-05 could display 72 characters per line; the later VT-50 and its successors introduced 80 characters. The VT-100 could display 132 characters per line, the same as a line printer (including the ones made by IBM). Other video terminals had 40- or 64-character lines. Teletypewriters typically had shorter lines, around 70 characters.

Typewriters were typically limited to \((\mathrm{width\ of\ page} - 2 \times \mathrm{margin\ width}) \times \mathrm{character\ density}\) characters per line. With wide margins on narrower A4 paper you might get 50 characters; with narrow margins on wider US Letter paper, maybe 100.
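For example, with the common pica (10 characters per inch) and elite (12 characters per inch) typefaces, and margin widths I’ve picked purely for illustration:

\[
(8.27\,\mathrm{in} - 2 \times 1.6\,\mathrm{in}) \times 10\ \mathrm{cpi} \approx 50
\qquad
(8.5\,\mathrm{in} - 2 \times 0.25\,\mathrm{in}) \times 12\ \mathrm{cpi} = 96
\]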

IBM were not the only people to make cards, punches, and readers. Other manufacturers did, with other numbers of characters per card. IBM themselves made 40, 45 and 96 column cards. Remington Rand made cards with 45 or 90 columns.

So, axiom one modified, “80 characters per line is a particular convention out of many worth sticking to, even today.” Is it worth sticking to?

The hints are that it isn’t. “The effects of line length on reading online news” explored screen-reading with different line lengths: 35, 55, 75 and 95 characters per line (cpl). The authors found, quoting the abstract:

Results showed that passages formatted with 95 cpl resulted in faster reading speed. No effects of line length were found for comprehension or satisfaction, however, users indicated a strong preference for either the short or long line lengths.

However, that isn’t a clear slam dunk. Quoting their summary of prior work:

Research investigating line length for online text has been inconclusive. Several studies found that longer line lengths (80 – 100 cpl) were read faster than short line lengths (Duchnicky and Kolers, 1983; Dyson and Kipping, 1998). Contrary to these findings, other research suggests the use of shorter line lengths. Dyson and Haselgrove (2001) found that 55 characters per line were read faster than either 100 cpl or 25 cpl conditions. Similarly, a line length of 45-60 characters was recommended by Grabinger and Osman-Jouchoux (1996) based on user preferences. Bernard, Fernandez, Hull, and Chaparro (2003) found that adults preferred medium line length (76 cpl) and children preferred shorter line lengths (45 cpl) when compared to 132 characters per line.

So, long lines are read faster than short lines, except when they aren’t. They also found that preferences were polarised: the lines people liked most tended to be either the longest or the shortest on offer, and so were the lines they liked least.

But is 95cpl a magic number? What about 105cpl, or 115cpl? What about 273cpl, which is what I get if I leave my Terminal font settings alone and maximise the window in my larger monitor? Does it even make sense for programmers who don’t have to line up the comment markers in Fortran-77 code to be using monospaced fonts, or would we be better off with proportional fonts?

And that article was about online news articles, a particular and terse form of prose, being read by Americans. Does it generalise to code? How about the observation that children and adults prefer different lengths, what causes that change? Does this apply to people from other countries? Well, who knows?

Buse and Weimer found that “average line length” was “strongly negatively correlated” with perceived readability. So maybe we should be aiming for one-character lines! Or we can offset the occasional 1,000 character line by having lots and lots of one-character lines:

}
}
}
}
}
}

It sounds like there’s information missing from their analysis. What was the actual shape of the data? What were the maximum and minimum line lengths considered, and what was the distribution of line lengths?
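If you want to see the shape of that data for a codebase you care about, it’s cheap to measure. Here’s a quick sketch of mine, assuming a tree of UTF-8 source files and a made-up list of interesting file extensions; adjust to taste:

import collections
import pathlib
import sys

def line_length_histogram(root, suffixes={'.py', '.c', '.h', '.m', '.swift'}):
    """Count how many lines of each length appear under root."""
    histogram = collections.Counter()
    for path in pathlib.Path(root).rglob('*'):
        if path.is_file() and path.suffix in suffixes:
            text = path.read_text(encoding='utf-8', errors='replace')
            for line in text.splitlines():
                histogram[len(line)] += 1
    return histogram

if __name__ == '__main__':
    hist = line_length_histogram(sys.argv[1] if len(sys.argv) > 1 else '.')
    total = sum(hist.values())
    for length in sorted(hist):
        # One '#' per percent of lines, so the shape of the distribution shows up.
        bar = '#' * round(100 * hist[length] / total)
        print(f'{length:4d} {hist[length]:6d} {bar}')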

We’re in a good place to rewrite the claim from the beginning of the post: 80 characters per line is a particular convention out of many, whose benefits and costs we know literally nothing about, even today. Maybe our developer environments need a bit of that UX thing we keep imposing on everybody else.

posted by Graham at 16:45  

Monday, February 25, 2019

Last Chance: Ultimate Programmer Super Stack

Remember this? Today is the last day for the stack. Buy within the next six hours for $47.95 and get my APPropriate Behaviour, along with books on running software businesses, building test-driven developers, and all sorts of software stacks including Ruby, Node, Laravel, Python, Java, Kotlin…

posted by Graham at 15:35  

Thursday, February 21, 2019

Ultimate Programmer Super Stack Reloaded

Remember remember the cough 6th of November, when APPropriate Behaviour joined a wealth of other learning material for software engineers in a super-discounted bundle called the Ultimate Programmer Super Stack?

It’s happening again! This is a five-day flash sale, with all the same material on levelling up as a programmer, running a startup, and learning new technologies like Aurelia, Node, Python and more. The link at the top of this paragraph goes to the sales page, and you’ve got until Monday, when it’s gone for good.

posted by Graham at 09:13  

Monday, February 18, 2019

The Fragile Manifesto

A lot of what I’ve been reading and thinking about of late is about the agile backlash. More speed, lower velocity reflects on IT teams pursuing “deliver more/newer IT” at the cost of “help the company achieve its mission”. Grooming the Backfog is about one dysfunction that arises as a result: (mis)managing a never-ending road of small changes rather than looking at the big picture and finding a path toward the destination. Our products are not our products attempts to address this problem by recasting teams not as makers of product, but as solvers of problems.

Here’s the latest: UK wasting £37 billion a year on failed agile IT projects. Some people will say that this is a result of not Agiling enough: if you were all Lean and MVP and whatever you’d not get to waste all of that money. I don’t necessarily agree with that: I think there are actually things to learn by, y’know, reading the article.

The truth is that, despite the hype, Agile development doesn’t always work in practice.

True enough, but not a helpful statement, because “Agile” now means a lot of different things to different people. If we take it to mean the values, principles and practices written by the people who came up with the term, then I can readily believe that it wouldn’t work in practice for people whose context is different from those who came up with the ideas in 2001. Which may well be everyone.

I’m also very confident that it doesn’t mean that. I met a team recently who said they did “Agile”, and discussed their standups and two-week iterations. They also described how they were considering whether to go from an annual to biannual release.

Almost three quarters (73%) of CIOs think Agile IT has now become an industry in its own right while half (50%) say they now think of Agile as “an IT fad”.

The Agile-Industrial Complex is well-documented. You know what isn’t well-documented? Your software.

The report revealed 44% of Agile IT projects that fail, do so because of a failure to produce enough (or any) documentation.

The survey found that 34% of failed Agile projects failed because of a lack of upfront and ongoing planning. Planning is a casualty of today’s interpretation of the Agile Manifesto[…]

68% of CIOs agree that agile teams require more Architects. From defining strategy, to championing technical requirements (such as performance and security) to ensuring development teams stick to the rules of the game, the role of the Architect is sorely missed in the agile space. It must be reintroduced.

Near the top of the front page of the manifesto for agile software development is a sentence fragment that says:

Working software over comprehensive documentation

Before we discuss that fragment, I’d just like to quote the end of the sentence. It’s a long way further down the page, so it’s possible that some readers have missed it.

That is, while there is value in the items on the right, we value the items on the left more.

Refactor -> Inline Reference:

That is, while there is value in comprehensive documentation, we value working software more.

Refactor -> Extract Statement:

There is value in comprehensive documentation.

Now I want to apply the same set of transforms to another of the sentence fragments:

There is value in following a plan.

Nobody ever said don’t have a plan. You should have a plan. You should be willing to amend the plan. I was recently asked what I’d do if I found that my understanding of the “requirements” of a system differed from the customer’s understanding. It depends a lot on context, but if there truly is a “the customer” and they want something that I’m not expecting to offer them, it’s time for me to either throw away my version or find a different customer.

Similarly, nobody said don’t have comprehensive documentation. I have been on a very “by-the-book” Agile team, where a developer team lead gave feedback that they couldn’t work out where a change would go to enable a particular feature. That’s architecture! What they wanted was an architectural plan of the system. Except that they couldn’t explicitly want that, because software architecture is so, ugh, 1990s and Rational Rose. Wanting an architecture diagram is like wanting to use CORBA, urrr.

Once you get past that bizarre emotional response, give me a call.

posted by Graham at 16:39  

Monday, February 11, 2019

Input-Output Maps are Strongly Biased Towards Simple Outputs

About this paper

Input-Output Maps are Strongly Biased Towards Simple Outputs, Kamaludin Dingle, Chico Q. Camargo and Ard A. Louis, Nature Communications 9, 761 (2018).

Notes

On Saturday I went to my alma mater’s Morning of Theoretical Physics, which was actually on “the Physics of Life” (or Active Matter as theoretical physicists seem to call it). Professor Louis presented this work in relation to RNA folding, but it’s the relevance to neural networks that interested me.

The assertion made in this paper is that if you have a map from a lot of inputs to a (significantly smaller, but still large) collection of outputs, the outputs are not equally likely to occur. Instead, the simpler outputs are preferentially selected.

A quick demonstration of the intuition behind this argument: imagine randomly assembling a fixed number of Lego bricks into a shape. Some particular unique shape with weird branches can only be formed by one individual configuration of the bricks. On the other hand, a simpler shape with a large degree of symmetry can be formed from many different configurations. Therefore the process of randomly selecting a shape will preferentially pick the symmetric shape.

The complexity metric that’s useful here is called Kolmogorov complexity, and roughly speaking it’s a measure of the length of a Universal Turing Machine program needed to describe the shape (or other object). Consider strings. A random string of 40 characters, say a56579418dc7908ce5f0b24b05c78e085cb863dc, may not be representable in any more efficient way than its own characters. But the string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, which is 40 characters, can be written with the Python program:

'a'*40

which is seven characters long including the newline. Assuming eight bits per character, the random string needs 40*8=320 bits to be represented. The forty a’s can instead be found by finding the Python program, which is 56 bits. The assertion is that a “find a program that generates character sequences of length 40” algorithm (with some particular assumptions in place) will find the a56579… string with probability 2^-320, but will find the aaa… string with probability 2^-56, which is much, much more likely.
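As a back-of-envelope check of those numbers (this is just the arithmetic above written out in Python, not anything from the paper):

random_string = 'a56579418dc7908ce5f0b24b05c78e085cb863dc'
program = "'a'*40\n"  # seven characters, including the newline

bits_random = len(random_string) * 8   # 40*8 = 320
bits_program = len(program) * 8        # 7*8 = 56

# If bit strings of a given length are drawn uniformly at random, the short
# program is 2^(320-56) times more likely to turn up than the random string.
print(bits_random, bits_program, 2 ** (bits_random - bits_program))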

In fact, this paper shows that the upper and lower bounds on the probability of a map yielding a particular output for random input both depend on the Kolmogorov complexity of the output. It happens that, due to the halting problem, you can’t calculate Kolmogorov complexity for arbitrary outputs. But you can approximate it, for example using Lempel-Ziv complexity (roughly, the length of the compressed data from which a lossless compression algorithm can recover the original output).
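You can get a feel for that approximation with any off-the-shelf compressor. Here’s a sketch using zlib, which is not the Lempel-Ziv measure from the paper but shows the same effect:

import zlib

def approximate_complexity(s):
    # Length in bits of the DEFLATE-compressed string: a crude stand-in
    # for Kolmogorov complexity.
    return len(zlib.compress(s.encode('utf-8'), 9)) * 8

print(approximate_complexity('a56579418dc7908ce5f0b24b05c78e085cb863dc'))
print(approximate_complexity('a' * 40))

The absolute numbers are dominated by zlib’s framing overhead at these lengths, but the repetitive string comes out much smaller than the random-looking one, which is all the approximation needs to tell us.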

Where does this meet neural networks? In a preprint of a paper selected for ICLR 2019, with two of the same authors as this paper. Here, we find that a neural network can be thought of as a map from its parameters (the weights) to the function that does the inference.

Typically neural network architectures have lots more parameters than there are points in the training set, so how is it that they manage to generalise so well? And why is it that different training techniques, including stochastic gradient descent and genetic algorithms, result in networks with comparable performance?

The authors argue that a generalising function is much less complex than an overfitting function, using the same idea of complexity shown above. And that as the training process for the network is sampling the map from parameters to functions, it is more likely to hit on the simple functions than the complex ones. Therefore the fact that neural networks generalise well is intrinsic to the way they select functions from a wealth of possibilities.
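To make that concrete, here’s a toy of my own in the flavour of that experiment (not a reproduction of the authors’ setup): sample random weights for a small network over seven Boolean inputs, note which Boolean function each sample computes, and see which functions turn up most often.

import collections
import itertools
import zlib
import numpy as np

rng = np.random.default_rng(0)

# All 2^7 = 128 Boolean inputs to a tiny network, one per row.
inputs = np.array(list(itertools.product([0.0, 1.0], repeat=7)))

def random_function(hidden=40):
    # Sample random weights for a 7-40-1 network and return the Boolean
    # function it computes, written out as a string of 128 bits.
    w1 = rng.normal(size=(7, hidden))
    b1 = rng.normal(size=hidden)
    w2 = rng.normal(size=hidden)
    b2 = rng.normal()
    activations = np.tanh(inputs @ w1 + b1)
    outputs = activations @ w2 + b2 > 0
    return ''.join('1' if o else '0' for o in outputs)

samples = collections.Counter(random_function() for _ in range(20000))

# The functions sampled most often should also be the most compressible ones.
for bits, count in samples.most_common(5):
    print(count, len(zlib.compress(bits.encode())), bits)

If the argument holds, the functions at the top of that list should have highly compressible output strings (constant or nearly so) rather than arbitrary 128-bit patterns.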

My hope is that this is a step toward a predictive theory of neural network architectures: that by knowing something of the function we want to approximate, we can set a lower bound on the complexity of a network needed to discover sufficiently generalisable functions. This would be huge for both reducing the training effort needed for networks, and for reducing the evaluation runtime. That, in turn, would make it easier to use pretrained networks on mobile and IoT devices.

posted by Graham at 13:30  

Thursday, February 7, 2019

HPC at FOSDEM 2019

This year’s FOSDEM featured an HPC, Big Data and Data Science devroom on the Sunday. This post is the first part of my notes on the topics presented there. If you are interested, book some time and let’s talk about what it means for you and your high-performance computing team.

OpenHPC Update

Adrian Reber from the OpenHPC project gave a refresher on what OpenHPC is, and a status update. OpenHPC has not been represented at FOSDEM since 2016, when the project was very new.

It’s a community-driven project with representation from many vendors and HPC sites. At first blush their output might appear to be “RPM packages” and “documentation”, but their mission is actually to discover and share best practices in HPC management. Those packages are all well-tested with each other, and the documentation is tested every release, too. The idea is that if you build the core of your cluster with OpenHPC packages on CentOS-like Linux distributions, on either x86-64 or AArch64, you get to rely on tried and tested work from the whole community.

Reber, who works at Red Hat on their OpenHPC efforts, invited everyone to join the weekly project steering calls in a demonstration of the openness of the project. He discussed future directions, including an upcoming v1.3.7 release that will include packages rebuilt with the ARM HPC compiler for AArch64, and the challenge of working out when the time is right to release v1.4, which will swap SLES12 for SLES15 and RHEL7 for RHEL8.

ReFrame

On the subject of HPC libraries, a common frustration is testing codes with various combinations of compilers, MPI libraries, hardware capabilities and so on. Developers both want to know that their code is correct (i.e. the science outcomes are still valid after a change) and that the performance has not been significantly impacted.

Victor Holanda discussed ReFrame, a tool for HPC regression and performance testing developed at CSCS and used regularly on Piz Daint and their other clusters. Written in Python, it gives test authors a way to express what their tests require (e.g. that they must run on machines with CUDA, compile a particular code with one of three different compilers, load environment modules with one of two different MPIs), run the tests, and inspect the output for certain outcomes.
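I haven’t checked this against a current release, but a minimal test in the style of the ReFrame tutorial looks something like the sketch below; the class and attribute names follow the ReFrame 2.x documentation, and the details are my assumption rather than anything shown in the talk:

import reframe as rfm
import reframe.utility.sanity as sn

@rfm.simple_test
class HelloTest(rfm.RegressionTest):
    def __init__(self):
        # Declare what the test needs: here, any system and any programming
        # environment that the site configuration defines.
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        # ReFrame compiles this source with the selected environment's compiler.
        self.sourcepath = 'hello.c'
        # The test passes if the expected text appears in the job's output.
        self.sanity_patterns = sn.assert_found(r'Hello, World!', self.stdout)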

Testers get to run a single command, or point their Jenkins or Travis CIs at a single command, to discover and execute the tests. The ReFrame runtime will compare the environments that the test can use with the ones that are available, and will report on the outcomes in each of those environments.

Inside CSCS, ReFrame is used for a 90 minute nightly production test run, and 10 minute maintenance runs to check for system regressions after configuration changes. They also have a set of diagnostic tests to help understand what’s happened if a node goes bad. Their approach to correctness is very robust; the team do not declare that they support something until it has enough users to know how well it works. They also say that in three years of development they “have never seen a python stacktrace” from ReFrame, as they test ReFrame with ReFrame while they are developing it.

Singularity Containers

Singularity from Sylabs is a container runtime that specifically addresses the problems of containerising HPC workloads. Eduardo Arango gave a “what’s new in Singularity” update, as FOSDEM 2017 had already featured an introduction-level talk.

What’s new is that they’ve rewritten it in Go. This means they get better integration with libraries used in Docker, Kubernetes etc., and could adopt the de facto standard Container Network Interface for software-defined networking when running containers. It also reduces the dependencies needed to get Singularity up and running.

The new version uses a new format for containers, SIF (Singularity Image Format), a read-only SquashFS filesystem along with metadata, all of which can be cryptographically signed using PGP for integrity protection. An upcoming extension will allow a writable overlay to be added to a SIF.

Supporting this, Sylabs have a new container library, similar to Docker Hub, for hosting SIF images for public or private cloud use. They have a key store for those PGP signing keys, and a cloud-based remote image builder for developers who need to build images but can’t do it locally.

Conclusion

This has been part one of my FOSDEM HPC round-up. I’ve focussed on the tools that are out there for automating and simplifying HPC workflows, because it’s an interesting problem and one that presents challenges to many HPC teams. Don’t forget that the Labrary can help!

posted by Graham at 18:30  
