Bottom-up teaching

We’re told that the core idea in computer programming is problem-solving. That one of the benefits of learning about computer programming (one that is not universally accepted) is gaining the skill of problem decomposition.

If you look at real teaching of computing, it seems to have more to do with solution composition than problem decomposition. The latter seems to be background noise: here are the things you can build solutions with; presumably at some point you’ll come across a solution component that’s the same size and shape as one of your problem’s components, though how you get there is left up to you.

I have many books on programming languages. Each lists the features of the language, and gives minimally complex examples of the use of those features. In that sense, Kernighan and Ritchie’s “The C Programming Language” (section 1.3, the for statement) offers as little instruction in solving problems using a computer as Eric Nikitin’s “Into the Realm of Oberon” (section 7.1, the FOR loop) or Dave Thomas’s “Programming Elixir” (section 7.2, Using Head and Tail to Process a List).

A course textbook on bitcoin and blockchain (Narayanan, Bonneau, Felten, Miller and Goldfeder, “Bitcoin and Cryptocurrency Technologies”) starts with Section 1.1, “Cryptographic hash functions”, and builds a cryptocurrency out of them, leaving motivational questions about politics and regulation to Chapter 7.

This strategy is by no means universal: Liskov and Guttag’s “Program Development in Java” starts out by describing abstraction, then looks at techniques for designing abstractions in Java. Adele Goldberg and Alan Kay described teaching Smalltalk by proposing exploratory projects, designing the objects that model the problem under consideration and the way in which they will communicate, then incrementally filling in by designing classes and methods that have the desired properties. C.J. Date’s “An Introduction to Database Systems” answers the question “why databases?” before introducing the relational model, and doesn’t introduce SQL until it can be situated in the context of the relational model.

Both of these approaches, and their associated techniques (the bottom-up approach and solution construction; the top-down approach and problem decomposition) are useful; the former leads to progress and the latter leads to understanding. But both must be taken in concert, because understanding without progress leads to the frustration of an unsolved problem and progress without understanding is merely the illusion of progress.

My guess is that more programmers – indeed whole movements, when we consider the collective state of things like OOP, functional programming, BDD, or agile practices – are in the “bottom-up only” group than in the “top-down only” or “a bit of both” groups. That plenty more copies of Introduction to Programming in [This Week’s Hot Language] have been sold than Techniques for Making Your Problem Amenable to Computation. That the majority of software really does consist of solutions looking for problems.

On immutable data structures…?

If you write a scholarly publication and cite another one, what you say about it depends on its mutability. An article or a book can be cited by saying “this publication I’m identifying here says this”. Maybe you have to version your claim: “the second edition of this publication says this”. They’re immutable. Even if the third edition doesn’t say the thing you relied on in constructing your argument, the second edition still did. Someone who can get access to that second edition can look at it and see how you built your synthesis.

You can’t do that with a website. Websites change. Instead, you have to say that “this website identified by this URL, on the date that I read it, said this”. Someone who comes along later has to sort-of trust that, because if the website no longer says that, it might not be possible to tell whether it ever did say that, or whether you’re telling porky pies about your research.

Dependencies in software systems are usually given as if they work like book citations:

gem 'rack', '1.0'

…looks like it says “the thesis that’s constructed by my software is a synthesis in which version 1.0 of rack is axiomatic”, but it doesn’t. It’s really saying “at the time that I want you to think that I actually tested this stuff, it was true that the thing identified by being version 1.0 of rack was…”. It’s really a poorly-constructed website citation.
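
By way of contrast, here is a minimal sketch (in Python, and not the syntax of any real package manager) of what a more book-like citation would have to record: not just a name and a version label, but a hash of the actual content and a pointer to a durable copy. The file path and archive URL are invented for illustration.

# A sketch of a "book citation" for a dependency: record the content hash
# and the location of an archived copy, not just the name and version.
import hashlib
import json

def cite_dependency(name, version, archive_path, archived_at):
    """Build an immutable citation for a dependency archive on disk."""
    digest = hashlib.sha256()
    with open(archive_path, "rb") as archive:
        for chunk in iter(lambda: archive.read(65536), b""):
            digest.update(chunk)
    return {
        "name": name,
        "version": version,
        "sha256": digest.hexdigest(),  # pins the content, not just the label
        "archived_at": archived_at,    # where a durable copy is kept
    }

citation = cite_dependency("rack", "1.0", "vendor/cache/rack-1.0.0.gem",
                           "https://archive.example.org/rack-1.0.0.gem")
print(json.dumps(citation, indent=2))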

It’s fun to think, particularly in light of the npm shenanigans, just how long that dependency you didn’t bother downloading will still be around. You can presumably forget about relying on commercial software, as the licence agreement is the legal equivalent of Vader saying “I have altered the deal. Pray I do not alter it any further.” And indeed you can forget most open sores licences, which don’t put any requirements on your supplier. But what about the GPL? Version 3 (retrieved from this URL on 24th March 2016) says that anybody who distributes licensed software as object code may, as one possible way to provide access to the corresponding source code, provide that object:

accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge

What if the person you got the object code from dies within that three-year period? Do you have the right to ask the executor of their estate for the source code?

Intra-curricular activities

I’m apparently fascinated by the idea of defining curricula for learning programming. I’ve written about how we need to be careful what we try to pay forward from the way we learned in the past, and I’ve talked about how we do need to pay it forward so that the second hundred years see faster progress than the first hundred years.

I’m a fan (with reservations, as seen below) of the book series as a form of curriculum. Take something like Kent Beck’s signature series, which covers a decent subset of both technical and social approaches in software development in breadth and in depth. You could probably imagine developers who would benefit from reading some or all of the books in the series. In fact, you may be one.

Coping with people approaching the curriculum from different skill levels and areas of experience is hard. Not just for the book series, it’s hard in general. Universities take the simplifying approach of assuming that everybody wants to learn the same stuff, and teaching that stuff. And to some extent that’s easy for them, because the backgrounds of prospective students are relatively uniform. Even so, my University course organised incoming students into two groups: those who had studied complex numbers at A-level and those who had not. The difference was simply that the group who had not were given a couple of lectures on complex numbers; from the fourth week onwards it was assumed that they, too, knew the topic.

Now consider selling a programming book to the public. Part of the proposal process with all of the publishers I’ve worked with has been describing the target audience. Is this a book for people who have never programmed before? For people who have programmed a little, but never used this particular tool or technique? People who have programmed a lot but never used this tool? Is this thing similar to what they have used before, or very different? For people who are somewhat familiar with the tool? For experts (and how is that defined)? Is it for readers comfortable with maths? For readers with no maths background?

Every “no” in answer to one of those questions is an opportunity to improve the experience for a subset of the potential audience by tailoring it to that subset. It’s also an opportunity to exclude a subset of the audience by making the content less relevant to them.

[I’ll digress here to explain how I worked that out for my books: whether it’s selfishness or a failure of empathy, I wrote books that I wanted to read but that didn’t exist. Therefore the expected experience is something similar to mine, back when I filled in the proposal form.]

Clearly no single publication will cover the whole phase space of potential readers and be any good. The interesting question is how much it’s worth covering with multiple publications; whether the idea of series-as-curriculum pulls in the general direction as much as scope-limiting each book pulls in the specific. Should the curriculum take readers on a straight line from novice to master? Should it “fan in” from multiple introductions? Should it “fan out” in multiple directions of interest and enquiry? Would a non-linear curriculum be inclusive or offputtingly confusing? Should the questions really be answered by substituting the different question “how many people would buy that”?

Preparing for Computing’s Big One-Oh-Oh

However you slice the pie, we’re between two and three decades away from the centenary celebration for applied computing (which is of course significantly after theoretical or hypothetical advances made by the likes of Lovelace, Turing and others). You might count the anniversary of Colossus in 2043, the ENIAC in 2046, or maybe something earlier (and arguably not actually applied) like the Z3 or ABC (both 2041). Whichever one you pick, it’s not far off.

That means that the time to start organising the handover from the first century’s programmers to the second is now, or perhaps a little earlier. You can see the period from the 1940s to around 1980 as a time of discovery, when people invented new ways of building and applying computers because they could, and because there were no old ways yet. The next three and a half decades—a period longer than my life—have been a period of rediscovery, in which a small number of practices have become entrenched and people occasionally find existing, but forgotten, tools and techniques to add to their arsenal, and incrementally advance the entrenched ones.

My suggestion is that the next few decades be a period of uncovery, in which we purposefully seek out those things that have been tried, and tell the stories of how they are:

  • successful because they work;
  • successful because they are well-marketed;
  • successful because they were already deployed before the problems were understood;
  • abandoned because they don’t work;
  • abandoned because they are hard;
  • abandoned because they are misunderstood;
  • abandoned because something else failed while we were trying them.

I imagine a multi-volume book✽, one that is to the art of computer programming as The Art Of Computer Programming is to the mechanics of executing algorithms on a machine. Such a book✽ would be mostly a guide, partly a history, with some, all or more of the following properties:

  • not tied to any platform, technology or other fleeting artefact, though with examples where appropriate (perhaps in a platform invented for the purpose, as MIX, Smalltalk, BBC BASIC and Oberon all were)
  • informed both by academic inquiry and practical experience
  • more accessible than the Software Engineering Body of Knowledge
  • as accepting of multiple dissenting views as Ward’s Wiki
  • at least as honest about our failures as The Mythical Man-Month
  • at least as proud of our successes as The Clean Coder
  • more popular than The Celestial Homecare Omnibus

As TAOCP is a survey of algorithms, so this book✽ would be a survey of techniques, practices and modes of thought. As this century’s programmer can go to TAOCP to compare algorithms and data structures for solving small-scale problems then use selected algorithms and data structures in their own work, so next century’s applier of computing could go to this book✽ to compare techniques and ways of reasoning about problems in computing then use selected techniques and reasons in their own work. Few people would read such a thing from cover to cover. But many would have it to hand, and would be able to get on with the work of invention without having to rewrite all of Doug Engelbart’s work before they could get to the new stuff.

It's dangerous to go alone! Take this.

✽: don’t get hung up on the idea that a book is a collection of quires of some pigmented flat organic matter bound into a codex, though.

Did that restructuring work actually help?

Before getting into the meat of this post, I’d like to get into the meta of this post. This essay, and I imagine many in this blog [Ed: by which I meant the blog this has been imported from], will be treading a fine line. The intended aim is to question accepted industry practice; finding results that are consistent or inconsistent with that practice is a beneficial task in itself. I’m more likely to select papers that appear to refute the practice, as that’s more interesting and makes us introspect the way we work more than affirmation does. The danger is that this skates too close to iconoclasm, as expressed in the Goto Copenhagen talk title Is it just me or is everything shit?. My intention isn’t to say that whatever we’re doing is wrong, just to provide some healthy inspection and analysis of our industry.

Legacy Software Restructuring: Analyzing a Concrete Case

The thread in this paper is that metrics that have long been used to measure the quality of source code—metrics related to coupling and cohesion—may not actually be relevant to the problems developers have to solve. Firstly, the jargon:

  • coupling refers to the connections between the part of the software (module, class, function, whatever) under consideration and the rest of the software system. Received wisdom is that lower coupling (i.e. fewer connections that are less-tightly intertwined) is better.
  • cohesion refers to the relatedness of the tasks performed by the (module, class, function, whatever) under consideration. The more different responsibilities a component provides or uses, the lower its cohesion. Received wisdom is that higher cohesion (i.e. fewer responsibilities per module) is better.

We’re told that striving for low coupling and high cohesion will make the parts of our software reusable and replaceable, and will reduce the number of code sites we need to change when we want to fix bugs in the future. The focus of this paper is on whether the metrics we use as proxies for these properties actually represent enhancements to the code; in other words, whether we have a systematic way to decide whether a change is an improvement or not.
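
By way of illustration, here is a minimal sketch of how such proxy measurements get computed mechanically from source code. The stand-ins below are crude ones of my own devising, not the metrics the paper uses: every non-self name a method mentions counts towards coupling, and the fraction of method pairs that share an instance attribute stands in for cohesion.

# Crude stand-ins for coupling and cohesion, computed over one Python class.
# Coupling: count of distinct names (other than self) the methods refer to.
# Cohesion: fraction of method pairs that touch a common instance attribute.
import ast
from itertools import combinations

def class_metrics(source, class_name):
    tree = ast.parse(source)
    cls = next(node for node in ast.walk(tree)
               if isinstance(node, ast.ClassDef) and node.name == class_name)
    methods = [n for n in cls.body if isinstance(n, ast.FunctionDef)]
    attributes_used = {}
    external_names = set()
    for method in methods:
        used = set()
        for node in ast.walk(method):
            if (isinstance(node, ast.Attribute)
                    and isinstance(node.value, ast.Name)
                    and node.value.id == "self"):
                used.add(node.attr)
            elif isinstance(node, ast.Name) and node.id != "self":
                external_names.add(node.id)  # includes parameters: rough!
        attributes_used[method.name] = used
    sharing = sum(1 for a, b in combinations(attributes_used.values(), 2)
                  if a & b)
    pairs = max(len(methods) * (len(methods) - 1) // 2, 1)
    return {"coupling": len(external_names), "cohesion": sharing / pairs}

example = """
class Account:
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        self.balance -= amount
    def report(self):
        print(format_currency(self.balance))
"""
print(class_metrics(example, "Account"))

Whether numbers like these track anything a working developer would call “better” is exactly the question the paper is asking.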

Approach

The way in which the authors test their metrics is necessarily problematic. There is no objective standard against which they can be compared—if there were, we’d have an objective standard and we could all go home. They hypothesise that any restructuring effort by a development team must represent an improvement in the codebase: if you didn’t think a change was better, why would you make that change?

Necessity is one such reason. Consider the following thought process:

I need to add this feature to my product, this change was unforeseen at design time, so the architecture doesn’t really support it. I’m not very happy about this, but shoehorning it in here is the simplest way to support what I need.

To understand other problems with this methodology, a one-paragraph introduction to the postmodern philosophy of software engineering is required. Software, it says, supports not some absolute set of requirements that were derived from studying the universe, but the ad-hoc set of interactions between the various people who engage with the software system. Indeed, the software system itself modifies these interactions, creating a feedback loop that in fact modifies the requirements of the software that was created. Some of the results of this philosophy[*] are expressed in “Manny” Lehman’s laws of software evolution, which are also cited by the paper I’m talking about here. The authors offer one of Lehman’s laws as:

Considering Lehman’s law of software evolution, such systems would already have suffered a decrease in their quality due to the maintenance. This would increase the probability that the restructuring has a better modular quality.

[*] I don’t consider Lehman’s laws to be objectively true of software artefacts, but to be hypotheses that arise from a particular philosophy of software. I also think that philosophy has value.

This statement is inconsistent. On the one hand, this change improves the quality of some software. On the other hand, the result of a collection of such changes is to decrease the quality. Now there’s nothing to say that a particular change won’t be an improvement; but there’s also nothing to say that the observed change has this property.[*] The postmodern philosophy adds an additional wrinkle: even if this change is better, it’s only better _as perceived by the people currently working with the system_. Others may have different ideas. We saw, in discussing the teaching of programming, that even experienced programmers can have difficulty reading somebody else’s code. I wouldn’t find it a big stretch to posit that different people have different ideas of what constitutes “good” modular decomposition, and that therefore a different set of programmers would think this change to be worse.

[*]Actually I think the sentence in the paper might just be broken; remember that I found this on the preprint server so it might not have been reviewed yet. One of Lehman’s laws says that, for “E-type” software (by which he means systems that evolve with their environment—in other words, systems where a postmodern appraisal is applicable), the software will gradually be perceived as _reducing_ in quality if no maintenance work goes into it. That’s because the system is evolving while the software isn’t; the requirements change without the software catching up.

Results and Discussion

The authors found that, for three particular revisions of Eclipse, the common metrics for coupling and cohesion did not monotonically “improve” with successive restructuring efforts. In some cases, both coupling and cohesion decreased in the same effort. In addition, they found that the number and extent of cyclic dependencies between Java packages increased with every successive version of the platform.

It’s not really possible to choose a conclusion to draw from these results:

  • maybe the dogma of increasing cohesion and decreasing coupling is misleading.
  • maybe the metrics used to measure those properties were poorly chosen (though they are commonly-chosen).
  • maybe the Eclipse developers use some other measurement of quality that the authors didn’t ask about.
  • maybe some of the Eclipse engineers do take these properties into account, and some others don’t, and we [can’t – added on import] even draw general conclusions about Eclipse.

So this paper doesn’t demonstrate that cohesion and coupling metrics are wrong. But it does raise the important question: might they be not right? If you’re relying on some code metrics derived from received wisdom or dogma, it’s time to question whether they really apply to what you do.

Teaching Programming to People. It’s easy, right?

I was doing a literature search for a different subject (which will appear soon), and found a couple of articles related to teaching programming. I don’t know if you remember when you learnt programming, but you probably found it hard. I’ve had some experience of teaching programming: specifically, teaching C to undergraduates. Said undergraduates, as it happens, weren’t on a computing course (they studied Physics), and only turned up to the few classes they had a year because attendance was mandatory. The lectures, which weren’t compulsory, had fewer students showing up.

Teaching Python to Undergraduates

When I took the course, we were taught a Pascal variant on NeXTSTEP. I have some evidence that Algol had been the first programming language taught on the course, probably on an HLH Orion minicomputer. While Pascal was developed in part as a vehicle to teach structured programming concepts, the academics in the computing course at my department were already starting to see it as a toy language with no practical utility. That justification was used to look for a different language to teach. As you can infer from the previous paragraph, we settled on C: but not before a trial of Python with interested students (myself included) who had, for the most part, already taken the Pascal course. The experiences with Python were written up in a Master’s thesis by Michael Williams, the student who had converted the (Pascal-based, of course) teaching materials to Python.

Like Wirth, when Guido van Rossum designed Python he had teaching in mind; though knowing the criticisms of Pascal he also made it extensible so that it could be used as a “real” language. This extensibility was put to use in the Python experiment at Oxford, giving students the numpy module which they used mainly for its matrix datatype (an important facility in Physics).
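
A minimal sketch of the kind of task that facility was used for (the coefficients below are invented, and I’m using the plain array interface rather than the old matrix class): set up a small system of simultaneous equations and solve it.

# Solve the simultaneous equations 3x + y = 9 and x + 2y = 8.
import numpy as np

coefficients = np.array([[3.0, 1.0], [1.0, 2.0]])  # left-hand sides
constants = np.array([9.0, 8.0])                   # right-hand sides
print(np.linalg.solve(coefficients, constants))    # [2. 3.]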

What the report shows is that it’s possible to teach someone enough Python in a day to get on to problems in numerical computation; although clearly this is also true of Pascal and C. One interesting observation is the benefit of enforced layout (Python’s meaningful indentation) both to the students, who reported that they did not find it difficult to indent a program correctly, and to the teachers, who found that because students were coerced into laying out their code consistently, it was easier to read and understand the intention of their code.

An interesting open question is whether that means enforced indentation leads to more efficient code reviews in general, not just in an expert/neophyte relationship. Many developers using languages that don’t enforce layout choose to add the enforcement themselves. Whether this is an issue at all when modern IDEs can lay out code automatically (assuming developers with enough experience of the IDE to use that feature) also needs answering.

The conclusion of this study was that Python is appropriate as a teaching language for Oxford’s Physics course, though clearly it was not adopted and C was favoured. Why was this? As this decision was made after the report was produced, it doesn’t say, and my own recollection is hazy. I recall the “not for real world use” lobby was involved, that it was also possible to teach C in the time involved, and that while many people wanted to teach Java this was overruled due to a desire to avoid OO. The spurned Java crowd preferred C for its Java-like syntax.

Wait, C?

The next part of this story wasn’t published, but I’ll cover it anyway just for completeness. The year after this Python study, the teaching course did an A/B test where half of the first year course was taught Python, and half C. Whatever conclusions were drawn from this test, C won out, so either there was no significant difference in that type of course or the “real worldness” of C was thought to be greater than that of Python. I remember both being given as justifications, but don’t know whether either or both were retrofitted.

Whatever the cause, teaching C was sufficiently not bad that the course is still based on the language.

Going back to that Java decision. How good is Java as a teaching language?

Analyses of Student Programming Errors In Java Programming Courses

I’m going to use the results of this paper to argue that Java is not good as a teaching language.

Programming errors can be categorized as syntax, semantic and logic. A syntax error is an error due to incorrect grammar. Syntax errors are often detected by a program called a compiler, if the language is a compiled language such as Java. A semantic error is an error due to misuse of a programming concept, despite correct syntactic structure. Semantic errors are caught when the program code is compiled. A logic error occurs when the program does not solve the problem that the programmer meant for it to solve.

[Notice that the study is only investigating errors: it’s not completely obvious but “bugs” aren’t included. The author’s only reporting on things that are either compiler or runtime errors in Java-land, like typos and indices out of bounds.]

Categorization of errors of the present study into syntax, semantic, runtime and logic revealed that syntax errors made up 94.1%, semantic errors 4.7% and logic errors 1.2%.

In the ideal world, a programming course teaches students the principles of programming and how to combine these to solve some computational problem. In learning these things, you expect people to make semantic and logic errors: they don’t yet know how these things work. Syntax errors, on the other hand, are the compiler’s way of saying “meh, you know what you meant but I couldn’t be bothered to work it out”, or “I require you to jump through some hoops and you didn’t”.

You don’t want syntax errors when you’re teaching programming. You want people to struggle with the problems, not the environment in which those problems are presented. Imagine failing a student because they pushed the door to the exam room when it was supposed to be pulled: that’s a syntax error. One of the roles of a demonstrator in a computing course is to be the magic compiler pixie, fixing syntax errors so the students can get back on track.
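
To make the three categories concrete, here is a small illustration in Python rather than the paper’s Java (the examples are my own):

# Syntax error: the grammar is wrong, so the code cannot even be parsed.
#     if x > 0
#         print("positive")   # missing colon after the condition
#
# Semantic error: grammatically fine, but misuses a concept. (In Java this is
# caught at compile time; Python reports it at run time instead.)
#     "width: " + 42          # TypeError: cannot concatenate str and int
#
# Logic error: runs without complaint, but does not solve the intended problem.
def mean(values):
    return sum(values) / (len(values) - 1)  # off by one: divides by n - 1

print(mean([2, 4, 6]))  # prints 6.0, but the mean is 4.0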

OK, so not Java. What else is out there?

C++

The Oxford Physics investigation didn’t publish any results on the difficulty of teaching C. Thankfully, the Other Place is more forthcoming. Tim Love, author of Tim Love’s Cricket for the Dragon 32 and teacher of C++ to Engineering undergraduates, blogged about difficulties encountered defining functions in C++. There’s no frequency information here, just representative problems. While most of the problems are semantic rather than syntactic, with some logic problems too, we can’t really compare these with the Java analysis above anyway.

The sorts of problems described in this blog post are largely the kind of problems you want, or at least don’t mind, people experiencing while you’re teaching them programming: as long as they get past them, and understand the difference between what they were trying and what eventually worked.

In that context, comments like this are worrying:

I left one such student to read the documentation for a few minutes, but when I returned to him he was none the wiser. The Arrays section of the doc might be sub-optimal, but it can’t be that bad – it’s much the same as last year’s.

So the teacher knows that the course might have problems, but not what they are or how to correct them despite seeing the failure modes in first person. This is not isolated: the Oxford course linked above has not changed substantially since the version I wrote (which added all the Finder & Xcode stuff).

So, we know something about the problems encountered by students of programming. Do we know anything about how they try to solve those problems?

An analysis of patterns of debugging among novice computer science students

We’ve already seen that students didn’t think to use the interactive interpreter feature of Python when the course handbook stopped telling them about it. In this paper, Ahmadzadeh et al modified the Java compiler to collect analytics about errors encountered by students on their course (the methodology looks very similar to the other Java paper, above). An interesting statistic appears in §3.1, a statistical analysis of compiler errors:

It can be seen from this table that the error that is most common amongst all the subjects is failing to define a variable before it is used. This was almost always the highest frequency error when teaching a range of different concepts.

It’s possible that you could avoid 30-50% of the problems discovered in this study by using a language that doesn’t require explicit declaration of variables. Would the errors then be replaced by something else? Maybe.

In section 4, the authors note that there’s a distinction between being able to debug effectively and being able to program well: most people who are good at debugging are also good programmers, but a minority of good programmers are good at debugging. Of course this is measuring a class of neophytes so it’s possible that this gap eventually closes, but more work would need to be done to demonstrate or disprove that.

I notice that the students in this test are (at least, initially) fixing problems introduced into a program by someone else. Is that skill related to fixing problems in your own code? Might you be more frustrated if you think you’ve finished an assignment only to find there’s a problem you don’t understand in it? Does debugging someone else’s program support the educational goal? This paper suggests that the skills are in fact different, which is why “good” programmers can be “bad” debuggers: they understand programming, but not the problem solved by someone else’s code. They also suggest that “bad” programmers who do well at fixing bugs do it because they understand the aim of the program, and can reason about what the software should be doing. Perhaps being good at fixing bugs means more understanding of specifications than of code—traditionally the outlook of the tester (who is called on to find bugs but rarely to fix them).

Conclusions and Questions

There’s a surprising amount of data out there on the problems faced by students being taught programming—some of it leads directly to actionable conclusions, or at least testable hypotheses. Despite that, some courses look no different from courses I taught in 2004-2006, nor indeed from a course I took in 2000-2001.

Judicious selection of language could help students avoid some of the “syntactic” problems in programming, by choosing languages with less ceremony. Whether such a change would lead to students learning faster, enjoying the topic more, or just bumping up against a different set of syntax errors needs to be tested. But can we extrapolate from this? Are environments that are good for student programmers good for novices in general, including inexperienced professionals? Can this be taken further? Could we conclude that some languages waste time for all programmers, or that becoming expert in programming just means learning to cope with your environment’s idiosyncrasies?

And what should we make of this result that being good at programming and debugging do not go together? Should a programming course aim to develop both skills, or should specialisation be noticed and encouraged early? [Is there indeed a degree in software testing offered at any university?]

But, perhaps most urgently, why are so many different groups approaching this problem independently? Physics and Engineering academics are not experts at teaching computing, and as we’ve seen, science code is not necessarily the best code. Could someone aggregate these results from various courses and produce the killer course in undergraduate computing?

Elegant Object-oriented Software Design via Interactive, Evolutionary Computation

The abstract of this paper from the ArXiv had me concerned:

Design is fundamental to software development but can be demanding to perform. Thus to assist the software designer, evolutionary computing is being increasingly applied using machine-based, quantitative fitness functions to evolve software designs. However, in nature, elegance and symmetry play a crucial role in the reproductive fitness of various organisms. In addition, subjective evaluation has also been exploited in Interactive Evolutionary Computation (IEC). Therefore to investigate the role of elegance and symmetry in software design, four novel elegance measures are proposed based on the evenness of distribution of design elements. In controlled experiments in a dynamic interactive evolutionary computation environment, designers are presented with visualizations of object-oriented software designs, which they rank according to a subjective assessment of elegance. For three out of the four elegance measures proposed, it is found that a significant correlation exists between elegance values and reward elicited. These three elegance measures assess the evenness of distribution of (a) attributes and methods among classes, (b) external couples between classes, and (c) the ratio of attributes to methods. It is concluded that symmetrical elegance is in some way significant in software design, and that this can be exploited in dynamic, multi-objective interactive evolutionary computation to produce elegant software designs.

The “design” of a software system is, to me, a part of the social science aspect of software engineering. Does the design make it easy for me to work out how the software functions? Can I see what I need to change to fix some problem? If I don’t fix this problem, can you also see what you’d need to change? Does working with this design cause an emotional response?

With this mindset, it’s hard to understand how an objective metric of software design can be formulated. Without that understanding, it’s impossible to see any value in letting a software system design another software system that human developers are going to work on. In fact, it seems entirely back to front: the design is (to me) the part that needs a combination of experience, insight and serendipity to create. If a computer can then automatically fill some of the details in a way that saves time and reduces error, that would be useful. Doing it the other way around means human programmers become blue-collar subordinates to the (literal) software architect.

So, I didn’t exactly jump into the rest of this paper with an open mind; I recognised that was a problem I needed to deal with, so I ploughed on anyway. I started by reading “A Survey on Search-based Software Design”, along with some of the other references, with a view to working out just what it was that these researchers are trying to automate. In the event, this post took a couple of weeks to write at my usual “whenever I get a chance” rate—there was a lot to understand.

What’s Going On?

The idea is that certain principles in object-oriented design can be assigned a quantitative value: a “score”, if you will. So you could score a design on how tightly-coupled the classes are, on how many responsibilities each class has, and on other features. You can also decide that a good design would aim to lower or increase particular scores; for example that looser coupling and fewer responsibilities are “better”. You could decide that some designs are just “stillborn”, and no matter how well they do on some metrics you’re never going to use them: a circular reference, or a class with no responsibilities, might instantly be discarded.

Now imagine deriving some initial design for your software, for example from a collection of use cases. (You may be wondering how, and that’s a good question: if your initial guess at the design is derived automatically from the use cases, then the use cases themselves need to be pretty precise, complete and unambiguous. In other words, they need to be written in a computer programming language.[*]) You score that design according to the criteria you defined, then make some “mutations” (which, in the case of evolutionary software design, means applying design patterns from a catalogue). The mutations that score better, you keep, combining them and mutating them further. Eventually you should have a collection of design candidates that are all much better than the initial guess.

[*]As a thought experiment, take the use-case diagram for a cinema booking system used as one of the inputs for this paper’s methods. Try designing a software system to implement these use cases; every time you have a question that isn’t answered by the diagram, make a note but _don’t assume an answer_. How many questions do you end up with? Are you happy using a design in which these questions are unanswered? My guess is you’ll be OK to leave some of them, designing the software to be flexible to different answers. But some will cause more problems unless they’re addressed.
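
As a sketch of that machinery (a toy encoding of my own, not the representation or the fitness functions used in the paper), treat a design as nothing more than an assignment of methods to classes, score it by whether related methods end up together, mutate by moving a method, and keep the better-scoring candidates each generation.

# Toy evolutionary design: a "design" maps method names to one of three
# classes; fitness rewards putting related methods in the same class. The
# method names, relatedness pairs and scoring are invented for illustration.
import random

METHODS = ["reserve_seat", "cancel_booking", "take_payment",
           "refund_payment", "list_showings", "schedule_showing"]
RELATED = {("reserve_seat", "cancel_booking"),
           ("take_payment", "refund_payment"),
           ("list_showings", "schedule_showing")}

def fitness(design):
    # +1 for each related pair kept together, -1 for each pair split up
    return sum(1 if design[a] == design[b] else -1 for a, b in RELATED)

def mutate(design):
    child = dict(design)
    child[random.choice(METHODS)] = random.randrange(3)  # move one method
    return child

def evolve(generations=200, population_size=10):
    population = [{m: random.randrange(3) for m in METHODS}
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(population_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)

In the paper’s interactive setting, part of that scoring comes from a human designer’s reward rather than from a fixed function like this one.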

Is This Useful?

I don’t feel like I got over the bias that I went into this post with: that the point of software design is to communicate something about that software among the various people who will be working on it. Computer-generated design is like computer-generated prose, when viewed from that perspective: uncannily close, but no substitute for the real thing.

What you could get from a technique like this are proposals for improvement on designs: indeed one branch (or clade, perhaps) of research in which genetic algorithms are applied to software design is in refactoring. One can imagine future UML tools (or IDEs) offering suggestions at the architecture level, just as current IDEs can offer suggestions for individual lines or methods.

That’s basically what this paper is driving at: the “interactive” part of Interactive Evolutionary Computation. Human participants created the first versions of the designs, and qualitatively evaluated the later iterations (which were both produced and also evaluated by the software). Ultimately, software designers were called upon to reward the “better” designs and to decide when to stop the iteration: i.e. they chose whether to accept the “suggestions” made by the evolutionary algorithm.

So is this technique a step on the way to having that tool? Looking at table 4, you might think that the software did create better designs than the humans in two thirds of cases. Such is the danger of bold typeface. Look again at the standard deviations. Unfortunately, discussing the results with a tame statistician, we couldn’t agree that the analysis in the paper shows the significant results the authors claim. As an example, it’s not clear that pairing any two metrics actually makes sense, or that just because one measurement comes out consistently lower overall it’s a better indicator of “elegance” than another (which might vary more between designs: something we’re not told here).

The authors are on clearer footing when evaluating the relationship between the rewards given and the metrics: ignoring the software algorithm completely, do people consistently think of some particular property of a software design as indicative of elegance? While they’ve only got 7 participants (who, assuming you know the group, can probably be de-anonymised based on the information presented…just saying), and it’s risky to draw general conclusions from such a small number of people[*], there are early indications here of consistency.

[*]particularly as they’re all in academia, and probably all in the same institution.

On Scientific Computing

Or: Not everyone works the way you work

Currently doing the rounds on Twitter is a paper from the arXiv called Best Practices for Scientific Computing—a paper with 13 authors and 6 pages, including a page-long collection of references. Here’s the abstract:

Scientists spend an increasing amount of time building and using software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. We describe a set of best practices for scientific software development that have solid foundations in research and experience, and that improve scientists’ productivity and the reliability of their software.

Let me start with an anecdote. It’s 2004, and I’ve just started working as a systems manager in a university computing lab. My job is partly to maintain the computers in the lab, partly to teach programming and numerical computing to Physics undergraduates, and partly to write software that will assist in said teaching. As part of this work I started using version control, both for my source code and for some of the configuration files in /etc on the servers. A more experienced colleague saw me doing this and told me that I was just generating work for myself, that this wasn’t necessary for the small things I was maintaining.

Move on now to 2010, and I’m working in a big scientific facility in the UK. Using software and a lot of computers, we’ve got something that used to take an entire PhD to finish down to somewhere between one and eight hours. I’m on the software team, and yes we’re using version control to track changes to the software and to understand what version is released. Well, kind of, anyway. The “core” is in version control, but one of its main features is to provide a scripting environment and DSL in which scientists at the “lab benches”, if you will, can write up scripts that automate their experiments. These scripts are not (necessarily) version-controlled. Worse, the source code is deployed to the experimental stations, so someone who discovers a bug in the core can fix it locally without the change being tracked in version control.

So, a group does an experiment at this facility, and produces some interesting results. You try to replicate this later, and you get different results. Could be software-related, right? All you need to do is to use the same software that the original group used…unfortunately, you can’t. It’s gone.

That’s an example of how scientists failing to use the tools from software development could be compromising their science. There’s a lot of snake oil in the software field, both from people wanting you to use their tools/methodologies because you’ll pay them for it, and from people who have decided that “their” way of working is correct and that any other way is incorrect. You need to be able to cut through all of that nonsense to find out how particular tools or techniques impact the actual work you’re trying to achieve. Current philosophy of science places a high value on reproducibility and auditing. Version control supports that, so it would be beneficial for programmers working in science to use version control.
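
As a small, concrete example of that support (the output file name and metadata fields are my own invention), an analysis script can record exactly which commit of its own code produced a result, so a later attempt at reproduction can start from the same version:

# Record which version of the analysis code produced a result, so that the
# computation can be re-run later against exactly the same code.
import json
import subprocess
from datetime import datetime, timezone

def current_commit():
    """Return the hash of the commit currently checked out, or None."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return None

def write_result(value, path="result.json"):
    record = {
        "result": value,
        "code_version": current_commit(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "w") as output:
        json.dump(record, output, indent=2)

write_result(42.0)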

But version control is only one of the 10 recommendation sections in this paper (another is about using the computer to record history, something that I’ll assume is covered well by the above discussion). That leaves eight other sections, which each contain numbered pronouncements about how scientists should write software.

Were you surprised?

I expect, if you write software in the commercial sector, you wouldn’t find any of their suggestions contentious: examples include naming things meaningfully, using a consistent convention for names and layout, don’t repeat yourself, and so on. I included this paper here to start discussion of an important point.

What goes on in commercial software engineering is not the be-all and end-all of software development. Scientific software has been around for as long as there have been computers to run software on, and indeed not only is some really old software still in use but the people who wrote it are still around and maintaining it. In the aforementioned university lab, one of my tasks was to help a professor who’d been using his home-grown FORTRAN FITS manipulation routines for at least two decades. Every system he’d used it on—most recently PowerPC, MIPS and Alpha workstations—had been big-endian and he didn’t know why it gave the wrong results when used on our new Intel Mac. His postdocs and PhD students were using the same code—in the same FORTRAN language, which he’d either taught them or given them a book on. And then of course when they moved to a different institution they’d take that code and that understanding of code with them.
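
The failure he was hitting is easy to reproduce in miniature (the value below is made up; FITS stores its binary data big-endian, and the Intel Mac is little-endian): the same four bytes decode to different numbers depending on the byte order you assume.

# Write an integer in big-endian byte order, then read it back two ways.
import struct

stored = struct.pack(">i", 1969)           # as the old code wrote it
correct = struct.unpack(">i", stored)[0]   # read with the right byte order
wrong = struct.unpack("<i", stored)[0]     # read assuming native order
print(correct, wrong)                      # 1969 -1324941312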

I imagine that many professional programmers are not surprised by the validity of (m)any of the statements made in this paper, but by the necessity of stating them. No, not everyone uses version control, or thinks that agile is the best thing ever, or uses consistent naming conventions throughout a source file. Indeed, in my experience of scientific programming, use of a symbolic debugger wasn’t widespread. If you consider all of these problems to be “solved” then you’re really only looking at a limited part of the world of software development. It’s not just scientific computing that doesn’t match that worldview; what about all the people out there for whom programming is a bunch of Excel formulae and maybe the odd VBA macro pasted from a website?

In both commercial and scientific software development, understanding and behaviour is spread by sharing knowledge from masters to apprentices. I think that the reason there’s such a big difference in practice could be due to the longer generations in scientific software. That 20+-year-old FITS code still works, why change it? And those 20+-year-old practices that created the FITS code, well they still work too, don’t they?

Which of these things actually matters?

Based on my own experience I’d assert that all of them are important things for scientific programmers to know about. I’ve argued, hopefully convincingly, that version control has an important part to play in the scientific process: numerical analysis is a key part of many experiments and like the rest of the method it should be available for inspection and repetition. Science is also a collaborative activity, so it makes sense that some of the recommendations would be about collaboration: document the purpose of the code instead of its mechanics, write programs for people.

Could I justify those assertions with figures? Probably not. Is that important? Well, actually it probably is. Of the researchers I’ve worked with (bear in mind this has always been in Physics), even many who are heavily invested in computational methods see programming (rightly) as a means to an end, and aren’t likely to try new-to-them techniques in programming just for the sake of programming. Despite any rational economic benefit, they’d rather stick with what they know and focus on getting new results without any surprises.

If you want to say “it’s better to work this way” or “you’ll get results quicker like this”, to a bunch of physicists, you have to show them data to prove it. A paper like the one I’m discussing here will likely be read, if it:

  1. gets published
  2. said publication happens in a relevant journal
  3. said publication is picked up and circulated in enough news sources that researchers who don’t read the publishing journal get wind

On the other hand, it’s likely that the article’s tone will ensure that it only preaches to the converted. Nothing in the paper says “this is actually better”, just “professional programmers do these things”. Exercises like Software Carpentry are likely to only appeal to people who already have an interest in bettering their own programming abilities. As I said, most researchers I know don’t: they want to publish, and programming is a necessary—albeit complex—tool helping them to achieve that.

Why is this suddenly an issue?

It isn’t. A very quick search for errors in scientific computing yielded papers published across the last two decades, and I could probably find more. The abstracts for these (I did say it was a very quick search) include some pining for the use of skills from software engineering, or a closer focus by software engineering researchers on scientific computing projects.

What can be done about it?

That’s a very good question. If we knew what to do to improve the quality of any software production effort, there’d be a lot more good software in the world :). If the techniques from commercial software really would help make scientific software better, why wait for the scientists to apply them? Plenty of scientific software is open source, so in the case of things like analysis tools and automatic tests, sufficiently motivated individuals could just apply those things then demonstrate to the project maintainers how much of a difference they’ve made. Sure, there will be problems: I once worked on some software that could only be successfully executed if there was a particle accelerator connected to your workstation. But the first thing I did was to make a virtual particle accelerator – demonstrating how much easier it was to make progress if you could do it away from the experiment.

This brings me onto another option: scientific computing teams can employ commercial developers. I’ve seen it happen, I’ve seen it work and I’ve seen it fail. The ways in which it works include sharing of knowledge from both disciplines, discussing and improving practices. The ways in which it fails come down to frustration on both sides: scientific programmers feel that refactoring is change for change’s sake, perhaps, and software engineers think that not using their favourite practices is the realm of cowboys. That means that for a cross-discipline software team to work, it needs good leadership: the team needs to be designed to appreciate the different skills and viewpoints brought by the different members. And now we’ve gone fully out of the realm of science into management techniques.

Representativeness in Software Engineering Research

The first paragraph describes the context of this post in relation to the blog on which it originally appeared, not blog.securemacprogramming.com.

For this post, I wanted to go a little bit meta. One focus of this blog will be on whether results from academic software engineering are applicable to the work I do as a commercial software developer, so it was hard to pass up this Microsoft Research paper on representativeness of research.

In a nutshell, the problem is this: imagine that you read some study that shows, for example, that schedule slippage on a software project is significantly lessened if developers are given two digestive biscuits and a cup of tea at 4pm on working days. The study examined the breaktime habits of developers on 500 open source projects. This sounds quite convincing. If this thing with the tea and biscuits is true across five hundred projects, it must be applicable to my project, right?

That doesn’t follow. There are many reasons why it might not follow: the study may be biased. The analysis may be wrong. The biscuit thing may be significant but not the root cause of success. The authors may have selected projects that would demonstrate the biscuit outcome. Projects that had initially signed up but got delayed might have dropped out. This paper evaluates one cause of bias: the projects used in the study aren’t representative of the project you’re doing.

It’s a fallacy to assume that just because a study has a large sample size, its results can be generalised to the population. This only applies in the case that the sample represents an even slice of the population. Imagine a world in which all software projects are either written in Java or LISP. Now it doesn’t matter whether I select 10 projects or 10,000 projects: a sample of LISP practices will not necessarily tell us anything about how to conduct a Java project.

Conversely a study that investigates both Java and LISP projects can—in this restricted universe, and with the usual “all other things being equal” caveat—tell us something generally about software projects independent of language. However, choice of language is only one dimension in which software can be measured: the size, activity, number of developers, licence and other factors can all be relevant. Therefore the possible phase space of important factors can be multidimensional.

In this paper the authors develop a framework, based on work in medicine and other fields, for measuring and maximising representativeness of a sample by appropriate selection of projects along the dimensions of the problem space. They apply the framework to recent research.
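
A minimal sketch of that kind of measurement, in the spirit of the paper’s coverage score rather than its exact definition: treat each project as a point in a handful of dimensions, count a project in the universe as represented if some sampled project is similar to it in every dimension, and report the represented fraction. The projects, dimensions and similarity thresholds below are invented.

# Toy representativeness score: a universe project is represented if some
# sampled project matches its language and is within a factor of ten on
# size and team. The paper defines its own dimensions and similarity rules.
def similar(a, b):
    if a["language"] != b["language"]:
        return False
    for dimension in ("size_loc", "developers"):
        low, high = sorted((a[dimension], b[dimension]))
        if low == 0 or high / low > 10:
            return False
    return True

def coverage(universe, sample):
    represented = sum(1 for project in universe
                      if any(similar(project, chosen) for chosen in sample))
    return represented / len(universe)

universe = [
    {"language": "Java", "size_loc": 120_000, "developers": 14},
    {"language": "Java", "size_loc": 3_000, "developers": 2},
    {"language": "Objective-C", "size_loc": 45_000, "developers": 5},
    {"language": "Lisp", "size_loc": 800_000, "developers": 40},
]
sample = [{"language": "Java", "size_loc": 100_000, "developers": 10}]
print(coverage(universe, sample))  # 0.25: only the first project is represented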

What they discovered, tabulated in Table II of the paper, is that while a very small, carefully-selected sample can be surprisingly representative (50 out of ~20k projects represented ~15% of their problem space), the ~200 projects they could find analysed in recent research only scored around 9% on their representativeness metric. However in certain dimensions the studies were highly representative, many covering 100% of the phase space in specific dimensions.

Conclusions

A fact that jumped out at me, because of the field I work in, is that there are 245 Objective-C projects in the universe studied by this paper (the projects indexed on Ohloh) and that not one of these is covered by any of the studies they analysed. That could mean that my own back yard is ripe for the picking: that there are interesting results to be determined by analysing Objective-C projects and comparing those results with received wisdom.

In the discussion section, the authors point out that just because a study is not general, does not mean it is not useful. You may not be able to generalise a result gleaned from analysing (say) Java developer tools to all software development, but if what you’re interested in is Java developer tools then that is not a problem.

What this paper gives us, then, is not necessarily a tool that commercial developers can run out and use. It gives us some important quantitative context on evaluating the research that we do read. And, should we want to analyse our own work and investigate hypotheses about factors affecting our projects, it gives us a framework to understand just how representative those analyses would be.