Elegant Object-oriented Software Design via Interactive, Evolutionary Computation

The abstract of this paper from the ArXiv had me concerned:

Design is fundamental to software development but can be demanding to perform. Thus to assist the software designer, evolutionary computing is being increasingly applied using machine-based, quantitative fitness functions to evolve software designs. However, in nature, elegance and symmetry play a crucial role in the reproductive fitness of various organisms. In addition, subjective evaluation has also been exploited in Interactive Evolutionary Computation (IEC). Therefore to investigate the role of elegance and symmetry in software design, four novel elegance measures are proposed based on the evenness of distribution of design elements. In controlled experiments in a dynamic interactive evolutionary computation environment, designers are presented with visualizations of object-oriented software designs, which they rank according to a subjective assessment of elegance. For three out of the four elegance measures proposed, it is found that a significant correlation exists between elegance values and reward elicited. These three elegance measures assess the evenness of distribution of (a) attributes and methods among classes, (b) external couples between classes, and (c) the ratio of attributes to methods. It is concluded that symmetrical elegance is in some way significant in software design, and that this can be exploited in dynamic, multi-objective interactive evolutionary computation to produce elegant software designs.

The “design” of a software system is, to me, a part of the social science aspect of software engineering. Does the design make it easy for me to work out how the software functions? Can I see what I need to change to fix some problem? If I don’t fix this problem, can you also see what you’d need to change? Does working with this design cause an emotional response?

With this mindset, it’s hard to understand how an objective metric of software design can be formulated. Without that understanding, it’s impossible to see any value in letting a software system design another software system that human developers are going to work on. In fact, it seems entirely back to front: the design is (to me) the part that needs a combination of experience, insight and serendipity to create. If a computer can then automatically fill some of the details in a way that saves time and reduces error, that would be useful. Doing it the other way around means human programmers become blue-collar subordinates to the (literal) software architect.

So, I didn’t exactly jump into the rest of this paper with an open mind; I recognised that as a problem I needed to deal with, so I ploughed on anyway. I started by reading “A Survey on Search-based Software Design”, along with some of the other references, with a view to working out just what it was that these researchers are trying to automate. In the event, this post took a couple of weeks to write at my usual “whenever I get a chance” rate—there was a lot to understand.

What’s Going On?

The idea is that certain principles in object-oriented design can be assigned a quantitative value: a “score”, if you will. So you could score a design on how tightly coupled the classes are, on how many responsibilities each class has, and on other features. You can also decide that a good design should aim to lower or raise particular scores: for example, that looser coupling and fewer responsibilities are “better”. You could decide that some designs are just “stillborn”, and no matter how well they do on some metrics you’re never going to use them: a circular reference, or a class with no responsibilities, might instantly be discarded.
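As a concrete (if toy) illustration of what that kind of scoring could look like, here is a short Python sketch. The data structures and metrics are my own inventions for the example, not the measures used in the paper; it just shows the idea of hard constraints that discard a design outright, plus a numeric score where lower is better.

```python
from dataclasses import dataclass, field

@dataclass
class ClassDesign:
    name: str
    attributes: list[str] = field(default_factory=list)
    methods: list[str] = field(default_factory=list)
    collaborators: list[str] = field(default_factory=list)  # classes this one depends on

@dataclass
class Design:
    classes: list[ClassDesign]

def is_viable(design: Design) -> bool:
    """Hard constraints: discard “stillborn” designs before scoring them at all."""
    by_name = {c.name: c for c in design.classes}
    for c in design.classes:
        if not c.attributes and not c.methods:
            return False  # a class with no responsibilities
        for other in c.collaborators:
            if other in by_name and c.name in by_name[other].collaborators:
                return False  # classes referring to each other (a circular reference)
    return True

def score(design: Design) -> float:
    """Lower is better: penalise tight coupling and overloaded classes."""
    coupling = sum(len(c.collaborators) for c in design.classes)
    busiest_class = max(len(c.attributes) + len(c.methods) for c in design.classes)
    return coupling + busiest_class

# A tiny usage example with two made-up classes:
booking = Design([
    ClassDesign("Booking", ["seat", "showing"], ["confirm"], ["Showing"]),
    ClassDesign("Showing", ["film", "time"], ["seats_available"], []),
])
assert is_viable(booking)
print(score(booking))  # 1 coupling edge + 3 members in the busiest class = 4
```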

Now imagine deriving some initial design for your software, for example from a collection of use cases. (You may be wondering how, and that’s a good question: if your initial guess at the design is derived automatically from the use cases, then the use cases themselves need to be pretty precise, complete and unambiguous. In other words, they need to be written in a computer programming language.[*]) You score that design according to the criteria you defined, then make some “mutations” (which, in the case of evolutionary software design, means applying design patterns from a catalogue). The mutations that score better, you keep, combining them and mutating them further. Eventually you should have a collection of design candidates that are all much better than the initial guess.

[*]As a thought experiment, take the use-case diagram for a cinema booking system used as one of the inputs for this paper’s methods. Try designing a software system to implement these use cases; every time you have a question that isn’t answered by the diagram, make a note but _don’t assume an answer_. How many questions do you end up with? Are you happy using a design in which these questions are unanswered? My guess is you’ll be OK to leave some of them, designing the software to be flexible to different answers. But some will cause more problems unless they’re addressed.
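Back on the main thread, the loop itself is easier to see in code than in prose. The sketch below is a deliberately generic score-mutate-select loop, not the paper’s algorithm: the mutation operators stand in for the pattern-catalogue transformations mentioned above, and I’ve left out the “combining” (crossover) step for brevity.

```python
import random
from typing import Callable, TypeVar

D = TypeVar("D")  # whatever structure represents a design

def evolve(initial: D,
           score: Callable[[D], float],         # lower is better
           mutations: list[Callable[[D], D]],   # e.g. pattern-catalogue transformations
           generations: int = 50,
           population_size: int = 20) -> list[D]:
    """Keep mutants that score no worse than their parents; cull to the best few; repeat."""
    population = [initial]
    for _ in range(generations):
        offspring = []
        for parent in population:
            child = random.choice(mutations)(parent)
            if score(child) <= score(parent):
                offspring.append(child)
        # Retain the best-scoring candidates across parents and offspring.
        population = sorted(population + offspring, key=score)[:population_size]
    return population
```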

Is This Useful?

I don’t feel like I got over the bias that I went into this post with: that the point of software design is to communicate something about that software among the various people who will be working on it. Computer-generated design is like computer-generated prose, when viewed from that perspective: uncannily close, but no substitute for the real thing.

What you could get from a technique like this are proposals for improvement on designs: indeed one branch (or clade, perhaps) of research in which genetic algorithms are applied to software design is in refactoring. One can imagine future UML tools (or IDEs) offering suggestions at the architecture level, just as current IDEs can offer suggestions for individual lines or methods.

That’s basically what this paper is driving at: the “interactive” part of Interactive Evolutionary Computation. Human participants created the first versions of the designs, and qualitatively evaluated the later iterations (which were both produced and quantitatively evaluated by the software). Ultimately, software designers were called upon to reward the “better” designs and to decide when to stop the iteration: i.e. they chose whether to accept the “suggestions” made by the evolutionary algorithm.
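The paper’s multi-objective reward scheme is more involved than this, but the shape of that interaction might look something like the following sketch, where designer_reward and accept are hypothetical stand-ins for the human input, and the quantitative metrics are blended with the reward by a simple weighting.

```python
from typing import Callable, TypeVar

D = TypeVar("D")

def interactive_evolve(population: list[D],
                       next_generation: Callable[[list[D]], list[D]],  # the machine's mutate/select step
                       machine_score: Callable[[D], float],            # quantitative metrics, lower is better
                       designer_reward: Callable[[D], float],          # subjective reward, higher is better
                       accept: Callable[[D], bool],                    # the designer decides when to stop
                       reward_weight: float = 1.0) -> D:
    """Re-rank each generation by blending metrics with the designer's reward, until a design is accepted."""
    while True:
        population = next_generation(population)
        population.sort(key=lambda d: machine_score(d) - reward_weight * designer_reward(d))
        if accept(population[0]):
            return population[0]
```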

So is this technique a step on the way to having that tool? Looking at table 4, you might think that the software did create better designs than the humans in two thirds of cases. Such is the danger of bold typeface. Look again at the standard deviations. Unfortunately, having discussed the results with a tame statistician, we couldn’t agree that the analysis in the paper shows the significant results the authors claim. As an example, it’s not clear that pairing any two metrics actually makes sense, or that, just because one measurement comes out consistently lower overall, it’s a better indicator of “elegance” than another (which might vary more between designs: something we’re not told here).

The authors are on firmer footing when evaluating the relationship between the rewards given and the metrics: ignoring the software algorithm completely, do people consistently think of some particular property of a software design as indicative of elegance? While they’ve only got 7 participants (who, assuming you know the group, can probably be de-anonymised based on the information presented…just saying), and it’s risky to draw general conclusions from such a small number of people[*], there are early indications here of consistency.

[*]particularly as they’re all in academia, and probably all in the same institution.
