My use of Latin: a glossary

  • i.e.: I Explain
  • e.g.: Example Given
  • et al.: Extremely Tedious Author List
  • op. cit.: Other Page Cited It Too
  • ibid.: In Book I Described
  • etc.: Evermore To Continue
  • a.m.: Argh! Morning!
  • p.m.: Past Morning
  • ca.: Close Approximation
  • sic.: See Inexcusable Cock-up
Posted in whatevs | Comments Off on My use of Latin: a glossary

Depending on the self-interest of strangers

The title is borrowed from an economics article by Art Carden, which is of no further relevance to this post. Interesting read though, yes?

I’m enjoying the discussion in the iOS Developer Community™ about dependency of app makers on third-party libraries.

My main sources for what I will (glibly, and with a lot of simplification) summarise as the “anti-dependency” argument are a talk by Marcus Zarra, which he gave at NSConference (a later version than the one I saw), and a blog post by Justin Williams.

This is perhaps not an argument totally against the use of libraries, but an argument in favour of caution and conservatism. As I understand it, the position(s) on this side can be summarised as an attempt to mitigate the following risks:

  • if I incorporate someone else’s library, then I’m outsourcing my understanding of the problem to them. If they don’t understand it in the same way that I do, then I might not end up with a desired solution.
  • if I incorporate someone else’s library, then I’m outsourcing my understanding of the code to them. If it turns out I need something different, I may not be able to make that happen.
  • incorporating someone else’s library may mean bringing in a load of code that doesn’t actually solve my problem, but that increases the cognitive load of understanding my product.

I can certainly empathise with the idea that bringing in code to solve a problem can be a liability. A large app I was involved in writing a while back used a few open source libraries, and all but one of them needed patching either to fix problems or to extend their capabilities for our novel setting. The one (known) bug that came close to ending up in production was due to the interaction between one of these libraries and the operating system.

But then there’s all of my code in my software that’s also a liability. The difference between my code and someone else’s code, to a very crude level of approximation that won’t stand up to intellectual rigour but is good enough for a first pass, is that my code cost me time to write. Other than that, it’s all liability. And let’s not accidentally give a free pass to platform-vendor libraries, which are written by the same squishy, error-prone human-meat that produces both the first- and third-party code.

The reason I find this discussion of interest is that at the beginning of OOP’s incursion into commercial programming, the key benefit of the technique was supposedly that we could all stop solving the same problems and borrow or buy other people’s solutions. Here are Brad Cox and Bill Hunt, from the August 1986 issue of Byte:

Encapsulation means that code suppliers can build, test, and document solutions to difficult user interface problems and store them in libraries as reusable software components that depend only loosely on the applications that use them. Encapsulation lets consumers assemble generic components directly into their applications, and inheritance lets them define new application-specific components by inheriting most of the work from generic components in the library.

Building programs by reusing generic components will seem strange if you think of programming as the act of assembling the raw statements and expressions of a programming language. The integrated circuit seemed just as strange to designers who built circuits from discrete electronic components. What is truly revolutionary about object-oriented programming is that it helps programmers reuse existing code, just as the silicon chip helps circuit builders reuse the work of chip designers. To emphasise this parallel we call reusable classes Software-ICs.

In the “software-IC” formulation of OOP, CocoaPods, RubyGems, etc. are the end-game of the technology. We take the work of the giants who came before us and use it to stand on their shoulders. Our output is not merely applications, which are fleeting in utility, but new objects from which still more applications can be built.

I look forward to seeing this discussion play out and finding out whether it moves us collectively in a new direction. I offer zero or more of the following potential conclusions to this post:

  • Cox and contemporaries were wrong, and overplayed the potential for re-use to help sell OOP and their companies’ products.
  • The anti-library sentiment is wrong, and downplays the potential for re-use to help sell billable hours.
  • Libraries just have an image problem, and we can define some trustworthiness metric (based, perhaps, on documentation or automated test coverage) that raises the bar and increases confidence.
  • Libraries inherently work to stop us understanding low-level stuff that actually, sooner or later, we’ll need to know about whether we like it or not.
  • Everyone’s free to do what they want, and while the dinosaurs are reinventing their wheels, the mammals can outcompete them by moving the craft forward.
  • Everyone’s free to do what they want, and while the library-importers’ castles are sinking into the swamps due to their architectural deficiencies, the self-inventors can outcompete them by building the most appropriate structures for the tasks at hand.
Posted in Uncategorized | Comments Off on Depending on the self-interest of strangers

Software, Science?

Is there any science in software making? Does it make sense to think of software making as scientific? Would it help if we could?

Hold on, just what is science anyway?

Good question. The medieval French philosopher-monk Buridan said that the source of all knowledge is experience, and Richard Feynman paraphrased this as “the test of all knowledge is experiment”.

If we accept that science involves some experimental test of knowledge, then some of our friends who think of themselves as scientists will find themselves excluded. Consider the astrophysicists and cosmologists. It’s all very well having a theory about how stars form, but if you’re not willing to swirl a big cloud of hydrogen, helium, lithium, carbon etc. about and watch what happens to it for a few billion years then you’re not doing the experiment. If you’re not doing the experiment then you’re not a scientist.

Our friends in what are called the biological and medical sciences also have some difficulty now. A lot of what they do is tested by experiment, but some of the experiments are not permitted on ethical grounds. If you’re not allowed to do the experiment, maybe you’re not a real scientist.

Another formulation (OK, I got this from the Wikipedia entry on Science) sees science as a sort of systematic storytelling: the making of “testable explanations and predictions about the universe”.

Under this definition, there’s no problem with calling astronomy a science: you think this is how things work, then you sit, and watch, and see whether that happens.

Of course a lot of other work fits into the category now, too. There’s no problem with seeing the “social sciences” as branches of science: if you can explain how people work, and make predictions that can (in principle, even if not in practice) be tested, then you’re doing science. Psychology, sociology, economics: these are all sciences now.

Speaking of the social sciences, we must remember that science itself is a social activity, and that the way it’s performed is really defined as the explicit and implicit rules and boundaries created by all the people who are doing it. As an example, consider falsificationism. This is the idea that a good scientific hypothesis is one that can be rejected, rather than confirmed, by an appropriately-designed experiment.

Sounds pretty good, right? Here’s the interesting thing: it’s also pretty new. It was mostly popularised by Karl Popper in the 20th Century. So if falsificationism is the hallmark of good science, then Einstein didn’t do good science, nor did Marie Curie, nor Galileo, or a whole load of other people who didn’t share the philosophy. Just like Dante’s Virgil was not permitted into heaven because he’d been born before Christ and therefore could not be a Christian, so all of the good souls of classical science are not permitted to be scientists because they did not benefit from Popper’s good message.

So what is science today is not necessarily science tomorrow, and there’s a sort of self-perpetuation of a culture of science that defines what it is. And of course that culture can be critiqued. Why is peer review appropriate? Why do the benefits outweigh the politics, the gazumping, the gender bias? Why should it be that if falsification is important, negative results are less likely to be published?

Let’s talk about Physics

Around a decade ago I was studying particle physics pretty hard. Now, there are plenty of interesting aspects to particle physics. Firstly, it’s a statistics-heavy discipline, and results in statistics are defined by how happy you are with them, not by some binary right/wrong criterion.

It turns out that particle physicists are a pretty conservative bunch. They’ll only accept a particle as “discovered” if the signal indicating its existence is measured at a five-sigma confidence: i.e. if there’s under a one-in-a-million chance that the signal arose randomly in the absence of the particle. Why five sigma? Why not three (a 99.7% confidence) or six (to keep Motorola happy)? Why not repeat it three times and call it good, like we did in middle school science classes?
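
For concreteness, here’s the back-of-the-envelope arithmetic behind those figures, assuming the usual one-sided Gaussian tail (conventions differ on one- versus two-sided limits, so take the exact numbers loosely):

    p_{5\sigma} = \tfrac{1}{2}\operatorname{erfc}\!\left(\tfrac{5}{\sqrt{2}}\right) \approx 2.9 \times 10^{-7} \approx \tfrac{1}{3\,500\,000},
    \qquad
    p_{3\sigma} = \tfrac{1}{2}\operatorname{erfc}\!\left(\tfrac{3}{\sqrt{2}}\right) \approx 1.3 \times 10^{-3}

so five sigma is comfortably under one in a million, and three sigma corresponds to the familiar 99.7% (two-sided) confidence.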

Also, it’s quite a specialised discipline, with a clear split between theory and practice and other divisions inside those fields. It’s been a long time since you could be a general particle physicist, and even longer since you could be simply a “physicist”. The split leads to some interesting questions about the definition of science again: if you make a prediction which you know can’t be verified during your lifetime due to the lag between theory and experimental capability, are you still doing science? Does it matter whether you are or not? Is the science in the theory (the verifiable, or falsifiable, prediction) or in the experiment? Or in both?

And how about Psychology, too

Physicists are among the most rational people I’ve worked with, but psychologists up the game by bringing their own feature to the mix: hypercriticality. And I mean that in the technical sense of criticism, not in the programmer “you’re grammar sucks” sense.

You see, psychology is hard, because people are messy. Physics is easy: the apple either fell to earth or it didn’t. Granted, quantum gets a bit weird, but it generally (probably) does its repeatable thing. We saw above that particle physics is based on statistics (as is semiconductor physics, as it happens); but you can still say that you expect some particular outcome or distribution of outcomes with some level of confidence. People aren’t so friendly. I mean, they’re friendly, but not in a scientific sense. You can do a nice repeatable psychology experiment in the lab, but only by removing so many confounding variables that it’s doubtful the results would carry over into the real world. And the experiment only really told you anything about local first year psychology undergraduates, because they’re the only people who:

  1. walked past the sign in the psychology department advertising the need for participants;
  2. need the ten dollars on offer for participation desperately enough to turn up.

In fact, you only really know about how local first year psychology undergraduates who know they’re participating in a psychology experiment behave. The ethics rules require informed consent which is a good thing because otherwise it’s hard to tell the difference between a psychology lab and a Channel 4 game show. But it means you have to say things like “hey this is totally an experiment and there’ll be counselling afterward if you’re disturbed by not really electrocuting the fake person behind the wall” which might affect how people react, except we don’t really know because we’re not allowed to do that experiment.

On the other hand, you can make observations about the real world, and draw conclusions from them, but it’s hard to know what caused what you saw because there are so many things going on. It’s pretty hard to re-run an entire society with just one thing changed, like “maybe if we just made Hitler an inch taller then Americans would like him, or perhaps try the exact same thing as prohibition again but in Indonesia” and other ideas that belong in Philip K. Dick novels.

So there’s this tension: repeatable results that might not apply to the real world (a lack of “ecological validity”), and real-world phenomena that might not be possible to explain (a lack of “internal validity”). And then there are all sorts of other problems too, so that psychologists know that for a study to hold water they need to surround what they say with caveats and limitations. Thus is born the “threats to validity” section on any paper, where the authors themselves describe the generality (or otherwise) of their results, knowing that such threats will be a hot topic of discussion.

But all of this—the physics, the psychology, and the other sciences—is basically a systematised story-telling exercise, in which the story is “this is why the universe is as it is” and the system is the collection of (time-and-space-dependent) rules that govern what stories may be told. It’s like religion, but with more maths (unless your religion is one of those ones that assigns numbers to each letter in a really long book then notices that your phone number appears about as many times as a Poisson distribution would suggest).

Wait, I think you were talking about software

Oh yeah, thanks. So, what science, if any, is there in making software? Does there exist a systematic approach to storytelling? First, let’s look at the kinds of stories we need to tell.

The first are the stories about the social system in which the software finds itself: the story of the users, their need (or otherwise) for a software system, their reactions (or otherwise) to the system introduced, how their interactions with each other change as a result of introducing the system, and so on. Call this requirements engineering, or human-computer interaction, or user experience; it’s one collection of stories.

You can see these kinds of stories emerging from the work of Manny Lehman. He identifies three types of software:

  • an S-system is exactly specified.
  • a P-system executes some known procedure.
  • an E-system must evolve to meet the needs of its environment.

It may seem that E-type software is the type in which our stories about society are relevant, but think again: why do we need software to match a specification, or to follow a procedure? Because automating that specification or procedure is of value to someone. Why, or to what end? Why that procedure? What is the impact of automating it? We’re back to telling stories about society. All of these software systems, according to Lehman, arise from discovery of a problem in the universe of discourse, and provide a solution that is of possible interest in the universe of discourse.

The second kind are the stories about how we worked together to build the software we thought was needed in the first stories. The practices we use to design, build and test our software are all tools for facilitating the interaction between the people who work together to make the things that come out. The things we learned about our own society, and that we hope we can repeat (or avoid) in the future, become our design, architecture, development, testing, deployment, maintenance and support practices. We even create our own tools—software for software’s sake—to automate, ease or disrupt our own interactions with each other.

You’ll mostly hear the second kind of story at most developer conferences. I believe that’s because the people who have most time and inclination to speak at most developer conferences are consultants, and they understand the second stories to a greater extent than the first because they don’t spend too long solving any particular problem. It’s also because most developer conferences are generally about making software, not about whatever problem it is that each of the attendees is trying to solve out in the world.

I’m going to borrow a convention that Rob Rix told me in an email, of labelling the first type of story as being about “external quality” and the second type about “internal quality”. I went through a few stages of acceptance of this taxonomy:

  1. Sounds like a great idea! There really are two different things we have to worry about.
  2. Hold on, this doesn’t sound like such a good thing. Are we really dividing our work into things we do for “us” and things we do for “them”? Labelling the non-technical identity? That sounds like a recipe for the outgroup homogeneity effect.
  3. No, wait, I’m thinking about it wrong. The people who make software are not the in-group. They are the mediators: it’s just the computers and the tools on one side of the boundary, and all of society on the other side. We have, in effect, the Janus Thinker: looking on the one hand toward the external stories, on the other toward the internal stories, and providing a portal allowing flow between the two.

JANUS (from Vatican collection) by Flickr user jinnrouge

So, um, science?

What we’re actually looking at is a potential social science: there are internal stories about our interactions with each other and external stories about our interactions with society and of society’s interactions with the things we create, and those stories could potentially be systematised and we’d have a social science of sorts.

Particularly, I want to make the point that we don’t have a clinical science, an analogy drawn by those who investigate evidence-based software engineering (which has included me, in my armchair way, in the past). You can usefully give half of your patients a medicine and half a placebo, then measure survival or recovery rates after that intervention. You cannot meaningfully treat a software practice, like TDD as an example, as a clinical intervention. How do you give half of your participants a placebo TDD? How much training will you give your ‘treatment’ group, and how will you organise placebo training for the ‘control’ group? [Actually I think I’ve been on some placebo training courses.]

In constructing our own scientific stories about the world of making software, we would run into the same problems that social scientists do in finding useful compromises between internal and ecological validity. For example, the oft-cited Exploratory experimental studies comparing online and offline programming performance (by Sackman et al., 1968) is frequently used to support the notion that there are “10x programmers”, that some people who write software just do it ten times faster than others.

However, this study does not have much ecological validity. It measures debugging performance, using either an offline process (submitting jobs to a batch system) or an online debugger called TSS, which probably isn’t a lot like the tools used in debugging today. The problems were well-specified, thus removing many of the real problems programmers face in designing software. Participants were expected to code a complete solution with no compiler errors, then debug it: not all programmers work like that. And where did they get their participants from? Did they have a diverse range of backgrounds, cultures, education, experience? It does not seem that any results from that study could necessarily apply to modern software development situated in a modern environment, nor could the claim of “10x programmers” necessarily generalise as we don’t know who is 10x better than whom, even at this one restricted task.

In fact, I’m also not convinced of its internal validity. There were four conditions (two programming problems and two debugging setups), each of which was assigned to six participants. Variance is so large that most of the variables are independent of each other (the independent variables are the programming problem and the debugging mode, and the dependent variables are the amount of person-time and CPU-time), unless the authors correlate them with “programming skill”. How is this skill defined? How is it measured? Why, when the individual scores are compared, is “programming skill” not again taken into consideration? What confounding variables might also affect the wide variation in scores reported? Is it possible that the fastest programmers had simply seen the problem and solved it before? We don’t know. What we do know is that the reported 28:1 ratio between best and worst performers is across both online and offline conditions (as pointed out in, e.g., The Leprechauns of Software Engineering), so that’s definitely a confounding factor. If we just looked at two programmers using the same environment, what difference would be found?

We had the problem that “programming skill” is not well-defined when examining the Sackman et al. study, and we’ll find that this problem is one we need to overcome more generally before we can make the “testable explanations and predictions” that we seek. Let’s revisit the TDD example from earlier: my hypothesis is that a team that adopts the test-driven development practice will be more productive some time later (we’ll defer a discussion of how long) than the null condition.

OK, so what do we mean by “productive”? Lines of code delivered? Probably not, their amount varies with representation. OK, number of machine instructions delivered? Not all of those would be considered useful. Amount of ‘customer value’? What does the customer consider valuable, and how do we ensure a fair measurement of that across the two conditions? Do bugs count as a reduction in value, or a need to do additional work? Or both? How long do we wait for a bug to not be found before we declare that it doesn’t exist? How is that discovery done? Does the expense related to finding bugs stay the same in both cases, or is that a confounding variable? Is the cost associated with finding bugs counted against the value delivered? And so on.

Software dogma

Because our stories are not currently very testable, many of them arise from a dogmatic belief that some tool, or process, or mode of thought, is superior to the alternatives, and that there can be no critical debate. For example, from the Clean Coder:

The bottom line is that TDD works, and everybody needs to get over it.

No room for alternatives or improvement, just get over it. If you’re having trouble defending it, apply a liberal sprinkle of argumentum ab auctoritate and tell everyone: Robert C. Martin says you should get over it!

You’ll also find any number of applications of the thought-terminating cliché, a rhetorical technique used to stop cognitive dissonance by allowing one side of the issue to go unchallenged. Some examples:

  • “I just use the right tool for the job”—OK, I’m done defending this tool in the face of evidence. It’s just clearly the correct one. You may go about your business. Move along.
  • “My approach is pragmatic”—It may look like I’m doing the opposite of what I told you earlier, but that’s because I always do the best thing to do, so I don’t need to explain the gap.
  • “I’m passionate about [X]”—yeah, I mean your argument might look valid, I suppose, if you’re the kind of person who doesn’t love this stuff as much as I do. Real craftsmen would get what I’m saying.
  • and more.

The good news is that out of such religious foundations spring the shoots of scientific thought, as people seek to find a solid justification for their dogma. So just as physics has strong spiritual connections, with Stephen Hawking concluding in A Brief History of Time:

However, if we discover a complete theory, it should in time be understandable by everyone, not just by a few scientists. Then we shall all, philosophers, scientists and just ordinary people, be able to take part in the discussion of the question of why it is that we and the universe exist. If we find the answer to that, it would be the ultimate triumph of human reason — for then we should know the mind of God.

and Einstein debating whether quantum physics represented a kind of deific Dungeons and Dragons:

[…] an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the “old one.” I, at any rate, am convinced that He does not throw dice.

so a (social) science of software could arise as an analogous form of experimental theology. I think the analogy could be drawn too far: the context is not similar enough to the ages of Islamic Science or of the Enlightenment to claim that similar shifts to those would occur. You already need a fairly stable base of rational science (and its application via engineering) to even have a computer at all upon which to run software, so there’s a larger base of scientific practice and philosophy to draw on.

It’s useful, though, when talking to a programmer who presents themselves as hyper-rational, to remember to dig in and to see just where the emotions, fallacious arguments and dogmatic reasoning are presenting themselves, and to wonder about what would have to change to turn any such discussion into a verifiable prediction. And, of course, to think about whether that would necessarily be a beneficial change. Just as we can critique scientific culture, so should we critique software culture.

Posted in advancement of the self, learning, social-science, software-engineering | Comments Off on Software, Science?

Inside-Out Apps

This article is based on a talk I gave at mdevcon 2014. The talk also included a specific example to demonstrate the approach, but was otherwise a presentation of the following argument.

You probably read this blog because you write apps. Which is kind of cool, because I have been known to use apps. I’d be interested to hear what yours is about.

Not so fast! Let me make this clear first: I only buy page-based apps, they must use Core Data, and I automatically give one star to any app that uses storyboards.

OK, that didn’t sound like it made any sense. Nobody actually chooses which apps to buy based on what technologies or frameworks are used, they choose which apps to buy based on what problems those apps solve. On the experience they derive from using the software.

When we build our apps, the problem we’re solving and the experience we’re providing need to be uppermost in our thoughts. You’re probably already used to doing this in designing an application: Apple’s Human Interface Guidelines describe the creation of an App Definition Statement to guide thinking about what goes into an app and how people will use it:

An app definition statement is a concise, concrete declaration of an app’s main purpose and its intended audience.

Create an app definition statement early in your development effort to help you turn an idea and a list of features into a coherent product that people want to own. Throughout development, use the definition statement to decide if potential features and behaviors make sense.

My suggestion is that you should use this idea to guide your app’s architecture and your class design too. Start from the problem, then work through solving that problem to building your application. I have two reasons: the first will help you, and the second will help me to help you.

The first reason is to promote a decoupling between the problem you’re trying to solve, the design you present for interacting with that solution, and the technologies you choose to implement the solution and its design. Your problem is not “I need a Master-Detail application”, which means that your solution may not be that. In fact, your problem is not that, and it may not make sense to present it that way. Or if it does now, it might not next week.

You see, designers are fickle beasts, and for all their feel-good bloviation about psychology and user experience, most are actually just operating on a combination of trend and whimsy. Last week’s refresh button is this week’s pull gesture is next week’s interaction-free event. Yesterday’s tab bar is today’s hamburger menu is tomorrow’s swipe-in drawer. Last decade’s mouse is this decade’s finger is next decade’s eye motion. Unless your problem is Corinthian leather, that’ll be gone soon. Whatever you’re doing for iOS 7 will change for iOS 8.

So it’s best to decouple your solution from your design, and the easiest way to do that is to solve the problem first and then design a presentation for it. Think about it. If you try to design a train timetable, then you’ll end up with a timetable that happens to contain train details. If you try to solve the problem “how do I know at what time to be on which platform to catch the train to London?”, then you might end up with a timetable, but you might not. And however the design of the solution changes, the problem itself will not: just as the problem of knowing where to catch a train has not changed in over a century.
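
As a sketch of what “solve the problem first” might look like, here is a hypothetical problem-first interface for that train example. None of these names exist anywhere real; they’re invented for illustration, and the point is only that nothing here commits us to a timetable, a notification, or any other presentation.

    #import <Foundation/Foundation.h>

    // The answer to "at what time should I be on which platform
    // to catch the train to London?", stated as plain objects.
    @interface GLDeparture : NSObject
    @property (nonatomic, strong) NSDate *departureTime;
    @property (nonatomic, copy) NSString *platform;
    @end

    @implementation GLDeparture
    @end

    // The problem, stated as a protocol: no views, no storage, no frameworks.
    @protocol GLJourneyPlanner <NSObject>
    - (GLDeparture *)nextDepartureToDestination:(NSString *)destination
                                      afterDate:(NSDate *)date;
    @end

A timetable screen is one way to present answers to that question; a “leave now” alert is another. Swapping between them leaves the planner untouched.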

The same problem that affects design-driven development also affects technology-driven development. This month you want to use Core Data. Then next month, you wish you hadn’t. The following month, you kind of want to again, then later you realise you needed a document database after all and go down that route. Solve the problem without depending on particular libraries, then changing libraries is no big deal, and neither is changing how you deal with those libraries.

It’s starting with the technology that leads to Massive View Controller. If you start by knowing that you need to glue some data to some view via a View Controller, then that’s what you end up with.

This problem is exacerbated, I believe, by a religious adherence to Model-View-Controller. My job here is not to destroy MVC, I am neither an iconoclast nor a sacrificer of sacred cattle. But when you get overly attached to MVC, then you look at every class you create and ask the question “is this a model, a view, or a controller?”. Because this question makes no sense, the answer doesn’t either: anything that isn’t evidently data or evidently graphics gets put into the amorphous “controller” collection, which eventually sucks your entire codebase into its innards like a black hole collapsing under its own weight.

Let’s stick with this “everything is MVC” difficulty for another paragraph, and possibly a bulleted list thereafter. Here are some helpful answers to the “which layer does this go in” questions:

  • does my Core Data stack belong in the model, the view, or the controller? No. Core Data is a persistence service, which your app can call on to save or retrieve data. Often the data will come from the model, but saving and retrieving that data is not itself part of your model. (There’s a sketch of this separation after the list.)
  • does my networking code belong in the model, the view, or the controller? No. Networking is a communications service, which your app can call on to send or retrieve data. Often the data will come from the model, but sending and retrieving that data is not itself part of your model.
  • is Core Graphics part of the model, the view, or the controller? No. Core Graphics is a display primitive that helps objects represent themselves on the display. Often those objects will be views, but the means by which they represent themselves are part of an external service.
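
Here’s the kind of shape that first bullet implies, as a rough sketch. The names are invented for illustration; the point is only that the storage service sits behind a boundary of its own, and the model never imports Core Data.

    #import <Foundation/Foundation.h>

    // The model: knowledge about the problem domain, and nothing else.
    @interface GLNote : NSObject
    @property (nonatomic, copy) NSString *text;
    @property (nonatomic, strong) NSDate *dateCreated;
    @end

    @implementation GLNote
    @end

    // The persistence *service* the app calls on. A Core Data implementation,
    // a flat file, or an in-memory fake for tests can all live behind this.
    @protocol GLNoteStorage <NSObject>
    - (void)saveNotes:(NSArray *)notes;
    - (NSArray *)savedNotes;
    @end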

So building an app in a solution-first approach can help focus on what the app does, removing any unfortunate coupling between that and what the app looks like or what the app uses. That’s the bit that helps you. Now, about the other reason for doing this, the reason that makes it easier for me to help you.

When I come to look at your code, and this happens fairly often, I need to work out quickly what it does and what it should do, so that I can work out why there’s a difference between those two things and what I need to do about it. If your app is organised in such a way that I can see how each class contributes to the problem being solved, then I can readily tell where I go for everything I need. If, on the other hand, your project looks like this:

MVC organisation

Then the only thing I can tell is that your app is entirely interchangeable with every other app that claims to be nothing more than MVC. This includes every Rails app, ever. Here’s the thing. I know what MVC is, and how it works. I know what UIKit is, and why Apple thinks everything is a view controller. I get those things, your app doesn’t need to tell me those things again. It needs to reflect not the similarities, which I’ve seen every time I’ve launched Project Builder since around 2000, but the differences, which are the things that make your app special.

OK, so that’s the theory. We should start from the problem, and move to the solution, then adapt the solution onto the presentation and other technology we need to use to get a product we can sell this week. When the technologies and the presentations change, we can adapt onto those new things, to get the product we can sell next week, without having to worry about whether we broke solving the problem. But what’s the practice? How do we do that?

Start with the model.

Remember that, in Apple’s words:

model objects represent knowledge and expertise related to a specific problem domain

so solving the problem first means modelling the problem first. Now you can do this without regard to any particular libraries or technology, although it helps to pick a programming language so that you can actually write something down. In fact, you can start here:

Command line tool

A Foundation command-line tool has everything you need to solve your problem (in fact it contains a few more things than that, to make up for erstwhile deficiencies in the link editor, but we’ll ignore those things). It lets you make objects, and it lets you use all those boring things that were solved by computer scientists back in the 1760s like strings, collections and memory allocation.

So with a combination of the subset of Objective-C, the bits of Foundation that should really be in Foundation, and unit tests to drive the design of the solution, we can solve whatever problem it is that the app needs to solve. There’s just one difficulty, and that is that the problem is only solved for people who know how to send messages to objects. Now we can worry about those fast-moving targets of presentation and technology choice, knowing that the core of our app is a stable, well-tested collection of objects that solve our customers’ problem. We expose aspects of the solution by adapting them onto our choice of user interface, and similarly any other technology dependencies we need to introduce are held at arm’s length. We test that we integrate with them correctly, but not that using them ends up solving the problem.
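
To make that concrete, here’s a minimal sketch of such a command-line starting point, borrowing the budgeting goal quoted in the references below. The class is hypothetical, and in practice you’d grow it with unit tests rather than poking it from main(); this is just to show that the whole “app” can be plain Foundation objects.

    #import <Foundation/Foundation.h>

    // A model object: knowledge and expertise about the problem domain.
    // No UIKit, no Core Data, no view controllers.
    @interface GLBudget : NSObject
    - (instancetype)initWithMonthlyAllowance:(NSDecimalNumber *)allowance;
    - (void)recordExpenditure:(NSDecimalNumber *)amount;
    - (NSDecimalNumber *)remainingAllowance;
    @end

    @implementation GLBudget
    {
        NSDecimalNumber *_remaining;
    }

    - (instancetype)initWithMonthlyAllowance:(NSDecimalNumber *)allowance
    {
        if ((self = [super init])) {
            _remaining = [allowance copy];
        }
        return self;
    }

    - (void)recordExpenditure:(NSDecimalNumber *)amount
    {
        _remaining = [_remaining decimalNumberBySubtracting:amount];
    }

    - (NSDecimalNumber *)remainingAllowance
    {
        return _remaining;
    }

    @end

    int main(int argc, const char *argv[])
    {
        @autoreleasepool {
            GLBudget *budget = [[GLBudget alloc]
                initWithMonthlyAllowance:[NSDecimalNumber decimalNumberWithString:@"500"]];
            [budget recordExpenditure:[NSDecimalNumber decimalNumberWithString:@"19.99"]];
            NSLog(@"Remaining this month: %@", [budget remainingAllowance]);
        }
        return 0;
    }

Whether that budget eventually shows up in a table view, a widget or a spoken summary is a separate, later decision.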

If something must go, then we drop it, without worrying whether we’ve broken our ability to solve the problem. The libraries and frameworks are just services that we can switch between as we see fit. They help us solve our problem, which is to help everyone else to solve their problem.

And yes, when you come to build the user interface, then model-view-controller will be important. But only in adapting the solution onto the user interface, not as a strategy for stuffing an infinite number of coats onto three coat hooks.

References

None of the above is new, it’s just how Object-Oriented Programming is supposed to work. In the first part of my MVC series, I investigated Thing-Model-View-Editor and the progress from the “Thing” (a problem in the real world that must be solved) to a “Model” (a representation of that problem and its solution in the computer). That article relied on sources from Trygve Reenskaug, who described (in 1979) the process by which he moved from a Thing to a Model and then to a user interface.

In his 1992 book Object-Oriented Software Engineering: A Use-Case Driven Approach, Ivar Jacobson describes a formalised version of the same motion, based on documented use cases. Some teams replaced use cases with user stories, which look a lot like Reenskaug’s user goals:

An example: To get better control over my finances, I would need to set up a budget; to keep account of all income and expenditure; and to keep a running comparison between budget and accounts.

Alistair Cockburn described the Ports and Adapters Architecture (earlier known as the hexagonal architecture) in which the application’s use cases (i.e. the ways in which it solves problems) are at the core, and everything else is kept at a distance through adapters which can easily be replaced.

Posted in architecture of sorts, MVC, OOP, ruby, software-engineering | Comments Off on Inside-Out Apps

Principled Lizards

Sixty-five million years ago, there were many huge lizards. Most of them were really happy being lizards, and would spend all of the time they could doing lizardy things. Some wanted to be the biggest lizards, and grew so large and so heavy that it would sound like peals of thunder if you could hear them walking about on their lizardy way. Others wanted to be the most terrible lizards, and they developed big scary teeth and sharp, shiny talons. The most terrible lizards were feared by many of the other lizards, but it was a fear that sprang from awe: they were all happy that each was, in their own way, the most lizardy of the lizards. And they were all happy that each of the other lizards they met was trying to be, in their own way, the most lizardy of lizards.

For the lizards met often. They would have their big get-togethers where the big lizards and the small lizards and the terrible lizards and the scaly lizards would each talk about how they handle being so big, or so small, or so terrible, or so scaly. And the other lizards would listen to these talks, and they would applaud the speakers for being so big, or so small, or so terrible, or so scaly. Having seen these examples of lizardly apotheosis, they would try to emulate them. So it was that the lizard world became bigger, but also smaller, and more terrible, and more scaly.

But it seems that not all of the lizards shared these goals of ever-increasing lizardhood. Some would try different things. A group of lizards found that they could regulate their own blood temperature, they would no longer need to sit in the sun all morning like the other lizards. One group of lizards turned their feathery covering to the task of improved aerodynamics. Another group turned it to a sort of coat, which stopped them getting so cold.

The big meetings of lizardy lizards did not really pay these developments much notice, as they were not very lizard like changes. They knew that they were lizards! They should do the lizardy things, like getting bigger or smaller or more terrible or more scaly! They put over eighty hours a week into it, they were passionate about it. The world was, for them, all about being more lizardly every day.

Some of the things that the decidedly non-lizardlike groups were coming up with did take a sort of root among those who called themselves the “lizard community”, but only to the extent that they could be seen as lizardy things. So ideas from the feather aerodynamics group became diluted, and were called “flight-oriented lizarding”. At the big gatherings of all the lizards, the FOL evangelists would show how they had made things that looked a bit like the feathers used for aerodynamics, but which were more lizardy. They had some benefit to lizards in that they slowed them down slightly as they fell out of trees. And, of course, as this was something that you had to be able to demonstrate expert lizardly competence in, they invented the idea of the master flight-oriented lizard.

All sorts of rules were invented to demonstrate competency and master-lizardliness in the flight-oriented world. This feather and that feather must each have a single responsibility: this for slowing the fall, that for turning. Feathers must be open for falling but closed for impact. Specific types of feathers could be invented, but only where they could be used in place of the more generic feathers. Feathers had to be designed so that they never got into the area around a lizard’s eyes (the in-the-face segregation principle). Despite the fact that flight-oriented lizards only used their feathers for falling out of trees, feathers had to be designed to work when travelling upwards too (the descendency inversion principle).

But to the expert lizards—the biggest, smallest, scaliest and most terrible lizards—something felt uncomfortable. It felt like people were saying that there was something else to do than being an expert lizard, as if lizardness wasn’t enough. So, of course, they arranged another meeting of all the lizards. Expert lizards and novice lizards and improving lizards all came together, that one day sixty-five million years ago, and they met in the town of Chicxulub. And the most expert of the expert lizards got up in front of all the lizards, and said this:

If you want to carry on at lizarding you have to really love it. You’ve got to want to put every waking moment into becoming a better lizard. You’ve got to look up after practising your lizarding, and be shocked at how much time has gone past. If that isn’t you, if you don’t absolutely love everything about lizarding, perhaps it’s time to move on and do something else.

Many of the expert lizards agreed with this idea, and were pleased with themselves. But many that had been trying other things, the fur or the flying or the warm blood, were confused: did they want to be lizards forever, and strive toward the best of lizardliness, or not? Did they perhaps want to explore the opportunities presented by warm blood, or flying, or fur?

And so it was that at Chicxulub, as a rock from outer space danced through the upper atmosphere, pushing and heating and ionising the air in front of it, people chose between the many paths open to them.

Posted in advancement of the self | Leave a comment

Messily Veneered C

A recap: we saw that Model-View-Controller started life as Thing-Model-View-Editor, a way of approaching problems to design Smalltalk user interfaces. As Smalltalk-80 drifted off from its ivory tower, many Smalltalkers were using and talking about MVC, although any kind of consensus on its meaning was limited by the lack of important documentation.

Would this telephone game continue to redefine Model-View-Controller as object-oriented programming grew beyond the Smalltalk-80 community?

Part 3: Objective-MVC

Objective-C is now best known as the programming language Apple promotes for developing iOS and Mac applications. It was first available on the Mac during the 1980s pre-Cambrian explosion of OOP languages: on the Mac alone you could send messages using Smalltalk-80, Object Pascal, Objective-C, ExperCommon Lisp, Allegro Common Lisp, Object Assembler, Object Logo, and there were probably more. Most of the list here was derived from Object-Oriented Programming for the Macintosh, written by Kurt J. Schmucker then of Productivity Products International.

Which brings us back to Objective-C, which is probably the best known of the products produced by Productivity Products. Objective-C did not let programmers use a spatial programming interface, it made them use linear text files.

Objective-C also did not follow the Model-View-Controller paradigm of Smalltalk-80. As described by Brad Cox in this figure adapted from Object-Oriented Programming: an Evolutionary Approach, the PPI programmers were thinking about Models and Views, and then about reusing Views in different UIs.

A layer architecture for Objective-C applications. The application level contains models; the presentation level contains interchangeable views; and the terminal level contains graphics primitives.

So, what happened to the Controllers? Here’s Cox:

This architecture is very similar to the one used in Smalltalk-80, with one exception. […] The outgoing leg of Smalltalk’s user interface is handled by a hierarchy of views much like the ones discussed here. But the incoming leg is implemented by a separate hierarchy of classes, Controllers, that provide the control for the application. […] The need for the separate controller hierarchy is unclear and is the topic of spirited debate even within the Smalltalk-80 community.

Unfortunately the citation for “spirited debate” is personal communication with various Smalltalkers, so we may never know the content.

This gives us three Objective-C modifications to the original Smalltalk-80 concept that persist to this day. One is the move from a user interface paradigm concerning the interactions of individual objects to a layered architecture. The Cox book shows the display screen as a projection of the view layer onto glass, and the models all in their own two-dimensional layer suspended from the views, hanging:

suspended from the presentation level by pointers in much the way that circuit boards are connected to test equipment with a bed-of-nails testing jig.

That still persists in Cocoa MVC, though with a somewhat weak assertion:

The collection of objects of a certain MVC type in an application is sometimes referred to as a layer—for example, model layer.

The second thing is the abstraction of the Presentation layer away from the display primitives.

The other thing that we still don’t have in ObjC is the Controller. Wait, what? Surely Objective-C applications have Controllers. Apple and NeXT both talked about it, and they have objects with the name “Controller” in them. Surely that’s MVC.

For whatever reason, NeXT kept the ObjC-style Views with their handling of both input and output, but also had to reintroduce the idea of a Controller. So they took the same approach followed by others, notably including Ivar Jacobson a few years later, of defining the Controller layer to contain whatever’s left once you’ve worked out what goes into the Model and View layers.

Now there’s only the question of placement. The Controller can’t go where Smalltalk’s Controllers went, because then it would get in the way of the input events which now need to get to the View. It could go between the two:

A controller object acts as an intermediary between one or more of an application’s view objects and one or more of its model objects. Controller objects are thus a conduit through which view objects learn about changes in model objects and vice versa. Controller objects can also perform setup and coordinating tasks for an application and manage the life cycles of other objects.

Here’s a reproduction of the diagram from that document:

An illustration of Cocoa's Model-View-Controller architecture. A view sends user actions to a controller, and receives updates from the controller. The controller sends updates to the model, and receives notifications from the model.

Notice that for the Controllers to act as a mediator between Models and Views, the Views actually have to forward action messages on to the Controllers (via delegation, target-action or the responder chain). Had the Controller been left where it was it would already be receiving those events.
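
As an illustration of that forwarding, here is a minimal Cocoa Touch sketch. The controller class and its action are invented for this example; the only real mechanism assumed is target-action via addTarget:action:forControlEvents:.

    #import <UIKit/UIKit.h>

    @interface GLNoteEditorViewController : UIViewController
    @end

    @implementation GLNoteEditorViewController

    - (void)viewDidLoad
    {
        [super viewDidLoad];
        UIButton *saveButton = [UIButton buttonWithType:UIButtonTypeSystem];
        // In Smalltalk-80 MVC a Controller would receive this input itself;
        // in Cocoa the view receives the touch and forwards the action on.
        [saveButton addTarget:self
                       action:@selector(save:)
             forControlEvents:UIControlEventTouchUpInside];
        [self.view addSubview:saveButton];
    }

    - (void)save:(id)sender
    {
        // The controller mediates: push the edited state into the model layer,
        // then tell the view about any resulting changes.
    }

    @end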

Compare that with the diagram derived from Smalltalk MVC:

A controller receives events from a person, and passes them to a view. A model has bi-directional synchronization with both the controller and the view. The view updates the screen.

It’s clear that these are different concepts that unfortunately share a name.

Posted in MVC, OOP | Comments Off on Messily Veneered C

The Objective-C protocol naming trifecta

Objective-C protocol names throughout history seem to fall into three distinct conventions:

  • some are named after what a conforming object provides. Thus we have DBProperties, DBEntities, DBTypes and the like in Database Kit.
  • others are named after what the object is doing. Thus we have NSCoding, NSLocking, IXCursorPositioning etc.
  • still others are named after what the conforming object is. NSTableViewDataSource, NSDraggingInfo etc.

I think I prefer the second one. It emphasises the dynamic message-sending nature of the language: “because you’re sending messages from this protocol, I’ll act like this”.
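
To see the three conventions side by side, here are hypothetical declarations in each style. All of these names are invented for illustration; only the historical examples listed above are real.

    #import <Foundation/Foundation.h>

    // 1. Named after what a conforming object provides (DBProperties style).
    @protocol GLProperties <NSObject>
    - (NSDictionary *)propertyValues;
    @end

    // 2. Named after what the object is doing (NSCoding/NSLocking style).
    @protocol GLCaching <NSObject>
    - (void)cacheObject:(id)object forKey:(NSString *)key;
    - (id)cachedObjectForKey:(NSString *)key;
    @end

    // 3. Named after what the conforming object is (NSTableViewDataSource style).
    @protocol GLDownloadDelegate <NSObject>
    - (void)downloadDidFinish:(id)download;
    @end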

Posted in OOP | Comments Off on The Objective-C protocol naming trifecta

ClassBrowser’s public face

I made a couple of things:

I should’ve done both of these things at the beginning of the project. I believe that the fact I opened the source really early, when it barely did one thing and then only on my machine™, was a good thing. It let people find out about the project, have a look, and even send changes. But not giving anywhere to discuss ClassBrowser was a mistake. It meant I answered questions in private messages that would’ve been better archived, and it probably turned some people away who couldn’t immediately see what the point was.

Hopefully it’s not too late to fix that.

Posted in Responsibility, software-engineering | Comments Off on ClassBrowser’s public face

Replacing the language

Over the last few years, people have used the ObjC frameworks from TCL, Python, Perl and WebScript, Perl again, Perl, more Python, Ruby, Ruby, Ruby, Ruby, Java, Java, AppleScript, Smalltalk, C++, Pascal, Object Pascal, CLIPS, Common LISP, Nu, Eero, Modula-2, JavaScript, Erlang, Lua, OCaml, C#, F#, and indeed any unknown language.

It’d be nice if we had an alternative to ObjC, wouldn’t it?

Posted in code-level | Comments Off on Replacing the language

Laggards don’t buy apps: devil’s advocate edition

Silky-voiced star of podcasts and all-round nice developer person Brent Simmons just published a pair of articles on dropping support for older OS releases. His argument is reasonable, and is based on a number of axioms including this one:

  • People who don’t upgrade their OS are also the kind of people who don’t buy apps.

Sounds sensible. But here’s another take that also sounds sensible, in my opinion.

  • People who don’t upgrade their OS are also the kind of people who don’t like having to computer instead of getting their stuff done.

Let’s explore a world in which that axiom is true. I’m not saying it is true, nor that Brent’s is false: nor am I saying that his is true and that mine is false. I’m saying that there’s an open question, and we can investigate multiple options (or maybe even try to find some data).

Developers are at the extreme end of a range of behaviours, which can be measured in a single dimension that’s glibly called “extent to which individual is willing to mess about with a computer and consider the time spent valuable”. The “only upgraders buy apps” axiom can be seen as an extension of the idea that all changes to a computer fit onto the high end of that dimension: if you’re willing to buy an app, then you’re willing to computer. If you’re willing to computer, then you’re willing to upgrade. Therefore anyone who wants to sell an app is by definition selling to upgraders, so you can reduce costs by targeting the latest upgrade.

Before exploring the other option, allow me to wander off into an anecdote. For a few years between about 2004 and 2011 I was active in my (then) local Mac User Group, which included chairing it for a year. There were plenty of people there who were nearer the middle of the “willingness to mess with a computer” spectrum, who considered messing with upgrades and configuration a waste of time and often a way to introduce unwanted risk of data loss, but nonetheless were keen to learn about new ways to use their computers more efficiently. To stop computering, and start working.

Many of these people were, it is true, on older computers. The most extreme example was a member who to this day uses a Powerbook G3 Pismo and a Newton MessagePad 2100. He could do everything he needed with those two computers. But that didn’t stop him from wanting to do it with less computer, from wanting to optimise his workflow, to find the latest tips and tricks available and decide whether they got him where he was going more efficiently.

As I said, that example was extreme. There were plenty of other people who only bought new computers every five years or more, but were still on the latest versions of apps like Photoshop, QuarkXPress, or iWork where they could be, and whenever new ones got released the meeting topic would be to dig into the new version (some brave soul would have it on day one) to see whether they could do things better, or with less effort.

These people were paying big money for big software. Not because it was the newest, or because they had to be on “latest and [as we like to claim] greatest”, but because it was better suited to their needs. It gave them a better experience. So, bearing in mind that this is a straw-man for exploration purposes, let me introduce the hypothesis that defends the straw-man axiom presented above:

  • A delightful user experience means not making people mess about with computer stuff just to use their computers.
  • There are plenty of people out there who would rather get something that lets them work more effectively than waste time on upgrades.
  • To those people, spending money on software that gets them where they are going is a better investment than any amount of time and anxiety spent on messing with settings including operating system upgrades.
  • These people represent the middle of the spectrum: not the extreme low end where you never change anything once you’ve bought the computer; nor the extreme high end where fiddling with settings and applying upgrades is considered entertainment.
  • Therefore, the low price of “latest and greatest” software reflects at least in part the externalisation, onto mid-range tinkerers, of the (time-based) costs and risks associated with upgrading.
  • Because of this, while the incremental number of users associated with supporting earlier OS versions may not be great, the incremental value per user may be much higher than gaining users with low-price apps on the current operating systems.

As I say, interesting food for thought, but not necessarily any more (or less) true than the view presented in Brent’s posts. Please have your pinch of salt ready, and don’t bet your business on the thoughts of this idle blogger.

Posted in Business, Responsibility | Comments Off on Laggards don’t buy apps: devil’s advocate edition