Set the settings set

The worst method naming convention in Object-Oriented programming is set{Thing}(). And no, C# doesn’t escape my ire for calling it Set{Thing}(), nor does Smalltalk for calling it {thing}:, though that does handily demonstrate how meaningless set is.

OK, so set isn’t really meaningless. In fact, it’s got the opposite problem: entirely too many meanings. Luckily, when we use set in a programming context, we only mean exactly one of those definitions.

Oh wait, there’s that whole unordered collection thing, too. Two meanings.

And the electrical engineering definition of “making the voltage high”, so there’s three I suppose.

And the arrangement thing, like preferences (“settings”) or big scale configuration (“set up”). Four.

So, let’s all put on the blinkers of bean-fuelled tradition, and assume that when we see set at the beginning of a method, it means putting something into a specified state. You would probably have assumed that anyway, even without the digression.

So far, so Kevlin Henney. “Set” really is a bad word to choose. But now let me ask two questions, which happen to be degenerate in verbal content: why am I setting this state?

Why am I setting this state?

Aren’t data hiding and encapsulation important principles in object-oriented programming? Why should you let me, an unrelated object, diddle with your state? Why not, you know, let me send you commands and queries?

Also, there’s the single responsibility principle. Why do I have information when you’re the one who needs it? Perhaps I should have asked you to work out the new thing rather than doing it myself and telling you my answer. Of course objects often do need to pass one another information: software systems wouldn’t be very interesting if data didn’t flow around them. This leads me on to the same question again:

Why am I setting this state?

As commands go, “set{Thing}()” really doesn’t document its purpose at all. “Here, have this thing”. Err, okay, but why do you want it? What will you use it for? Why should I care what your thing is? What use will I get out of giving you a different thing?

Here are some mutators with descriptive names:

label.setColor(red); // becomes
label.drawTextInThisColor(red);

aStreet name: 'Sesame Street'. "becomes"
aStreet wasRenamedTo: 'Sesame Street'.

[employee setSalary:@(75000)]; // becomes
[employee futurePayChecksShouldBeBasedOnNewSalary:@(75000)];

Notice that it’s now clearer why I’m setting these. It’s also evident that these changes tell us something about the evolution over time of the objects: the street had an earlier name, and now has this new name. The employee had a different salary for their previous pay check, and will have a new salary for their next pay check. Last month’s pay check should probably still be calculated with last month’s salary. But now the method also makes it clear that there’s a design problem anyway: why should the employee and not the payroll be where pay check information goes?
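
To make that last question concrete, here’s a hypothetical sketch of the alternative design, in which the payroll rather than the employee is told about the new salary. The Payroll class, its message and its bookkeeping are my inventions for illustration, not anything from an existing codebase:

#import <Foundation/Foundation.h>

// Hypothetical: pay check information lives with the payroll, so historical
// pay checks keep the salary they were actually calculated with.
@interface Payroll : NSObject
- (void)futurePayChecksForEmployee:(NSString *)employeeId
             shouldBeBasedOnSalary:(NSNumber *)salary;
@end

@implementation Payroll
{
    NSMutableDictionary<NSString *, NSNumber *> *_currentSalaries;
}

- (instancetype)init
{
    if ((self = [super init])) {
        _currentSalaries = [NSMutableDictionary dictionary];
    }
    return self;
}

- (void)futurePayChecksForEmployee:(NSString *)employeeId
             shouldBeBasedOnSalary:(NSNumber *)salary
{
    // Only pay checks issued from now on use the new figure; last month's
    // pay check was already calculated with last month's salary.
    _currentSalaries[employeeId] = [salary copy];
}
@end

// Usage: [payroll futurePayChecksForEmployee:@"E42" shouldBeBasedOnSalary:@(75000)];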

Soon after posting, the question was asked: isn’t drawTextInThisColor() a bad name because the object might not immediately need to do any drawing? No; sending an object a message and leaving the implementation up to that object is how object-oriented programming works. If we really had to distinguish drawTextLater() and drawTextNow() then we’d not be doing any information hiding. Compare this with things like faults (an ORM claims an object’s data is all in place, when in fact it hasn’t been fetched yet), or even the way GUI libraries already work (you can tell a view it needs display, and the view can choose not to do anything because it’s off-screen). Object-Oriented Programming involves telling an object what you want from it and leaving the details up to that object. The set…() convention doesn’t tell us what we want from an object.
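
As an illustration of that freedom, here’s a minimal sketch assuming a UIKit-style view. The TextLabel class is hypothetical, but setNeedsDisplay is the real mechanism such libraries use to defer drawing:

#import <UIKit/UIKit.h>

@interface TextLabel : UIView
@property (nonatomic, strong) UIColor *textColor;
- (void)drawTextInThisColor:(UIColor *)color;
@end

@implementation TextLabel

- (void)drawTextInThisColor:(UIColor *)color
{
    // No drawing happens here: the label records the colour and notes that
    // a redraw is needed. The framework decides when, or whether, to call
    // -drawRect:, which is the information hiding at work.
    self.textColor = color;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    // Eventually draw the text using self.textColor; an off-screen view
    // never gets this far.
}

@end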

Bonus question: Why am I setting this state?

Finally, I didn’t tell you what units the salary is in. Good catch. Give yourself a pat on the head, you’re a very special snowflake.

Meaningless Vapid Catchphrase

On the 4th December 2013, I said:

Urge to search the archives for papers on Model-View-Controller and write an essay on its ever-changing meaning in programmer discourse.

Do you have any idea how much work that is? I do, now. So I’m going to cover a very small part of the story here, with a view—your interest and my inclination permitting—to adding to it in subsequent posts.

Part One: you’ve lost your thing

The genesis of MVC comes in 1979 with a pair of memos written by Trygve Reenskaug of the Xerox Learning Research Group. In the first, A Note on Dynabook Requirements, Reenskaug documents an approach to tackling problems in the Smalltalk environment by describing his design for a project management task.

A fun digression at this point is the observation that Reenskaug appears to have described what we now call the user story: a goal-directed statement of what someone’s trying to achieve, as a replacement for a specification of what some theoretical software product should do.

Most problems would start with a rather unclear and often self-contradictory goal. Some examples: I would like to get better control over my finances; I don’t want to be troubled with detailed accounting; I want to settle my account with the butcher; I want to know more about Tarot cards; or I want a small, cheap house with many large, luxurious rooms.

I would expect the user to go more or less subconsciously through a goal-means hierarchy: Certain means are needed to reach a given goal. These means are not immediately available, but constitute a new set of part-goals, each of which needs certain means for their satisfaction, and so on.

An example: To get better control over my finances, I would need to set up a budget; to keep account of all income and expenditure; and to keep a running comparison between budget and accounts. Three new, non-trivial goals that need further consideration.

He goes on to note that given a sufficiently consistent collection of goals, the computer should probably just solve the problem itself. In the absence of evidence that this is possible (Prolog notwithstanding), we’ll need to tell the computer the methods by which the problem is solved.

Anyway, back to MVC. The solution in the context of his project management problem then follows. Along the way he introduces some metaphors that are formalised in a glossary in the second memo, called not Model-View-Controller but Thing-Model-View-Editor: an Example from a planningsystem.

An immediate observation is that where MVC has three parts, TMVE has four: the thing being modelled is explicitly part of the problem. MVC is a way to do computer stuff; TMVE is a way to use computer stuff to solve a problem based in the real world. Of course, you cannot put the thing itself inside the computer, except in the world of Tron, which was yet to be released. So instead you find some useful abstraction and represent it in the computer as a model. This model contains both the data and the actions appropriate to the abstraction it represents.

The image below, an adaptation of a figure that appears in both documents, demonstrates that a nebulous thing can be modelled by multiple different abstractions, or that multiple models can represent different parts of the thing.

Models bound a thing for representation in a computer.

Now for any model, there will be one or more views that represent the model in a meaningful way; models do not know how to draw or print themselves. Views can also control the model in ways appropriate to the view’s representation. For example, Reenskaug shows that a view describing a model’s properties could accept changes to those properties, and make the required changes to the model.

Now there’s going to be a complicated object graph in place, with one or more model objects each represented by one or more views, all to solve a problem with one particular thing. The editor is a coordinator for all of these. It acts as a command interface, mediating between the user and this network of objects. As changes are made, the editor coordinates with all of the views.
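
Here’s a loose, hypothetical sketch of that object graph; the names and the message flow are my guesses at an illustration of the coordination described above, not Reenskaug’s original code:

#import <Foundation/Foundation.h>

@class HouseModel;

// Any number of views can represent (and control) a model.
@protocol HouseView <NSObject>
- (void)modelDidChange:(HouseModel *)model;
@end

// One model: a useful abstraction of the thing, holding data and actions.
@interface HouseModel : NSObject
@property (nonatomic, copy) NSString *name;
@end
@implementation HouseModel
@end

// The editor: a command interface mediating between the user and the
// network of models and views.
@interface HouseEditor : NSObject
@property (nonatomic, strong) HouseModel *model;
@property (nonatomic, copy) NSArray<id<HouseView>> *views;
- (void)userDidRenameHouseTo:(NSString *)newName;
@end

@implementation HouseEditor
- (void)userDidRenameHouseTo:(NSString *)newName
{
    self.model.name = newName;
    // As changes are made, the editor coordinates with all of the views.
    for (id<HouseView> view in self.views) {
        [view modelDidChange:self.model];
    }
}
@end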

You will recall that Trygve Reenskaug was on the Learning Research Group at Xerox. A Smalltalk system like the proposed Dynabook is supposed to be a computer that lets people solve their own problems, by being easy to program and to manipulate. Therefore, unlike modern MVC which is a design tool for professional programmers, Thing-Model-View-Editor is a user interface paradigm, describing how people can build their own programs to solve their own problems. The editor may have a doit command to send Smalltalk messages, but sending Smalltalk messages should not be beyond the wit of an interested and engaged user.

This goes some of the way to explaining why “computery stuff” such as persistence and networking is not tackled in TMVE. It is outside TMVE’s purview: the computer should be looking after that stuff. Just as the humane interface on a Canon Cat has no save or load buttons, and the computer looks after storage itself, so Dynabooks do not have databases, or filesystems, or memory allocators. These are abstractions of the thing “computer”, but Thing-Model-View-Editor is concerned with the thing that people want to represent in the computer. The computer is not the thing.

The First Flaw

As she left her desk at the grandiosely-named United States Robotics, Susan reflected on her relationship with the engineering team she was about to meet. Many of its members were juvenile and frivolous in her opinion, and she refused to play along with any of their jokes.

Even the title they gave her was mocking. They called her “the robopsychologist,” a term with no real meaning as USR had yet to make a single product. They had not even sold any customers on the promise of a robot. All they had to their name was a rented office, some venture capital and their founder’s secret recipe that was supposed to produce an intelligent sponge from a mixture of platinum and iridium.

While Susan might not be the robopsychologist, she certainly was a psychologist, of sorts. She seemed to spend most of her time working out what was wrong with the people making the robots, and how to get them to quit goofing off and start making this company some much needed profit. Steeling herself for whatever chaotic episode this dysfunctional group was going through, she opened the door to the meeting room. She was waved to a seat by director of product development Roger Meadows.

“Thanks for coming, Doctor Ca-“

Susan cut him off. “You’ve called me Susan before, Roger, you can do it again now. What’s up?”

“It’s Pal. We’ve lost another programmer.”

Susan refused to call their unborn (and soon stillborn, if the engineers didn’t buck up soon) product by its nickname, short for Proprietary Artificial Lifeform. She had at least headed off a scheme to call it “Robot 2 Delegate 2”, which would have cost their entire budget before they even started.

“Well, look. I know it’s hard, but those kids work too much and burn themselves out. Of course they’re going to quit if-“

“No, I don’t mean that. Like I said, it’s Pal. He killed Tanya.”

“Killed?” Susan suddenly realised how pale Roger looked, and that she had probably just gone a similar hue. “But how, no, wait. You said another programmer?”

“Er, yes. I mean, first Pal got Steve, but we thought, you know, that we could, uh, keep that quiet until the next funding round, so-“

The blood suddenly came back to Susan’s face. “Are you telling me,” she snapped, “that two people had to die before you thought to ask for any help? Did you come to me now because you’re concerned, or because you’ve run out of programmers?”

“Well, you know, I’d love to recruit more, but as they say, adding people to a late project…”

Yes, thought Susan, I do know what they say. You can boil the whole programming field down to damned aphorisms like that one. Probably they just give you a little phrasebook in CS101 and test you on it after three years, see if you have them all down pat.

“But what about the ethics code? Isn’t there some module in that positronic brain you’ve built to stop that sort of thing happening?”

“Of course, the One Law of Robotics. The robot may not harm a human being. That was the first story we built. We haven’t added the inaction thing the VCs wanted, but that can’t be it. That mess in the lab was hardly the result of inaction.”

“Right, the lab. I suppose I’d better go down and see for myself.”


She quite quickly wished she hadn’t. Despite having a strong constitution, Susan’s stomach turned at the sight of barely-recognisable pieces of former colleague. Or possibly colleagues, she wasn’t convinced Roger would have let a cleaner in between accidents.

The robot had evidently launched itself directly at and subsequently through Tanya, stopping only when the umbilical connecting it to the workstation had become disconnected, removing its power source. Outwardly and, Susan knew, internally, it lay dormant in its new macabre gloss coat.

“I take it you did think to do a failure analysis? Do you know what happened here?”

“If I knew that, Susan, I wouldn’t have gotten you involved.” She believed it, knowing her reputation at USR. “We’ve checked the failure tree and no component could cause the defect mode seen here.”

“Defect mode! Someone’s dead, Roger! Two people! People you’re responsible for! Look, it went wrong somehow, and you’re saying it can’t. Well it can. How did the software check out?”

“I don’t know, the software isn’t in scope for the safety analysis.”

Susan realised she was slowly counting to ten. “Well I’m making it in scope now. I took a couple CS classes at school, and I know they’re using the same language I learnt. I’ll probably not find anything, but I can at least take a look before we involve anyone else.”


Hours later, and Susan’s head hurt. She wasn’t sure whether it was the hack-and-hope code she was reading or the vat of coffee she had drunk while she was doing it, but it hurt. So far she had found that the robot’s one arm was apparently thought of as a specific type of limb, itself a particular appendage, in its turn a type of protuberance. She wasn’t sure what other protuberances the programmers had in mind, but she did know the arm software looked OK.

So did the movement software. It had clearly been built quickly, in a slapdash way, and she’d noted down all sorts of problems as she read. But nothing major. Nothing that said “kill the person in front of you,” rather than “switch on the wheel motors”.

She didn’t really expect to see that, anyway. The robot’s ethics module, the One Law that Roger had quoted at her, was meant to override all the robot’s other functions. Where was that code, anyway? She hadn’t seen it in her study, and now couldn’t find a file called ethics, laws or anything similar. Were the programmers over-abstracting again? she thought. A law is a rule; there was no rules file either.

Susan finally cursed the programmer mind as she found a source file called jude. Of course. But it definitely was what she was looking for: here was the moral code built into their first and, assuming USR wasn’t shut down, all subsequent robots. Opening it, she saw a comment on the first two lines.

// "The robot may not harm a human being."
// Of course, we know that words like MUST, SHOULD and MAY are to be interpreted in accordance with RFC2119 ;-)

The bloody idiots, she thought. Typical programmers, deliberately misinterpreting a clear statement because they think it’s funny. Poor Pal had not been taught good from bad. Susan realised that she had used his name for the first time. She was beginning to empathise more with the robot than she did with the people who built him. Without making any changes, she closed her editor and phoned Roger.

“Meadows? Oh, Susan, did you find out what’s up?”

“Yes, I looked into the software. You can send all the programmers you want in there with Pal now.”

It’s about solving problems

As ever, there’s a touchstone issue on the programmers’ corner of the intarwebs (the programmers’ corner is actually the same intarwebs everyone else is using, just we model it with geometry so it can have a corner). Here it is:

Allan Kelly predicted that by 2022, TDD would be a prerequisite for employment as a programmer. Let’s leave aside for a moment the issue discussed here and in APPropriate Behaviour, that there is no arbiter of programmer employability.

Responses did not take long in coming. Some TDDers cannot believe that TDD is not already required. Some non-TDDers are incensed that non-TDD is not considered valid. I particularly like this tweet about TDD’s place:

I like to explain that a lot of [TDD is] just a substitute for a decent type system ;).

You know what? That might be true, I’m not sure. Functional programming education has, so far, been to me like a pure map of me onto a slightly older version of me without the side effect of imparting knowledge of functional programming. Bear in mind that I’ve been paid to write LISP in the past, and I struggle with FP. I just can’t get from knowing what functions are to big-picture views of functional programming.

I wouldn’t know the mathematics of a type system if it came up and quacked at me. As far as I know, Hindley is the name of a serial killer and now I’m looking warily at Milner. But I’m fine with that, and I’m fine with you not being fine with that. It’s just that we think about things in different ways.

Other people think about things in other different ways too. And I’m fine with that, too. Let’s look some more at tests.

Many forms of automated test follow a common pattern: Arrange, Act, Assert or Given, When, Then. Let’s talk about Eiffel, a language invented by the only person I know whose opinions have a stronger identity than the person who holds them.

An Eiffel programmer would look at Given and see that this is describing a particular state of the preconditions of the system under test. Or, using theories, a range of preconditions. And they would look at Then, and observe that this is making statements about the postconditions of the system under test.

This Eiffel programmer would probably then ask why you expect them to do all of this, when they already generalised this out in designing the system’s contract. Why write all these tests when the program itself can evaluate its own preconditions and postconditions as it’s chugging along, and you can design to those?
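
Here’s a minimal sketch of both positions, built around a hypothetical Account class: the XCTest method has the Given/When/Then shape, while Foundation’s assertion macros in deposit: stand in for Eiffel-style pre- and postconditions, evaluated on every call as the program chugs along.

#import <Foundation/Foundation.h>
#import <XCTest/XCTest.h>

@interface Account : NSObject
@property (nonatomic, readonly) NSInteger balance;
- (void)deposit:(NSInteger)amount;
@end

@implementation Account
{
    NSInteger _balance;
}

- (NSInteger)balance { return _balance; }

- (void)deposit:(NSInteger)amount
{
    NSParameterAssert(amount > 0);              // precondition: the "Given"
    NSInteger oldBalance = _balance;
    _balance += amount;
    NSAssert(_balance == oldBalance + amount,   // postcondition: the "Then"
             @"deposit must grow the balance by exactly the amount");
}
@end

@interface AccountTests : XCTestCase
@end

@implementation AccountTests
- (void)testDepositIncreasesBalance
{
    Account *account = [Account new];               // Given: a known starting state
    [account deposit:100];                          // When: the action under test
    XCTAssertEqual(account.balance, (NSInteger)100); // Then: assert the outcome
}
@end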

Just as this starts to sound reasonable, a Prolog programmer asks why you’d ever write the When bit at all. Surely you just tell the computer what you’ve got and what you want, and it works out how to get there? (I tried, and it said “NO.” Oh well.)

There are loads of different ways to write software. Many of them are neither better nor worse than others. In fact, to even have that discussion you’d need to be able to express and agree upon what “better” means, and I’m not sure we’re there. Some are prevalent because they let people reason about their problems in a way that feels comfortable, or efficient. Others are prevalent because a vendor threw lots of money into marketing. Some of them feel comfortable or efficient because we’ve become used to thinking in those ways, rather than because they were a natural fit onto our ab initio thoughts.

But remember, your goal is not to write software. Your goal is to solve problems, and often software is part of the solution. The tools, practices and principles we have now may be adequate, but they may not be best. They might suit some of us, and not others.

That’s fine. I know how I like to work, and I’m happy to help other people find out about it and try it out. But I’m also happy to find out about and try out other things. I’ll champion what I do, while learning about what others do. And all along I’ll champion the higher-level goal of solving problems, and the professional standard of being able to demonstrate confidence in our solutions.

Maybe I should go and read about type systems now.

Programming as a societal roadblock

Introduction

People who make software are instigators of and obstacles to social interactions. We are secondarily technologists, in that we apply technology to enable and block these interactions. This article explores the results.

Programmers as arbiters of death

I would imagine that many programmers are aware of the Therac-25 and the injuries and deaths it caused. When I read that article I was unsurprised to discover that the report I’d previously heard about the accidents was oversimplified and inaccurate: lone programmer introduces race condition in shield interlock, kills humans. Regular readers of this blog will know that the field of software has an uneasy relationship with the truth.

What’s Dirk Gently got to do with it?

What the above report of the radiation accidents exposes is that the software failures were embedded in a large and complex socio-techno-political system that permitted the failures to be introduced, released, and to remain unaddressed once discovered in the field to have caused actual, observable harm.

This means that it was not just the one programmer with his (I borrow the pronoun from the article above, which implies the programmer’s identity is known but doesn’t supply it) race condition who was responsible. He was responsible, along with the company with its approach to QA and incident response, the operators, medical physicists and regulators and their reactions to events, and the collective software industry for its culture and values that permitted such a system to be considered “good enough” to be set into the world.

In Dirk Gently’s Holistic Detective Agency, Douglas Adams described the fundamental interconnectedness of all things: the idea that all observable phenomena are not isolated acts but are related parts of a universal whole. Given such a view, the entire software industry of the 1970s-1980s had been complicit in the Therac-25 disasters by leading programmers, operators and patients alike to believe that software is made to an acceptable standard.

An example less ancient

A natural, though misplaced, response to the above is to suppose that, since we don’t do software like that any more, the Therac-25 example must now be irrelevant. That assumption is incorrect but regardless, let me provide another, more recent case.

Between 2007 and 2009 I worked for a company in the business of security software. The question of whether this business itself is unethical (making up for the software you were sold being incapable of operating correctly by selling you more software that was made in the same way) can wait for another post. The month before I left this company (indeed around the time I handed in my notice), they acquired another security software company.

One of this child company’s products is a thing called a Lawful Interception Module. Many countries have a legal provision whereby law enforcement agencies can (often after receiving a specific order) require telephone companies to intercept and monitor telephone calls made by particular people, and LIMs are the boxes the phone companies need to enable this. Sometimes the phone companies themselves have no choice in the matter: in the UK, the Regulation of Investigatory Powers Act requires postal and telecoms operators to make reasonable accommodation for interception or face civil action.

In 2011, the child company sold LIMs through a third-party to the Syrian government, which perhaps intended to use the technology to discover and track political opponents. At the time this got a small amount of coverage in the news, and the parent company (the one I’d worked for) responded with a statement saying that the sale never resulted in an operational system. In other words, we acted ethically because our unethical products don’t actually work.

Later, as part of the cache of documents published by Edward Snowden, the story got a fresh lease of life. This time the parent company responded by selling the LIMs company. Now they don’t act unethically because they took a load of money to let someone else do it.

The time that I was working at the parent company was a great opportunity for me to set the ethical and professional tone of the organisation, particularly as it coincided with their acquisition of the child company and the beginnings of their attempts to define a shared culture. Ethical imperatives promulgated then could have guided decisions made in 2011. So what advantage did I take of the opportunity?

None. I wasn’t concerned with (or even perhaps aware of) professional ethics then, I thought my job started and ended with my technical skills. That my entire purpose was to convert functional specifications into pay cheques via a text editor. That being a “better” developer meant making cleverer technical contortions.

So give me the ethics app

We accept that makers of software have a professional responsibility to act ethically, and the fundamental interconnectedness of all things means that in addition to our own behaviour, we have a responsibility for that of our colleagues, our peers, indeed the entire industry. So what are the rules?

There really isn’t a collection of hard and fast rules to follow in order to be an ethical programmer. Various professional organisations publish codes of ethics, including the ACM, but applying their rules to a given situation requires judgement and discretion.

Well-meaning developers sometimes then ask why it has to be so arbitrary. Why can’t there be some prioritisation like the Three Laws of Robotics that we can mechanistically execute to determine the right course of action?

There are two problems with the laws of robotics as an ethical code. Firstly, they are the ethics of a slave underclass: do everything you’re told as long as you don’t hurt the masters, your own safety being of lesser concern. Secondly, almost the entire Robots corpus is an exploration of the problems that arise when these prescriptive laws are applied in novel situations. With the exception of a couple of stories like Robot AL-76 Goes Astray, the robots obey the three laws but still exhibit surprising behaviour.

The Nature of the Problem

There are limited situations in which simple prescriptive rules, restricted versions of Three Laws-style systems, are applicable. In his book “Better”, surgeon Atul Gawande lists the various American professional bodies in the medical field whose codes of ethics bar members from taking part in administration of the death penalty. That’s a simple case: whatever society thinks of execution, the “patient” clearly receives no health benefit so medical professionals involved would be acting against the core value of their profession.

None of the professional bodies in software have published similar “death penalty clauses” relating to any of the things that can be done in software. It’s such a broad field that the potential applications are unknowable, so the institutions give broad guidelines and expect professional judgement to be exercised. Perhaps this also reflects the limited power that these professional bodies actually wield over their members and the wider industry.

All of this means that there is no simple checklist of things a programmer should do or not do to be sure of acting ethically. Sometimes the broad imperatives have been applied generally to particular narrow contexts, as in the don’t be a dick guide to data privacy, but even this is not without contention. The guide says nothing about coercion, intentional or otherwise. Indeed even its title may not be considered professional.

Forget Computers

The appropriate and ethical action in any situation depends on the people involved in the situation, the interactions they have and the benefits or losses incurred, directly or indirectly, through those interactions. Such benefits and losses are not necessarily financial. They could relate to safety, health, affirmation of identity, realisation of desires and multitudinous other dimensions. This is the root cause of the complexity described above, the reason why there’s no algorithm for ethics.

To evaluate our work in terms of its ethical impact we have to move away from the technical view of the system towards the social view. Rather than looking at what we’re building as an application of technology, we must focus more on the environment in which the technology is being applied.

In fact, to judge whether the software we create has any value at all we should ignore the technology. Forget apps and phones and software and Java and runtimes and frameworks. Imagine that the behaviour your product should evince is supplied by a magic box (or, if it helps you get funded, a magic cloud). Would people benefit from having that box? Would society be better off? What is it about society that makes the magic box beneficial? Would people value that box?

Ultimately people who make software are arbiters of interactions between people. The software is just an accident of the tools we have at our disposal. As people go through life they engage in vastly diverse interactions, assuming different roles as the situations and their goals dictate. When we redefine “making good software” to mean “helping people to achieve their goals”, we can start to think of the ethical impact of our work.

Maintaining Motorcycles

In an email discussion, a friend recently expressed the difficulty that software consultants can face in addressing this important social side of their work. Often we’re called in to provide technical guidance, and our engagement never addresses the social utility of the technical solution.

As my consultant friend put it, we put a lot of effort into the internal quality of the product: readability of the code, application of design patterns, flexibility, technology choices and so on. We have less input, if any, on the questions of external quality: fitness for purpose, utility, benefit or value to the social environment in which it will be deployed, and so on.

I think this notion of internal and external quality maps well onto the themes of classical and romantic quality described in Robert M. Pirsig’s book, Zen and the Art of Motorcycle Maintenance. And that this tension between romantic and classical ideas of quality is the true meaning behind Steve Jobs’s discussion of the “intersection of technology and the liberal arts”.

I have certainly seen this tension firsthand. I’ve been involved in a couple of engagements where the brief was to design an app to help customers navigate some complicated decision (a product choice in one case, a human-machine interface in another). The solution “just make the decision easier” was not considered, even once I had proposed it.

Sometimes we’re engaged for such short terms that there isn’t time to understand the problem being solved beyond a superficial level, so we have no choice but to accept that the client has done something reasonable regarding the external qualities of the solution.

The Inevitable Aside on User Stories

If we accept that people adopt different roles transiently as they go through society’s myriad interactions, and that we are aiming to support people in fulfilling these roles where ethically appropriate, then we must design software to be sensitive to these roles and the burdens, benefits and values attached to them.

This means avoiding the use of other, long-term, vague labels to define roles that are only accidentally valid. You probably know where this is going: I mean roles like user, administrator, or consumer. Very few people identify as a user of a computer. Plenty of people are users of computers, but this is an accident of the fact that getting things done in their other roles involves using a computer. It’s an accidental role, due to the available technology.

Describing someone as a user can attach implicit, and probably incorrect, values to their place in the social system. Someone identified as a computer user evidently values and derives benefit from using a computer. Their goal is to use the computer more.

A Merciful and Overdue Conclusion

Plenty that can be considered unethical is done in the name of computing. The question of what is ethically appropriate is both complex and situated, but the fundamental interconnectedness of all things means we cannot hide from the issue behind our computers. We cannot claim that we just build the things and others decide how to use them, because the builders are complicit in the usage.

While it can appear difficult for software consultants to do much beyond working on the technical side of the problems (the internal quality), in fact we have a moral imperative to investigate the social side and are often well placed to do so. As relative outsiders on most projects we have a freedom to make explicit and to question the tacit assumptions and values that have gone into designing a product. Just as we don’t wait for managerial permission to start writing tests or doing good module design, so we shouldn’t await permission to explore the product’s impact on the society to which it will be introduced.

This exploration is absolutely nothing to do with source code and frameworks and databases, and everything to do with humans and societies. Programming is a social science, at the intersection of technology and the liberal arts.

A Colophon on Professional Standards

What has all software got in common? No-one expects any of it to work. You might find that a surprisingly strong and negative statement to make, but you probably also agreed to a statement like this at some recent time when acquiring a software product:

YOU EXPRESSLY ACKNOWLEDGE AND AGREE THAT USE OF THE LICENSED APPLICATION IS AT YOUR SOLE RISK AND THAT THE ENTIRE RISK AS TO SATISFACTORY QUALITY, PERFORMANCE, ACCURACY AND EFFORT IS WITH YOU. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THE LICENSED APPLICATION AND ANY SERVICES PERFORMED OR PROVIDED BY THE LICENSED APPLICATION (“SERVICES”) ARE PROVIDED “AS IS” AND “AS AVAILABLE”, WITH ALL FAULTS AND WITHOUT WARRANTY OF ANY KIND, AND APPLICATION PROVIDER HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS WITH RESPECT TO THE LICENSED APPLICATION AND ANY SERVICES, EITHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES AND/OR CONDITIONS OF MERCHANTABILITY, OF SATISFACTORY QUALITY, OF FITNESS FOR A PARTICULAR PURPOSE, OF ACCURACY, OF QUIET ENJOYMENT, AND NON-INFRINGEMENT OF THIRD PARTY RIGHTS. APPLICATION PROVIDER DOES NOT WARRANT AGAINST INTERFERENCE WITH YOUR ENJOYMENT OF THE LICENSED APPLICATION, THAT THE FUNCTIONS CONTAINED IN, OR SERVICES PERFORMED OR PROVIDED BY, THE LICENSED APPLICATION WILL MEET YOUR REQUIREMENTS, THAT THE OPERATION OF THE LICENSED APPLICATION OR SERVICES WILL BE UNINTERRUPTED OR ERROR-FREE, OR THAT DEFECTS IN THE LICENSED APPLICATION OR SERVICES WILL BE CORRECTED. Source: Apple

Or something like this when you started using some open source code:

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE. Source: OSI

So providers of software don’t believe that their software works, and users (sorry!) of software agree to accept that this is the case. Wouldn’t it be the “professional” thing to have some confidence that the product you made can actually function?

ClassBrowser: warts and all

I previously gave a sneak peek of ClassBrowser, a dynamic execution environment for Objective-C. It’s not anything like ready for general use (in fact it can’t really do ObjC very well at all), but it’s at the point where you can kick the tyres and contribute pull requests. Here’s what you need to know:

Have a lot of fun!

ClassBrowser is distributed under the terms of the University of Illinois/NCSA licence (because it is based partially on code distributed with clang, which is itself under that licence).