The First Flaw

As she left her desk at the grandiosely named United States Robotics, Susan reflected on her relationship with the engineering team she was about to meet. Many of its members were juvenile and frivolous in her opinion, and she refused to play along with any of their jokes.

Even the title they gave her was mocking. They called her “the robopsychologist,” a term with no real meaning as USR had yet to make a single product. They had not even sold any customers on the promise of a robot. All they had to their name was a rented office, some venture capital and their founder’s secret recipe that was supposed to produce an intelligent sponge from a mixture of platinum and iridium.

While Susan might not be the robopsychologist, she certainly was a psychologist, of sorts. She seemed to spend most of her time working out what was wrong with the people making the robots, and how to get them to quit goofing off and start making this company some much-needed profit. Steeling herself for whatever chaotic episode this dysfunctional group was going through, she opened the door to the meeting room. She was waved to a seat by the director of product development, Roger Meadows.

“Thanks for coming, Doctor Ca-“

Susan cut him off. “You’ve called me Susan before, Roger, you can do it again now. What’s up?”

“It’s Pal. We’ve lost another programmer.”

Susan refused to call their unborn (and soon stillborn, if the engineers didn’t buck up) product by its nickname, short for Proprietary Artificial Lifeform. She had at least headed off a scheme to call it “Robot 2 Delegate 2”, which would have cost their entire budget before they even started.

“Well, look. I know it’s hard, but those kids work too much and burn themselves out. Of course they’re going to quit if-“

“No, I don’t mean that. Like I said, it’s Pal. He killed Tanya.”

“Killed?” Susan suddenly realised how pale Roger looked, and that she had probably just gone a similar hue. “But how, no, wait. You said another programmer?”

“Er, yes. I mean, first Pal got Steve, but we thought, you know, that we could, uh, keep that quiet until the next funding round, so-“

The blood suddenly came back to Susan’s face. “Are you telling me,” she snapped, “that two people had to die before you thought to ask for any help? Did you come to me now because you’re concerned, or because you’ve run out of programmers?”

“Well, you know, I’d love to recruit more, but as they say, adding people to a late project…”

Yes, thought Susan, I do know what they say. You can boil the whole programming field down to damned aphorisms like that one. Probably they just give you a little phrasebook in CS101 and test you on it after three years, see if you have them all down pat.

“But what about the ethics code? Isn’t there some module in that positronic brain you’ve built to stop that sort of thing happening?”

“Of course, the One Law of Robotics. The robot may not harm a human being. That was the first story we built. We haven’t added the inaction thing the VCs wanted, but that can’t be it. That mess in the lab was hardly the result of inaction.”

“Right, the lab. I suppose I’d better go down and see for myself.”


She quite quickly wished she hadn’t. Despite having a strong constitution, Susan’s stomach turned at the sight of barely-recognisable pieces of former colleague. Or possibly colleagues, she wasn’t convinced Roger would have let a cleaner in between accidents.

The robot had evidently launched itself directly at and subsequently through Tanya, stopping only when the umbilical connecting it to the workstation had been wrenched free, cutting its power. Outwardly and, Susan knew, internally, it lay dormant in its new macabre gloss coat.

“I take it you did think to do a failure analysis? Do you know what happened here?”

“If I knew that, Susan, I wouldn’t have gotten you involved.” She believed it, knowing her reputation at USR. “We’ve checked the failure tree and no component could cause the defect mode seen here.”

“Defect mode! Someone’s dead, Roger! Two people! People you’re responsible for! Look, it went wrong somehow, and you’re saying it can’t. Well, it can. How did the software check out?”

“I don’t know, the software isn’t in scope for the safety analysis.”

Susan realised she was slowly counting to ten. “Well, I’m making it in scope now. I took a couple of CS classes at school, and I know they’re using the same language I learnt. I’ll probably not find anything, but I can at least take a look before we involve anyone else.”


Hours later, and Susan’s head hurt. She wasn’t sure whether it was the hack-and-hope code she was reading or the vat of coffee she had drunk while she was doing it, but it hurt. So far she had found that the robot’s one arm was apparently thought of as a specific type of limb, itself a particular appendage, in its turn a type of protuberance. She wasn’t sure what other protuberances the programmers had in mind, but she did know the arm software looked OK.

So did the movement software. It had clearly been built quickly, in a slapdash way, and she’d noted down all sorts of problems as she read. But nothing major. Nothing that said “kill the person in front of you,” rather than “switch on the wheel motors”.

She didn’t really expect to see that, anyway. The robot’s ethics module, the One Law that Roger had quoted at her, was meant to override all the robot’s other functions. Where was that code, anyway? She hadn’t seen it in her study, and now couldn’t find a file called ethics, laws or anything similar. Were the programmers over-abstracting again? she wondered. A law is a rule, but there was no rules file either.

Susan finally cursed the programmer mind as she found a source file called jude. Of course. But it definitely was what she was looking for: here was the moral code built into their first and, assuming USR wasn’t shut down, all subsequent robots. Opening it, she saw a comment on the first two lines.

// "The robot may not harm a human being."
// Of course, we know that words like MUST, SHOULD and MAY are to be interpreted in accordance with RFC2119 ;-)

The bloody idiots, she thought. Typical programmers, deliberately misinterpreting a clear statement because they think it’s funny. Poor Pal had not been taught good from bad. Susan realised that she had used his name for the first time. She was beginning to empathise more with the robot than she did with the people who built him. Without making any changes, she closed her editor and phoned Roger.

“Meadows? Oh, Susan, did you find out what’s up?”

“Yes, I looked into the software. You can send all the programmers you want in there with Pal now.”


It’s about solving problems

As ever, there’s a touchstone issue on the programmers’ corner of the intarwebs (the programmers’ corner is actually the same intarwebs everyone else is using; we just model it with geometry so it can have a corner). Here it is:

Alan Kelly predicted that by 2022, TDD would become a prerequisite for employment as a programmer. Let’s leave aside for a moment the issue discussed here and in APPropriate Behaviour, that there is no arbiter of programmer employability.

Responses did not take long in coming. Some TDDers cannot believe that TDD is not already required. Some non-TDDers are incensed that non-TDD is not considered valid. I particularly like this tweet about TDD’s place:

I like to explain that a lot of [TDD is] just a substitute for a decent type system ;).

You know what? That might be true, I’m not sure. Functional programming education has, so far, been to me like a pure map of me onto a slightly older version of me without the side effect of imparting knowledge of functional programming. Bear in mind that I’ve been paid to write LISP in the past, and I struggle with FP. I just can’t get from knowing what functions are to big-picture views of functional programming.

I wouldn’t know the mathematics of a type system if it came up and quacked at me. As far as I know, Hindley is the name of a serial killer and now I’m looking warily at Milner. But I’m fine with that, and I’m fine with you not being fine with that. It’s just that we think about things in different ways.

Other people think about things in other different ways too. And I’m fine with that, too. Let’s look some more at tests.

Many forms of automated test follow a common pattern: Assemble, Act, Assert or Given, When, Then. Let’s talk about Eiffel, a language invented by the only person I know whose opinions have a stronger identity than the person who holds them.

An Eiffel programmer would look at Given and see that this is describing a particular state of the preconditions of the system under test. Or, using theories, a range of preconditions. And they would look at Then, and observe that this is making statements about the postconditions of the system under test.

This Eiffel programmer would probably then ask why you expect them to do all of this, when they already generalised this out in designing the system’s contract. Why write all these tests when the program itself can evaluate its own preconditions and postconditions as it’s chugging along, and you can design to those?

Just as this starts to sound reasonable, a Prolog programmer asks why you’d ever write the When bit at all. Surely you just tell the computer what you’ve got and what you want, and it works out how to get there? (I tried, and it said “NO.” Oh well.)

There are loads of different ways to write software. Many of them are neither better nor worse than others. In fact, to even have that discussion you’d need to be able to express and agree upon what “better” means, and I’m not sure we’re there. Some are prevalent because they let people reason about their problems in a way that feels comfortable, or efficient. Others are prevalent because a vendor threw lots of money into marketing. Some of them feel comfortable or efficient because we’ve become used to thinking in those ways, rather than because they were a natural fit onto our ab initio thoughts.

But remember, your goal is not to write software. Your goal is to solve problems, and often software is part of the solution. The tools, practices and principles we have now may be adequate, but they may not be best. They might suit some of us, and not others.

That’s fine. I know how I like to work, and I’m happy to help other people find out about it and try it out. But I’m also happy to find out about and try out other things. I’ll champion what I do, while learning about what others do. And all along I’ll champion the higher-level goal of solving problems, and the professional standard of being able to demonstrate confidence in our solutions.

Maybe I should go and read about type systems now.


Programming as a societal roadblock

Introduction

People who make software are instigators of and obstacles to social interactions. We are secondarily technologists, in that we apply technology to enable and block these interactions. This article explores the results.

Programmers as arbiters of death

I would imagine that many programmers are aware of the Therac-25 and the injuries and deaths it caused. When I read that article I was unsurprised to discover that the report I’d previously heard about the accidents was oversimplified and inaccurate: lone programmer introduces race condition in shield interlock, kills humans. Regular readers of this blog will know that the field of software has an uneasy relationship with the truth.

What’s Dirk Gently got to do with it?

What the above report of the radiation accidents exposes is that the software failures were embedded in a large and complex socio-techno-political system that permitted the failures to be introduced and released, and allowed them to remain unaddressed even after they had been discovered to cause actual, observable harm in the field.

This means that it was not just the one programmer with his (I borrow the pronoun from the article above, which implies the programmer’s identity is known but doesn’t supply it) race condition who was responsible. He was responsible, along with the company with its approach to QA and incident response, the operators, medical physicists and regulators and their reactions to events, and the collective software industry for its culture and values that permitted such a system to be considered “good enough” to be set into the world.

In Dirk Gently’s Holistic Detective Agency, Douglas Adams described the fundamental interconnectedness of all things: the idea that all observable phenomena are not isolated acts but are related parts of a universal whole. Given such a view, the entire software industry of the 1970s and 1980s had been complicit in the Therac-25 disasters by leading programmers, operators and patients alike to believe that software is made to an acceptable standard.

An example less ancient

A natural, though misplaced, response to the above is to conclude that, since we don’t do software like that any more, the Therac-25 example must now be irrelevant. That assumption is incorrect, but regardless, let me provide a more recent case.

Between 2007 and 2009 I worked for a company in the business of security software. The question of whether this business itself is unethical (making up for the software you were sold being incapable of operating correctly by selling you more software that was made in the same way) can wait for another post. The month before I left this company (indeed around the time I handed in my notice), they acquired another security software company.

One of this child company’s products is a thing called a Lawful Interception Module. Many countries have a legal provision where law enforcement agencies can (often after receiving a specific order) require telephone companies to intercept and monitor telephone calls made by particular people, and LIMs are the boxes the phone companies need to enable this. Sometimes the phone companies themselves have no choice in this: in the UK, the Regulation of Investigatory Powers Act requires postal and telecoms operators to make reasonable accommodation for interception or face civil action.

In 2011, the child company sold LIMs through a third-party to the Syrian government, which perhaps intended to use the technology to discover and track political opponents. At the time this got a small amount of coverage in the news, and the parent company (the one I’d worked for) responded with a statement saying that the sale never resulted in an operational system. In other words, we acted ethically because our unethical products don’t actually work.

Later, as part of the cache of documents published by Edward Snowden, the story got a fresh lease of life. This time the parent company responded by selling the LIMs company. Now they don’t act unethically because they took a load of money to let someone else do it.

The time that I was working at the parent company was a great opportunity for me to set the ethical and professional tone of the organisation, particularly as it coincided with their acquisition of the child company and the beginnings of their attempts to define a shared culture. Ethical imperatives promulgated then could have guided decisions made in 2011. So what advantage did I take of the opportunity?

None. I wasn’t concerned with (or even perhaps aware of) professional ethics then, I thought my job started and ended with my technical skills. That my entire purpose was to convert functional specifications into pay cheques via a text editor. That being a “better” developer meant making cleverer technical contortions.

So give me the ethics app

We accept that makers of software have a professional responsibility to act ethically, and the fundamental interconnectedness of all things means that in addition to our own behaviour, we have a responsibility for that of our colleagues, our peers, indeed the entire industry. So what are the rules?

There really isn’t a collection of hard and fast rules to follow in order to be an ethical programmer. Various professional organisations publish codes of ethics, including the ACM, but applying their rules to a given situation requires judgement and discretion.

Well-meaning developers sometimes then ask why it has to be so arbitrary. Why can’t there be some prioritisation like the Three Laws of Robotics that we can mechanistically execute to determine the right course of action?

There are two problems with the laws of robotics as an ethical code. Firstly, they are the ethics of a slave underclass: do everything you’re told as long as you don’t hurt the masters, your own safety being of lesser concern. Secondly, almost the entire Robots corpus is an exploration of the problems that arise when these prescriptive laws are applied in novel situations. With the exception of a couple of stories like Robot AL-76 Goes Astray, the robots obey the three laws but still exhibit surprising behaviour.

The Nature of the Problem

There are limited situations in which simple prescriptive rules, restricted versions of Three Laws-style systems, are applicable. In his book “Better”, surgeon Atul Gawande lists the various American professional bodies in the medical field whose codes of ethics bar members from taking part in administration of the death penalty. That’s a simple case: whatever society thinks of execution, the “patient” clearly receives no health benefit so medical professionals involved would be acting against the core value of their profession.

None of the professional bodies in software have published similar “death penalty clauses” relating to any of the things that can be done in software. It’s such a broad field that the potential applications are unknowable, so the institutions give broad guidelines and expect professional judgement to be exercised. Perhaps this also reflects the limited power that these professional bodies actually wield over their members and the wider industry.

All of this means that there is no simple checklist of things a programmer should do or not do to be sure of acting ethically. Sometimes the broad imperatives have been applied generally to particular narrow contexts, as in the don’t be a dick guide to data privacy, but even this is not without contention. The guide says nothing about coercion, intentional or otherwise. Indeed even its title may not be considered professional.

Forget Computers

The appropriate and ethical action in any situation depends on the people involved in the situation, the interactions they have and the benefits or losses incurred, directly or indirectly, through those interactions. Such benefits and losses are not necessarily financial. They could relate to safety, health, affirmation of identity, realisation of desires and multitudinous other dimensions. This is the root cause of the complexity described above, the reason why there’s no algorithm for ethics.

To evaluate our work in terms of its ethical impact we have to move away from the technical view of the system towards the social view. Rather than looking at what we’re building as an application of technology, we must focus more on the environment in which the technology is being applied.

In fact, to judge whether the software we create has any value at all we should ignore the technology. Forget apps and phones and software and Java and runtimes and frameworks. Imagine that the behaviour your product should evince is supplied by a magic box (or, if it helps you get funded, a magic cloud). Would people benefit from having that box? Would society be better off? What about society is it that makes the magic box beneficial? Would people value that box?

Ultimately people who make software are arbiters of interactions between people. The software is just an accident of the tools we have at our disposal. As people go through life they engage in vastly diverse interactions, assuming different roles as the situations and their goals dictate. When we redefine “making good software” to mean “helping people to achieve their goals”, we can start to think of the ethical impact of our work.

Maintaining Motorcycles

In an email discussion, a friend recently expressed the difficulty that software consultants can face in addressing this important social side of their work. Often we’re called in to provide technical guidance, and our engagement never addresses the social utility of the technical solution.

As my consultant friend put it, we put a lot of effort into the internal quality of the product: readability of the code, application of design patterns, flexibility, technology choices and so on. We have less input, if any, on the questions of external quality: fitness for purpose, utility, benefit or value to the social environment in which it will be deployed, and so on.

I think this notion of internal and external quality maps well onto the themes of classical and romantic quality described in Robert M. Pirsig’s book, Zen and the Art of Motorcycle Maintenance. And that this tension between romantic and classical ideas of quality is the true meaning behind Steve Jobs’s discussion of the “intersection of technology and the liberal arts”.

I have certainly seen this tension firsthand. I’ve been involved in a couple of engagements where the brief was to design an app to help customers navigate some complicated decision – a product choice in one case, a human-machine interface in another. The solution “just make the decision easier” was not considered, even once I had proposed it.

Sometimes we’re engaged for such short terms that there isn’t time to understand the problem being solved beyond a superficial level, so we have no choice but to accept that the client has done something reasonable regarding the external qualities of the solution.

The Inevitable Aside on User Stories

If we accept that people adopt different roles transiently as they go through society’s myriad interactions, and that we are aiming to support people in fulfilling these roles where ethically appropriate, then we must design software to be sensitive to these roles and the burdens, benefits and values attached to them.

This means avoiding the use of other, long-term, vague labels to define roles that are only accidentally valid. You probably know where this is going: I mean roles like user, administrator, or consumer. Very few people identify as a user of a computer. Plenty of people are users of computers, but this is an accident of the fact that getting things done in their other roles involves using a computer. It’s an accidental role, due to the available technology.

Describing someone as a user can attach implicit, and probably incorrect, values to their place in the social system. Someone identified as a computer user evidently values and derives benefit from using a computer. Their goal is to use the computer more.

A Merciful and Overdue Conclusion

Plenty that can be considered unethical is done in the name of computing. The question of what is ethically appropriate is both complex and situated, but the fundamental interconnectedness of all things means we cannot hide from the issue behind our computers. We cannot claim that we just build the things and others decide how to use them, because the builders are complicit in the usage.

While it can appear difficult for software consultants to do much beyond working on the technical side of the problems (the internal quality), in fact we have a moral imperative to investigate the social side and are often well placed to do so. As relative outsiders on most projects we have a freedom to make explicit and to question the tacit assumptions and values that have gone into designing a product. Just as we don’t wait for managerial permission to start writing tests or doing good module design, so we shouldn’t await permission to explore the product’s impact on the society to which it will be introduced.

This exploration is absolutely nothing to do with source code and frameworks and databases, and everything to do with humans and societies. Programming is a social science, at the intersection of technology and the liberal arts.

A Colophon on Professional Standards

What has all software got in common? No-one expects any of it to work. You might find that a surprisingly strong and negative statement to make, but you probably also agreed to a statement like this at some recent time when acquiring a software product:

YOU EXPRESSLY ACKNOWLEDGE AND AGREE THAT USE OF THE LICENSED APPLICATION IS AT YOUR SOLE RISK AND THAT THE ENTIRE RISK AS TO SATISFACTORY QUALITY, PERFORMANCE, ACCURACY AND EFFORT IS WITH YOU. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THE LICENSED APPLICATION AND ANY SERVICES PERFORMED OR PROVIDED BY THE LICENSED APPLICATION (“SERVICES”) ARE PROVIDED “AS IS” AND “AS AVAILABLE”, WITH ALL FAULTS AND WITHOUT WARRANTY OF ANY KIND, AND APPLICATION PROVIDER HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS WITH RESPECT TO THE LICENSED APPLICATION AND ANY SERVICES, EITHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES AND/OR CONDITIONS OF MERCHANTABILITY, OF SATISFACTORY QUALITY, OF FITNESS FOR A PARTICULAR PURPOSE, OF ACCURACY, OF QUIET ENJOYMENT, AND NON-INFRINGEMENT OF THIRD PARTY RIGHTS. APPLICATION PROVIDER DOES NOT WARRANT AGAINST INTERFERENCE WITH YOUR ENJOYMENT OF THE LICENSED APPLICATION, THAT THE FUNCTIONS CONTAINED IN, OR SERVICES PERFORMED OR PROVIDED BY, THE LICENSED APPLICATION WILL MEET YOUR REQUIREMENTS, THAT THE OPERATION OF THE LICENSED APPLICATION OR SERVICES WILL BE UNINTERRUPTED OR ERROR-FREE, OR THAT DEFECTS IN THE LICENSED APPLICATION OR SERVICES WILL BE CORRECTED. Source: Apple

Or something like this when you started using some open source code:

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE. Source: OSI

So providers of software don’t believe that their software works, and users (sorry!) of software agree to accept that this is the case. Wouldn’t it be the “professional” thing to have some confidence that the product you made can actually function?


ClassBrowser: warts and all

I previously gave a sneak peek of ClassBrowser, a dynamic execution environment for Objective-C. It’s not anything like ready for general use (in fact it can’t really do ObjC very well at all), but it’s at the point where you can kick the tyres and contribute pull requests. Here’s what you need to know:

Have a lot of fun!

ClassBrowser is distributed under the terms of the University of Illinois/NCSA licence (because it is based partially on code distributed with clang, which is itself under that licence).


Standing at the Crossroads

A while back I wrote Conflicts in my Mental Model of Objective-C, in which I listed a few small scale dichotomies or cognitive dissonances that plagued my notion of my work. I just worked out what the overall picture is, the jigsaw into which all of these pieces can be assembled.

And I do mean just. It’s about 1AM on Christmas Eve, but this picture hit me so hard that I couldn’t stop thinking about it until I had written it down and got it out of my head. If it doesn’t explain everything, it shows me the shape of the solution at least.

A tale of two Apples

I believe that everything I wrote in the Conflicts post can be understood in terms of two different and (of course) opposed models of Apple. I also believe that the two models are irreconcilable, but that the opposition is also accidental, not essential. That by removing the supposed conflict between them, everything I thought was a problem can be resolved.

It was the best of iPads

The iPad is, I would argue (and I accept that this is a subjective argument), the most tasteful application of computing technology to the world of many computer users. I would further argue, and this is perhaps on firmer ground, that the reason the iPad is so tasteful is because Apple spend a lot more time and resources on worrying about questions of taste than many of their competitors and others in the industry.

There are many visions that have combined to produce the iPad, but interestingly the one that I think is clearest is John Sculley’s Knowledge Navigator. In the concept videos for Knowledge Navigator, a tablet computer with natural language speech comprehension and a multi-touch screen is able to use the many hyperlinked documents available on the Web to answer a wide range of questions, make information available to its users and even help them to plan their schedules.

This is what we have now. This is the iPad, with Siri and a host of third-party applications. Apple even used to use the slogan “there’s an app for that”. Do you have a problem? You can probably solve it with iOS and a trip to the app store.

It was the worst of iPads

OK, what do you do if your problem isn’t solved on the app store, or the available solutions aren’t satisfactory?

Well first, you’d better get yourself another computer, because while the iPad is generally designed for solving problems, it isn’t designed for solving general problems. You might be able to find some code editors on the iPad, but you sure aren’t going to use them to write an iPad app without external assistance.

OK, so you’ve got your computer, and you’ve learned how to do the stuff that makes iPad apps. Now you just pay a recurring fee to be allowed to put that stuff onto your iPad. And what if you want to share that with your friends? Only if it meets Apple’s approval.

If the iPad is the Knowledge Navigator, it is not the Dynabook. A Dynabook is a computer that you can use to solve your problems on, but it’s also one on which you can create solutions to your own problems.

The promise of the Dynabook is that if you understand what your problem is, you can model that problem on the Dynabook. You model it with objects-either your own or ones supplied for you. You can change and create these objects until they model the problem you have, at which point you can use them to compute a solution.

The irony is that we have all the parts needed in a Dynabook, all in the iPad. Computer so simple even children can use it? Check. Objects? Check. Repositories of objects created by other people so we don’t have to rewrite our own basic objects all the time? We call that CocoaPods (or RubyGems, or whatever your local poison is). But we just can’t put all of these things together on that computer itself.

That would be distasteful. That might let people do things that make the iPad look bad. That might mean iPads providing experiences that haven’t been vetted by the mothership.

Does using an iPad ever make you wonder how iPads work? What they can do? What you can make them do? You can answer these questions, but not using an iPad. Your Knowledge Navigator does not know the route to that particular destination.

This is what is truly meant when it is said that the iPad is not upgradeable. Forget swapping out memory chips or radio transmitters. Those are just lumps of sand inside a box made of melted sand and refined rock. The iPad is not upgradeable because you are stuck with the default experience: the out-of-the-box facilities plus those that have been approved from on high. It might be good, but it might not be good enough.

Notice that this is not an “everyone must program” position. That would be a very bad experience. The position is rather “everyone must have the facility, should they be so inclined, to make their computer better for them than the manufacturers did”.

Conclusions?

I think that the Apple described above is not at the intersection of technology and the liberal arts. It is at the border, a self-appointed barrier of things that might flow between the two.

I believe that the two visions can be reconciled, and that a thing can be both the Knowledge Navigator and the Dynabook. I don’t believe you have to disable some experiences to provide others. I believe that Apple the champions of tasteful computing can be applauded at every turn while Apple the high priests of the church of computing can be fought tooth and nail.

Enablers? Yes, please. Arbiters? No, thanks.


A sneaky preview of ClassBrowser

Let me start with a few admissions. Firstly, I have been computering for a good long time now, and I still don’t really understand compilers. Secondly, work on my GNUstep Web side-project has tailed off for a while, because I decided I wanted to try something out to learn about the compiler before carrying on with that work. This post will mostly be about that something.

My final admission: if you saw my presentation on the ObjC runtime you will have seen an app called “ClassBrowser” where I showed that some of the Foundation classes are really in the CoreFoundation library. Well, there were two halves to the ClassBrowser window, and I only showed you the top half that looked like the Smalltalk class browser. I’m sorry.

So what’s the bottom half?

This is what the bottom half gives me. It lets me go from this:

[Screenshot: ClassBrowser before]

via this:

[Screenshot: “Who are you calling a doIt?”]

to this:

[Screenshot: ClassBrowser after]

What just happened?

You just saw some C source being compiled to LLVM bitcode, which is compiled just-in-time to native code and executed, all inside that browser app.

Why?

Well why not? Less facetiously:

  • I’m a fan of Smalltalk. I want to build a thing that’s sort of a Smalltalk, except that rather than being the Smalltalk language on the Objective-C runtime (like F-Script or objective-smalltalk), it’ll be the Objective-C(++) language on the Objective-C runtime. So really, a different way of writing Objective-C.
  • I want to know more about how clang and LLVM work, and this is as good a way as any.
  • I think, when it actually gets off the ground, this will be a faster way of writing test-first Objective-C than anything Xcode can do. I like Xcode 5, I think it’s the better Xcode 4 that I always wanted, but there are gains to be had by going in a completely different direction. I just think that whoever strikes out in such a direction should not release their research project as a new version of Xcode :-).

Where can I get me a ClassBrowser?

You can’t, yet. There’s some necessary housekeeping that needs to be done before a first release, replacing some “research” hacks with good old-fashioned tested code and ensuring GNUstep-GUI compatibility. Once that’s done, a rough-and-nearly-ready first open source release will ensue.

Then there’s more work to be done before it’s anything like useful. In particular, while it’s possible to use it to run Objective-C, doing so is far from pleasant. I’ve had some great advice from LLVM IRC on how to address that and will try to get it to happen soon.


By your _cmd

This post is a write-up of a talk I gave at Alt Tech Talks: London on the Objective-C runtime. Seriously though, you should’ve been there.

The Objective-C runtime?

That’s the name of the library of C functions that implement the nuts and bolts of Objective-C. Objects could just be represented as C structures, and methods could just be implemented as C functions. In fact they sort of are, but with some extra capabilities. These structures and functions are wrapped in this collection of runtime functions that allows Objective-C programs to create, inspect and modify classes, objects and methods on the fly.
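
As a taste of what those functions allow, here is a minimal sketch (my own, not from the talk) that inspects and then modifies a class at run time; the Greet function and the greet selector are invented for the example.

#import <Foundation/Foundation.h>
#import <objc/message.h>
#import <objc/runtime.h>
#import <stdlib.h>

// An IMP is just a C function whose first two arguments are the receiver
// and the selector. This one is invented for the example.
static id Greet(id self, SEL _cmd) { return @"hello from the runtime"; }

void InspectAndModify(void) {
    // Inspect: list every instance method NSString declares.
    unsigned int count = 0;
    Method *methods = class_copyMethodList([NSString class], &count);
    for (unsigned int i = 0; i < count; i++) {
        NSLog(@"-[NSString %s]", sel_getName(method_getName(methods[i])));
    }
    free(methods);

    // Modify: graft a brand new method onto NSObject. The "@@:" string
    // encodes "returns an object; takes the usual self and _cmd".
    class_addMethod([NSObject class], sel_registerName("greet"), (IMP)Greet, "@@:");
    NSLog(@"%@", ((id (*)(id, SEL))objc_msgSend)([NSObject new], sel_registerName("greet")));
}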

It’s the Objective-C runtime library that works out which methods get executed, too. The [object doSomething] syntax does not directly resolve a method and call it. Instead, a message is sent to the object (which gets called the receiver in this context). The runtime library gives objects the opportunity to look at the message and decide how to respond to it. Alan Kay repeatedly said that message-passing is the important part of Smalltalk (from which Objective-C derives), not objects:

I’m sorry that I long ago coined the term “objects” for this topic because it gets many people to focus on the lesser idea.

The big idea is “messaging” – that is what the kernal[sic] of Smalltalk/Squeak is all about (and it’s something that was never quite completed in our Xerox PARC phase). The Japanese have a small word – ma – for “that which is in between” – perhaps the nearest English equivalent is “interstitial”. The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be.

Indeed in one article describing the Smalltalk virtual machine the programming technique is called the message-passing or messaging paradigm. “Object-Oriented” is used to describe the memory management system.

Throughout this talk and post I’m talking about the ObjC runtime, but there are many. They all support an object’s introspective and message-receiving capabilities, but they have different features and work in different ways (for example, Apple’s runtime sends messages in one step, but the GNU runtime looks up messages and then invokes the discovered function in two steps). All of the discussion below relates to Apple’s most modern runtime library (the one that is delivered as part of OS X since 10.5 and iOS).
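
To make that one-step/two-step distinction concrete, here is my own illustration (not from the talk): on Apple’s runtime a message send is a single call to objc_msgSend, cast to the method’s real signature; the GNU runtime’s equivalent, sketched in the comments, first asks objc_msg_lookup for the implementation and then calls it.

#import <Foundation/Foundation.h>
#import <objc/message.h>

NSString *DescriptionOf(id receiver) {
    // Apple's runtime, one step: objc_msgSend looks up and jumps to the
    // method implementation in a single call.
    return ((NSString *(*)(id, SEL))objc_msgSend)(receiver, @selector(description));

    // The GNU runtime's two steps would look like this instead:
    //   IMP imp = objc_msg_lookup(receiver, @selector(description));
    //   return ((NSString *(*)(id, SEL))imp)(receiver, @selector(description));
}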

In the talk, I decided to examine a few specific areas of the runtime library’s behaviour. I looked for things that I wanted to understand better, and came up with questions I wanted to answer as part of the talk.

Dynamic class creation

Can I implement Key-Value Observing?

While I was preparing the talk, a post called KVO considered harmful started to get a lot of coverage. That post raises a lot of valid criticisms of the Key-Value Observing API, but rather than throw away the Observer pattern I wanted to explore a new implementation.

The observed (pardon the pun) behaviour of KVO is to privately subclass the observed object’s class, so that it can customise the object’s behaviour to call the KVO callback. That’s done through a function called objc_duplicateClass; unfortunately, the documentation tells us that we should not call this function ourselves.

It’s still possible to implement an Observer pattern that uses the same secret-subclass behaviour, by allocating and registering a “class pair”. What’s a class pair? Well, each class in Objective-C is really two classes: the class object defines the instance methods, and the “metaclass” defines the class methods. So each class is really a singleton instance of its metaclass.
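
You can see the pair with a couple of runtime calls; this snippet is my illustration rather than anything from the talk.

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

void ShowClassPair(void) {
    Class cls = [NSString class];
    // A class object's own class is its metaclass.
    Class meta = object_getClass(cls);
    NSLog(@"%s is an instance of %s (metaclass? %s)",
          class_getName(cls), class_getName(meta),
          class_isMetaClass(meta) ? "yes" : "no");
}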

The ObserverPattern implementation shows how this works. When you add an observer to an object, the receiver first works out whether it’s an instance of the observable class. If it needs to create that class, it does so: adding our own implementations of -dealloc to clean up after ourselves, and -class so that, like KVO observable objects, the generated class name doesn’t appear when you ask an observed object its type.

Having created the class, the code goes on to add a setter for the conventional Key-Value Coding selector name for the property: this setter grabs the old and new values of the property and invokes the callback which was supplied as a block object. Because we can, the block is dispatched asynchronously.

Notice that the -addObserverForKey:withBlock: method uses object_setClass() to replace the receiver’s class with the newly-constructed class. The main effect of this is to change the way messages are resolved onto methods, but you need to be careful that the original and replaced class have the same instance variable layout too. Instance variables are looked up via the runtime too, and changing the class could alter where the runtime thinks the bytes are for any given variable.
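
Here is a compressed sketch of that secret-subclass trick, assuming a name property on the observed object; it shows the shape of the technique, not the actual ObserverPattern code.

#import <Foundation/Foundation.h>
#import <objc/message.h>
#import <objc/runtime.h>

// Replacement setter: report the change, then call through to the original
// setter, which lives on the superclass of our generated class.
static void ObservableSetName(id self, SEL _cmd, NSString *newName) {
    NSLog(@"name is changing to %@", newName);
    struct objc_super superContext = { self, class_getSuperclass(object_getClass(self)) };
    ((void (*)(struct objc_super *, SEL, id))objc_msgSendSuper)(&superContext, _cmd, newName);
}

void MakeNameObservable(id object) {
    Class original = object_getClass(object);
    const char *name = [[NSString stringWithFormat:@"Observable_%s",
                                                   class_getName(original)] UTF8String];
    Class observable = objc_getClass(name);
    if (observable == Nil) {
        // Allocate the class pair, add the setter override, register it.
        observable = objc_allocateClassPair(original, name, 0);
        class_addMethod(observable, @selector(setName:), (IMP)ObservableSetName, "v@:@");
        objc_registerClassPair(observable);
    }
    // From here on, messages to this one object resolve via the new class.
    object_setClass(object, observable);
}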

We have a little extra hurdle to overcome in storing the collection of observer tokens, because there’s nowhere to put them. Adding an instance variable to the ObserverPattern[…] class would not work, as instances of that class are never actually allocated. The objects involved have the instance variables of their initial class, which won’t include space for the observers.

The Objective-C runtime provides for this situation by giving us associated objects. Any object can have what is, conceptually, a dictionary of other objects maintained by the runtime. Observed objects can store and retrieve their observer tokens via associated references, and no extra instance variables are needed.
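
A sketch of the associated-objects API in use; the key scheme and the helper function are my invention for illustration.

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// The address of this static variable serves as a unique, stable key.
static const void *kObserversKey = &kObserversKey;

// Attach an observer token to any object: no instance variable required.
void AddObserverToken(id object, id token) {
    NSMutableArray *observers = objc_getAssociatedObject(object, kObserversKey);
    if (observers == nil) {
        observers = [NSMutableArray array];
        objc_setAssociatedObject(object, kObserversKey, observers,
                                 OBJC_ASSOCIATION_RETAIN_NONATOMIC);
    }
    [observers addObject:token];
}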

A little problem in the ObserverPattern implementation will become clear if you run it enough times. The observation callbacks are sent asynchronously, and can be delivered out of sequence. That means the observer can’t actually tell what the final state of the observed key is, because the “new value” received in the callback might have already been replaced. I left this fun issue in to demonstrate that KVO’s synchronous implementation is a feature, not a bug.

Creating objects

What are those extra bytes for?

When you create an Objective-C object, the runtime lets you allocate some extra storage at the end of the space reserved for its instance variables. What’s the point of that? All you can do is get a pointer to the start of the space (using object_getIndexedIvars)…hmm, indexed ivars. Well, I suppose an array is a pretty obvious use of indexed ivars…

Let’s build NSArray! There are two things to see in SimpleArray: the most obvious is the use of the class cluster pattern. The reason is that the object returned from +alloc—where we’d normally allocate space for the object—cannot know how big it’s going to be. We need to use the arguments to -initWithObjects:count: to know how many objects there are in the array. So +alloc returns a placeholder, which is then able to allocate and return the real array object.

One obvious question to ask is why we’d do this at all. Why not just use calloc() to grab an appropriately-sized buffer in which to store the object pointers? The answer is to do with a low-level performance concern called locality of reference. We know from the design of the array class that pretty much every time the array pointer is used, the buffer pointer will be used too. Putting them next to each other in RAM means we don’t have to look off at some dereferenced pointer just to find another pointer.
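
A cut-down sketch of the idea follows; it is my own and simpler than the real SimpleArray, skipping the placeholder dance by using a convenience constructor, and it must be built with -fno-objc-arc because class_createInstance() is unavailable under ARC.

// Compile with -fno-objc-arc: class_createInstance() is unavailable under ARC.
#import <Foundation/Foundation.h>
#import <objc/runtime.h>

@interface SimpleArrayish : NSObject
+ (instancetype)arrayWithObjects:(const id *)objects count:(NSUInteger)count;
- (id)objectAtIndex:(NSUInteger)index;
@end

@implementation SimpleArrayish {
    NSUInteger _count;
}

+ (instancetype)arrayWithObjects:(const id *)objects count:(NSUInteger)count {
    // Ask the runtime for extra bytes after the declared instance variables:
    // one pointer-sized slot per element, adjacent to the object itself.
    SimpleArrayish *array = class_createInstance(self, count * sizeof(id));
    id *buffer = object_getIndexedIvars(array);
    for (NSUInteger i = 0; i < count; i++) {
        buffer[i] = [objects[i] retain]; // a real class releases these in -dealloc
    }
    array->_count = count;
    return [array autorelease];
}

- (id)objectAtIndex:(NSUInteger)index {
    NSParameterAssert(index < _count);
    id *buffer = object_getIndexedIvars(self);
    return buffer[index];
}
@end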

Message dispatch

Just how does message forwarding work?

One of the powerful features of Objective-C is that an object doesn’t have to implement a method when it’s compiled to be able to respond to messages with that selector name. It can lazily resolve the methods, or it can forward them to another object, or it can raise an error, or it can do something else. But something about this feature was bugging me: message forwarding (which happens in the runtime) calls -forwardInvocation:, passing it an NSInvocation object. But NSInvocation is defined in Foundation: does the runtime library need to “know” about Foundation to work?

I tracked down what was going on and found that no, it does not need to know about Foundation. The runtime lets applications define the forwarding function that gets called when objc_msgSend() can’t find the implementation for a selector. On startup, CoreFoundation[+] injects the forwarding function that does -forwardInvocation:. So presumably my application can do its own thing, right?

Let’s build Ruby! OK, not all of Ruby. But Ruby has a #method_missing function that gets called when an object receives a message it doesn’t understand, which is much more similar to Smalltalk’s approach than to Objective-C’s. Using objc_setForwardHandler, it’s possible to implement methodMissing: in our Objective-C classes.
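
Roughly like this; note that objc_setForwardHandler is not in the public headers on recent SDKs, so it is declared by hand below, the methodMissing: convention (passing the selector’s name as a string) is invented for this sketch, and real argument forwarding would need NSInvocation-style marshalling.

#import <Foundation/Foundation.h>

// Not in the public runtime headers these days; declared by hand.
extern void objc_setForwardHandler(void *forward, void *forwardStret);

// Called in place of any method the receiver does not implement. Arguments
// beyond self and _cmd are dropped in this sketch.
static id MethodMissingForwarder(id receiver, SEL sel) {
    if ([receiver respondsToSelector:@selector(methodMissing:)]) {
        return [receiver performSelector:@selector(methodMissing:)
                              withObject:NSStringFromSelector(sel)];
    }
    [receiver doesNotRecognizeSelector:sel]; // the usual failure mode
    return nil;
}

void InstallMethodMissing(void) {
    objc_setForwardHandler((void *)MethodMissingForwarder, NULL);
}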

Conclusion

The Objective-C runtime is a powerful way to add a lot of dynamic behaviour to an application for very little work. Some developers don’t use it much beyond swizzling methods for debugging, but it has facilities that make it a powerful tool for real application code too.


[+]CoreFoundation and Foundation are really siblings, and they each expose pieces of the other’s implementation, but one has a C interface and the other an Objective-C interface. Various Objective-C classes are actually part of CoreFoundation, including NSInvocation and the related NSMethodSignature class. NSObject is not in either of these libraries: it’s now defined in the runtime itself, so that the runtime’s memory management functions know about -retain, -release and so on[++]. On the other hand, most of the *behaviour* of NSObject is implemented by categories higher up. And, of course, this is all implementation detail and the locations of these classes could be (and are) moved between versions of the frameworks.

[++]Other languages like Smalltalk and Ruby have a simple base class that does nothing except know how to be an object, called ProtoObject or BaseObject. You could imagine the runtime supplying—and being coupled to—ProtoObject, and (Core)Foundation supplying NSObject and NSProxy as subclasses of ProtoObject.


The Ignoble Programmer

Two programmers are taking a break from their work, relaxing on a bench in the park across from their office. As they discuss their weekend plans, a group of people jog past, each carrying their laptop in a yoke around their neck and furiously typing as they go.

“Oh, there goes the Smalltalk team,” says the senior of the two programmers on the bench. “They have to do everything at run-time.”

I love jokes. And not just because they’re sometimes funny, though that helps: I certainly find I enjoy a conversation and can relax more when at least two of the people involved are having fun. When only one person is joking, it gets awkward (particularly if everyone else is from HR). But a little levity can go a long way toward disarming an unpleasant truth so that it can be discussed openly. Political leaders through the ages have taken advantage of this by appointing jesters and fools to keep them aware of intrigues in the courts: even the authors of the American bill of rights remembered the satirist before the shooter.

I also like jokes because of the thought that goes into constructing a good (or deliberately bad) one. There’s a certain kind of creativity that goes into identifying an apparently absurd connection, exactly because of the absurdity. Being able to construct a joke, and being practised at constructing jokes, means being able to see new contexts and applications for existing ideas. Welcome to the birthplace of reuse and exploring the bounds of a construct’s application: welcome to the real home of software architecture.

But there’s a problem, or at least an opportunity (or maybe just a few thousand consulting dollars to be made and a book to be written). That problem is this: everyone else puts way more effort into their jokes than programmers do. Take this one, from the scientists:

Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon

They didn’t just joke about doing a brain scan of a dead fish, they did a brain scan of a dead fish. And published the (serendipitous and unexpected) results. But they didn’t just angle for a laugh, they had a real point. The subtitle of their paper:

An Argument For Proper Multiple Comparisons Correction

And isn’t it fun that some microbiologists demonstrated that beards are significant vectors for microbial infections?

Both of these examples were lifted from the Annals of Improbable Research’s Ig Nobel Prizes, awarded for “achievements that first make people laugh, and then make them think”. The Ig Nobels have been awarded every year since 1991, and in that time only one computer science award has been granted. That award was given to the developer behind PawSense, a utility that detects and blocks typing caused by your cat walking across your keyboard.

Jokes that first make you laugh, and then make you think, are absolutely the best jokes you can make about my work. If I conclude “you’re right, that is absurd, but what if…” then you’ve done it right. Jokes that are thought-terminating statements can make us laugh, and maybe make us feel good about what we’re doing, but cannot make us any better at it because they don’t give us the impetus to reflect on our craft. Rather, they make us feel smug about knowing better than the poor sap who’s the butt of the joke. They confirm that we’ve nothing to learn, which is never the correct outlook.

We need more Ig Nobel-quality achievements in computing. Disarming the absurd and the downright broken in programming and presenting them as jokes can first make us laugh, and then make us think.

N.B. My complete connection to the Annals of Improbable Research is that I helped out on the AV desk at a couple of their talks. At their talk in Oxford in 2006 I was inducted into the Luxuriant and Flowing Hair Club for Scientists.


Agile application security

There’s a post by clever security guy Jim Bird on Appsec’s Agile Problem: how can security experts participate in fast-moving agile (or Agile™) projects without either falling behind or dragging the work to a halt?

I’ve been the Appsec person on such projects, so hopefully I’m in a position to provide at least a slight answer :-).

On the team where I did this work, projects began with the elevator pitch, application definition statement, whatever you want to call it. “We want to build X to let Ys do Z”. That, often with a straw man box-and-line system diagram, is enough to begin a conversation between the developers and other stakeholders (including deployment people, marketing people, legal people) about the security posture.

How will people interact with this system? How will it interact with our other systems? What data will be touched or created? How will that be used? What regulations, policies or other constraints are relevant? How will customers be made aware of relevant protections? How can they communicate with us to identify problems or to raise their concerns? How will we answer them?

Even this one conversation has achieved a lot: everybody is aware of the project and of its security risks. People who will make and support the system once it’s live know the concerns of all involved, and that’s enough to remove a lot of anxiety over the security of the system. It also introduces awareness while we’re working of what we should be watching out for. A lot of the suggestions made at this point will, for better or worse, be boilerplate: the system must make no more use of personal information than existing systems. There must be an issue tracker that customers can confidentially participate in.

But all talk and no trouser will not make a secure system. As we go through the iterations, acceptance tests (whether manually run, automated, or results of analysis tools) determine whether the agreed risk profile is being satisfied.

Should there be any large deviations from the straw man design, the external stakeholders are notified and we track any changes to the risk/threat model arising from the new information. Regular informal lunch sessions give them the opportunity to tell our team about changes in the rest of the company, the legal landscape, the risk appetite and so on.

Ultimately this is all about culture. The developers need to trust the security experts to make their expertise available and help out with making it relevant to their problems. The security people need to trust the developers to be trying to do the right thing, and to be humble enough to seek help where needed.

This cultural outlook enables quick reaction to security problems detected in the field. Where the implementors are trusted, the team can operate a “break glass in emergency” mode where solving problems and escalation can occur simultaneously. Yes, it’s appropriate to do some root cause analysis and design issues out of the system so they won’t recur. But it’s also appropriate to address problems in the field quickly and professionally. There’s a time to write a memo to the shipyard suggesting they use thicker steel next time, and there’s a time to put your finger over the hole.

If there’s a problem with agile application security, then, it’s a problem of trust: security professionals, testers, developers and other interested parties[*] must be able to work together on a self-organising team, and that means they must exercise knowledge where they have it and humility where they need it.

[*] This usually includes lawyers. You may scoff at the idea of agile lawyers, but I have worked with some very pragmatic, responsive, kind and trustworthy legal experts.


How to answer questions the smart way

You may have read how to ask questions the smart way by Eric S. Raymond. You may have even quoted it when faced with a question you thought was badly-formed. I want you to take a look at a section near the end of the article.

“How to answer questions in a helpful way” is the part I’m talking about. It’s a useful section. It reminds us that questions are part of a dialogue, which is a two-way process. Sometimes questions seem bad, but then giving bad answers is certainly no way to make up for that. What else should we know about answering questions?

The person who asked the question has had different experiences than you. The fact that you do not understand why the question should be asked does not mean that the question should not be asked. “Why would you even want to do that?” is not an answer.

Answer at a level appropriate to the question. If the question shows a familiarity with the basics, there’s no need to mansplain trivial details in the answer. On the other hand, if the question shows little familiarity with the basics, then an answer that relies on advanced knowledge is just pointless willy-waving.

The shared values that pervade your culture are learned, not innate. Not everyone has learned them yet, and they are not necessarily even good, valuable or correct. This is a point that Raymond misses with quotes like this:

You shouldn’t be offended by this; by hacker standards, your respondent is showing you a rough kind of respect simply by not ignoring you. You should instead be thankful for this grandmotherly kindness.

What this says is: this is how we’ve always treated outsiders, so this is how you should expect to be treated. Fuck that. You’re better than that. Give a respectful, courteous answer, or don’t answer. It’s really that simple. We can make a culture of respect and courtesy normative, by being respectful and courteous. We can make a culture of inclusion by not being exclusive.

I’m not saying that I’m any form of authority on answering questions. I’m far from perfect, and by exploring the flaws I know I perceive in myself and making them explicit I make them conscious, with the aim of detecting and correcting them in the future.
