On home truths in iOS TDD

The first readers of Test-Driven iOS Development (currently available in Rough Cuts form on Safari Books Online: if you want to buy a paper/Kindle/iBooks edition, you’ll have to wait until it enters full production in a month or so) are giving positive feedback on the book’s content, which is gratifying. Bar last-minute corrections and galley proof checking, my involvement with the project is nearly over, so it’s time for me to reflect on the work that has dominated my schedule for over a year.

As explained in the book’s front matter, I chose to present all of the examples in the book and the accompanying source code using OCUnit. As the BBC might say, “other unit test frameworks are available”. Some of the alternative frameworks are discussed in the book, so interested readers can try them out for themselves.

What made OCUnit the correct choice—or, to put it a different way, what made OCUnit the choice I made? It’s the framework that’s shipped with Xcode, so anyone who might want to try out unit testing can pick up the book and give it a go. There are no third-party dependencies to become unsupported or change beyond all recognition—though that does occasionally happen to Xcode. File > New Project…, include unit tests, and you’re away, following the examples and trying out your own things.

Additionally, the shared body of knowledge in the Cocoa development community is greatest when it comes to OCUnit. Aside from people who consider automated testing to be teh suck, plenty of developers on Mac, iOS and other platforms have got experience using OCUnit or something very much like it. Some of those people have switched to other frameworks, but plenty are using OCUnit. There’s plenty of experience out there, and plenty of help available.

The flip side to this is that OCUnit doesn’t represent the state of the art in testing. Far from it: the kit was first introduced in 1998, and hasn’t changed a great deal since. Indeed many of the alternatives we see in frameworks like GHUnit and Google Toolbox for Mac are really not such great improvements, adding some extra macros and different reporting tools. Supporting libraries such as OCHamcrest and OCMock give us some additional features, but we can look over the fence into the neighbouring fields of Java, Ruby and C# to see greater innovations and more efficient testers.

Before you decide to take the book out of your Amazon basket, let me assure you that learning TDD via OCUnit is not wasted effort. The discipline of red-green-refactor, the way that writing tests guides the design of your classes, the introduction of test doubles to remove dependencies in tests: these are all things that (I hope) the book can teach you, and that you can employ whether you use OCUnit or some other framework. And, as I said, there’s plenty of code out there that is in an OCUnit harness. It’s not bad, but it could be better.

So what are the problems with OCUnit?

  • repetition. Every time you write STAssert, you’re saying two things. First, “hey, I’m using OCUnit”, which isn’t really useful information. Second, “what’s coming up is a test, read on to find out what kind of test”. Then you finally get to the end of the macro name, where you reveal what it is you’re going to do. This is the important information, but we bury it in the middle of the line behind some boilerplate.

    Imagine, instead, a hypothetical language where we could send messages to arbitrary expressions (OK, that exists, but imagine it’s Objective-C). Then you could write [[2+2 should] equal: 4];, which more closely reflects our intention.

  • repetition. In the same way that STAssert is boilerplate, so is subclassing SenTestCase and writing -(void)test at the beginning of every test method. It gives you no useful information, and hides the actual data about the test behind the boilerplate.

    Newer test frameworks in languages like C# and Java use the annotation features of those languages to take the fact that a method is a test out of its signature and make it metadata. ObjC doesn’t support annotations, so we can’t do that. But take a look at the way CATCH tests are marked up. You indicate that something is a test, and the fact that this means the framework needs to generate an Objective-C++ class and call a method on it is encapsulated in the framework’s implementation.

  • repetition. You might think that there’s a theme developing here :-). If you write descriptive method names, you might have a test named something like -testTheNetworkConnectionIsCleanedUpWhenADownloadFails. Should that test fail, you’re told what is going wrong: the network connection is not cleaned up when a download fails.

    So what should you write in the mandatory message parameter all of the STAssert…() macros require? How about @"the network connection was not cleaned up when a download failed"? Not so useful.

  • organisation. I’ve already discussed how OCUnit makes you put tests into particular classes and name them in particular ways. What if you don’t want to do that? What if you want to define multiple groups of related tests in the same class, in the way BDD practitioners do to indicate they’re all part of the same story? What if you want to group some of the tests in one of those groups? You can’t do that. There’s a sketch below contrasting the OCUnit boilerplate with a BDD-style alternative.
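
To make those complaints concrete, here’s a sketch contrasting the OCUnit boilerplate with a BDD-style spec. The Downloader class and its methods are hypothetical examples, and the spec syntax is only illustrative of frameworks such as Kiwi or Cedar, not any one framework’s literal API:

// OCUnit: the subclass, the -(void)test prefix, the STAssert macro and its
// message parameter all repeat information that is already in the test's name.
#import <SenTestingKit/SenTestingKit.h>

@interface DownloaderTests : SenTestCase
@end

@implementation DownloaderTests

- (void)testTheNetworkConnectionIsCleanedUpWhenADownloadFails {
  Downloader *downloader = [[Downloader alloc] init]; // hypothetical class under test
  [downloader simulateFailedDownload];                // hypothetical way to provoke the failure
  STAssertNil([downloader connection],
              @"the network connection was not cleaned up when a download failed");
}

@end

// BDD-style: nested describe/context blocks group related tests into a story,
// and the expectation itself carries the description.
SPEC_BEGIN(DownloaderSpec)

describe(@"a downloader", ^{
  context(@"when a download fails", ^{
    it(@"cleans up the network connection", ^{
      Downloader *downloader = [[Downloader alloc] init];
      [downloader simulateFailedDownload];
      [[[downloader connection] should] beNil];
    });
  });
});

SPEC_END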

I’m sure other people have other complaints about OCUnit, and that yet other people can find no fault with it. In this post I wanted to draw attention to the fact that there’s more than one way to crack a nut, and the vendor-supplied nutcracker is useful though basic.


Security: probably doing it wrong

Being knowledgeable in the field of information security is useful and beneficial. However, it’s not sufficient, and while it’s (somewhat) easy to argue that it’s necessary, there’s a big gap between being a security expert and making software better, or even making software more secure.

The security interaction on many projects goes something like this:

  • Develop software
  • Get a penetration tester in
  • Oh, shit
  • Fix anything that won’t take more than two days
  • Get remaining risk signed off by senior management
  • Ship
  • Observe that most of the time, this doesn’t cause much trouble

Now whether or not a company can afford to rely on that last bullet point being correct is a matter for the executives to decide, but let’s assume that they don’t want to depend on it. The problem they’ll have is that they must depend on it anyway, because the preceding software project was done wrong.

Security people love to think that they’re important and clever (and they are, just not any more than other software people). Throughout the industry you hear talk of “fail” or even “epic fail”. This is not jargon, it’s an example of the mentality that promotes calling developers idiots.

Did the developer get the security wrong because he’s an idiot, or was it because you didn’t tell him it was wrong until after he had finished?

“But we’re penetration testers; we weren’t engaged until after the developers had written the software.” Whose fault is that? Did you tell anyone you had advice to give in the earlier stages of development? Did you offer to help with the system architecture, or with the requirements, or with tool selection?

You may think at this point that I shouldn’t rock the boat; that if we carry on allowing people to write insecure software, there’ll be more money to be made in testing it and writing reports about how many high-severity issues there are that need fixing. That may be true, though it won’t actually lead to software becoming more secure.

Take another look at the list of actions above. Once the project manager knows that the software has a number of high-priority issues, the decision that project manager will have to take looks like this:

If I leave these problems in the software, will that cause more work in the project, or in maintenance? Do I look like my bonus depends on what happens in maintenance?

So, as intimated in the process at the top of the post, you’ll see the quick fixes done – anything that doesn’t affect the ship date – but more fundamental problems will be left alone, or perhaps documented as “nice to haves” for a future version. Anything that requires huge changes, like architectural modification or component rewrites, isn’t going to happen.

If we actually want to get security problems fixed, we have to distribute the importance assigned to security more evenly. It’s no good having security people who think that security is the most important thing ever if they’re not going to be the people making the stuff; conversely, it’s no good having the people who make the thing unaware of security if it really does have some importance associated with it.

Here’s my proposal: it should be the responsibility of the software architect to know security or to know someone who knows security. Security is a requirement of a software system, and it’s the architect’s job to understand what the requirements are, how the software is to implement them and how to make any trade-off needed if the requirements come into conflict. It’s the architect’s job to justify those decisions, and to make them and see them followed throughout development.

That makes the software architect the perfect person to ensure that the relative importance of security versus performance, correctness, responsiveness, user experience and other aspects of the product is both understood and correctly executed in building the software. It promotes (or demotes, depending on your position) software security to its correct position in the firmament: as an aspect of constructing software.


Irresponsible tolerance

Context

@unclebobmartin said:

One of the bad behaviors that destroys projects is “irresponsible tolerance”. Tolerating what you know you should fix.

This triggered a discussion between @phil_nash and myself. As far as this got on the Twitters, we agreed that it’s not necessarily irresponsible to ignore a problem for now as long as what you’re actually doing is deferring the fix until you’ve got time…except that it’s easy for deferral to slip into tacit acceptance as other work comes up. We may even be able to delude ourselves into thinking we still intend to fix that issue “some day”, even though the reality is that will never happen.

My >140char response

Yes, that is easy to do. I’ve done it myself. I’ve even – though not in a number of years – used tolerance of a badly-written component as an excuse to avoid not just cleaning it up, but also doing other useful work on the same component. “Touching that spaghetti code would be too risky, and rewriting it would take too long, so let’s just leave it as it is.”

Since reading the GTD book, I’ve tried a new approach which has, for the most part, been more successful. It’s not exactly a GTD technique, but borrows the spirit. In GTD there’s a two-minute rule: if you think of something you need to do that would take less than two minutes, just do it. If it would take longer, add it to your backlog.

The analogous approach for refusing to tolerate software problems is this: if you see something you think needs fixing, and you have time to fix it now, fix it now. If you do not have time to fix it, write a bug report.

What goes into the bug report?

All of the things a good software architect should be logging as part of their work anyway: a description of the problem, discussion of potential solutions, choice of solution and justification of that solution. So if there’s some ugly class that needs rewriting, explain why it’s ugly. Describe what would be better, and why.

What do I get from this?

In the first instance, the act of describing what it is that you dislike about the current code often makes it easier to see that the fix actually wouldn’t take too long. So that really disgusting class is full of long methods: what’s three minutes with the “extract method” tool between friends?
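
As a minimal sketch of what I mean (the class and method names here are hypothetical, not from any real project), the three-minute fix is often just giving each job in a long method its own name:

// Before: -handleDownloadedData: was one long method that parsed the response
// and then updated the user interface inline. After "extract method", the
// original method reads as a summary of the steps:
- (void)handleDownloadedData:(NSData *)data {
  NSDictionary *record = [self recordFromDownloadedData: data]; // extracted parsing code
  [self updateInterfaceWithRecord: record];                     // extracted UI code
}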

Oftentimes the solution will still be too big to work on right now. So hit the “report” button, and get the bug report into the tracker (or the backlog, or icebox, or whatever this week’s cool term is. I can no longer keep up; I’m in my thirties now). You know how they say a problem shared is a problem halved? It’s crap. A problem shared is a problem everyone is burdened with, so there are more people to go “oh crap, yeah, I hate that too”. Maybe one (or more) of you has the time to spend a day or two sorting the issue out, or is willing to make time. Maybe someone else knows enough about that code to propose a better alternative.

Even if not, the whole team can no longer ignore the issue: every time someone looks at the outstanding issues, there’s your problem, reminding everyone not to tolerate it. It’s harder to say “oh yeah, I thought about fixing that once but I didn’t have the time” if every time you read the bug list you are forced to think about it again. One of these days you, or someone else, will have time to fix it, and so will have to either do that or think of a convincing excuse to shelve it again, then explain at the next bug review why the issue is still there. If you wrote a good justification for why the proposed solution would be better, shelving it looks like actively trying to avoid making a better product.

Depending on how you track bugs, you may have an additional benefit: the ability to link your complaint to other issues. So maybe (and this is a real-world example from my experience), the problem is that a class for reading files has a hard-coded list of search paths. Then a request comes in saying that an extra filesystem has been provided by IT, and they want to put some of the files into a location on this new filesystem. Link them. Someone will be assigned the user request that’s been prioritised as a business issue, and when they do they’ll see your report that a good way to fix the problem would also clean up the product, so they can do both at the same time. If the issue is linked to enough problems in the product, it becomes clear that addressing the underlying issue will benefit the customers and the work will be scheduled. Then you really have no excuse for not finding the time to fix it: it’s your job.

So this is a silver bullet, is it?

No. It’s worked well for me, but it’s not foolproof. Some projects I’ve worked on have suffered from a form of bug tracker malaise, where the backlog is so great that it’s easy to ignore the vast number of open issues—some of which are no longer relevant—meaning that adding another straw to the haystack isn’t going to help anyone. That’s an extreme position for a project to get into; it’s basically a slippery-slope version of Uncle Bob’s “irresponsible tolerance”, where even problems being reported by customers can be tolerated. In those cases, a special injection of enthusiasm into the development team is required: the whole product is already on a death march.

For most projects, though, reporting an issue is a good way to avoid ignoring that issue.


On standards in free software engineering

I have previously written on the economics of software insecurity, and I quote a couple of paragraphs from that post below:

One option that is not fully explored in the book, but which I believe could be worth exploring, is this: development of critical infrastructure software could be taken away from the free market.

Now the size of even the U.S. government IT budget probably isn’t sufficient to completely fund a bunch of infrastructure developers, but there are other options. Rice correctly notes the existence of not-for-profit software development organisations (particularly the Open Source Initiative and Free Software Foundation), and discusses the benefits and drawbacks of the open source model as it applies to commercial software. He does not explore the possibility that charity development organisations could withdraw from market competition, and focus on engineering practice, quality and security without feature parity or first-to-market speed.

Today I was re-reading Free Software, Free Society by Richard Stallman, a collection of his essays and speeches on topics including copyleft, the GNU system and the General Public Licence. In thinking about this book, I went wandering back to the idea of a non-commercial driver for good-quality software.

I am now convinced that the Free Software Foundation should be investigating, researching and promoting standards, practices and quality in software construction.

The principal immediate benefit the FSF would gain is in terms of visibility and support. Everywhere that software is used—public, private and academic sectors—organisations are interested in finding out ways to improve quality, reliability, deliverability: in other words, the success of their software. An entity that could offer to evaluate and report on whether particular techniques are feasible and offer improvement—in return for funding and staffing the production of their sought-after free software—would be welcomed and would be put to good use.

The FSF is well-placed to achieve this goal, because all of its output is copyleft. A large problem with analysing the success or otherwise of development practices is that the outputs are proprietary: not just the code, but the project documentation, meeting minutes and so forth. With an FSF project everything is (or should be) freely available so inspecting how a project was run, what the developers did and—crucially—whether users are happy with the end result is much easier. Conclusions should be reproducible because everybody can see everything that went on.

Notice that in this scheme, relationships between the FSF and other (proprietary, open source, whatever) organisations are mutually beneficial, not antagonistic as is often either actually the case or just assumed. The benefits seen by external parties are the improvements in process and technique; benefits that all developers can make use of. The discussion moves away from free vs. fettered, and becomes making the field of software engineering better for everyone.

Incidentally, such a focus would also put free software at the forefront of discussions on software quality and deliverability. This would be something of a coup for free software, which is often associated with chaotic management, lack of road maps, and paucity of documentation and support. OK, the FSF wants people to consider freedom as a value in itself, but there’s nothing wrong with ensuring that free software is the best software too, surely?


On the economics of software insecurity

This post is mainly motivated by having read Geekonomics: the real cost of insecure software, by David Rice. Since writing the book Rice has apparently been hired by Apple, though his bio at the Geekonomics site doesn’t mention that (nor his LinkedIn profile).

Geekonomics is a thoroughly interesting read. It’s evidently designed as a call to arms for users to demand better security, and as a result resorts to hyperbole in parts. You are a crash test dummy for software manufacturers and are paying extravagantly for the privilege. In this way it reads as if it is to security as The Inmates are Running the Asylum was to user experience in 1999: do you realise just how shoddy all of this software you use is?

That said, once you actually dig into Rice’s arguments, the hyperbole disappears and the book becomes well-sourced, internally consistent and rational. He explains why the market forces in the software industry don’t lead to security (or even high quality) as either the primary customer requirement or the key focus of producers.

Interestingly, while Geekonomics only incidentally touches on the role of security researchers in the software economy, their position is roughly consistent with the one I outlined in On Securing Lion: they are in it to get money (and sometimes fame) from selling either the vulnerabilities they discover, or their skill at finding vulnerabilities.

The book ends by describing the different options a curated free market like the US market has for correcting the situation where market forces lead to socially undesirable outcomes: these options are redress via contract law, via tort law or via strict liability legislation. The impact of each of the above on the software industry is estimated.

One option that is not fully explored in the book, but which I believe could be worth exploring, is this: development of critical infrastructure software could be taken away from the free market.

Now the size of even the U.S. government IT budget probably isn’t sufficient to completely fund a bunch of infrastructure developers, but there are other options. Rice correctly notes the existence of not-for-profit software development organisations (particularly the Open Source Initiative and Free Software Foundation), and discusses the benefits and drawbacks of the open source model as it applies to commercial software. He does not explore the possibility that charity development organisations could withdraw from market competition, and focus on engineering practice, quality and security without feature parity or first-to-market speed.

Governments, trade groups, communications carriers and other organisations with an interest in using software as infrastructure (e.g. so-called “cloud” companies) could fund non-profits (maybe with money, maybe with staff) that develop infrastructure-grade software. Those non-profits would have a mission to do quality-centric development, and would put confidentiality, integrity, availability, reliability and correctness before feature richness or novelty. Their governance (the bit I haven’t fully thought through, admittedly) would be organised to promote and reward exactly that approach to development.

The software, its documentation and its engineering methodologies would be open, so that commercial software can take advantage of its advances at low cost. This is partly important because, where security is a “hygiene factor” to software purchasers, the “security gap” between infrastructure-grade and commercial-grade software would become clear and would artificially introduce infrastructure-grade robustness to the marketplace. Commercial vendors who could cheaply pick up parts of the infrastructure-grade software for their own products would be, in a self-interested manner, bringing that software’s quality into the commercial marketplace and making it a competition point.

“But,” some people say, “such software would be feature-poor. Why would anyone choose [SafeOS, SafeWebServer, SafeSmartPhone, whatever] over a feature-rich commercial offering?” The point is that in infrastructure, correct function is more important than features. It’s only the fact that software exists purely in a competitive world that means the focus is on features.

Case in point: one analogy used throughout Geekonomics is that infrastructure software is like cement (actually, in a book I’m currently writing on software testing, I make the same analogy, though relating to design rather than function). Well even taking into account innovations like Portland cement, the feature list of cement hasn’t changed in thousands of years. It sticks aggregate together to make concrete or mortar. It’s only the quality of its stick-aggregate-together-ness that has changed.

In relation to software, most computers are still “stuck together” using RFC791 (Internet Protocol version 4), which was documented in 1981 but was in use already at that time. The main advantages of RFC2460 (Internet Protocol version 6), written in 1998, are increased address space and reduced overhead. It’s better at stick-computers-together-ness, but doesn’t really do anything new. There may have been new applications of networks recently (and of course, the late addition of confidentiality in the mid-1990s), but networking itself doesn’t frequently need new features.

Or even operating system software. The last version of Mac OS X that added any features for its users was version 10.5; said new features were:

  • Time Machine: computers have been doing backup for years, this added a new UI.
  • Spaces: an improvement on the ability to draw windows on the screen.
  • Back to my Mac: an integration of existing capabilities (VNC and wide-area zeroconf networking).
  • Boot Camp: managing partitions, and giving the primary bootloader compatibility with a 1983 computer standard.

All of the other enhancements were in the applications, which still all require the same things of the OS: schedule processes, protect memory, abstract the file systems, manage devices. Again, there may have been new applications of an operating system, but there hasn’t been much newness in the operating system itself.

The part where such non-profit infrastructure software becomes tricky is in integrating with the rest of the “stack”. On the hardware side, it would be inappropriate to require that a government-sponsored and not-for-profit software project run on proprietary hardware. On the other hand, it might be inappropriate to disbar deployment on proprietary hardware—but is infrastructure-grade software on commercial-grade hardware still an infrastructure-grade deployment?

That’s particularly difficult in our world—the world of smartphones—because there isn’t really any open hardware. There are somewhat open definitions: the Android Compatibility Definition Document for example. But as Ken Thompson taught us: in a trusted system, we need to question who we trust and why.

Going the other way, of course, is much easier. Anyone could write an application that interoperates with infrastructure-grade software, or a system partially constructed out of such software. But the same question would still exist: how much of an impact do the non-infrastructure-grade components have on the reliability of the system?


These things are hard

Mike Lee recently wrote about his feelings on seeing those classic pictures from the American space program, in which the earth appears as a small blue marble set against the backdrop of space. His concluding paragraph:

Life has its waves. There are ups and downs. My not insubstantial gut and my lucky stars both are telling me 2012 is going to be an upswell. Let us do as we do where I grew up and catch that wave. Put aside your fear and cynicism. The future is ours to create. The system is ours to debug and refactor.

For me, the most defining and inspiring moment of the whole space program predates the Space Shuttle, the moon landing, even the Gemini program. It is the words of a politician, driving his country to one of its greatest peaks of innovation and discovery. The whole speech is worth watching, but this sentence encapsulates the sentiment perfectly.

We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.

So let us create the future. It is not going to be easy, but we shall choose to do it because it is hard.


On explaining stuff to people

An article that recently made the rounds, though it was written back in September, is called Apple’s Idioten Vektor. It’s a discussion of how the CCCrypt() function in Apple’s CommonCrypto library, when used in its default cipher block chaining mode, treats the IV (Initialization Vector) parameter as optional. If you don’t supply an IV, it provides its own IV of 0x0.

Professional Cocoa Application Security also covers CommonCrypto, CBC mode, and the Initialization Vector. Pages 79-88 discuss block encryption. The section includes sample code for both one-shot and staged use of the API. It explains how to set the IV using a random number generator, and why this should be done.[1] Mercifully, when the author of the above blog post reviewed the code in my book section, he decided I was doing it correctly.
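
If you don’t have the book to hand, here’s a minimal sketch of the idea (this is not the book’s listing, and the function and variable names are made up for illustration): generate a random IV with SecRandomCopyBytes() and pass it to CCCrypt() rather than leaving that parameter NULL, so that encrypting the same plaintext twice doesn’t produce the same ciphertext. Key handling and error reporting are elided.

#import <CommonCrypto/CommonCryptor.h>
#import <Security/SecRandom.h>

// Sketch: one-shot AES-128 CBC encryption with an explicit, random IV.
// keyData is assumed to be a 16-byte key obtained elsewhere.
NSData *EncryptedDataWithRandomIV(NSData *plaintext, NSData *keyData, NSData **ivOut)
{
  uint8_t iv[kCCBlockSizeAES128];
  if (SecRandomCopyBytes(kSecRandomDefault, sizeof(iv), iv) != 0) return nil;

  size_t outputLength = [plaintext length] + kCCBlockSizeAES128;
  NSMutableData *ciphertext = [NSMutableData dataWithLength: outputLength];
  size_t movedBytes = 0;
  CCCryptorStatus status = CCCrypt(kCCEncrypt,
                                   kCCAlgorithmAES128,
                                   kCCOptionPKCS7Padding,
                                   [keyData bytes], kCCKeySizeAES128,
                                   iv,                       // never NULL: NULL means an all-zeroes IV
                                   [plaintext bytes], [plaintext length],
                                   [ciphertext mutableBytes], outputLength,
                                   &movedBytes);
  if (status != kCCSuccess) return nil;
  [ciphertext setLength: movedBytes];
  // The IV is not secret, but the decrypter needs it, so hand it back to store alongside the ciphertext.
  if (ivOut) *ivOut = [NSData dataWithBytes: iv length: sizeof(iv)];
  return ciphertext;
}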

So both publications cover the same content. There’s a clear difference in presentation technique, though. I realise that the blog post is categorised as a “rant” by the author, and that I’m about to be the pot that calls the kettle black. However, I do not believe that the attitude taken in the post—I won’t describe it, you can read it—is constructive. Calling people out is not cool; helping them get things correct is. Laughing at the “fail” is not something that endears people to us, and let’s face it, security people could definitely be more endearing. We have a difficult challenge: we ask developers to do more work to bring their products to market, and to spend more money on engineering (and often consultants), in return for potentially preventing some unquantified future loss of revenue and customer hardship.

Yes, there is a large technical component in doing that stuff, but solving the above challenge also depends very strongly on relationship management. Security experts need to demonstrate that we’re all on the same side; that we want to work with the rest of the software industry to help make better software. Again, a challenge arises: a lot of the help provided by security engineers comes in the form of pointing out mistakes. But we shouldn’t be self-promoting douchebags about it. Perhaps we’re going about it wrong. I always strive to help the developers I work with by identifying and discussing the potential mistakes before they happen. Then there’s less friction: “we’re going to do this right” is a much more palatable story than “you did this wrong”.

On the other hand, the Idioten Vektor approach generated a load of discussion and coverage, while only a couple of thousand people ever read Professional Cocoa Application Security. So there’s clearly something in the sensationalist approach too. Perhaps it’s me that doesn’t get it.

[1] Note that the book was written while iPhone OS 3 was the current version, which is why the file protection options are not discussed. If I were covering the same topic today I would recommend eschewing CommonCrypto for all but the most specialised of purposes, and would suggest setting an appropriate file protection level instead. The book also didn’t put encryption into the broader context of cryptographic protocols; a mistake I have since rectified.
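
As a minimal sketch of that file-protection approach (assuming iOS 4 or later and a device passcode; the path and data variables are hypothetical): create the file with the NSFileProtectionComplete attribute, and the system keeps it encrypted on disk whenever the device is locked.

// Sketch: let the OS encrypt the file at rest instead of hand-rolling encryption.
NSDictionary *attributes = [NSDictionary dictionaryWithObject: NSFileProtectionComplete
                                                       forKey: NSFileProtectionKey];
BOOL written = [[NSFileManager defaultManager] createFileAtPath: pathToSensitiveFile
                                                        contents: sensitiveData
                                                      attributes: attributes];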


On SSL Pinning for Cocoa [Touch]

Moxie Marlinspike, recently-acquired security boffin at Twitter, blogged about SSL pinning. The summary is that relying on the CA trust model to validate SSL certificates introduces some risk into using an app – there are hundreds of trusted roots in an operating system like iOS, and you don’t necessarily want to trust all (or even any) of the keyholders. Where you’re connecting to a specific server under your control, you don’t need anyone else to tell you the server’s identity: you know what server you need to use, so you should just look for its certificate. Then it doesn’t matter if someone compromises any CA; you’re not trusting the CAs any more. He calls this SSL pinning, and it’s something I’ve recommended to Fuzzy Aliens clients over the past year. I thought it’d be good to dig into how you do SSL pinning on Mac OS X and iOS.

The first thing you need to do is to tell Foundation not to evaluate the server certificate itself, but to pass the certificate to you for checking. You do this by telling the NSURLConnection that its delegate can authenticate in the “server trust” protection space.

-(BOOL)connection:(NSURLConnection *)connection canAuthenticateAgainstProtectionSpace:(NSURLProtectionSpace *)space {
  return [[space authenticationMethod] isEqualToString: NSURLAuthenticationMethodServerTrust];
}

Now your NSURLConnection delegate will receive an authentication challenge when the SSL connection is negotiated. In this authentication challenge, you evaluate the server trust to discover the certificate chain, then look for your certificate on the chain. Because you know exactly what certificate you’re looking for, you can do a bytewise comparison and don’t need to do anything like checking the common name or extracting the fingerprint: it either is your certificate or it isn’t. In the case below, I look only at the leaf certificate, and I assume that the app has a copy of the server’s cert in the sealed app bundle at MyApp.app/Contents/Resources/servername.example.com.cer.


- (void)connection:(NSURLConnection *)connection didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge {
  if ([[[challenge protectionSpace] authenticationMethod] isEqualToString: NSURLAuthenticationMethodServerTrust]) {
    // Evaluate the trust object so that its certificate chain is populated.
    SecTrustRef serverTrust = [[challenge protectionSpace] serverTrust];
    (void) SecTrustEvaluate(serverTrust, NULL);
    // Load the pinned copy of the certificate from the app bundle; serverName
    // is assumed to hold the host name of the server you expect to talk to.
    NSData *localCertificateData = [NSData dataWithContentsOfFile: [[NSBundle mainBundle]
                                                                    pathForResource: serverName
                                                                    ofType: @"cer"]];
    // Get the leaf certificate the server actually presented...
    SecCertificateRef remoteVersionOfServerCertificate = SecTrustGetCertificateAtIndex(serverTrust, 0);
    CFDataRef remoteCertificateData = SecCertificateCopyData(remoteVersionOfServerCertificate);
    // ...and compare it byte-for-byte with the pinned copy.
    BOOL certificatesAreTheSame = [localCertificateData isEqualToData: (__bridge NSData *)remoteCertificateData];
    CFRelease(remoteCertificateData);
    if (certificatesAreTheSame) {
      // It's our server: accept the connection, using the evaluated trust object as the credential.
      [[challenge sender] useCredential: [NSURLCredential credentialForTrust: serverTrust] forAuthenticationChallenge: challenge];
    }
    else {
      // It isn't our certificate: refuse to talk to whatever is at the other end.
      [[challenge sender] cancelAuthenticationChallenge: challenge];
    }
  }
  // fall through for challenges in other protection spaces - or respond to them if you need to
}

That’s really all there is to it. You may want to change some of the details depending on your organisation: for example you may issue your own intermediate or root certificate and check for its presence while allowing the leaf certificate to vary; however the point is to get away from the SSL certificate authority trust model so I haven’t shown that here.


A bunch of monkeys with typewriters

As with many of the posts in this blog, this one originally started as a tweet that got too long. With the launch of Path 2, a conversation about Atos ditching email for social media and Yammer posting a video of how their enterprise social network is used at O2, I’ve been thinking about how I’d design the social network for business users.

The TL;DR (or executive summary, if that’s your thing) is “don’t be a lazy arse, read the whole post”.

CSCW

Computer-Supported Collaborative Working. A term that has been around for a few decades, and means pretty much what it says: finding ways to use computers to support people working together. Ideas from CSCW pervade the modern requirements engineering discipline; particularly strong is the notion that the software and the system in which it’s used are related. Change the software and you’ll change the way people work together. Change the people and you’ll change the way they work together too, including how they use the software.

When I finished being an undergraduate and became an actual person, real-world CSCW was synonymous with “groupware” and basically meant an email service with integrated calendars. Wikis existed, and some companies were using them. Trouble ticket systems like req and rt did exist too. All of these tools were completely separate. Most of them, if they needed to tell you something, would send you an email.

Ah, email. The tool that changed “read your memos at 4:30 pm” into a permanent compulsion. How that big red badge on your mail icon taunts you, with its message that there are now THREE REALLY IMPORTANT THINGS you’re ignoring. Only one’s a message from facilities telling you that the wastepaper baskets have been moved, and while the other two are indeed task-related they carry no new information (“I agree we should touch base to talk around this action”). Originally the red badge only mocked me while I was in the office, but then I was issued a smartphone.

Email: the tool that makes conversations in other parts of the company completely undiscoverable. You can no longer get a feel for what’s going on by hanging around the water cooler or the smokers’ shelter, because those discussions are being held over email. Email that you can only read if you’re in on the joke. If you’re supposed to be in on the joke but the author addresses her missive to “j.smith3” instead of “j.smith1”, you’re shit out of luck. Email where you have to look at whether you were CCd or BCCd before weighing in with your tuppen’orth.

We’ve come a long way baby

OK, since those dark days we’ve learned a lot, right? There’s Twitter, and Facebook, and Myspace (yes, it is still a thing), and LinkedIn and Path and Tumblr and Glassboard and…

Lots of these things have good ideas, and lots have not so good ideas. Most of them are designed to be very general things that people in many contexts would use: what would a (by which I mean “my”) business-centric comms tool designed today look like?

The features

Completely replaces internal e-mail for everyone

This is non-negotiable. No more multiple channels of communication (I currently have open: Twitter, Outlook, Skype and IRC. And I’m writing a blog post. And I have a phone. And I get Path push notifications.). No more trying to match up threads that have come in and out of various applications. If the CEO wants to leave a message for the CTO, he does it over the social network, just as when one of the cleaners wants to talk to another.

Pull, don’t push

I will say this nice and loud: I don’t care when your shitty app thinks I need to read my messages. I have work to do, let me get on with it without the guilt of the red badge. Support an OmniFocus-style view where I can see what’s come in, what’s outstanding, and what I think is important or urgent. But let me review that on my own terms. I don’t care whether the personal assistant of the head of some other department thinks this message is double-exclamation-mark-urgent; I’ll decide. Let me set up notifications on things I think important to be notified about.

Everything is a citable object

Twitter gets this correct, Facebook does not. Except that Twitter doesn’t really, because I can’t write long-form posts or styled posts, or put images/video/audio/etc. in Tweets; I can just pretend using URLs. But it does allow you to treat a reply as a first-class object: it’s just a tweet that has a link to a previous tweet. In Facebook, comments are special things that are attached to other things but don’t really have an existence of their own.

Facebook also decides that different ways of communicating with someone are, for some reason, different things. Notes appear in the “notes” section, updates on the wall (including updates that are links to notes), photos in photos, music in…you get the idea. I say the important things are who said what to whom, and in what context. If I want to write a long-form post about a comment that was a reply to a photo posted in response to a customer service request, I should be able to do that. That implies that our hypothetical social network would be a platform that can support applications like request tracking, bug reporting, CRM and whatever else it is that people in a big company do. Which brings me on to the next two features.

I need to be able to discover people

People (indeed any primates) naturally organise into smallish close groups, in which they understand the dyads (what each person thinks about the others) quite well. They then know a little bit about what the other groups are, but not necessarily about the relationships inside those groups. Current social network tools don’t really take advantage of that: and no, Google+ circles are not this.

Companies, in fact, do not take this into account. Hierarchical management structures lead to an easy identification of “us” and “them”, so that whenever someone in a different group causes “us” to re-think what we’re doing, we assume that “they” don’t get what “we” are all about. We’re all trying to make the same shareholders rich (or change the world, or gain a pension; whyever it is you do what you do) and we should all be on the same side.

If I’m looking for someone in Legal who knows about open source licenses, the tool should support that. Furthermore, the tool should show me how their circle overlaps with mine, so I can see who to go to (and who to avoid) for an introduction if that’s the best way to proceed. The chances are that in a company with thousands of employees I don’t know who I’m looking for, but someone I know does know. Furthermore, the tool should be able to use sentiment analysis to tell whether the people I’m going to talk to are likely to be on my side or not. This should help combat the “us vs. them” mentality described above, because I can go into my new meeting knowing where I’m similar to my new contact.

[An early draft of the section you’ve just read was titled “It isn’t who you know, it’s who you know who you know knows” but that wasn’t really the name of a feature.]

There are no silos or Chinese walls: everyone can see (and react to) everything.

This has a few benefits. Most importantly, this is a time where valuable advances are being made not by gathering information – which we’ve been doing for decades – but by finding new ways of combining, organising and analysing the data that we’ve already got. Which makes it crazy that most companies stop people from taking a cross-functional view of the things going on in that company. It’s maddening, it costs money, and it needs to stop. If I’m doing some research into, for example, fixing some bug in a product and can find out that Jim on the team downstairs has recently been fixing a related bug on his team’s product, that’s useful. I should be able to see that: we need a platform that supports it.

The second benefit is to remove another barrier that reinforces the “us vs. them” separation of departments. No, sales aren’t hiding anything from you: it’s all available.

Next, if everyone knows that anything they say in “new email” is readable by everyone else in the company, that also helps to increase the collaborative nature of interactions because you can’t hide behind the almost-private nature of email.

Path has an interesting feature in this regard: for each event you can see, you can also see who has seen that event. It’s a useful reminder that what you write is being read, and by whom: once you’ve seen your head of department pop up a few times, you probably remember to make everything work-safe. It’s almost an implementation of the “your mum can read what you write” reminder I wanted to add to facebook.

Now I can hear my security colleagues sharpening their ePencils to write the “waah, insider threat” comment. My question is this: so? Insider threats are an HR problem, so rather than making everybody suffer by carving up the company, consider giving the HR department the tools they need to detect and react to the problem. Such as, I don’t know, a tool that provides a view on what company data people are interacting with along with sentiment analysis…

There are some things that genuinely do need to have restricted access: customer data, employee personal information and the like. These should be exceptions, not the norm, and treated exceptionally. Similarly, contractors and other suppliers who probably should be given access to the platform can have need-to-know access. Again, that doesn’t mean that all your permies do. In fact, show your permies that you trust them with the ability to see what’s going on across the whole company, and you’ll probably end up with happier and more motivated permies.

All of this stuff about tearing down the walls doesn’t stop you allowing the employees to organise into groups on the platform. But such groups should be self-organising, transient, and transparent: just because I didn’t join, that doesn’t mean I shouldn’t be able to join in.

Smart searches

Final thing: if you build everything above, you’re plugging everyone in to the company firehose. Make it easy for people to find what they’re after, and to review when new matching results become available.


Mac App Sandboxing: it may not be for you (but that’s probably OK)

The MAS section of devforums is, along with a healthy subsection of the rest of the interwebs, aflame with the news that the deadline for sandboxing store-delivered apps is further away than it used to be, but still too close for some people.

What’s the deal?

Bugs have been getting easier to detect over time, as the tools used to create software have got better. Anyone who remembers using a microcomputer BASIC interpreter will be familiar with the syntax error, where you’re allowed to enter nonsensical commands and the program runs anyway, stopping unceremoniously when it reaches the unparseable input. Today, syntax errors can be detected and even automatically corrected in the IDE’s text editor.

A program made of 100% acceptable syntax can still have logic errors, which most people would know as “bugs”. The app is supposed to do one thing, but instead does another. Software testing practices are designed to catch logic errors, and code analysis can detect some of these problems too.

With a little bit of hand-waving, we can describe the next level of complexity as the security error or vulnerability. At this level, an application that both compiles and does what the user expects can be made to do something unexpected under misuse. In other words, logic errors are “will it work”, security errors are “can it be made not to work”. Automatic detection and correction of security errors is, in many situations, less well-advanced than detection of other types of error. Techniques for designing and coding security errors out during software development are still very immature.

A little more hand-waving and we get to the final, and most insidious, level of error I will consider here: the blended threat. This can be defined as “can this application be used in conjunction with other applications to do something unexpected”. An example: Safari for Windows suffered a vulnerability called a “carpet bombing” attack, where the author of a web page could cause the browser to download a file without any user intervention. Not that big of a deal: the browser is supposed to allow downloads; this is a little unexpected but not outside the security model of an application.

However, Safari also has that “automatically open safe files after downloading” feature, and this is where the fun begins. Having downloaded, say, a document file, the browser automatically opens it. Now, what if that document is a PDF, and exploits a vulnerability in Acrobat Reader to execute code with the user’s privileges? A local code execution problem like this might be considered a medium-severity issue by the Reader developers: it requires a user to be coerced or tricked into opening a malicious file, doesn’t it?

Well, no. The combination of the low-severity automatic download bug, the automatic open feature and the medium-severity local code execution bug produce a high-severity remote code execution bug.

Detecting and reacting to this kind of “blended threat” is harder than dealing with any of the aforementioned classes of failure. It relies on knowing the interaction between your own app and the vast number of other apps that exist on the platform, understanding the capabilities and bugs of those other apps and how those interact with the capabilities and bugs in your own. It could, in principle, be done, but currently cannot.

Enter sandboxing.

Fundamentally, the problem is that the expression of different roles in an operating system like Mac OS X is incomplete. I’ll not go into this again: my post on the new Lion security things covers this ground. The TL;DR is that Mac OS X, Windows NT and friends all assume that the actors in a computer system are the users, and ignore the fact that the code represents a collection of actors too.

OK, so a user is allowed to download a document, to read it, and to delete all of her documents. But does that mean that a collection of apps—each one notionally a separate actor independent from the user—should conspire together to enable this? Probably not; or if so, only over a well-defined inter-process communication system that doesn’t allow this sort of thing to happen implicitly. Creating an Automator action to get some content from a PDF on a web server and delete some documents as a result might be something the OS needs to support, but one click in a browser probably shouldn’t make all of that stuff happen automatically.

App sandboxing introduces two important features: the identification of different apps as different actors with their own privileges on the system is one, and (partially voluntary, on the developers’ parts) restrictions on the ways apps communicate is the other. This restriction is implemented both as a limitation on the IPC mechanisms an app can make use of, and on the files the app may access: the filesystem is an app’s version of sneaker net.

With these restrictions in place, the opportunities for performing a blended-threat attack are severely reduced. If one app is compromised it either doesn’t have permission to contact the apps that the threat relies on, or it tries to contact apps that don’t have permission to talk to it.

My app doesn’t fit the sandbox limitations.

Complaints of that nature on iOS, where the limitations have been in place both for ages and from the beginning of the platform’s support for third-party code, have pretty much died out now. Developers have become used to the idea that if an app cannot fit the limitations of the sandboxed environment, then it cannot be distributed on the platform.

There are two issues that limit the applicability of the “suck it up” solution for Mac apps. The first, and weakest, is that there are existing apps which don’t fit the mould. This is weak because on the Mac there are other distribution mechanisms for getting the app into the customers’ hands; mechanisms that have existed and worked for years, and that don’t require the same controls enforced by the Mac App Store policy. Yes, it would be better if every app were sandboxed, but having any app sandboxed reduces the attack surface of the Mac so a less than 100% hit rate is still a win.

Of course, some apps are only incompatible due to minor technicalities, such as legacy decisions over where to locate user files. In these cases it’s usually fairly painless to migrate to a situation where the app is sandbox-compatible, for example by using the mediated file read permission to load documents from the legacy storage area.
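
Here’s a minimal sketch of that sort of migration (the variable names are hypothetical): under the sandbox, NSOpenPanel is brokered by the powerbox, so letting the user pick the document in its legacy location grants the app read access to that one file, given the user-selected file entitlement, rather than needing blanket filesystem access.

// Sketch: gaining mediated read access to a document outside the app's container.
NSOpenPanel *panel = [NSOpenPanel openPanel];
[panel setDirectoryURL: [NSURL fileURLWithPath: legacyDocumentsFolder]]; // hypothetical legacy location
[panel beginWithCompletionHandler: ^(NSInteger result) {
  if (result == NSFileHandlingPanelOKButton) {
    NSURL *documentURL = [[panel URLs] lastObject];
    // The sandbox now permits reading this URL because the user chose it.
    NSData *documentData = [NSData dataWithContentsOfURL: documentURL];
    // ...migrate the document into the app's own container...
  }
}];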

Other apps are basically never going to work in a sandbox: utility software is a common casualty, especially things like anti-virus apps, disk partition editors, and file managers. IDEs are also likely to fall down here (though one that makes good use of the workspace concept need not). We can conclude that Apple doesn’t want to sell ISV software in those categories on the Mac App Store, so using a different store is the solution.

The other, stronger limitation is that there are applications that are already on the store that work currently, because the store doesn’t enforce sandbox restrictions, but won’t work in the future, because they’re incompatible with the sandbox restrictions. Developers of these apps could migrate to a different store but would be unable to bring their existing customers with them, to give those existing customers updated versions, or even to contact those customers to tell them about the change.

Yes, this is a serious problem for the minority of affected apps. Yes, it is a limitation of the App Store’s capability that results from Apple’s business decisions. Yes, it’s going to be a ball ache for affected developers to deal with. No, it’s not a reason to throw the baby out with the bathwater and give up on sandboxing completely. No, I haven’t provided a solution yet. As it’s a business problem your first port of call should be your business contacts at Apple: this is such a big issue that it deserves a separate post. The short version is that if you’re doing business with Apple (or any other company) you need a business relationship with that company.

OK, so technical solutions. It all depends on the design and architecture of your app. Perhaps there are features that can be removed without destroying the essence of your app, while gaining compatibility with the App Store. Consider the business impact of removing that one feature versus removing your whole app. Also, remember that unless you’ve sold about 40 million copies, there are more potential customers out there than current customers.

Another possibility is that you separate the product into two components, a service and a controlling app. The service part is incompatible with the App Store and distributed separately. The app is available as an additional feature to support easier use of the service. Take a look at the number of apps for controlling MySQL, or Postgres, and you’ll see that having an App Store app as a front-end to an external service is something that’s both supported and viable (well, if it isn’t viable, a lot of companies enjoy not making money).
