Detecting overflows, undefined behaviour and other nasties

You will remember that a previous post discussed what happens when you add one to an integer, and that the answer isn’t always obvious. Indeed, the answer isn’t always defined.

As it happens, there are plenty of weird cases that crop up when working with C and languages like it. You can’t expect a boolean to be YES or NO. You can’t assume that an enum variable only holds values from the enumeration. You can’t assume that you know how long an array is, even if the caller told you. Just as adding one is subtle, so is dividing by minus one.

In each of these cases[*]—and others—what you should actually do is to check that the input to an operation won’t cause a problem before doing the operation:

#include <assert.h>
#include <limits.h>

int safe_add(int n1, int n2) {
  if(n2 > 0) assert(n1 <= INT_MAX - n2); //or throw a floating point exception or otherwise signal not to use the result
  if(n2 < 0) assert(n1 >= INT_MIN - n2);
  return n1 + n2;
}

But who does that? Thankfully, the compiler writers do. Coming up in a future release of clang is a collection of sanitisers that insert runtime checks for the things described above. If you’re the kind of person who writes assertions like the above in your code, you can swap all that for sanitisers enabled in your debug builds. If you’re not the kind of person who writes those assertions, you probably should enable these sanitisers, then go and find out where else you should be adding assertions.

In code that deals with input from other processes, machines or the outside world, you could consider enabling sanitisers even in release builds. They’ll cause your app to report where it encounters overflows, underflows, and other potential security problems. If you don’t think the sanitisers are a good enough option, you should be writing explicit checks for bad data, with application-specific failure behaviour.

So, how does this work? Compiling with the sanitiser options inserts checks of the sort shown above into the compiled code. These checks are evaluated at runtime (sort of; for array bounds checking, the size of the array must be known when compiling but the check is still done at runtime) and the process prints a helpful message if the checked condition fails. Let’s look at an example!

#include <stdio.h>
#include <limits.h>

int main(int argc, char *argv[]) {
	printf("%d + %d = %d\n", INT_MAX, 1, INT_MAX + 1);
	return 0;
}

Compiling that with default settings “works”, but results in undefined behaviour:

clang -o mathfail mathfail.c
./mathfail
2147483647 + 1 = -2147483648

So let’s try to insert some sanity!

clang -fsanitize=integer -o mathfail mathfail.c
./mathfail
./mathfail.c:5:47: runtime error: signed integer overflow: 2147483647 + 1 cannot be represented in type 'int'
2147483647 + 1 = -2147483648

Another example: dividing by zero.

#include <stdio.h>

int main(int argc, char *argv[]) {
	int x=2, y=0;
	printf("x/y = %d\n", x/y);
	return 0;
}

./mathfail.c:5:24: runtime error: division by zero
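Division has a second trap, the one hinted at near the top of this post: INT_MIN divided by minus one overflows, because the true quotient is one more than INT_MAX. The sanitisers catch that case too; if you’d rather check by hand, here’s a minimal guard in the style of safe_add above (safe_divide is my name for it, not a standard function):

#include <assert.h>
#include <limits.h>

int safe_divide(int n1, int n2) {
  assert(n2 != 0);                      //division by zero is undefined
  assert(!(n1 == INT_MIN && n2 == -1)); //the true quotient would be INT_MAX + 1
  return n1 / n2;
}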

I wonder how many of the programs I’ve written in the past would trigger sanity failures with these new tools. I wonder how many of those are still in use.

[*] With the exception of booleans. As Mark explains in the linked post, you can always compare a boolean to 0, which is the only value that means false.
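To see why the comparison direction matters, here’s a minimal sketch (assuming Objective-C, where BOOL is a signed char, NO is 0 and YES is 1):

#import <Foundation/Foundation.h>

int main(void) {
    BOOL flag = 2;             //non-zero, so "true", but not equal to YES
    NSLog(@"%d", flag == YES); //0: 2 != 1, so the "true" value compares false
    NSLog(@"%d", flag != NO);  //1: comparing against 0 always works
    return 0;
}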


An open letter to Xcode

The post below has been filed verbatim as an Apple Developer Tools bug report with ID 13051064.

Dear Xcode,

imagine that you had a combine harvester. Only, this combine harvester, instead of having a hopper into which the winnowed wheat is poured, has a big quern stone. It grinds the wheat into a flour, which is poured into a big mixer with some water and a bit of yeast. The mixture from here then finds its way into a big oven.

While all of this is going on, the other side of the combine harvester is actually a platform hosting some cows and a milking machine. The gathered milk is churned by the same device that turns the quern for grinding the flour.

As you have no doubt concluded yourself, Xcode, such a machine would be a great help in producing bread and butter. But let me tell you to what this parable alludes: Apple’s bread and butter is its systems of electronic devices, first and third party software. You, Xcode, are the souped-up combine harvester; the thing that makes it possible to rapidly turn the ingredients of Apple’s ecosystem into the products that everyone desires. You free both Apple’s people and the people who make things with Apple’s things from the drudgery of threshing and winnowing, and let them concentrate on what they want to make and how people would want to interact with those things. Do some people want artisanal loaves while others want simple pre-sliced packages? Xcode, you help people help both of these people.

Here’s what I think. I think you know this, Xcode. I think you, and the product people, and the developers who make you, all know that you are a key piece of machinery in producing Apple’s bread and butter. I worry though that other people at Apple, particularly some of the managers, do not see this. I think that they see you as a tool for internal use and a small number of external customers; a group that is not the primary focus for Apple’s products. These same people would see the combine harvester not as the greatest labour-saving device of the twentieth century, but as a niche instrument only of interest to combine harvester operators.

I respect you, Xcode. You may not know that, because I joke about you a lot on Twitter. What you have to understand is that for an English man to make a joke about one of his friends, it means that he really respects and has affection for that friend. It’s that respect I have for you, and for the people that make you, that means I think I can tell you what follows and you’ll take it in the intended spirit: as advice, not as an insult.

Xcode, I need you to do a couple of things. One of them you’ll like, one of them you won’t. I’ll start with the one you’ll like: I’m not a fan of the shit sandwich mode of delivering criticism. So here goes. Xcode, I need you and the people who make you to go around the campus at Apple and tell everyone you meet about the combine harvester story. Particularly senior management. Let everyone see that you are not some toy app used by a few edge-case and highly demanding users, but are in fact a critical component in the machinery that makes iPhones what they are, that makes iPads what they are, that makes Macs what they are. Remind the people who’re focused on iBooks Author as a key part of Apple’s education strategy that you help to make iBooks Author. Remind the people who want to build even better iOS releases that you are helping to build those releases. That when someone says “there’s an app for that”, it’s because of you. Help everyone to realise that the better you are, Xcode, the better nearly everything else that Apple does will become.

You’ll find it easier to convince some people than others, I’m sure. I expect that Craig Federighi has strong memories of using you when you were still Project Builder. I expect that Tim Cook may never have launched or even installed you. I don’t promise that telling the combine harvester story will be easy, but I do promise that it will boost your esteem, and that of the people who work on you.

So that was the one I think you’ll like, Xcode. Here is the other one; the one I doubt you’ll like so much. I mentioned Project Builder in the last paragraph: Xcode, I’m sorry to have to tell you that you are no longer Project Builder. How do I mean? I’m certainly not talking about your outward appearance: Project Builder was a quirky adolescent who couldn’t do anywhere near as much as you can. What I’m saying is that you’re no longer the cool startup whose goal is to change the world of developer tools. We’ve tried a whole bunch of different machinery and we’ve settled on the souped-up combine harvester. What we need you to be is a better souped-up combine harvester.

That’s not to say that innovation in developer tools should die completely. There will be new Project Builders. Someone will invent a new way of building software that’s completely out of left field, and plenty of people will find that new way better than the current way. That’ll be really cool. Maybe Apple will buy that company, or license their technology, so that you can have a go with it. That would also be cool.

What I’m saying, Xcode, is that you’re mature and grown up and people respect you for that. Please stop having these mid-life crises like you did at version 4 where you suddenly change how everything is done. Your work now is in incremental enhancement, not in world-changing revolution. People both at Apple and outside have come to expect you to be dependable, reliable and comfortable. You may think that’s boring. It’s not! Remember all of those things that exist because of you, all of those people who are delighted by what you have helped create. Just bear in mind also that when it comes time for Xcode 5, people will want a better Xcode, not a replacement for Xcode.

Think about the apps that are made at Apple now. What could make it a bit faster to make every view? Or make regressions a bit easier to detect and fix? What errors do developers at Apple see, with what frequency? How could you reduce those errors or make them quicker to diagnose? There’s an old story that Steve Jobs wanted the boot time of the Macintosh to be as fast as possible, and he thought about it in terms of the number of lifetimes that would be wasted staring at the boot screen. You may now be thinking about the number of lifetimes spent writing code, but I want you to think bigger than that: think about the exponentially larger number of lifetimes being spent waiting for those apps to ship. That extra month where 100 million people waited for the new iTunes; could a better Xcode have cut that time down?

Listen, Xcode, this is going to sound weird. I mean, you barely know me, but I’m talking like we’re best friends and I’m holding some kind of intervention. But here’s how I want you to see it, and it’s based on the combine harvester story. I don’t know whether you have a bonus or incentive scheme at Apple, but if you do then ask them to make this change. Xcode, your bonus should not be based on shipping Xcode. That would be like paying a combine harvester for harvesting; it completely misses the point. The point of harvesting is to make things like bread. Your bonus should be based on shipping every other software product Apple makes. Maybe even the third-party apps, if you can work out a fair way to measure that.

With more sincerity than this blog usually evinces,

Graham.


Retiring the “Apple developers are insular” meme

There’s an old trope used in discussions of Mac and iOS developers, that says they’re too inward-looking. They only think about software in ways that have been “blessed” by Apple, their platform vendor. I’m pretty sure that I’ve used this meme myself though couldn’t find an example in a short Bing for the topic. It’s now time to put that meme out to pasture (though, please, not out to stud. We don’t want that thing breeding.)

“Apple-supplied” is a broad church

In the time I’ve been using Macs, the system has included: C, C++, Objective-C, five different assemblers, Java, AppleScript, perl, python, ruby (both vanilla and MacRuby), Tcl, bash, csh, JavaScript, LISP and PHP. Perhaps more. Admittedly on the iOS side the options are fewer: but do you know anyone who’s found their way around all of modern C++? You can be a programmer who never leaves the aforementioned collection of languages and yet is familiar with procedural, object-oriented, structured, functional and template programming techniques. There’s no need to learn Haskell just to score developer points.

There is more to heaven and earth

“The community” has actually provided even more options than those listed above. RubyMotion, MonoTouch, MonoMac, PhoneGap/Cordova, wxWidgets, Titanium: these and more provide options for developing for Apple’s platform with third party tools, languages and APIs. To claim that the Apple-based community is insular is to choose an exclusive subset of the community, ignoring all of the developers who, well, aren’t that insular subset. If playing that sort of rhetorical game is acceptable then we aren’t having grown-up discussions. Well, don’t blame me, you started it.

Find out how many iOS apps are built with C#, or Lua, or JavaScript, or Ruby. Now see if you can say with conviction that the community of iOS app developers pays attention to nothing outside the field of Objective-C.

Not everyone need be a generalist

Back when Fred Brooks was writing about the failures of the System/360 project in his book “The Mythical Man-Month” and the article “No Silver Bullet”, he suggested that instead of building armies of programmers to create software, the focus should be on creating small, focussed surgical teams with a limited number of people assuming the roles required. The “surgeon” was played by the “chief programmer”, somewhere between a software architect and a middle manager.

One of the roles on these “chief programmer teams” was the language lawyer. It’s the job of the language lawyer to know the programming language and interfaces inside-out, to suggest smarter or more efficient ways of doing what’s required of the software. They’re also great at knowing what happens at edge-case uses of the language (remember the previous post on the various things that happen when you add one to an integer?) which is great for those last-minute debugging pushes towards the end of a project.

Having language lawyers is a good thing. If some people want to focus on knowing a small area of the field inside-out rather than having broader, but shallower, coverage, that’s a good thing. These are people who can do amazing things with real code on real projects.

It doesn’t help any discussion

Even if the statement were true, and if its truth in some way pointed to a weakness in the field and its practitioners, there are more valuable things to do than to express the statement. We need some internet-age name for the following internet-age rhetorical device:

I believe P is true. I state P. Therefore I have made the world better.

If you think that I haven’t considered some viewpoint and my way of working or interacting with other developers suffers as a result, please show that thing to me. Preferably in a friendly, compelling fashion that explains the value. Telling me I’m blinkered may be true, but is unlikely to change my outlook. Indeed I may be inclined to find that distasteful and stop listening; the “don’t read the comments” meme is predicated on the belief that short, unkind statements are not worth paying attention to.

Conclusion

Absorption of external ideas does exist in our community; claiming that it doesn’t is a fallacy. Not everyone need learn everything about the entirety of software making in order to contribute; claiming that they should is a fallacy. Making either of these claims is in itself not helpful. Therefore there’s no need to continue the “Apple developers are insular” meme, and I shan’t.

If you find exciting ideas from other areas of software development, share them with those who will absorb. Worry not about people who don’t listen, but rather wonder what they know and which parts of that you haven’t discovered yet.


What happens when you add one to an integer?

It depends. You saw in the previous post that there are plenty of different integer types, some with known sizes and some where the size is set by the implementation. Well for each size of integer type there are two main variants: signed and unsigned.

Unsigned numbers are always zero or positive. They’re the easiest ones to understand, and their behaviour is well defined. In almost all cases, adding one to an unsigned integer in C makes that integer bigger by one. The only exceptional case is when the number already represents the maximum value that will fit in its type; adding one to the maximum “overflows” and gets you back to 0.
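Here’s a minimal demonstration in the style of the earlier mathfail example. Unlike its signed counterpart, this program’s behaviour is fully defined: it prints whatever UINT_MAX is on the platform, then 0.

#include <limits.h>
#include <stdio.h>

int main(void) {
	printf("%u + 1 = %u\n", UINT_MAX, UINT_MAX + 1u); //wraps to 0, by definition
	return 0;
}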

Signed integers are tricky. Computers don’t natively handle negative numbers, but signed values can (as the name suggests) be negative. Various conventions have been created to allow support for negative numbers: the most common is to treat one bit of a variable as the “sign” bit (as a note for overly-sensitive nerds: sometimes these conventions are honoured in CPU instructions, and you could say that such computers do natively handle negative numbers). If the sign bit is set, then the number is negative; otherwise it is positive. Some platforms have an extra bit separate from the storage of the number that indicates the sign of the number.

What this means is that if the C language were to specify what happens when a signed integer overflows, some implementations would be able to handle this efficiently but some would not as they’d have to translate the particular platform-specific behaviour into that required by the standard.

The result then of adding one to a signed integer is quite surprising: if it causes the number to overflow, the result is undefined. An implementation is free to do anything (implementers usually choose to do whatever’s most efficient); relying on the behaviour from one implementation means writing unportable code.

As a result of this it’s important to guard against integer overflow in C (and C++ and Objective-C) programs. Typically the unsigned integer types should only be used either as bitmasks, where the value of each bit is important but doesn’t affect interpretation of the other bits, or in situations where the known overflow behaviour is actually desired. In cases where you “know” a number will always be positive, it’s still best to use a signed integer, as that offers the possibility of detecting bugs that end up pushing the value below zero.

As an example, consider a data type in my application that I “know” will always have a count that’s positive and smaller than 200. I could use a uint8_t to represent that, but there are conditions that are erroneous and yet will lead to valid-looking answers. Imagine removing 80 objects from an instance with count 50, or adding 80 objects to an instance with count 180. Because of the overflow behaviour of uint8_t, these problems would leave the result “looking” OK. It would be better to represent this type using int16_t, which both accepts values below 0 and above 200; now the problematic cases described earlier do not overflow, but result in numbers that are within the range that can be represented and can therefore be tested against my application-specific requirements.
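Here’s a sketch of both failure modes from that paragraph (the values 50, 80 and the limit of 200 come from the example above):

#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint8_t count8 = 50;
    count8 -= 80;                //wraps to 226: erroneous, but a valid-looking count
    printf("uint8_t count: %" PRIu8 "\n", count8);

    int16_t count16 = 50;
    count16 -= 80;               //-30: representable, and obviously out of range
    printf("int16_t count: %" PRId16 "\n", count16);
    assert(count16 >= 0 && count16 < 200); //the application-specific check now fires
    return 0;
}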


How big is an integer?

In the beginning, when all was without form and void, Kernighan and Ritchie created char. And they said, “let it be of a size chosen by the compiler, guaranteed to be large enough to hold one character from the execution character set.” And so it was, and they decreed that whatever the size of this char, the compiler would call its size 1.

Right, that’s enough silly voice. There were also other types of integer: short, int, long, long long, and pointers. The point is that on any system, you could find out how big one of these numbers is (using the compiler’s sizeof() feature) but that size depended on the system you were compiling for. Assuming that sizeof(char)==1 is OK, but assuming that sizeof(int)==4 will lead to trouble: it’s 2 on some systems and 8 on some others, for example.
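You can ask your own compiler what it thinks; the answers below vary by platform, and only sizeof(char)==1 is guaranteed:

#include <stdio.h>

int main(void) {
    printf("char: %zu\n", sizeof(char));         //always 1, by decree
    printf("short: %zu\n", sizeof(short));
    printf("int: %zu\n", sizeof(int));
    printf("long: %zu\n", sizeof(long));
    printf("long long: %zu\n", sizeof(long long));
    printf("pointer: %zu\n", sizeof(void *));
    return 0;
}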

C also provides the typedef feature, which lets you give new names to existing types. Plenty of API designers use typedef to rename integer types to give some clue as to their meaning; so you’ll see size_t used to describe the size of something, ptrdiff_t to express the difference between two pointers, and so on.
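For instance, an API might name its integers this way (row_count_t is a made-up name for illustration, not a standard type):

#include <stdio.h>

//a hypothetical domain-specific alias, in the style of size_t and ptrdiff_t
typedef unsigned long row_count_t;

static row_count_t rows_in_table(void) { return 42; }

int main(void) {
    printf("%lu rows\n", rows_in_table()); //callers learn the meaning from the name
    return 0;
}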

Leaving the size of the various types undefined gives plenty of flexibility to implementors. A compiler for a given platform can choose to create ints that are the same size as a CPU register, or the maximum size transferable on the data bus in one load operation. It gives benefits to well-written software, which can be ported to hardware with different data characteristics just by recompiling. It also causes some problems for programmers whose software needs to talk to, well, anything else.

Imagine two computers communicating over a network. One of them wants to send an integer to the other, and the program represents the integer as an int. Well the receiving computer could have a different idea of how big an int is. Maybe the sender puts four bytes onto the network, but the receiver waits forever because it wants eight bytes. Alternatively, maybe the sender delivers eight bytes, the first four of which are incorrectly used as the integer, and the next four remain in the queue to be incorrectly used for the next value required.

The same problem can occur with files, too. Imagine that my app writes an int to disk. My customer then upgrades their computer, and my same app running on a different architecture tries to read the int back in. Does it get the same value? I’ve even seen this problem with two processes on the same computer, where one was a 64-bit kernel talking to a 32-bit user process. [N.B. a related problem is that every process needs to agree on which byte goes where in multi-byte integers; a problem not considered here].
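The usual defence is to stop writing sizeof(int) bytes and pick an explicit size for the data as it crosses the boundary. Here’s a sketch (write_int32 is a made-up helper; as a side effect it also pins down byte order, the related problem just mentioned, by writing the most significant byte first):

#include <stdint.h>
#include <stdio.h>

//write exactly four bytes, whatever sizeof(int) is on this machine
static int write_int32(FILE *f, int32_t value)
{
    uint8_t bytes[4] = {
        (uint8_t)((uint32_t)value >> 24), //most significant byte first
        (uint8_t)((uint32_t)value >> 16),
        (uint8_t)((uint32_t)value >> 8),
        (uint8_t)((uint32_t)value),
    };
    return fwrite(bytes, 1, 4, f) == 4 ? 0 : -1;
}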

Clearly there’s a need for integer types that are of a stable size, guaranteed to remain the same whatever architecture the software is running on. The inttypes.h or stdint.h headers, introduced in C99 (so well over a decade ago), provide these (and more). If the target environment is capable of providing an integer type that uses exactly eight bits, you can access that as int8_t (uint8_t for unsigned integers). Whether or not this is available, the smallest type that holds at least eight bits is called int_least8_t. The integer type that holds at least eight bits and is fastest for the computer to handle can also be used, as int_fast8_t. Standard implementations should provide these types for 8, 16, 32 or 64 bit integers, and may provide types for other sizes too.
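All three flavours can be used, and printed portably, via the PRI… macros that inttypes.h defines alongside the types:

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int8_t exact = 100;       //exactly 8 bits, where the platform provides such a type
    int_least8_t small = 100; //the smallest type of at least 8 bits
    int_fast8_t quick = 100;  //at least 8 bits, fastest for this CPU to handle
    //the PRI macros expand to the right printf conversion for each type
    printf("%" PRId8 " %" PRIdLEAST8 " %" PRIdFAST8 "\n", exact, small, quick);
    return 0;
}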

The point of all of this is that while there are guaranteed-size integer types available, anything that isn’t obviously of a specific size should be treated as if it’s of unknown size. Take, for example, NSInteger. It and the unsigned NSUInteger type were introduced by Apple to provide source code compatibility between 32 and 64-bit Cocoa API code, while also expanding the values used and returned by the API on wider platforms.

This could have been done by keeping the API as it was, and changing the size of int on 64-bit Cocoa from 4 bytes to 8. This would’ve been a poor choice, because plenty of code that assumes (wrongly) that sizeof(int)==4 would have broken. Most other 64-bit environments provide eight byte longs and pointers and four-byte ints, and Apple chose to follow suit for better compatibility.

Instead, NSInteger’s underlying type depends on the architecture you’re compiling for. Currently, all Apple’s 32-bit platforms define it as int, and the 64-bit platforms define it as long. The end result is that while an NSInteger is guaranteed to be big enough to hold the length of an NSArray or an NSString, it isn’t guaranteed to be the same size as someone else’s NSInteger. Some compatibility issues still remain, and failing to deal with them can lead to some subtle bugs that only manifest themselves in particular situations.
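Indeed, the definition in Foundation’s NSObjCRuntime.h boils down to something like this (simplified; the real header has a couple of extra conditions):

#if __LP64__
typedef long NSInteger;           //8 bytes on 64-bit platforms
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;            //4 bytes on 32-bit platforms
typedef unsigned int NSUInteger;
#endif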


Server-side Objective-C

Recently, Kevin Lawler posted an “Informal Technical Note” saying that Apple could clean up on licence sales if only they’d support web backend development. There are only two problems with this argument: it’s flawed, and the precondition probably won’t be met. I’m sure there is an opportunity for server-side programming with Objective-C, but it won’t be met by Apple.

The argument is flawed

The idea is that the community is within a gnat’s crotchet of using ObjC on the web, if only ObjC were slightly better. This represents an opportunity for Apple:

  1. Licensing fees
  2. Sales of Macs for development
  3. Increase share of Objective-C at the expense of Java
  4. Get more devs capable with Objective-C, which is necessary for OSX & iOS development
  5. Developer good will
  6. Steer development on the web

Every one of these "opportunities" seems either inconsequential or unrealistic. Since the dot-com crash, much web server development has been done on free platforms with free tools: LAMP, Java/Scala/Clojure + Tomcat + Spring + Hibernate + Eclipse, Ruby on Rails, Node.js, you name it. The software’s free, you pay for the hardware (directly or otherwise) and the developers. The opportunities for selling licences are few and far between—there are people who will pay good money for good developer tools that save them a good amount of time, but most developers are not those people. The money is made in support and in consultancy. This is why Oracle still exists, and Sun doesn’t.

Of course, Apple already knows this, having turned the $n*10^4-per-license NeXT developer tools into a set of free developer tools.

Speaking of sales, the argument about selling Macs to developers is one that made sense in 2000. When Apple still needed to convince the computer-buying public that the new NeXT-based platform had a future, then selling to technologists and early adopters was definitely a thing. You could make a flaccid but plausible argument that Java and TextMate 1 provided an important boost to the platform. You can’t argue that the same’s true today. Developers already have Macs. Apple is defending their position from what has so far been lacklustre competition; there’s no need for them to chase every sale to picky developers any more.

I’ll sum up the remaining points as not being real opportunities for Apple, and move on. For Objective-C to win, Java does not have to lose (and for that matter, for Apple to win, Objective-C does not have to win; they’ve already moved away from Apple BASIC, Microsoft BASIC, Pascal and C-with-Carbon). Having ObjC backend developers won’t improve the iOS ecosystem any more than Windows 8 has benefitted from all the VB and C# developers in the world. “Developer good will” is a euphemism for “pandering to fickle bloggers”, and I’ve already argued that Apple no longer needs to do that. And Apple already has a strong position in directing the web, due to controlling the majority of clients. Who cares whether they get their HTML from ObjC or COBOL?

It probably won’t happen

Even if Craig Federighi saw that list and did decide Apple’s software division needed a slice of the server pie, it would mean reversing Apple’s 15-year slow exit of the server and services market.

Apple already stopped making servers last year due to a lack of demand. Because OS X is only licensed to run on Apple-badged hardware, even when virtualised, this means there’s no datacenter-friendly way you can run OS X. The Mac Mini server is a brute-force solution: rather than redundant PSUs, you have redundant servers. Rather than lights-out management, you hope some of the redundant servers stay up. Rather than fibre channel-attached storage, you have, well, nothing. And so on.

OS X Server has been steadily declining in both features and quality. The App Store reviews largely coincide with my experience—you can’t even rely on an upgrade from a supported configuration of 10.N (N ≤ 7) to 10.8 to leave your server working properly any more.

Apple have a server product that (barely) lets a group of people in the same building share wikis and calendars. They separately have WebObjects: a web application platform that they haven’t updated in four years and no longer provide support for. One of their biggest internal server deployments is based on WebObjects (with, apparently, an Oracle stack): almost all of their others aren’t. iCloud is run on external services. They internally use J2EE and Spring MVC.

So Apple have phased out their server hardware and software, and the products they do have do not appear to be well-supported. This is consistent with Tim Cook’s repeated statement of “laser focus” on their consumer products; not so much with the idea that Apple is about to ride the Objective-C unicorn into web server town.

But that doesn’t mean it won’t happen

If there is a growth of server-side Objective-C programming, it’s likely to come from people working without, around or even despite Apple. The options as they currently exist:

  • Objective-Cloud is, putting it crudely, Cocoa as a Service. It’s a good solution as it caters to the “I’m an iOS app maker who just needs to do a little server work” market; in the same way that Azure is a good (first-party) platform for Microsoft developers.
  • GNUstepWeb is based on a platform that’s even older than Apple’s WebObjects. My own attempts to use it for web application development have hit a couple of walls: the GNUstep community has not shown interest in accepting my patches; the frameworks need a lot of love to do modern things like AJAX, REST or security; and even with the help of someone at Heroku I couldn’t get Vulcan to build the framework.
  • Using any Objective-C environment such as GNUstep or the Cocotron, you could build something new, or even old-school CGI binaries (there’s a sketch of one after this list).
  • If it were me, I’d fork GNUstep and GSW. I’d choose one deployment platform, one web server, and one database, and I’d support the hell out of that platform only. I’d sell that as a hosted platform with the usual tiered support. The applications needed to do the sales, CRM and so on? Written on that platform. As features are needed, they get added; and the support apps are suitable for turning into the tutorials and sample code that help to reduce the support effort.

    Of course, that’s just me.
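To illustrate the CGI option from that list: a CGI program is just a process that reads the request from environment variables and writes a response to stdout, so the Objective-C version is short. This sketch is hypothetical and does no escaping or error handling:

#import <Foundation/Foundation.h>
#include <stdio.h>

int main(void)
{
    @autoreleasepool {
        //the web server passes request details in environment variables
        NSDictionary *env = [[NSProcessInfo processInfo] environment];
        NSString *query = env[@"QUERY_STRING"] ?: @"(no query)";
        //a CGI response is headers, a blank line, then the body
        printf("Content-Type: text/html\r\n\r\n");
        printf("<html><body><p>You asked for: %s</p></body></html>\n",
               [query UTF8String]);
    }
    return 0;
}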


Can code be “readable”?

Did Isaac Asimov write good stories?

Different people will answer that question in different ways. People who don’t read English and don’t have access to a translation will probably be unable to answer. People who don’t like science fiction on principle (and who haven’t been introduced to his mystery stories) will likely say ‘no’, on principle. Other people will like what he wrote. Some will like some of what he wrote. Others will accept that he did good work but “that it isn’t really for me”.

The answers above are all based on a subjective interpretation, both of Asimov’s work and the question that was asked. You could imagine an interpretation in the form of an appeal to satisfaction: who was the author writing for, and how does the work achieve the aim of satisfying those people? What themes was the author exploring, and how does the work achieve the goal of conveying those themes? These questions were, until the modern rise of literary theory, key ways in which literary criticism analysed texts.

Let us take these ideas and apply them to programming. We find that we demand of our programmers not “can you please write readable code?”, but “can you consider what the themes and audience of this code are, and write in a way that promotes the themes among members of that audience?” The themes are the problems you’re trying to solve, and the constraints on solving them. The audience is, well, it’s the audience; it’s the people who will subsequently have to read and understand the code as a quasi-exclusive collection.

We also find that we can no longer ask the objective-sounding question “did this coder write good code?” Nor can we ask “is this code readable?” Instead, we ask “how does this code convey its themes to its audience?”

In conclusion, then, a sound approach to writing readable code requires author and reader to meet in the middle. The author must decide who will read the code, and how to convey the important information to those readers. The reader must analyse the code in terms of how it satisfies this goal of conveyance, not whether they enjoyed the indentation strategy or dislike dots on principle.

Source code is not software written in a human-readable notation. It’s an essay, written in executable notation.


I published a new book!

Executive summary: it’s called APPropriate Behaviour, head over to the LeanPub site to check it out.

For quite a while, I’ve noticed that posts here are moving away from nuts and bolts code towards questions about evaluating my own performance, working with other developers and the industry in general.

I decided to spend some time working on these and related thoughts, trying to derive some consistent narrative as well as satisfying myself that these ideas were indeed going somewhere. I quickly ended up with about half of a novel-length book.

The other half is coming soon, but in the meantime the book is already published in preview state. To quote from the introduction:

this book is about the things that go into being a programmer that aren’t specifically the programming. It starts fairly close to home, with chapters on development tools, on supporting your own programming needs, and on other “software engineering” practices that programmers should understand and make use of. But by the end of the book we’ll be talking about psychology and metacognition — about understanding how you the programmer function and how to improve that functioning.

As I said, this is currently in very much a preview state—only about half of the content is there, it hasn’t been reviewed, and the thread that runs through it has dropped a few stitches. However, even if you buy the book now you’ll get free updates forever so you’ll get to find out as chapters are added and as changes are made.

At this early stage I’m particularly interested in any feedback readers have. I’ve set up a Glassboard for the book—in the Glassboard app, use invite code XVSSV to join the discussion.

I hope you enjoy APPropriate Behaviour!


Surprising ARC performance characteristics

The project I’m working on at the moment has quite tight performance constraints. It needs to start up quickly, do its work at a particular rate and, being an iOS app, there’s a hard limit on how much RAM can be used.

The team’s got quite friendly with Instruments, watching the time profile, memory allocations, thread switches[*] and storage access trying to discover where we can trade one off in favour of another.

[*] this is a topic for a different post, but “dispatch_async() all the things” is a performance “solution” that brings its own problems.

It was during one of these sessions that I noticed a hot loop in the app was spending a lot of time in a C++ collection type called objc::DenseMap. This is apparently used by objc_retain() and objc_release(), the functions used to implement reference counting when Objective-C code is compiled using ARC.

The loop was implemented using the closure interface, -[NSDictionary enumerateKeysAndObjectsUsingBlock:]. Apparently the arguments to a block are strong references, so each was being retained on entering the block and released on return. Multiply by thousands of objects in the collection and tens of iterations per second, and that was a non-trivial amount of time to spend in memory management functions.

I started to think about other data types in which I could express the same collection—is there something in the C++ standard library I could use?

I ended up using a different interface to the same data type – something proposed by my colleague, Mo. Since Cocoa was released, Foundation data types have been accessible via the CoreFoundation C API[**]. The key difference as far as modern applications are concerned is that the C API uses void * to refer to its content rather than id. As a result, and with appropriate use of bridging casts, ARC doesn’t try to retain and release the objects.

[**]I think that Foundation on OPENSTEP was designed in the same way, but that the C API wasn’t exposed until the OS X 10.0 release.

So this:

[myDictionary enumerateKeysAndObjectsUsingBlock: ^(id key, id object, BOOL *stop) {
  //...
}];

became this:

CFDictionaryRef myCFDictionary = (__bridge CFDictionaryRef)myDictionary;
CFIndex count = CFDictionaryGetCount(myCFDictionary);
const void *keys[count]; //CFDictionaryGetKeysAndValues() takes const void ** parameters
const void *values[count];
CFDictionaryGetKeysAndValues(myCFDictionary, keys, values);

for (CFIndex i = 0; i < count; i++)
{
  __unsafe_unretained id key = (__bridge id)keys[i];
  __unsafe_unretained id value = (__bridge id)values[i];
  //...
}

which turned out to be about 12% faster in this case.

I’ll finish by addressing an open question: when should you consider ditching Foundation/CoreFoundation completely? There are times when it’s appropriate to move away from those data types. Foundation’s adaptive algorithms are very fast a lot of the time, choosing different representations under different conditions – but they aren’t always the best choice.

Considering loops that enumerate over a collection like the loop investigated in this post, a C++ or C structure representation is a good choice if the loop body sends a lot of messages. Hacks like IMP caching can also help, in which this:

for (MyObject *foo in bar)
{
  [foo doThing];
}

becomes this:

#import <objc/runtime.h>

SEL doThingSelector = @selector(doThing);
//cast the IMP to the method's actual function type before calling it
void (*doThingImp)(id, SEL) = (void (*)(id, SEL))class_getMethodImplementation([MyObject class], doThingSelector);

for (MyObject *foo in bar)
{
  doThingImp(foo, doThingSelector);
}

If you’ve got lots (like tens of thousands, or hundreds of thousands) of instances of a class, Objective-C will add a measurable memory impact in the isa pointers (each object contains a pointer to its class), and the look aside table that tracks retain counts. Switching to a different representation can regain that space: in return for losing dynamic dispatch and reference-counted memory management—automatic or otherwise.
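As a sketch of what such a representation might look like, here’s a hypothetical particle system holding its instances in a plain C array of structs rather than as Objective-C objects:

#include <stddef.h>

//each element is just its fields: no isa pointer, no entry in the retain count table
typedef struct {
    float x, y;
    float velocity;
} Particle;

static void update_particles(Particle *particles, size_t count, float dt)
{
    for (size_t i = 0; i < count; i++) {
        particles[i].x += particles[i].velocity * dt; //plain field access, no objc_msgSend
    }
}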


Sideloading content into iOS apps

All non-trivial apps visualise content in some form, whether it’s game levels embedded in the app, data loaded from some internet service, or something else.

In many cases the developer who’s writing the Objective-C code isn’t going to be the person who creates or prepares this content. In the case of embedded content, this can lead to a slow feedback loop—the content experts create a database or some other assets, then send it to the developer. The developer prepares a build using the new assets, uploading it to TestFlight or some other ad-hoc distribution centre. Then the content people can download that app to see their content in the context of the application it’s designed for.

There’s a simple way to close this loop, letting content creators see the app with their latest changes as they make them. That is to use iTunes File Sharing to load the content via the app’s Documents folder.

If you have a line like this:

NSString *pathToContent = [[NSBundle mainBundle] pathForResource: @"myDatabase" ofType: @"sqlite"];

Change it to use a function like this:

NSString *pathToPotentiallySideloadedFile(NSString *filename, NSString *type)
{
    NSString *documentsFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
    NSString *pathInDocumentsFolder = [[documentsFolder stringByAppendingPathComponent: filename] stringByAppendingPathExtension: type];
    //the concatenated path string is always non-nil; what matters is whether the file exists
    if ([[NSFileManager defaultManager] fileExistsAtPath: pathInDocumentsFolder])
        return pathInDocumentsFolder;
    else
        return [[NSBundle mainBundle] pathForResource: filename ofType: type];
}

//...
NSString *pathToContent = pathToPotentiallySideloadedFile(@"myDatabase", @"sqlite");

Now if people working on your app have a file in their Documents folder with the same name as the one used in the app, it’ll load their version. So, how do they get it in there?

You need to make a simple change to your app’s Info.plist:

    <key>UIFileSharingEnabled</key>
    <true/>

Now when anybody with the app connects their device to iTunes, they’ll be able to use file sharing to add their own content. Don’t forget to turn this off before you go live!
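One way to avoid shipping the sideloading path by accident is to compile it out of release builds. Here’s a sketch assuming the DEBUG preprocessor macro that Xcode’s default debug configurations define; note that UIFileSharingEnabled lives in the Info.plist, so it still needs removing separately:

NSString *pathToPotentiallySideloadedFile(NSString *filename, NSString *type)
{
#ifdef DEBUG
    //only consult the Documents folder in debug builds
    NSString *documentsFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
    NSString *pathInDocumentsFolder = [[documentsFolder stringByAppendingPathComponent: filename] stringByAppendingPathExtension: type];
    if ([[NSFileManager defaultManager] fileExistsAtPath: pathInDocumentsFolder])
        return pathInDocumentsFolder;
#endif
    return [[NSBundle mainBundle] pathForResource: filename ofType: type];
}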

I mentioned at the beginning of this post that this technique can be used for networked apps. Obviously there isn’t really any difficulty getting updated content into a network-driven app; or if there is, someone did it wrong.

It’s the opposite problem you have: keeping the content fixed. If your online component—be it a CMS, a data feed from an API, or something else—is getting new data you can’t always ensure that the app is looking at the same stuff in testing. Indeed, sometimes I’ve found the CMS developers changing the data format without telling anyone; if you’re investigating a particular condition related to the state of the data, it can be hard to reproduce.

You can use the iTunes File Sharing technique to load a specific version of the app’s data without relying on the network connection and the server giving you the same output. This is great for regression testing, as you can ensure that only your code is changing between runs.
