On voices that matter

In October I’ll be in Philadelphia, PA talking at Voices That Matter: Fall iPhone Developers’ Conference. I’m looking forward to meeting some old friends and new faces, and sucking up a little more of that energy and enthusiasm that pervades all of the Apple-focussed developer events I’ve been to. In comparison with other fields of software engineering, Cocoa and Cocoa Touch development have a sense of community that almost exudes from the very edifices of the conference venues.

But back to the talk. Nay, talks. While Alice was directed by the cake at the bottom of the rabbit hole to “Eat Me”, the label on my slice said “bite off more of me than you can chew”. Therefore I’ll be speaking twice at this event. The first talk, on the Saturday, is on unit testing, which I’ve recently taken over from Dave Dribin. Having seen Dave talk on software engineering practices before (and had lively discussions regarding coupling and cohesion in Cocoa code in the bar afterwards), I’m fully looking forward to adding his content’s biological and technological distinctiveness to my own. I’ll be covering the why of unit testing in addition to the how and what.

My second talk is on – what else – security and encryption in iOS applications. In this talk I’ll be looking at some of the common features of iOS apps that require security consideration, how to tease security requirements out of your customers and product managers and then looking at the operating system support for satisfying those requirements. This gives me a chance to focus more intently on iOS than I was able to in Professional Cocoa Application Security (Wrox Professional Guides), looking at specific features and gotchas of the SDK and the distinctive environment iOS apps find themselves in.

I haven’t decided what my schedule for the weekend will be, but there are plenty of presentations I’m interested in watching. So look for me on the conference floor (and, of course, in the bar)!

Posted in code-level, iPad, iPhone, Talk, threatmodel, tool-support | Leave a comment

On stopping service management abuse

In chapter 2 of their book The Mac Hacker’s Handbook (is there only one Mac hacker?), Charlie Miller and Dino Dai Zovi note that an attacker playing with a sandboxed process could break out of the sandbox via launchd. The reasoning goes that the attacker just uses the target process to submit a launchd job. Launchd, which is not subject to sandbox restrictions, then loads the job, executing the attacker’s payload in an environment where it will have more capabilities.

This led me to wonder whether I could construct a sandbox profile that would stop a client process from submitting launchd jobs. I have done that, but not in a very satisfying way or even necessarily a particularly complete one. My profile does this:

(version 1)
(deny default)
(debug deny)

(allow process-exec)
(allow file-fsctl)
(allow file-ioctl)
; you can probably restrict access to even more files - don't forget to let dyld link Cocoa though!
(allow file-read* file-write*)
(deny file-read* (regex "^/System/Library/Frameworks/ServiceManagement.framework"))
(deny file-read* (literal "/bin/launchctl" "/sbin/launchd"))
(allow signal (target self))
(allow ipc-posix-shm)
(allow sysctl*)
; this lot was empirically discovered - Cocoa apps need these servers
(allow mach-lookup (global-name "com.apple.system.notification_center"
                                "com.apple.system.DirectoryService.libinfo_v1"
                                "com.apple.system.DirectoryService.membership_v1"
                                "com.apple.distributed_notifications.2"
                                "com.apple.system.logger"
                                "com.apple.SecurityServer"
                                "com.apple.CoreServices.coreservicesd"
                                "com.apple.FontServer"
                                "com.apple.dock.server"
                                "com.apple.windowserver.session"
                                "com.apple.pasteboard.1"
                                "com.apple.pbs.fetch_services"
                                "com.apple.tsm.uiserver"))

So a process running in the above sandbox profile cannot read (and therefore cannot execute) the launchctl or launchd binaries, nor can it link the ServiceManagement framework that lets Cocoa apps discover and submit jobs directly.
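For the record, a profile like this is handed to a process at launch with sandbox-exec(1); the profile file name below is made up:

# save the profile above as no-launchd.sb, then:
sandbox-exec -f no-launchd.sb /bin/ls /tmp         # allowed: ordinary file access
sandbox-exec -f no-launchd.sb /bin/launchctl list  # denied: the profile blocks reading /bin/launchctl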

Unfortunately I wasn’t able to fully meet my goal: a process can still perform the same IPC that launchctl et al. use directly. I found that if I restricted IPC access to launchd, apps would crash when trying to check in with the daemon. Of course, the IPC protocol is completely undocumented, so while it might be possible to impose finer-grained restrictions, I’m not optimistic.

Of course, standard disclaimers apply: the sandbox Scheme environment is supposed to be off-limits to us smelly non-Apple types.

Posted in launchd, Mac, sandbox | Comments Off on On stopping service management abuse

On private methods

Let’s invent a hypothetical situation. You’re the software architect for an Objective-C application framework at a large company. This framework is used by many thousands of developers to create all sorts of applications for a particular platform.

However, you have a problem. Developer Technical Support are reporting that some third-party developers are using a tool called class-dump to discover the undocumented methods on your framework’s classes, and are calling them directly in application code. This is leading to stability and potentially other issues, as the methods are not suitable for calling at arbitrary points in the objects’ life cycles.

You immediately reject the distasteful solution of making the private method issue a policy problem. While you could analyse third-party binaries looking for use of undocumented method selectors, this approach is unscalable and error-prone. Instead you need a technical solution.

The problem in more detail

Consider the following class:

@interface GLStaticMethod : NSObject {
@private
    int a;
}
@property (nonatomic, assign) int a;
- (void)doTheLogThing;
@end

@interface GLStaticMethod ()
- (void)logThis;
@end

@implementation GLStaticMethod

@synthesize a;

- (void)doTheLogThing {
    [self logThis];
}

- (void)logThis {
    NSLog(@"Inside logThis: %d", self->a);
}

@end

Clearly this -logThis method would be entirely dangerous if called at unexpected times. Oh OK, it isn’t, but let’s pretend. Well, we haven’t documented it in the header, so no developer will find it, right? Enter class-dump:

/*
 *     Generated by class-dump 3.3.2 (64 bit).
 *
 *     class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2010 by Steve Nygard.
 */

#pragma mark -

/*
 * File: staticmethod
 * Arch: Intel x86-64 (x86_64)
 *
 *       Objective-C Garbage Collection: Unsupported
 */

@interface GLStaticMethod : NSObject
{
    int a;
}

@property(nonatomic) int a; // @synthesize a;
- (void)logThis;
- (void)doTheLogThing;

@end

OK, that’s not so good. Developers can find our private method, and that means they’ll use the gosh-darned thing! What can we do?

Solution 1: avoid static discovery

We’ll use the dynamic method resolution feature of the new Objective-C runtime to only bind this method when it’s used. We’ll put our secret behaviour into a function that has the same signature as an IMP (Objective-C method implementation), and attach that to the class when the private method is first used. So our class .m file now looks like this:

// just a function prototype this time - it's not a method, so it doesn't
// belong in a class extension
void logThis(id self, SEL _cmd);

@implementation GLStaticMethod

@synthesize a;

+ (BOOL)resolveInstanceMethod: (SEL)aSelector {
    if (aSelector == @selector(logThis)) {
        class_addMethod(self, aSelector, (IMP)logThis, "v@:");
        return YES;
    }
    return [super resolveInstanceMethod: aSelector];
}

- (void)doTheLogThing {
    [self logThis];
}

void logThis(id self, SEL _cmd) {
    NSLog(@"Inside logThis: %d", ((GLStaticMethod *)self)->a);
}

@end

What does that get us? Let’s have another look at class-dump’s output now:

/*
 *     Generated by class-dump 3.3.2 (64 bit).
 *
 *     class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2010 by Steve Nygard.
 */

#pragma mark -

/*
 * File: staticmethod
 * Arch: Intel x86-64 (x86_64)
 *
 *       Objective-C Garbage Collection: Unsupported
 */

@interface GLStaticMethod : NSObject
{
    int a;
}

+ (BOOL)resolveInstanceMethod:(SEL)arg1;
@property(nonatomic) int a; // @synthesize a;
- (void)doTheLogThing;

@end

OK, so our secret method can’t be found using class-dump any more. There’s a hint that something special is going on because the class provides +resolveInstanceMethod:, and a really dedicated hacker could use otool to disassemble that method and find out what selectors it uses. In fact, they can guess just by running strings(1) on the binary:

heimdall:Debug leeg$ strings staticmethod 
NSAutoreleasePool
alloc
init
setA:
doTheLogThing
release
drain
logThis
resolveInstanceMethod:
v8@0:4
c12@0:4:8
i8@0:4
v12@0:4i8
NSObject
GLStaticMethod
Ti,N,Va
Inside logThis: %d

You could mix things up a little more by constructing strings at runtime and using NSSelectorFromString() to generate the selectors to test.
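As a sketch of that idea (the split point is arbitrary, and this only defeats a casual strings(1) scan, not a debugger):

+ (BOOL)resolveInstanceMethod: (SEL)aSelector {
    // assemble the selector name at runtime, so the whole string
    // "logThis" never appears in the binary's data section
    NSString *selectorName = [@"log" stringByAppendingString: @"This"];
    if (aSelector == NSSelectorFromString(selectorName)) {
        class_addMethod(self, aSelector, (IMP)logThis, "v@:");
        return YES;
    }
    return [super resolveInstanceMethod: aSelector];
}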

Problem extension: runtime hiding

The developers using your framework have discovered that you’re hiding methods from them and found a way to inspect these methods. By injecting an F-Script interpreter into their application, they can see the runtime state of every object including your carefully-hidden instance methods. They know that they can call the methods, and can even declare them in categories to avoid compiler warnings. Where do we go from here?

Solution 2: don’t even add the method

We’ve seen that we can create functions that behave like instance methods – they can get access to the instance variables just as methods can. The only requirement is that they must be defined within the class’s @implementation. So why not just call the functions? That’s the solution proposed in Professional Cocoa Application Security – it’s a little uglier than dynamically resolving the method, but it means that the method never appears in the Objective-C runtime and can never be used by external code. It makes our public method look like this:

- (void)doTheLogThing {
    logThis(self, _cmd);
}

Of course, logThis() no longer has an Objective-C selector of its very own – it can only get the selector of the method from which it was called (or whatever other selector you happen to pass in). Most Objective-C code doesn’t ever use the _cmd variable so this isn’t a real drawback. Of course, if you do need to be clever with selectors, you can’t use this solution.
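A small refinement while we’re here, offered as an observation rather than gospel: giving the function internal linkage keeps its name out of the binary’s symbol table too, so it won’t show up in nm output either. The body is unchanged from the earlier listing:

// static linkage: the function's name survives only in the debug info, if that
static void logThis(id self, SEL _cmd) {
    NSLog(@"Inside logThis: %d", ((GLStaticMethod *)self)->a);
}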

Conclusion

Objective-C doesn’t provide language-level support for private methods, but there are technical solutions that let framework developers hide internal code from their clients. Using these techniques will be more reliable and easier to support than asking developers nicely not to use private methods, and getting angry when they do.

Posted in code-level, iPad, iPhone, Mac, PCAS, software-engineering | Leave a comment

On authorization proxy objects

Authorization Services is quite a nice way to build in discretionary access controls to a Mac application. There’s a whole chapter in Professional Cocoa Application Security (Chapter 6) dedicated to the topic, if you’re interested in how it works.

The thing is, it’s quite verbose. If you’ve got a number of privileged operations (like, one or more) in an app, then the Auth Services code can get in the way of the real code, making it harder to unpick what a method is up to when you read it again a few months later.

Let’s use some of the nicer features of the Objective-C runtime to solve that problem. Assuming we’ve got an object that actually does the privileged work, we’ll create a façade object GLPrivilegedPerformer that handles the authorization for us. It can distinguish between methods that do or don’t require privileges, and will attempt to gain different rights for different methods on different classes. That allows administrators to configure privileges for the whole app, for a particular class or even for individual tasks. If it can’t get the privilege, it will throw an exception. OK, enough rabbiting. The code:

@interface GLPrivilegedPerformer : NSObject {
    id actual;
    AuthorizationRef auth;
}
- (id)initWithClass: (Class)cls;
@end

@implementation GLPrivilegedPerformer

- (NSMethodSignature *)methodSignatureForSelector:(SEL)aSelector {
    NSMethodSignature *sig = [super methodSignatureForSelector: aSelector];
    if (!sig) {
        sig = [actual methodSignatureForSelector: aSelector];
    }
    return sig;
}

- (BOOL)respondsToSelector:(SEL)aSelector {
    if (![super respondsToSelector: aSelector]) {
        return [actual respondsToSelector: aSelector];
    }
    return YES;
}

- (void)forwardInvocation:(NSInvocation *)anInvocation {
    if ([actual respondsToSelector: [anInvocation selector]]) {
        NSString *selName = NSStringFromSelector([anInvocation selector]);
        if ([selName length] > 3 && [[selName substringToIndex: 4] isEqualToString: @"priv"]) {
            NSString *rightName = [NSString stringWithFormat: @"%@.%@.%@",
                                   [[NSBundle mainBundle] bundleIdentifier],
                                   NSStringFromClass([actual class]),
                                   selName];
            AuthorizationItem item = {0};
            item.name = [rightName UTF8String];
            AuthorizationRights requested = {
                .count = 1,
                .items = &item,
            };
            OSStatus authResult = AuthorizationCopyRights(auth,
                                                          &requested,
                                                          kAuthorizationEmptyEnvironment,
                                                          kAuthorizationFlagDefaults |
                                                          kAuthorizationFlagExtendRights |
                                                          kAuthorizationFlagInteractionAllowed,
                                                          NULL);
            
            if (errAuthorizationSuccess != authResult) {
                [self doesNotRecognizeSelector: [anInvocation selector]];
            }
        }
        [anInvocation invokeWithTarget: actual];
    }
    else {
        [super forwardInvocation: anInvocation];
    }
}

- (id)initWithClass: (Class)cls {
    self = [super init];
    if (self) {
        OSStatus authResult = AuthorizationCreate(NULL,
                                                  kAuthorizationEmptyEnvironment,
                                                  kAuthorizationFlagDefaults,
                                                  &auth);
        if (errAuthorizationSuccess != authResult) {
            NSLog(@"couldn't create auth ref");
            [self release];
            return nil;
        }
        actual = [[cls alloc] init];
    }
    return self;
}

- (void)dealloc {
    AuthorizationFree(auth, kAuthorizationFlagDefaults);
    [actual release];
    [super dealloc];
}
@end
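Using the proxy might look like this. GLActualPerformer and its -privilegedDeleteStuff method are hypothetical, and the cast to id quiets the compiler, since the proxy’s interface doesn’t declare the methods it forwards (declaring the privileged selectors in a category would also work):

GLPrivilegedPerformer *performer = [[GLPrivilegedPerformer alloc]
                                    initWithClass: [GLActualPerformer class]];
@try {
    [(id)performer privilegedDeleteStuff];
}
@catch (NSException *exception) {
    // the user couldn't (or wouldn't) authorize; degrade gracefully
    NSLog(@"privileged operation refused: %@", exception);
}
[performer release];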

Some notes:

  • You may want to raise a custom exception rather than using -doesNotRecognizeSelector: on failure. But you’re going to have to @catch something on failure. That’s consistent with the way Distributed Objects handles authentication failures.
  • The rights it generates will have names of the form com.example.MyApplication.GLActualPerformer.privilegedTask, where GLActualPerformer is the name of the target class and privilegedTask is the method name.
  • There’s an argument for the Objective-C proxying mechanism making code harder to read than putting the code inline. As discussed in Chapter 9, using object-oriented tricks to make code non-linear has been found to make it harder to review the code. However, this proxy object is small enough to be easily-understandable, and just removes authorization as a cross-cutting concern in the style of aspect-oriented programming (AOP). If you think this will make your code too hard to understand, don’t use it. I won’t mind.
  • As mentioned elsewhere, Authorization Services is discretionary. This proxy pattern doesn’t make it impossible for injected code to bypass the authorization request by using the target class directly. Even if the target class has the “hidden” visibility attribute, class-dump can find it and NSClassFromString() can get the Class object.
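As an extension (a sketch, not something from the book): when the app first runs you can install a default rule for each right, so administrators have an entry in the policy database to edit. The right name is the hypothetical one from the note above, and auth is the proxy’s existing authorization reference:

const char *rightName = "com.example.MyApplication.GLActualPerformer.privilegedTask";
if (errAuthorizationDenied == AuthorizationRightGet(rightName, NULL)) {
    // errAuthorizationDenied here means the right isn't yet defined
    AuthorizationRightSet(auth, rightName,
                          CFSTR(kAuthorizationRuleAuthenticateAsAdmin),
                          CFSTR("MyApplication wants to perform a privileged task."),
                          NULL, NULL);
}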
Posted in Authorization, code-level, Mac, PCAS, software-engineering | Comments Off on On authorization proxy objects

NSConference MINI videos available

During WWDC week I talked at NSConference MINI, a one-day conference organised by Scotty and the MDN. The videos are now available: free to attendees, or $50 for all 10 for non-attendees.

My own talk was on extending the Clang static analyser to perform your own tests on your code. I’m pleased with the amount I managed to get in, and I like how the talk fitted well with the general software-engineering theme of the conference. There’s stuff on bit-level manipulation, eXtreme Programming, continuous integration, product management and more. I’d fully recommend downloading the whole shebang.

Posted in code-level, NSConf, software-engineering, tool-support | Leave a comment

On Trashing

Back in the 1980s and 1990s, people who wanted to clandestinely gain information about a company or organisation would go trashing.[*] That just meant diving in the bins to find information about the company structure – who worked there, who reported to whom, what orders or projects were currently in progress etc.

You’d think that these days trashing had been thwarted by the invention of the shredder, but no. While many companies do indeed destroy or shred confidential information, this is not universal. Those venues where shredding is common leave it up to their staff to decide what goes in the bin and what goes in the shredder; these staff do not always get it correct (they’re likely to think about whether a document is secret to them rather than the impact on the company). Nor do they always want to think about which bin to put a worthless sheet of paper in.

Even better: in those places that do shred secret papers, they helpfully collect all of the secrets in big bins marked “To Shred” to help the trashers :). They then collect all of these bins into a big hopper, and leave that around (sometimes outside the building, in a covered or open yard) for the destruction company to come and pick up.

So if an attacker can get into the building, he just roots around in the “To Shred” bins. If someone asks, he tells them he put a printout there in the morning but now thinks he needs it again. Even if he can’t get in, he just dives into the hopper outside and gets access to all those juicy secrets (with none of the banana peelings and teabags associated with the non-secret bin).

But for those attackers who don’t like getting their hands dirty, they can gain some of the same information using technological means. LinkedIn will helpfully provide a list of employees – including their positions, so the public can find out something of the reporting structure. Some will be looking for recruitment opportunities – these are great people to phone for more information! So are ex-employees, something LinkedIn will also help you out with.

But the fun doesn’t stop there. Once our attacker has the names, he now goes over to Twitter and Facebook. There he can find people griping about work…or describing what the organisation is up to, to put it another way.

All of the above information about 21st-century trashing comes from real experience with an office I was invited into in the last 12 months. Of course, I will not name the organisation in charge of that office (or their data destruction company). The conclusion is that trashing is alive and well, and that those who participate need no longer root around in, well, in the trash. How does your organisation deal with the problem?

[*] for me, it was mainly the 1990s. I was the perfect size in the 1980s for trashing, but still finding my way around a Dragon 32.

Posted in Business, Data Leakage, Policy, Twitter | Leave a comment

On detecting God Classes

Opinion on Twitter was divided when I suggested the following static analyser behaviour: report on any class that conforms to too many protocols.

Firstly, a warning: “too many” is highly contextual. Almost all objects implement NSObject and you couldn’t do much without it, so it gets a bye. Other protocols, like NSCoding and NSCopying, are little bits of functionality that don’t really dilute a class by their presence. It’s probably harmless for a class to implement those in addition to other protocols. Still others are so commonly implemented together (like UITableViewDataSource and UITableViewDelegate, or WebView‘s four delegate protocols) that they probably shouldn’t independently count against a class’s “protocol weight”. On the other hand, a class that conforms to UITableViewDelegate, UIAlertViewDelegate and MKMapViewDelegate might be trying to do too much – of which more after the next paragraph.

Secondly, a truism: the goal of a static analyser is to ignore code that the developer’s happy with, and to warn about code the developer isn’t happy with. If your coding style permits a class to conform to any number of protocols, and you’re happy with that, then you shouldn’t implement this analyser rule. If you would be happy with a maximum of 2, 4, or 1,024 protocols, you would benefit from a rule with that number. As I said in my NSConf MINI talk, the analyser’s results are not as cleanly definable as compiler errors (which indicate code that doesn’t conform to the language definition) or warnings (which indicate code that is very probably incorrect). The analyser is more of a code style and API use checker. Conforming to protocols is use of the API that can be checked.
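Though the proper place for this check is a static analyser plug-in, you can get a rough runtime approximation from the Objective-C runtime itself. GLProtocolWeight(), MyViewController and the threshold of four are all made up for illustration:

#import <objc/runtime.h>

// count the protocols this class directly declares conformance to
unsigned int GLProtocolWeight(Class cls) {
    unsigned int count = 0;
    Protocol **protocols = class_copyProtocolList(cls, &count);
    free(protocols);
    return count;
}

// e.g. in a unit test or a debug-only audit:
if (GLProtocolWeight([MyViewController class]) > 4) {
    NSLog(@"%@ conforms to a lot of protocols - is it a God Class?",
          NSStringFromClass([MyViewController class]));
}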

OK, let’s get on with it. A class that conforms to too many protocols has too many responsibilities – it is a “God Class”. Now how can this come about? A developer who has heard about model-view-controller (MVC) will try to divide classes into three high-level groups, like this:

MVC high-level architecture

The problem comes when the developer fails to take that diagram for what it is: a 50,000-foot overview of the MVC architecture. Many Mac and iOS developers will use Core Data, and will end up with a model composed of multiple different entities. If a particular piece of the app’s workflow needs to operate on multiple entities, they may break that out into “business logic” objects that can be called by the controller. Almost all Mac and iOS developers use the standard controls and views, meaning they have no choice but to break up the view into multiple objects.

But where does that leave the controller? Without any motivation to divide things up, everything else is stuffed into a single object (likely an NSDocument or UIViewController subclass). This is bad. What happens if you need to display the same table in two different places? You have to go through that class, picking out the bits that are relevant, finding out where they’re tied to the bits that aren’t and untangling them. Ugh.

Cocoa developers will probably already be using multiple controller objects if they’re using Cocoa Bindings. Each NSArrayController or similar receives its object or objects, usually from the main controller object, and then gets on with the job of marshalling the interaction between the bound view and the observed model objects. So, if we take the proposed changes so far, our MVC diagram looks like this:

MVC - Slightly closer look

The point of my protocol-checking code is to go the remaining distance, and abstract out the other data sources into their own objects. What we’re left with is a controller that looks after the view’s use case, ensuring that logic actions take place when they ought, that steps in the workflow only become available when their preconditions are met, and so on. Everything related to performing the logic is down in those dynamic model objects, and everything to do with data presentation is in its own controller objects. Well, maybe not everything – a button doesn’t exactly have a complicated API. But if you need a table of employees for this view controller and a table of employees for that view controller, you just take the same table datasource object in both places. You don’t have two different datasource implementations in two view controllers (or even the same one pasted twice). This makes the diagram look like this:

MVC - more separation

So to summarise, a class that conforms to too many protocols probably has too many responsibilities. The usual reason for this is that controller objects take on responsibility for managing workflow, providing data to views and handling delegate responsibilities for the views. This leads to code that is not reusable except through the disdainful medium of copy-pasting, as it is harder to define a clean interface between these various concerns. By producing a tool that reports on the existence of such God classes, developers can be alerted to their presence and take steps to fix them.

Posted in code-level, iPad, iPhone, Mac, software-engineering, tool-support | Leave a comment

On Fitts’s Law and Security

…eh? Don’t worry, read on and all shall be explained.

I’ve said in multiple talks and podcasts before that one key to good security is good user interface design. If users are comfortable performing their tasks, and your application is designed such that the easiest way to use it is to do the correct thing, then your users will make fewer mistakes. Not only will they be satisfied with the experience, they’ll be less inclined to stray away from the default security settings you provide (your app is secure by default, right?) and less likely to damage their own data.

Let’s look at a concrete example. Matt Gemmell has already done a pretty convincing job of destroying this UI paradigm:

The dreaded add/remove buttons

The button on the left adds something new, which is an unintrusive, undoable operation, and one that users might want to do often. The button on the right destroys data (or configuration settings, or whatever), which if unintended represents an accidental loss of data integrity or possibly of service availability.

Having an undo feature here is not necessarily a good mitigation, because a user flustered at having just lost a load of work or important data cannot be guaranteed to think of using it. Especially as undo is one of those features that has different meaning in different apps, and is always hidden (there’s no “you can undo this” message shown to users). Having an upfront confirmation of deletion is not useful either, and Matt’s video has a good discussion of why.

Desktop UI

Anyway, this is a 19×19 px button (or two of them), right? Who’s going to miss that? This is where we start using the old cognometrics and physiology. If a user needs to add a new thing, he must:

  1. Think about what he wants to do (add a new thing)
  2. Think about how to do it (click the plus button, but you can easily make this stage longer by adding contextual menus, a New menu item and a keyboard shortcut, forcing him to choose)
  3. Move his hand to the mouse (if it isn’t there already)
  4. Locate the mouse pointer
  5. Move the pointer over the button
  6. Click the mouse button

Then there’s the cognitive effort of checking that the feedback is consistent with the expected outcome. Anyway, Fitts’s Law comes in at step 5. It tells us, in a very precise mathematical way, that it’s quicker to move the pointer to a big target than to a small one. So while a precise user might take time to click in this region:

Fastidious clicking action

A user in a hurry (one who cares more about getting the task done than on positioning a pointing device) can speed up the operation by clicking in this region:

Hurried clicking action

The second region includes around a third of the remove button’s area; the user has a ~50% chance of getting the intended result, and a ~25% chance of doing something bad.
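For the record, Fitts’s Law is usually quoted in its Shannon formulation: the time T to acquire a target of width W (measured along the axis of motion) at distance D is

T = a + b log2(D/W + 1)

where a and b are constants measured empirically for the pointing device. The logarithm means that halving a target’s width costs the user about as much extra time as doubling its distance – which is why a 19×19 px button with a dangerous neighbour is such poor value for the user’s motor effort.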

Why are the click areas elliptical? Think about moving your hand – specifically, moving the mouse in your hand (this discussion doesn’t apply directly to trackpads, trackballs and so on, but a similar one does). You can move your wrist left/right easily, covering large (in mousing terms) distances. Moving the mouse forward/back needs to be done with the fingers, a more fiddly task that covers less distance in the same time. If you consider arm motions, the same applies – swinging your radius and ulna at the elbow to get (roughly) left/right motions is easier than pulling your whole arm at the shoulder to get forward/back. Therefore, from an ergonomic perspective, immediately to one side of the add button is absolutely the worst place to put the remove button.

Touch UI

Except on touch devices. We don’t really need to worry about the time spent moving our fingers to the right place on a touchscreen – most people develop a pretty good unconscious sense of proprioception (knowing where the bits of our bodies are in space) at a very early age and can readily put a digit where they want it. The problem is that we don’t want it in the same place the device does.

We spend most of our lives pointing at things, not stabbing them with the fleshy part of a phalange. Therefore when people go to tap a target on the screen, the biggest area of finger contact will be slightly below the intended target (and to one side, depending on handedness). And of course they may not accurately judge the target’s location in the first place. In the image below, the user probably intended to tap on “Presentations”, but would the tap register there, or in “LinkedIn Profile”? Or neither?

A comparatively thin finger

The easiest way to see this for yourself is to move from an iPhone (which automatically corrects for the user’s intended tap location) to an Android device (which doesn’t, so the tap registers where the user actually touched the screen), or vice versa. Try typing on one for a while, then move over to type on the other. Compare the sorts of typos you make on each device (independent, obviously, of the auto-correct facilities, which are good on both operating systems), and you’ll find that after switching devices you spend a lot of time hitting keys to the side of or below your target keys.

Conclusion

Putting any GUI controls next to each other invites the possibility that users will activate the wrong one. When a frequently-used control has a dangerous control as a near neighbour, accidental security problems become a matter of time. Making dangerous controls hard to activate accidentally provides not only a superior user experience, but a superior security experience.

Posted in iPad, iPhone, Mac, threatmodel, UI, user-error | 1 Comment

Using Aspect-Oriented Programming for Security Engineering

This paper by Kotrappa Sirbi and Prakash Jayanth Kulkarni (link goes to HTML abstract, full text PDF is free) discusses implementation of an application’s security requirements in Java using Aspect-Oriented Programming (AOP).

We have AOP for Objective-C (of sorts), but as hardly anyone has used it I think it’s worth taking a paragraph or two out to explain. If you’ve ever written a non-trivial Cocoa[ Touch] application, you’ll have found that even when you have a good object-oriented design, you have code that addresses different concerns mixed together. A method that performs some useful task (deleting a document, for example) also calls some logging code, checks for authorisation, reports errors, and maybe some other things. Let’s define the main business concerns of the application as, well, business concerns, and all of these other things like logging and access control as cross-cutting concerns.

AOP is an attempt to reduce the coupling between business and cross-cutting code by introducing aspects. An aspect is a particular cross-cutting concern that must be implemented in an application, such as logging. Rather than calling the logging code from the business code, we’ll define join points, which are locations in the business code where it’s useful to insert cross-cutting code. These join points are usually method calls, but could also be exception throw points or anywhere else that program control changes. We don’t necessarily need logging at every join point, so we also define predicates that say which join points are useful for logging. Whenever one of the matching join points is met, the application will run the logging code.

This isn’t just useful for adding code. Your aspect code can also decide whether the business code actually gets run at all, and can even inspect and modify its return value. That’s where it gets useful for security purposes. You don’t need to put your access control checks (for instance) inside the business code; you can define them as modifications to join points in the business code. If you need to change the way access control works (say, going from a single-user scheme to a directory service, or from password checks to access tokens) you can just change the implementation of the aspect rather than diving through all of the app code.

Of course, that doesn’t mean you can just implement the business logic then add security stuff later, like icing a cake or sprinkling fairy dust. You still need to design the business objects such that the security control decisions and the join points occur at the same places. However, AOP is useful for separating concerns, and for improving maintainability of non-core app behaviour.

Posted in code-level, software-engineering, tool-support | Leave a comment

A solution in need of a problem

I don’t usually do product reviews; in fact I have been asked a few times to accept a free product in return for a review, and have turned them all down. This is just such an outré product that I have decided to write a review, despite not having been approached by the company, with which I have no connection. The product is the 3M Privacy Filter Gold. Here is one I was given at InfoSec, on my MacBook:

3M Privacy Filter Gold - front view

You’ll probably notice that the screen has some somewhat unsightly plastic tabs around the edge. They are holding in the main feature of this product, which is the thing giving the screen that slightly coppery colour. It’s actually a sheet of plastic which I assume is etched with a blazed diffraction grating, because at more acute viewing angles it makes the screen look like this:

3M Privacy Filter Gold - side view

OK, so you can still see the plastic tabs, but now it’s hard to make out the text. And that’s the goal of the Privacy Filter Gold: to stop shoulder surfers by reducing the usable viewing angle of your screen. Hold on a moment, while I count the number of times I’ve been told about a business or government agency that leaked sensitive data through shoulder surfing.

0.

And the number of times I’ve heard (or discovered, through risk analysis) that it’s an important risk for an organisation to address? About the same. OK, so this product is distinctive, and gimmicky, and evidently does what it’s designed to do. But I don’t see the problem that led to this product being developed. It might be useful if you want to browse porn in Starbucks or goof off on Facebook while you’re at work, except that someone standing behind you can still see the screen. OK, you could browse porn while sat on the London Underground – except you won’t be able to get a network signal.

If anti-shoulder-surfing is important to you, you may want to bear these issues in mind. When used in strong sunlight, the Privacy Filter Gold makes the screen look like this (note: availability of handsome security expert holding iPhone is strictly limited):

3M Privacy Filter Gold - in the sunshine

The MacBook isn’t great in strong sunlight anyway, but with the filter over the top it becomes positively unusable. All of that dust is actually on the filter (although it was fresh out of its packaging when the picture was taken), causing additional scattering through the grating and leading to a “gold dust” effect on the screen. And yes, the characters “3M GPF13.3W” are indeed etched into the filter at the top-right, near that thing which, yes, is indeed a notch you can put your finger in to extract the filter from the plastic tabs.

That’s one issue; the other is price. Retail prices for these filters are around £50, varying slightly with screen size. I’m really not sure that’s a price worth paying, considering that I have no idea what benefit the filter would bring.

Posted in Data Leakage | Leave a comment