Structure and Interpretation of Computer Programmers

I make it easier and faster for you to write high-quality software.

Tuesday, March 2, 2010

How to hire Graham Lee

There are few people who can say that when it comes to Cocoa application security, they wrote the book. In fact, I can think of only one: me. I’ve just put the final draft together for Professional Cocoa Application Security and it will hit the shops in June: click the link to purchase through my Amazon affiliate programme.

Now that the book’s more-or-less complete, I can turn my attention to other interesting projects: by which I mean yours! If your application could benefit from a developer with plenty of security experience and knowledge to share in a pragmatic fashion, or a software engineer who led development of a complex Cocoa application from its legacy PowerPlant origins through Snow Leopard readiness, or a programmer who has worked on performance enhancement in networking systems and low-level daemon code on Darwin and other UNIX platforms, then your project will benefit from an infusion of the Graham Lee magic. Even if you have some NeXTSTEP or OPENSTEP code that needs maintaining, I can help you out: I’ve been using Cocoa for about as long as Apple has.

Send an email to iamleeg <at> securemacprogramming <dot> com and let’s talk about your project. The good news is that for the moment I am available, you probably can afford me[*], and I really want to help make your product better. Want to find out more about my expertise? Check out my section on the MDN show, and the MDN security column.

[*] It came up at NSConference that a number of devs thought I carry a premium due to the conference appearances, podcasts and other material I produce. Because I believe that honesty is the best policy, I want to come out and say that I don’t charge any such premium. My rates are consistent with other contractors with my level of experience, and I even provide a discounted rate for NGOs and academic institutions.

posted by Graham Lee at 13:22  

Sunday, February 17, 2008

Mach-OFS: aforementioned polish and functionality

It’s getting there: it can now display load commands (though it only reports useful information for LC_SEGMENT and LC_SEGMENT_64 commands):

Again the screenshot depicts the OmniDazzle binary for no reason other than it’s a nontrivial file. The directions in which to take the filesystem are now numerous: I can add info about the remaining load commands (v. useful), the raw data for each segment (somewhat useful), and the sections in each segment (v. useful). Whether the filesystem will eventually get to the level of symbol resolution, I’m not sure :-).
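For anyone following along at home, here’s a sketch of what the walking itself looks like. This isn’t the filesystem’s actual source (the function name is my own invention), just a minimal illustration using the public declarations in <mach-o/loader.h>, and it assumes a thin, native-endian 64-bit image that has already been read or mapped into memory:

```c
#include <mach-o/loader.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch, not Mach-OFS source: walk the load commands of a
 * thin, 64-bit Mach-O image already in memory at `base`. Only
 * LC_SEGMENT_64 is decoded; everything else is named by its raw type. */
static void list_load_commands(const void *base)
{
    const struct mach_header_64 *mh = base;
    if (mh->magic != MH_MAGIC_64) {
        fprintf(stderr, "not a native-endian 64-bit Mach-O image\n");
        return;
    }

    /* The load commands start immediately after the header. */
    const struct load_command *lc =
        (const struct load_command *)((const char *)base + sizeof(*mh));

    for (uint32_t i = 0; i < mh->ncmds; i++) {
        if (lc->cmd == LC_SEGMENT_64) {
            const struct segment_command_64 *seg = (const void *)lc;
            printf("LC_SEGMENT_64 %.16s vmaddr 0x%llx vmsize 0x%llx\n",
                   seg->segname,
                   (unsigned long long)seg->vmaddr,
                   (unsigned long long)seg->vmsize);
        } else {
            printf("load command 0x%x (%u bytes)\n", lc->cmd, lc->cmdsize);
        }
        /* Each command records its own size, so stepping is trivial. */
        lc = (const struct load_command *)((const char *)lc + lc->cmdsize);
    }
}
```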

posted by Graham Lee at 00:58  

Sunday, February 10, 2008

Mach-O FS (no really, MacFUSE does rule)

It needs some polishing and more functionality before I’d call it useful, and then I have to find out whether I’m allowed to do anything with the source code ;-). But this is at least quite a cool hack: exploring a Mach-O file (thin or fat – in this case, I used the OmniDazzle executable, which is a fat file) as if it’s a file system. FUSE of course makes it easy, so thanks to Amit Singh for the port!
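For the curious, telling thin from fat is just a matter of looking at the magic number at the front of the file. Here’s a minimal sketch — my own illustration rather than the hack’s source, and classify_macho is an invented name — using the declarations in <mach-o/fat.h> and <mach-o/loader.h>; the fat header is defined to be big-endian on disk, hence the byte swap:

```c
#include <mach-o/fat.h>
#include <mach-o/loader.h>
#include <libkern/OSByteOrder.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch: classify the start of a file as fat or thin
 * Mach-O purely from its magic number. The fat header is big-endian
 * on disk, so its fields need swapping on little-endian hosts. */
static void classify_macho(const void *base)
{
    uint32_t magic = *(const uint32_t *)base;

    if (magic == FAT_MAGIC || magic == FAT_CIGAM) {
        const struct fat_header *fh = base;
        printf("fat file with %u architecture slices\n",
               OSSwapBigToHostInt32(fh->nfat_arch));
    } else if (magic == MH_MAGIC || magic == MH_MAGIC_64) {
        printf("thin Mach-O image (native byte order)\n");
    } else if (magic == MH_CIGAM || magic == MH_CIGAM_64) {
        printf("thin Mach-O image (byte-swapped)\n");
    } else {
        printf("not a Mach-O file\n");
    }
}
```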

posted by Graham Lee at 23:39  

Saturday, February 9, 2008

MacFUSE rules

One reason that microkernels win over everything else (piss off, Linus) is that stability is better, because less stuff is running in the dangerous and all-powerful kernel environment. MacFUSE, like FUSE implementations on other UNIX-like operating systems, takes the microkernel approach to filesystems, hooking requests for information out of the kernel and passing them to user-space processes to handle. Here’s the worst that can happen when screwing up a FUSE filesystem:

Now that might sound not only like a recipe for lower-quality code, but also like I’m extolling the capability to create lower-quality code. Well no it isn’t, and yes I am. The advantage is that the develop-debug-fix cycle for filesystems is now just as short as it is for other userland applications (and HURD translators and the like). This provides not only a lower barrier to entry (meaning that it’s more likely that interesting and innovative filesystems will be created), but also a faster turnaround on bugfixes (no panic, restart, try to salvage the panic log… no two-machine debugging with kdb…), so ultimately higher-quality filesystems.
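To make that concrete, this is roughly all it takes to get a (useless but mountable) filesystem up with the high-level API that MacFUSE provides. It’s a sketch of the classic single-file example rather than anything from my Mach-O hack: one read-only file served entirely from an ordinary user-space process, so a crash costs you a mount rather than a kernel panic.

```c
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

/* A single read-only file, /hello, served entirely from user space. */
static const char *hello_str  = "Hello from user space\n";
static const char *hello_path = "/hello";

static int hello_getattr(const char *path, struct stat *stbuf)
{
    memset(stbuf, 0, sizeof(*stbuf));
    if (strcmp(path, "/") == 0) {
        stbuf->st_mode = S_IFDIR | 0755;
        stbuf->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        stbuf->st_mode = S_IFREG | 0444;
        stbuf->st_nlink = 1;
        stbuf->st_size = strlen(hello_str);
    } else {
        return -ENOENT;
    }
    return 0;
}

static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                         off_t offset, struct fuse_file_info *fi)
{
    (void)offset; (void)fi;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, hello_path + 1, NULL, 0);
    return 0;
}

static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                      struct fuse_file_info *fi)
{
    (void)fi;
    size_t len = strlen(hello_str);
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((size_t)offset >= len)
        return 0;
    if (offset + size > len)
        size = len - offset;
    memcpy(buf, hello_str + offset, size);
    return (int)size;
}

static struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    /* A crash here takes down only this process; the kernel just sees
     * the mount go away, which is the whole point of the FUSE approach. */
    return fuse_main(argc, argv, &hello_ops, NULL);
}
```

Build it against the FUSE headers and library, run it with a mount point as its argument, and ls the result; kill the process or unmount and the filesystem simply disappears, which is exactly the failure mode described above.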

posted by Graham Lee at 22:06  

Friday, January 12, 2007

Eating one’s own dog food

I feel foolish for having made this error (especially after having so patiently explained how this stuff works on Mach), but today I did it. I reported the amount of free memory on a Linux system as being the amount that the free command reports as "free".

My own opinion on this is that I suffer from a view of hardware management which was tainted by using micros like the Dragon 32 (the Radio Shack/Tandy TRS-80 Color Computer to my American readers) and the Amiga, where there were basically two types of memory usage: yes and no. A particular block was either in use by the system, or it wasn’t. On the Dragon 32 it was even easier than that of course; all bytes were available for reading and writing, but what happened was context-dependent. Also, because this was the MC6809E, whether such a peek/poke made sense depended on whether you were trying to hit an I/O address, and what was plugged in. The Amiga had a particularly lame memory allocator which quickly sucked performance like a vacuum of performance suckage +2; but a byte of RAM was either in use or it was available.

But I digress. The point is, that such a simple view of memory availability is no longer sensible but it’s hard for me to think around it without a lot of work, just as it was hard for me to become a programmer after I’d been taught BASIC and Pascal. If I were involved in UNIX internals more (and indeed that would be fun, although I think maybe Linux wouldn’t be my first choice to open up) I’d probably be able to think about these things properly, just as I had to throw myself into C programming in a big way before I lost my BASIC-isms.

For the record (and so that it looks like this post is going somewhere), both operating systems have an intermediate state for RAM to be in between "used" and "not used" (where I’m ignoring kernel wired memory, and Linux kernel buffers). On Mach, there’s the "inactive" state which I’ve already described at el linko above. On Linux, that intermediate memory is used as an I/O cache for (mainly disk, mainly read-ahead) operations, which means that Linux will automatically claim almost all (if not all) of the memory during the boot process. The way that inactive memory gets populated on Mach means that on that system (e.g. Darwin) the amount of free memory actually starts large and inactive starts small, but over time, as the active and inactive counts go up, the free count goes down, and it’s rare for memory to be re-freed. On both systems, free memory is really "memory it wasn’t worth you buying", as it’s not being put to any use at all.
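To make the Linux half of that concrete, here’s a small sketch (mine, not anything canonical) that pulls the relevant lines out of /proc/meminfo and separates the genuinely idle figure — the one I wrongly quoted — from the intermediate buffer/page-cache state:

```c
#include <stdio.h>

/* Illustrative sketch: read /proc/meminfo and separate genuinely idle
 * RAM (MemFree) from the intermediate buffer/page-cache state that the
 * free command lists separately. Figures are in kilobytes, as the
 * kernel reports them. */
int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) {
        perror("/proc/meminfo");
        return 1;
    }

    char line[128];
    unsigned long v, memfree = 0, buffers = 0, cached = 0;

    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "MemFree: %lu kB", &v) == 1)       memfree = v;
        else if (sscanf(line, "Buffers: %lu kB", &v) == 1)  buffers = v;
        else if (sscanf(line, "Cached: %lu kB", &v) == 1)   cached = v;
    }
    fclose(f);

    printf("idle (what I quoted):       %lu kB\n", memfree);
    printf("buffer/page cache:          %lu kB\n", buffers + cached);
    printf("idle plus easily reclaimed: %lu kB\n",
           memfree + buffers + cached);
    return 0;
}
```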

posted by Graham Lee at 18:05  
