What’s on the other channel?

I run a company, a mission-driven software consultancy that aims to make it easier and faster to build high-quality software that preserves privacy and freedom. On the homepage you’ll find Research Watch, where I talk about research papers I read. For example, the most recent article is Runtime verification in Erlang by using contracts, which was presented at a conference last year. Articles from the last few decades are discussed: most are from the last couple of years, and nothing yet is older than I am.

At de Programmatica Ipsum, I write on “individuals, interactions, and the true valuation of the things on the left” with Adrian Kosmaczewski and a glorious feast of guest writers. The most recent issue was on work, the upcoming issue is on programming history. You can subscribe or buy our back-catalogue to read all the issues.

Anyway, those are other places where you might want to read my writing. If people are interested I could publish their feeds here, but you may as well just check each out yourself :).

Posted in books, meta-waffle

Half a bee

When you’re writing Python tutorials, you have to use Monty Python references. It’s the law. On the 40th anniversary of the release of Monty Python’s Life of Brian, I wanted to share this example that I made for collections.defaultdict but couldn’t fit into the tutorial I’m writing. It comes as an homage to the single “Eric the Half-a-Bee”.

from collections import defaultdict

class HalfABee:
    def __init__(self):
        self.is_a_bee = False
    def value(self):
        # Toggle between bee and not-bee on every call.
        self.is_a_bee = not self.is_a_bee
        return "Is a bee" if self.is_a_bee else "Not a bee"

>>> eric = defaultdict(HalfABee().value)
>>> print(eric['La di dee'])
Is a bee
>>> del eric['La di dee']  # defaultdict caches the value it made, so forget it first
>>> print(eric['La di dee'])
Not a bee

Dictionaries that can return different values for the same key are a fine example of Job Security-Driven Development.

Posted in Python

New Swift hardware

A nesting tower for swifts

The Swift Tower is an artificial nesting structure, installed in Oxford’s University Parks. That or a very blatant sponsorship deal.

Posted in Swift

King Arthur: By what name are you known?

Why is it we’re not allowed to call the Apple guy “Tim Apple” when everybody calls the O’Reilly guy “Tim O’Reilly”?

Posted in whatevs

Pythonicity

The same community that says:

There should be one-- and preferably only one --obvious way to do it.

Also says:

So essentially when someone says something is unpythonic, they are saying that the code could be re-written in a way that is a better fit for pythons coding style.

Posted in Python

Runtime verification in Erlang by using contracts

About this paper

Runtime verification in Erlang by using contracts, L.-A. Fredlund et al., presented at WFLP 2018.

Notes

Spoiler alert, but the conclusion to my book OOP the Easy Way is that we should have independently-running objects, like we do in Erlang. We should also document the contract between an object and its collaborators, like we do in Eiffel. The concurrency is there because processors are getting wider, not faster, and applications are becoming more distributed across IoT, “edge compute”, and servers. The contracts are there because we want to tame that complexity, tell people how each of the bits of our system can be composed, and what to expect if they try that. And we want to verify that documentation automatically, keeping ourselves honest.

As the authors here note, Erlang already has good facilities for recovering from errors, but that’s only really suitable for errors that are “things that might go wrong”. Programmer mistakes – call them bugs, logic errors, whatever – are things that shouldn’t go wrong, and Erlang doesn’t help there. Eiffel has really good tools for dealing with things that shouldn’t go wrong.

This paper introduces Erlang Design by Contract, which adds the relevant Eiffel bits to Erlang. At its core is a contract specification that’s richer than that found in Eiffel, allowing developers to assert that particular functions are pure, that they complete within a given time, or that a recursive algorithm is trending towards completion.
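
The contract language in EDBC is Erlang, and none of the syntax below comes from the paper. But the flavour of “this function has a precondition and a time budget” is easy to sketch in Python with a hypothetical decorator, invented here purely as illustration:

import time
from functools import wraps

def contract(pre=None, max_seconds=None):
    """Hypothetical: check a precondition and a completion-time bound."""
    def decorate(fn):
        @wraps(fn)
        def checked(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise AssertionError(f"precondition of {fn.__name__} violated")
            started = time.monotonic()
            result = fn(*args, **kwargs)
            elapsed = time.monotonic() - started
            if max_seconds is not None and elapsed > max_seconds:
                raise AssertionError(f"{fn.__name__} took {elapsed:.2f}s, over its {max_seconds}s budget")
            return result
        return checked
    return decorate

# Contract: the argument is non-negative, and any single call finishes within a second.
@contract(pre=lambda n: n >= 0, max_seconds=1.0)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

Checking purity, or that a recursion is trending towards completion, takes rather more machinery than a wrapper like this, which is part of what makes EDBC interesting.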

The bit that achieves our aim, of combining the parallelism of Erlang with the safety of Eiffel, is the cpre/3 function, for implementing concurrent preconditions. The server inspects a message, the sender (from), and its internal state, and decides either to handle the message or queue it for later, changing the server state as appropriate.

In this way, preconditions act not as assertions that trigger failure, but as wait conditions that say when an object is ready to handle a message. The Concurrency Made Easy project found this too, when they built SCOOP, which works the other way and adds concurrency to Eiffel.
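
The paper’s cpre/3 is an Erlang callback; as a rough analogy, entirely my own and not EDBC’s API, here is a toy Python server whose concurrent precondition decides whether a message is handled now or left in the mailbox to wait. The bounded buffer and all the names are invented for illustration:

from collections import deque

class BoundedBufferServer:
    """Toy server: preconditions are wait conditions, not assertions."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.mailbox = deque()

    def cpre(self, message):
        # Concurrent precondition: may this message be handled *now*,
        # given the server's current state?
        operation, _ = message
        if operation == 'put':
            return len(self.items) < self.capacity
        if operation == 'get':
            return len(self.items) > 0
        return True

    def handle(self, message):
        operation, payload = message
        if operation == 'put':
            self.items.append(payload)
        elif operation == 'get':
            payload(self.items.pop(0))  # payload is a reply callback

    def step(self):
        # Handle the first queued message whose precondition holds;
        # anything not ready yet goes back in the mailbox to wait.
        for _ in range(len(self.mailbox)):
            message = self.mailbox.popleft()
            if self.cpre(message):
                self.handle(message)
                return True
            self.mailbox.append(message)
        return False

A 'get' that arrives while the buffer is empty just sits in the mailbox until a 'put' has been handled: nothing fails, the caller simply waits longer for its reply.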

EDBC and SCOOP are two very different approaches that start from different places and aim for the middle. It’s really interesting to see that there is common ground: that “precondition as wait” arises whether you add concurrency to contracts, or contracts to concurrency.

Posted in academia, OOP

There’s more to it

We saw in Apple’s latest media event a lot of focus on privacy. They run machine learning inferences locally so they can avoid uploading photos to the cloud (though Photo Stream means they’ll get there sooner or later anyway). My Twitter stream frequently features adverts from Apple, saying “we don’t sell your data”.

Of course, none of the companies that Apple are having a dig at “sell your data”, either. That’s an old-world way of understanding advertising, from the days when unscrupulous magazine publishers would sell their mailing lists to bulk mail senders.

These days, it’s more like the postal service says “we know which people we deliver National Geographic to, so give us your bulk mail and we’ll make sure it gets to the best people”. Only in addition to National Geographic, they’re looking at kids’ comics, past due demands, royalty cheques, postcards from holiday destinations, and of course photos back from the developers.

To truly break the surveillance capitalism economy and give me control of my data, Apple can’t merely give me a private phone. But that is all they can do, hence the focus.

Going back to the analogy of postal advertising, Apple offer a secure PO Box service where nobody knows what mail I’ve got. But the surveillance-industrial complex still knows what mail they deliver to that box, and what mail gets picked up from there. To go full thermonuclear war, as promised, we would need to get applications (including web apps) onto privacy-supporting backend platforms.

But Apple stopped selling Xserve, Mac Mini Server, and Mac Pro Server years ago. Mojave Server no longer contains: well, frankly, it no longer contains the server bits. And because they don’t have a server solution, they can’t tell you how to do your server solution. They can’t say “don’t use Google cloud, it means you’re giving your customers’ data to the surveillance-industrial complex”, because that’s anticompetitive.

At the Labrary, I run my own Nextcloud for file sharing, contacts, calendars, tasks etc. I host code on my own gitlab. I run my own mail service. That’s all work that other companies wouldn’t take on, expertise that’s not core to my business. But it does mean I know where all company-related data is, and that it’s not being shared with the surveillance-industrial complex. Not by me, anyway.

There’s more to Apple’s thermonuclear war on the surveillance-industrial complex than selling privacy-supporting edge devices. That small part of the overall problem supports a trillion-dollar company.

It seems like there’s a lot that could be interesting in the gap.

Posted in AAPL, Privacy

Hyperloops for our minds

We were promised a bicycle for our minds. What we got was more like a highly-efficient, privately run mass transit tunnel. It takes us where it’s going, assuming we pay the owner. Want to go somewhere else? Tough. Can’t afford to take part? Tough.

Bicycles have a complicated place in society. Right outside this building is one of London’s cycle superhighways, designed to make it easier and safer to cycle across London. However, as Amsterdam found, you also need to change the people if you want to make cycling safer.

Changing the people is, perhaps, where the wheels fell off the computing bicycle. Imagine that you have some lofty goal, say, to organise the world’s information and make it universally accessible and useful. Then you discover how expensive that is. Then you discover that people will pay you to tell people that their information is more universally accessible and useful than some other information. Then you discover that if you just quickly give people information that’s engaging, rather than accessible and useful, they come back for more. Then you discover that the people who were paying you will pay you to tell people that their information is more engaging.

Then you don’t have a bicycle for the mind any more, you have a hyperloop for the mind. And that’s depressing. But where there’s a problem, there’s an opportunity: you can also buy your mindfulness meditation directly from your mind-hyperloop, with of course a suitable share of the subscription fee going straight to the platform vendor. No point using a computer to fix a problem if a trillion-dollar multinational isn’t going to profit (and of course transmit, collect, maintain, process, and use all associated information, including passing it to their subsidiaries and service partners) from it!

It’s commonplace for people to look backward at this point. The “bicycle for our minds” quote comes from 1990, so maybe we need to recapture some of the computing magic from 1990? Maybe. What’s more important is that we accept that “forward” doesn’t necessarily mean continuing in the direction we took to get here. There are those who say that denying surveillance capitalists and other trillion-dollar multinationals their right to the pie (minus the tiny slice that trickles down to us) is modern-day Luddism.

It’s a better analogy than they realise. Luddites, and contemporary protestors, were not anti-technology. Many were technologists, skilled machine workers at the forefront of the industrial revolution. What they protested against was the use of machines to circumvent labour laws and to produce low-quality goods that were not reflective of their crafts. The gig economies, zero-hours contracts, and engagement drivers of their day.

We don’t need to recall the heyday of the microcomputer: those really were devices of limited capability that gave a limited share of the population an insight into what computers could do, one day, if they were willing to work hard at it. Penny farthings for middle-class minds, maybe. But we do need to say: hold on, these machines are being used to circumvent labour laws, or democracy, or individual expression, or human intellect, and we can put the machinery to better use. Don’t smash the machines; smash the systems that made the machines.

Posted in whatevs

Ratio

The web has a weird history with comments. I have a book called Zero Comments, a critique of blog culture from 2008. It opens by quoting from a 2005 post from a now defunct website, stodge.org. The Wayback Machine does not capture the original post, so here is the quote as lifted from the book:

In the world of blogging ‘0 Comments’ is an unambiguous statistic that means absolutely nobody cares. The awful truth about blogging is that there are far more people who write blogs than who actually read blogs.

Hmm. If somebody comments on your blog, it means that they care about what you’re saying. What’s the correct thing to do to people who care about your output? In 2011, the answer was to push them away:

It’s been a very difficult decision (I love reading comments on my articles, and they’re almost unfailingly insightful and valuable), but I’ve finally switched comments off.

I experimented with Comments Off, then ultimately turned them back on in 2014:

having comments switched off dilutes the experience for those people who did want to see what people were talking about. There’d be some chat over on twitter (some of which mentions me, some of which doesn’t), and some over on the blog’s Facebook page. Then people will mention the posts on their favourite forums like Reddit, and a different conversation would happen over there. None of that will stop with comments on, and I wouldn’t want to stop it. Having comments here should guide people, without forcing them, to comment where everyone can see them.

This analysis still holds. People comment on my posts over at Hacker News and similar sites, whether I post them there or not. The sorts of comments that you would expect from Hacker News commenters, therefore, rarely appear here. They appear there. I can’t stop that. I can’t discourage it. I can merely offer an alternative.

In 2019 people talk about the Ratio:

While opinions on the exact numerical specifications of The Ratio vary, in short, it goes something like this: If the number of replies to a tweet vastly outpaces its engagement in terms of likes and retweets, then something has gone horribly wrong.

So now saying something that people want to talk about, which in 2005 was a sign that they cared, is a sign that you messed up. The goal is to say things that people don’t care about, but will uncritically share or like. If too many people comment, you’ve been ratioed.

I don’t really have a “solution”: there may be human solutions to technical problems, but there aren’t technical solutions to human problems. And it seems that we humans on the web have a problem: we want an indication that people are interested in what we say, but not too much of an indication, nor too much interest.

Posted in social-science

Mach and Matchmaker: kernel and language support for object-oriented distributed systems

About this paper

Mach and Matchmaker: kernel and language support for object-oriented distributed systems, Michael B. Jones and Richard F. Rashid, from the proceedings of OOPSLA ’86.

Notes

Yes, 1986 was a long time ago, but the topics of Mach and Matchmaker are still relevant, and I find it interesting to read about their genesis and development. I also find that it helps me put today’s uses – or abandonments – in context.

Mach

Two main families of operating systems under development today are still based on the CMU Mach project. Let’s get discussing the HURD out of the way, first. The HURD is based on GNU Mach, which is itself based on the University of Utah’s Mach 4.0 project. GNU Mach is a microkernel, so almost all of the operating system facilities are provided by user-space processes. An interesting implication is that a regular user can create a sub-HURD, an environment with a whole UNIX-like system running within their user account on the host HURD.

Not many people do that, though. HURD is very interesting to read and use, but didn’t fulfil its goal of becoming a free host for the GNU system that made it easy to support hardware. Linux came along, as a free host for the GNU system that made it worthwhile to support hardware. I enjoy using the HURD, but we’ll leave it here.

…because we need to talk about the other operating system family that uses Mach: macOS/iOS/watchOS/tvOS/whatever the thing that runs the Touch Bar on a MacBook Pro is called OS. These are based on CMU Mach 2.5, for the most part, which is a monolithic kernel. Broadly speaking, Mach was developed by adding bits to the 4.2 (then 4.3) BSD kernel, until it became possible to remove all of the BSD bits and still have a working BSD-like system. Mach 2.5 represents the end of the “add Mach bits to a BSD kernel” part of the process.

Based on an earlier networked environment called Accent, Mach has an object-oriented facility in which object references are called “ports”, and you send them messages by…um, sending them messages. But sending them messages is really hard, because you have to get all the bits of all the parameters in the right place. What you need is…

Matchmaker

Originally built for Accent, Matchmaker is an Interface Definition Language in which you describe the messages you want a client and server to use, and it generates procedures for sending the messages and receiving the responses in the client, and receiving the messages and sending the responses in the server.

Being built atop Mach, Matchmaker turns those messages into Mach messages sent between Mach ports. What Mach does to get the messages around is transparent, so it might take a message on one computer and deliver it to a server on a different computer, maybe even running a different architecture.
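
To get a feel for what the generated code does, here is a rough Python sketch of the shape of an IDL-generated client stub and server dispatch routine. It is not Matchmaker output (that was C, Common LISP, Ada, or PERQ Pascal over real Mach messages); the Port here is an in-process pair of queues standing in for the kernel’s message transport, and the message ids are made up:

import queue
import threading

class Port:
    """Toy stand-in for a Mach port: somewhere to send and receive messages."""
    def __init__(self):
        self.requests = queue.Queue()
        self.replies = queue.Queue()

# What a generated client stub might look like: marshal, send, await the reply.
def add_client_stub(port, a, b):
    port.requests.put({'msg_id': 100, 'args': (a, b)})
    reply = port.replies.get()
    return reply['result']

# What the generated server-side dispatch might look like: receive,
# demultiplex on the message id, call the real routine, reply.
def serve_one(port, implementation):
    request = port.requests.get()
    if request['msg_id'] == 100:
        result = implementation.add(*request['args'])
        port.replies.put({'msg_id': 101, 'result': result})

class Calculator:
    def add(self, a, b):
        return a + b

port = Port()
threading.Thread(target=serve_one, args=(port, Calculator()), daemon=True).start()
print(add_client_stub(port, 2, 3))  # 5; the caller never sees the marshalling

The point of the IDL is that nobody writes the stub or the dispatch by hand: you describe the messages once, and the tool emits both halves for whichever language and transport each side happens to use.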

That transparency was a goal of a lot of object-oriented remote procedure call systems in the 1990s, and by and large it fell flat. The reason is Peter Deutsch’s Eight Fallacies of Distributed Computing. Basically, you usually want to know when your message is going out over a network, because that changes everything from how likely it is to be received and how likely you are to get a response, to how expensive it is to send.

Matchmaker supported C, Common LISP, Ada, and PERQ Pascal; Accent and Mach messages; and a bunch of different computer architectures. Unfortunately it supported them all through specific knowledge of each, and the paper discussed here acknowledges how difficult that makes the system to work on, and proposes future work to clean it up. It’s not clear that future work was ever done; modern Machs all use MIG, an “interim subset” of Matchmaker that only supports C.

Object-oriented design

In my book OOP the Easy Way, I explore the idea that objects are supposed to be small, independent computer programs that communicate over the loosely-coupled channel that is message-sending. Mach and Matchmaker together implement this design. Your objects can be in different languages, on different computers, even in different host operating systems (there were Mach IPC implementations for Mach, obviously, but also VAX Unix and non-Mach BSD). As long as they understand the same format for messages, they can speak to each other.

Consider a Cocoa application. It may be written in Swift or Objective-C or Objective-C++ or Python or whatever. It has a reference to a window, where it draws its views. The app sees that window as an Objective-C object, but the app doesn’t have a connection to the framebuffer to draw windows.

The Window Server has that connection. So when you create a window in your Cocoa application, you actually send a message to the window server and get back a port that represents your window. Messages sent to the window are forwarded to the window server, and events that happen in the UI (like the window being closed or resized) are sent as messages to the application.
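
That “window object that is really a port” arrangement is easy to sketch: a proxy that turns method calls into messages for the server that owns the framebuffer. This is an illustration of the idea, reusing the toy Port shape from the sketch above, and is not how AppKit actually implements NSWindow:

import queue

class Port:
    """Toy stand-in for a Mach port, as in the earlier sketch."""
    def __init__(self):
        self.requests = queue.Queue()

class WindowProxy:
    """Looks like a local window object; every call becomes a message on a port."""
    def __init__(self, port, window_id):
        self._port = port
        self._window_id = window_id

    def __getattr__(self, selector):
        def send(*args):
            # The app never touches the framebuffer; it just mails the window server.
            self._port.requests.put({'window': self._window_id,
                                     'selector': selector,
                                     'args': args})
        return send

window = WindowProxy(Port(), window_id=42)
window.setTitle("Hello")  # queues {'window': 42, 'selector': 'setTitle', 'args': ('Hello',)}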

Because of the way that Mach can transparently forward messages, it’s theoretically possible for an application on one computer to display its UI on another computer’s window server. In fact, that’s more than a theoretical possibility. NeXTSTEP supported exactly that capability, and an application with the NXHost default set could draw to a window server on a different computer, even one with a different CPU architecture.

This idea of loosely-coupled objects keeps coming up, but particular implementations rarely stay around for long. Mach messages still exist on HURD and Apple’s stuff (both using MIG, rather than Matchmaker), but HURD is tiny and Apple recommend against using Mach or MIG directly, favouring other interfaces like XPC or the traditional UNIX IPC systems that are implemented atop Mach. Similarly, PDO has come and gone, as have CORBA and its descendants DSOM and DOE.

Even within the world of “let’s use HTTP for everything”, SOAP gave way to REST, which gave way to the limited thing you get if you do the CRUD bits of REST without doing the DAP bits. What you learn by understanding Mach and its interfaces is that this scheme can be applied everywhere from an internet service down to an operating system component.

Posted in academia, architecture of sorts, OOP