Applications and Spelling of Boole

While Alan Turing is regarded by many as the grandfather of Artificial Intelligence, George Boole should be entitled to some claim to that epithet too. His Investigation of the Laws of Thought is nothing other than a systematisation of “those universal laws of thought which are the basis of all reasoning”. The regularisation of logic and probability into an algebraic form renders them amenable to the sort of computing that Turing was later to show could be just as well performed mechanically or electronically as with pencil and paper.

But when did people start talking about the logic of binary operations in computers as being due to Boole? Turing appears never to have mentioned Boole by name: although he certainly talked about the benefits of implementing a computer’s memory as a collection of 0s and 1s, and described operations on them, he did not call those operations Boolean or reference Boole.

In the ACM digital library, the earliest use of the word “Boolean” is in Symbolic synthesis of digital computers, from 1952. In its abstract, Irving S. Reed describes a computer as “a Boolean machine” and “an automatic operational filing system”. He cites his own technical report from 1951:

Equations (1.33) and (1.35) show that the simple Boolean system, given in (1.34) may be analysed physically by a machine consisting of N clocked flip flops for the dependent variables and suitable physical devices for producing the sum and product of the various variables. Such a machine will be called the simple Boolean machine.

The best examples of simple Boolean machines known to this author are the Maddidas and (or) universal computers being built or considered by Computer Research Corporation, Northrop Aircraft Inc, Hughes Aircraft, Cal. Tech., and others. It is this author’s belief that all the electronic and digital relay computers in existence today may be interpreted as simple Boolean machines if the various elements of these machines are regarded in an appropriate manner, but this has yet to be proved.
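
Reed’s “simple Boolean machine” is straightforward to caricature in Python (the language of the code later in this post): treat each clocked flip-flop as one bit of state, and each “suitable physical device” as a Boolean sum (or) or product (and) over the current bits. The three update rules below are invented purely for illustration; they are not taken from Reed’s report.

def clock(state):
    # One tick of a toy three-flip-flop machine: each flip-flop's next
    # value is a Boolean sum, product, or complement of the current ones.
    a, b, c = state
    return (b or c,   # a Boolean sum
            a and c,  # a Boolean product
            not a)    # a complement

state = (False, False, True)
for tick in range(4):
    print(tick, state)
    state = clock(state)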

So at least in the USA, the connection between digital computing and Boolean logic was being explored almost as soon as the computer was invented. Though not universally: the book “The Origins of Digital Computers”, edited by Brian Randell, with articles from Charles Babbage, Grace Hopper, John Mauchly, and others, doesn’t mention Boole at all. Neither does von Neumann’s famous “first draft” report on the EDVAC.

So, second question. Why do programmers spell Boole bool? Who first decided that five characters was too many, and that four was just right?

Some early programming languages, like Lisp, don’t have a logical data type at all. Lisp uses the empty list to mean “false” and anything else to mean true. Snobol is weird (he said, surprising nobody). It also doesn’t have a logical type, conditional execution being predicated on whether an operation signals failure. So the “less than” function can return the empty string if a<b, or it can fail.
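
Python, which supplies the code examples later in this post, keeps a shadow of the Lisp convention even though it has a real bool type: empty collections are falsy, non-empty ones are truthy.

>>> bool([]), bool(['spam'])
(False, True)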

Fortran has a LOGICAL type, logically. COBOL, being designed to be illogical wherever Fortran is logical, gets by with level 88 condition names rather than a logical type. Simula, Algol and Pascal use the word ‘boolean’, modulo capitalisation.

ML definitely has a bool type, but did it always? I can’t tell whether it arrived with Standard ML (developed through the 1980s and defined in 1990) or was already there in the original ML of 1973. Nonetheless, it does appear that ML is the source of misspelled Booles.

Digital Declutter

I’ve been reading and listening to various books about the attention/surveillance economy and the rise of fascism in the Anglosphere and beyond, and I have decided to disconnect from the daily outrage and the impotent swiping of “social” “content”. The most immediately actionable advice came from Cal Newport’s Digital Minimalism. I will therefore be undertaking a digital declutter in May.

Specifically this means:

  • no social media. In fact I haven’t been on most of them for all of April, so this is already in play. By continuing it into May, I intend to do a better job of choosing things to do when I’m not hitting refresh.
  • chat app alerts on for close friends and family only.
  • streaming TV only when watching with other people.
  • email once per day.
  • no RSS.
  • audiobooks only while driving.
  • Slack once per day.
  • web browsing only when it progresses a specific work (or non-computering) task.
  • at least one walk per day, of at least half an hour, with no technology.
  • phone permanently in Do Not Disturb mode.

It’s possible that I end up blogging more, if that’s what I start thinking of when I’m not browsing the twitters. Or less. We’ll find out over the coming weeks.

My posts for De Programmatica Ipsum are written and scheduled, so service there is not interrupted. And I’m not becoming a hermit, just digitally decluttering. Arrange Office Hours, come to Brum AI, or find me somewhere else, if you want to chat!

What’s on the other channel?

I run a company, a mission-driven software consultancy that aims to make it easier and faster to build high-quality software that preserves privacy and freedom. On the homepage you’ll find Research Watch, where I talk about research papers I read. For example, the most recent article is Runtime verification in Erlang by using contracts, which was presented at a conference last year. Articles from the last few decades are discussed: most are from the last couple of years, and nothing yet is older than I am.

At De Programmatica Ipsum, I write on “individuals, interactions, and the true valuation of the things on the left” with Adrian Kosmaczewski and a glorious feast of guest writers. The most recent issue was on work; the upcoming issue is on programming history. You can subscribe or buy our back-catalogue to read all the issues.

Anyway, those are other places where you might want to read my writing. If people are interested I could publish their feeds here, but you may as well just check each out yourself :).

Half a bee

When you’re writing Python tutorials, you have to use Monty Python references. It’s the law. On the 40th anniversary of the release of Monty Python’s Life of Brian, I wanted to share this example, made for collections.defaultdict, which doesn’t fit in the tutorial I’m writing. It comes as an homage to the single “Eric the Half a Bee”.

from collections import defaultdict

class HalfABee:
    def __init__(self):
        self.is_a_bee = False
    def value(self):
        # Toggle on every call, so successive calls alternate.
        self.is_a_bee = not self.is_a_bee
        return "Is a bee" if self.is_a_bee else "Not a bee"

>>> eric = defaultdict(HalfABee().value, {})
>>> print(eric['La di dee'])
Is a bee
>>> # defaultdict stores the first default it generates against the key,
>>> # so evict the key to let the factory toggle again on the next lookup.
>>> del eric['La di dee']
>>> print(eric['La di dee'])
Not a bee

Dictionaries that can return different values for the same key are a fine example of Job Security-Driven Development.

Pythonicity

The same community that says:

There should be one– and preferably only one –obvious way to do it.

Also says:

So essentially when someone says something is unpythonic, they are saying that the code could be re-written in a way that is a better fit for pythons coding style.

There’s more to it

We saw in Apple’s latest media event a lot of focus on privacy. They run machine learning inferences locally so they can avoid uploading photos to the cloud (though Photo Stream means they’ll get there sooner or later anyway). My Twitter stream frequently features adverts from Apple, saying “we don’t sell your data”.

Of course, none of the companies that Apple are having a dig at “sell your data”, either. That’s an old-world way of understanding advertising, when unscrupulous magazine publishers may have sold their mailing lists to bulk mail senders.

These days, it’s more like the postal service says “we know which people we deliver National Geographic to, so give us your bulk mail and we’ll make sure it gets to the best people”. Only in addition to National Geographic, they’re looking at kids’ comics, past due demands, royalty cheques, postcards from holiday destinations, and of course photos back from the developers.

To truly break the surveillance capitalism economy and give me control of my data, Apple can’t merely give me a private phone. But that is all they can do, hence the focus.

Going back to the analogy of postal advertising, Apple offer a secure PO Box service where nobody knows what mail I’ve got. But the surveillance-industrial complex still knows what mail they deliver to that box, and what mail gets picked up from there. To go full thermonuclear war, as promised, we would need to get applications (including web apps) onto privacy-supporting backend platforms.

But Apple stopped selling Xserve, Mac Mini Server, and Mac Pro Server years ago. Mojave Server no longer contains: well, frankly, it no longer contains the server bits. And because they don’t have a server solution, they can’t tell you how to do your server solution. They can’t say “don’t use Google cloud, it means you’re giving your customers’ data to the surveillance-industrial complex”, because that’s anticompetitive.

At the Labrary, I run my own Nextcloud for file sharing, contacts, calendars, tasks, and so on. I host code on my own GitLab instance. I run my own mail service. That’s all work that other companies wouldn’t take on, expertise that’s not core to my business. But it does mean I know where all company-related data is, and that it’s not being shared with the surveillance-industrial complex. Not by me, anyway.

There’s more to Apple’s thermonuclear war on the surveillance-industrial complex than selling privacy-supporting edge devices. That small part of the overall problem supports a trillion-dollar company.

It seems like there’s a lot that could be interesting in the gap.

Hyperloops for our minds

We were promised a bicycle for our minds. What we got was more like a highly efficient, privately run mass transit tunnel. It takes us where it’s going, assuming we pay the owner. Want to go somewhere else? Tough. Can’t afford to take part? Tough.

Bicycles have a complicated place in society. Right outside this building is one of London’s cycle superhighways, designed to make it easier and safer to cycle across London. However, as Amsterdam found, you also need to change the people if you want to make cycling safer.

Changing the people is, perhaps, where the wheels fell off the computing bicycle. Imagine that you have some lofty goal, say, to organise the world’s information and make it universally accessible and useful. Then you discover how expensive that is. Then you discover that people will pay you to tell people that their information is more universally accessible and useful than some other information. Then you discover that if you just quickly give people information that’s engaging, rather than accessible and useful, they come back for more. Then you discover that the people who were paying you will pay you to tell people that their information is more engaging.

Then you don’t have a bicycle for the mind any more; you have a hyperloop for the mind. And that’s depressing. But where there’s a problem, there’s an opportunity: you can also buy your mindfulness meditation directly from your mind-hyperloop, with of course a suitable share of the subscription fee going straight to the platform vendor. No point using a computer to fix a problem if a trillion-dollar multinational isn’t going to profit from it (and of course transmit, collect, maintain, process, and use all associated information, including passing it to their subsidiaries and service partners)!

It’s commonplace for people to look backward at this point. The “bicycle for our minds” quote comes from 1990, so maybe we need to recapture some of the computing magic of 1990? Maybe. What’s more important is that we accept that “forward” doesn’t necessarily mean continuing in the direction we took to get here. There are those who say that denying surveillance capitalists and other trillion-dollar multinationals their right to (the pie, minus the tiny slice that trickles down to us) is modern-day Luddism.

It’s a better analogy than they realise. The Luddites, and other protestors of their time, were not anti-technology. Many were technologists, skilled machine workers at the forefront of the industrial revolution. What they protested against was the use of machines to circumvent labour laws and to produce low-quality goods that did not reflect their craft: the gig economies, zero-hours contracts, and engagement drivers of their day.

We don’t need to recall the heyday of the microcomputer: those really were devices of limited capability that gave a limited share of the population an insight into what computers could one day do, if people were willing to work hard at it. Penny-farthings for middle-class minds, maybe. But we do need to say: hold on, these machines are being used to circumvent labour laws, or democracy, or individual expression, or human intellect, and we can put the machinery to better use. Don’t smash the machines; smash the systems that made the machines.

Ratio

The web has a weird history with comments. I have a book called Zero Comments, a critique of blog culture from 2008. It opens by quoting from a 2005 post from a now defunct website, stodge.org. The Wayback Machine does not capture the original post, so here is the quote as lifted from the book:

In the world of blogging ‘0 Comments’ is an unambiguous statistic that means absolutely nobody cares. The awful truth about blogging is that there are far more people who write blogs than who actually read blogs.

Hmm. If somebody comments on your blog, it means that they care about what you’re saying. What’s the correct thing to do to people who care about your output? In 2011, the answer was to push them away:

It’s been a very difficult decision (I love reading comments on my articles, and they’re almost unfailingly insightful and valuable), but I’ve finally switched comments off.

I experimented with Comments Off, then ultimately turned them back on in 2014:

having comments switched off dilutes the experience for those people who did want to see what people were talking about. There’d be some chat over on twitter (some of which mentions me, some of which doesn’t), and some over on the blog’s Facebook page. Then people will mention the posts on their favourite forums like Reddit, and a different conversation would happen over there. None of that will stop with comments on, and I wouldn’t want to stop it. Having comments here should guide people, without forcing them, to comment where everyone can see them.

This analysis still holds. People comment on my posts over at Hacker News and similar sites, whether I post them there or not. The sorts of comments that you would expect from Hacker News commenters, therefore, rarely appear here. They appear there. I can’t stop that. I can’t discourage it. I can merely offer an alternative.

In 2019 people talk about the Ratio:

While opinions on the exact numerical specifications of The Ratio vary, in short, it goes something like this: If the number of replies to a tweet vastly outpaces its engagement in terms of likes and retweets, then something has gone horribly wrong.

So now saying something that people want to talk about, which in 2005 was a sign that they cared, is a sign that you messed up. The goal is to say things that people don’t care about, but will uncritically share or like. If too many people comment, you’ve been ratioed.

I don’t really have a “solution”: there may be human solutions to technical problems, but there aren’t technical solutions to human problems. And it seems that we humans on the web have a problem: we want an indication that people are interested in what we say, but not too much of an indication, or too much interest.