Falsehoods programmers who write “falsehoods programmers believe” articles believe about programmers who read “falsehoods programmers believe” articles

For reasons that will become clear, I can’t structure this article as a “falsehoods programmers believe” article, much as that would add to the effect.

There are plenty of such articles in the world, so turn to your favourite search engine, type in “falsehoods programmers believe”, and orient yourself to this concept. You’ll see plenty of articles that list statements that challenge assumptions about a particular problem domain. Some of them list counterexamples, and a subset of those give suggestions of ways to account for the counterexamples.

As the sort of programmer who writes “falsehoods programmers believe” articles, I believe that interesting challenges to my beliefs will trigger some curiosity, and lead me to research the counterexamples and solutions. Or at least, to file away the fact that counterexamples exist until I need it, or am otherwise more motivated to learn about it.

But that motivation is not universal. The fact that I treat it as universal turns it into a falsehood I believe about readers of falsehoods articles. Complaints abound that falsehoods articles do not lead directly to fish on the plate. Some readers want a clear path from “thing you might think is true but isn’t true” to “JavaScript you can paste into your project to account for it not being true”. These people are not well-served by falsehoods articles.

On the features of a portfolio career

Since starting The Labrary late last year, I’ve been able to work with lots of different organisations and lots of different people. You too can hire The Labrary to make it easier and faster to create high-quality software that respects privacy and freedom, though not before January 2020 at the earliest.

In fact I’d already had a portfolio career before then, but a sequential one. A couple of years with this employer, a year with that, a phase as an indie, then back to another employer, and so on. At the moment I balance a 50% job with Labrary engagements.

The first thing to notice is that going part time starts with asking the employer. Whether it’s your current employer or an interviewer for a potential position, you need to start that conversation. When I first went from full-time to 80%, a few people said something like “I’d love to do that, but I doubt I’d be allowed”. I infer from this that they haven’t tried asking, which means it definitely isn’t about to happen.

My experience is that many employers didn’t even have the idea of part-time contracts in mind, so there’s no basis on which they can say yes. There isn’t really one for “no” either, except that it’s the status quo. Having a follow-up conversation to discuss their concerns both normalises the idea of part-time employees, and demonstrates that you’re working with them to find a satisfactory arrangement: a sign of a thoughtful employee who you want to keep around, even if only some of the time!

Job-swapping works for me because I like to see a lot of different contexts and form synthetic ideas across all of them. Working with different teams at the same time is really beneficial because I constantly get that sense of change and excitement. It’s Monday, so I’m not there any more, I’m here: what’s moved on in the last week?

It also makes it easier to deal with suboptimal working environments. I’m one of those people who likes being in an office and the social connections of talking to my team, and doesn’t get on well with working from home alone (particularly when separated from my colleagues by timezones and oceans). If I only have a week of that before I’m back in society, it’s bearable, so I can consider taking on engagements that otherwise wouldn’t work for me. I would expect that applies the other way around, for people who are natural hermits and would prefer not to be in shared work spaces.

However, have you ever experienced that feeling of dread when you come back from a week of holiday to discover that pile of unread emails, work-chat-app notifications, and meeting bookings you don’t know the context for? Imagine having that every week, and you know what job-hopping is like. I’m not great at time management anyway, and having to take extra care to ensure I know what project C is up to while I’m eyeballs-deep in project H work is difficult. This difficulty is compounded when clients restrict their work to their devices; a reasonable security requirement but one that has led to the point now where I have four different computers at home with different email accounts, VPN access, chat programs, etc.

Also, absent employee syndrome hits in two different ways. For some reason, the median lead time for setting up meetings seems to be a week. My guess is that this is because the timeslot you’re in now, while you’re all trying to set up the meeting, is definitely free. Anyway. Imagine I’m in now, and won’t be next week. There’s a good chance that the meeting goes ahead without me, because it’s best not to delay these things. Now imagine I’m not in now, but will be next week. There’s a good chance that the meeting goes ahead without me anyway, because nobody can see me when they book the meeting, so they don’t remember that I might want to be involved.

That may seem like your idea of heaven: a guaranteed workaround to get out of all meetings :). But to me, the interesting software engineering happens in the discussion and it’s only the rote bits like coding that happen in isolation. So if I’m not in the room where the decisions are made, then I’m not really engineering the software.

Maybe there’s some other approach that ameliorates some of the downsides of this arrangement. But for me, so far, multiple workplaces is better than one, and helping many people by fulfilling the Labrary’s mission is better than helping a few.

Applications and Spelling of Boole

While Alan Turing is regarded by many as the grandfather of Artificial Intelligence, George Boole should be entitled to some claim to that epithet too. His Investigation of the Laws of Thought is nothing other than a systematisation of “those universal laws of thought which are the basis of all reasoning”. The regularisation of logic and probability into an algebraic form renders them amenable to the sort of computing that Turing was later to show could be just as well performed mechanically or electronically as with pencil and paper.

But when did people start talking about the logic of binary operations in computers as being due to Boole? Turing appears never to have mentioned his name: although he certainly did talk about the benefits of implementing a computer’s memory as a collection of 0s and 1s, and describe operations thereon, he did not call them Boolean or reference Boole.

In the ACM digital library, Symbolic synthesis of digital computers from 1952 is the earliest use of the word “Boolean”. Irving S. Reed describes a computer as “a Boolean machine” and “an automatic operational filing system” in its abstract. He cites his own technical report from 1951:

Equations (1.33) and (1.35) show that the simple Boolean system, given in (1.34) may be analysed physically by a machine consisting of N clocked flip flops for the dependent variables and suitable physical devices for producing the sum and product of the various variables. Such a machine will be called the simple Boolean machine.

The best examples of simple Boolean machines known to this author are the Maddidas and (or) universal computers being built or considered by Computer Research Corporation, Northrop Aircraft Inc, Hughes Aircraft, Cal. Tech., and others. It is this author’s belief that all the electronic and digital relay computers in existence today may be interpreted as simple Boolean machines if the various elements of these machines are regarded in an appropriate manner, but this has yet to be proved.

So at least in the USA, the connection between digital computing and Boolean logic was being explored almost as soon as the computer was invented. Though not universally: the book “The Origins of Digital Computers” edited by Brian Randell, with articles from Charles Babbage, Grace Hopper, John Mauchly, and others, doesn’t mention Boole at all. Neither does von Neumann’s famous “first draft” report on the EDVAC.

So, second question. Why do programmers spell Boole bool? Who first decided that five characters was too many, and that four was just right?

Some early programming languages, like Lisp, don’t have a logical data type at all. Lisp uses the empty list to mean “false” and anything else to mean true. Snobol is weird (he said, surprising nobody). It also doesn’t have a logical type, conditional execution being predicated on whether an operation signals failure. So the “less than” function can return the empty string if a<b, or it can fail.

Fortran has a LOGICAL type, logically. COBOL, being designed to be illogical wherever Fortran is logical, has level-88 condition names instead of a logical type. Simula, Algol and Pascal use the word ‘boolean’, modulo capitalisation.

ML definitely has a bool type, but did it always? I can’t see whether it was introduced in Standard ML (1980s-1990), or earlier (1973+). Nonetheless, it does appear that ML is the source of misspelled Booles.

Hyperloops for our minds

We were promised a bicycle for our minds. What we got was more like a highly-efficient, privately run mass transit tunnel. It takes us where it’s going, assuming we pay the owner. Want to go somewhere else? Tough. Can’t afford to take part? Tough.

Bicycles have a complicated place in society. Right outside this building is one of London’s cycle superhighways, designed to make it easier and safer to cycle across London. However, as Amsterdam found, you also need to change the people if you want to make cycling safer.

Changing the people is, perhaps, where the wheels fell off the computing bicycle. Imagine that you have some lofty goal, say, to organise the world’s information and make it universally accessible and useful. Then you discover how expensive that is. Then you discover that people will pay you to tell people that their information is more universally accessible and useful than some other information. Then you discover that if you just quickly give people information that’s engaging, rather than accessible and useful, they come back for more. Then you discover that the people who were paying you will pay you to tell people that their information is more engaging.

Then you don’t have a bicycle for the mind any more, you have a hyperloop for the mind. And that’s depressing. But where there’s a problem, there’s an opportunity: you can also buy your mindfulness meditation directly from your mind-hyperloop, with of course a suitable share of the subscription fee going straight to the platform vendor. No point using a computer to fix a problem if a trillion-dollar multinational isn’t going to profit (and of course transmit, collect, maintain, process, and use all associated information, including passing it to their subsidiaries and service partners) from it!

It’s commonplace for people to look backward at this point. The “bicycle for our minds” quote comes from 1990, so maybe we need to recapture some of the computing magic from 1990? Maybe. What’s more important is that we accept that “forward” doesn’t necessarily mean continuing in the direction we took to get to here. There are those who say that denying the rights of surveillance capitalists and other trillion-dollar multinationals to their (pie minus tiny slice that trickles down to us) is modern-day Luddism.

It’s a better analogy than they realise. Luddites, and contemporary protestors, were not anti-technology. Many were technologists, skilled machine workers at the forefront of the industrial revolution. What they protested against was the use of machines to circumvent labour laws and to produce low-quality goods that were not reflective of their crafts. The gig economies, zero-hours contracts, and engagement drivers of their day.

We don’t need to recall the heyday of the microcomputer: they really were devices of limited capability that gave a limited share of the population an insight into what computers could do, one day, if they were willing to work hard at it. Penny farthings for middle-class minds, maybe. But we do need to say hold on, these machines are being used to circumvent labour laws, or democracy, or individual expression, or human intellect, and we can put the machinery to better use. Don’t smash the machines, smash the systems that made the machines.

Eating the bubble

How far back do you want to go to find people telling you that JavaScript is eating the world? Last year? Two years ago? Three? Five?

It’s a slow digestion process, if that’s what is happening. Five years ago, there was no such thing as Swift. For the last four years, I’ve been told at mobile dev conferences that Swift is eating the world, too. It seems like the clear, unambiguous direction being taken by software is different depending on which room you’re in.

It’s time to leave the room. It looks sunny outside, but there are a few clouds in the sky. I pull out my phone and check the weather forecast, and a huge distributed system of C, Fortran, Java, and CUDA tells me that I’m probably going to be lucky and stay dry. That means I’m likely to go out to the Olimpick Games this evening, so I make sure to grab some cash. A huge distributed system of C, COBOL and Java rumbles into action to give me my money, and tell my bank that they owe a little more money to the bank that operates the ATM.

It seems like quite a lot of the world is safe from whichever bubble is being eaten.

What Lenin taught me about software movements

In What is to be done?: Burning Questions of our Movement, Lenin lists four roles that contribute to fomenting revolution – the theoreticians, the propagandists, the agitators, and the organisers:

The theoreticians write research works on tariff policy, with the “call”, say, to struggle for commercial treaties and for Free Trade. The propagandist does the same thing in the periodical press, and the agitator in public speeches. At the present time [1901], the “concrete action” of the masses takes the form of signing petitions to the Reichstag against raising the corn duties. The call for this action comes indirectly from the theoreticians, the propagandists, and the agitators, and, directly, from the workers who take the petition lists to the factories and to private homes for the gathering of signatures.

Then later:

We said that a Social Democrat, if he really believes it necessary to develop comprehensively the political consciousness of the proletariat, must “go among all classes of the population”. This gives rise to the questions: how is this to be done? have we enough forces to do this? is there a basis for such work among all the other classes? will this not mean a retreat, or lead to a retreat, from the class point of view? Let us deal with these questions.

We must “go among all classes of the population” as theoreticians, as propagandists, as agitators, and as organisers.

Side note for Humpty-Dumpties: In this post I’m going to use “propaganda” in its current dictionary meaning as a collection of messages intended to influence opinions or behaviour. I do not mean the pejorative interpretation, somebody else’s propaganda that I disagree with. Some of the messages and calls below I agree with, others I do not.

Given this tool for understanding a movement, we can see it at work in the software industry. We can see, for example, that the Free Software Foundation has a core of theoreticians, a Campaigns Team that builds propaganda for distribution, and an annual conference at which agitators talk and organisers network. In this example, we discover that a single person can take on multiple roles: that RMS is a theoretician, a sometime propagandist, and an agitator. But we also find the movement big enough to support a person taking a single role: the FSF staff roster lists people who are purely propagandists or purely theoreticians.

A corporate marketing machine is not too dissimilar from a social movement: the theory behind, say, Microsoft’s engine is that Microsoft products will be advantageous for you to use. The “call” is that you should buy into their platform. The propaganda is the MSDN, their ads, their blogs, case studies and white papers and so on. The agitators are developer relations, executives, external MVPs and partners who do the conference circuit, executive briefing days, tech tours and so on. The organisers are the account managers, the CTOs who convince their teams to make the switch, the developers who make proofs-of-concept to get their peers to adopt the technology, and so on. Substitute any other successful technology company for “Microsoft” and the same holds there.

We can also look to (real or perceived) dysfunction in a movement and see whether our model helps us to see what is wrong. A keen interest of mine is in identifying software movements where “as practised” differs from “as described”. We can now see that this means the action being taken (and led by the organisers) is disconnected from the actions laid out by the theorists.

I have already written that the case with OOP is that the theory changed; “thinking about your software in this way will help you model larger systems and understand your solutions” was turned by the object technologists into “buying our object technology is an easy way to achieve buzzword compliance”. We can see similar things happening now, with “machine learning” and “serverless” being hollowed out to fill with product.

On the other hand, while OOP and machine learning have mutated theories, the Agile movement seems to suffer from a theory gap. Everybody wants to be Agile or to do Agile, all of the change agents and consultants want to tell us to be Agile or to do Agile, but why does this now mean Dark Scrum? A clue from Ron Jeffries’ post:

But there is a connection between the 17 old men who had a meeting in Snowbird, and the poor devils working in the code mines of insurance companies in Ohio, suffering under the heel of the boot of the draconian sons of expletives who imposed a bastardized version of something called Scrum on them. We started this thing and we should at least feel sad that it has sometimes gone so far off the rails. And we should do what we can to keep it from going more off the rails, and to help some people get back on the rails.

Imagine if Karl Marx had written Capital: Critique of Political Economy, then waited eighty years, then said “oh hi, that thing Josef Stalin is doing with the gulags and the exterminations and silencing the opposition, that’s not what I had in mind, and I feel sad”. Well Agile has not gone so far off the rails as that, and has only had twenty years to do it, but the analogy is in the theory being “baked” at some moment, and the world continuing to change. Who are the current theorists advancing Agile “as practised” (or at least the version “as described” that a movement is taking out to change the practice)? Where are the theoreticians who are themselves Embracing Change? It seems to me that we had the formation of the theory in XP, the crystallisation (pardon the pun) of the theory and the call to action in the Agile manifesto, then the project management bit got firmed up in the Declaration of Interdependence, and now Agile is going round in circles with its tiller still set on the Project Management setting.

Well, one post-Agile more-Agile-than-thou movement for the avocado on toast generation is the Software Craft[person]ship movement, which definitely has theory and a call to action (Software Craftsmanship: the New Imperative, which is only a scratch newer than the Agile Manifesto), definitely has vocal propagandists and agitators, and yet still doesn’t seem to be sweeping the industry. Maybe it is, and I just don’t see it. Maybe there’s no clear role for organisers. Maybe the call to action isn’t one that people care about. Maybe the propaganda is not very engaging.

Anyway, Lenin gave me an interesting model.

Why your app is not massively parallel software

That trash can Mac Pro that hasn’t been updated in years? It’s too hard to write software for.

Now, let’s be clear, there are any number of abstractions that have been created to help programmers parallelise their thing, from the process onward. If you’ve got a loop and can add the words #pragma omp parallel for to your code, then your loop can be run in parallel over as many threads as you like. It’s not hard.
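To make that concrete, here is a minimal sketch of the easy case (the function and data names are invented for illustration), assuming a C compiler invoked with OpenMP enabled, for example with -fopenmp:

/* Scale every sample in a buffer. Each iteration touches only its own
 * element, so the iterations are independent and OpenMP can split them
 * across however many threads the runtime decides to use. */
void scale_samples(double *samples, int count, double factor)
{
    #pragma omp parallel for
    for (int i = 0; i < count; i++) {
        samples[i] *= factor;
    }
}

Leave the pragma out and the loop still compiles and runs serially, which is part of why this is the low-effort end of the spectrum.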

Making sure that the loop body can run concurrently with itself is hard, but there are some rules to follow that either make it easy or tell you when to avoid trying. But you’re still only using the CPU, and there’s that whole dedicated GPU to look after as well.
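The first of those rules is that no iteration may depend on the result of another. A counter-sketch of the shape to watch for, again with invented names:

/* A running total carries a value from each iteration into the next,
 * so adding a bare "parallel for" here would give wrong answers.
 * OpenMP can still handle this particular shape via a
 * reduction(+:total) clause, but you have to recognise the shape and
 * ask for it; the pragma alone is no longer enough. */
double running_total(const double *samples, int count)
{
    double total = 0.0;
    for (int i = 0; i < count; i++) {
        total += samples[i];
    }
    return total;
}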

Even with interfaces like OpenCL, it’s difficult to get this business right. If you’ve been thinking about your problem as objects, then each object has its own little part of the data – but now you need to get that information into a layout that’ll be efficient for doing the GPU work, then actually do the copy, then copy the results back from the GPU memory…is doing all of that worth it?
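As an illustration of just the layout part of that problem (the structs and field names here are invented, and no particular GPU API is assumed), this is the kind of repacking an object-shaped design forces on you before a single byte has been transferred:

#include <stdlib.h>

/* The "object" view: one struct per particle, fields interleaved. */
struct particle {
    float x, y, z;
    float mass;
};

/* The GPU-friendly "structure of arrays" view: one contiguous array
 * per field, ready to be copied into device buffers. */
struct particle_buffers {
    float *xs, *ys, *zs, *masses;
};

/* Repack before upload. The mirror-image copy is needed again on the
 * way back, and both copies are pure overhead as far as the
 * application's object model is concerned. (Error cleanup is omitted
 * for brevity.) */
static int repack(const struct particle *in, size_t count,
                  struct particle_buffers *out)
{
    out->xs = malloc(count * sizeof(float));
    out->ys = malloc(count * sizeof(float));
    out->zs = malloc(count * sizeof(float));
    out->masses = malloc(count * sizeof(float));
    if (!out->xs || !out->ys || !out->zs || !out->masses)
        return -1;
    for (size_t i = 0; i < count; i++) {
        out->xs[i] = in[i].x;
        out->ys[i] = in[i].y;
        out->zs[i] = in[i].z;
        out->masses[i] = in[i].mass;
    }
    return 0;
}

Only after that do the device transfers, the kernel in yet another dialect of C, and the copy back enter the picture, which is why the question at the end of the previous paragraph so often answers itself.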

For almost all applications, the answer is no. For almost no applications, the answer is occasionally. For a tiny number of applications, the answer is most of the time, but if you’re writing one of those then you’re a scientist or a data “scientist” and probably not going to get much value out of a deskside workstation anyway.

What’s needed for that middle tier of applications is the tools – by which I mostly mean the libraries – to deal with this problem when it makes sense. You don’t need visualisations that say “hey, if you learned a different programming language and technique and then applied it to this little inner loop you could get a little speed boost for the couple of seconds that one percent of users will use this feature every week” – you need implementations that notice that and get on with it anyway.

The Mac Pro is, in that sense, the exact opposite of the Macintosh. Back in the 1980s, the Smalltalk software was ready well before there was any hardware that could run it well, and the Macintosh was a thing that took this environment that could be seen to have value, and made it kind of work on real hardware. Conversely, the Mac Pro was ready well before there was any software that could make use of it, and that’s a harder sell. The fact that, four years later, this is still true makes it evident that it’s either difficult or not worth the effort to try to push the kind of tools and techniques necessary to efficiently use Mac Pro-style hardware into “the developer ecosystem”. Yes, there are niches that make very good use of them, but everybody else doesn’t and probably can’t.