Considered harmless

Don’t like a new way of working? Just point out the absurdity of suggesting that the old way was broken:

> Somehow, the microservices folks have failed to notice all that software that was in fact delivered as monoliths.

> What the Rust Evangelism Strike Force doesn’t realise is that we’ve spent decades successfully building C programs that don’t dereference the NULL pointer.

This is a sort of “[C|Monoliths] considered harmless” statement. Yes, it’s possible to do it that way, but that doesn’t mean that there aren’t problems, or at least trade-offs. “C considered harmless” is as untrue and unhelpful as “C considered harmful”; what we want is “C considered alongside alternatives”.

An unhelpful distinction

Object-Oriented Programming is quite simple: it’s just choosing what function to run based on the parameters to the function (whether through message sending as in Smalltalk, generic-function dispatch as in CLOS, or vtable lookup as in C++; pattern matching as in Haskell would usually be excluded here).
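
To make that concrete, here’s a minimal sketch in Objective-C; Shape, Circle and Square are invented for this example rather than taken from any framework. The same -area message runs different code depending on which object receives it:

```objc
#import <Foundation/Foundation.h>
#include <math.h>

// Invented classes for illustration: the same -area message selects a
// different method implementation depending on the receiver's class.
@interface Shape : NSObject
- (double)area;
@end
@implementation Shape
- (double)area { return 0.0; }
@end

@interface Circle : Shape
@property (assign) double radius;
@end
@implementation Circle
- (double)area { return M_PI * self.radius * self.radius; }
@end

@interface Square : Shape
@property (assign) double side;
@end
@implementation Square
- (double)area { return self.side * self.side; }
@end

int main(void)
{
    @autoreleasepool {
        Circle *circle = [Circle new];
        circle.radius = 1.0;
        Square *square = [Square new];
        square.side = 2.0;
        for (Shape *shape in @[circle, square]) {
            // One message, two implementations: the method is chosen at
            // runtime by the receiver, not by the call site.
            NSLog(@"area: %f", [shape area]);
        }
    }
    return 0;
}
```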

Object-Oriented Analysis and Design is the thing where we represent our problem domain, and our solution, as a collection of objects, often categorised into classes, where the classes have particular relationships, properties and behaviours. And that is the thing that programmers often struggle with.

Whether it’s hard because OOA/D is hard, or because the problem domains are hard, or because OOA/D is not applicable to the problem domains, is not addressed here.

GNUstep is more important now than ever

In creating a pull request for GNUstep-base, the Free Software implementation of Objective-C’s Foundation library, I realised that if there was ever a time for GNUstep, it is now.

Although GNUstep may have been envisaged as an official desktop for the GNU system – a role now fulfilled by GNOME – it has always had another position as an alternative deployment venue for OpenStep, and then Cocoa, codebases. People have done this to obtain cross-platform support (I know of a test tool that was built for Mac, Linux and Windows using GNUstep), to take advantage of better, or cheaper, server deployment on Linux, or to act as an ‘escape lane’, a place to take your code if your platform vendor changes direction.

This ability to hedge against a vendor’s whims has come in handy before: in 2001, when WebObjects 5 ditched Objective-C support in favour of Java, Objective-C WebObjects applications could be deployed on modern platforms through GNUstep Web or on legacy WebObjects 4 systems. It may come in handy again.

Even if Apple do, indefinitely, support Objective-C, the plain fact is that their community does not. Conference talks, blog posts, and community discussion now take place in Swift, which makes it harder for those with an interest in Objective-C, or those supporting existing code, to find help or even to feel part of the community.

A vendor-independent association of Objective-C developers all interested in giving their code a comfortable, stable home is now more important than ever. May all your messages have receivers.

On the lesser presentations

Advice on presentations – including that given on this blog – is often geared toward the “showbusiness” presentation. We’re usually talking about the big conference talk or product launch, where you can afford to put in the time to make a good, slick performance: a few days of preparation for a half-hour talk is not unheard of.

Not every presentation fits that mould. There are plenty where putting so much time into the presentation would be harmful, but where is the guidance on constructing those presentations?

The minor event

If you spent a few days preparing for a sprint-end demo, or reporting back to your team on some study you did, you’d significantly harm your productivity on the rest of your job. In these contexts, you want to spend a small amount of time building your talk, and you still want to put on a good show: to make your team and your stakeholders feel happy about the work you did, or to make the case persuasively for the tool or technique you studied.

As such, in these cases I still build an outline for my talk outside of the presentation software, and construct my slides to follow the outline. I’ve used OmniOutliner in the past and use Emacs org-mode now; you could use a list of bullets in Markdown, or a pen and a sheet of paper. The tool doesn’t matter; what matters is that you get what you want to say structured in one place, and the slides that support the presentation done separately.

I try to keep these slides text-free, particularly if the presentation is short, so that people don’t get distracted reading them. If I’m reporting on progress, then screenshots of some progress dashboard make for quickly-constructed slides. My current team shows its burndown every sprint end: that’s a quick screen capture that tells the story of the last two weeks. If there’s some headline figure (26 stories delivered; 80% of MVP scope complete; 2 new people on the team), then a slide containing that number makes for a good backdrop to talking about the subject.

The recurrent deck

The antithesis of the conference keynote presentation style, the recurrent deck is a collection of slides you’ll use over and over again. Your approach to integrating with third-party APIs, your software architecture philosophy, your business goals for 2018…you’ll need to present these over and over again in different contexts.

More to the point, other people will want to present them, too: someone in sales will answer a question about integration by using your integration slides. Your department director will present your team’s goals in the context of her department’s goals. And this works the other way: you can use your CEO’s slide on product strategy to help situate your team goals.

So throw out everything you learned about crafting the slides to fit the story. What you’re doing here is coming up with a reusable visual that can support any story related to the same information. I try to make these slides as information-rich as possible, though still diagrammatic rather than textual, to avoid the presentation failure mode of reading out the slide. My current diagram tool is Lucidchart, as it’s my company’s standard; I’ve used OmniGraffle and dia too. Whatever the tool, follow the house style (e.g. colour schemes, fonts, iconography) so that when you mash up your slides and your CEO’s slides, it still looks like a coherent presentation.

I try to make each slide self-contained, because I or someone else might take one to use in a different presentation: a single idea shouldn’t need a six-slide reveal, and a colleague will find it harder to reuse a slide that isn’t self-explanatory.

A frequent anti-pattern in slide design is to include the “page number” on the slide: not only is that information useless in a presentation, but the only likely outcome is a continuity error when you drag a few slides from different sources to throw a talk together and don’t renumber them. Or worse, can’t renumber them: I’ve been given slides that are screenshots of whatever slide was originally built, so the number is part of a bitmap.

Good reusable slide libraries will also be a boon in quickly constructing the minor event presentations: “we did this because that” can be told with one novel part (we did this) and one appeal to the library (because that).

Why are you using the wrong licence?

I frequently see posts/articles/screeds asking why people don’t contribute to open source. If it’s important that recipients of open source software contribute upstream, and you are angry when they don’t, why use licences like MIT, Apache, GPL or BSD that don’t require upstream collaboration?

Back in the day, Apple released their public source code under version 1 of the Apple Public Source License, which required users who changed the source to fill in a form notifying Apple of their changes. You could do the same, and not be angry.

The Atoms of Programming

In the world of physics there are many different models that can be used, though typically each has different applicability in different contexts. At the small scale, quantum physics is a very useful model, while Newtonian physics will yield evidently incorrect predictions and so is less valuable. Where a Newtonian model gives sufficiently accurate results, it’s a lot easier to work with than quantum or relativistic mechanics.

All of these models are used to describe the same universe – the same underlying collection of observations that can systematically be categorised, modelled and predicted.

Physical science (or experimental philosophy) does not work in the same way as computational philosophy. There are physical realisations of computational systems, typically manifested as electronic systems or pencil-and-paper simulations. But the software, the abstract configuration of ideas that runs on those systems, exists in an entirely separate space and is merely (though the fact that this is possible is immensely powerful) translated into the electronic or paper medium.

Of course, one model for the software system is to abstract the electronic: to consider the movement of electrons as the presence of voltages at terminals; to group terminals as registers or buses; to further abstract this range of voltages as 0 and that range as 1. And indeed that model frequently is useful.

Frequently, that model is not useful. And the great thing is that we get to select from a panoply of other models, at some small or large remove from the physical model. We can use these models separately, or simultaneously. You can think of a software system as a network of messages passed between independent objects, as a flow of data through transformers, as a sequence of state changes, as a graph of single-argument functions, as something else, or as a combination of these things. Each is useful, each is powerful, all are applicable.
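
As a small, invented sketch of holding two of those models at once, here is one computation – doubling the even numbers in a list – expressed first as a sequence of state changes and then as a flow of data through single-argument transformers:

```objc
#import <Foundation/Foundation.h>

int main(void)
{
    @autoreleasepool {
        NSArray *numbers = @[@1, @2, @3, @4, @5, @6];

        // Model 1: a sequence of state changes on a mutable accumulator.
        NSMutableArray *stateChanged = [NSMutableArray array];
        for (NSNumber *n in numbers) {
            if (n.integerValue % 2 == 0) {
                [stateChanged addObject:@(n.integerValue * 2)];
            }
        }

        // Model 2: a flow of data through single-argument transformers.
        NSArray *(^evens)(NSArray *) = ^(NSArray *xs) {
            NSMutableArray *out = [NSMutableArray array];
            for (NSNumber *n in xs) {
                if (n.integerValue % 2 == 0) { [out addObject:n]; }
            }
            return (NSArray *)out;
        };
        NSArray *(^doubled)(NSArray *) = ^(NSArray *xs) {
            NSMutableArray *out = [NSMutableArray array];
            for (NSNumber *n in xs) {
                [out addObject:@(n.integerValue * 2)];
            }
            return (NSArray *)out;
        };
        NSArray *flowed = doubled(evens(numbers));

        // Both models describe the same answer: (4, 8, 12).
        NSLog(@"state changes: %@; data flow: %@", stateChanged, flowed);
    }
    return 0;
}
```

Neither representation is privileged; each makes different kinds of reasoning, and different refactorings, easy.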

Sometimes, I can use these models to make decisions about representing the logical structure of these systems, transforming a concept into a representation that’s valid in the model. If I have a statement in a mathematical formulation of my problem, “for any a drawn from the set of Articles there exists a p drawn from the set of People such that p is the principal author of a” then I can build a function, or a method, or a query, or a predicate, or a procedure, or a subroutine, or a spreadsheet cell, or a process, that given an article will yield exactly one person who is the principal author of that article.
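
A hedged sketch of that transformation in Objective-C – Article, Person and -isPrincipalAuthorOf: are names invented here, not any real library’s API:

```objc
#import <Foundation/Foundation.h>

// Invented domain types for illustration.
@interface Article : NSObject
@end
@implementation Article
@end

@interface Person : NSObject
@property (strong) Article *principalWork;
- (BOOL)isPrincipalAuthorOf:(Article *)article;
@end
@implementation Person
- (BOOL)isPrincipalAuthorOf:(Article *)article
{
    return self.principalWork == article;
}
@end

// "For any a drawn from Articles there exists a p drawn from People such
// that p is the principal author of a", rendered as a function: given an
// article, yield exactly one person.
Person *principalAuthor(Article *article, NSArray *people)
{
    for (Person *person in people) {
        if ([person isPrincipalAuthorOf:article]) {
            return person;
        }
    }
    // The formula promises existence; reaching this point would be a
    // programming error in a system that satisfies it.
    return nil;
}

int main(void)
{
    @autoreleasepool {
        Article *article = [Article new];
        Person *author = [Person new];
        author.principalWork = article;
        NSLog(@"principal author: %@", principalAuthor(article, @[author]));
    }
    return 0;
}
```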

Sometimes, I use the models to avoid the conceptual or logical layers at all and express my problem as if it is a software solution. Object-oriented analysis and design, data flow modelling, and other techniques can be used to represent a logical model, or they can be used to bash the problem straight into a physical model without having thought about the problem in the abstract. “Shut up and code” is an extreme example of this approach, in which the physical model is realised without any attempt to tie it to a logical or conceptual design: I’ll know it’s correct when I see it.

I don’t see a lot of value in collecting programming languages. I can’t count the number of different programming languages I’ve used, and many of them are entirely similar. C and JavaScript both have sequences of expressions that are built into statements that are built into procedures. Both let me build aggregations of data and procedures that either let me organise sequential programs, represent objects, represent functions, or do something else.

But collecting the models, the different representations of systems conceptually that can be expressed as software, sometimes called paradigms: this is very interesting. This is what lets me think about representing problems in different ways, and come up with efficient (conceptually or physically) solutions.

More paradigms, please.

Stop ignoring the world

Long term readers will have noticed, and everybody else is about to be told, that this blog has had posts in the Responsibility category since 2010. I’m not rigorous in my use of WordPress categories, but it’s not much of a stretch to assume that most of those 40 posts touch on professional ethics, and that most of the posts on ethics in this blog are in that category.

In recent times, the idea that maybe the world of computing should take its head out of its butt and consider its impact on wider society has escaped the confines of goggle-eyed loon practitioners like yours truly and hit the mainstream. In the UK, newspapers call for change: the leftist Guardian writes “Big tech is broken”, and liberal centrist paper the Independent tells us that “Those of us with any sense of morality should hate Apple”. Editorials document how social media platforms, decrying fake news while running ads for anyone with the dollars, have supplanted democratic rule with new, transnational, shareholder-run government. They show how the new unicorn startups achieve their valuations by disrupting labour law, reversing centuries of gains in workers’ rights by introducing the neoserfdom of gig economies and zero-hour contracts.

Software is eating the world, and turning it into shit. You can no longer pretend that it isn’t happening, and that you are not playing a part. That supporting the success of your favoured multibillionaire transnational platform vendor isn’t helping to consolidate ownership of society among the multibillionaire platform vendors. That your part is just making the rockets go up, and that where they come down is a different department. That your job is not a position in society and without consequence.

OOP as an organic approach to computing

I’m reading How Not to Network a Nation, which talks a lot about cybernetics. Not merely cybernetics as the theory of control in complex systems (cybernetics shares a root with “governor”, fans of the etymological fallacy!) but cybernetics as the intersectional discipline matching organisational and management theory with computer science, anthropology, and biology. The study of systems in animals, people and machinery and their (self- or externally-directed) control.

We still use a lot of the ideas from even early cybernetics thought, such as Claude Shannon’s theories of entropy and information, J.C.R. Licklider’s ARPAnet, von Neumann’s computer architecture, and artificial neural networks. But even though its proponents aren’t often associated with the field, I think it’s reasonable to argue that object-oriented programming is a cybernetically-derived systems approach.

A lot of cybernetics theory is about the components of a system and the messages they pass between each other to achieve control and feedback, and in OOP Alan Kay was seeking to model a software system as a network of messages flowing between independent computer program components. He made the analogy with living organisms clear:

> I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning — it took a while to see how to do messaging in a programming language efficiently enough to be useful).

More advanced object-oriented systems such as Erlang even display autopoiesis, automatically spawning new “cells” when old ones are damaged.
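
Erlang’s supervision trees are the canonical example. As a loose analogy only – this invented Objective-C sketch shows the shape of the idea, not how Erlang itself works – a supervisor can grow a replacement “cell” whenever a message to the current one fails:

```objc
#import <Foundation/Foundation.h>

// Invented for illustration: a Worker "cell" that can die, and a
// Supervisor that spawns a fresh cell when the current one does.
@interface Worker : NSObject
- (void)handle:(NSString *)message;
@end
@implementation Worker
- (void)handle:(NSString *)message
{
    if ([message isEqualToString:@"poison"]) {
        [NSException raise:@"WorkerDied" format:@"cell damaged"];
    }
    NSLog(@"handled: %@", message);
}
@end

@interface Supervisor : NSObject
@property (strong) Worker *worker;
- (void)deliver:(NSString *)message;
@end
@implementation Supervisor
- (void)deliver:(NSString *)message
{
    if (self.worker == nil) { self.worker = [Worker new]; }
    @try {
        [self.worker handle:message];
    }
    @catch (NSException *death) {
        // Autopoiesis, crudely: discard the damaged cell, grow another.
        NSLog(@"worker died (%@); spawning a replacement", death.reason);
        self.worker = [Worker new];
    }
}
@end

int main(void)
{
    @autoreleasepool {
        Supervisor *supervisor = [Supervisor new];
        [supervisor deliver:@"hello"];       // handled by the first cell
        [supervisor deliver:@"poison"];      // kills the current cell
        [supervisor deliver:@"hello again"]; // handled by its replacement
    }
    return 0;
}
```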

There’s plenty that the intersectional nature of cybernetics still has to teach me about my work. Information theory helps me to understand the utility of a machine learning algorithm. Game theory, and biological models of cooperation and cheating, help describe how a cryptocurrency is resilient in the face of the Byzantine generals problem.

And now I understand that the biological systems analogy should help me with software analysis and design too.

My platform is no platform

I currently use three of the desktop computing platforms (Windows, macOS and GNU/Linux) and one of the mobile computing platforms (Samsung-flavoured Android); I currently get paid to develop software for “the web”, an amorphous non-platform that acts in many ways like a vendor platform orthogonal to those just mentioned. This is not because I want to ally myself to any or all of those platforms but because I want to be practically independent of all of them. This is where I try to understand why.

A code of ethics

As a member of the ACM I have committed to act in accordance with the ACM Code of Ethics and Professional Conduct, or to revoke my membership. I believe that a lot of my unwillingness to buy into these platforms (and by extension, as a software professional, my unwillingness to push them on others by building upon them) can be expressed in terms of these codes.

Of course, this is expression within my interpretation of the terms. I don’t believe that Google’s data harvesting activities are consistent with the ACM Code of Ethics, but a previous ACM president was Vint Cerf, a Google employee. Clearly we disagree on our application of the Code to Google’s activity. Neither of us is, without further analysis, [in]correct; ethical decisions will be situated in a cultural and experiential context.

This, in general, is why American technology companies find it hard to expand their operations to Europe; the framework in which their actions will be viewed and judged is different.

App stores and the introduction of dependence

The iOS platform is locked down for most people in three ways: technologically (a cryptographic system prevents all but specific native software and arbitrary JavaScript from being run); through market control (only Apple can add software to the approved list, and only paid members of their developer program can propose software for approval); and socially (the norms among developers, of both native and web software, follow the “don’t make me think” principle, in which applications are simplistic and unextensible because discoverability and ease of use are valued over flexibility or composition).

The same argument applies to Google’s Chrome OS and Microsoft’s Windows 10 S.

This situation promotes an “Apple knows best” effect in which use of a computer is a passive consumption activity where the only applications available are those deemed fit to publish by the central actor, much like broadcast television.

In a professional community that previously gave us Mindstorms and Smalltalk in the Classroom, it is hard to accept that such a prescriptive model of computing is considered tolerable. Indeed, it seems at odds with an ethical imperative to improve public understanding of computing (section 2.7 in the ACM Code). It also seems to produce a three-tier system, in which those who have (the platform operators) are at the top, those who have means (ISVs who can buy into the approval system) are a rung down – albeit in a feudalistic vassal state that seems akin to coal miners buying their picks from the mine owners – and everyone else is below them. This would not seem consistent with an imperative to be fair (section 1.4 in the Code). Indeed I would go as far as to say that putting most of the people interacting with a software system into subject positions does not contribute to society or human well-being (section 1.1).

The web and “cloud” as protection rackets

If I use a web-based software application, it will probably offer to store all data on its developer’s servers (or more accurately on virtual machines run on the developer’s behalf by some service provider). It may not, indeed probably won’t, offer an alternative. My ongoing use of the application is predicated on my ongoing acceptance of the developer’s terms and pricing structure. The collection of individuals and organisations with whom my data is shared are also subject to change at any time, and I either demur or stop using the service.

But because their “service” also includes exclusively mediating access, even for me, to what I created using their application, even the things I already created are subject to their licence and to my acceptance of their future changes, which can’t be known (and will probably be hidden up front, even where they are already being planned). This seems neither honest nor trustworthy (section 1.3 of the Code), nor does it provide comprehensive and thorough evaluations of the impacts of the system (section 2.5), nor credit my intellectual property in creating the things ransomed by their service (section 1.6).

In which new developer tools are dull

Over on lobste.rs I said that I don’t hold out much hope for another “blue plane” style event in developer tools. In one of Alan Kay’s presentations, he referred to the ordinary way of things as the pink plane, with incremental advances in the state of affairs being movements in that plane. Like the square in Edwin Abbott’s Flatland that encounters a sphere, a development could take us out of the pink plane into the (orthogonal) blue plane. These blue plane ideas are rare because, like the square, it’s hard to even conceive of life outside the pink plane.

In what may just be a surprising coincidence, Apple engineers used Blue and Pink to refer to features in evolutionary and revolutionary developments of their operating system.

Software engineering tooling is, for the majority of developers, in a phase of conservative retreat.

Build UIs on the web and you probably won’t use a graphical builder, you’ll type HTML and JavaScript (and maybe JSX) into a text editor.

Build native apps and even where there is a GUI builder, you’ll find people recommending against its use and wanting to do things “programmatically” (by which they mean “through typing”, even though the GUI builder tools are another way to construct a program).

In the last couple of decades, interest in CASE tooling has shrunk to a conservative preference for text editors with some syntax highlighting, like vim or Atom. Gone even is the “build and run” button from IDEs, to be replaced with command-line invocations of grunt tasks (a fancy phrase meaning shell scripts), npm scripts (a fancy phrase meaning shell scripts) or rake tasks (you get the idea).

Where previously there were live development environments embedded in the deployment environment (and the JavaScript VM is almost perfectly designed for that task), there is now console.log and unit tests. The height of advanced interaction with your programming tools is the REPL (an interactive shell) and the Playground/InstaREPL (an interactive shell that echoes stdin and stdout in different places).

For the most part – and I say that to avoid the inevitable commenter who thinks that a counterexample like LabVIEW or Mathematica, or that one person they met who uses Expression Blend, renders the whole argument broken – developers have doubled down on the ceremony of programming: the typing of arcane text into an 80×24 character display. Now, to be fair, text is an efficient and compact graphical representation of a linear sequence of connected concepts. But it is not the only one, nor the most efficient, nor the most compact; and neither are many software systems linear.

The rewards in making software to make software are scarce.

You can do like IntelliJ do, and make a better version of the 80×24 text entry thing. You can work for a platform vendor, and make their version of the 80×24 thing. You can go and get an engineering grade 6 or above job in Silicon Valley and tell your manager that whatever it is their business does, you’re going to focus on the 80×24 thing (“at scale”) instead.

What you don’t seem to be able to do is to disrupt the 80×24 thing. It’s free (at least as in beer), it’s ubiquitous, and whether or not it’s as good as it could be it certainly seems to be good enough for the people who not only get paid to make bad software, but get paid again to fix it.