Why are you using the wrong licence?

I frequently see posts/articles/screeds asking why people don’t contribute to open source. If it’s important that recipients of open source software contribute upstream, and you are angry when they don’t, why use licences like MIT, Apache, GPL or BSD that don’t require upstream collaboration?

Back in the day, Apple released their public source code under version 1 of the Apple Public Source Licence, which required users who changed the source to fill in a form notifying Apple of their changes. You could do the same, and not be angry.

The Atoms of Programming

In the world of physics, there are many different models that can be used, though typically each of them has different applicability to different contexts. At the small scale, quantum physics is a very useful model, while Newtonian physics yields evidently incorrect predictions and so is less valuable. Where a Newtonian model gives sufficiently accurate results, it’s a lot easier to work with than quantum or relativistic mechanics.

All of these models are used to describe the same universe – the same underlying collection of observations that can systematically be categorised, modelled and predicted.

Physical science (or experimental philosophy) does not work in the same way as computational philosophy. There are physical realisations of computational systems, typically manifested as electronic systems or pencil-and-paper simulations. But the software – the abstract configurations of ideas that run on those systems – exists in an entirely separate space and is merely (though the fact that this is possible is immensely powerful) translated into the electronic or paper medium.

Of course one model for the software system is to abstract the electronic: to consider the movement of electrons as the presence of voltages at terminals; to group terminals as registers or busses; to further abstract this range of voltages as 0 and that range as 1. And indeed that model frequently is useful.

Frequently, that model is not useful. And the great thing is that we get to select from a panoply of other models, at some small or large remove from the physical model. We can use these models separately, or simultaneously. You can think of a software system as a network of messages passed between independent objects, as a flow of data through transformers, as a sequence of state changes, as a graph of single-argument functions, as something else, or as a combination of these things. Each is useful, each is powerful, all are applicable.
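As a small illustration of two of those models applied to the same problem – this example is my own, invented for this post – the same computation can be viewed as a sequence of state changes or as a composition of single-argument steps:

```python
from functools import reduce

# The same small computation through two of the models above. Viewed as a
# sequence of state changes:
def total_imperative(xs):
    total = 0          # the state
    for x in xs:
        total += x     # each iteration is a state change
    return total

# Viewed as a fold over a graph of small combining functions:
def total_functional(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)

assert total_imperative([1, 2, 3]) == total_functional([1, 2, 3]) == 6
```

Neither version is more "true" than the other; each model makes different properties of the program easy to reason about.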

Sometimes, I can use these models to make decisions about representing the logical structure of these systems, transforming a concept into a representation that’s valid in the model. If I have a statement in a mathematical formulation of my problem, “for any a drawn from the set of Articles there exists a p drawn from the set of People such that p is the principal author of a” then I can build a function, or a method, or a query, or a predicate, or a procedure, or a subroutine, or a spreadsheet cell, or a process, that given an article will yield exactly one person who is the principal author of that article.
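A minimal sketch of that transformation, with hypothetical `Article` and `Person` types invented purely for illustration, might look like this:

```python
from dataclasses import dataclass

# Hypothetical types standing in for the sets Articles and People in the
# mathematical statement above; none of this comes from a real system.
@dataclass(frozen=True)
class Person:
    name: str

@dataclass(frozen=True)
class Article:
    title: str
    principal_author: Person

def principal_author(a: Article) -> Person:
    """Given any article, yield exactly one person: its principal author.

    This realises "for any a in Articles there exists a p in People such
    that p is the principal author of a" as a total function.
    """
    return a.principal_author

alice = Person("Alice")
paper = Article("On Models", alice)
assert principal_author(paper) == alice
```

The point is not the Python; the same statement could equally become a SQL query, a Prolog predicate, or a spreadsheet cell.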

Sometimes, I use the models to avoid the conceptual or logical layers at all and express my problem as if it is a software solution. Object-oriented analysis and design, data flow modelling, and other techniques can be used to represent a logical model, or they can be used to bash the problem straight into a physical model without having thought about the problem in the abstract. “Shut up and code” is an extreme example of this approach, in which the physical model is realised without any attempt to tie it to a logical or conceptual design. I’ll know correct when I see it.

I don’t see a lot of value in collecting programming languages. I can’t count the number of different programming languages I’ve used, and many of them are entirely similar. C and JavaScript both have sequences of expressions that are built into statements that are built into procedures. Both let me build aggregations of data and procedures that either let me organise sequential programs, represent objects, represent functions, or do something else.

But collecting the models, the different representations of systems conceptually that can be expressed as software, sometimes called paradigms: this is very interesting. This is what lets me think about representing problems in different ways, and come up with efficient (conceptually or physically) solutions.

More paradigms, please.

Stop ignoring the world

Long term readers will have noticed, and everybody else is about to be told, that this blog has had posts in the Responsibility category since 2010. I’m not rigorous in my use of WordPress categories, but it’s not much of a stretch to assume that most of those 40 posts touch on professional ethics, and that most of the posts on ethics in this blog are in that category.

In recent times, the idea that maybe the world of computing should take its head out of its butt and consider its impact on wider society has escaped the confines of goggle-eyed loon practitioners like yours truly and hit the mainstream. In the UK, newspapers call for change: the leftist Guardian writes “Big tech is broken”, and liberal centrist paper the Independent tells us that “Those of us with any sense of morality should hate Apple“. Editorials document how social media platforms, decrying fake news while running ads for anyone with the dollars, have supplanted democratic rule with new, transnational, shareholder-run government. They show how the new unicorn startups achieve their valuations by disrupting labour law, reversing centuries of gains in workers’ rights by introducing the neoserfdom of gig economies and zero-hour contracts.

Software is eating the world, and turning it into shit. You can no longer pretend that it isn’t happening, and that you are not playing a part. That supporting the success of your favoured multibillionaire transnational platform vendor isn’t helping to consolidate ownership of society among the multibillionaire platform vendors. That your part is just making the rockets go up, and that where they come down is a different department. That your job is not a position in society and without consequence.

OOP as an organic approach to computing

I’m reading How Not to Network a Nation, which talks a lot about cybernetics. Not merely cybernetics as the theory of control in complex systems (cybernetics shares a root with “governor”, fans of the etymological fallacy!) but cybernetics as the intersectional discipline matching organisational and management theory with computer science, anthropology, and biology. The study of systems in animals, people and machinery and their (self- or externally-directed) control.

We still use a lot of the ideas from even early cybernetics thought now, such as Claude Shannon’s theories on entropy and information, J.C.R. Licklider’s ARPAnet, von Neumann’s computer architecture, artificial neural networks. But even though the proponents aren’t often associated with the field, I think it’s reasonable to argue that object-oriented programming is a cybernetically-derived systems approach.

A lot of cybernetics theory is about the components of a system and the messages they pass between each other to achieve control and feedback, and in OOP Alan Kay was seeking to model a software system as a network of messages flowing between independent computer program components. He made the analogy with living organisms clear:

> I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning — it took a while to see how to do messaging in a programming language efficiently enough to be useful).

More advanced object-oriented systems such as Erlang even display autopoiesis, automatically spawning new “cells” when old ones are damaged.
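A rough sketch of that restart behaviour – in Python rather than Erlang, and a toy loop rather than OTP’s linked processes and supervision trees – might look like this:

```python
# Toy illustration of Erlang-style supervision: a supervisor treats the
# worker as a disposable "cell" and spawns a replacement when one dies.
# This is an invented sketch, not how Erlang/OTP is actually implemented.
def supervise(worker, max_restarts=3):
    """Run worker(); if it crashes, start a fresh one, up to max_restarts."""
    restarts = 0
    while True:
        try:
            return worker()
        except Exception:
            if restarts == max_restarts:
                raise  # give up, as a real supervisor eventually would
            restarts += 1  # the old cell is damaged; spawn a new one

attempts = {"count": 0}

def flaky_cell():
    """A worker that fails twice before succeeding."""
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("cell damaged")
    return "ok"

result = supervise(flaky_cell)  # succeeds on the third "cell"
```

The organism-level behaviour (the system keeps working) emerges from cell-level death and replacement, which is exactly the biological analogy Kay was drawing.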

There is plenty that the intersectional nature of cybernetics still has to inform me about my work. Information theory helps me to understand the utility of a machine learning algorithm. Game theory, and biological models of cooperation and cheating, help describe how a cryptocurrency is resilient against Byzantine generals.

And now I understand that the biological systems analogy should help me with software analysis and design too.

My platform is no platform

I currently use three of the desktop computing platforms (Windows, macOS and GNU/Linux) and one of the mobile computing platforms (Samsung-flavoured Android); I currently get paid to develop software for “the web”, an amorphous non-platform that acts in many ways like a vendor platform orthogonal to those just mentioned. This is not because I want to ally myself with any or all of those platforms but because I want to be practically independent of all of them. In this post, I try to understand why.

A code of ethics

As a member of the ACM I have committed to act in accordance with the ACM Code of Ethics and Professional Conduct, or to revoke my membership. I believe that a lot of my unwillingness to buy into these platforms (and by extension, as a software professional, my unwillingness to push them on others by building upon them) can be expressed in terms of these codes.

Of course, this is expression within my interpretation of the terms. I don’t believe that Google’s data harvesting activities are consistent with the ACM Code of Ethics, but a previous ACM president was Vint Cerf, a Google employee. Clearly we disagree on our application of the Code to Google’s activity. Neither of us is, without further analysis, [in]correct; ethical decisions will be situated in a cultural and experiential context.

This, I think, is also why American technology companies find it hard to expand their operations into Europe: the framework in which their actions will be viewed and judged is different.

App stores and the introduction of dependence

The iOS platform is locked down to most people in three ways: technologically (a cryptographic system prevents all but specific native software and arbitrary JavaScript from being run); through market control (only Apple can add software to the approved list, and only paid members of their developer program can propose software for approval); and socially (the norms among developers, of both native and web software, follow the “don’t make me think” principle, in which applications are simplistic and unextensible because discoverability and ease of use are valued over flexibility or composition).

The same argument applies to Google’s Chrome OS and Microsoft’s Windows 10 S.

This situation promotes an “Apple knows best” effect in which use of a computer is a passive consumption activity where the only applications available are those deemed fit to publish by the central actor, much like broadcast television.

In a professional community that previously gave us Mindstorms and Smalltalk in the Classroom, it is hard to accept that such a prescriptive model of computing is considered tolerable. Indeed, it seems at odds with an ethical imperative to improve public understanding of computing (section 2.7 in the ACM Code). It also seems to produce a three-tier system, in which those who have (the platform operators) are at the top, those who have means (ISVs who can buy into the approval system) are a rung down – albeit in a feudalistic vassal state that seems akin to coal miners buying their picks from the mine owners – and everyone else is below them. This would not seem consistent with an imperative to be fair (section 1.4 in the Code). Indeed I would go as far as to say that putting most of the people interacting with a software system into subject positions does not contribute to society or human well-being (section 1.1).

The web and “cloud” as protection rackets

If I use a web-based software application, it will probably offer to store all data on its developer’s servers (or more accurately on virtual machines run on the developer’s behalf by some service provider). It may not, indeed probably won’t, offer an alternative. My ongoing use of the application is predicated on my ongoing acceptance of the developer’s terms and pricing structure. The collection of individuals and organisations with whom my data is shared are also subject to change at any time, and I either demur or stop using the service.

But because their “service” also includes exclusively providing access, even to me, to what I created using their application, even accessing the things I already created is subject to their licence and my acceptance of their future changes, which can’t be known (and will probably be hidden up front, even where they are already being planned). This seems neither honest nor trustworthy (section 1.3 of the Code), nor to provide comprehensive and thorough evaluations of the impacts of the system (section 2.5), nor to credit my intellectual property in creating the things that are ransomed by their service (1.6).

In which new developer tools are dull

Over on I said that I don’t hold out much hope for another “blue plane” style event in developer tools. In one of Alan Kay’s presentations, he referred to the ordinary way of things as the pink plane, with incremental advances in the state of affairs being movements within that plane. Like the square in Edwin Abbott’s Flatland that encounters a sphere, a development could take us out of the pink plane into the (orthogonal) blue plane. These blue plane ideas are rare because, like the square, it’s hard to even conceive of life outside the pink plane.

In what may just be a surprising coincidence, Apple engineers used Blue and Pink to refer to features in evolutionary and revolutionary developments of their operating system.

Software engineering tooling is, for the majority of developers, in a phase of conservative retreat

Build UIs on the web and you probably won’t use a graphical builder, you’ll type HTML and JavaScript (and maybe JSX) into a text editor.

Build native apps and even where there is a GUI builder, you’ll find people recommending against its use and wanting to do things “programmatically” (by which they mean “through typing”, even though the GUI builder tools are another way to construct a program).

In the last couple of decades, interest in CASE tooling has shrunk to a conservative preference for text editors with some syntax highlighting, like vim or Atom. Gone even is the “build and run” button from IDEs, replaced with command-line invocations of grunt tasks (a fancy phrase meaning shell scripts), npm scripts (a fancy phrase meaning shell scripts) or rake tasks (you get the idea).

Where previously there were live development environments embedded in the deployment environment (and the JavaScript VM is almost perfectly designed for that task), there is now console.log and unit tests. The height of advanced interaction with your programming tools is the REPL (an interactive shell) and the Playground/InstaREPL (an interactive shell that echoes stdin and stdout in different places).

For the most part, and I say that to avoid the inevitable commenter who thinks that a counterexample like LabView or Mathematica or that one person they met who uses Expression Blend renders the whole argument broken, developers have doubled down on the ceremony of programming: the typing of arcane text into an 80×24 character display. Now to be fair, text is an efficient and compact graphical representation of a linear sequence of connected concepts. But it is not the only one, nor the most efficient nor most compact, and neither are many software systems linear.

The rewards in making software to make software are scarce.

You can do as IntelliJ does, and make a better version of the 80×24 text entry thing. You can work for a platform vendor, and make their version of the 80×24 thing. You can go and get an engineering grade 6 or above job in Silicon Valley and tell your manager that whatever it is their business does, you’re going to focus on the 80×24 thing (“at scale”) instead.

What you don’t seem to be able to do is to disrupt the 80×24 thing. It’s free (at least as in beer), it’s ubiquitous, and whether or not it’s as good as it could be it certainly seems to be good enough for the people who not only get paid to make bad software, but get paid again to fix it.

Bottom-up teaching

We’re told that the core idea in computer programming is problem-solving. That one of the benefits of learning about computer programming (one that is not universally accepted) is gaining the skill of problem decomposition.

If you look at real teaching of computing, it seems to have more to do with solution composition than problem decomposition. The latter seems to be background noise: here are the things you can build solutions with, and presumably at some point you’ll come across a solution that’s the same size and shape as one of your problem components, though how you find that match is left up to you.

I have many books on programming languages. Each lists the features of the language, and gives minimally complex examples of the use of those features. In that sense, Kernighan and Ritchie’s “The C Programming Language” (section 1.3, the for statement) offers as little instruction in solving problems using a computer as Eric Nikitin’s “Into the Realm of Oberon” (section 7.1, the FOR loop) or Dave Thomas’s “Programming Elixir” (section 7.2, Using Head and Tail to Process a List).
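The flavour of example such chapters give looks like this – a language feature shown in isolation, detached from any problem worth solving (this instance is invented for illustration, not quoted from any of the books named):

```python
# The kind of minimally complex feature demonstration the text describes:
# "using head and tail to process a list", and nothing more.
def total(xs):
    if not xs:
        return 0           # base case: the empty list sums to zero
    head, *tail = xs       # split the list into its head and tail
    return head + total(tail)

assert total([1, 2, 3, 4]) == 10
```

Nothing here tells the reader when decomposing a problem into head-and-tail recursion is the right move; it only shows that the feature exists.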

A course textbook on bitcoin and blockchain (Narayanan, Bonneau, Felten, Miller and Goldfeder, “Bitcoin and Cryptocurrency Technologies”) starts with Section 1.1, “Cryptographic hash functions”, and builds a cryptocurrency out of them, leaving motivational questions about politics and regulation to Chapter 7.
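The bottom-up flavour of that opening chapter can be sketched in a few lines: a tamper-evident chain of blocks built from nothing but a cryptographic hash function. This toy is my own invention for illustration, not the construction from the textbook itself:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's canonical JSON encoding with SHA-256."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(data, prev_hash):
    # Each block commits to the hash of its predecessor.
    return {"data": data, "prev": prev_hash}

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("tx: a pays b", block_hash(genesis))]

def valid(chain):
    """A chain is valid when every block's prev matches its predecessor's hash."""
    return all(chain[i + 1]["prev"] == block_hash(chain[i])
               for i in range(len(chain) - 1))

assert valid(chain)
```

Notice that the sketch, like the textbook, gets you to a working mechanism long before any question of why one would want a cryptocurrency arises – which is precisely the ordering the post is describing.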

This strategy is by no means universal: Liskov and Guttag’s “Program Development in Java” starts out by describing abstraction, then looks at techniques for designing abstractions in Java. Adele Goldberg and Alan Kay described teaching Smalltalk by proposing exploratory projects, designing the objects that model the problem under consideration and the way in which they will communicate, then incrementally filling in by designing classes and methods that have the desired properties. C.J. Date’s “An Introduction to Database Systems” answers the question “why databases?” before introducing the relational model, and doesn’t introduce SQL until it can be situated in the context of the relational model.

Both of these approaches, and their associated techniques (the bottom-up approach and solution construction; the top-down approach and problem decomposition) are useful; the former leads to progress and the latter leads to understanding. But both must be taken in concert, because understanding without progress leads to the frustration of an unsolved problem and progress without understanding is merely the illusion of progress.

My guess is that more programmers – indeed whole movements, when we consider the collective state of things like OOP, functional programming, BDD, or agile practices – are in the “bottom-up only” group than in the “top-down only” or “a bit of both” groups. That plenty more copies of Introduction to Programming in [This Week’s Hot Language] have been sold than Techniques for Making Your Problem Amenable to Computation. That the majority of software really does consist of solutions looking for problems.

Why I don’t have a favourite programming language

This is my take on Ilya Sher’s similar post, though from a different context. He is mainly interested in systems programming, while I have mostly written user apps and backend services, along with some developer tools.

I originally thought that I would write a list of the languages and difficulties I have with them, but I realised that there’s an underlying theme that can be extracted. Programming languages I have used either have too much vendor dependence (I love writing ObjC, but can’t rely on GNUstep when I’m not on Apple), too little interaction with the rest of the software world (I love writing Pharo, but don’t love going through its FFI to use anything else) or, and this is the biggest kicker, I don’t like the development environments.

When I work on JavaScript, my environment is a text editor (something like VSCode or emacs) that has syntax highlighting, maybe has auto-completion…and that’s about it. When I work in something like Java, ObjC or C++, I have a build button, an integrated debugger, and the ability to run tests. And, if I’m lucky, a form designer. When I work in something like Swift or Clojure, I have insta-repls. When I work in Pharo, I have all the live browsers and things you hear about from smug people, but I still have to type code for things you might expect to be ‘live’ in such an environment. I get confused by the version control tools, but that might be because I’m not familiar with image-based development.

It feels like, details of the languages aside, there’s a synthesis of programming language with environment, where the programming language is a tool integrated into the environment just like the compiler and debugger, and the tools are integrated into the programming language, like the Lisp macro system. It feels like environments like Oberon, Lisp machines and Smalltalks all have some of this integration, and that popular programming environments for other languages all have less of it.

I’m not entirely sure what the ideal state is, and whether that’s an ideal just for me or would benefit others. I wrote my MSc thesis on an exploration of this problem, and still have more research to do.

Free Software should welcome contributions by Apple, Google

It started with a toot from the FSF:

> Freedom means not #madebygoogle or #madebyapple, it means #madebythousandsoffreesoftwarehackers #GNU

This post is an expansion on my reply:

> @fsf as an FSF Associate I’m happy to use software made by Google or made by Apple as long as it respects the four freedoms.

Yes to made by Google or made by Apple

The Free Software Foundation financially supports the Replicant project, a freedom-respecting operating system based on the Android Open Source Project. The same Android Open Source Project that’s made by Google. Google and Apple are both behind plenty of Free Software contributions, both through their own projects such as Android and Swift or contributions to existing projects like the Linux kernel and CUPS. Both companies are averse to copyleft licences like the GPL, but then both companies have large software patent portfolios and histories of involvement in software patent litigation so it may be that each company is actually averse to compromising the defensibility of their patent hoards through licences like GPL3. On the other hand, the Objective-C support NeXT created for GCC was the subject of an early GPL applicability test so in Apple’s case they could well be averse to “testing” the GPL any further.

Whatever their motivations for the stances they’ve taken, Apple and Google do contribute to Free Software and that should be both encouraged and welcomed. If they want to contribute to more projects, create new ones, or extend those freedoms to their existing proprietary code then we advocates of software freedom should encourage them and welcome them. Freedom does not mean “not #madebygoogle or #madebyapple”.

No to controlled by Google or controlled by Apple

While we in software development have never had it so good in terms of software freedom, with all of our tools and libraries being published as free software (usually under the banner of open source), the community at large has never had it so bad, and Google and Apple are at the vanguard of that movement too. The iOS kernel, Darwin UNIX system and Swift programming language may all be open for us to study, share and improve, but they exist in a tightly-controlled walled garden that’s eroding the very concept of ownership and centralising all decisions within the spheres of the two platform providers. This means that even Freedom Zero, the freedom to use the software for any purpose, is denied to anyone who isn’t a programmer (and in fact to the rest of us too: you can study the iOS kernel but cannot replace the kernel on your phone if you make an improvement; you can study Swift but cannot sell an iOS app using any version other than the one blessed by Apple at time of submission).

People often complain at this point that software freedom is only relevant to programmers because you need to be a programmer to study or improve a program given its source code, but that’s not the point. Open Source is only relevant to programmers. Having the freedom to use your computer for any purpose, and to share your software, gives two things:

  1. to some people, “I wish that my software could do this, it doesn’t, but I understand that it is possible to change it and that I could use the changed version” can be the incentive to learn and to enable their own programming skills.
  2. to others, having the freedom to share means having the freedom to share the software with someone who already knows how to program who can then make improvements and share them back with the first person.

Ignoring those possibilities perpetuates the current two-tier system in which programmers have a lot of freedom and everybody else has none. I have argued against the walled garden before, as a barrier to freedom. That is different from arguing against things that are made by the companies that perpetuate the walled gardens, if we can encourage them to change.

Welcome, Apple. Seriously.

The FSF has a long history of identifying itself “against” some IT incumbent, usually Microsoft. It has identified a change in the IT landscape by positioning itself as an underdog “against” Apple and Google. But it should not be against them, it should be with them, encouraging them to consider and support the freedom of their customers.

Recommend me some books or articles

I’ve been looking for something to read on these topics, can you help?

  • a history of the Unix wars (the ‘workstation’ period involving Sun, HP, Apollo, DEC, IBM, NeXT and SGI primarily, but really everything starting from AT&T up to Linux and OS X would be interesting)
  • a business case study on Apple’s turnaround 1997-2001. I’ve read plenty of 1990s case studies explaining why they’ll fail, and 2010s interpretations of why they’re dominant, and Gil Amelio’s “On the Firing Line” which explains his view of how he stemmed the bleeding, but would like to fill in the gaps: particularly the changes from Dec 1997 to the iPod.
  • a technical book on Mach (it doesn’t need to still be in print, I’ll try to track it down): I’ve read the source code for xnu, GNU Mach and mkLinux, Tevanian’s papers, and the Mac OS X Internals book, but could still do with more