Structure and Interpretation of Computer Programmers

I make it easier and faster for you to write high-quality software.

Tuesday, January 10, 2017

The reality is not the abstraction

Remember that the abstractions you built to help you think about problems are there to help. They are not reality, and when you think of them as such they stop helping you, and they hold you back.

You see this problem in the context of software. A programmer creates a software model of a problem, implements a solution in that model, then releases the solution to the modelled problem as a solution to the original problem. Pretty soon, an aspect of the original problem is uncovered that isn’t in the model. Rather than remodelling the problem to encapsulate the new information, though, we programmers will call that an “edge case” that needs special treatment. The solution is now a solution to the model problem, with a little nub expressed as a conditional statement for handling this other case. You do not have to have been working on a project for long before it’s all nubs and no model.
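
To make the pattern concrete, here is a minimal sketch – every name in it is hypothetical – of a model accreting nubs instead of being remodelled:

```python
# A hypothetical campaign model and its later, nub-encrusted self.

def handle_entry(campaign, entry):
    """The model: a campaign either accepts an entry or it doesn't."""
    if campaign.accepts(entry):
        campaign.record(entry)

def handle_entry_with_nubs(campaign, entry):
    """The same routine months later: the model is unchanged, and every aspect
    of the original problem it failed to cover has been bolted on as a nub."""
    if entry.sender in campaign.blocked_senders:             # nub: abusive senders
        return
    if campaign.closed and not entry.queued_before_close:    # nub: late entries
        return
    if entry.duplicate_of is not None:                       # nub: double delivery
        entry = entry.duplicate_of
    if campaign.accepts(entry):
        campaign.record(entry)
```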

You also see this problem in the context of the development process. Consider the story point, an abstraction that allows comparison of the relative sizes of problems, and of a team’s capacity for solving them. If you’re like me, you’ve met people who want you to deliver more points. You’ve met people who set objectives featuring the number of points delivered. You’ve met people who want to see the earned points accrue on a burn-down chart. They have allowed the story point to become their reality. It’s not, it’s an abstraction. Stop delivering points, and start solving problems.

posted by Graham at 18:45  

Saturday, November 26, 2016

On the rhetorical cost of ownership

I’ve recently been talking about software engineering economics, in a very loose way, but so have other people. And now I understand that it’s annoying when people talk about it, and have decided to continue anyway. I’ve decided to continue because what I see is either inaccurate comparisons being made, or valid comparisons that have questionable applicability outside their immediate domain. The world of IT cost comparison is still run by marketing, not by operations.

Recently, I read Don’t Build Private Clouds. Subbu says that the sticker prices (up to $10k for a server that will last four years, vs. up to $1500 per month for a public cloud machine) should not be compared because there are additional costs to self-hosting:

  1. Engineering costs
  2. Network automation costs
  3. Loss of agility
  4. Opportunity costs

Fine, but what are those costs? Why do I not need to do any engineering or automation if I use a public cloud provider? If I do, what is that going to cost? What agility and opportunity do I lose by tying my infrastructure to any one cloud vendor, and what will that cost?
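
For scale, the sticker figures alone – taking the upper bounds quoted above – leave a gap that those unenumerated costs would have to close:

```python
# Four-year sticker prices only, using the upper bounds quoted above.
server_cost = 10_000             # one self-hosted server, bought once
cloud_cost = 1_500 * 12 * 4      # one public cloud machine at $1,500/month
print(cloud_cost - server_cost)  # 62000: the gap that the engineering, automation,
                                 # agility and opportunity costs of self-hosting
                                 # would need to exceed for the cloud to be cheaper
```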

Subbu’s blog says that he is a “cloud helper”, and that goes a long way to explaining why we didn’t get a straight answer on the cost comparison. We’re not being told that cloud services are cheaper, instead we are being told of the Fear, Uncertainty and Doubt involved in choosing the less-favoured path.

Similarly, an IBM employee recently said that Macs are cheaper than PCs by up to $543 per user (remember that’s “up to”, not “as much as” – the lower bound given is $273). Let’s ignore the conflict of interest arising from Apple’s global partnership with IBM: IBM wouldn’t say that their partner systems are cheaper so that they could drum up interest in their partnership services, surely. Surely. I mean, this isn’t the IBM of 1984, is it?

What IBM’s VP says is that over a four-year lifetime, among employees who are given the choice of which platform and model of computer they want, the Macs are cheaper. That’s of course a figure about which it’s possible to make realistic comparisons: given IBM’s approach to desktop support, IBM’s level of staffing, IBM’s applications, IBM’s approach to working, IBM’s budgeting for IT support operations, it’s cheaper for people who choose Macs to use Macs than for…well, it’s not clear, but it seems to be the “everybody else” bucket: not only people who chose PCs to use PCs, but people who weren’t given a choice to use PCs.

So before you make a TextExpander macro for that link and insta-reply to anyone who uses the phrase “Apple Tax”, ask yourself: just how similar is your environment to IBM’s?

posted by Graham at 17:44  

Saturday, November 19, 2016

Can’t you just…

Continuing the thoughts on vexing problems, one difficulty when it comes to discussing software is talking about the size of software. I’m not really talking about productivity metrics – good or bad – like source lines of code or function points, but rather the fact that the complexity of a problem looks different depending on who’s doing the looking.

Sometimes, a problem that’s very simple from a business perspective can be incredibly complex technically. One product I worked on could be summarised very quickly: let people interact with marketing campaigns by sending and receiving messages on their mobiles. The small amount of logic between send and receive – allowing the campaigns to operate as quizzes, votes, or auctions – could be detailed on an index card.

But that simplicity was backed by a huge amount of technical complexity to make it work. “Can you just send this message to everyone who got the quiz question correct?” Well, yes, but as it’s a picture message we need to work out how to make it look good on the recipient’s phone, change it to fit those criteria, and then send it. What makes it look good – and indeed how we can get the information to make that decision – depends on the device, but also on which network it’s on, and maybe on whether it’s on pre-pay or post-pay and whether they use HTTP, WAP or e-mail to send messages to that device (which might be different from how they send to other devices on the same network). And even after we’ve gathered that information, it may be wrong, as some devices claim to support image formats that they can’t render, or image sizes that they’ll actually reject or fail to display.
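
A hedged sketch of that decision cascade, with invented capability tables standing in for the real device and network databases:

```python
# Invented device and network tables; real systems consult capability databases
# (and, as noted above, still get lied to by the devices).
DEVICE_CAPS = {
    "PhoneA": {"formats": {"jpeg", "gif"}, "max_px": (640, 480)},
    "PhoneB": {"formats": {"jpeg"}, "max_px": (176, 144)},
}
BEARERS = {  # how a given network and tariff want picture messages delivered
    ("NetX", "prepay"): "wap",
    ("NetX", "postpay"): "http",
    ("NetY", "prepay"): "email",
}

def prepare_picture(image_bytes, device, network, plan):
    """Adapt a picture message for one recipient; the claimed capabilities may still be wrong."""
    caps = DEVICE_CAPS.get(device, {"formats": {"jpeg"}, "max_px": (160, 120)})
    fmt = sorted(caps["formats"])[0]      # pick a format the device claims to render
    adapted = {"data": image_bytes, "format": fmt, "size": caps["max_px"]}
    bearer = BEARERS.get((network, plan), "mms")
    return adapted, bearer
```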

On the other hand, sometimes the business problem is a lot more complex than the technical problem. If you’re a mobile app developer, any length of problem definition about exciting disruptive apps can be reduced to “so you want to display data from a web service in a table view”.

And then at other times, “can’t you just” gets stymied from left field. Why yes, we could simply do that, and it would be good for the business, but this regulation/patent/staff shortage means we need to do something else.

posted by Graham at 19:24  

Wednesday, November 16, 2016

On the business case for (or against) software

In The Vexing Problems in Programming, I dismissed the hard problems of computer science as being incidental to another problem: we can’t say what the value of our work is. That post contained plenty of questions, precisely because the subject is so unknown.

There are plenty of ways in which the “value” of something can be discussed, but let’s stick with economic value for the moment. Imagine you have the opportunity to write some software, but only if I pay you for it. And not even then, but only if you can make me reasonably confident that I will get some return; that having the work done is worth more than the cost of you doing the work.

A corner of the software industry just cowered and stuck its head in the sand. “#NoEstimates!”, they cry. This problem is hard, so why should we solve it?

That’s a really interesting perspective. You should come and talk about #NoEstimates at my conference. It’ll be on sometime, in a place, and there’ll be a flight that can get you there. Maybe. That’s enough information to be going on for it to be clear that you should commit to it, so I’ll see you there.

At the other end are the folks who would literally write whole books about the topic (and given that books are themselves literary, that “literally” is itself meant literally). I pulled a few likely titles off the shelf to see what they would tell me, and ended up with:

We have to wonder how some people can be certain that estimates are worthlessly inaccurate, while others such as Barry Boehm (the author of Software Engineering Economics and the COnstructive COst MOdel it describes) have built careers out of explaining how to do it well.

Here’s one hypothesis: Boehm is a charlatan, and the COCOMO doesn’t work at all. Let’s see whether there’s evidence for that.

A likely source is COCOMO evaluation and tailoring by Miyazaki and Mori from Fujitsu. The authors of that paper, applying the COCOMO (’81, not the later COCOMO II) model to a corpus of projects undertaken by Fujitsu, conclude:

The original COCOMO overestimates the effort required to develop software in our environment, but its tailoring methodology is applicable. […] The resulting model fits 68% of the projects with less than 20% relative error, after the deletion of two outliers.

That’s maybe short of brilliant – the median project effort in their paper seems to be over 10 person-years, so they’re saying “there’s around two chances in three that we can tell you what this project will cost to within twice a developer’s salary” – but it’s much more precise than “#NoEstimates!” and only quite a lot less accurate.
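
For context, the Basic COCOMO ’81 formula for an “organic” project estimates effort as 2.4 × KLOC^1.05 person-months (the Fujitsu paper tailors such coefficients to its own project data; the textbook numbers here are only to show the shape of the model), and the “twice a developer’s salary” arithmetic falls straight out of the 20% figure:

```python
# Basic COCOMO '81, organic mode: effort in person-months from size in KLOC.
def cocomo_effort_pm(kloc, a=2.4, b=1.05):
    return a * kloc ** b

print(round(cocomo_effort_pm(100)))  # about 302 person-months for a 100 KLOC project

# The "twice a developer's salary" arithmetic from the paragraph above:
median_effort_pm = 10 * 12           # a ten person-year project, in person-months
print(median_effort_pm * 0.20)       # 24.0 person-months, i.e. roughly two person-years
```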

It might be interesting to track the application of COCOMO through time, and see whether a larger corpus of projects and refinement of the model has improved its accuracy and applicability. Unfortunately, that goal is in conflict with a general industry observation, noted in the introduction of the Fujitsu paper:

We have the impression that software people prefer to start from scratch rather than improving upon the work of others, which seems to be slowing down the progress in this field.

And indeed much of what was written in the sources I’ve been talking about here is no longer applied. All three of the books I mentioned up top talk about Earned Value (EV) Analysis, recording the actual rate of delivering on a project against its Planned Value (PV). For example, Hughes and Cotterell recommend plotting earned value using the “0/100 technique”:

where a task is assigned a value of zero until such time as it is completed when it is given a value of 100% of the budgeted value

Agilistas will recognise the resulting plot: it’s a burn down chart, where a story is not delivered until it has been accepted according to the definition of done, at which point it is completely delivered.
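
A minimal sketch of the 0/100 technique, with invented tasks and budgeted values:

```python
# Under 0/100, a task earns nothing until it is complete, then earns all of its
# budgeted value; the value still to be earned is what a burn down chart plots.
tasks = [
    {"name": "login",   "planned": 5, "done": True},
    {"name": "reports", "planned": 8, "done": False},
    {"name": "billing", "planned": 3, "done": True},
]

planned_value = sum(t["planned"] for t in tasks)               # 16
earned_value = sum(t["planned"] for t in tasks if t["done"])   # 8
print(planned_value - earned_value)                            # 8 still to burn down
```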

Ultimately somebody in the Agile community must have noticed that they accidentally borrowed a software engineering concept, because the burn down chart is now unfashionable.

If the team starts with a very full, prioritized, and estimated backlog with the expectation they will burn down all tasks to zero — does this not sound like a Waterfall mentality?

Oh noes, the dreaded W word! Run away from this thought, for it is a thought that somebody in the past has already considered!

And that is why I consider these problems to be vexing: when potential solutions are proposed, they are by necessity existing ideas from the old school that must be rejected in favour of…whatever is not those solutions. As Craftsmanship is post-Agile is post-Software Engineering is post-Crisis thinking, surely it will soon be time for post-Craftsmanship thinking. And then post-whatever-comes-next thinking. And we still won’t know how to compare expected value to expected cost.

posted by Graham at 19:08  

Sunday, November 13, 2016

The Vexing Problems in Programming

I admit it, I’ve been on the internet for quite a while (I could tell you that my ICQ number is 95941970, but I haven’t logged in for years) and my habits haven’t changed. I still regularly get technology news from Slashdot, and today was no exception. An interesting article was Here Be Dragons: The 7 most vexing problems in programming. Without wanting to spoil the article for you by giving away the punchline, there are indeed some frustratingly difficult problems mentioned: multithreading, security and encryption are among them.

All of these problems are sideshows to what I see as one of the largest and most vexing issues in programming: the fundamental rule of business administration is that your income should be greater than your costs, but software makers still, by and large, don’t have a way to compare the expected value of their work to the expected cost of the work.

The problem in space

Different software teams – and individuals – do work in different contexts, and in different ways. The lone wolf micro-ISV is not the same as an individual contract developer. The in-house IT team does not have the same problems to solve as the shrinkwrapped software vendor, and those developing web services for public consumption have yet another context. The team with core hours all working in a single office is different from the distributed team inhabiting multiple time zones.

How much of this variety is essential, and how much is accidental? How much of it is relevant, when considering some intervention, process change, or technique? Consultants speaking at conferences (another context with its own similarities and differences from the others) don’t tend to talk about what researchers in fields such as psychology would recognise as “threats to validity” of their work, but given all of the ways in which software is made, we need to know whether some proposal applies to all of them, or to some of them, or whether it has been applied to some of them and might be applied to others, and what would assist or confound that application.

The problem in time

What are the accepted, tested and validated ways to identify who will be using and otherwise impacted by our software systems? To know whether they can use the system we propose, and whether it is the best system for their intended use? To ensure that our proposed software systems treat those people ethically? To understand the cost (to ourselves and to others) of constructing those systems? To deliver the systems to the people who will interact with them? To choose which people are or aren’t entitled to access? To build a representation of the problem to be solved, to validate that representation, to validate the solution against that representation?

Where an answer exists to those questions, what are the contexts in which it is valid and what are the threats to its validity? How has that answer been compared with other possibilities? How has it been confirmed? How has it been challenged? How can I find out about those confirmations and challenges? How can I find out about any alternatives? What techniques exist to weigh up those alternatives quantitatively, rather than relying merely on the persuasiveness of the conference speakers promoting those solutions (and, by the way, the books/screencasts that describe the solutions)?

The lack of a problem

Why should I care? There’s enough money in software at the moment to mean that I don’t need to be any good at knowing what works or doesn’t work, I just need to get out there and sell some software. In the rare situation that I don’t make my money back, that’s just the market forces at work, and I can go and get a high-paying job somewhere while I lick my wounds, and pick another programming language/framework/platform/whatever it is that’s going to make my next attempt definitely succeed.

Clearly, this bottomless pit of money that arises from society’s unwavering faith in software and its ability to cure all ills is never going to run out. There’s no need to worry ever about whether we’re doing it right, because there’ll always be someone out there willing to pay for us to do it wrong. Life as a programmer is like some kind of socialist utopia where whether we’re making a valuable contribution or not, the rest of society is looking out for us.

That’s going to last forever, right?

posted by Graham at 12:49  

Tuesday, March 15, 2016

In which I interview so you don’t have to

Describing job interviews for technical roles in the software industry to people who have left or have always been outside the software industry requires two things: patience on the part of the one doing the describing, and the ability of the listener to take a joke. Over the last twelve years I have taken countless job interviews so that you don’t have to. Here’s what I’ve found, presented as a guide to running the average software developer interview. As with all descriptions of mediocrity, you should treat this as best practice.

[Be clear on this: not all interviews are like this. But this is an expectable baseline, derived from experience.]

Person Specification

The ideal candidate will be rich. We’re going to put them through hours – maybe even days – of tests, interviews, meetings, and “informal chats” that they’d better be on best behaviour for anyway. They need to be able to afford taking that time away from work, friends, other opportunities, so they’d better be rich.

That multiple-hour interview process means that they’d better be desperate for a job too. As you’ll find out in the section on our process, we pride ourselves on not giving away too much. We’re not selling our company to you, because we know we’re offering the chance to do what you’ve always wanted: sit in our open plan office space next to our own particular loud crisp-eater muttering at Eclipse.

The ability to go without food is desirable too. Even if a stage of the interview is planned to take so long that it runs over lunch, and even if we schedule a break for lunch, we might also forget to do any catering. Computers don’t need food and programmers are sort of like computers, we heard. We actually occasionally do feed our staff, and advertise this as a perk.

Our Process

The first thing we want to check is whether you can solve logical problems. We don’t actually need you to solve logical problems, after all, that’s what the computers are for. But we’ll give you an aptitude/basic reasoning test anyway [yes, although it’s no longer the 1960s and we aren’t IBM, this is still common if not universal].

The reasoning test is there to weed out people who didn’t have the same education as us, or were raised speaking a different language, or in a different culture. Empathy is hard, and to avoid unduly stressing our staff we want to make sure that their colleagues are as similar to them as possible. Additionally the hour you’ll take going through this test is an hour we don’t have to make eye contact or conversation with you: empathy is hard.

To be honest we have no idea what this test means or how to interpret its results. Everybody before you went through this test, and they’d raise merry hell if we “lowered the bar” by removing it now. As a holacracy/meritocracy/hypocrisy/this week’s organisational behaviour buzzword, we empower our employees to not see any changes that might raise a small amount of discomfort.

So after that test, depending on the seniority of the position and the candidate’s experience, we’ll…no, not really. We did nearly keep a straight face through that sentence though. In fact we didn’t read your CV except to find out whether the keywords that describe the problems we have right now and the solutions we have chosen last week appear. We didn’t read your GitHub/Lanyrd/Bitbucket profiles either, except to check that you have them so we know how much free work to expect out of you in addition to the paid stuff. Our project management system works on the Pareto Principle: 80 hours a week on our stuff, 20 hours a week on open source stuff that we can co-opt.

The next stage in the process is actually the same for everybody: a basic programming test to find out whether you even know what a computer is. We don’t care that you’re [glances at CV] Grace Hopper, we still don’t believe that you can reverse a linked list. None of our employees has ever had to reverse a linked list on the job, and we’d fire them if they did reverse a linked list on the job because there are libraries for that.
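
For the record, the dreaded exercise is small; a sketch in Python, offered purely as illustration (and yes, there are libraries for that):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse a singly linked list in place, returning the new head."""
    previous = None
    while head is not None:
        following = head.next
        head.next = previous
        previous = head
        head = following
    return previous

# reverse(Node(1, Node(2, Node(3)))) returns the node holding 3.
```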

Now we’ll come onto the technical interview: a cross-examination by a panel of between one and twelve [not joking] people who have, or have had, a word like “engineer” in their job description at some point. These people are tasked with finding out whether you’ve solved the same problems in your career as they have in theirs. If you haven’t, you might not be clever enough. If you have, then what new experiences are you bringing to the table?

By the way, our flexibility on your technical skills will go down as you become more experienced. We appreciate that new grads might not have used our tools/frameworks/technology and are willing to train them, but if you have more than six months’ experience with Java we’re going to call you a Java developer and only consider you for Java roles.

After all of that, it’s still possible that you might have somehow snuck through the system despite not going to the same university or belonging to the same society as the founder. We can’t really quantify the idea of “culture fit” but that’s what we’re examining in the next part of the process and we’ll know it when we don’t see it.

The Offer

You’ll get a phone call from us while you’re in the bath. We’ll outline the position, pay and (unless this is an American company and there isn’t any) holiday provision. You then have two seconds in which to reply, with either “Yes” or whatever the other one is. You may have other irons in the fire but of course you’ll want to drop all of those when we tell you about the parking space we’ve already allocated for you [This has happened. I don’t have a car.].

The Job

You will be working with a team of people who all went through that same interview and decided they wanted to work in our environment. We will leave it to you to decide what that means.

The Alternative

There are some less…scientific…approaches to hiring that involve using the candidate’s stated and visible experience to have a conversation about what they’ve done, how they do and don’t like to work, how they’ve responded to success and failure, and whether the challenges they would like to see in their career match up with the environment we’re able to provide. While that sounds like quite a pleasant experience for everybody involved we fail to see how it could possibly translate into discovering whether we want to work with you or vice versa.

posted by Graham at 11:52  

Tuesday, July 14, 2015

On running out of words

John Gruber’s subscription to Wiktionary expired:

At just 20 percent of unit sales, Apple isn’t even close to a monopoly. At 92 percent profit share, they have a market dominance that rivals any actual monopoly the tech industry has ever seen. We don’t even have a term for this situation, it’s so unusual.

We do have a term: monopoly will do just fine. Gruber says that Apple “isn’t even close to a monopoly”, but you don’t need to have all or even most of the unit sales in a market in order to be able to act monopolistically. An entity (or a cabal) only needs a big enough share of the sales in order to be able to set prices independent of the other competitors in the market. (Working at big telecoms companies has the effect of teaching you specifics of market economics, but then so did those economics classes I took at University.)

That 92% profit on 20% sales is indicative, rather than contraindicative, of a monopoly. And there’s another word we could use, too: monopsony. Let’s say that you’ve made an iOS app, and now you want to sell it. Do you create a storefront on your website to do that? Do you contact Sears and see how many boxes they want? Speak to some third-party distributor? No: you can only sell to Apple; they are the only buyer for iOS apps.

The thing it’s important to remember about monopolies or monopsonies is that they are not inherently bad: badness happens when an entity uses its dominant position in a market to set prices or other terms that are not considered fair, and that’s a pretty woolly situation. When the one buyer in your market decides that your contribution is “amateur hour” (sucks to be a hobbyist, I guess), or that your content is “over the line”, and doesn’t want to buy your product, you have no other vendors to sell it to: is that fair?

This is an argument that relies too much on legal details and nuance to be able to answer as a novice, so I’ll spare you my “amateur hour” pontification. I would imagine that a legal system that did explore this question would consider analogous environments, like the software market of the 1990s. Back then, Microsoft bundled a web browser and a media player with their operating systems and used their market power (which let them act as a monopoly even though competitors existed) as an operating system vendor to make it hard to sell competing browsers or media players. It might be an interesting thought experiment to compare that situation with today’s.

posted by Graham at 07:21  

Sunday, January 18, 2015

The other pink dollar

How did (a very broad and collective) we go from selling NeXT at $440M to selling Tumblr at $1.1B, in under two decades? Why was Sun Microsystems, one of the most technologically advanced companies in the valley, only worth two Nests?

I don’t think we’re technologists (much) any more. We’ve moved from building value by making interesting, usable and advanced technology to building value by solving problems for people and making interesting, useful and advanced experiences. The good thing about a NeXT workstation was that it was better than other workstations and minicomputers; the bad thing is that you can’t actually do a lot with a Unix workstation. You need applications, functions for turning silicon-rich paperweights into useful tools.

NeXT’s marketing was “it’s easier to turn our paperweight into a tool than their paperweight”, but today’s tech companies are mostly making things you can already use. The technology is a back-office function, enabling the things you can already use or working around problems the companies discovered in trying to enable those useful things. Making the paperweight with the most potential is no longer interesting to most of the industry, though there will be money in paperweights for a few years yet (even if the paperweights are becoming small enough that they can’t weigh down paper, and even if no-one has a stack of paper to weigh down any more).

Hence the current focus on “disruption”, in the Silicon Valley, not Clayton Christensen, sense. It’s easy to see how an already-solved problem can be solved faster, cheaper or better, by taking out intermediate steps or slow communications.

This is traditional science-fiction advancement. What technology do you need to get the plot moving quickly? Two people need to talk but they’re not on the same planet: you need a mobile phone. Two people need to be in the same room but they’re not on the same planet: you need a teleporter. Two people need to share the specifications of a starship but there isn’t enough paper in the universe: you need a PADD.

It’s harder to identify solutions to unsolved problems, or solutions to unknown problems. This hasn’t changed since the paperweight days; the transition has been from “well, I guess you can find some problem to solve with this workstation” to “we solved this obvious problem for you”. That’s an advance.

It really is good that we’re moving from building things that could potentially solve a problem to things that definitely do solve a problem. That’s more efficient, as fewer people are solving the same problem. Consider the difference between every company buying a Sun workstation and hiring a programmer to write a CRM application, and every company paying someone else to deliver them a CRM system.

It’s also likely the reason for the rise in open-source software, hardware, data centres and related infrastructure. Nobody’s making railroads any more, they’re making haulage companies that use railroads to solve the problem of “you’re in Chicago, Illinois but your crate full of machinery is in a port in Seattle, Washington”. There’s lots of cost in having the best rails, but questionable benefit, so why not share the blueprints for the rails so that anyone can improve them?

Well, what if it turns out that the best way to haul your goods is not on rails? If it’s easy to accept better rails, but the best solution lies in a different direction? Alan Kay would recognise this problem: everyone can see incremental improvements in the pink plane but there are magnitudes of improvements to be made by getting out into the blue plane.

How can a blue plane venture get funded, or adopted? When was the last time it happened? Is there something out there already that just needs adoption to get us onto the blue plane? Or have we set up a system that makes it easy to move quickly on the pink plane, but not to change direction to the blue plane?

posted by Graham at 19:45  

Sunday, June 1, 2014

It doesn’t take an Oracle to see that coming

Today has largely been brought to you by nostalgia, prompted by this article reporting on a get-together of former Sun Microsystems employees.

I have never been a former Sun Microsystems employee, and of course now I never will be one. Of all the tech companies I’ve interacted with, Sun is the one I most regret not getting to work with. By the time I dealt with them, they had already put the “crash” in “dot-com crash” but there was still a feeling that they made great things. And besides, they showed that even a pony-tailed Objective-C programmer can be a tech CEO.

I recently talked about the importance of GNU projects, but plenty of other software projects were also important, and Sun had a hand in quite a few of them:

  • Bill Joy worked for them, and most of their early workstation operating systems were based on BSD Unix.
  • In fact while Apollo may have invented the idea that a single person might use a Unix computer, Sun popularised it.
  • I learned how to boot Macs by learning how to program Forth and boot Suns.
  • NFS was the beginning of the separation between your device and your documents.
  • NIS was a bit of an important step on the way to logging in anywhere (its level of baroqueness compared to OAuth has never been accurately gauged).
  • In fact, they pretty much invented cloud computing.
  • Java was quite a big thing for a while.
  • Dtrace is pretty amazing.
  • They even got into standard Unix workstation vendor capitalisation for a while.

It’s likely that much of the interesting stuff at Sun was already over by the time I could’ve worked there, and I certainly experienced a very last-minute replay of some of their history. When I was a student I ‘borrowed’ an Ultra 5 (one of their least good workstations, pretty much a PC with sun4u SPARC innards) and a SPARCstation 5 (one of their most good) to learn about Solaris, SunOS and NeXTSTEP. But it certainly feels like a lot of the future was invented there, even if they were largely following Xerox’s playbook like the rest of the industry.

So tonight, I’ll remember that my control key is in the correct place:

[Image: Sun Type 5 keyboard]

I’ll press L1 and A, then raise a glass to Sun and the job I never had.

posted by Graham at 22:34  

Monday, May 26, 2014

The lighter side of open source

In a recent post I talked about the apolitical, amoral nature of open source software and how it puts the interests of a small programming class before the interests of the broad collection of people who interact with programmers’ output. The open source movement has been of great benefit to the software industry, and this hasn’t necessarily been a zero-sum game.

Reality is always more nuanced than history, and yet here is a potted guide to open source history. In the beginning, there were military computers. There was no-one else to share your computer programs with, because:

  1. no-one else had a computer.
  2. well, maybe they did, but they weren’t telling you.
  3. you didn’t want to tell anyone else you have a computer.

Then there were academic computers. Now you do want to share your programs with everyone, and they share theirs with you and so everyone is on the cutting edge.

Then there were commercial computer companies (I told you this history would lack nuance), who were happy to share their programs with you because it meant you could get more out of the computers they were selling.

Then there were commercial computer companies who decided that the source code to the programs used to interface with their hardware was their competitive advantage, and decided to stop sharing it. This made an academic (Richard Meriadoc (humour me) Stallman) sad, and so he created the Free Software movement to:

  1. promote sharing of software over not sharing software;
  2. subvert the copyright system usually used to restrict sharing to enable sharing.

Then there were people who wanted to use Free Software in their day jobs but found that the movement was considered too ideological to be palatable to management, so they rebranded it Open Source Software to re-frame the discussion along business, rather than political, lines.

This is about the point when your protagonist enters, stage right. The dot-com bubble was imploding, leading to changed fortunes for all sorts of people and organisations in the software industry. Everything I would do regarding professional computing depended in some way on the GNU project and the Free Software Foundation:

  1. I learned Unix, thanks to the ability to inexpensively run GNU/Linux on my desktop computer.
  2. The things I learned about Unix, C programming and so on were portable to various platforms beyond GNU/Linux, thanks to the GNU compiler collection, GNU bash, GNU make, GNU debugger and others.
  3. One such platform was Mac OS X, the new hotness from Apple. This was a technology acquired through the purchase of NeXT, who had been able to provide a complete programming environment despite their small size and (comparatively) small budget by wrapping the tools listed above.

Somewhere in all the above I even found it possible to get paid for writing software: a GPLv2-licensed Lisp package for GNU Emacs.

Of course, that’s just my story, but there are plenty like it. Many other programmers work on platforms like iOS, or Android, or Linux, or in environments like Ruby or Objective-C, that either only exist or have only become as successful as they have due to the successes of the Free Software Foundation, and the ability for organisations (commercial or otherwise) to take advantage of Free or Open Source software as building blocks which they can combine or add to.

Since then, the discussion has again been re-framed. Open Source – originally a branding change to make Free Software acceptable to business – has become a principle rather than a tool. A community that owes its financial viability to Free Software now denounces such “viral” licences, as source released under their conditions is harder to profit from than the more permissive, university-style Open Source licences.

Software writers in the 1980s liked to talk about how object technology would be the silver bullet that allowed re-use and composition of software systems, moving programming from a cottage industry where everyone makes everything from scratch to a production-line enterprise where standard parts fit together to provide a base for valuable products. It wasn’t; the sharing-required software licence was.

posted by Graham at 22:48  