In which I misunderstood Objective-C

I was having a think about this short history of Objective-C, and it occurred to me that perhaps I had been thinking about ObjC wrong. Now, I realise that by thinking about ObjC at all I mark myself out as a bit of an oddball, but I do it a lot. I co-host the [objc retain]; stream with Steven Baker, discussing cross-platform free software Objective-C every week. Hell of a time to realise I’ve been doing it wrong.

My current thinking is that the idea of ObjC is not to write “apps” in ObjC, or even in successor languages (sorry, fans of successor languages). Summed up in that history are Brad Cox’s views, which you can read at greater length in his books. I’ve at least tangentially covered each book here: Object-Oriented Programming: an Evolutionary Approach and Superdistribution: Objects as Property on the Electronic Frontier. In these he talks about Object-Oriented Programming as the “software industrial revolution”, in which the each-one-is-bespoke way of writing software from artisanally-selected ones and lightly-sparkling zeroes is replaced with a catalogue of re-usable parts, called Software ICs (integrated circuits). As an integrator, I might take the “window” IC, the “button” IC, the “text field” IC, and a “data store” IC and make a board for entering expenses.

So far, so npm. The key bit is the next bit. As a computer owner, you might take that board and integrate it into your computer so that you can do your home finances, or so that you can submit your business expense claims, or so that your characters in The Sims can claim for their CPU time, or all three of those things. The key is that this isn’t some app developer, this is the person whose computer it is.

From that perspective, Objective-C is an intermediary tool, and not a particularly important or long-lasting one. Its job is to turn legacy code into objects, so that people using their computers can stick software together out of those objects (hello NSFileManager). To the extent it has an ongoing job, that is to turn algorithms into objects, for the same reason (but the algorithms have been made out of not-objects, because All Hail the Perform Ant).

You can make your software on your computer by glueing objects together, whether they’re made of ObjC (a common and important case), Eiffel (an uncommon and important case), Smalltalk (ditto) or whatever. Objective-C is the shiny surface we’ve been missing over the tar pit. It is the gear system on the bicycle for the mind; the tool that frees computer users from the tyranny of the app vendor and the app store.

I apologise for taking this long to work that out.

Posted in cocoa, design, freesoftware, gnustep, nextstep, objc

Episode 39: Monetising the Hobby

This episode is about what happens when you let people who are interested in programming (the process) define how you do programming (creating a program).

Links:

Please remember you can support me on Patreon! You can also check out my other projects: [objc retain]; and Dos Amigans. Thank you!

Episode 38: the Cost of Dependencies

This episode is all about whether dependencies are expensive or valuable to a software project (the answer is “yes” in a lot of cases). It was motivated by Benefits of dependencies in software projects as a function of effort by Eli Bendersky.

Why you didn’t like that thing that company made

There’s been a bit of a thing about software user experience going off the rails lately. Some people don’t like cross-platform software, and think that it isn’t as consistent, as well-integrated, or as empathetic as native software. Some people don’t like native software, thinking that changes in the design of the browser (Apple), the start menu (Microsoft), or everything (GNOME) herald the end of days.

So what’s going on? Why did those people make that thing that you didn’t like? Here are some possibilities.

My cheese was moved

Plenty of people have spent plenty of time using plenty of computers. Some short-sighted individual promised “a computer on every desktop”, made it happen, and this made a lot of people rather angry.

All of these people have learned a way of using these computers that works for them. Not necessarily the one that you or anybody else expects, but one that’s basically good enough. This is called satisficing: finding a good enough way to achieve your goal.

Now removing this satisficing path, or moving it a few pixels over to the left, might make something that’s supposedly better than what was there before, but is actually worse because the learned behaviour of the people trying to use the thing no longer achieves what they want. It may even be that the original thing is really bad. But because we know how to use it, we don’t want it to change.

Consider the File menu. In About Face 3: The Essentials of Interaction Design, written in 2007, Alan Cooper described all of the problems with the File menu and its operations: New, Open, Save, Save As…. Those operations are implementation focused. They tell you what the computer will do, which is something the computer should take care of.

He described a different model, based on what people think about how their documents work. Anything you type in gets saved (that’s true of the computer I’m typing this into, which works a lot like a Canon Cat). You can rename it if you want to give it a different name, and you can duplicate it if you want to give it a different name while keeping the version at the original name.

This should be better, because it makes the computer expose operations that people want to do, not operations that the computer needs to do. It’s like having a helicopter with an “up” control instead of cyclic and collective controls.

Only, replacing the Open/Save/Save As… stuff with the “better” stuff is like removing the cyclic and collective controls and giving a trained helicopter pilot with years of experience the “up” button. It doesn’t work the way they expect, they have to think about it which they didn’t have to do with the cyclic/collective controls (any more), therefore it’s worse (for them).
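
Cooper’s model is small enough to sketch. Here is a minimal, hypothetical Python sketch (every name in it is invented for illustration; the `store` dict stands in for the file system): whatever you type is persisted immediately, and the only file operations exposed are the ones from the person’s model of their document — rename and duplicate.

```python
class Document:
    """Cooper-style document: no New/Open/Save, everything is always saved."""

    def __init__(self, store, name, content=""):
        self.store = store      # stands in for the file system
        self.name = name
        store[name] = content   # the document exists as soon as it's made

    def type_text(self, text):
        # Whatever you type is already on disk: no dirty flag, no Save menu.
        self.store[self.name] += text

    def rename(self, new_name):
        # Give it a different name; nothing else changes.
        self.store[new_name] = self.store.pop(self.name)
        self.name = new_name

    def duplicate(self, new_name):
        # Keep the version at the original name, and carry on in a copy.
        return Document(self.store, new_name, self.store[self.name])


disk = {}
doc = Document(disk, "Expenses")
doc.type_text("Taxi: 20")
march = doc.duplicate("Expenses (March)")
march.type_text("; Lunch: 8")
# disk now holds both versions, and nothing was ever explicitly "saved"
```

The point of the sketch is what’s missing: there is no `save` method to forget to call, which is exactly the implementation detail Cooper says the computer should take care of on its own.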

Users are more experienced and adaptable now

But let’s look at this a different way. More people have used more computers now than at any earlier point in history, because that’s literally how history works. And while they might not like having their cheese moved, they’re probably OK with learning how a different piece of cheese works because they’ve been doing that over and over each time they visit a new website, play a new game, or use a new app.

Maybe “platform consistency” and “conform with the human interface/platform style guidelines” was a thing that made sense in 1984, when nobody who bought a computer with a GUI had ever used one before and would have to learn how literally everything worked. But now people are more sophisticated in their use of computers, regularly flit between desktop applications, mobile apps, and websites, across different platforms, and so are more flexible and adaptable in using different software with different interactions than they were in the 1980s when you first read the Amiga User Interface Style Guide.

We asked users; they don’t care

At first glance, this explanation seems related to the previous one. We’re doing the agile thing, and talking to our customers, and they’ve never mentioned that the UI framework or the slightly inconsistent controls are an issue.

But it’s actually quite different. The reason users don’t mention that there’s extra cognitive load is that these kinds of mental operations are tacit knowledge. If you’re asked about “how can we improve your experience filing taxes”, you’ll start thinking tax-related questions, before you think “I couldn’t press Ctrl-A to get to the beginning of that text field”. I mean, unless you’re a developer who goes out of their way to look for that sort of inconsistency in software.

The trick here is to stop asking, and start watching. Users may well care, even if they don’t vocalise that caring. They may well suffer, even if they don’t realise it hard enough to care.

We didn’t ask users

Yes, that happens. I’ve probably gone into enough depth on why it happens in various places, but here’s the summary: the company has a customer proxy who doesn’t proxy customers.

Posted in UI

Episode 37: systemic failures in software

Here we talk about things that can go wrong in a whole software organisation such that even if everybody does their job to the best of their ability, and they have a good ability, the result is far from optimal.

Identifying these sorts of things relies on being able to see the whole system as, well, a system. A great resource for learning about this is the Donella Meadows project, and you can start with her book Thinking in Systems: a Primer.

On programmer behaviours that make Scrum so bad

Respectable persons of this parish of Internet have been, shall we say, critical of Scrum and its ability to help makers (particularly software developers) to make things (particularly software). Ron Jeffries and GeePaw Hill have both deployed the bullshit word.

My own view, which I have described before, is that Scrum is a baseline product delivery process and a process improvement framework. Unfortunately, not many of us are trained or experienced in process improvement, and not many of us know what we should be measuring to get a better process, so the process never improves.

At best, then, many Scrum organisations stick with the baseline process. That is still scadloads better than what many software organisations were trying before they had heard of Agile and Scrum, or after they had heard of them and before their CIO’s in-flight magazine made it acceptable. As a quick recap for those who were recruited into the post-Agile software industry, we used to spend two months working out all the tasks that would go into our six month project. Then we’d spend 15 months trying to implement them, then we’d all point at everybody else to explain why it hadn’t worked. Then, if there was any money left, we’d finally remember to ship some software to the customer.

Scrum has improved on that, by distributing both the shipping and the blaming over the whole life span of the project. This was sort of a necessary condition for software development to survive the dot-com crash, when “I have an idea” no longer instantly led to office space in SF, a foosball table, and thirty Herman Miller chairs. You have to deliver something of value early because you haven’t got enough money any more to put it off.

So when these respectable Internet parishioners say that Scrum is bad, this is how bad it has to be to reach that mark. Both of the authors I cite have been drinking from this trough way longer than I have, so have much more experience of the before times.

In this article I wanted to look at some of the things I, and software engineering peers around me, have done to make Scrum this bad. Because, as Ron says, Scrum is systemically bad, and we are part of the system.

Focus on Velocity

One thing it’s possible to measure is how much stuff gets shovelled in per iteration. This can be used to guide a couple of process-improvement ideas. First up is whether our ability to foresee problems arising in implementation is improving: this is “do estimates match actuals” but acknowledging that they never do, and the reason they never do is that we’re too optimistic all the time.

Second is whether the amount of stuff you’re shovelling is sustainable. This has to come second, because you have to have a stable idea of how much stuff there is in “some stuff” before you can determine whether it’s the same stuff you shovelled last time. If you occasionally shovel tons of stuff, then everybody goes off sick and you shovel no stuff, that’s not sustainable pace. If you occasionally shovel tons of stuff, then have to fix a load of production bugs and outages, and do a load of refactoring before you can shovel any more stuff, that’s not a sustainable pace, even if the amount of work in the shovelling phase and the fixing phase is equivalent.

This all makes working out what you can do, and what you can tell other people about what you can do, much easier. If you’re good at working out where the problems lie, and you’re good at knowing how many problems you can encounter and resolve per time, then it’s easier to make plans. Yes, we value responding to change over following a plan, but we also value planning our response to change, and we value communicating the impact of that response.

Even all of that relies on believing that a lot of things that could change, won’t change. Prediction and forecasting aren’t so chaotic that nobody should ever attempt them, but they certainly are sensitive enough to “all things being equal” that it’s important to bear in mind what all the things are and whether they have indeed remained equal.

And so it’s a terrible mistake to assume that this sprint’s velocity should be the same as, or (worse) greater than, the last one. You’re taking a descriptive statistic that should be handled with lots of caveats and using it as a goal. You end up doing the wrong thing.

Let’s say you decide you want 100 bushels of software over the next two weeks, because you shovelled 95 bushels of software last sprint. The easiest way to achieve that is to take things that you think add up to 100 bushels of software, and do less work so that they contain only, say, 78 bushels, and try to get those 78 bushels done in the time.

Which bits do you cut off? The buttons that the customer presses? No, they’ll notice that. The actions that happen when the customer presses the buttons? They’ll probably notice that, too. How about all the refinement and improvement that will make it possible to add another 100 bushels next time round? Nobody needs that to shovel this 100 bushels in!

But now, next time, shovelling software is harder, and you need to get 110 bushels in to “build on the momentum”. So more corners get cut, and more evenings get worked. And now it’s even harder to get the 120 bushels in that are needed the next time. Actual rate of software is going down, claimed rate of software is going up: soon everything will explode.
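
That spiral is easy to model. Here is a toy simulation (hypothetical Python; every number in it is invented for illustration, not measured from any real team): claimed velocity is forced upward each sprint to “build on the momentum”, and some share of every bushel claimed but not really delivered comes back as drag on the next sprint’s real capacity.

```python
def simulate(sprints=6, capacity=95, target=100):
    """Toy model of velocity-as-target; all numbers are invented.

    Returns a list of (claimed, delivered) pairs, one per sprint.
    """
    drag = 0          # accumulated cost of cut corners (skipped refinement)
    history = []
    for _ in range(sprints):
        delivered = max(capacity - drag, 0)  # actual rate of software
        claimed = target                     # what the board says happened
        drag += (claimed - delivered) // 2   # some of the gap becomes debt
        history.append((claimed, delivered))
        target = int(target * 1.1)           # "build on the momentum"
    return history


for claimed, delivered in simulate():
    print(f"claimed {claimed:3d} bushels, actually delivered {delivered:3d}")
```

In this toy run the claimed rate climbs every sprint while actual delivery decays to nothing: the “soon everything will explode” curve in miniature.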

Separating “technical” and “non-technical”

Sometimes also “engineering” and “the business”. Particularly in a software company, this is weird, because “the business” is software engineering, but it’s a problem in other lines of business enabled by software too.

Often, domain experts in companies have quite a lot of technical knowledge and experience. In one fintech where I used to work, the “non-technical” people had a wealth (pun intended) of technical know-how when it came to financial planning and advice. That knowledge is as important to the purpose of making financial technology software as is software knowledge. Well, at least as important.

So why do so many teams delineate “engineering” and “the business”, or “techies” and “non-technical” people? In other words, why do software teams decide that only the software typists get a say in what software gets made using what practices? Why do people divide the world into pigs and chickens (though, in fairness to the Scrum folks, they abandoned that metaphor)?

I think the answer may be defensiveness. We’ve taken our ability to shovel 50 bushels of software and our commitment (it used to be an estimate, but now it’s a commitment) to shovel 120 bushels, and it’s evident that we can’t realistically do that. Why not? Oh, it must be those pesky product owners, who keep bringing their demands from the VP of marketing when all they’re going to do is say “build more”, “shovel more software”, and they aren’t actually doing any of it. If the customer rep didn’t keep repping the customer, we’d have time to actually do the software properly, and that would fix everything.

Note that while the Scrum guide no longer mentions chickens and pigs, it “still makes a distinction between members of the Scrum team and those individuals who are part of the process, but not responsible for delivering products”. This is an important distinction in places, but irrelevant and harmful in others. But it’s at its most harmful when it’s too narrowly drawn. When people who are part of the process are excluded from the delivering-products cabal. You still need to hear from them and use their expertise, even if you pretend that you don’t.

The related problem I have seen, and been part of, is the software-expertise people not even gaining passing knowledge of the problem domain. I’ve even seen it while working on a developer tools team where the engineers building the tools didn’t have cause to use, or particularly even understand, the tool during our day-to-day work. All of those good technical practices, like automated acceptance tests, ubiquitous language, domain-driven design; they only mean shit if they’re helping to make the software compatible with the things people want the software for.

And that means a little bit of give and take on the old job boundaries. Let the rest of the company in on some info about how the software development is going, and learn a little about what the other folks you work with do. Be ready to adopt a more nuanced position on a new feature request than “Jen from sales with another crazy demand”.

Undriven process optimisation

As I said up top, the biggest problem Scrum often encounters in the wild is that it’s a process improvement framework run by people who don’t have any expertise at process improvement, but have read a book on Scrum (if that). This is where the agile consultant / retrospective facilitators usually come in: they do know something about process improvement, and even if they know nothing about your process it’s probably failing in similar enough ways to similar processes on similar teams that they can quickly identify the patterns and dysfunctions that apply in your context.

Without that guidance, or without that expertise on the team, retrospectives are the regular talking shop in which everybody knows that something is wrong, nobody knows what, and anybody who has a pet thing to try can propose that thing because there are no cogent arguments against giving it a go (nor are there any for it, except that we’ve got to try something!).

Thus we get resume-driven development: I bet that last sprint was bad because we aren’t functionally reactive enough, so we should spend the next sprint rewriting to this library I read about on medium. Or arbitrary process changes: this sprint we should add a column to the board, because we removed a column last time and look what happened. Or process gaming: I know we didn’t actually finish anything this month, but that’s because we didn’t deploy so if we call something “done” when it’s been typed into the IDE, we’ll achieve more next month. Or more pigs and chickens: the problem we had was that sales kept coming up with things customers needed, so we should stop the sales people talking either to the customers, to us, or to both.

Work out what it is you’re trying to do (not shovelling bushels of software, but the thing you’re trying to do for which software is potentially a solution) and measure that. Do you need to do more of that? Or less of it? Or the same amount for different people? Or for less money? What, about the way you’re shovelling software, could you change that would make that happen?

We’re back at connecting the technical and non-technical parts of the work, of course. To understand how the software work affects the non-software goals we need to understand both, and their interactions. Always have, always will.

Posted in agile, process

Episode 36: the Isolation Episode

From my anosmic isolation chamber as I’ve got the ‘rona! I talk about how many of the things software engineers think they should or shouldn’t be doing probably don’t have any impact on the success or failure of the software they’re making.

I don’t explicitly mention anything that should be in show notes, but a couple of resources relevant to setting goals for businesses and measuring progress would be beneficial, non? Here are two: Measure What Matters by John Doerr is the book on Objectives and Key Results (OKRs), introduced by Andy Grove at Intel and adopted by Alphabet/Google among others. The 4 Disciplines of Execution from FranklinCovey are how you ensure that the thing you think you ought to be changing, and the thing you can and are changing, are connected.

Sleep on it

In my experience, the best way to get a high-quality software product is to take your time, not crunch to some deadline. On one project I led, after a couple of months we realised that the feature goals (ALL of them) and the performance goals (also ALL of them, in this case ALL of the iPads back to the first-gen, single core 2010 one) were incompatible.

We talked to the customer, worked out achievable goals with a new timeline, and set to work with a new idea based on what we had found was possible. We went from infrequently running Instruments (the Apple performance analysis tool) to daily, then every merge, then every proposed change. If a change led to a regression, find another change. Customers were disappointed that the thing came out later than they originally thought, but it was still well-received and well-reviewed.

At the last release candidate, we had two known problems. Very infrequently sound would stop playing back, which with Apple’s DTS we isolated to an OS bug. After around 12 hours of uptime (in an iPad app, remember!) there was a chance of a crash, which with DTS we found to be due to calling an animation API in a way consistent with the documentation but inconsistent with the API’s actual expectations. We managed to fix that one before going live on the App Store.

On the other hand, projects I’ve worked on that had crunch times, weekend/evening working, and increased pressure to deliver from management ended up in a worse state. They were still late, but they tended to be even later and lower quality, as developers who were under pressure to fix their bugs introduced other bugs by cutting corners. And everybody got upset and angry with everybody else, desperate to find a reason why the late project wasn’t their own fault.

In one death march project, my first software project which was planned as a three month waterfall and took two years to deliver, we spent a lot of time shovelling bugs in at a faster rate than we were fixing them. On another, an angry product manager demanded that I fix bugs live in a handover meeting to another developer, without access to any test devices, then gave the resulting build to a customer…who of course discovered another, worse bug that had been introduced.

If you want good software, you have to sleep on it, and let everybody else sleep on it.

Posted in process, team

Episode 35: a bored man with a microphone

I explore the theme of community and the difficulty I have with feeling like a member of a community of technologists. This was motivated by Joy, or Not by Ron Jeffries.

The Descent of Man by Grayson Perry

Society of Research Software Engineering

Thinking Clearly about Corporations by John Sullivan

[objc retain];, the video stream for Objective-C programmers on GNUstep

My proposal for scaling open source: don’t

I’ve had a number of conversations about what “we” in the “free software community” “need” to do to combat the growth in proprietary, user-hostile and customer-hostile business models like cloud user-generated content hosts, social media platforms, hosted payment platforms, videoconferencing services etc. Questions can often be summarised as “what can we do to get everyone off of Facebook groups”, “how do we get businesses to adopt Jitsi Meet instead of Teams” or “how do we convince everyone that Mattermost is better for community chat than Slack”.

My answer is “we don’t”, which is very different from “we do nothing about those things”. Scaled software platforms introduce all sorts of problems that are only caused by trying to operate the software at scale, and the reason the big Silicon Valley companies are that big is that they have to spend a load of resources just to tread water because they’ve made everything so complex for themselves.

This scale problem has two related effects: firstly the companies are hyper-concerned about “growth” because when you’ve got a billion users, your shareholders want to know where the next hundred million are coming from, not the next twenty. Secondly the companies are overly-focused on lowest common denominator solutions, because Jennifer Miggins from South Shields is a rounding error and anything that’s good enough for Scott Zablowski from Los Angeles will have to be good enough for her too, and the millions of people on the flight path between them.

Growth hacking and lowest common denominator experiences are their problems, so we should avoid making them our problems, too. We already have various tools for enabling growth: the freedom to use the software for any purpose being one of the most powerful. We can go the other way and provide deeply-specific experiences that solve a small collection of problems incredibly well for a small number of people. Then those people become super-committed fans because no other thing works as well for them as our thing, and they tell their small number of friends, who can not only use this great thing but have the freedom to study how the program works, and change it so it does their computing as they wish—or to get someone to change it for them. Thus the snowball turns into an avalanche.

Each of these massive corporations with their non-free platforms that we’re trying to displace started as a small corporation solving a small problem for a small number of people. Facebook was internal to one university. Apple sold 500 computers to a single reseller. Google was a research project for one supervisor. This is a view of the world that’s been heavily skewed by the seemingly ready access to millions of dollars in venture capital for disruptive platforms, but many endeavours don’t have access to that capital and many that do don’t succeed. It is ludicrous to try and compete on the same terms without the same resources, so throw Marc Andreessen’s rulebook away and write a different one.

We get freedom to a billion people a handful at a time. That reddit-killing distributed self-hosted tool you’re building probably won’t kill reddit, sorry. Design for that one farmer’s cooperative in Skåne, and other farmers and other cooperatives will notice. Design for that one town government in Nordrhein-Westfalen, and other towns and other governments will notice. Design for that one biochemistry research group in Brasilia, and other biochemists and other researchers will notice. Make something personal for a dozen people, because that’s the one thing those massive vendors will never do and never even understand that they could do.

Posted in whatevs