Story points: there’s no right way to do it

Story points as described represent an attempt to abstract estimation away from “amount of stuff done per unit time”, because we’re bad at doing that and people were traditionally using that to make us look bad. So we introduce an intermediate value, flip a ratio, and get the story point: the [meaningless value] per amount of stuff. Then we also get the velocity, which is the [meaningless value] per unit time, and…

…and we’re back where we started. Except we’re not, we’re slower than we were before, because it used to be that people asked “how much do you think you can get done over the next couple of weeks”, and we’d tell them, and we’d be wrong. But now they ask “how big is this stuff”, then they ask “how much capacity for stuff is there over the next couple of weeks”, and we tell them both of those things, and we get both wrong, so we still have a wrong answer to the original question but answered two distinct questions incorrectly to get there.
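To make the compounding concrete, here's a minimal sketch (my own illustration, with invented sizes and rates) of how two optimistically wrong answers multiply into one wrong schedule:

```python
# Two independent estimation errors compound into one schedule error.
true_size = 100        # how much "stuff" is actually in the backlog
true_rate = 10         # how much stuff the team actually finishes per sprint

# Optimism biases both answers the same way:
# the stuff looks smaller, the capacity looks bigger.
estimated_size = true_size * 0.8   # answer to "how big is this stuff?"
estimated_rate = true_rate * 1.2   # answer to "how much capacity is there?"

predicted_sprints = estimated_size / estimated_rate
actual_sprints = true_size / true_rate

# Two answers each 20% wrong give a schedule roughly 33% wrong.
print(predicted_sprints)   # about 6.7
print(actual_sprints)      # 10.0
```

The point of the sketch is only that the errors in the two answers don't cancel: dividing one optimistic estimate by another makes the combined answer worse than either alone.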

There’s no real way out of that. The idea that velocity will converge over time is flawed, both because the team, the context, and the problem are all changing at once, and because the problem with estimation is not that we’re Gaussian bad at it, but that we’re optimistic bad at it. Consistently, monotonically, “oh I think this will just mean editing some config, call it a one-pointer”-ingly, we fail to see complexity way more than we fail to see simplicity. The idea that even if velocity did converge over time, we would then have reliable tools for planning and estimation is flawed, because what people want is not convergence but growth.

Give people 40 points per sprint for 20 sprints and you’ll be asked not how you became so great at estimation, but why your people aren’t getting any better. Give them about 40 points per sprint for 20 sprints, and they’ll applaud the 44s and frown at the 36s.

The assumption that goes into agile, lean, kanban, lean startup, and similar ideas is that you’re already doing well enough that you only need to worry about local optima, so you may as well take out a load of planning overhead and chase those optima without working out your three-sprint rolling average local optimisation rate.

OOP the Easy Way

It’s still very much a work in progress, but OOP the Easy Way is now available to purchase from Leanpub (a free sample is also available from the book’s Leanpub page). Following the theme of my conference talks and blog posts over the last few years, OOP the Easy Way starts with an Antithesis, examining the accidental complexity that has been accumulating under the banner of Object-Oriented Programming for nearly four decades. This will be followed by a Thesis, constructing a model of software objects from the essential core, then a Synthesis, investigating problems that remain unsolved and directions that remain unexplored.

At this early stage, your feedback on the book is very welcome and will help you and your fellow readers get more from it. Updates are free and will be delivered automatically through Leanpub as they are published.

I hope you enjoy OOP the Easy Way!

Or maybe, because we want to

How (and Why) Developers Use the Dynamic Features of Programming Languages: The Case of Smalltalk is an interesting analysis of the reality of dynamic programming in Smalltalk (Squeak and Pharo, really). Taking the 1,000 largest projects on SqueakSource, the authors quantitatively examine the use of dynamic features in projects and qualitatively consider why they were adopted.

The quantitative analysis is interesting: unsurprisingly a small number (under 1.8%) of methods use dynamic features, but they are spread across a large number of projects. Applications make up a large majority of the projects analysed, but only a small majority of the uses of dynamic features. The kinds of dynamic features most commonly used are those that are also supplied in “static” languages like Java (although one of the most common is live compilation).

The qualitative analysis comes from a position of extreme bias: the poor people who use dynamic features of Smalltalk are forced to do so through lack of alternatives, and pity the even poorer toolsmiths and implementors whose static analysis, optimisation and refactoring tools are broken by dynamic program behaviour! Maybe we should forget that the HotSpot optimisation tools in Java come from the Smalltalk-ish Self environment, or that the very idea of a “refactoring browser” was first explored in Smalltalk.

This quote exemplifies the authors’ distaste for dynamic coding:

Even if Smalltalk is a language where these features are comparatively easier to access than most programming languages, developers should only use them when they have no viable alternatives, as they significantly obfuscate the control flow of the program, and add implicit dependencies between program entities that are hard to track.

One of the features of using Object-Oriented design is that you don’t have to consider the control flow of a program holistically; you have objects that do particular things, and interesting emergent behaviour coming from the network of collaboration and messages passed between the objects. Putting “comprehensible control flow” at the top of the priority list is the concern of the structured programmer, and in that situation it is indeed convenient to avoid dynamic rewriting of the program flow.

I have indeed used dynamic features in software I’ve written, and rather than bewailing the obfuscation of the control flow I’ve welcomed the simplicity of the solution. Looking at a project I currently have open, I have a table data source that uses the column identifier to find or set a property on the model object at a particular row. I have a menu validation method that builds a validation selector from the menu item’s action selector. No, a static analysis tool can’t work out easily where the program counter is going, but I can, and I’m more likely to need to know.
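Neither of those is Python, but the same pattern translates directly, so here's a hedged sketch of both tricks (invented names, not the actual project code): dynamic property lookup by column identifier, and building a validation method name from an action name at runtime.

```python
class Person:
    """A model object backing one row of a table."""
    def __init__(self, name, age):
        self.name = name
        self.age = age

def value_for_cell(model, column_identifier):
    # The column identifier is the property name: getattr does the
    # dynamic lookup instead of a hand-written switch over columns.
    return getattr(model, column_identifier)

def set_value_for_cell(model, column_identifier, value):
    setattr(model, column_identifier, value)

class Controller:
    def save_document(self):
        pass
    def validate_save_document(self):
        return True

def can_perform(target, action_name):
    # Build the validation method name from the action name at runtime,
    # analogous to deriving a validation selector from an action selector.
    validator = getattr(target, "validate_" + action_name, None)
    return validator() if validator is not None else True

row = Person("Ada", 36)
print(value_for_cell(row, "name"))          # Ada
set_value_for_cell(row, "age", 37)
print(value_for_cell(row, "age"))           # 37
print(can_perform(Controller(), "save_document"))  # True
```

A static analyser can't easily tell which attribute or method these calls will hit, but the reader can: the data, not a branch table, decides where the message goes.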

On version 12

Reflecting on another WWDC keynote reminded me of this bit in Tron: Legacy, which I’ve undoubtedly not remembered with 100% accuracy:

We’re charging children and schools so much for this, what’s so great about the new version?

Well, there’s a 12 in it.

Warsaw Welcomes Dumbass Commentary

As I’m going to MCE tomorrow, tonight I’m going to my first WWDC keynote event since 2015. I doubt it’ll quite meet the high note of “dissecting” software design issues in the sports lounge at Moscone with Daniel Steinberg and Bill Dudney, but it’s always better with friends.

As I mentioned recently, almost everything I use on a Mac is a cross-platform application, or a LinuxKit container. I find it much quicker to write a GUI in React with my one year of experience than in Cocoa with my couple of decades of experience. Rather than making their stuff more fully-featured, Apple need to make it relevant.

Eating the bubble

How far back do you want to go to find people telling you that JavaScript is eating the world? Last year? Two years ago? Three? Five?

It’s a slow digestion process, if that’s what is happening. Five years ago, there was no such thing as Swift. For the last four years, I’ve been told at mobile dev conferences that Swift is eating the world, too. It seems like the clear, unambiguous direction being taken by software is different depending on which room you’re in.

It’s time to leave the room. It looks sunny outside, but there are a few clouds in the sky. I pull out my phone and check the weather forecast, and a huge distributed system of C, Fortran, Java, and CUDA tells me that I’m probably going to be lucky and stay dry. That means I’m likely to go out to the Olimpick Games this evening, so I make sure to grab some cash. A huge distributed system of C, COBOL and Java rumbles into action to give me my money, and tell my bank that they owe a little more money to the bank that operates the ATM.

It seems like quite a lot of the world is safe from whichever bubble is being eaten.

Netscape won

Back when AOL was a standalone company and Sun Microsystems existed at all, Netscape said that they wanted Windows to be a buggy collection of device drivers that people used to access the web, which would be the real platform.

It took long enough that Netscape no longer exists, but they won. I have three computers that I regularly use:

  • my work Mac has one Mac-only, Mac-native app open during the day[*]. Everything else is on the web, or is cross-platform. It doesn’t particularly matter what _technology_ the cross-platform stuff is made out of because the fact that it’s cross-platform means the platform is irrelevant, and the technology is just a choice of how the vendors spend their money. I know that quite a bit of it is Electron, wrapped web, or Java.
  • my home Windows PC has some emulators for playing (old) platform-specific games, and otherwise only runs cross-platform apps[*] and accesses the web.
  • my home Linux laptop has the tools I need to write the native application I’m writing as a side project, and everything else is cross-platform or on the web.

[*] I’m ignoring the built-in file browsers, which are forced upon me but I don’t use.

On twitter [or otherwise]

As occasionally happens, I’ve been reevaluating my relationships with social media. The last time I did this I received emails asking whether I was dead, so let me assure you that such rumours are greatly exaggerated.

Long time readers will remember that I joined twitter about a billion years ago as ‘iamleeg’, a name with a convoluted history that I won’t bore you with but that made people think that I was called Ian. So I changed to secboffin, as I had held the job title Security Boffin through a number of employers. After about nine months in which I didn’t interact with twitter at all, I deleted my account: hence people checking I wasn’t dead.

This time, here’s a heads up: I don’t use twitter any more, but it definitely uses me. When I decided I didn’t want a facebook account any longer, I just stopped using it, then deactivated my account. Done. For some reason when I stop using my twitter account, I sneak back in later, probably for the Skinnerian pleasure of seeing the likes and RTs for posts about new articles here. Then come the asinine replies and tepid takes, and eventually I’m sinking serious time into being meaningless on Twitter.

I’d like to take back my meaninglessness for myself, thank you very much. This digital Maoism which encourages me, and others like me, to engage with the system with only the reward of more engagement, is not for me any more.

And let me make an aside here on federation and digital sharecropping. Yes, the current system is not in my favour, and yes, it would be possible to make one I would find more favourable. I actually have an account on one of the Free Software microblogging things, but mindlessly wasting time there is no better than mindlessly wasting time on Twitter. And besides, they don’t have twoptwips.

The ideal of the fediverse is flawed, anyway. The instance I have an account on is by and large blocked from syncing with a section of the fediverse that uses a different technology, because some sites running that technology allow content that is welcome in one nation’s culture and forbidden in another’s, even though the site of which I am a member doesn’t include that content. Such blanket bans are not how federation is supposed to work, but they are how it does work, because actually building n(n−1)/2 individual relationships is hard, particularly when you work from the flawed assumption that n should be everyone.

And let’s not pretend that I’m somehow “taking back control” of my information by only publishing here. This domain is effectively rented from the registry on my behalf by an agent, the VPS that the blog runs on is rented, the network access is rented…very little of the moving parts here are “mine”. Such would be true if this were a blog hosted on Blogger, or Medium, or Twitter, and it’s true here, too.

Anyway, enough about the hollow promises of the fediverse. The point is, while I’m paying for it, you can see my posts here. You can see feeds of the posts here. You can write comments. You can write me emails.

I ATEN’T DEAD.