Novel bean incoming

You may remember that in July I updated the open source Bean word processor to work with the then-latest Xcode and macOS. Over the last couple of days I’ve added iCloud Drive support (obviously only if the app is associated with an App Store ID, but everyone gets the autosave changes), and made sure it works on Big Sur and Apple Silicon.

Alongside this, a few little internal housekeeping changes: there’s now a unit test target, the app uses base localisation, and I cleaned up some warnings in each file I had to edit.

Developers can try this new version out from source. I haven’t created a new build yet, because I’m still in the process of removing James Hoover’s update-checking code, which would replace Bean with the proprietary version from his website. Before doing a binary release, I’ll create and host a Sparkle appcast for automatic updates; the release will support macOS 10.6–11.1.


Episode 24: Thoughts on Swift

A discussion of whether Swift was inevitable and whether it has achieved its goals, motivated by @tolmasky’s conversation with @lorenb about the “Thoughts on Flash” letter.

Along the way I talk about Apple’s strategic investment in Java: I’ve discussed that in greater detail on this blog, so I largely lean on that article in the podcast.


Episode 23: Licensing Software Engineers

In which I discuss the thorny issue of whether software engineering should be a licensed profession, mostly from the perspective of the ACM’s argument against it. Also considered is how, or even whether, the whole Software Engineering Body of Knowledge (SWEBOK) could be examined in a single sitting, and whether an incremental approach like the Software Engineering Institute’s CMMI would work, or even be adopted.


Episode 22: Attend More Meetings

As if you couldn’t guess, the topic is that software engineers should attend more meetings. I talk about the Maker’s Schedule, Manager’s Schedule idea, why it’s a false dichotomy, and why programmers can actually get to more meetings than they think without doing worse work. In fact, it’ll make their work better.


Episode 21: No code is better than no code

In which we recommend deleting your code.

Steve McConnell’s More Effective Agile

Ward Cunningham introduces the debt metaphor for bad code; my guess is that if you think you’re familiar with the term “technical debt”, the metaphor as he presented it won’t be what you expect.


Episode 20: what do we know about software engineering?

I explain the gap between episodes 19 and 20, and ask whether any of the practices we follow in software engineering are defensible.

As always, I welcome discussing this topic with you! Comment here, or send email to the address I read out in the episode.


The Silent Network

People say that the internet, or maybe specifically the web, holds the world’s information and makes it accessible. Maybe there was a time when that was true, but it isn’t now: probably not because the information is missing, but because the search engines think they know better than you what you want.

I recently had cause to look up an event that I know happened: at an early point in the iPod’s development, Steve Jobs disparaged MP3 players using NAND Flash storage. What were his exact words?

Jobs also disparaged the Adobe (formerly Macromedia) Flash Player media platform, in a widely discussed blog post on his company’s website many years later. I knew that this would be a closely connected story, so I crafted my search terms to exclude it.

Steve Jobs NAND Flash iPod. Steve Jobs Flash MP3 player. Steve Jobs NAND Flash -Adobe. Did any of these work? No, on multiple search engines. Having to try multiple search engines and getting the wrong results from all of them is a 1990s-era web experience. All of these search terms return lists of “Thoughts on Flash” (the Adobe player), reports on that article, later news about Flash Player linking subsequent outcomes to that article, hot takes on why Jobs was wrong in that article, and so on. None of them show me what I asked for.

Eventually I decided to search the archives of one particular blog, which didn’t make the search engines prefer relevant results, but did reduce the quantity of irrelevant ones. Finally, on the second page of articles from Daring Fireball about “Steve Jobs NAND flash storage iPod”, I found Flash Gordon. I still don’t have the quote: I have an article about a later development, citing a dead-link story that itself interprets the quote.

That’s the closest modern web searching tools would let me get.


Apple Silicon, Xeon Phi, and Amigas

The new M1 chip in the new Macs has 8–16GB of DRAM on the package, just like many mobile phones or single-board computers, but unlike most desktop, laptop, or workstation computers (there are exceptions). In the first tranche of Macs using the chip, that’s all the addressable RAM they have (ignoring caches), just like many mobile phones or single-board computers. But what happens when Apple moves its Apple Silicon chips up the scale, to computers like the iMac or Mac Pro?

It’s possible that these models would have a few GB of memory on-package plus access to memory modules connected via a conventional controller, for example DDR4 RAM. They almost certainly would if you could deploy multiple M1 (or successor) packages in a single system. Such a Mac would have a non-uniform memory access (NUMA) architecture, which (depending on how it’s configured) has implications for how software can best be designed to make use of the memory.

NUMA computing is of course not new. If you have a computer with a CPU and a discrete graphics processor, you have a NUMA computer: the GPU has access to RAM that the CPU doesn’t, and vice versa. Running GPU code involves copying data from CPU-memory to GPU-memory, doing GPU stuff, then copying the result from GPU-memory to CPU-memory.
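
For illustration, here’s a minimal sketch of that copy-in, compute, copy-out pattern using the CUDA runtime API (compiled with nvcc); `square()` and the buffer size are invented for the example:

```
// A minimal sketch of the copy-in, compute, copy-out pattern described
// above, using the CUDA runtime API; square() is a toy kernel.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void square(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= data[i];
}

int main(void) {
    enum { n = 1024 };
    float host_buf[n];
    for (int i = 0; i < n; i++) host_buf[i] = (float)i;

    float *gpu_buf;
    cudaMalloc((void **)&gpu_buf, n * sizeof(float));

    // copy the data from CPU-memory to GPU-memory
    cudaMemcpy(gpu_buf, host_buf, n * sizeof(float), cudaMemcpyHostToDevice);
    square<<<(n + 255) / 256, 256>>>(gpu_buf, n);  // "doing GPU stuff"
    // copy the result from GPU-memory back to CPU-memory
    cudaMemcpy(host_buf, gpu_buf, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(gpu_buf);
    printf("%f\n", host_buf[2]);  // 4.000000
    return 0;
}
```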

A hypothetical NUMA-because-Apple-Silicon Mac would not be like that. The GPU shares access to the integrated RAM with the CPU, a little like an Amiga. The situation on Amiga was that there was “chip RAM” (which both the CPU and graphics and other peripheral chips could access), and “fast RAM” (only available to the CPU). The fast RAM was faster because the CPU didn’t have to wait for the coprocessors to use it, whereas they had to take turns accessing the chip RAM. Nonetheless, the CPU had access to all the RAM, and programmers had to tell `AllocMem` whether they wanted to use chip RAM, fast RAM, or didn’t care.
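
A sketch of what that looked like in practice, in C: the `AllocMem`/`FreeMem` calls and `MEMF_*` flags are the real exec library interface from `exec/memory.h`, while the sizes here are arbitrary.

```
/* A sketch of choosing memory types on AmigaOS: the caller tells
   AllocMem() which kind of RAM it wants. */
#include <exec/types.h>
#include <exec/memory.h>
#include <proto/exec.h>

void allocate_examples(void)
{
    /* Chip RAM: the custom chips can reach it, so bitmaps and
       audio samples must live here. */
    APTR bitmap = AllocMem(32000, MEMF_CHIP | MEMF_CLEAR);

    /* Fast RAM: CPU-only, so no waiting for the coprocessors. */
    APTR lookup = AllocMem(4096, MEMF_FAST);

    /* Don't care: take whatever is available. */
    APTR scratch = AllocMem(256, MEMF_ANY);

    if (scratch) FreeMem(scratch, 256);
    if (lookup)  FreeMem(lookup, 4096);
    if (bitmap)  FreeMem(bitmap, 32000);
}
```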

A NUMA Mac would not be like that, either. It would share the property that a subset of the RAM is available for sharing with the GPU, but this memory would be faster than the off-chip memory because of the closer integration and the lack of a (relatively) long communication bus. Apple has described the integrated RAM as “high bandwidth”, which probably means multiple access channels.

A better and more recent analogy for this setup is Intel’s discontinued supercomputer chip, Knights Landing (marketed as Xeon Phi). Like the M1, this chip has 16GB of on-package high-bandwidth memory. Like my hypothetical Mac Pro, it can also access external memory modules. Unlike the M1, it has 64 or 72 identical cores, rather than 4 big and 4 little cores.

There are three ways to configure a Xeon Phi computer. You can use no external memory at all, so the CPU uses only its on-package RAM. You can use a cache mode, where the software only “sees” the external memory and the high-bandwidth RAM is used as a cache. Or you can go full NUMA, where programmers have to explicitly request memory in the high-bandwidth region to access it, like with the Amiga allocator.
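
On the programming side, the explicit route went through Intel’s memkind library and its `hbwmalloc` interface. A minimal sketch, assuming a Knights Landing node with memkind installed:

```
/* A sketch of full-NUMA ("flat") allocation on Knights Landing using
   memkind's hbwmalloc interface: hot data goes in the on-package
   high-bandwidth memory, everything else in ordinary DDR4. */
#include <hbwmalloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 1u << 20;

    /* returns 0 only when high-bandwidth memory is present */
    if (hbw_check_available() != 0) {
        fprintf(stderr, "no high-bandwidth memory on this node\n");
        return 1;
    }

    double *hot  = hbw_malloc(n * sizeof(double)); /* on-package MCDRAM */
    double *cold = malloc(n * sizeof(double));     /* external DDR4 */

    /* ... do the bandwidth-hungry work against hot ... */

    hbw_free(hot);
    free(cold);
    return 0;
}
```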

People rarely go full NUMA. It’s hard to work out what split of allocations between the high-bandwidth and regular RAM yields the best performance, so people tend to run in cache mode and hope that’s faster than not having any on-package memory at all.

And that makes me think that a Mac would either not go full NUMA, or would not have a public API for it. Maybe Apple would let the kernel and some OS processes have exclusive access to the on-package RAM, but even that seems overly complex (particularly where you have more than one M1 in a computer, so you would need to specify core affinity for your memory allocations in addition to memory type). My guess is that an early workstation Mac with 16GB of M1 RAM and 64GB of DDR4 RAM would look like it has 64GB of RAM, with the on-package memory used for the GPU and as a cache. NUMA APIs, if they come at all, would come later.


Recovering from deleting your login shell on a Mac

In case you ever need it. If you’re searching for something like “deleted login shell Mac can’t open terminal”, this is the post for you.

I just deleted my login shell (because it was installed with Homebrew, and I removed Homebrew without remembering that I would lose my shell). That stopped me from opening a Terminal window, because it would immediately bomb out as it was unable to open the shell.

Unable to open a normal Terminal window, anyway. In the Shell menu, the “New Command…” item let me run `/bin/bash -l`, from which I got to a login-like bash shell. Then I could run this command:

chsh -s /bin/zsh

I entered my password, and then I had a normal shell again.

(So I could then install MacPorts and change my shell to `/opt/local/bin/bash`. Note that `chsh` only accepts shells listed in `/etc/shells`, so the MacPorts bash needs adding to that file first.)


Lambda: the ultimate polymath

Thinking back over the last couple of years, I’ve had to know quite a bit about a few different topics to be able to write good software. Those topics:

  • Epidemiology
  • Architecture
  • Plant Sciences
  • Histology
  • Education

Not much knowledge in each field, but definitely expert-level knowledge: enough to hold conversations as a peer with experts in those fields, to follow the jargon, and to make reasoned assessments of the experts’ suggestions for software designed for their use. And, where I’ve wanted to propose alternative suggestions, enough expert-level knowledge to identify and justify a different design.

Going back over the rest of my career in software:

  • Pensions, investments, and saving
  • Mobile Telecommunications
  • Terry Pratchett’s Discworld
  • Macromolecular crystallography
  • Synchrotron accelerators
  • Home automation
  • Yoga
  • Soda drinks dispensers
  • Banks
  • Quite a bit of physics

In fact I’d estimate that I’ve spent less than 40% of my “professional career” in jobs where knowing software or even computers in general was the whole thing.

Working in software is about understanding a system to the point where you can use a computer to make a meaningful and beneficial contribution to that system. While the systems thinking community have great tools for mapping out the behaviour of systems, they are only really good for making initial models. In order to get good at it, we have to actually understand the system on its own terms, with the same ideas and biases that the people who interact regularly with the system use.

But of course, because we’re hired for our computering skills, we get to experience and work with all of these different systems. It’s perhaps one of the best domains in which to be a polymath. To navigate it effectively, we need to accept that we are not the experts. We’re visitors, who get to explore other people’s worlds.

We should take them up on that offer, though, if we’re going to be effective. If we maintain the distinction between “technical” and “non-technical” people, or between “engineering” and “the business”, then we deny ourselves the ability to learn about the problem we’re trying to solve, and to make a good solution.
