Structure and Interpretation of Computer Programmers

I make it easier and faster for you to write high-quality software.

Sunday, August 17, 2014

Intellectual property and software: the nuclear option

There are many problems that arise from thinking about the ownership of software and its design. Organisations like the Free Software Foundation and Open Source Initiative take advantage of the protections of copyright of source code – presumed to be a creative work analogous to a written poem or a painting on canvas – to impose terms on how programs derived from the source code can be used.

Similar controls are not, apparently, appropriate for many proprietary software companies who choose not to publish their source code and control its use through similar licences. Some choose to patent their programs – analogous to a machine or manufactured product – controlling not how they are used but the freedom of competitors to release similar products.

There is a lot of discomfort in the industry with the idea that patents should apply to software. On the other hand, there is also distaste when a competitor duplicates a developer’s software, exactly the thing patents are supposed to protect against.

With neither copyright nor patent systems being sufficient, many proprietary software companies turn to trade secrets. Rather than selling their software, they license it to customers on the understanding that they are not allowed to disassemble or otherwise reverse-engineer its workings. They then argue that because they have taken reasonable steps to protect their program’s function from publication, it should be considered a trade secret – analogous to their customer list or the additives in the oil used by KFC.

…and some discussions on software ownership end there. Software is a form of intellectual property, they argue, and we already have three ways to cope with that legally: patents, copyright, and trade secrets. A nice story, except that we can quickly think of some more.

If copyright is how works of art are protected, then we have to acknowledge that not all works of art are considered equal. Some are given special protection as trade marks: exclusive signs of the work or place of operation of a particular organisation. Certain features of a product’s design are considered similarly as the trade dress of that product. Currently, the functionality of a product cannot be considered a trade mark or trade dress, but what would be the ramifications of moving in that direction?

We also have academic priority. Like the patent system, the academic journal system is supposed to encourage dissemination of new results (and, like the patent system, arguments abound over whether it achieves this aim). Unlike the patent system, first movers are awarded not a monopoly but recognition. What would the software industry look like if companies had to disclose which parts of their products they had thought of themselves, and which they had taken from Xerox, or VisiCorp, or some other earlier creator? Might that discourage Sherlocking and me-too products?

There’s also the way that nuclear proliferation is controlled. It’s not too hard to find out how to build nuclear reactors or atomic weapons, particularly as so much of the work was done by the American government and has been released into the public domain. What is hard is actually building a nuclear reactor or atomic weapon. While the knowledge is unrestricted, its application is closely controlled, overseen by an international agency that is part of the United Nations. This has its parallels with the patent system, where centralisation into a government office is seen as one of the problems.

The point of this post is not to suggest that any one of the above analogues is a great fit for the problem of ownership and competition in the world of software. The point is to suggest that perhaps not all of the available options have been explored, and that accepting the current state of the world “because we’ve exhausted all the possibilities” would be to give up early.

posted by Graham at 22:06  

Tuesday, September 6, 2011

Don’t be a dick

In a recent post on device identifiers, I wrote a guideline that I’ve previously invoked when it comes to sharing user data. Here, in a form both more succinct and more complete than in the post linked above, is the Don’t Be A Dick Guide to Data Privacy:

  • The only things you are entitled to know are those things that the user told you.
  • The only things you are entitled to share are those things that the user permitted you to share.
  • The only entities with which you may share are those entities with which the user permitted you to share.
  • The only reason for sharing a user’s things is that the user wants to do something that requires sharing those things.
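For concreteness, the four rules above could be expressed as a single gatekeeper function that every sharing code path has to pass through. This is purely a hypothetical sketch of mine – the `Consent` type, `may_share`, and the entity names are invented for illustration, not an API from the post or any framework:

```python
# Hypothetical sketch of the DBADG as a gatekeeper. All names invented.
from dataclasses import dataclass, field

@dataclass
class Consent:
    """What the user has explicitly permitted: data item -> set of
    entities the user allowed to receive that item."""
    shareable_with: dict = field(default_factory=dict)

def may_share(item: str, recipient: str, user_requested: bool,
              consent: Consent) -> bool:
    """Apply the four rules: only data the user gave us, only to entities
    the user permitted, and only because the user asked for something
    that requires it."""
    if not user_requested:                 # rule 4: sharing serves a user goal
        return False
    allowed = consent.shareable_with.get(item, set())   # rules 1-3
    return recipient in allowed

# Default-private: nothing is shareable until the user says so.
consent = Consent(shareable_with={"location": {"maps-service"}})
assert may_share("location", "maps-service", True, consent)
assert not may_share("location", "ad-network", True, consent)    # not permitted
assert not may_share("location", "maps-service", False, consent)  # no user intent
```

Note the shape of the default: an item absent from the consent table is shared with nobody, which is the default-private treatment the guide promotes.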

It’s simple, which makes for a good user experience. It’s explicit, which means culturally-situated ideas of acceptable implicit sharing do not muddy the issue.

It’s also general. One problem I’ve seen with privacy discussions is that different people have specific ideas of what the single biggest privacy issue is, the one that must be solved right now. For many people, it’s location: they don’t like the idea that an organisation (public or private) can see where they are at any time. For others, it’s unique identifiers that would allow an entity to form an aggregate view of their data across multiple functions. For others, it’s conversations they have with their boss, mistress, whistle-blower or others.

Because the DBADG mentions none of these, it covers all of these. And more. Who knows what sensors and capabilities will exist in future smartphone kit? They might use mesh networks that can accurately position users in a crowd with respect to other members. They could include automatic person recognition to alert you when your friends are nearby. A handset might include a blood sugar monitor. The fact is that by declining to single out any particular form of data, the above guideline covers all of these and any others that I didn’t think of.

There’s one thing it doesn’t address: just because a user wants to share something, should the app allow it? This is particularly a question that makers of apps for children should ask themselves. Children (and everybody else) deserve the default-private treatment of their data that the DBADG promotes. However, children also deserve impartial guidance on what it is a good or a bad idea to share with the interwebs at large, and that should be baked into the app experience. “Please check with a responsible adult before pressing this button” does not cut it: just don’t give them the button.

posted by Graham at 11:00  

Wednesday, May 18, 2011

“Patently” secure

One thing that occasionally becomes interesting about working in security is that doing security and managing business have a great deal of overlap. This makes a lot of sense: a business wants to be profitable, and profit is a reward conferred by the market for taking on some risk. But too much risk can expose your business to undesirable failures, so understanding and controlling your exposure to risk is a useful exercise.

Well that’s fundamentally how security works too. There is some reward to be gained by performing the activity allowed by an app: that might be the enjoyment of playing a game, the cost savings of keeping track of your finances, or the health benefits of seeing what food you consume. But using the app also brings some risk, and so security people seek to quantify and reduce the risk inherent in any app.

I’m going to compare a business risk (infringing on another’s patent) to an information security risk (leaking confidential data) to show just how similar these fields are. I choose patent infringement because it’s an apposite case. However, you’ll find that I don’t name particular patents or companies, for reasons that will be entered into below. Suffice it to say that I have dealt with software patent lawyers in the past and have some – but not much, by any means – experience of how the US patent system operates. If you choose to infer any advice from this blog post, please seek appropriate counsel before acting on it: I am not a lawyer, and I am certainly not your lawyer.


Assessing the Risk

A risk to either a business or a user can be summed up by the expected damage caused by the event coming to pass. That is, the estimated cost (financial, emotional, intangible etc.) of the risky event multiplied by the expected probability of that event happening.
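That calculation is just a multiplication. As a sketch (the function name and the figures are mine, purely illustrative):

```python
# Expected damage = estimated cost of the risky event, multiplied by the
# estimated probability of the event happening in a given period.
def expected_damage(cost: float, probability: float) -> float:
    """Expected loss from a risky event over the period the
    probability estimate covers (e.g. one year)."""
    return cost * probability

# e.g. a breach costing $1M, with an estimated 0.1% chance per year:
assert expected_damage(1_000_000, 0.001) == 1000.0
```

The hard part, as the following paragraphs point out, is not the multiplication but getting trustworthy values for either parameter.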

In the leaky data case, the expected damage would be “what chance is there that an attacker will retrieve the data” × “what is the impact to the user of exposing the data”? Both of these parameters are hard to quantify: information about data security breaches is notoriously hard to get hold of because companies are reluctant to talk about problems they’ve had protecting their customer records. Combine with that the fact that in many fields even the direct costs of a breach are hard to arrive at, and you end up multiplying two very big error bars together.

In the infringement case, things are a bit more straightforward. Legal reports are – in many jurisdictions – a matter of public record, so seeing what the damage of a case “like yours” is going to be is quite easy. That covers direct costs, anyway: indirect costs like lost custom, damaged reputation etc. are harder to arrive at. The likelihood of being caught infringing on a patent holder’s rights is harder to estimate, but I expect it is not beyond the realms of reason.


There are a few different approaches to reducing (mitigating) risk, each addressing either the likelihood of the risky event or the expected cost of its impact. Let’s look at them. You don’t have to choose just one approach: a successful strategy may combine tactics from each of these categories, and even use more than one tactic from the same category.


Withdrawing from the Activity

Remove any likelihood and impact of a risky event occurring by refusing to participate in the risky activity. In the confidentiality case, this means not storing the secrets in the first place. In the patent case, it means not using the potentially infringing invention.

In either case withdrawing from the activity certainly mitigates any risk very reliably, but it also means no possibility of gaining the reward associated with participation.

This is why, going back to an earlier point, I don’t comment on particular patent cases. Given that patent rights asserters are, in my opinion, more litigious than I, there’s a chance that if I talk about a particular case I’ll be considered defamatory. I’d rather avoid that risk, and choose to control it by withdrawing from talking about the cases.


Transferring the Risk

You can opt to transfer the risk to another party, usually for a fee: this basically means taking out insurance. In either of our case studies, look for insurance that protects against the damage incurred. This doesn’t affect the probability that the risky event will come to pass, but it does mean that someone else is liable for the damages.

Employing Countermeasures

Finding some technical or process approach to reduce the risk. In the patent case this is simple: the countermeasure to “sued by patent holder” is “license patent”.

In the confidentiality case, this means technical countermeasures: access control, cryptography and the like.

But think about deploying these countermeasures: you’ve now made your business or your application a bit more complex. Have you introduced new risks? Have you increased the potential damage from some risks by reducing others? And, of course, is your countermeasure cost-effective? The traditional security mantra is “don’t spend $1000 to save $100”: don’t license $1000 of patents to protect a $100 product, and don’t implement $1000 of crypto to hide $100 of data.
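The “don’t spend $1000 to save $100” test amounts to comparing what the countermeasure costs against how much expected damage it removes. A hypothetical sketch of mine, not a standard formula:

```python
# Is a countermeasure worth deploying? Only if it costs less than the
# expected damage it eliminates. All figures illustrative.
def is_cost_effective(countermeasure_cost: float,
                      expected_loss_before: float,
                      expected_loss_after: float) -> bool:
    """True when the countermeasure removes more expected loss
    than it costs to deploy."""
    risk_reduction = expected_loss_before - expected_loss_after
    return countermeasure_cost < risk_reduction

assert not is_cost_effective(1000, 100, 0)   # $1000 to save $100: no
assert is_cost_effective(100, 1000, 0)       # $100 to save $1000: yes
```

In practice both “loss” figures carry the big error bars discussed earlier, so the comparison is rarely this crisp; but the structure of the decision is the same.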


Accepting the Risk

The “suck it up” approach to security: accept that the risk exists and that you may be liable for the damage if it ever comes to pass. In our information security case, this means storing the data and accepting that someone else might be able to read it. In our patent case, this means adopting the potentially-infringing invention and accepting that a patent holder might come a-knocking.

All risk mitigation strategies involve a certain amount of acceptance, apart from withdrawal. Imagine that you pay some insurance premium, and that it indemnifies you up to $10M with an excess of $1000. You have to choose whether you accept the residual risk of covering the excess, plus any damages beyond the indemnity limit.

Similarly, in the information security case, let’s say you have data assets which, if leaked, would cost $1M in damages. You implement a particular cryptography technique that reduces the likelihood of leaking the data from an estimated once per year to an estimated once per thousand years. Again, do you accept the remaining $1000?
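Checking the arithmetic of that worked example (figures are the illustrative ones from the paragraph above):

```python
# A $1M data asset; the cryptography cuts the estimated leak rate from
# once per year to once per thousand years. What remains is the
# residual annualised exposure you must decide whether to accept.
damage = 1_000_000                 # cost if the data leaks
residual_rate = 1 / 1000           # estimated leaks per year, after crypto
residual_exposure = damage * residual_rate
assert residual_exposure == 1000   # the remaining $1000 per year
```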


Information security and business management are actually pretty closely related. It’s just that information security requires specialised technical knowledge: and that’s where I come in ;-).

posted by Graham at 21:10  

Wednesday, December 8, 2010

Did the UK create a new kind of “Crypto Mule”?

It’s almost always the case that a new or changed law means that there is a new kind of criminal, because there is by definition a way to contravene the new law. However, when the law allows the real criminals to hide behind others who will take the fall, that’s probably a failure in the legislation.

The Regulation of Investigatory Powers Act 2000 may be doing just that. In Section 51, we find that a RIPA order to disclose information can be satisfied by disclosing the encryption key, if the investigating power already has the ciphertext.

Now consider this workflow. Alice Qaeda needs to send information confidentially to Bob Laden (wait: Alice and Bob aren’t always the good guys? Who knew?). She doesn’t want it intercepted by Eve Sergeant, who works for SOCA (wait: Eve isn’t always the bad guy etc.). So she prepares the information, and encrypts it using Molly Mule’s public key. She then gives the ciphertext to Michael Mule.

Michael’s job is to get from Alice’s location to Bob’s. Molly is also at Bob’s location, and can use her private key to show the plaintext to Bob. She doesn’t necessarily see the plaintext herself; she just prepares it for Bob to view.
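To make the key-possession point concrete, here is the workflow as a toy simulation using textbook RSA with tiny primes. This is my illustration only – real systems never encrypt character-by-character with a 12-bit modulus – but it shows exactly who holds what:

```python
# Toy textbook RSA (hopelessly insecure; for illustration only).
# Molly's key pair: modulus n = 61 * 53 = 3233, public exponent e = 17,
# private exponent d = 2753 (the inverse of e modulo phi(n) = 3120).
n, e, d = 3233, 17, 2753

def encrypt(plaintext: str, n: int, e: int) -> list:
    """Encrypt with the PUBLIC key -- anyone, including Alice, can do this."""
    return [pow(ord(ch), e, n) for ch in plaintext]

def decrypt(ciphertext: list, n: int, d: int) -> str:
    """Decrypt with the PRIVATE key -- only Molly holds d."""
    return "".join(chr(pow(c, d, n)) for c in ciphertext)

# Alice encrypts with Molly's public key and hands the result to Michael.
ciphertext = encrypt("meet at dawn", n, e)

# Michael carries, at most, (n, e) and the ciphertext: he cannot derive d,
# so he cannot comply with a demand for the key -- he never had it.
# Molly, holding d, recovers the plaintext at Bob's end.
assert decrypt(ciphertext, n, d) == "meet at dawn"
```

The asymmetry is the whole point: possession of the ciphertext and the public key gives the courier nothing to disclose.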

Now Alice and Bob are notoriously difficult for Eve to track down, so she stops Michael and gets her superintendent to write a RIPA demand for the encryption key. But Michael doesn’t have the key. He’ll still probably get sent down for two years on a charge of failing to comply with the RIPA request. Even if Eve manages to locate and serve Molly with the same request, Molly just needs to lie about knowing the key and go down for two years herself.

The likelihood is that Molly and Michael will be coerced into performing their roles, just as mules are in other areas of organised crime. So has the legislation, in trying to set out government snooping permissions, created a new slave trade in crypto mules?

posted by Graham at 10:52  
