Over the past month, Jane Chong has written a series of posts published over at Security States that go under the title “Bad Code.” Her thesis (amply documented) is that those who write software code generally take inadequate precautions to ensure that the code they write is free of defects (or, more accurately, that they do not take reasonable steps to limit the degree to which defects are unavoidably present in code). The series likewise does a good job of explaining why this state of affairs is bad policy (because users do not take adequate precautions either and really can’t) and how the existing legal doctrines have come, more or less, to protect software creators from liability.
Though perhaps Jane did not intend to, her series makes, in essence, the argument that software writers should be liable for unreasonable defects in their code because they are what law and economics analysts call the “least cost avoider.” My goal in this post is to explain that concept generally and then briefly suggest how it applies to the cyber domain.
A short summary of the concept is this: economic theory tells us that we should impose liability (and, thus, obligations to make changes to avoid liability if the cost-benefit calculus requires it) on the party to a transaction who can fix the “problem” at the least cost – and in the cyber context that is probably the software writers and the ISPs, rather than the end users of cyber services.
To begin with, let’s define a least cost avoider. Consider a simple case (we’ll return to this example in more detail as we go along): imagine a railroad running through a hayfield, where the engine gives off sparks that might burn down the hay. If we want to know who the least cost avoider is, we would ask questions like: how much does it cost to equip all the railroad engines with spark arrestors? And we would weigh that cost against how much it would cost the farmer to move his haystacks and let the land near the railroad lie fallow, or against the value of the hay that is burned. Plainly this is an empirical question, and it might sometimes be difficult, if not impossible, to answer definitively. But equally plainly, sometimes the answer may be clear – if, for example, the cost to the farmer would be $1000 and to the railroad $100,000, then the farmer would be the least cost avoider.
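In notation, the comparison is just a minimization over the parties’ avoidance costs (a sketch using the hypothetical figures above, where $C_i$ is party $i$’s cost of preventing the harm):

$$\text{least cost avoider} = \arg\min_{i \in \{\text{farmer},\,\text{railroad}\}} C_i, \qquad C_{\text{farmer}} = \$1{,}000 < C_{\text{railroad}} = \$100{,}000,$$

so in that version of the example the farmer is the least cost avoider, and the efficient rule puts the burden of precaution on him.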
So, why is the least cost avoider important? What is the theory that lies behind the insight and how does it link to liability?
Let’s begin with the idea of an externality. Cybersecurity is a “good” – that is, an economic product that can be purchased in the private market. The production of a private good will often cause an externality – that is, the activity between two economic actors may directly and unintentionally affect a third party. Externalities can be either positive (when a transaction I voluntarily enter into benefits a third party who pays nothing for the benefit) or negative (when the transaction harms a third party who receives no compensation).
Think, for example, about our railroad example. The private good is the transaction involving transportation between the railroad and the traveler (or shipper) who uses the rail service. Presumably the price of that service is set by the railroad to reflect its costs of production (fuel, rolling stock, manpower, etc.) with a margin of profit for the railroad. The possible effect on the farmer of burning his crop (at least in a world where there is no liability rule about whether the railroad has an obligation to the farmer for its adverse effects) is an externality – that is, an effect of the private transaction on an uninvolved third party. In this case, of course, the externality is quite a negative one.
Many cybersecurity activities have positive externalities. By securing my own server or laptop against intrusion, for example, I benefit others on the network whose systems become more secure by my actions. Indeed, almost every security measure performed on any part of cyberspace improves the overall level of cybersecurity by raising the costs of attack.
But cybersecurity also has two negative externalities. The first is a diversion effect: some methods of protection, such as firewalls, divert attacks from one target to another, meaning one actor’s security improvement can decrease security for systems that are not as well-protected.
The second is a pricing problem: much as the cost of a rail ticket does not include the expected damage to the farmer, private sector actors often do not internalize the costs of security failures in a way that leads them to take adequate protective steps. When software fails to prevent an intrusion or a service provider fails to interdict a malware attack, there is no mechanism through which to hold the software manufacturer or internet service provider responsible for the costs of those failures. The costs are borne entirely by the end users. In this way, security for the broader Internet is a classic market externality whose true costs are not adequately recognized in the prices charged and costs experienced by individual actors.
This brings us to Ronald Coase, the Nobel Prize-winning economist, and his famous article “The Problem of Social Cost.” His fundamental insights (which quite deservedly go by the name of the “Coase Theorem”) develop an understanding of how the economic reality of externalities ought, in theory, to be linked to legal concepts of duty and liability.
His first insight, and a critical one, is that all externalities are, in fact, reciprocal. To see this, consider the idea of an “opportunity cost” – that is, the value of an economic opportunity that an actor declines to take. When Walmart considers expanding into Washington DC, but declines to do so (say, because of legislation in the DC Council), the profits it doesn’t earn there, and all of the attendant follow-on economic benefits that are not realized in the District, are a lost opportunity cost.
That means that the externality of the railroad’s possible effect on the farmer is the flip side of a lost opportunity cost to the farmer – to avoid getting burned, he might simply forgo growing hay near the rail line. That’s a chance for profit that the farmer does not take in order to avoid the externality of a burning hayfield. Let’s say that, hypothetically, he could make $5000 annually in hay sales. That’s his unrealized profit and thus his opportunity cost.
One of the things that Coase realized is that, because of the reciprocal nature of externalities, in a world without transaction costs (that qualifier is, of course, an important caveat) it does not matter where the law assigns the liability. In a free market, wherever the liability rests, the person who has the most to gain (or lose) economically will eventually wind up paying. So, for example, even if you make the railroad liable for fires caused in haystacks along the rail line, if it is worth it to the railroad (in other words, if its profits are great enough and the cost of spark arrestors too high), it will just pay for the privilege of causing fires. In our example, it will negotiate with the farmer and pay him his lost opportunity cost (in our hypothetical, $5000) for the privilege of running the trains.
Likewise, if the farmer is “liable” – that is, if the rule is that he bears the costs – he will make his own judgment based on his own opportunity costs and the harms he would incur. If the value to him is high enough (it would have to be much higher than in our hypothetical), he would even pay the railroad not to run its trains. But either way, the legal rule is irrelevant – Coase says we should expect the parties to negotiate, and whichever of the two would create the greatest net economic value would be the one to actually proceed with its economic activity while compensating the other party for its forgone opportunities.
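To make the bargaining concrete, here is a minimal Python sketch of the railroad hypothetical. The railroad’s profit figure is an invented assumption (the example never gives one); the farmer’s $5000 opportunity cost and the $100,000 arrestor cost come from the numbers above. Under either liability rule the bargain lands on the same, highest-value arrangement; only the side payment changes.

```python
# A minimal sketch of Coasean bargaining with zero transaction costs.
# RAILROAD_PROFIT is a hypothetical assumption; the other two figures
# come from the running example in the post.

RAILROAD_PROFIT = 1_000_000  # assumed profit from running the trains
FARMER_PROFIT = 5_000        # farmer's annual hay profit (his opportunity cost)
ARRESTOR_COST = 100_000      # cost of fitting spark arrestors

# The feasible arrangements and the total economic value each produces.
outcomes = {
    "trains run; no hay grown near the line": RAILROAD_PROFIT,
    "trains run with arrestors; hay grown": RAILROAD_PROFIT + FARMER_PROFIT - ARRESTOR_COST,
    "no trains; hay grown": FARMER_PROFIT,
}

# With no transaction costs, bargaining reaches the highest-value outcome
# regardless of which party the law makes liable.
best = max(outcomes, key=outcomes.get)

for rule in ("railroad liable", "farmer liable"):
    # Only the transfer payment depends on the legal rule, not the outcome:
    # a liable railroad buys out the farmer's forgone profit; a "liable"
    # farmer simply absorbs it himself.
    transfer = FARMER_PROFIT if rule == "railroad liable" else 0
    print(f"{rule}: outcome = {best!r}; railroad pays farmer ${transfer:,}")
```

Run as written, both rules print the same outcome (trains run, no hay near the line), differing only in the $5000 transfer – which is the Coase Theorem in miniature.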
But here’s the problem – we don’t live in a world without transaction costs. To the contrary, we know that adjudicating liability or negotiating exchanges of value actually takes time and costs money itself. We know that information asymmetries sometimes give some actors more or better information than others. And we know that sometimes some of the actors face a collective action problem – think of all the farmers along the railroad. Taken individually, each may suffer only a modest harm from the railroad’s passage, but collectively their losses might be greater than the railroad’s cost of avoiding them – and they have a huge problem getting together and coordinating their joint response to the railroad.
And that, of course, is exactly what Bad Code described with respect to software consumers. They don’t have the economic incentive to individually negotiate with software providers – they just accept whatever terms of service are offered in the shrink-wrap contracts they agree to as a condition of getting the new product. They suffer from information asymmetries – the software providers know more about code writing than most consumers do. And they have a collective action problem – each individual is in a very weak position with respect to negotiations with the software writers, especially the larger ones whose operating systems are both the most essential (how many of us use an open-source system like Linux instead of a Microsoft or Apple operating system?) and the most vulnerable to attack.
So what is the right economic answer to the liability question in a world where transaction costs exist? The answer (and this is the last of Coase’s insights) is to make your best estimate of who the “least cost avoider” is – that is, the person who will incur the least cost to avoid the harm under consideration. If you can correctly identify that person or entity (typically through some form of systems analysis) and allocate the liability there, then you will minimize transaction costs and as closely as possible approximate the pure Coasean world.
As we said, in the end identifying the least cost avoider is an empirical question, and it’s often a difficult one. Let’s return to our railroad hypothetical. If the spark arrestors cost $100,000 to install and each farmer’s lost opportunity cost is $5000 in forgone hay profits, then we need to know how many farmers are affected and what their collective costs are. Depending on the geography, the liability should be placed on the railroad (i.e. the legal rule should be that the railroad pays for damages) if more than 20 farmers have affected land, since their collective costs would then exceed the cost of the arrestors.
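Spelled out with the numbers from the hypothetical, the break-even point is the number of affected farmers at which their collective avoidance costs first exceed the railroad’s cost of spark arrestors:

$$N^{*} = \frac{C_{\text{arrestors}}}{C_{\text{per farmer}}} = \frac{\$100{,}000}{\$5{,}000} = 20.$$

With more than 20 affected farmers, the railroad is the least cost avoider; with fewer, the farmers are.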
How that plays out in the software industry is even harder to assess as an empirical matter. But the Bad Code posts certainly make a strong case for the idea that the software developers, while they will certainly incur a cost to “fix” their code, will incur costs that are less than the collective costs of cyber insecurity. On the latter issue, recent studies suggest that the magnitude of the costs of cyber crime and espionage is quite high. For example, CSIS recently released a study, “Estimating the Costs of Cybercrime and Cyber Espionage,” suggesting that the annual cost to the US economy is $100 billion, along with more than 500,000 lost jobs.
We rightly may suspect that it costs less than that to fix the code. Or perhaps not – we should be open to countervailing factual evidence. The question involves not just knowing what the costs of developing better code are, but also some estimate of the lost opportunity costs that arise from slower innovation and development of new software and applications. It is at least theoretically plausible that those costs (which we would all experience as a drag on the development of new applications) would be significant – though we might consider ways to limit that drag through structured liability rules. In the end, we don’t know the answer to this question – but if we think about the concept of the least cost avoider, we might start asking more relevant questions.
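Stated as a condition (a rough formalization of the question, not an estimate), shifting liability onto software developers is efficient only if

$$C_{\text{better code}} + C_{\text{forgone innovation}} < C_{\text{cyber insecurity}} \approx \$100 \text{ billion per year},$$

where, as noted above, both terms on the left-hand side are unknown empirical quantities.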
One final note – cybersecurity is not a singular good. Rather, it is a bundle of goods, ranging from better code to personal firewalls to network monitoring systems on the internet backbone. And so we might see other least cost avoiders out there as well – actors like the ISPs, who could more readily monitor traffic and interdict malware than individual users can. Here, however, if we were to go to a liability model, we would also need to authorize the ISPs to act. Right now they are constrained by many perceived legal restrictions on their activities. It would be unjust in the extreme to give them the liability without the authority. But that’s a topic for another day ….