As we move toward a presidential election, the question arises: what should the next President (whoever he or she may be) do about cybersecurity? It is a sufficiently salient question that President Obama has chartered a commission whose basic purpose is to think about next steps for whoever follows. That's a challenging task, and to cast its net widely the commission has scheduled five public meetings in the next few months. Nor, of course, will the commission's recommendations be the only ones on offer; think tanks and other institutions are likely to weigh in as well.
So what should these recommendations focus on? The proper answer, I think, is to turn the question around and ask not "what could government do?" but rather "what is government uniquely capable of doing?" Over the course of the last four years we have learned that the locus of good cybersecurity resides largely in the private sector. As Robert Knake (former Director of Cybersecurity Policy at the NSC) put it in a recent Foreign Affairs article: "As appealing as it seemed for Washington to take cybersecurity responsibilities out of the hands of private enterprise, the costs and consequences of an expanded government role would do more harm than good. Few private sector executives like the idea that they are responsible for securing their own networks and data, especially against foreign militaries and intelligence agencies. Effective cybersecurity is costly, and defense against foreign agents appears to be a government task on the surface. But making cybersecurity a government responsibility would come with a set of costs that far outweigh the benefits."
Put another way, the proper focus for government intervention is on situations where markets fail. One area where we think market incentives don't work is the sharing of threat and vulnerability information; hence the focus on that subject in recent years and the (belated) passage of the Cybersecurity Act of 2015.
There is at least one other area that seems ripe for government intervention (and thus for consideration by the next administration): the systematic vulnerability of the network from an engineering perspective. The story is by now a familiar one. The original network was designed with protocols that were intended to scale readily; ease of use and expansion were prioritized over security and authentication. At a tactical level, rapid software development and deployment were preferred: being first to market was (and remains) a concrete advantage, and consumers were willing to accept (or were unaware that they had accepted) risk in return for new systems.
The result is obvious: the network is riddled with legacy products and systems that harbor vulnerabilities. The gaps and holes are so pervasive that no one actor has the capacity to identify and remediate all of them. And to the extent that individual engineering gaps are identified, no one actor has the economic incentive to develop a solution, since the benefits will accrue to many while the costs fall on the solver alone. Meanwhile, retrofitted security protocols must remain backward compatible with existing architectures or risk breaking network ubiquity. In short, there are plenty of readily observable engineering problems (we are excluding here the entire political problem set) for which no solution is forthcoming from the market. But rather than address those problems, the market drives us toward palliative solutions: instead of fixing the vulnerabilities in the first instance, we invest in preventative mechanisms (intrusion detection systems, firewalls, and resiliency systems) that are pure costs imposed by pervasive vulnerability.
Non-market forces can, and do, sometimes try to propagate fixes. But their efforts are often stymied by a lack of resources and the drag of economic disincentives. The IETF's decade-long effort to develop a more secure domain name system (known as DNSSEC) was plagued by delays and has yet to be fully implemented on a global scale for precisely those reasons. Any number of other engineering challenges (buffer overflows, for example) are widely recognized, but nobody has the incentive to solve the problem at scale.
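To make the buffer overflow example concrete, here is a minimal C sketch of the unsafe idiom and a bounded alternative. The function names and input are purely illustrative; the point is that the unsafe pattern still pervades decades of legacy code, and no single actor has the incentive to hunt it all down.

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy() performs no bounds check, so any input longer than
 * 16 bytes overruns buf and corrupts adjacent stack memory -- the
 * classic buffer overflow. */
void copy_unsafe(const char *input) {
    char buf[16];
    strcpy(buf, input);
    printf("%s\n", buf);
}

/* Safer: bound the copy to the buffer size and terminate explicitly. */
void copy_bounded(const char *input) {
    char buf[16];
    strncpy(buf, input, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    printf("%s\n", buf);
}

int main(void) {
    const char *long_input = "a string considerably longer than sixteen bytes";
    /* copy_unsafe(long_input);   undefined behavior: stack corruption */
    copy_bounded(long_input);     /* prints a truncated but safe copy */
    return 0;
}
```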
And that, it seems, is a perfect role for government, whether the US government acting on its own or, preferably, in concert with other like-minded governments. Governments are ideally suited to addressing collective action problems plagued by free rider effects, and increasing the security of the network is precisely such a problem. Everyone wants a more secure network, but no individual (or small group) has the incentive and resources to fix it. Governments do. We should no longer accept as a given that the network is inherently vulnerable; rather, we should set about the difficult, challenging, and in some ways heroic task of making the network less vulnerable.
I am no computer engineer, so the full scope of the problem is beyond my capability to articulate. I can, however, suggest a process by which one might develop a plan to meet such a goal. Herewith a recommendation for the next President (and/or the various commissions that will advise him or her):
- As an initial step, the next President should charter some institution (NIST seems a likely candidate) to conduct an open process of public/private/global inquiry for the purpose of identifying as many engineering vulnerabilities in the network as it can. The sole criterion for inclusion on this list should be that the vulnerability admits of a technical/engineering solution that, if successfully implemented, would ameliorate or eliminate it.
- Once such a list is assembled, the same process could then engage in a cost-benefit analysis of developing solutions to the vulnerabilities identified. In other words, estimate a dollar value for the cost of producing a fix (including, of course, the dislocation costs of implementing it) and a concomitant estimated value of the benefit (mostly in harm avoided, I imagine) that a new solution would engender; a toy sketch of the resulting ranking logic appears after this list. My instinct is that many low-cost/low-effect issues would be identified, along with many high-cost/large-benefit problems. [Of course, if a low-cost/large-benefit item appears on the list, that would be delightful; surprising, but delightful.]
- From this analysis the government(s) could develop a strategic plan for investing resources against the vulnerabilities identified. They might choose a "low hanging fruit" strategy and pick off the simple, cheaper problems first; or they might make larger investments in high-consequence areas of concern. That's a judgment to be made once the data on which to base it are available. But the end result of this process would be a strategy for systematically approaching the engineering vulnerabilities of the network and working to reduce them over time.
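To illustrate the cost-benefit step, here is a minimal C sketch of the prioritization it implies: a handful of hypothetical vulnerability entries, each with an assumed fix cost and an assumed benefit in harm avoided, ranked by benefit-to-cost ratio. Every name and dollar figure below is an invented placeholder, not a real estimate; the sketch shows only the shape of the analysis, not its content.

```c
#include <stdio.h>
#include <stdlib.h>

/* One entry on the hypothetical vulnerability list. The cost and
 * benefit figures are illustrative placeholders, not real estimates. */
typedef struct {
    const char *name;
    double fix_cost_musd;   /* estimated cost of a fix, in $ millions */
    double benefit_musd;    /* estimated harm avoided, in $ millions  */
} Vulnerability;

/* Sort descending by benefit-to-cost ratio: the "low hanging fruit"
 * ordering described above. */
static int by_ratio_desc(const void *a, const void *b) {
    const Vulnerability *va = a;
    const Vulnerability *vb = b;
    double ra = va->benefit_musd / va->fix_cost_musd;
    double rb = vb->benefit_musd / vb->fix_cost_musd;
    return (rb > ra) - (rb < ra);
}

int main(void) {
    Vulnerability list[] = {
        { "legacy-protocol-authentication", 500.0, 4000.0 },
        { "buffer-overflow-class-fixes",     50.0,  900.0 },
        { "dns-spoofing-mitigation",        120.0,  600.0 },
    };
    size_t n = sizeof list / sizeof list[0];

    qsort(list, n, sizeof list[0], by_ratio_desc);

    printf("%-32s %10s %12s %8s\n",
           "vulnerability", "cost($M)", "benefit($M)", "ratio");
    for (size_t i = 0; i < n; i++)
        printf("%-32s %10.0f %12.0f %8.1f\n",
               list[i].name, list[i].fix_cost_musd, list[i].benefit_musd,
               list[i].benefit_musd / list[i].fix_cost_musd);
    return 0;
}
```

A real analysis would of course rest on serious estimation rather than a three-row table, but whatever its scale, it reduces to exactly this kind of ranking exercise.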
This project is not for the faint of heart. It will take many years of effort — in much the same way that the engineering effort of putting a man on the moon required a decade of focus. Nonetheless, the United States government should have as a strategic goal for cybersecurity the reduction of vulnerability on the network, with a time-scale on the order of twenty years for implementation of a strategic plan of action. That would be Kennedy-esque and an objective worthy of this nation.