Technology’s ubiquitous nature and integration into almost every aspect of our daily lives make the risks from insecure technology that much more damaging. And yet the hardware and software on which we rely are pervasively insecure. If you want one metaphorical symbol for the problem, consider that on the first Tuesday of every month, Microsoft pushes out a series of software patches, and nobody considers this remarkable. Yet imagine if, every month, you needed to bring your car in for an update, or your dishwasher. In the technology domain we accept insecure products that, in any other context, would be unacceptable.
The U.S. government, through its recently released National Cybersecurity Strategy, has signaled that it intends to try to change this dynamic. The government will embrace the principle of “security by design” as a way of confronting this challenge. In tandem with a new emphasis on secure software, the government has also said that it will explore legislation to impose liability for insecure software on vendors that fail to take appropriate or reasonable precautions.
But that is just an intention. The devil is in the details. How, precisely, should law and policy demand security by design from software developers? When, if ever, should liability for inadequate security be imposed?
To answer these questions, the Lawfare Institute is launching a multiyear project to evaluate several elements related to “security by design” for software, including secure-by-design principles and how legal and policy processes could require or incentivize security by design from software developers. Google has provided the initial funding for this project, but, per Lawfare’s policy on intellectual independence, it has no editorial control or oversight over the project’s research or outputs.
The goal of this project is to build a substantial body of work on software design security across four broad questions:
Is there a useful working definition of “security by design”? Our perception is that there is a multiplicity of definitions that vary from enterprise to enterprise and that in some cases the definition is more ambiguous than is useful for successful policy implementation. We hope to first survey existing definitions and then abstract common themes.
Can security by design be measured? If so, how? After all, a definition that does not allow for reproducible and auditable results is of relatively little public policy value. Metrics are, in the end, the ground of transparency and accountability.
How can security-by-design principles be translated into articulable standards? Assuming that one can identify a core, measurable definition of what “secure by design” means, what is the best forum for articulating those standards in a way that results in their being generally accepted and widely adopted? Is the problem best addressed voluntarily? Is it an international issue?
How can standards be incentivized or imposed? What are the best governmental mechanisms (if any) to use in an effort to leverage generally agreed-upon standards to drive change? Is liability the best course? What would a safe harbor look like?
As a first phase of this project, we hope to survey existing practice to better understand how different industry and government stakeholders define and implement the concept of security by design.
The more that security by design receives discussion in law, policy, and standards circles, the more important it becomes to define it clearly and understand its implications. Over the coming years, we hope to contribute to the discussion.