Who should be responsible for mitigating the harm from malicious social media content? The poster? The social media platform? The internet service provider that carries the transmission? Or some other actor? In a recently published paper (which I summarize here), I argue that we can learn a useful lesson from the economic concept of the “least cost avoider,” which, properly understood, suggests that significant responsibility for reducing harmful content should be allocated to non-platform actors, such as security services like Cloudflare or cloud service providers like Amazon Web Services.
The underlying theory of economic analysis is simple: In any given social situation where the potential for harm exists, there will be multiple actors who might be capable of taking steps to avoid the potential harm in question. Economists ask which of these actors can best minimize the costs arising from that harm. That actor, known as the least cost avoider (or sometimes the cheapest cost avoider), is typically the one on whom economic theory says responsibility for mitigation should be placed. And, naturally, with responsibility often comes liability. Hence, though assessing which actor is the least cost avoider is difficult, the assessment is of great practical significance. As Yale Law School’s Guido Calabresi put it: “[T]he search for the cheapest avoider of accident costs is the search for that activity which has most readily available a substitute activity that is substantially safer. It is a search for that degree of alteration or reduction in activities which will bring about primary accident cost reduction most cheaply.”
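One conventional way to make the structure of that claim explicit (this formalization is mine, not the paper’s; the symbols are illustrative assumptions) is the standard law-and-economics framing:

$$ i^{*} = \arg\min_{i \in A} C_i, \qquad \text{with intervention worthwhile only if } C_{i^{*}} < p \cdot H, $$

where $A$ is the set of actors capable of mitigating the harm, $C_i$ is actor $i$’s cost of avoiding or reducing it, $p$ is the probability that the harm occurs, and $H$ is its magnitude. Responsibility, and often liability, is then placed on $i^{*}$, the actor who can prevent the expected harm most cheaply.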
By way of example, economic theory might suggest that the least cost avoider is the manufacturer of a ladder rather than its individual purchasers, or the writers of software code rather than the enterprises that implement it.
Current expectations place much of the responsibility on social media platforms to monitor, moderate, and mitigate malicious content. We look to Meta, for example, to take down racist rants. But is that the right answer? We might reasonably ask, as a matter of first principles: Who (among the many actors in the internet ecosystem) is best situated to mitigate the risks of harmful social media content at the lowest societal cost?
A quantitative analysis of the question is impossible; the requisite data does not exist. But we can do a qualitative assessment, examining the information environment along six avenues of inquiry (a schematic sketch of the comparison follows the list):
- Which parts of the ecosystem have better knowledge of the risks involved?
- Which have better ways of avoiding the risks/harms than alternate bearers?
- Which are in a better position to use that knowledge efficiently to choose the cheaper alternative?
- Which, by acting, will impose the fewest negative social costs through the risk of over-limitation (that is, of suppressing legitimate, non-malicious content)?
- Which are better placed to induce modifications in the behavior of others where such modification is the cheapest way to reduce the sum of all social costs?
- And, related to these last two, which are better positioned to fine-tune the moderation processes, and which are blunter instruments?
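To make the comparative nature of these questions concrete, here is a toy sketch, in code, of how one might tabulate rough qualitative ratings for a few actors across the six factors. To be clear, the actor list, the factor labels, and the coarse scores below are illustrative assumptions of mine, not findings from the paper.

```python
# Toy sketch of a qualitative least-cost-avoider comparison.
# The factor names track the six questions above; the actors and the
# coarse -1/0/+1 ratings are illustrative placeholders, not findings
# from the underlying paper.

FACTORS = [
    "knowledge_of_risk",
    "ability_to_avoid_harm",
    "efficient_use_of_knowledge",
    "low_over_limitation_cost",
    "ability_to_induce_behavior_change",
    "fine_tuned_moderation",
]

# +1 = comparatively strong, 0 = mixed, -1 = comparatively weak
RATINGS = {
    "social_media_platform": [+1, +1, +1, -1, +1, +1],
    "web_hosting_service":   [+1, +1,  0, +1, -1,  0],
    "cdn":                   [-1, -1, -1,  0, -1, -1],
}

def rank_actors(ratings):
    """Rank actors by a simple unweighted sum of their factor ratings."""
    totals = {actor: sum(scores) for actor, scores in ratings.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for actor, total in rank_actors(RATINGS):
        print(f"{actor}: {total:+d}")
```

Any real assessment would weigh the factors rather than simply summing them, and the ratings themselves are contestable; the point is only to show the structure of the comparison the six questions invite.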
How would this work in practice? Here are two examples (pulled from the larger analysis) that illustrate the approach more concretely:
Consider web hosting services like Bluehost. These services would be attractive candidates for moderating malicious content, as they typically have direct contact with, and therefore knowledge of, their customers’ activity and content. They are, therefore, relatively well placed to act if they are aware of malicious activity, in much the same way that social media platforms are.
The first challenge, however, would be one of implementation. Currently, web hosting platforms do not undertake this level of scrutiny. Consequently, the hosting service community would have to create a mechanism for moderating content out of whole cloth. The cost of such an effort would be significant, especially since the distributed nature of web hosting services (there are more such services than there are, say, sizable social media platforms) would likely make content moderation at scale comparatively inefficient.
On the other hand, this diversity would mitigate any adverse social impact. Because the web hosting system is more diverse than the social media platform system, there would be multiple channels of communication and alternative outlets for non-malicious content if a false positive occurred. For the same reason, however, sanctions in the web hosting system would have only a modest coercive impact and would be more easily evaded. And so, on balance, web hosting services would be an attractive, but imperfect, venue for content moderation.
By contrast, to take another example, it seems clear that content delivery networks (CDNs) would be a relatively unattractive option. To be sure, localized cached content could be reviewed if appropriate, but that sort of examination is not currently very common, and undertaking it would likely require changes in law to authorize such activity.
More importantly, content moderation at the CDN level is likely to be ineffective. Caching content is useful for accelerating access to websites. But a lack of access to a CDN does not eliminate the underlying content; it merely slows a customer’s access to the information. Thus, CDN moderation may mitigate the risks from malicious content, but it does not fully avoid them.
In addition, it seems likely that there would be a relatively high cost associated with tracking cached malicious content across a distributed network. Unlike with web browsers, there would be significant difficulty in making individuated assessments of cached content. And, as with web browsers, the diversity of CDN providers would make sanctions ineffective and the coercive impact would be limited.
This analysis (more fully detailed in the paper) suggests that it is worth considering other avenues for content moderation. To be sure, the idealized analysis may not have any real-world political viability. To put it bluntly, were anyone to suggest imposing moderation obligations on search engines (another area where the analysis suggests fruitful inquiry), Google and Microsoft (the operators of Google Search and Bing) would likely oppose the effort.
Political considerations aside, there are a number of actors in the information ecosystem who are plausibly well-suited to moderate content in ways similar to those currently employed by social media platforms.
The analysis also makes clear that our current system of principal reliance on social media platforms may well prove ill-advised. Online marketplaces and app stores have already begun to take steps to moderate content. Other parts of the ecosystem, such as search engines and web hosting services, are also plausible venues for mitigating the harm from malicious content.
Today, policymakers are considering a proliferation of ways to regulate content and/or mandate content moderation. In doing so, they have broadly ignored the diversity of the information ecosystem and the possibility of equally impactful interventions outside of the social media platforms. Least cost avoider analysis does not offer a clear-cut answer, but it does suggest that the current consensus (relying primarily on social media platforms for content moderation) is both overly simplistic and, in the end, counterproductive.