Weekend Edition: What our debate about Section 230 and platform liability should be about
Two cases argued this week about Section 230 and social media platforms’ liability for terrorism could shape the next generation of information and media systems and their influence on our public sphere.
Two exceptionally important cases came before the Supreme Court for oral argument this week. How they are decided will profoundly affect how we think about information and information systems, media and publishing, how we build culture and community, and how the technologies that have fueled the last decade of civic dysfunction will be regulated and potentially tuned for the benefit of our public sphere rather than to its detriment. On Tuesday, the Supreme Court heard oral arguments in Gonzalez v. Google, and what was immediately apparent from the lines of questioning across the bench was just how poorly our most powerful Court understands, not the legal questions presented, but the ethical, technical, and design questions at stake.
Section 230 of the Communications Decency Act of 1996 was crafted to shield platforms from liability for the content that users publish on and through their systems. The key legal move is that platforms are not publishers: liability for content stops with its creator. What we see here is an attempt to use an existing category (“publisher”) to regulate an entirely new category of technology with far-reaching consequences. People have often remarked that these are the 26 words that created the internet as we know it, opening incredible opportunities for creativity and new platforms without the fear of constant liability. Justice Jackson focused her questions on whether the kind of automated, algorithmic content moderation represented by the recommendation engines at issue in Gonzalez was really what Section 230 was meant to shield. If one were to ask Senator Wyden, who co-authored Section 230 back in the 1990s (or read his amicus brief), his answer would be a resounding yes: recommendation engines are just automated mechanisms of moderation, aggregations of thousands of decisions and data points of exactly the kind that Section 230 was meant to shield. What ought to be at issue here is not whether individual moderation decisions deserve a liability shield or whether platforms should be treated as publishers, but whether platforms should be treated as a separate category under the law, with discrete and explicit responsibilities for their aggregate effects on people, communities, and society.
It may be the case that individual content moderation decisions ought to be shielded from liability — that we don’t, in fact, want platforms to be liable for individual pieces of content or for individual choices to allow or disallow them. But the broadest possible interpretation of this shield, and the failure to imagine the platform as its own category, has created an essentially consequence-free marketplace of information systems: a set of platforms with profound effects on society, generating massive profits while socializing the costs and facing no private consequences. I agree that if YouTube were forced to take responsibility for every post and for the effects that every video has on every person, YouTube would essentially cease to exist, and that eliminating Section 230 entirely would create profound risks for users as well (as the EFF has suggested before). What I do think is that YouTube ought to be responsible for YouTube’s aggregate effects on our public sphere, based on the systemic choices, values, and biases at work in the experience it designs and offers its users and customers. In our society there are public goods associated with the deployment of these tools, and private responsibility remains in the hands of the people who build, deploy, and manage these systems.
Ultimately we probably do want to retain a protection like Section 230 (or something similar), but we also want something that introduces liability for intentionally dangerous, harmful, or fraudulent user experiences, including recommendations meant to radicalize or harm. We use intent in the law to distinguish the egregiousness of behavior, and it might be the right tool here to create some accountability and responsibility for these platforms. And what about unintended but knowingly created harms? YouTube may simply be trying to build the most engaging algorithm it can in order to maximize attention for its ad inventory, but it is undeniable that it has created an engine that accelerates radicalization and eases the dissemination of mis- and disinformation. Legally, what really matters here: the intention or the consequences? While attribution of systemic consequences is notoriously difficult to establish with clarity and certainty, the discussion about aggregate impacts and the outcomes we need and want from these innovations is something we need to enshrine. That mechanism could be a sociocultural restatement of principles and guidelines that these platforms commit to, one that seeks to guide innovation rather than a framework of laws and regulations seeking to criminalize failures.
While we’re thinking about responsibility, intent, and intended and unintended harms, the second case this week, Twitter v. Taamneh, involves a completely different law (the Justice Against Sponsors of Terrorism Act, or JASTA, from 2016) but bears a deep relationship to the same questions of responsibility and intended action. There are important legal questions here about a deeply problematic terrorism-support statute that (as a non-lawyer) I want to set aside in favor of the underlying question of principle revealed in the case and the oral arguments. The central question in Taamneh is whether Twitter knowingly provided substantial assistance to an organization that committed a specific terrorist act. The key part of the oral arguments was the debate over the distinction between “substantial assistance” and ordinary business activity offered to a public that sometimes includes terrorists. Ford is not responsible if someone uses a Bronco in a suicide bombing. Justices Barrett and Gorsuch both zeroed in on these types of questions — musing about whether a platform company should be responsible if the terrorist group ISIS sometimes uses websites like Twitter, Facebook, and YouTube. As in Gonzalez and the Section 230 debate, we see social media platforms being shoehorned into more familiar categories that may simply not be where we should put them. The Justices are required to interpret cases based on the laws on the books and the Constitution, and so they are struggling to find purchase on how to handle these cases at all. What their halting questions and their struggle to call platforms “publishers” or “websites” reveal, again, is that these platforms need their own, distinct category.
Just as “publisher” doesn’t fit, neither do comparisons to more typical consumer goods simply because a platform is a consumer-facing website or app. We are not asking whether ISIS uses a weather app to know if it’s going to rain. They are using a tool to build cultural power and to organize, with global reach and engagement, via a platform built for that purpose. If we can agree on what a platform is and does, then the issue here is whether we are okay with Twitter being completely agnostic about the purposes for which it is used, just as weather.com is agnostic about what anyone does with information about the weather. I would suggest that our answer is a loud, desperate, and unequivocal no. Networks are not agnostic. Design, experience, and cultural power are moral endeavors, shaped by the values and principles that underlie their logic and purpose. They are not “plumbing” — no matter how much these platforms might hope to maximize their addressable market by holding on to a user agnosticism that hurts society in aggregate and harms people specifically. Suggesting that all ordinary business activities are the same is naive, and it obscures our desperate need to clarify, with conviction and power, the role these platforms play in society and what a clear regulatory category reflecting their real position, potential, and possible consequences might make possible.
What is the category of platform or network, and what responsibilities should platforms shoulder as fundamental building blocks of our public sphere? That is the discussion we should be having. Listening to the lines of questioning, the Justices seem entirely focused on whether ruling against Google would create a tsunami of lawsuits rather than on whether Google is in fact liable, or should be liable, under the law. That potential tsunami suggests to me that there is a massive problem at issue here. If their concern isn’t that there is liability but that the liability is so far-reaching that our judicial system would be paralyzed by a torrent of lawsuits, perhaps what that reveals is the scale of the imbalance of responsibility in an internet that is essentially consequence-free for these platforms. Perhaps a massive ethical course correction is required in how we regulate, manage, and guide the internet, and burdening our legal and justice system with a flurry of lawsuits may be the necessary consequence of realigning the internet in service of the public goods we need it to provide.
Sometimes course corrections are painful, especially when we've been off course for more than a decade. But the fact that both Justice Kagan and Justice Kavanaugh focused on the breadth of liability suggests there is a massive dearth of responsibility being accepted or shouldered by these platforms; they are, in effect, making the case that something needs to change. Whether that change is a reinterpretation of Section 230 that the Court can drive through its decision in Gonzalez, or a broader reform of the Communications Decency Act that Congress would have to take up to create the new definition I’m suggesting we are craving, is up for debate. That some course correction is necessary is starkly revealed by the difficult, awkward oral arguments and entire lines of questioning on all sides of both of these cases.