Section 230 isn't the tool we need
Two major Supreme Court cases decided last week show where we need to focus our attention if we hope to create a framework for guiding and regulating our public sphere.
Back in February I highlighted two Supreme Court cases on media and technology that were being argued before the Court. Last Thursday, the Court issued rulings in both Gonzalez v. Google and Twitter v. Taamneh. As expected, the Court ruled 9-0 in favor of Twitter, based largely on the highly detailed nature of the law under which the action was brought, but the opinion reveals some details about the Court's thinking and the limits of the laws being used to regulate these spaces.
The law at issue is a complex anti-terrorism law from 2016 that demands specific aid for a specific incident of international terrorism before an entity can be liable for the results of the act. This case was always a long shot, and most expected the result. That the Court immediately and concurrently used that ruling to send the Gonzalez case back to the lower court suggests just how legally clear the Court found the Taamneh case. The logic and language of the unanimous opinion from Justice Thomas reveals a pretty straightforward conclusion: if we want our laws to help us regulate how our modern public sphere functions, we are going to need new laws.
Thomas acknowledges that Twitter knows it is supporting ISIS: “defendants knew they were playing some sort of role in ISIS’ enterprise.”

And he recognizes no duty for Twitter to disallow a user from using its platform for illicit ends even once it knows they are committing crimes aided by that platform: “plaintiffs identify no duty that would require defendants or other communication-providing services to terminate customers after discovering that the customers were using the service for illicit ends.”

The Court also clearly sees these platforms (Twitter and YouTube, specifically, in this case) as passive infrastructure. It does not recognize the active and known role that the algorithms and machine learning these companies intentionally deploy to maximize revenue play in guiding the experience of every user, including accelerating the radicalization of people susceptible to extremism: “Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS.”

This assumed passivity also raises the barrier to potential liability well above the fundamental power-building and organizing role these tools play, a role I would argue provides exactly the global friction reduction and visibility that modern extremist groups need in order to function at global reach: “These allegations are thus a far cry from the type of pervasive, systemic, and culpable assistance to a series of terrorist activities that could be described as aiding and abetting each terrorist act.”

And lastly, Justice Thomas is clearly worried that this line of reasoning might lead to these platforms being liable for every act of terrorism committed by ISIS: “a contrary holding would effectively hold any sort of communication provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them. That conclusion would run roughshod over the typical limits on tort liability and take aiding and abetting far beyond its essential culpability moorings…. Plaintiffs thus have not plausibly alleged that Google knowingly provided substantial assistance to the Reina attack, let alone (as their theory of liability requires) every single terrorist act committed by ISIS.”
Neutral. Passive. Inevitable. Too big to fail.
Our existing regulatory framework (JASTA, Section 230 of the Communications Decency Act, which was barely mentioned in this opinion, and the like) is, as it has always been, insufficient to the task of guiding, shaping, and regulating modern media platforms (or of even calling them what they are). We don't need the Court to issue new interpretations: we need Congress to produce new law, and the FCC and FTC to issue new rules, that meet the challenges and the moment we're in as a country and a society.

Much like the overreliance on Roe to protect rights to health privacy and abortion, we are overly reliant on (or overly optimistic about) the Court protecting our public sphere with laws not meant for it. The courts are not up to the task and should not be the ones we turn to when other avenues seem too hard. This ruling is the correct legal decision, but the worry that a contrary holding might unmoor liability precedent is evidence that those precedents (one of which, used in this case, was a 19th-century trespassing case) might not fit the fact patterns and realities of modern life and systems.

Whether these platforms should bear some responsibility for the cultural and organizing power their users build with tools the platforms know are being used for illicit, illegal, or dangerous purposes is the debate we need to have. Illicit to illegal to dangerous is a complex spectrum, and dangerous in particular is highly variable and ill-defined; it demands broad, open, inclusive debate to help us define a new, healthy public sphere. What we see here is not a travesty of justice: it is a moral and legislative failure of leadership, a lack of the creativity and ambition to proactively shape a set of systems we desperately need to be healthier if we are to push back on the civic dysfunction that feels all too inevitable right now.
Please consider becoming a paid subscriber to support this work. Subscribing to 7 Bridges is the best way to keep it free and open to all.