Section 230 ... as intended?
Our federal courts may have accidentally turned Section 230 into the tool it was meant to be.
Section 230 of the 1996 Communications Decency Act has been the subject of multiple Supreme Court decisions in the last two terms, and a frequent topic here over the last few years. Last summer, I wrote that 230 is not the tool we need to address the questions we are struggling with regarding the degradation and dysfunction in our public sphere, after wondering earlier in the year whether we could find ways to talk about what we really need to be talking about in these debates instead of confining a more complex discussion to the narrow box of liability.
But as the Supreme Court and the lower courts collectively wrestle with these questions, armed with ill-fitting existing tools (Section 230 among them) to meet this moment of information-system dysfunction and producing tortuous, sometimes nonsensical arguments along the way, the combination of decisions now stacking together may have (perhaps accidentally) created an entirely new foundation for dramatically circumscribing the liability shield that Section 230 has become.
At the end of August, the Third Circuit Court of Appeals issued a ruling in Anderson v. TikTok that stacks the First Amendment foundation laid by the Supreme Court in Moody v. NetChoice just two months earlier on top of the Section 230 liability rulings of the last several decades, completely redefining and comprehensively circumscribing Section 230.
In Anderson, Judge Matey reminds us that Section 230 protects, and was only ever meant to protect, the simple hosting of content:
“But § 230(c)(1) does not immunize more. It allows suits to proceed if the allegedly wrongful conduct is not based on the mere hosting of third-party content, but on the acts or omissions of the provider of the interactive computer service.”
“Properly read, § 230(c)(1) says nothing about a provider’s own conduct beyond mere hosting.”
What Moody made clear is that algorithms are the aggregation of editorial choices and, as such, are protected as “expressive activity.” In Moody, SCOTUS was focused on the First Amendment and ruled that algorithms are protected as first-party actions, not third-party content. In Anderson, the Third Circuit took that logic and extended it: if algorithms are first-party “expressive activity” protected by the First Amendment, then they are not third-party content protected by Section 230’s liability shield. Additionally, in a partially concurring, partially dissenting opinion, Judge Matey reminds lower courts that in “common carrier” cases, third-party liability protection does not extend to a first party when that first party knows it is distributing something known to cause harm, like a railroad transporting nuclear materials or a social media platform distributing how-to videos on self-asphyxiation.
So in the conversation about the effects of platforms and algorithms on our public sphere and civic discourse, the algorithmic choices that drive the sorting, polarization, and radicalization effects of these systems are first-party actions, and they distribute and cause increasingly well-known harms to society. With Anderson, both are behaviors for which platforms will now be liable, even as they continue to enjoy the basic liability protection for neutral hosting and presentation of third-party content that Section 230 originally intended, provided the decision survives additional appeal after the lower court issues its next judgment.
The Third Circuit has clarified, using a months-old Supreme Court precedent rather than a reinterpretation of the 1996 law itself, that while Section 230 was intended to protect platforms from third-party content and actions, it was never meant to protect platforms from their own actions or from distributing known harmful third-party content. Moody was widely interpreted as a libertarian victory for corporate free speech and was largely seen as an effort to protect and even extend Section 230. Now, with Anderson, the Third Circuit has eviscerated the blank-check liability protections of Section 230 that social media platforms have always hidden behind. In this new world, platform responsibility is much more clearly defined, and platforms will be liable for their choices and behaviors, and therefore for the secondary and unintended but known consequences of their business models. That shift in responsibility could dramatically transform the core business model of the attention economy.
If known harms can no longer be ignored, and algorithmic choices are first-party speech and therefore carry all the liability of first-party actions, then … what exactly? We don’t yet know how these platforms will respond to these rulings, or whether they will take an offensive approach to overturning them or a defensive approach to self-protection (probably both in the short run). And we have no idea how these “expressive activity” definitions and the shifting responsibility of first-party liability might be applied to the recommendations of LLMs (are they just complex algorithms or yet another category? who’s responsible for their actions as they become more autonomous?) or to AI-generated content more broadly (h/t Katie Harbath for calling out this particular blind spot). But at first blush, here are some possible consequences:
All algorithmic content optimization will have to be deployed through the filter of its actual effects on people
Risky content will have to be excluded from algorithmic display
Some platforms will shift back to chronological streams to avoid the algorithm liability risk entirely
Some platforms will shift to user-defined algorithms, moving the responsibility back to users (see the sketch after this list)
With fewer algorithmic content streams, attention inventory will decrease and advertising will get more expensive and less efficient
Platforms may end up making more revenue if the increase in per-unit cost exceeds the loss in inventory
Sorting pressure and radicalization will decrease
…?
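To make the chronological-stream and user-defined-algorithm possibilities a bit more concrete, here is a minimal, hypothetical sketch (not drawn from the rulings or from any real platform’s code, and every name in it is illustrative) of what it might look like for a platform to expose ranking strategies and let the user, rather than the platform, choose how their feed is ordered:

```python
# Hypothetical sketch: the platform offers ranking strategies; the user picks one.
# All names and fields here are illustrative assumptions, not any real platform's API.

from dataclasses import dataclass
from datetime import datetime
from typing import Callable, Dict, List


@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    engagement_score: float  # platform-computed signal (likes, shares, watch time, etc.)


def chronological(posts: List[Post]) -> List[Post]:
    """Newest first: closer to mere hosting, with no editorial ordering choice."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)


def most_engaged(posts: List[Post]) -> List[Post]:
    """Engagement-optimized ordering: the kind of first-party editorial choice at issue."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)


RANKERS: Dict[str, Callable[[List[Post]], List[Post]]] = {
    "chronological": chronological,
    "most_engaged": most_engaged,
}


def build_feed(posts: List[Post], user_choice: str) -> List[Post]:
    # The ordering is driven by the user's explicit selection,
    # defaulting to plain chronological hosting if no choice is made.
    ranker = RANKERS.get(user_choice, chronological)
    return ranker(posts)
```

If this reading of Anderson holds, the design difference matters: the first strategy looks more like the mere hosting Section 230 was meant to protect, while the second is an editorial, first-party choice whose consequences a platform might now have to answer for.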
What else might change? What else might become possible in a world where known damage to social cohesion cannot be as easily monetized? Perhaps our assumptions about digital platforms and experiences as mechanisms for expanding our worldviews and bridging communities might come back into focus, and our beliefs about the inherent dysfunction of modern civic life might start to ease. We might finally be able to see our present dysfunctions as the consequences of systems designed without care and devoid of responsibility rather than as some failure of humanity. As that pressure and cynicism ease, perhaps we also might rediscover (or, for some of our youngest fellow citizens, experience for the first time) real optimism about the possibility of a civic life, and especially the digital public sphere component of that civic life, meant to build rather than undermine our best intentions and expectations of each other and our communities.