Lauded as “the most important law protecting free speech”[1] and the law that “gave us the modern internet,”[2] Section 230 of the Communications Decency Act (Section 230) has been a fixture of recent internet policy debates, blamed for everything from the proliferation of sex trafficking[3] to enabling anti-conservative social media bias.[4]
Section 230 says, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[5]
Those 26 words shield online platforms from liability for hosting or making available third-party or user-generated content.[6] In other words, online platforms are treated as intermediaries that cannot be held legally liable for what users post on their platforms. Section 230 does not, however, immunize the actual creator of content: the author of a defamatory post can still be held responsible for what they write.
Notably, what constitutes an “interactive computer service” has been broadly defined, meaning that Section 230 provides immunity not just to the largest social media companies, but also to small blogs, discussion forums, comment sections, and listservs.
Section 230 also allows online platforms to moderate content without opening themselves up to lawsuits. The law was born out of the young internet of the mid-1990s, populated mostly by three online information services (CompuServe, Prodigy, and America Online) that charged users monthly subscription fees to access walled gardens of chat rooms, bulletin boards, and professional third-party content. As these services grew in size and scope and their message boards and bulletin boards filled with user-generated content, so did the need to maintain community standards to ensure these spaces weren’t overrun with offensive content.
However, a pair of court rulings forced online services to make a choice about moderating content: they could enforce content guidelines and risk opening themselves up to lawsuits, or they could decline to moderate content entirely and maintain their immunity.[7] This “moderator’s dilemma” prompted the creation of Section 230, which allows online services to moderate, or decline to moderate, their user-generated content spaces, like message boards, without risking lawsuits.
According to the drafters of the law, Section 230 served two policy purposes. First, it incentivized online platforms to self-regulate offensive content and maintain community standards without fear of liability. Second, it encouraged innovation by new internet companies and the expansion of diverse online discourse.
This is why, more than 24 years later, internet freedom advocates credit Section 230 as the legal bedrock that allowed our most influential technology and social media companies, whose business models rely almost exclusively on user-generated content, to emerge. Advocates argue that without Section 230 these companies wouldn’t exist, as they would have been strangled by litigation before they could get off the ground. In a world without Section 230, they contend, an online platform could be sued over every inflammatory post on its platform, including by business owners upset with one-star reviews.
However, as online platforms have grown in power and influence, critics argue that Section 230’s broad protection has enabled myriad unintended consequences.
Activists and legal scholars question whether the law was meant to allow platforms to escape accountability for the sex trafficking material, death threats, hate speech, terrorist content, bullying and harassment, and other objectionable content that often pervades their services.
Traditional media and technology companies believe that Section 230 has tilted the playing field between legacy publishers, which remain liable for the content they publish, and newer online platforms, incentivizing unscrupulous business models that prioritize readers’ attention over content quality and user wellbeing. Without the threat of legal consequences, online platforms are free to promote and elevate the stories that drive the most user engagement and earn them the most advertising dollars, even if that means amplifying election misinformation, hate speech, or dangerous conspiracy theories. These more traditional companies believe this new attention marketplace makes it impossible for them to compete while continuing to provide consumers with quality content.
Section 230 has also become a target of legislators on both the right and the left. Republican politicians, including President Trump, who signed an executive order[8] seeking to limit Section 230 and tweeted “REPEAL SECTION 230!”[9] to his 87 million Twitter followers, believe that platforms moderate content with an anti-conservative bias. Conservatives in Congress have introduced multiple Section 230 reform bills this session,[10] and a few weeks ago the Department of Justice under Attorney General William Barr submitted recommendations on how to reform the law.[11] Meanwhile, Democrats are concerned that, among other things, Section 230 has permitted election meddling and the spread of disinformation campaigns that are potentially damaging to our democracy.
According to Section 230’s defenders and critics alike, the 26 words of Section 230(c)(1) are behind everything people both love and hate about the internet.
[1]
[2]
[3]
[4]
[5] Section 230 consists of two key provisions, 47 U.S.C. § 230(c)(1) and (2). Section 230(c)(1) is the provision at the center of the current debate. However, future posts will likely discuss Section 230(c)(2) as well.
[6] There are a few statutory exceptions to Section 230, which will be discussed in a future blog post.
[7] Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991); Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995). These cases and Section 230’s legislative history will be discussed in more depth in the next blog post.
[8]
[9]
[10]
[11]