Washington — A federal provision that has long provided a legal defense for online platforms is the subject of attention from Congress, the White House, and now the US Supreme Court.
The Supreme Court will hear oral arguments this week in two pivotal cases involving online speech and content moderation. At the center of both is Section 230, a federal statute that has been roundly condemned by Republicans and Democrats alike, for different reasons, but that tech companies and digital rights groups have defended as essential to a functioning internet.
Tech companies involved in the litigation have cited the 27-year-old statute as the reason they should not face lawsuits claiming they knowingly hosted, or algorithmically recommended, terrorist content and thereby provided substantial assistance to terrorist acts.
If the Court rules against the internet industry, the scope of Section 230's legal protections for websites and social media companies could be significantly narrowed. Such rulings could expose online platforms to a range of new lawsuits over how they present content to users. A verdict along those lines would mark the most significant limits ever placed on a legal shield that predates today's major social media platforms and has allowed them to head off countless content-related suits before they ever got off the ground.
Meanwhile, the Supreme Court is still weighing whether to take up several additional cases that could affect Section 230, members of Congress have voiced renewed enthusiasm for rolling back the law's protections for websites, and President Joe Biden recently called for the same in an op-ed.
Here is what you need to know about Section 230, the law that has been dubbed "the 26 words that created the internet."
A law created in the early days of the World Wide Web
Section 230 of the Communications Decency Act, which was passed in 1996 during the early years of the World Wide Web, was created to support startups and business owners. The legislation’s text acknowledged that the internet was still in its infancy and that it ran the risk of being suffocated if website operators were subject to legal action for content that users had posted.
Without Section 230, according to one of the law’s architects, Democratic Sen. Ron Wyden of Oregon, “all online media would face an assault of bad-faith lawsuits and pressure campaigns from the powerful” aiming to stifle their speech.
He has also said that Section 230 directly grants websites the right to remove content they deem objectionable by establishing a "Good Samaritan" safe harbor: the federal government can still sue platforms for violations of criminal or intellectual property law, but under Section 230, websites are exempt from liability for moderating content as they see fit, rather than according to others' preferences.
Contrary to what some politicians have asserted, Section 230's protections do not depend on a platform being politically or ideologically neutral. Nor does the law require a website to "qualify" for liability protection by being categorized as something other than a publisher. Any website that meets the definition of an "interactive computer service" receives Section 230's benefits automatically, without needing to take any additional action.
The fundamental tenet of the law states that websites (and the users who visit them) cannot be regarded as the speakers or publishers of the content of other people. In plain English, this indicates that the creator of a piece of content is solely responsible for any legal consequences associated with publishing it, not the platforms on which it is shared or the users who re-share it.
Despite its apparent simplicity, Section 230's influence is sweeping. Courts have routinely accepted it as a defense against defamation suits, negligence claims, and other allegations. Over the years it has protected AOL, Craigslist, Google, and Yahoo, building a body of case law so broad and influential that it is now regarded as a cornerstone of the modern internet.
The Electronic Frontier Foundation, a digital rights group, has said Section 230 is vital to the free and open internet as we know it. Key court rulings on Section 230 have established that users and services cannot be held liable for forwarding email, hosting online reviews, or sharing content that some people may find objectionable. The law also helps courts swiftly dismiss meritless lawsuits.
Nonetheless, Section 230 detractors have grown in number recently and have suggested limitations on the circumstances in which websites may rely on the legal shield.
Criticism from both sides, for various reasons
Conservatives have long criticized Section 230, arguing that it permits social media sites to censor right-leaning viewpoints for political purposes.
Although social media companies have stated they do not make content decisions based on ideology but rather on violations of their policies, Section 230 does provide websites with protection from litigation that can result from that type of viewpoint-based content moderation.
The Trump administration tried to turn several of those complaints into actual policy, with potentially enormous consequences had it succeeded. In 2020, for instance, the Justice Department released a legislative proposal to amend Section 230 that would have created an eligibility test for websites seeking the law's protections. That same year, a White House executive order directed the Federal Communications Commission to adopt a more restrictive interpretation of Section 230.
The executive order ran into numerous legal and procedural problems, not least that the FCC is an independent agency that, by law, does not take orders from the White House and does not regulate social media or content moderation decisions.
Conservatives are still looking for opportunities to limit Section 230, even though their attempts during the Trump administration failed. And they are not alone. Democrats have increasingly railed against Section 230 since 2016, when social media platforms' role in spreading Russian election disinformation ignited a national debate over how the companies handle harmful content.
Democrats claim that by protecting platforms’ right to control content as they see fit, Section 230 has allowed websites to avoid responsibility for hosting hate speech and false information that has been condemned by others but that social media companies are unable or unwilling to remove themselves.
Even though the two parties can’t agree on why Section 230 is problematic or what policies might properly replace it, the outcome is a bipartisan dislike for it.
Sen. Sheldon Whitehouse of Rhode Island, a Democrat, said at a Senate Judiciary Committee hearing last week: "I would be prepared to make a wager that if we had a vote on a plain Section 230 repeal, it would clear this committee with practically every vote. The issue, which is where we get stuck, is that we want '230-plus': repeal Section 230, then enact 'XYZ.' And we disagree on what 'XYZ' is."
The courts take charge
The impasse has shifted a lot of the energy for amending Section 230 to the courts, especially the US Supreme Court, which will now have the chance to decide this term how far the law should go.
Critics of the technology industry have called for greater legal accountability and exposure, arguing that the enormous social media business has largely been sheltered from the courts as it has grown. The Anti-Defamation League stated in a Supreme Court brief that "it is extremely unusual for a worldwide industry that exerts astonishing influence to be insulated from judicial examination."
Tech companies counter that narrowing Section 230 would be disastrous, not only for the biggest platforms but for many of their strongest rivals, because it would weaken the conditions that have allowed the internet to thrive. They argue it could suddenly and unexpectedly expose numerous websites and users to legal risk, and that sites would significantly alter how they function in order to limit that risk.
In its own Supreme Court brief, the social media site Reddit argued that narrowing Section 230 would "dramatically expand Internet users' potential to be sued for their online activities" and put at risk the recommendations of content a user might like.
The company and a number of unpaid Reddit moderators wrote: "'Recommendations' are the very thing that makes Reddit a vibrant place. Users decide which posts gain visibility and which get lost in the crowd by upvoting and downvoting content."
According to the brief, a legal regime that "carries a substantial risk of being sued for recommending a libelous or otherwise tortious post that was produced by someone else" would drive users off Reddit and lead moderators to stop volunteering.
This week's oral arguments won't end the debate over Section 230, but the outcome of the cases could have a profound impact on the internet, for better or worse.