The First Amendment and Platform Content Moderation: The Supreme Court’s ‘Moody’ Decision

Moody v. NetChoice, decided July 1, 2024, is an important new entry in the century-long debate over how new technology affects legal standards protecting free speech. The case began in 2021, when state legislatures in Florida and Texas, both dominated by conservative Republicans, passed laws regulating large social media companies and other Internet platforms. While the two state laws differed in the entities they covered and the activities they restricted, both regulated the platforms’ content moderation practices and required platforms to provide users with reasons for removing or modifying their posts. Both laws were products of the American culture wars: conservatives saw the major platforms as run by progressives or liberals, and their content moderation practices as discriminatory against Republican and conservative viewpoints.

As we will show below, the superficial issue of “media bias” fades in significance relative to the question of how platforms’ use of algorithms and AI affects First Amendment protections.

Florida’s SB 7072:

Texas’s HB 20:

Immediately after the laws were enacted, NetChoice and the Computer & Communications Industry Association – trade associations whose members include Facebook and YouTube – filed First Amendment challenges to both. District courts in both states issued preliminary injunctions against the laws. The Florida case was appealed to the Eleventh Circuit, which upheld the lower court’s conclusion that the law was unconstitutional. The Texas case, on the other hand, was appealed to the Fifth Circuit, which upheld the law, contending that it regulated platform “conduct,” not content, and thus did not violate the First Amendment. The losing side in each case sought Supreme Court review. On July 1, 2024, the Supreme Court vacated both appellate rulings and remanded the cases for further proceedings.

The trajectory of the Texas and Florida cases has been closely scrutinized because of their implications for regulating content on social media platforms and for the applicability of the First Amendment. The Supreme Court’s nearly one hundred pages of opinions provide a detailed account of the development of the two cases, the reasons for remanding them, and the Justices’ views of the lower courts’ analyses and judgments. There are encouraging and troubling aspects of the decision.

Analysis of the Supreme Court’s Rationale for Remand

The Supreme Court’s decision was heavily shaped by NetChoice’s attempt to have both laws declared “facially unconstitutional.” Most constitutional challenges focus on the application of a statute to specific parties. A facial challenge, by contrast, alleges that the legislation is unconstitutional no matter how and to whom it is applied, and seeks to invalidate the statute in its entirety, even before it is enforced.

The Supreme Court’s opinion vacated the lower court judgments and remanded the cases for further proceedings, “because neither the Eleventh Circuit nor the Fifth Circuit conducted a proper analysis of the facial First Amendment challenges to the Florida and Texas laws.” All the Justices agreed with this decision, though for different reasons.

While the majority opinion, written by Justice Kagan, seemed to agree that Florida’s and Texas’s interference with the platforms’ content moderation policies was unconstitutional, the majority did not think the high bar for a facial challenge had been met:

“Analysis and arguments below focused mainly on how the laws applied to the content-moderation practices [of] giant social-media platforms … [such as] Facebook’s News Feed and YouTube’s homepage. They did not address the full range of activities the laws cover and measure the constitutional against the unconstitutional applications.”

Indeed, most of Justice Thomas’s concurring opinion was an extended screed claiming that the Supreme Court should never allow any facial challenges.

Nevertheless, it is likely that both laws will be struck down for the most part, though some of the ruminations in the five different opinions leave wiggle room for backsliding on how the First Amendment applies to platforms.

In the majority opinion, the Justices wrote that the U.S. Supreme Court “has repeatedly held that ordering a party to provide a forum for someone else’s views implicates the First Amendment if … the regulated party is engaged in its own expressive activity, which the mandated access would alter or disrupt.” Justice Kagan’s opinion strongly reaffirmed the Court’s prior decisions defining “editorial discretion” as an exercise of free expression and indicating that content moderation is a form of editorial discretion that is protected “expressive activity.”

The Court admonished the Fifth Circuit for upholding the Texas law, writing that its decision “rested on a serious misunderstanding of First Amendment precedent and principle.” These precedents included Miami Herald Publishing Co. v. Tornillo (1974), Pacific Gas & Elec. Co. v. Public Util. Commission of California (1986), Turner Broadcasting System, Inc. v. FCC (1994), and Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc. (1995). In her concurring opinion, Justice Barrett wrote, “the Eleventh Circuit’s understanding of the First Amendment’s protection of editorial discretion was generally correct; the Fifth Circuit’s was not.” But here again the broad sweep of a facial challenge was too much for the Court to accept.

“Analyzing how the First Amendment bears on [Facebook’s newsfeed and YouTube’s home page] is complicated enough without simultaneously analyzing how it bears on a platform’s other functions—e.g., Facebook Messenger and Google Search—much less to distinct platforms like Uber and Etsy.”

The Supreme Court thus asked the lower courts to reassess the state laws’ scope of application. The expansive definitions of social media platforms in both the Florida and Texas laws may encompass services well beyond traditional social media, such as Gmail, Venmo, and Uber. This broad reach necessitates a reassessment of the laws’ applicability and potential impact on a diverse array of online platforms. The Supreme Court’s concerns about the laws’ scope underscore one of the major challenges in governing social media platforms: platforms with varied functionalities raise complexities in compliance, as different features may trigger distinct legal obligations.

The Supreme Court ruling emphasizes that legislative intent cannot undermine First Amendment protections: the government cannot interfere with free expression simply by claiming that it is interested in improving or better balancing the marketplace of ideas. At the same time, the plaintiffs were tasked with demonstrating the laws’ substantial impact on protected speech. This scrutiny of both plaintiffs and defendants underscores the complexity of navigating the intersection of platform regulation, free speech, and editorial discretion.

An analysis of the Supreme Court’s decision to vacate and remand both cases reveals the central issues in the current regulation of content moderation by social media platforms: first, whether the content production and moderation a platform engages in is a form of protected expression; second, how a social media platform is defined, since that definition determines the scope of a law’s applicability; and finally, what form of editorial discretion the platform exercises, which determines whether a law overriding it is unconstitutional.

Insight into the Justices’ Opinions

Four Justices provided additional perspectives on the content and ruling of the Supreme Court’s judgment.

Justice Barrett contends that not all social media behavior is shielded by the First Amendment, and not all regulations imposed on these platforms are inherently unconstitutional. The initial step in evaluating the constitutionality of regulating social media content is to ascertain if the platform’s function is fundamentally expressive.

Like several other Justices, Barrett seems to think that algorithms complicate the issue, apparently not understanding that algorithms are simply programs made by humans to scale up editorial judgments by using computer-implemented rules. In her view, the exercise of editorial judgment by an algorithm that deletes posts endorsing a specific political candidate raises different questions than a human doing so directly. We disagree. Conversely, if the algorithm merely suggests content to users based on their preferences, does interference with that function violate the Constitution? Barrett also suggests that the use of artificial intelligence to identify and eliminate offensive content raises its own concerns about the application of the First Amendment – again indicating that she may not understand that “algorithms” and “AI” are the same thing. Barrett argues that technology may weaken the connection between content moderation behaviors, such as deleting posts, and the constitutionally protected right of human beings to “decide for themselves the ideas and beliefs that are worthy of acceptance, consideration, and adherence.” The idea that algorithms or AI fundamentally alter the nature of the editorial function is a product of misunderstanding.
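A minimal, hypothetical sketch illustrates our point. The rule lists, function names, and example posts below are our own invention, not any platform’s actual system; the sketch shows only that a moderation algorithm is nothing more than human editorial judgments, written down once and then applied automatically at scale.

```python
# Hypothetical sketch only: the rules and names below are illustrative,
# not any platform's real moderation system.

BANNED_TERMS = {"slur_example_1", "slur_example_2"}   # judgments made by a human policy team
DEMOTED_TOPICS = {"spam", "clickbait"}                # likewise a human editorial choice

def moderate(post: str, topic: str) -> str:
    """Apply the human-written policy to one post.

    The algorithm adds no judgment of its own; it only repeats,
    at machine speed, decisions humans already made when they
    wrote the rules above.
    """
    words = set(post.lower().split())
    if words & BANNED_TERMS:
        return "remove"   # the human rule: this content is not allowed
    if topic in DEMOTED_TOPICS:
        return "demote"   # the human rule: show this content less often
    return "allow"

if __name__ == "__main__":
    print(moderate("an ordinary post about the news", "news"))    # allow
    print(moderate("buy now, amazing deal, click here", "spam"))  # demote
```

Whether such rules are expressed as simple keyword lists, as above, or as a machine-learning classifier trained on examples labeled by human reviewers, the editorial judgment originates with people; the software merely extends its reach.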

Justice Barrett also raised the possibility that the organizational structure and ownership of platform businesses could impact constitutional analysis. She argues that businesses established by individuals entitled to First Amendment protections differ from foreign-owned enterprises or those established by foreigners, who may not enjoy the same First Amendment rights. The ownership of a social media platform by foreign entities and their influence over content moderation decisions could influence whether laws overriding such decisions are subject to First Amendment scrutiny. What if a platform’s foreign leadership dictates policies on the dissemination of views and content? she asks. Would the employment of Americans to develop and implement content moderation algorithms under the direction of foreign executives alter the constitutional considerations? These are critical issues that courts may confront when applying the First Amendment to specific platforms.

Although Barrett refrains from explicitly mentioning TikTok, the situation faced by the platform aligns with her arguments. On May 7, 2024, TikTok and its Chinese parent company, ByteDance, initiated legal action in the U.S. Court of Appeals for the District of Columbia Circuit to challenge the new law that would force TikTok, a popular short-form video app used by 170 million Americans, to divest or face a ban. The companies argue that this law infringes upon various provisions of the U.S. Constitution, including the First Amendment’s protection of free speech. While TikTok operates in the U.S., its parent company’s Chinese origin potentially excludes it from First Amendment protection under Barrett’s framework, which considers corporate ownership in determining the applicability of the First Amendment.

Justice Jackson, who partially concurred in the judgment, proposed that the lower courts on remand should conduct a meticulous examination of how the regulated activity operates, specifically focusing on the procedures of platform content moderation. This thorough analysis is meant to give courts deeper insight into the technology involved, with a particular emphasis on understanding the collaboration between algorithms and humans in order to determine the role and function of the algorithms. Here again, the Justices fail to understand that technology is the agent of human judgment at scale.

In contrast, Justice Thomas underscored the significance of the common carrier doctrine in guiding the review conducted by the lower courts, advocating for social media platforms to be recognized as common carriers. This approach emphasizes the obligation of platforms to provide nondiscriminatory service to the public. Thomas may not understand that applying the common carrier principle to social media would open the door to any and every form of legal speech, including name-calling, racial and sexual slurs, bullying, etc. Forcing platforms to act as common carriers of any speech would completely undermine any attempt by the platforms to differentiate their products or to maintain a healthy and attractive environment for their users. Section 230 was designed to give social media platforms the immunities of a common carrier, to make it less risky for them to allow wide-ranging expression, while also authorizing them to weed out abusive and offensive forms of speech. Section 230 thus recognized platforms’ right to editorial discretion; Thomas apparently wants to abolish it.

Justice Alito penned perhaps the most disturbing concurring opinion. He rejected Kagan’s argument that platforms were engaged in expressive activity through editorial selection. He outlined three prerequisites for claiming a violation of First Amendment rights and argued that NetChoice did not meet all of them. First, a claimant must demonstrate that it exercises editorial discretion in selecting and presenting hosted content, raising the question of whether the platform functions as a “curator” or a “dumb pipe.” (Is it not obvious that platforms do not act as dumb pipes?) Second, the host must use the compilation of speech to express “some sort of collective point,” and although “a narrow, succinctly articulable message is not a condition of constitutional protection,” compilations that organize the speech of others in a non-expressive way (e.g., chronologically) fall “beyond the realm of expression.” Finally, the compiler must show that its own message is influenced by the speech of others. These criteria serve as a framework for evaluating the constitutionality of content regulation on platforms, considering whether the platform serves as a speaker or conduit, engages in editorial activities, and exhibits expressive qualities.

Alito’s arguments are so weak as to seem contrived to arrive at a specific conclusion, namely justifying some regulation of platforms. He said, for example, that the scale of messaging on platforms is too high for algorithmic editorial discretion to maintain any kind of expressive content, yet at the same time he seemed sympathetic to the state governments’ claim that the platforms’ editorial policies discriminate against a particular point of view – an obvious contradiction. He asserts again and again that we don’t know how content moderation works, when there is a great deal of social science and computer science research on that very topic. He asserts that “we do not know how the platforms ‘moderate’ their users’ content, much less whether they do so in an inherently expressive way under the First Amendment.” But in fact, the platforms’ decisions to disfavor or eliminate certain kinds of speech and to promote or encourage others are precisely what led to the state laws in the first place.

Justices Thomas and Gorsuch joined in Alito’s concurring opinion, making it clear that the Court’s hard-core conservatives are not friends of the First Amendment and form a significant faction on the high court that could prove threatening in the future.

Conclusion

After three years of litigation, the remand appears to reset the legal proceedings. However, the Moody v. NetChoice decision sheds light on the Supreme Court’s concerns regarding the regulation of content moderation on social media platforms. Key issues include the applicability of the First Amendment and the constitutionality of the laws, the nature of content processing by social media platforms (whether it amounts to editing), and the role of algorithms in content management (whether algorithms are editors, and whether algorithmic deletion of content and algorithmic recommendation of content are the same in nature).

The majority opinion makes it clear that established standards of First Amendment protection apply to digital platforms. To some Justices, however, the use of artificial intelligence for content moderation – which is unavoidable given the massive scale of messaging and posting on these platforms – may create a new standard of constitutional judgment. We have seen this before, e.g., when the Red Lion case mistakenly used the scarcity of radio spectrum to justify forms of content regulation that would not be allowed in print media.

We hold that the shift of content production from PGC (professionally generated content) to UGC (user-generated content), the shift from manual editing to algorithmic editing, and the shift from a limited volume of curated content to massive flows of information do not undermine or alter traditional First Amendment protections. Some careful thought may be required about how to apply those principles, but things are not really all that different: the activity is simply enlarged massively in scale and more technologically mediated. We now must await the results on remand.

About The Authors

Milton Mueller

Milton Mueller is a founder of IGP and an internationally prominent scholar specializing in the political economy of information and communication. He is the author of Will the Internet Fragment? (Polity, 2017), Networks and States: The Global Politics of Internet Governance (MIT Press, 2010), and Ruling the Root: Internet Governance and the Taming of Cyberspace (MIT Press, 2002).

Le Yang

Le Yang is a Ph.D. candidate at the School of Journalism and Communication of Tsinghua University. She is currently conducting her research as a Visiting Scholar at the School of Public Policy, Georgia Institute of Technology. Her academic publications encompass topics such as Internet content governance, cybernorms, and cybersecurity. Her current transnational research delves into digital media platforms, examining the interplay between media platform companies and domestic governments.
