Is it possible to circumvent § 230 with contract claims?

December 8, 2021 by Gregory Velloze, USC Gould School of Law '23

Photo by Blogtrepreneur, used under CC 2.0.

The Communications Decency Act, 47 U.S.C. § 230, has become a point of contention for many who believe in free and open discourse on social media. While Prager University v. Google LLC reminded us all that the First Amendment does not apply to the actions of online service providers,[1] several plaintiffs have attempted to use contract claims to circumvent § 230 immunity.

Section 230(c) reads as follows:

(1) Treatment of publisher or speaker. No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil Liability. No provider or user of an interactive computer service shall be held liable on account of –

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected . . . .[2]

Lawsuits alleging breach of contract or warranty, such as Barnes v. Yahoo!, Inc.,[3] do not fall under § 230’s immunity; these suits treat service providers as promisors rather than publishers. Furthermore, the limitation of civil liability under § 230(c)(2)(A) covers only actions “taken in good faith” to remove “material that the provider or user considers to be” indecent.[4] Given that service providers contractually define what content violates their Terms of Service, § 230 likely does not give them unlimited discretion to interpret their own contracts in favor of the draftsman.[5] If a service provider removes a reported user without justifying the removal under its own Terms of Service, the removal is not a question of publisher immunity, but a question of the service provider’s contractual honesty.

I. Section 230 Contract Claims in Theory

So how would this work in theory? A plaintiff would sue a service provider for a breach of contract and claim the company failed to follow its own Terms of Service by removing content that was not hate speech, misinformation, or otherwise indecent content per the company’s Terms. Courts would then look for the contractual definition of hate speech rather than deferring entirely to service providers. Allowing this new step in litigation would likely lead service providers to be more up front about the content they remove, providing clearer definitions of hate speech and misinformation under their own policies.

But courts may prove hesitant to limit an overexpansive reading of § 230,[6] even though Congress is unlikely to amend the statute. Justice Thomas, concurring in the denial of certiorari in Malwarebytes, Inc. v. Enigma Software Group USA, LLC, argued that courts have interpreted the sweeping immunity of the Communications Decency Act far beyond its original intent.[7] But because his opinion still concurred in the denial, leaving intact the Ninth Circuit’s holding that § 230 does not apply to false advertising under the Lanham Act,[8] lower courts can easily treat a concurrence in a denial of certiorari as amounting to nothing.[9]

II. Section 230 Contract Claims in Practice

But where questions of contract liability remain unanswered, § 230 litigation generally involves unsympathetic or pro se plaintiffs. Often, the content at issue raises few ambiguities under the providers’ respective Terms of Service. And courts, hoping to resolve matters quickly, feel free to overlook arguments outside § 230’s overbroad immunity.

In Murphy v. Twitter, Twitter was immune from liability for its decision to remove hate speech against transgender individuals, as the court relied on the expansive view of § 230.[10] The California appellate court found it unimportant that Murphy wished to treat Twitter not as a publisher, “but as a promisor or party to a contract.”[11] The court upheld the broad immunity of § 230, disparaging “creative” pleading aimed at countering the intent of Congress. But since Congress intended § 230 to provide for the free exchange of information and ideas, this statutory interpretation is somewhat contradictory. Even if the court had allowed the suit to go forward, the plaintiff’s contractual claims would very likely not have prevailed; the court ultimately overlooked a Terms of Service requirement to save time.[12]

By contrast, in Newman v. Google LLC, the court avoided the question of § 230’s applicability to contract claims on procedural grounds, declining to extend supplemental jurisdiction over the claims for breach of good faith. And where incidental issues of monetization have raised questions regarding § 230, courts have not reached the broader allegations of censorship and content removal. In Lewis v. Google LLC, the Ninth Circuit held that YouTube was immune from liability for its decision to demonetize certain videos from an offending channel that was never banned,[13] and in Sweet v. Google Inc., the Northern District of California held that YouTube’s Terms of Service allowed the provider to withhold ads from unapproved content.[14] Even after parsing the text of YouTube’s Terms of Service, these decisions did not answer whether § 230 should defer to a service provider’s own contractual definitions governing content removal.[15]

Most interestingly, in King v. Facebook, Inc., black civil rights attorney and journalist Christopher King sued Facebook on the basis that the service provider had breached provisions of its Terms of Service that “protect minority users from being punished for using certain terms in a ‘self-referential’ or ‘empowering way.’”[16] The Northern District of California split from the later decision in Murphy and allowed a claim for retaliatory breach of contract to proceed following dismissal of the other claims.[17] Given the courts’ broad reading of § 230 immunity, this was a rare opportunity that could have led to a proper discussion of what Facebook’s Terms of Service allow the company to do. Disappointingly, the plaintiff instead argued First Amendment claims and breach of the implied covenant without relying on the text of Facebook’s Terms of Service, and the case was ultimately dismissed with prejudice.[18] The court in King relied on Ebeid v. Facebook for the proposition that “a party cannot be held liable on a bad faith claim for doing what is expressly permitted in the agreement.”[19] Not only did the King court take this quote out of context—the contract claim in Ebeid was based on Facebook’s alleged failure to “boost” content after the plaintiff had already conceded Facebook’s right to remove content[20]—but the quote was also inapplicable to the situation before it. Facebook’s current Terms of Service link to its Community Standards Transparency Center, which in turn links to an opinion piece that itself describes the difficulty of defining offensive content.[21] This definition of hate speech is hardly “expressly permitted,” as the King court claimed.[22]

III. Conclusion

While this case law of broad immunity suggests that circumventing § 230 through contract claims may prove difficult, plaintiffs may yet emerge who make clear to judges that contract claims are more than mere creative pleading. Plaintiffs whose content is not self-evidently indecent, and who seek remedies characteristic of contract claims, can show that they are litigating for good faith and honesty in commercial dealings, not merely evading reasonable content moderation. Such contract claims are worth trying, not only to gauge courts’ willingness to curtail the overbroad immunity of § 230, but also to ensure that online service providers like Facebook, Twitter, and Google are honest about what they put in their Terms of Service, and that courts do not broadly defer to inconsistent and arbitrary contractual definitions of indecent content to the detriment of civil discourse.

[1] Prager Univ. v. Google LLC, 951 F.3d 991 (9th Cir. 2020).

[2] 47 U.S.C. § 230.

[3] Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1109 (9th Cir. 2009).

[4] 47 U.S.C. § 230.

[5] Restatement (Second) of Contracts § 206 (1981).

[6] Christopher Cox, The Origins and Original Intent of Section 230 of the Communications Decency Act, Rich. J.L. & Tech. Blog (Aug. 27, 2020), https://jolt.richmond.edu/2020/08/27/the-origins-and-original-intent-of-section-230-of-the-communications-decency-act/ (“Yet another misconception about the coverage of Section 230, often heard, is that it created one rule for online activity and a different rule for the same activity conducted offline. To the contrary, Section 230 operates to ensure that like activities are always treated alike under the law. When Section 230 was written, just as now, each of the commercial applications flourishing online had an analog in the offline world, where each had its own attendant legal responsibilities. Newspapers could be liable for defamation. Banks and brokers could be held responsible for failing to know their customers. Advertisers were responsible under the Federal Trade Commission Act and state consumer laws for ensuring their content was not deceptive and unfair. Merchandisers could be held liable for negligence and breach of warranty, and in some cases even subjected to strict liability for defective products.”). Compare Universal Commun. Sys. v. Lycos, Inc., 478 F.3d 413, 415 (1st Cir. 2007) (describing “broad immunity” specific to internet service providers).

[7] Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13 (2020) (Thomas, J., concurring); see also Biden v. Knight First Amendment Inst. at Columbia Univ., 141 S. Ct. 1220 (2021) (Thomas, J., concurring).

[8] Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040, 1053–54 (9th Cir. 2019).

[9] See, e.g., J.B. v. G6 Hosp., LLC, 2020 U.S. Dist. LEXIS 232625 (N.D. Cal. 2020).

[10] Murphy v. Twitter, Inc., 60 Cal. App. 5th 12 (2021).

[11] Id. at 26.

[12] See also Kifle v. YouTube LLC, 2021 U.S. Dist. LEXIS 193604, at *7–8 (N.D. Cal. 2021) (claiming without argument that § 230 bars contract claims, contrary to the holding in Barnes v. Yahoo!).

[13] Lewis v. Google LLC, 461 F. Supp. 3d 938 (N.D. Cal. 2020).

[14] Sweet v. Google Inc., 2018 U.S. Dist. LEXIS 37591 (N.D. Cal. 2018).

[15] Id. at *12.

[16] King v. Facebook, Inc., 2019 U.S. Dist. LEXIS 151582, at *3–4 (N.D. Cal. 2019).

[17] King v. Facebook, Inc., 2019 U.S. Dist. LEXIS 209247 (N.D. Cal. 2019); King v. Facebook, Inc., 845 Fed. Appx. 691 (9th Cir. 2021).

[18] King, 2019 U.S. Dist. LEXIS 209247, at *4.

[19] Ebeid v. Facebook, Inc., 2019 U.S. Dist. LEXIS 78876, at *21 (N.D. Cal. 2019).

[20] Id.

[21] Richard Allan, Hard Questions: Who Should Decide What Is Hate Speech in an Online Global Community?, Facebook (June 27, 2017), https://about.fb.com/news/2017/06/hard-questions-hate-speech/.

[22] Ebeid, 2019 U.S. Dist. LEXIS 78876, at *21.