Section 230 And The Future Of Content Moderation

The Communications Decency Act of 1996 (CDA)
was a landmark law enacted to regulate content on the internet. The
purpose of the legislation was to regulate indecent and obscene
material online, but it is most relevant today for Section 230—a provision that protects
online platforms from liability in a variety of circumstances
involving third-party use of their services. While Section 230 is
often credited with allowing the internet to flourish in the late
1990s and the early 21st century, it has been the
subject of calls for amendment from across the political spectrum
as courts and online platforms attempt to fit the law to the modern
internet. In particular, a rash of bills in 2020 targeted the law,
specifically in the context of immunity for content-moderation
decisions—an application that has become more heavily
scrutinized as service providers attempt to curb abusive content
and critics raise concerns about censorship.

This article addresses the evolving landscape for online
platforms seeking to moderate content while limiting litigation
risk.

Background: The CDA and Section 230

Shortly after the CDA was enacted, it faced First Amendment
challenges to its provisions that prohibited the transmission of
“obscene or indecent” content to minors. The U.S. Supreme
Court held the anti-indecency provision of the statute
unconstitutional in Reno v. American Civil Liberties
Union, but held that provision to be severable from the
rest of the law, allowing Section 230 to stand.

Now, Section 230 is the principal legal protection afforded to
online platforms from lawsuits over content posted by their users.
It contains three provisions specifying when platforms will be
immune from suit: first, in Subsection (c)(1) as a
“publisher”; second, in Subsection (c)(2)(A) for the Good
Samaritan removal or filtering of content; and third, in Subsection
(c)(2)(B) as a provider of the technical means to restrict
content.

Subsection (c)(1) states: “No provider or user of an
interactive computer service shall be treated as the publisher or
speaker of any information provided by another information content
provider.” It “is concerned with liability arising from
information provided online,” but as stated
in Barrett v. Rosenthal,
“[l]iability for censoring content is not ordinarily
associated with the defendant’s status as ‘publisher’
or ‘speaker.’”

Subsection (c)(2) provides immunity for moderation or alleged
“censorship” scenarios, stating: “No provider or
user of an interactive computer service shall be held liable on
account of: (A) any action voluntarily taken in good faith to
restrict access to or availability of material that the provider or
user considers to be obscene, lewd, lascivious, filthy, excessively
violent, harassing, or otherwise objectionable, whether or not such
material is constitutionally protected; or (B) any action taken to
enable or make available to information content providers or others
the technical means to restrict access to material described in
paragraph (1).”

Courts have interpreted Subsection (c)(1) broadly as providing immunity to online platforms, both from suits over content posted by their users and from suits over their removal of content from their sites.
In a key early decision involving allegedly defamatory messages on
a message board, Zeran v. America Online, the U.S. Court of Appeals for
the Fourth Circuit held that Section 230 “creates a federal
immunity to any cause of action that would make service providers
liable for information originating with a third-party user of the
service.” “Thus, lawsuits seeking to hold a service
provider liable for its exercise of a publisher’s traditional
editorial functions—such as deciding whether to publish,
withdraw, postpone or alter content—are barred.” This
immunity is generally not limited to particular causes of action,
and because Section 230 preempts state law where inconsistent,
Subsection (c)(1) is a defense to state tort and contract claims as
well as federal lawsuits.

Subsection (c)(1) is not an absolute bar to litigation for
third-party content on online platforms, however. In a critical
decision denying Section 230 immunity, Fair Housing Council of San Fernando Valley v. Roommates.com, the U.S. Court of Appeals
for the Ninth Circuit held that Section 230 did not preempt claims
under the Fair Housing Act and state housing
discrimination laws where a roommate-matching service required
users to answer a questionnaire with criteria such as “sex,
sexual orientation and whether they will bring children to the
household.” The Ninth Circuit, noting that Section 230
“was not meant to create a lawless no-man’s-land on the
Internet,” found that the questionnaire was “designed to
force subscribers to divulge protected characteristics and
discriminatory preferences”—in other words, the
defendant was a “developer” of an allegedly
discriminatory system because it elicited content from users and
made use of it in conducting its business based on allegedly
illegal criteria. The Ninth Circuit contrasted this with cases in
which immunity was upheld—including where websites used
“neutral tools” that “did absolutely nothing to
enhance the defamatory sting of the message, to encourage
defamation or to make defamation easier,” such as allowing
users to filter dating profiles based on voluntary user inputs.
Notably, the Ninth Circuit did apply Section 230 immunity to the
“additional comments” section of users’ profiles,
where users were merely encouraged to provide information about
themselves; even though the lawsuit pointed to instances where
users input race or religious requirements into this section, the
Ninth Circuit noted that Roommates.com only passively published
these comments as written, which is precisely what Section 230
protects.

Additionally, the Ninth Circuit has held that failure-to-warn cases are not preempted by Section 230. In Doe v. Internet Brands, the plaintiff
alleged that two individuals were using a modeling website to pose
as talent agents and find, contact and lure “targets for a
rape scheme.” The defendant allegedly knew about these
particular individuals and how they were using the website, but
failed to warn users about the risk of being victimized. The Ninth
Circuit determined the critical question under Subsection (c)(1) to
be whether the allegations depended on construing the defendant as
a publisher (i.e., whether the claims arose from the
defendant’s failure to remove content from the website). The
Ninth Circuit noted that, in these circumstances, the marginal
chilling effect of allowing such a claim to proceed did not warrant
turning Section 230 into an “all purpose get-out-of-jail-free
card,” nor would it discourage “‘Good Samaritan’
filtering of third party content.”

Further, in May 2021, the Ninth Circuit reversed a district
court’s dismissal based on Section 230 immunity in Lemmon v. Snap. Parents of three boys, ages 17 to 20, killed in a car accident
sued the maker of Snapchat for its “Speed
Filter”—an overlay users could add to photos and videos
that shows their speed. The parents alleged that one of the boys
opened the app shortly before the crash to “document”
their speed (at one point over 123 miles per hour) and that Snap
allowed this feature notwithstanding (untrue) rumors that users
would receive a “reward” for reaching over 100 miles per
hour in the app. The Ninth Circuit held that the negligent-design
claim did “not seek to hold Snap liable for its conduct as a
publisher or speaker” and “[t]he duty to design a
reasonably safe product is fully independent of Snap’s role in
monitoring or publishing third-party content”; thus, Subsection (c)(1) did not apply. Separately, the Ninth Circuit held Subsection
(c)(1) inapplicable because Snap designed the Speed Filter and
reward system at issue, so the claim did not rely on
“information provided by another information content
provider.” Though the implications of this holding are yet to
be seen, the Ninth Circuit attempted to constrain it to true
defective design cases; the allegations did not depend on the
content of any messages actually transmitted, so this was “not
a case of creative pleading designed to circumvent CDA
immunity.”

The breadth of immunity provided by Section 230 has also been
pared back by subsequent legislation. In 2018, largely as a
response to Backpage.com prevailing on Section 230 immunity in
litigation concerning sex trafficking, the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (FOSTA) was signed into law,
amending Section 230 to eliminate platforms’ immunity from
prosecution for violating certain state sex trafficking laws. It
also eliminated platforms’ immunity from civil suits brought by
victims of sex trafficking for knowingly promoting and facilitating
sex trafficking. Notably, the text of FOSTA states that it does not
apply to Subsection (c)(2).

Section 230 and Content Moderation

While Subsection (c)(1) was a paradigm shift in terms of making
the internet a unique forum in which content could be hosted and
accessed without traditional publisher liability applying to
service platforms, Subsection (c)(2) has also been essential in
forming the legal landscape for social media and other online
spaces. Because both prongs of Subsection (c)(2) concern content removal, the provision has become particularly relevant in recent years as more people, including politicians and other public figures, participate in online communities. Subsection (c)(2) has not been
the deciding factor for many cases to date, but disputes concerning
content moderation issues are likely to proliferate.

Several courts have held that Subsection (c)(2) immunizes online
platforms from liability for content removal decisions, though it
is case-dependent whether such claims can survive the pleading
stage. For example, in 2021, the U.S. Court of Appeals for the Second Circuit applied Subsection (c)(2) to affirm the dismissal at the pleading stage of claims brought against a video-sharing site
over the site’s removal of the plaintiffs’ videos promoting
“sexual orientation change efforts.” In Domen v. Vimeo, the
court noted that Subsection (c)(2) is a “broad provision”
that forecloses civil liability where providers restrict access to
content that they consider objectionable. The
Second Circuit found that the plaintiff had not pleaded that Vimeo
had acted in bad faith because there were no plausible allegations
that Vimeo’s actions were “anti-competitive conduct or
self-serving behavior in the name of content regulation,” as
opposed to “a straightforward consequence of Vimeo’s
content policies.”

Similarly, a case in the U.S. District Court for the Northern
District of California, Daniels v. Alphabet, held that
Subsection (c)(2)(A) barred nearly all of the plaintiff’s
claims regarding removal of his videos from YouTube and alleged
restriction of his account, noting that the plaintiff himself
acknowledged that the defendants’ reason for removal was that
the videos violated “YouTube’s Community Guidelines”
and “YouTube’s policy on harassment and bullying.”
The plaintiff’s conclusory assertions of bad faith were
insufficient to overcome the discretion afforded by Subsection
(c)(2)(A). This decision and the ruling
in Vimeo demonstrate that the good-faith removal
defense can be successfully raised at the pleading stage, though
defendants may have more trouble doing so where plaintiffs bring
more specific allegations of bad faith.

Conversely, the Ninth Circuit in Enigma Software Group USA v. Malwarebytes held that a security software company
was not entitled to immunity under Subsection (c)(2)(B) at the
pleading stage where the plaintiff alleged that Malwarebytes’s
software blocked the installation or use of its security software
for anti-competitive purposes. There, the Ninth Circuit found that
the complaint plausibly alleged the companies were direct
competitors. It reversed the district court’s finding of
immunity and remanded the case to the district court, holding that
the anticompetitive allegations were sufficient to survive
dismissal at the pleading stage.

Numerous other cases have dispensed with content-moderation or account-removal allegations against platforms by applying Subsection (c)(2),
often in the social media context and with little analysis of the
good faith requirement. Additionally, several courts have applied
Subsection (c)(1) to removal decisions on the theory that removing
or withholding content from a platform is a typical
“publisher” decision, which is protected by Subsection
(c)(1). Though this approach sidesteps the good-faith analysis built into Subsection (c)(2), courts have not been consistent about when to apply Subsection (c)(1) to moderation or removal decisions, and it remains to be seen how reliably they will take this more protective route.

Potential Changes to Section 230

Outside of the courts, content moderation has been hotly
contested across the political spectrum. Generally, proposed bills have divided along party lines. Democrats have sought to protect
providers’ ability to remove hate speech and offensive content
while leaving open liability in the anti-discrimination context,
and Republicans have sought to impose more First Amendment-like
restrictions on what providers can remove.

The Senate Committee on Commerce, Science, and
Transportation held a hearing in October 2020 to address
Section 230 with executives from Twitter, Facebook and Google
present, in which senators addressed issues ranging from political
censorship to the spreading of misinformation. While Subsection
(c)(2) currently protects platforms’ decisions to remove, label
or restrict the spread of content they deem to be damaging in some
way, some senators pressed the companies’ representatives to
explain the reasoning behind the removal or restriction of various
specific posts. Senator Roger Wicker (R-MS), providing the majority
opening statement, acknowledged the role Section 230 played in
enabling the growth of the internet but also claimed it “has
also given these internet platforms the ability to control, stifle,
and even censor content in whatever manner meets their respective
‘standards,’” and “[t]he time has come for that
free pass to end.” He also pointed to instances of removal
that he characterized as inconsistent or evincing political bias.
Senator Maria Cantwell (D-WA), in the minority opening statement,
focused on enabling platforms to remove “hate speech or
misinformation related to health and public safety.”

In March 2021, Facebook CEO Mark Zuckerberg argued before the
House Committee on Energy and Commerce that Section 230 immunity
should be reduced in favor of platforms being “required to
demonstrate that they have systems in place for identifying
unlawful content and removing it.” His proposal contemplated a
third party that would set standards for what would constitute an
adequate system, proportionate to the size of the provider at
issue. Additionally, Mr. Zuckerberg advocated for more transparency
into how platforms decide to remove “harmful but legal”
content.

Since 2020, numerous bills have been introduced that would
further pare back the immunity Section 230 provides to platforms,
both for removing and for failing to remove certain categories of
third-party content. One example is the Safeguarding Against Fraud, Exploitation, Threats,
Extremism and Consumer Harms (SAFE TECH) Act, introduced
by Senators Mark Warner (D-VA), Mazie Hirono (D-HI) and Amy
Klobuchar (D-MN). This bill proposes to limit immunity in cases
involving, among other things, civil rights or discrimination,
antitrust, stalking, harassment, intimidation, international human
rights law or wrongful death. It would also make Section 230 an
affirmative defense—rather than a pleading-stage
immunity—and would make it unavailable to defendants
challenging a preliminary injunction. Another example is
the Platform Accountability and Consumer
Transparency (PACT) Act, which has received some
bipartisan support. This bill seeks to set certain requirements for
platforms’ takedown processes and provides state attorneys
general as well as the Federal Trade Commission with certain
enforcement authority. Several other bills have been introduced
with a similar focus on stripping immunity based on the subject
matter of litigation or based on the practices of the platform. The
Biden Administration has not taken an official position on Section
230.

Conclusion

While Section 230 remains the predominant legal protection for
online platforms moderating content in good faith, courts are
beginning to engage more regularly with these issues, and recent
decisions signal that defendants may have difficulty relying on
Subsection (c)(2) immunity to dispose of well-pled suits at the
pleading stage. Further, many of the cases discussed above that were dismissed on Subsection (c)(2) grounds might have survived under the proposed legislation. Section 230 reform may introduce uncertainty into
online platforms’ litigation risk, so content providers should
remain aware of the shifting landscape for this critical legal
protection.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
