Since the election of Donald Trump in 2016, there has been burgeoning awareness of hate speech on social media platforms like Facebook.
While activists have pressured these companies to improve their content moderation, few groups (outside of the German government) have
outright sued the platforms for their actions.
That's because of a legal distinction between media publications and media platforms that has
made solving hate speech online a vexing problem.
Take, for instance, an op-ed published in the New York Times calling for the slaughter of an entire group of people.
The Times would likely be sued for publishing hate speech, and the plaintiffs may well be victorious in their case.
Yet, if that op-ed were published in a Facebook post, a suit against Facebook would likely fail.
The reason for this disparity? Section 230 of the Communications Decency Act (CDA), which provides platforms like Facebook with a broad shield from liability when a lawsuit turns on what their users post or share. The latest uproar against Alex Jones and Infowars has led many to call for the repeal of section 230, but that may lead to the government getting into the business of regulating speech online. Instead, platforms should step up to the plate and coordinate their policies so that hate speech will be considered hate speech regardless of whether Jones uses Facebook, Twitter or YouTube to propagate his hate.
A primer on section 230
Section 230 is considered a bedrock of freedom of speech on the internet. Passed in the mid-1990s, it is credited with freeing platforms like Facebook, Twitter, and YouTube from the risk of being sued for content their users upload, and therefore powering the exponential growth of these companies. If it weren't for section 230, today's social media giants would have long been bogged down with suits based on what their users post, with the resulting necessary pre-vetting of posts likely crippling these companies altogether.
Instead, in the more than twenty years since its enactment, courts have consistently found section 230 to be a bar to suing tech companies for user-generated content they host. And it's not only social media platforms that have benefited from section 230; sharing economy companies have used section 230 to defend themselves, with the likes of Airbnb arguing they're not responsible for what a host posts on their site. Courts have even found section 230 broad enough to cover dating apps. When a man sued one for not verifying the age of an underage user, the court tossed out the lawsuit, finding an app user's misrepresentation of his age not to be the app's responsibility because of section 230.
Private regulation of hate speech
Of course, section 230 has not meant that hate speech online has gone unchecked. Platforms like Facebook, YouTube and Twitter all have their own extensive policies prohibiting users from posting hate speech. Social media companies have hired thousands of moderators to enforce these policies and to hold violating users accountable by suspending them or blocking their access altogether. But the recent debacle with Alex Jones and Infowars presents a case study in how these policies can be inconsistently applied.
Jones has for years fabricated conspiracy theories, like the one claiming that the Sandy Hook school shooting was a hoax and that Democrats run a global child-sex trafficking ring. With thousands of followers on Facebook, Twitter, and YouTube, Jones' hate speech has had real-life consequences. From the brutal harassment of Sandy Hook parents to a gunman storming a pizza restaurant in D.C. to save kids from the restaurant's nonexistent basement, his messages have had serious deleterious consequences for many.
Alex Jones and Infowars were finally suspended from ten platforms by our count, with even Twitter falling in line and suspending him for a week after first dithering. But the varying and delayed responses exposed how differently platforms handle the same speech.
Inconsistent application of hate speech rules across platforms, compounded by recent controversies involving the spread of fake news and the contribution of social media to increased polarization, has led to calls to amend or repeal section 230. If the printed press and cable news can be held liable for propagating hate speech, the argument goes, then why should the same not be true online, especially when fully two-thirds of Americans now report getting at least some of their news from social media? Amid the chorus of those calling for more regulation of tech companies, section 230 has become a consistent target.
Should hate speech be regulated?
But if you need convincing as to why the government is not best placed to regulate speech online, look no further than Congress' own wording in section 230. The section, enacted in the mid-90s, states that online platforms "offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops" and provide "a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity." Section 230 goes on to declare that it is the "policy of the United States to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet."
Based on the above, section 230 offers the now-infamous liability protection for online platforms.
From the simple fact that most of what we see on our social media is dictated by algorithms over which we have no control, to the Cambridge Analytica scandal, to increased polarization because of the propagation of fake news on social media, one can quickly see how Congress' words in 1996 read today as a catalogue of inaccurate predictions.
Even Ron Wyden, one of the original drafters of section 230, himself admits today that the drafters never expected an "individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children" to be enabled through the protections offered by section 230.
It would be hard to argue that today's Congress, having shown little understanding in recent hearings of how social media operates to begin with, is any more qualified at predicting the effects of regulating speech online. More importantly, the burden of complying with new regulations will create a significant barrier to entry for startups and therefore have the unintended consequence of entrenching incumbents. While Facebook, YouTube, and Twitter may have the resources and infrastructure to handle compliance with the increased moderation or pre-vetting of posts that regulations might impose, smaller startups will be at a major disadvantage in keeping up with such a burden.
Last chance before regulation
The answer has to lie with the online platforms themselves. Over the past two decades, they have amassed a wealth of experience in detecting and taking down hate speech. They have built up formidable teams with varied backgrounds to draft policies that take into account an ever-changing internet. Their profits have enabled them to hire away top talent, from government prosecutors to academics and human rights lawyers.
These platforms have also been on a hiring spree in the last couple of years to ensure that their product policy teams, the ones that draft policies and oversee their enforcement, are more representative of society at large. Facebook proudly announced that its product policy team now includes "a former rape crisis counselor, an academic who has spent her career studying hate organizations, and a teacher." Gone are the days when a bunch of engineers exclusively decided where to draw the lines. Big tech companies have been taking the drafting and enforcement of their policies ever more seriously.
What they now need to do is take the next step and start to coordinate policies so that those who wish to propagate hate speech can no longer game policies across platforms. Waiting for controversies like Infowars to become a full-fledged PR nightmare before taking concrete action will only increase calls for regulation. Proactively pooling resources when it comes to hate speech policies and establishing industry-wide standards will provide a defensible reason to resist direct government regulation.
The social media giants can also build public trust by helping startups get up to speed on the latest approaches to combating hate speech. While any industry consortium around coordinating hate speech policies is certain to be dominated by the largest tech companies, they can ensure that policies are easy to access and widely distributed.
Coordination between fierce competitors may sound counterintuitive. But the common problem of hate speech and the gaming of online platforms by those trying to propagate it call for an industry-wide response. Precedent exists for tech titans coordinating when faced with a common threat. Just last year, Facebook, Microsoft, Twitter, and YouTube formalized the Global Internet Forum to Counter Terrorism, a joint effort to curb the spread of terrorist content online.