Free Speech and Terrorism in the Age of Social Media

This past week there has been a lot of discussion about the impact of social media on the diffusion of terrorism. In the aftermath of the San Bernardino shootings, more than one political leader has suggested that it’s in society’s interest to “close down” social media (and other internet) access points to suspected terrorists in order to safeguard citizens. China’s Xi Jinping was one of those voices. Speaking at a Chinese conference on the internet, Xi stated:

“Everyone should abide by the law, with the rights and obligations of parties concerned clearly defined. Cyberspace must be governed, operated and used in accordance with the law so that the internet can enjoy sound development under the rule of law.”

In response to these calls for restrictions, several writers, philosophers and technologists have counter-argued that the Internet is and should remain an uncensored, self-regulated environment devoted to free speech. A lot of these counter-arguments are built (at least in part) on mechanistic principles, by which I mean that censorship is ruled out because the mechanics are either impossible or unwieldy. How, for example, do you spot a potential terrorist on Twitter or Facebook? Other arguments are based on philosophical opposition to any censorship of expression on the web, no matter how dangerous or vile, since no government or other body can or should have authority over what can or cannot be said on-line.

Reading through both sides of this debate, I was reminded of a piece written in 2013 by Jason Pontin, the editor of the MIT Technology Review. Pontin’s piece (“Free Speech in the Era of Its Technological Amplification”) was an open letter to John Stuart Mill, the 19th-century utilitarian philosopher, economist, and arch supporter of individual liberty. In it, Pontin discussed three episodes that provoked much debate about censorship on the Web a few years ago:

The YouTube Muhammad Movies:

In July of 2012, “Sam Bacile,” later identified by the U.S. government as a Coptic Egyptian named Nakoula Basseley Nakoula, uploaded to YouTube two short videos, “The Real Life of Muhammad” and “Muhammad Movie Trailer.” The videos were in English and purported to be trailers for a full-length movie, which has never been released; both depicted the Prophet Muhammad as a womanizer, a homosexual, and a child abuser. As examples of the filmmaker’s art they were clownish. Nakoula, their producer, wished to provoke, but the videos languished on YouTube unseen and might never have been noticed had not Egyptian television, in September, aired a two-minute excerpt dubbed into Arabic. Enterprising souls provided Arabic captions for the videos, soon collectively named The Innocence of Muslims; millions watched them. In reaction, some Muslims rioted in cities all over the world.

Twitter’s “Country-Withheld” Censorship Policy:

No one felt much outrage when an obscure Hannoverian neo-Nazi group was put down. But Twitter’s next use of country-withheld content was more troubling, because it was more broadly applied. Not long after the company blocked Besseres Hannover, in response to complaints from the Union of French Jewish Students, it chose to censor tweets within France that used the hashtag “#UnBonJuif” (which means “a good Jew”). French tweeters had been using the hashtag as the occasion for a variety of anti-Semitic gags and protests against Israel. (Samples: “A good Jew is a pile of ash” and “A good Jew is a non-Zionist Jew.”) The tweets were not obscure: when it was taken down, #UnBonJuif was the third-most-popular trending term in France.

Reddit’s Pornography Debate:

…subreddits were part of a larger trend of websites catering to a taste for images of young women who never consented to public exposure (examples include “self-shots,” where nude photos intended to titillate boyfriends find their way online, and “revenge porn,” sexually explicit photos of women uploaded to the Internet by bitter ex-boyfriends). The two most popular subreddits publishing images of young women, “r/jailbait” and “r/creepshot,” didn’t traffic in illegal child pornography; the forums’ moderator, an Internet troll known as Violentacrez, was much valued by Reddit’s registered users, called “redditors,” for his industriousness in removing unlawful content. … Inevitably, r/jailbait and r/creepshot attracted wide opprobrium (little wonder: a popular shot was the “upskirt”), and Adrian Chen, a writer at Gawker, identified (or “doxxed”) Violentacrez as Michael Brutsch, a 49-year-old Texan computer programmer. Brutsch promptly lost his job and was humiliated.

Pontin notes that these events strained what was known as Silicon Valley’s “sunny compromise,” wherein American Internet companies agreed not to export the traditional U.S. free speech model and to comply with local government censorship rules. As Pontin explains, the compromise was based on a definition of harm modeled on John Stuart Mill’s famous “harm principle,” which Mill laid out in his book On Liberty:

The sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number is self-protection … The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinions of others, to do so would be wise or even right … Over himself, over his own body and mind, the individual is sovereign.

In the U.S. the harm principle had evolved over time to mean mostly physical or commercial harm, not necessarily psychological distress or emotional insult. It was that narrow interpretation that was put under strain by the three episodes Pontin describes:

Silicon Valley’s presumptively absolutist standard of free speech, based on a narrow definition of harm, was exported to parts of the world that did not comprehend the standard or else did not want it. In all three cases I describe, the sunny compromise was considered by the parties involved and, under challenge, collapsed.

In response to this collapse, Pontin questions whether Mill’s principle can and should still apply to today’s internet, with its wide-ranging social media impact. Pontin concludes in the affirmative on both counts, noting that there is only one reply Mill would have made to anyone who argues that free speech is irritating at best or destructive at worst:

We value free speech, you wrote, because human beings are fallible and forgetful. Our ideas must be tested by argument: wrong opinion must be exposed and truth forced to defend itself, lest it “be held as a dead dogma, not a living truth.”

Consequently, Pontin argues, censorship on the Web can be justified only when the potential harm is physical or commercial, a definition that “excludes personal, religious, or ideological offense.” Pontin closes by making the case that technology “companies should obey American laws about what expressions are legal, complying with local laws only when they are consistent with your principle, or else refuse to operate inside a country.”

Returning now to the question of terrorism, it seems that, on Pontin’s interpretation, censoring Daesh, white supremacists, or any other group using the Web to plan an attack would be justified under Mill’s harm principle. That is a sensible position to take, since it offers a logical compromise: say what you want on-line (whether or not you insult a given group or religion is irrelevant), but draw the line at insurrection, violence, or economic harm. That said, Pontin’s position that “commercial harm” holds the same place as physical harm is a harder one to accept. Indeed, Richard Allen Epstein of the NYU School of Law responded to Pontin’s essay with a strong case for why Mill’s principle simply cannot be the basis for defining harm in today’s commercial relationships:

Physical harm certainly sets out a valid prima facie case that is subject to defenses that relate to consent and self-defense. But commercial harm is much too broad to be treated in the same fashion. The root of the difficulty is that Pontin’s formulation fails to distinguish three separate cases. 

The first is commercial harm that is wrought by defamation or violation of trade secrets. Here the libertarian norms against fraud reach the defamation case, which always involves a false statement about a plaintiff that a defendant makes to at least one third party. Trade secrets are a bit trickier, but they are best understood as property claims acquired to information by self-help that people can either keep to themselves or share with any number of persons under a promise of confidentiality.

Second, the single most dangerous version of the harm principle abroad in the land is that competitive injury suffered when a rival firm sells a better good at a lower price should be damned as a form of “unfair” or “ruinous” competition, notwithstanding the well-nigh universal proposition that competitive markets lead to optimal resource allocation. This form of commercial injury (which is surely real, as is offense) will lead to massive cartelization if given any legal respect. Like mere offense, it has to be treated as a noncognizable harm.

Third, the reference to commercial injury does not adequately deal with the position of a natural or legal monopolist in a network industry, whether it be railroads or cyberspace. The correct rule in these cases does not allow the monopolist to charge whatever it wants to whomever it wants. Instead, the long common-law tradition says that the party who holds that monopoly power can never engage in an arbitrary refusal to deal, but must offer his goods and services at reasonable and nondiscriminatory rates. Here the first term is intended to squeeze out monopoly profits, and the second to make sure that the monopolist does not engage in favoritism.

When it comes to free speech, even if we agree that commercial harm forms a more complex basis for restricting expression, we can still insist that the physical harm aspect of Mill’s principle should be the litmus test for any restriction. This position does not deny that non-physical harms (insults, hate speech, etc.) cause distress or psychological pain; it holds only that such effects do not justify limitations on a principle as fundamental to our society as freedom of speech. Indeed, as Epstein notes in agreement with Pontin:

Pontin’s version is clearly correct insofar as it excludes “religious or ideological offense” from the category of what lawyers call “cognizable” harms. That odd term “cognizable” is meant to capture this dual understanding. The offense that people take at the conduct of others cannot be dismissed with a wave of the hand, given that these feelings are often deep and long–lasting. They are in fact real harms, subjectively experienced. So the willingness to cut them out of the harm principle cannot rest on a simple denial of the fact, but must rest on the awareness that for the long-term success of the system, each person must waive that claim against all others, no matter how acute the feeling.

If one agrees, then, that limitations on free speech on the internet and social media should exist only to prevent physical harm, the challenge becomes how to implement such a model. My position is that any restriction on free speech on the Web should (a) exist only to prevent physical harm in Pontin’s narrow sense, (b) follow due process, and (c) be transparently implemented and recorded. Indeed, the rules proposed by the Center for Democracy and Technology for the U.K. are as good a starting point as any for a framework that would allow restrictions of free speech on the Web:

CDT recommends that:

  • The limits of lawful speech should be governed by international human rights standards, not commercial terms of service. Restrictions on speech must be proven necessary by the state to achieve a legitimate aim.
  • Governments seeking removal of unlawful content must do so by procedures that are provided by law, and that ensure accountability and an opportunity for appeal. Such restrictions must be the least restrictive and proportionate means to achieve the government’s aim.
  • It should not be the executive’s prerogative to define and alter the definition of “extremism” without the approval of Parliament.  The definition should, like the definition of “terrorism”, be a prescribed term in law.
  • Government should provide transparency about their policies and procedures for removing online content, including information about the nature and scope of information removed.  Company transparency reporting can also help to provide more information about content removal requests from governments, though it may not always be apparent to companies when a content removal request originates with the government. This underscores the need for transparency from government directly.

At moments of national stress, it’s both expected and unfortunate that many politicians rush to anti-liberty ideas such as censorship or “closing” parts of the Internet as panaceas for complex threats and problems. This is not just misguided; it is a dangerous, and in some cases fascist, reaction that seeks to define one specific group as an existential threat and a specific ideology as the only salvation. Once that bridge is crossed, anything becomes acceptable in the name of survival, even the reduction or wholesale elimination of individual liberty. This reaction is all the more powerful today because, as Pontin notes, the Web has “amplified” free speech to a level unimaginable even a few decades ago. That amplification of ideas, both good and bad, is seen as a threat by many who would use terrorism as an excuse to turn back the clock. This should not be allowed; we should support mechanisms that limit expression via technology only when absolutely necessary.

“Whoever would overthrow the liberty of a nation,” wrote Benjamin Franklin, “must begin by subduing the freeness of speech.” We should not let anyone use the fear of terrorists or any other threat — real or imagined — do to our tradition of free speech what one Civil War and two World Wars could not.

Read more:

http://www.technologyreview.com/featuredstory/511276/free-speech-in-the-era-of-its-technological-amplification/

http://www.technologyreview.com/view/514271/the-incompleteness-of-the-harm-principle/

https://cdt.org/blog/pressuring-platforms-to-censor-content-is-wrong-approach-to-combatting-terrorism/
