Last week terrorists once again attacked the people of London. Lives were shattered and lost yet again, and once again the nation’s leadership was forced to respond to the continuing threat that terrorists pose to open societies. Leading the charge, the U.K.’s Prime Minister, Theresa May, decided to place a large part of the blame on the companies that create and run social spaces on the Internet. Speaking outside No. 10, she noted that “we cannot allow this ideology the safe space it needs to breed – yet that is precisely what the Internet, and the big companies that provide internet-based services provide.” As a result, she added, her country needs “to work with allied democratic governments to reach international agreements to regulate cyberspace to prevent the spread of extremist and terrorism planning.”

Putting aside for a moment the politics of her remarks (which run deep, given that she was in charge of internal security for six years before becoming Prime Minister), her claim for increased regulation is wrong for two reasons. First, it is a legally dubious position. Second, it is largely pointless. Let’s take each critique in turn.

Ms. May claims that technology companies are creating “safe spaces” for terrorists “to breed.” While factually true, this does not mean that the creators of any public space, real or virtual, should be ultimately responsible for everything that happens in their environments. In other words, if gangsters happen to meet in a McDonald’s restaurant to plan crimes, that does not mean we need to pass a law forcing McDonald’s to monitor and record every conversation held at one of its restaurants. Likewise, if drugs are sold at a city park, that does not mean we should hold the city’s parks department responsible for increasing drug use. Both of these common-sense conclusions have been reinforced formally by recent rulings by U.S. courts in favor of Twitter and Facebook. As Eric Goldman recently noted in his Technology & Marketing Law Blog:

We’ve seen a cluster of lawsuits against social media sites based on their alleged provision of material support to terrorists. The first substantive ruling–in Fields v. Twitter, now on appeal to the Ninth Circuit–was a decisive plaintiff loss, casting a dark shadow on all of the other cases. The second substantive ruling, in the Cohen and Force v. Facebook cases, also is a decisive plaintiff loss.

In case after case, the courts are stating clearly that the Facebooks and YouTubes of the world are not responsible for the actions of third parties on their sites. At least in the U.S., there is a clear position emerging that these sites are public commons that can and should be policed like any other public space, but in a way that is consistent with civil liberties. This is the right view, since it is wrong to treat online discourse differently from “physical” discourse. What rights we have in a park or restaurant must continue to exist when we go online.

Turning to the second critique, it is not just legally dubious to try to censor online public spaces; it is also largely ineffective. Throughout history, those who would harm society have been quick to adjust their methods in response to social and legal prohibitions. The Internet is no different. To take but one example, from July to December of 2016, Twitter suspended 376,890 accounts for possible terrorist content, and 74% of those accounts were flagged by internal (and therefore privately funded) technology systems. Yet there is no doubt that on Twitter today one could find a similar number of accounts inciting one dangerous view or another. Moreover, when accounts do not resurface on major sites, they often migrate to other, less-policed, environments. As the Wall Street Journal noted this week:

As Facebook, Twitter and Google’s YouTube have improved in removing explicit terrorist content, much of that material has migrated to lesser-known platforms like chat app Telegram and text-sharing site PasteBin. Terrorists still use the major platforms—because that is where users are—but mainly to identify potential recruits and make contact, before moving conversations toward radicalization on encrypted messaging services, researchers say.

Ironically, some anti-terrorist entities actually regret the closing of suspicious Facebook or Twitter accounts in the first place. They would much prefer to have the accounts remain open on first-tier platforms, where they can be monitored and tracked, a position that might, in fact, argue for loosening content restrictions when all is said and done.

While it is wrong to hold social media platforms responsible for terrorist outreach, it is not wrong to believe that in specific conditions online speech can and should be curtailed. The use of any public space is a compact that binds the user to rights but also to social responsibilities. The correct approach is to improve the policing of public spaces against those who violate social norms of behavior. Indeed, back in 2015, I wrote a piece on the issue of terrorism and free speech. In that post, I argued that there are very real cases in which one’s right to speech on the Internet is forfeit. I suggested three criteria for when and how that should happen:

If one agrees, then, that limitations on free speech on the internet and social media should exist only to prevent physical harm, the challenge becomes how to implement such a model. My position is that any form of restriction on free speech on the Web should (a) exist only to prevent physical harm in Pontin’s narrow sense, (b) follow due process and (c) be transparently implemented and recorded.

The events of the last week have not changed my conclusions. If someone is clearly shown to be inciting physical harm, then his speech can and should be suppressed. Indeed, Ms. May’s government recently passed the most draconian set of mass surveillance laws of any Western country for just this purpose. As a result, U.K. authorities have the most extensive legally sanctioned powers in the democratic world to collect and read the private information of both citizens and foreigners. It is hard to believe that this is not (more than) enough to get the job done. (Indeed, a cynic might conclude that her call for more regulation is really an attempt to draw attention away from the possibility that during her six-year tenure as Home Secretary the current agents of terror in the U.K. continued to evolve and perhaps even thrive.)

Whatever the motive, her call for more government censorship of free speech online is wrong. Government should police the digital commons just as it polices any other public space, but it should do so with the same respect for civil rights (already under siege) that it brings to non-virtual settings. This week’s Economist leader makes the same point:

As in the offline world, legislators must strike a balance between security and liberty. Especially after attacks, when governments want to be seen to act, they may be tempted to impose blanket bans on speech. Instead, they should set out to be clear and narrow about what is illegal—which will also help platforms deal with posts quickly and consistently. Even then, the threshold between free speech and incitement will be hard to define. The aim should be to translate offline legal norms into the cyber domain.

Back in 2015 I made the following observation:

At moments of national stress, it’s both expected and unfortunate that many politicians rush to anti-liberty ideas such as censorship or “closing” parts of the Internet as panaceas to complex threats and problems. This is not just misguided, it is a dangerous, and in some cases fascist, reaction that seeks to define one specific group as an existential threat and a specific ideology as the only salvation. Once that bridge is crossed, anything becomes acceptable in the name of survival, even the reduction or wholesale elimination of individual liberty.

Neither last week’s attacks, nor those carried out in the past by the IRA, ETA, the Red Brigades, the PLO, or the Oklahoma City bomber, to name a few, should cause democracies to abandon their fundamental liberties. Terrorists, after all, come and go with the times. Freedom of expression, not its suppression or even elimination in response to sick ideologies or actions, must be our legacy to future generations.


Posted by Carlos Alvarenga

Carlos A. Alvarenga is the Executive Director of World 50 Labs and Adjunct Professor in the Logistics, Business and Public Policy Department at the University of Maryland’s Robert H. Smith School of Business.

One Comment

  1. Excellent arguments. Missing from the public dialog today is a frank discussion of the cost of false positives. We are quick to note that nefarious agents were already “on watch” as if to imply a failing of law enforcement to act against everyone along the risk profile spectrum. In fact, such police action would be to achieve the goal of those agents in the first place: the collapse of free democratic society.
