Analytics Economics Society Technology

Google Will Force Us To Redefine Humanity (As Information)

I recently published a post about what I call the “Linn Effect,” which occurs when technological innovation creates more problems than it solves. That post came to mind as I thought about the latest challenge that Google is facing in Europe: giving people the ability to erase all or part of their digital history — forever. Robert Herritt, writing in Pacific Standard, lays out the background to how Google came to this situation:

In 2010, a Spanish lawyer named Mario Costeja González appealed to Spain’s national Data Protection Agency to have an embarrassing auction notice for his repossessed home wiped from a newspaper website and de-indexed from Google’s search results. Because the financial matter had been cleared up years earlier, González argued, the fact that the information continued to accompany searches of his name amounted to a violation of privacy.

The Spanish authorities ruled that the search link should be deleted. In May 2014, the European Court of Justice upheld the Spanish complaint against Google. Since then, Google has received more than 140,000 de-linking requests. Handling these requests in a way that respects both the court and the company’s other commitments requires Google to confront an array of thorny questions. When, for instance, does the public’s right to information trump the personal right to privacy? Should media outlets be consulted in decisions over the de-indexing of news links? When is it in the public interest to deny right-to-be-forgotten requests?

Herritt’s article goes on to describe a meeting Google recently organized in Spain to begin addressing the many challenges posed by the European ruling. Among the invitees was Luciano Floridi, an Oxford philosopher who is director of the Oxford Internet Institute and a “philosopher of information.”

Herritt summarized Floridi’s views on information and identity as follows:

For anyone who wants to address the problems raised by digital technologies, the best way to understand the world is to look at everything that exists—a country, a corporation, a billboard—as constituted fundamentally by information. By viewing reality in these terms, Floridi believes, one can simultaneously shed light on age-old debates and provide useful answers to contemporary problems.

Consider his take on what it means to be a person. For Floridi, you are your information, which comprises everything from data about the relations between particles in your body, to your life story, to your memories, beliefs, and genetic code. By itself, this is a novel answer to the perennial question of personal identity, which has preoccupied philosophers since at least Plato: What defines the self as a coherent entity over time and space? But Floridi’s view can also help us think precisely about the consequential questions that today preoccupy us at a very practical level.

Now that the mining and manipulation of personal information has spread to almost all aspects of life, for instance, one of the most common such questions is, “Who owns your data?” According to Floridi, it’s a misguided query. Your personal information, he argues, should be considered as much a part of you as, say, your left arm. “Anything done to your information,” he has written, “is done to you, not to your belongings.” Identity theft and invasions of privacy thus become more akin to kidnapping than stealing or trespassing. Informational privacy is “a fundamental and inalienable right,” he argues, one that can’t be overridden by concerns about national security, say, or public safety. “Any society (even a utopian one) in which no informational privacy is possible,” he has written, “is one in which no personal identity can be maintained.”

The idea that humans are their information seems novel to Herritt, but it was actually explored at length by James Gleick in his excellent 2011 book, The Information: A History, A Theory, A Flood. Gleick’s book is too expansive to summarize in a few sentences, but one of its most fascinating conceptions is that human DNA, when all is said and done, is simply an idea trying to survive across time. In other words, DNA does not exist to enable human life to continue; rather, human beings exist to enable the transmission of DNA. In Gleick’s view, what is ultimately most important is not actual human life of blood and bones, but the idea of what a human being is. Evolution, for him, is the repeated transmission of an almost Platonic conception of human life from one generation to another:

The macromolecules of organic life embody information in an intricate structure. A single hemoglobin molecule comprises four chains of polypeptides, two with 141 amino acids and two with 146, in strict linear sequence, bonded and folded together. Atoms of hydrogen, oxygen, carbon, and iron could mingle randomly for the lifetime of the universe and be no more likely to form hemoglobin than the proverbial chimpanzees to type the works of Shakespeare. Their genesis requires energy; they are built up from simpler, less patterned parts, and the law of entropy applies. For earthly life, the energy comes as photons from the sun. The information comes via evolution.

Returning to Floridi, then, we can connect the dots and see his conceptualization of information as part of human identity as wholly consistent with Gleick’s idea that human beings are information-transmission mechanisms. Floridi believes that if one accepts that position, however, then entities such as Google must evolve beyond their current “information neutral” position — in which all search results are presented with equal status and relevance is a mathematical, not a moral, condition — and make certain distinctions among types of information:

“There is some data of mine that is so personal,” he explains, “that not only should nobody have it, but I should not be allowed to share it. … But at the same time, there is plenty of data about me that doesn’t constitute me.” Where you were yesterday for lunch: That doesn’t constitute you.

When I pressed Floridi for specifics about which personal data might be deserving of protection in right-to-be-forgotten cases, he made a helpful analogy. “I can sell my hair,” he said, “but I can’t sell my liver.” Kidneys can be donated, but they cannot be legally bought and sold: When it comes to bodies, we have arrived at certain broadly accepted norms like this over time. Google, Floridi continued, should be asking questions that will make similar distinctions. “Is the information this person would like to see de-linked, or even perhaps removed, constitutive of that person?” he said. “Or is it something completely irrelevant?”

So here we encounter a fascinating set of problems, new in the history of the human race. The problems Google will force us to solve, in the short rather than the long term, pose some very fundamental questions that go beyond the traditional “privacy versus convenience” or “privacy versus security” debates so common today. What Gleick and Floridi, and no doubt the best minds at Google, understand is that the debate is really about the essence of human nature. If we are, in fact, information, then what is the nature of that information (Floridi’s main question)? If there are various kinds of human information, as Floridi argues, then how should they be classified and understood? Moreover, how should these classes of information be protected, or left unprotected, by law, policy, or practice? Is Floridi’s model of “constitutive” versus “irrelevant” information the right one? Should we adopt a “share first and protect second” approach so as not to hinder “innovation”? Or should we, as the Spanish court seems to be saying, give every individual the right to decide for herself what can and can’t be shared and what will or won’t be forgotten?

“Evolution itself embodies an ongoing exchange of information between organism and environment,” wrote Gleick. Consequently, how our information interacts with Google’s environment, and the Internet as a whole, is as much an evolutionary phenomenon as a technological debate. I do not make this statement metaphorically: consider Google’s current project to build a pill we will swallow to capture information relevant to certain diseases. Suppose that the data captured by this Google pill is shared with researchers, which in turn leads to a cure, thus changing an evolutionary pattern that had previously eliminated a specific human population. Since information exchange between a species and the environment drives human evolution, the current explosion of human information will mean an unprecedented acceleration of human, not just machine, evolution. As Floridi put it, we are all “interconnected informational organisms … sharing with biological agents and engineered artifacts a global environment ultimately made of information.” Perhaps with that idea in mind, Floridi, notes Herritt, pushed the participants at Google’s meeting “to advance the discussion beyond what he sees as its unproductively entrenched positions” and to see the issue at hand as much broader and more profound than generally conceived. Indeed, Floridi believes that to adequately deal with the identity-information issue that Google and other technologies are creating, we need new conceptions of personhood and information, as well as new laws to define and protect them:

Much of Floridi’s work is motivated by the idea that in choosing the rules that govern the flow and control of information, we are constructing a new environment in which future generations will live. It won’t be enough, he has written, to adopt “small, incremental changes in old conceptual frameworks.” The situation demands entirely new ways of thinking about technology, privacy, the law, ethics, and, indeed, the nature of personhood itself.

I think Gleick would agree with that statement, as do I. As has happened often in the past, technology has pushed ahead of law and society. Thinkers such as Gleick and Floridi are racing to catch up, but this is too important a matter to leave to academics and elites. Everyone has a stake in the outcome of this debate. We ourselves are being digitized — like a photo being scanned onto a hard drive, we can now live forever or be deleted in an instant. If the Linn Effect has in fact reappeared, the inhuman Internet we created will now have its turn and change the very nature of humanity. Whether it’s for the better or for the worse remains to be seen.


  1. This is all very interesting, but my sense is that really rigorous information theory (as per Shannon) is not fully reflected in James Gleick’s work. And I think there’s a risk of over-complicating Darwinism (or making it overly mystical) by referencing an “exchange of information” between organism and environment. Arguably the information content of the genome increases as species evolve, but where exactly does information flow back into the environment? How is information stored in the “environment”? And what do we make of the paradox that organisms vastly simpler than humans have genomes of similar or even larger size? The information content of genes is not straightforward to measure.

    Darwinism is one of the world’s all-time most powerful (and dangerous) ideas, as shown by Daniel Dennett and numerous other writers. It’s a very elegant idea: self-contained, powerfully predictive, and fully articulated and proven decades before the advent of information theory. There is very little metaphysics in Darwin.

    Anyway, the information angle is interesting and I will continue to ponder, albeit skeptically.

    On a more pragmatic point I am afraid that the post at the outset misrepresents the European Right to be Forgotten case. The RTBF ruling does not “[give] people the ability to erase all or part of their digital history — forever”. There is no erasure of the raw data from which search results are derived – no erasure at all, let alone forever. Instead, search operators are required (if an RTBF application is successful) to block certain results from appearing in the search stream.

    A nice summary was posted recently by the Hong Kong privacy commissioner.

    In my view, RTBF is simply about adjusting the weighting of search results. Search is the result of secretive big data processes operated by Google and others as part of their advertising businesses. They already apply their own covert weightings to shift search results up or down the stream; search priorities change day to day, and they vary according to the history of the user’s Internet usage. Google’s AI algorithms are forever making automatic judgments about what a searcher is probably interested in. So now Google has to add another consideration to its search calculations; it has to block some results if an RTBF application is successful.

    The “truth” is still out there.
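The commenter’s closing point about genomes, that the information content of genes is not straightforward to measure, can be made concrete with a toy calculation. The sketch below is a hypothetical illustration, not drawn from the post or from Gleick: it computes naive per-symbol Shannon entropy, which treats DNA as a memoryless stream of symbols, scoring random-looking sequences highest and repetitive ones near zero regardless of biological meaning. That is precisely the oversimplification the commenter warns about.

```python
from collections import Counter
from math import log2

def shannon_entropy(seq: str) -> float:
    """Per-symbol Shannon entropy (in bits), treating the sequence as a
    memoryless source of independent symbols (a deliberate oversimplification)."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A maximally "random" stretch scores the full 2 bits per base...
print(shannon_entropy("ACGT" * 25))   # 2.0
# ...while a highly repetitive stretch scores near zero, even though
# repeats can be biologically meaningful.
print(shannon_entropy("AAAAAAAAAT"))  # ~0.47
```

By this crude measure, junk-like repeats look information-poor and random noise looks information-rich, which is why naive entropy alone cannot settle what a genome “contains.”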
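The commenter’s description of RTBF, blocking certain results from the search stream rather than erasing the underlying data, can be sketched as a simple post-filter over a ranked result list. The function, queries, and URLs below are hypothetical illustrations of that idea, not Google’s actual mechanism:

```python
from typing import List, Set, Tuple

def apply_rtbf(query: str,
               ranked_urls: List[str],
               delisted: Set[Tuple[str, str]]) -> List[str]:
    """Drop results delisted for this query; the underlying index and the
    relative ranking of the remaining results are untouched."""
    return [url for url in ranked_urls if (query, url) not in delisted]

# A successful RTBF application suppresses one (query, url) pair.
delisted = {("mario gonzalez", "news.example/auction-notice")}
results = ["news.example/auction-notice", "lawfirm.example/about"]

print(apply_rtbf("mario gonzalez", results, delisted))
# ['lawfirm.example/about']
print(apply_rtbf("repossession auctions", results, delisted))
# ['news.example/auction-notice', 'lawfirm.example/about']
```

Keying the suppression on the (query, url) pair mirrors the commenter’s point: the page remains indexed and reachable through other queries; only searches tied to the applicant are affected, so the “truth” is indeed still out there.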

