Over the course of the past month or so, I have been interviewing World 50 members and expert advisors on the changing nature of the data privacy issue. After many hours of discussion with people who spend their professional lives thinking and working in this space, I have come to a few critical conclusions about where we are in the evolution of this issue from an enterprise perspective.
Conclusion 1: In the West, we have entered the second age of digital privacy
One could argue that the concept of a data hack/breach entered the general lexicon in 2005, which is the year that (a) the Privacy Rights Clearinghouse began its chronology of data breaches and (b) the first time a data breach compromised more than 1 million records (DSW Shoe Warehouse; March 2005; 1.4 million credit card numbers and names on those accounts). From then until 2016, we lived in what I think was the first age of data privacy for most Western consumers. This age was characterized by a kind of privacy equilibrium in which consumers willingly or unknowingly gave over personal data to companies in exchange for ever-improving digital services such as search and e-mail. During this age, of course, great fortunes were made as companies like Google and Facebook executed a wonderful arbitrage, giving digital services of varying levels of value in return for collecting anything and everything they could about their users.
This first age came to an end around 2016. The Snowden revelations, followed by the Russian election hacks and the Cambridge Analytica scandal at Facebook, destroyed the naive equilibrium that had existed in the consumer mindset and called into question fundamental aspects of the Internet and the companies that have come to dominate it. While it may seem that not much has changed on the surface, the average consumer is much more aware of the risks and trade-offs that must be made in order to live online. They may not have changed their fundamental practices, but they are much more risk-aware and ready to act than they were in 2015.
Conclusion 2: The basic contractual model that governs data exchange is useless to people and dangerous to companies
Every consumer knows that EULAs are pointless and have been for some time. As a Motherboard piece noted in 2017:
Indeed, a 2014 NYU study found that roughly a tenth of one percent of consumers even look at licensing agreements at all, and most read them for only a few seconds. And even if you do read the EULA, you cannot feasibly negotiate better terms; EULA are take-it-or-leave-it affairs. As economists and legal experts have demonstrated, an “informed minority” of consumers who attempt to influence the market by boycotting companies for unfair contract terms will never be sufficiently powerful to get companies to change their business practices. This is because the cost of getting informed is designed to be high enough so that there will never be a “tipping point” for corporations to create fairer terms.
It may seem, then, that EULAs are here to stay. However, after speaking to several executives and experts I am convinced that they are just as much of a risk to companies as they are to consumers. As one CISO cautioned me, “one of these days data will get someone killed and a company will be found responsible for that death. No EULA will protect a corporation from the social outcry, whatever the courts may say.” There is such a profound asymmetry (and therefore unfairness) between the people who write these agreements and the people who accept them, that it would not take much for a government to conclude that EULAs do not constitute “informed consent” and thus cannot be held to be the basis for a fair contract. The early days of this change may already be here, notes the same article:
The question that society must ask, then, is whether there are laws or regulations that can be put into place to prevent companies from putting whatever they want into EULAs. It’s hard to envision a scenario in which EULAs go away entirely, but in the short term, eight states have introduced right-to-repair legislation that specifically invalidates contracts that infringe on the property rights of electronics owners. The legislation is just a start, but at least it’s something.
Conclusion 3: Technology will not save us from the privacy mess technology created
Before starting this project, I was mildly optimistic that the same smart people who created this mess would somehow find a solution to our data privacy issues. I have since been convinced that this is not going to happen. Everyone I spoke with stressed their belief that tech companies will not fix this problem, because they have too much riding on extending the current asymmetrical model for as long as possible. To a person, they think legislation such as GDPR is the future, and that the sooner regulators begin to clamp down on the abuses of personal data, the better off we are all going to be.
Conclusion 4: The clock is ticking on reversing the dangerous anti-privacy path we have taken globally
No phenomenon fascinates and repels me in equal measure quite like China’s emerging “Social Credit System.” Apparently, I am not alone in that fascination. Many of the people I spoke with are also deeply intrigued by China’s grand and frightening experiment; moreover, while most people think that such a model could not come to the West, the experts I spoke with are not so sure. It may take a little longer and arrive via a more circuitous route, but a look around us suggests the foundations for a Western version of China’s system are already in place. What’s more, with China paving the way on the technology front, ten years from now such an all-seeing/all-knowing platform might be one IT contract away for any government wanting to follow the Chinese model. Consequently, the West should be thinking about building structural and social defenses against such an outcome now, rather than stumbling blindly toward it.
Conclusion 5: Corporations can be on the right side of this debate
Several of the people I spoke with expressed their admiration for a few companies (Apple, Google, etc.) that have taken serious stances on privacy and have backed their words with real and significant financial commitments. All companies should follow suit. This means rethinking every aspect of their relationship with their consumers and the data they collect from them. It also means adopting some form of the “seven pillars” model suggested to me by one World 50 advisor. That model stipulates that every company must follow seven core principles:
- Explicit consent to collect personal data
- Explicit consent to store personal data
- Explicit consent to use and reuse data
- Explicit definition of purpose for any gathering, storage or use/re-use of data
- Explicit option to edit personal data on demand
- Explicit option to delete personal data on demand
- Explicit value exchange in return for the first six points above
The overwhelming majority of large companies today do not comply with these principles. That needs to change.
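As a thought experiment, the seven pillars above could be modeled as a minimal consent record that a company's systems would have to check before touching personal data. This is only an illustrative sketch; the class and field names are hypothetical, not part of the advisor's model.

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    # Pillars 1-3: explicit consent to collect, store, and use/reuse data
    consent_collect: bool = False
    consent_store: bool = False
    consent_use: bool = False
    # Pillar 4: an explicit, stated purpose for any gathering, storage, or use
    purpose: str = ""
    # Pillar 7: what the consumer receives in exchange for the above
    value_exchange: str = ""
    data: dict = field(default_factory=dict)

    def may_process(self) -> bool:
        """Processing is allowed only if every consent is granted AND a purpose is stated."""
        return (self.consent_collect and self.consent_store
                and self.consent_use and bool(self.purpose))

    def edit(self, key: str, value) -> None:
        """Pillar 5: the consumer can edit their data on demand."""
        self.data[key] = value

    def delete(self) -> None:
        """Pillar 6: the consumer can delete their data (and withdraw consent) on demand."""
        self.data.clear()
        self.consent_collect = self.consent_store = self.consent_use = False


# Example: a record with full consent permits processing; deletion revokes it.
record = ConsentRecord(consent_collect=True, consent_store=True, consent_use=True,
                       purpose="order fulfillment", value_exchange="free shipping updates")
record.edit("email", "user@example.com")
print(record.may_process())  # True
record.delete()
print(record.may_process())  # False
```

The point of the sketch is that consent, purpose, and value exchange become explicit fields a system must satisfy, rather than clauses buried in a EULA.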
Was it all worth it?
Reflecting on the past few weeks of dialogue on this issue, and on my personal libertarian philosophy, I am struck by how easily we got into this mess. Was the search bar really worth handing over every aspect of our personal curiosity? Was free email worth handing over all of our private interactions? Was the ability to keep tabs on people you barely know worth opening your brain to strangers? I think not, which is why I worry for our fundamental right to privacy. Sometimes I think it’s too late to fix this issue, but let’s hope that I’m wrong. For it would be a great and historic tragedy if future generations look back on us and (rightly) curse us for having traded away forever a fundamental human right with just a few clicks of a mouse.
Read this post on LinkedIn.