
Why Facebook can’t be fixed: Part 2

I recently wrote a post claiming that it’s not possible to fix Facebook’s serious, and dangerous, flaws by merely tweaking or adjusting the current business. I argued that there are inherent flaws in the company’s platform, business model and leadership that make it impossible to fix this platform without starting again from scratch. I hardly needed supporting evidence for my position beyond the company’s own history but, remarkably, on Friday a long (and excellent) Intercept article provided even more reasons to rethink the basic role Facebook plays in our society.

The detailed piece, written by Sam Biddle, describes a “confidential” Facebook document laying out how the company is developing the ability to predict the future behavior of its users for subsequent packaging and sale to advertisers. In the document, Biddle notes:

Facebook explains how it can comb through its entire user base of over 2 billion individuals and produce millions of people who are “at risk” of jumping ship from one brand to a competitor. These individuals could then be targeted aggressively with advertising that could pre-empt and change their decision entirely — something Facebook calls “improved marketing efficiency.” This isn’t Facebook showing you Chevy ads because you’ve been reading about Ford all week — old hat in the online marketing world — rather Facebook using facts of your life to predict that in the near future, you’re going to get sick of your car. Facebook’s name for this service: “loyalty prediction.”

Loyalty prediction sounds like something from an Orwell novel, because the premise that a monolithic platform, or government, can know a priori what you will want/do/buy has no place in a democratic society. It’s one thing for an advertiser to think you like Beer A over Beer B. It’s quite another for it to know mathematically what you are planning to buy. However, achieving just such knowledge about its users is exactly why Cambridge Analytica was allowed to roam the digital halls of Facebook in the first place. Moreover, as Biddle points out, such access was not, and is not, in and of itself a problem to Facebook’s leadership:

Facebook’s keen interest in helping clients extract value from user data perhaps helps explain why the company did not condemn what Cambridge Analytica did with the data it extracted from the social network — instead, Facebook’s outrage has focused on Cambridge Analytica‘s deception. With much credit due to Facebook’s communications team, the debate has been over Cambridge Analytica’s “improper access” to Facebook data, rather than why Cambridge Analytica wanted Facebook’s data to begin with. But once you take away the question of access and origin, Cambridge Analytica begins to resemble Facebook’s smaller, less ambitious sibling.

Exactly. There is, in fact, almost no real difference between what CA was trying to do and what Facebook does every day.

At this point you may be wondering what’s so bad about Facebook knowing, rather than merely guessing, your future actions. The answer is that any company invested in its predictive model is going to be inclined to do all it can to make those predictions come true. Biddle lays this point out clearly:

Pasquale, the law professor, told The Intercept that Facebook’s behavioral prediction work is “eerie” and worried how the company could turn algorithmic predictions into “self-fulfilling prophecies,” since “once they’ve made this prediction, they have a financial interest in making it true.” That is, once Facebook tells an advertising partner you’re going to do some thing or other next month, the onus is on Facebook to either make that event come to pass, or show that they were able to help effectively prevent it (how Facebook can verify to a marketer that it was indeed able to change the future is unclear).

So in Facebook we have not just a prediction platform but also a manipulation/coercion platform the likes of which the world has never seen. This platform is held not by a company, like Apple, with a long history of openness about its collection and use of customer data. It is held by a company with a history of secrecy and strategic miscalculation that starts with its founder and involves almost every aspect of its business model. Again, as the author notes:

Problematic as well is Facebook’s reluctance to fully disclose how it monetizes AI. Albright described this reluctance as a symptom of “the inherent conflict” between Facebook and “accountability,” as “they just can’t release most of the details on these things to the public because that is literally their business model.”

So what is to be done? Well, for one, every single person on Facebook should take a long look at the value the platform provides against the cost of the privacy surrendered. For me, as a libertarian, the choice is clear: no digital platform or connection is worth the privacy cost that Facebook demands. I know for others the answer may be different today, but Biddle wonders whether that conclusion would change if people knew everything that Facebook is doing with their information (which they certainly don’t know today):

Maybe enough Facebook users just take it as a given that they’ve made a pact with the Big Data Devil and expect their personal lives to be put through a machine learning advertisement wringer. Hwang noted that “we can’t forget the history of all this, which is that advertising as an industry, going back decades, has been about the question of behavior prediction … of individuals and groups. … This is in some ways the point of advertising.” But we also can’t expect users of Facebook or any other technology to be able to decide for themselves what’s worth protesting and what’s worth fearing when they are so deliberately kept in a state of near-total ignorance. Chipotle is forced, by law, to disclose exactly what it’s serving you and in what amounts. Facebook is under no mandate to explain what exactly it does with your data beyond privacy policy vagaries that state only what Facebook reserves the right to do, maybe. We know Facebook has engaged in the same kind of amoral political boosting as Cambridge Analytica because they used to brag about it on their website. Now the boasts are gone, without explanation, and we’re in the dark once more.

I don’t agree that informed users are unable to calculate the pros and cons of Facebook participation today. For me, even if nothing else comes out, the platform is not worth the risk. But I can understand why a casual user, who does not think about civil liberties regularly, might require more detailed evidence before hitting the delete button. These users might also believe that tweaks to the company’s model are all that’s required to make the problem go away. Indeed, I have read a lot of suggestions recently about how Facebook can change: from “why me” buttons to moving to a pay-for-privacy model. While these are interesting ideas, they won’t change the core mission of Facebook’s founder and the company itself, which is to know everything about everyone and then to make money from that knowledge.

For a free society, any company that sets this goal as its foundational purpose is a worry. Imagine if, in the future, Facebook achieves its loyalty prediction goal and its platform is taken over by the government or by an even more insidious and organized foreign power. We may not be able to undo that evolution quite so easily, and the consequences could change Western democracies forever. This is a chance we should not take. It’s our civic obligation to seriously rethink this company’s role in our society before we lose any chance to do so.
