Earlier this year I wrote two posts on why Facebook “can’t be fixed” so long as its current business model remains in place. In those posts, I noted that the latest Facebook scandal was part of a continuous pattern that had not changed and would not change:

The routine is familiar by now. Facebook makes an “innovation” that results in harm to its users’ privacy or even to a nation’s institutions. At first the company denies that anything is wrong, until it is shown incontrovertible proof, after which it issues (sometimes faux-emotional) apologies and promises to do better. But it can’t do better, because to do so would mean that the core value creation mechanism of the company would be denied.

As we all know, Facebook is once again in the news for precisely the kind of behavior that has been its problem since the company’s founding. A comprehensive New York Times investigation recently outlined the many ways in which the company sought to discredit its critics to avoid having to admit to its careless and harmful use of user data. The report is strong support for Facebook’s critics, and while the many lapses in user data management have been well chronicled, there is a more subtle problem that should also be considered. That problem is called “PIHO.”

Brett Frischmann (Professor in Law, Business and Economics at Villanova University) recently laid out a good summary of the PIHO problem in a new post for Scientific American. PIHO stands for “personalized-input-to-homogenous-output,” and it refers to techniques, extremely common in the online world, that use personalized stimuli to produce homogenous responses. In other words, a platform is designed to give users a varied set of stimuli, along with a seemingly wide variety of ways to respond to those stimuli. In reality, however, the variety is an illusion, since the platform allows only a narrow set of outputs, designed in such a way as to generate the maximum possible set of insights about each user.

Facebook, the author notes, is a good example of PIHO at work. Facebook presents a widely varied set of inputs to elicit action from a user: photos, news feeds, ads, articles, recommendations, invitations, etc. But, the author notes, the output action set within this loop is actually quite narrow: read, “like,” comment, share or (sometimes) delete. “Engagement,” Facebook’s jargon for this set of outputs, sounds like something desirable but in the end it’s nothing more than a set of controlled digital responses that can easily be tracked, measured, and correlated. As Frischmann writes:

Engagement usually refers to a narrow set of practices that generate data and revenues for the company, directly or via its network of side agreements with advertisers, data brokers, app developers, AI trainers, governments and so on.
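To make the narrowness concrete, here is a minimal Python sketch of the PIHO pattern as Frischmann describes it. The class names, fields and list of allowed actions are illustrative assumptions of mine, not Facebook’s actual schema; the point is only that however varied the stimuli, every response collapses into the same small, flat, trackable record.

from dataclasses import dataclass
from enum import Enum
from datetime import datetime, timezone

class Action(Enum):
    # The narrow "output" vocabulary the user is allowed (illustrative).
    VIEW = "view"
    LIKE = "like"
    COMMENT = "comment"
    SHARE = "share"
    DELETE = "delete"

@dataclass
class Stimulus:
    # Personalized "input": a photo, ad, article, invitation, etc.
    item_id: str
    kind: str    # e.g. "photo", "ad", "article"
    topic: str   # e.g. "politics", "sports", "family"

@dataclass
class EngagementEvent:
    # Whatever the user does, it is logged in the same flat shape,
    # which is what makes it trivial to track, measure and correlate.
    user_id: str
    item_id: str
    kind: str
    topic: str
    action: Action
    timestamp: str

def record(user_id: str, stimulus: Stimulus, action: Action) -> EngagementEvent:
    return EngagementEvent(
        user_id=user_id,
        item_id=stimulus.item_id,
        kind=stimulus.kind,
        topic=stimulus.topic,
        action=action,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Two very different stimuli, yet the logged outputs are identical in form.
events = [
    record("u123", Stimulus("p1", "photo", "family"), Action.LIKE),
    record("u123", Stimulus("a9", "ad", "politics"), Action.COMMENT),
]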

Moreover, the specificity of the outputs makes experimentation on platforms such as Facebook easy, outsourceable and lucrative:

Not surprisingly, many digital tech companies collect as much data as possible, either directly from consumers or indirectly from data brokers or partners with whom they have side-agreements. They run experiments on consumers. They use various data processing techniques to identify patterns, test hypotheses, and learn about how people behave, whom they interact with, what they say and do, and what works best in shaping their behavior to fit the companies’ interests.
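A toy illustration of why that matters: once every response is reduced to the same event type, comparing two versions of a stimulus, which is the basic move in any behavioral experiment, becomes a one-line aggregation. The field names and the simple “engagement rate” metric below are my own assumptions, sketched only to show the mechanics.

def engagement_rate(events, variant):
    # Share of impressions in a variant that produced any action beyond a view.
    shown = [e for e in events if e["variant"] == variant]
    acted = [e for e in shown if e["action"] != "view"]
    return len(acted) / len(shown) if shown else 0.0

# Each record has the same flat shape regardless of what the user actually saw.
log = [
    {"user": "u1", "variant": "A", "action": "view"},
    {"user": "u2", "variant": "A", "action": "like"},
    {"user": "u3", "variant": "B", "action": "share"},
    {"user": "u4", "variant": "B", "action": "comment"},
]

print("A:", engagement_rate(log, "A"))  # 0.5
print("B:", engagement_rate(log, "B"))  # 1.0 -- "variant B shapes behavior better"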

One of the brilliant things about Facebook is how it creates the illusion of “engagement” when, in reality, all a user is really doing most of the time is exposing a preference, or bias, with respect to a given stimulus. Indeed, it was precisely this exposure of preference/bias that was so critical to Cambridge Analytica, and it remains the most valuable product Facebook produces for its real customers: advertisers, sellers and, increasingly, governments.

Of course, Facebook is not alone in its use of PIHO techniques. We see similar models at work on most social platforms and within many enterprise technology platforms. Indeed, it is a very good exercise, the next time you are on any social site, to look at the platform’s relationship of inputs to outputs and to take a moment to consider what that relationship is teaching an algorithm about your psychology and life. My experience is that such an analysis quickly illuminates both (a) PIHO concepts at work and (b) the myriad insight models that are probably being run behind the facade presented to users, all in the name of engagement, collaboration, productivity or similar terms.
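As a thought experiment, here is a toy version of the kind of “insight model” such a loop can feed, again with weights and field names that are purely my own assumptions: tally the narrow outputs per topic and the result is already a preference/bias profile, without the platform needing to understand a single word the user wrote.

from collections import defaultdict

# Hypothetical weights: stronger actions count as stronger signals of preference.
WEIGHTS = {"view": 0.1, "like": 1.0, "comment": 1.5, "share": 2.0}

def preference_profile(events):
    # Map topic -> affinity score from a stream of uniform engagement events.
    profile = defaultdict(float)
    for e in events:
        profile[e["topic"]] += WEIGHTS.get(e["action"], 0.0)
    return dict(profile)

log = [
    {"topic": "politics", "action": "share"},
    {"topic": "politics", "action": "comment"},
    {"topic": "sports", "action": "view"},
]
print(preference_profile(log))  # {'politics': 3.5, 'sports': 0.1}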

The author ends his PIHO analysis by noting that a word as common as engagement need not be reduced to a few swipes and clicks:

Engagement could mean something more, something great for humanity, and digital networked technologies could pursue such engagement, but that’s not really what we get in our modern digital networked world. Digital tech could, in theory, personalize goods and services in a manner geared toward your interests. Instead, they mostly pursue their own interests and cater to those on the other sides of the market—that is, those who pay the bills—advertisers, partners collecting data and training AI, governments, etc.

He’s right. The social internet started with the idea that people would invent all these new ways to interact through technology, and that this engagement would be dynamic, creative and liberating. Sad to say, quite the opposite turned out to be the case: the most extensive social platforms constrain humans and drive them into narrow pathways of behavior that benefit the platforms’ owners much more than the users. It’s for this reason that I don’t understand how anyone who really knows how the platform works can stay on it. As I wrote earlier this year:

Eventually someone may create a social network that does not share Facebook’s fundamental design and business model flaws. Until that day comes, I will stay off Facebook and so will my kids. No technical or policy changes will fix this company or platform. There will be more breaches and more problems. Anyone who stays on Facebook must ask whether any gain it provides is worth turning over all your most private data to the world. For me, the answer is a resounding no.



Posted by Carlos Alvarenga

Carlos A. Alvarenga is the Executive Director of World 50 Labs and Adjunct Professor in the Logistics, Business and Public Policy Department at the University of Maryland’s Robert H. Smith School of Business.
