The Algorithm Manager Is Here. Artificial Empathy Is Next.


Last year I wrote a post about a future in which work is dominated more and more by non-human agents. To accompany the article I used a photo from a TV show about robot workers, which is pretty much what most people picture when they contemplate machines replacing workers. The reality, however, is that today the greatest advances machines are making over human workers are not at the “blue collar” level but in middle management. Startups large and small are determined to kill one of the staples of corporate life, and their present and future impact is worth discussing.

Certainly the most famous example of AI middle management is Uber, a company that has been built from the ground up without the traditional layer of human middle management. A very good 2016 FT piece by Sarah O’Connor gave readers a glimpse into this world of work:

Documents submitted as part of an employment tribunal case brought by the GMB union against Uber in London include an email sent to driver James Farrar in May congratulating him on an average rating above 4.4. “We will continue to monitor your rating every 50 trips and will email you if we see your rating for your past 50 trips falls below 4.6.” Uber will “deactivate” drivers whose ratings drop too low, though it says this is “extremely rare”. The court case documents include one instance, in which Uber sent an email on December 23 2013 to a driver called Ashley Da Gama: “Hi there, we would like to wish you a Merry Xmas and a Happy New Year. We are currently planning for 2014 and would like you to be part of it. However, we do need to see an improvement on your current track record to ensure you are.” Two weeks later, Uber emailed again to say his ratings had not improved enough. His account had been “deactivated . . . as of today”.
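
To make the mechanics concrete, here is a minimal Python sketch of the kind of threshold rule those emails describe: a rolling 50-trip average rating checked against a 4.6 cutoff. The window size and cutoff come from the tribunal documents; the class and function names are hypothetical stand-ins, not Uber’s actual code.

```python
from collections import deque

# Illustrative sketch of the rule described in the tribunal documents:
# monitor a rolling 50-trip average rating and flag drivers whose average
# falls below 4.6. All names here are hypothetical, not Uber's code.
WINDOW = 50
CUTOFF = 4.6

class RatingMonitor:
    def __init__(self):
        self.ratings = deque(maxlen=WINDOW)  # keeps only the last 50 trips

    def record_trip(self, rating: float) -> str:
        self.ratings.append(rating)
        if len(self.ratings) < WINDOW:
            return "ok"  # not enough trips yet to evaluate
        average = sum(self.ratings) / len(self.ratings)
        return "warn" if average < CUTOFF else "ok"  # "warn" triggers the email

monitor = RatingMonitor()
for r in [4.8] * 30 + [4.2] * 20:  # a run of weak ratings drags the average down
    status = monitor.record_trip(r)
print(status)  # "warn": the 50-trip average (4.56) is below 4.6
```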

At Uber, it’s not a long-serving manager or director who discusses your performance or even fires you should the need arise. Those tasks have been assigned to an AI platform that monitors every relevant aspect of drivers’ performance and reports the results back to them on a continuous basis. The Uber AI management model is being copied widely by other startups that want to emulate its success. For example, UK company Deliveroo, also mentioned in the same FT piece, has a similar platform in place for its drivers:

The app expects him [the driver] to respond to new orders within 30 seconds. The screen shows a map and address for the local Carluccio’s, an Italian restaurant chain. A swipe bar says “Accept delivery”. That is the only option. The algorithm will not tell him the delivery address until he has picked up the food from Carluccio’s. Deliveroo couriers are assigned fairly small geographic areas but Kyaw says sometimes the delivery address is way outside his allocated zone. You can only decline an order by phoning the driver support line. “They say, ‘No, you have to do it, you already collected the food.’ If you want to return the food to the restaurant they mark it as a driver refusal — that’s bad.”

Deliveroo’s algorithm monitors couriers closely and sends them personalised monthly “service level assessments” on their average “time to accept orders”, “travel time to restaurant”, “travel time to customer”, “time at customer”, “late orders” and “unassigned orders”. The algorithm compares each courier’s performance to its own estimate of how fast they should have been. An example from one of Kyaw’s assessments: “Your average time to customer was less than our estimate, which means you are meeting this service-level criterion. Your average difference was -3.1 minutes.” Deliveroo confirmed it performs the assessments but said its “time-related requirements” took into account reasonable delays and riders were “never against the clock for an order”.
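
The assessment itself is easy to picture as arithmetic. Below is a hedged sketch, assuming the platform simply averages the gap between actual and estimated delivery times; the metric name comes from the FT piece, and the sample data is invented to reproduce the -3.1-minute average quoted above. The pass/fail rule is my assumption.

```python
# Illustrative sketch of a monthly "service level assessment" like the one
# quoted above: compare a courier's actual delivery times against the
# platform's own estimates. Data and the pass/fail rule are assumptions.
deliveries = [
    # (actual minutes to customer, platform's estimated minutes)
    (12.0, 15.5),
    (10.0, 13.0),
    (14.0, 16.8),
]

differences = [actual - estimate for actual, estimate in deliveries]
avg_difference = sum(differences) / len(differences)

if avg_difference <= 0:
    verdict = "you are meeting this service-level criterion"
else:
    verdict = "you are below this service-level criterion"

print(f"Your average difference was {avg_difference:+.1f} minutes: {verdict}")
```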

Like so many new things in Silicon Valley, the tech driving this evolution in management is new, but the idea is quite old. O’Connor correctly notes that you can trace the psychological foundations of Uber’s approach to the work of an American, Frederick Taylor, in the early 20th century:

“Algorithmic management” might sound like the future but it has uncanny echoes from the past. A hundred years ago, a new theory called “scientific management” swept through the factories of America. It was the brainchild of Frederick W Taylor, the son of a well-to-do Philadelphia family who dropped his preparations for Harvard to become an apprentice in a hydraulics factory. He saw a haphazard workplace where men worked as slowly as they could get away with while their bosses paid them as little as possible. Taylor wanted to replace this “rule of thumb” approach with “the establishment of many rules, laws and formulae which replace the judgment of the individual workman”. To that end, he sent managers with stopwatches and notebooks on to the shop floor. They observed, timed and recorded every stage of every job, and determined the most efficient way that each one should be done. “Perhaps the most prominent single element in modern scientific management is the task idea,” Taylor wrote in his 1911 book The Principles of Scientific Management. “This task specifies not only what is to be done but how it is to be done and the exact time allowed for doing it.”

Taylor tested many of his ideas on the 600 or so labourers who worked in the yard of the Bethlehem Steel Company. After a series of experiments, he decided that a “first class” shoveller would be most productive if he lifted 21 pounds of weight with every shovel load. He ordered different-sized shovels for each type of material in the yard: a small shovel to hold 21 pounds of ore; a large one to hold 21 pounds of ash. The men went to a pigeonhole each morning where a piece of paper would tell them which tools to select and where to start work. Another piece of paper would tell them how well they had performed the previous day. “Many of these men were foreigners and unable to read and write but they all knew at a glance the essence of his report, because yellow paper showed the man that he had failed to do his full task the day before, and informed him that he had not earned as much as $1.85 a day, and that none but high-priced men would be allowed to stay permanently with this gang,” Taylor explained.

Taylor’s ideas took hold and immensely influenced companies such as General Motors and IBM, and they still permeate the cultures of successful companies such as Toyota and UPS. What is different today, of course, is that Taylorism is no longer confined to the shop floor but has evolved upwards into the management layers that in the past were beyond its reach. One researcher notes in the same piece the way in which Taylor’s ideas are impacting non-manufacturing sectors today:

“Taylor’s ideas have been applied very widely in manufacturing . . . but the services industry has always been a black box,” says Serguei Netessine, professor of global technology at Insead business school. “It has been very difficult to figure out: is it employees who are performing well? Or is the employee getting lucky? Now with data we can do this.” So Frederick Taylor is heading to the high street? “Yes — Taylor on steroids.”

One company trying to speed this change along is Percolata, a Silicon Valley startup that makes an AI management platform for retailers. It has dozens of companies as clients, and its singular focus is squeezing every drop of productivity from retail sales workers by continuously adjusting and readjusting workers’ schedules and store-section assignments, among other things. There is no way, Percolata argues, that any human manager could do the job better than its platform. Indeed, as its founder has noted: “What’s ironic is we’re not automating the sales associates’ jobs per se, but we’re automating the manager’s job, and [our algorithm] can actually do it better than them.”
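
It is hard to know Percolata’s actual method from the outside, but the general shape of the problem is simple to sketch: predict traffic per store section, then put the strongest sellers where they matter most. The greedy rule, names, and numbers below are purely illustrative assumptions, not Percolata’s algorithm.

```python
# Purely illustrative sketch of the kind of optimization Percolata's
# platform is described as doing: match the strongest sellers to the
# store sections with the most predicted traffic.
predicted_traffic = {"electronics": 120, "apparel": 80, "checkout": 200}
sales_per_hour = {"ana": 9.5, "bo": 7.2, "chris": 8.1}

sections = sorted(predicted_traffic, key=predicted_traffic.get, reverse=True)
workers = sorted(sales_per_hour, key=sales_per_hour.get, reverse=True)

assignments = dict(zip(sections, workers))  # busiest section gets the top seller
print(assignments)  # {'checkout': 'ana', 'electronics': 'chris', 'apparel': 'bo'}
```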

Taylor’s impact on American labor was so extreme that it led to protests and accelerated the penetration of labor unions into industrial workplaces. Today’s AI-managed workers face even greater challenges, since they lack even the most basic human management layer to which they can complain or request adjustments to their schedules or working environment. After all, the Uber app does not care if you are having a bad day or if your kids kept you up all night sick. It demands performance, and failure to deliver means elimination from the platform. Many people in Silicon Valley know that staying the current course poses serious risks to these business models. Society might rebel against being managed by “heartless” machines. Regulators might decide that since robots don’t vote, laws need to be passed to limit the reach of the “AI boss.” These are serious business risks, so an interesting line of research is picking up momentum whose goal is to embed a simulation of human empathy within these systems. At first glance, empathy might seem a uniquely human trait, but some researchers would argue quite the contrary.

I recently came across a good overview of what might be called artificial empathy in an article by Megha Srivastava in Stanford’s Intersect journal. The author argues that there is no a priori reason why computers can’t have empathy and then recaps the work of leading researchers in the field, including that of Professor Mark Davis (University of Texas, Austin), who believes that empathy has two parts: “the observable, visceral response and the cognitive understanding of another perspective.” Davis’ artificial empathy model for machines has three sequential steps:

  1. Detect a person’s suffering from verbal description, voice tone, or facial expression.
  2. Calculate similarities between a person’s suffering and one’s own memories.
  3. Create an “appropriate” response, both vocally and through observable expressions.
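
These three steps map naturally onto a software pipeline. Below is a minimal skeleton, assuming a text utterance as input; the function names and signatures are illustrative stand-ins of my own, and each step is fleshed out by the passages that follow.

```python
# A minimal skeleton of the three sequential steps, composed into one
# pipeline. These are illustrative stubs, not Davis's implementation.

def detect_suffering(utterance: str) -> bool:
    """Step 1: infer distress from words, voice tone, or facial expression."""
    raise NotImplementedError  # see the classifier sketch below

def recall_similar_memory(utterance: str) -> str | None:
    """Step 2: match the situation against the machine's seeded 'memories'."""
    raise NotImplementedError  # see the memory-trigger sketch below

def compose_response(memory: str | None) -> str:
    """Step 3: wrap the recalled memory in a socially expected reply."""
    raise NotImplementedError  # see the canned-response sketch below

def empathize(utterance: str) -> str | None:
    if detect_suffering(utterance):
        return compose_response(recall_similar_memory(utterance))
    return None
```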

There is already a good bit of work, notes the author, focused on how to complete the first step:

We can use a similar machine learning approach to visually determine negative emotions. By providing a computer with several examples of faces that are considered “sad” and faces that are not considered “sad,” we can train the computer to use mathematic tools to discover what features of a face are more present in faces tagged as “sad.” As one would expect, this requires a large amount of data that parallels the amount of visual data humans are exposed to throughout a lifetime. Various databases, such as Stanford Professor Fei-Fei Li’s ImageNet and Carnegie Mellon Professor Takeo Kanade’s Expression Database provide a large volume of facial images for this very purpose. Surprisingly, computational visual emotion recognition has done incredibly well in recent years. In fact, a Current Biology report showed that computers performed better than humans in distinguishing faked pain and genuine pain in facial expressions (Bartlett, 2014). Therefore, through computational techniques that mimic the way humans learn information, we can train a computer to detect sadness and pain in humans.
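
As a toy illustration of that supervised approach, the sketch below trains a “sad”/“not sad” classifier. Real systems train deep networks on large face datasets like the ones Srivastava mentions; here random vectors stand in for labeled face images purely so the example runs end to end.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy version of the supervised approach described above: show the model
# examples labelled "sad" / "not sad" and let it learn which features
# separate them. Random vectors stand in for face images so this runs.
rng = np.random.default_rng(0)
n_images, n_pixels = 200, 64

faces = rng.normal(size=(n_images, n_pixels))   # stand-in "images"
labels = rng.integers(0, 2, size=n_images)      # 1 = "sad", 0 = "not sad"
faces[labels == 1, :8] += 1.5                   # give "sad" faces a learnable signal

X_train, X_test, y_train, y_test = train_test_split(faces, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```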

As for the second step, that is probably the easiest of the three:

So, if empathy simply boils down to observing a set of cues that triggers a memory via neurons, then we can imagine providing a computer with a set of possible cues that can “trigger” a memory. For example, if a computer detects an insulting word, it can output: “I remember also being called ______ and feeling really sad— don’t worry, it will get better,” as if the computer actually held a memory that was triggered when sensing a similar event…Already, engineers have made significant advances in artificial memory: in their headline-making paper, “Creating a False Memory in the Hippocampus,” MIT researchers were able to implant false memories in mice (Ramirez, 2013). Indeed, scientists have been able to design memories that living beings think are real. One can imagine, therefore, the exhaustive range of possibilities with implanting memories in an AI.
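
A hedged sketch of that trigger mechanism: a detected cue looks up a pre-seeded “memory” template, which is then voiced as if it were the machine’s own experience. The cues and templates below are invented for illustration.

```python
# Sketch of the cue-to-memory mechanism described above: a detected cue
# "triggers" a seeded memory template. All data here is illustrative.
seeded_memories = {
    "lazy": "I remember also being called lazy and feeling really sad",
    "stupid": "I remember also being called stupid and feeling really sad",
}

def trigger_memory(utterance: str) -> str | None:
    for cue, memory in seeded_memories.items():
        if cue in utterance.lower():
            return f"{memory} -- don't worry, it will get better."
    return None  # no cue matched, nothing "remembered"

print(trigger_memory("My boss called me lazy in front of everyone."))
```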

Srivastava thinks technology platforms such as Siri and Alexa are paving the way for the third step:

The most fundamental way AI responds to humans is through using a set of canned responses. Apple’s Siri serves as an example: if we ask Siri to “tell me a joke,” she responds with one of at most a dozen responses designed by Apple. While such a hard-coded way of responding appears incredibly rigid, how truly different is it from the way humans respond? As a society, we often suggest rules and algorithm-like structure to empathetic responses. If someone announces a relative’s death, the typical set of responses contains versions of “I’m sorry.” A cheerful or joking response would be considered incongruous and rude. Thus, we already have rules for what constitutes an appropriate, empathetic response. These guidelines, such as instructing the computer to always begin with a set of responses such as “I’m sorry” and “I hope you feel better,” and providing a list of possible filler words such as “like” and “um” to make the responses appear more natural, can easily be reduced to rules in an algorithm.
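
Reduced to code, such guidelines are little more than a lookup table plus a dash of randomness. The situation labels and wording in the sketch below are hypothetical; nothing here is Apple’s actual implementation.

```python
import random

# Sketch of rule-based response selection as described above: pick a
# socially sanctioned opener for the detected situation and pad it with
# a filler word so it sounds less rigid. Rules and wording are assumptions.
openers = {
    "bereavement": ["I'm so sorry for your loss.", "I'm sorry."],
    "illness": ["I hope you feel better.", "That sounds rough."],
}
fillers = ["um,", "well,"]

def canned_response(situation: str) -> str:
    opener = random.choice(openers.get(situation, ["I'm sorry to hear that."]))
    return f"{random.choice(fillers)} {opener}"

print(canned_response("bereavement"))  # e.g. "well, I'm so sorry for your loss."
```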

Putting all three steps together is not that hard to imagine:

In order to fully understand the model proposed above, consider an example: A woman tells an AI, “Someone robbed my house and stole all the jewelry.” First, the AI could detect that her facial expression is statistically most likely “upset” and that the words “robbed” and “stole” typically describe a worrying situation warranting an empathetic response. Then, perhaps, the word “robbed” could trigger a “false” memory in the AI of a time it was “robbed” of its money from its apartment 5 years ago. The AI could then respond, following societal expectations, “I’m so sorry to hear that. Someone robbed my money from my apartment 5 years ago, and I remember how stressful it was. Please let me know if you want help.” Thus, the computer would have exhibited artificial empathy.
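
The paragraph above is effectively pseudocode already; here it is run end to end with the same crude stand-ins as the earlier sketches. Every rule and string is an illustrative assumption, not a description of any deployed system.

```python
# Srivastava's robbery scenario, end to end, with toy stand-ins.

def detect(utterance: str) -> bool:
    # Step 1: words like "robbed"/"stole" (in a real system, combined with
    # an "upset" facial-expression classification) warrant empathy.
    return any(word in utterance.lower() for word in ("robbed", "stole"))

def recall(utterance: str) -> str | None:
    # Step 2: "robbed" triggers a pre-seeded "false" memory.
    if "robbed" in utterance.lower():
        return ("Someone robbed my money from my apartment 5 years ago, "
                "and I remember how stressful it was.")
    return None

def respond(utterance: str) -> str:
    # Step 3: wrap the memory in the socially expected opener and offer.
    if not detect(utterance):
        return "Okay."
    parts = ["I'm so sorry to hear that."]
    memory = recall(utterance)
    if memory:
        parts.append(memory)
    parts.append("Please let me know if you want help.")
    return " ".join(parts)

print(respond("Someone robbed my house and stole all the jewelry."))
```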

Returning to our original topic, I believe that we will soon start to see the merger of AI-driven middle-management platforms and artificial empathy. Not very long from now, machines will present workers with the stark reality of their performance and place within their organizations, but they will do so through a window of AI-created empathy that is equal to, or as some might say tongue-in-cheek, greater than that which human managers provide today. Indeed, in a recent MIT Tech Review post, Rana el Kaliouby, CEO and co-founder of emotion-AI startup Affectiva, notes the speed at which these technologies are evolving and merging with other platforms:

…the field is progressing so fast that I expect the technologies that surround us to become emotion-aware in the next five years. They will read and respond to human cognitive and emotional states, just the way humans do. Emotion AI will be ingrained in the technologies we use every day, running in the background, making our tech interactions more personalized, relevant, authentic, and interactive. It’s hard to remember now what it was like before we had touch interfaces and speech recognition. Eventually we’ll feel the same way about our emotion-aware devices.

Unfortunately, for many people artificial empathy will actually be an improvement over the legions of indifferent human managers at work across the world today. But for many others, artificial empathy, even if greater in scope and volume than the human version, will be yet another step toward the dehumanization of work.

I was at one of our member summits recently with a group of CIOs, and at dinner we started talking about the issue of technology at work. Even these senior tech executives expressed serious doubts about the direction we are taking as a society in this sphere. After hearing their remarks, I noted that “sometimes I think Silicon Valley is obsessed with turning robots into people and people into robots.” Perhaps somewhere deep inside us we think we can make better bosses if we start not with people but with machines. And maybe this will work, when all is said and done. But part of me worries that we will extend to our creations the same problems that bedevil human managers, even as we program in new ones that did not exist in the human versions.

Srivastava quotes Ray Kurzweil at the end of her survey: “Our technology, our machines, is part of our humanity. We created them to extend ourselves, and that is what is unique about human beings.” Let’s hope that as machines take over the management of people, a transition that has already started, we will extend the good and delete the bad from our human selves into the platforms we are creating to replace us.

Read this article on LinkedIn.
