“The greatest griefs are those we cause ourselves.” — Sophocles
One of the more fascinating Brexit phenomena I have been following is the petition to re-hold the stay/leave vote, which is now approaching four million signatures. As this number increases — and irrespective of what was the right choice in this case — I wonder how many people on that list voted for Brexit and now think that they made a huge mistake in doing so (or how many expected the Remain side to win and so never voted at all). The more I think about this possibility, the more I think about a topic that has intrigued me for decades: the process by which otherwise smart and capable people make terrible or even disastrous decisions.
Of course, the study of bad decisions is an ancient one, going back to Greek tragedy and the role of hamartia in the downfall of great men. However, modern writers and thinkers have also tackled this question in interesting ways. Back in college, I read Barbara Tuchman’s excellent book, The March of Folly, which explores how governments are able to make clearly self-destructive decisions. Another memorable book in this field is The Logic of Failure by the German theoretical psychologist Dietrich Dorner, which is a fascinating analysis of “error in complex situations.”
Interestingly, the MIT Tech Review recently highlighted some new research in this field by Ashton Anderson (Microsoft Research), Jon Kleinberg (Cornell), and Sendhil Mullainathan (Harvard). The researchers created a database of 200 million chess games and divided the games into two classes: those played by amateurs of all levels and those played by grandmasters. The database records not only the outcome of each game but also the circumstances surrounding any loss-causing mistakes, which the researchers analyzed to understand what drives major errors. As the authors note:
We have used chess as a model system to investigate the types of features that help in analyzing and predicting error in human decision-making. Chess provides us with a highly instrumented domain in which the time available to and skill of a decision-maker are often recorded, and, for positions with few pieces, the set of optimal decisions can be determined computationally.
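To make the notion of "error" in such a dataset concrete, here is a minimal sketch (my own illustration, not the authors' code) of one common way to flag blunders in an engine-annotated game: track the position's evaluation after each move and mark any move that causes a sharp drop. The threshold, the centipawn scale, and the evaluation trace below are all hypothetical.

```python
def flag_blunders(evals, threshold=200):
    """Return the indices of moves that caused a sharp evaluation drop.

    `evals` is a list of engine evaluations (in centipawns, from the
    mover's perspective) after each move; a drop of `threshold` or more
    between consecutive evaluations is treated as a blunder. This is a
    crude proxy for the kind of loss-causing mistake the study examines.
    """
    blunders = []
    for i in range(1, len(evals)):
        drop = evals[i - 1] - evals[i]
        if drop >= threshold:
            blunders.append(i)
    return blunders

# Toy trace: a roughly level game until move 3 throws the position away.
print(flag_blunders([30, 25, 40, -250, -240]))  # → [3]
```

The appeal of chess as a model system is visible even in this toy: because an engine (or, for few-piece endgames, a tablebase) supplies a ground-truth evaluation, "error" can be defined mechanically rather than by human judgment.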
The researchers drew three major conclusions from their work. First, they found that decision time is a factor in errors but “only up to a point.” As expected, hasty decisions lead to many mistakes, but beyond a specific time threshold (10 seconds in their model) the duration of decision making is no longer a factor. In other words, when a player takes two minutes or ten to decide, the likely explanation in both cases is not that she is conducting a complex analysis of the position and possible moves but that she simply does not know what to do.
The second finding is that the complexity of the position is also an important factor, which is to be expected: the more complex the position, the greater the likelihood of error.
The third finding is that the skill of the player does not affect results in the way most of us would expect, namely that the better a player gets, the fewer mistakes she makes. Instead, the authors describe a model with three skill-related outcomes: skill-monotone, in which greater skill does improve outcomes; skill-neutral, in which skill level makes no difference; and, surprisingly, skill-anomalous, in which increasing skill level actually increases the error rate. This last finding baffled the researchers:
The existence of skill-anomalous positions is surprising, since there is no a priori reason to believe that chess as a domain should contain common situations in which stronger players make more errors than weaker players. Moreover, the behavior of players in these particular positions does not seem explainable by a strategy in which they are deliberately making a one-move blunder for the sake of the overall game outcome.
The authors suggest that further research on skill-anomalous situations is warranted, and as I read this conclusion it reminded me of the analysis Dorner wrote about in his book. After analyzing errors in complex situations, Dorner found that two major factors influenced outcomes. The first is what he called dynamics, which refers to the volatility of the decision factors that must be understood and analyzed correctly in any given problem. The more dynamic the situation, the easier it is to make a terrible mistake. The second factor is what he called intransparence, or the degree to which the reality of a situation cannot be ascertained correctly. The more intransparent the position (i.e., the less its surrounding reality is seen), the higher the chance of a major error. Using a chess analogy, Dorner notes:
If we want to capture this … in a visual image, we could liken a decision maker in a complex situation to a chess player whose set has many more than the normal number of pieces, several dozen, say. Furthermore, these chessmen are all linked to each other by rubber bands, so that the player cannot move just one figure alone. Also, his men and his opponent’s men can move on their own and in accordance with rules the player does not fully understand or about which he has mistaken assumptions. And, to top things off, some of his own and his opponent’s men are surrounded by a fog that obscures their identity.
If we combine both sets of insights, it’s interesting to apply them to executives and politicians who, though possessing great skill and experience, find themselves in new or newly complex positions, and then fail, sometimes spectacularly. We often see analysts pondering how it is that so-and-so failed as CEO of Company X, even though he had years of success as CEO of Company Y. An understanding of the dynamics of failure suggests that success or failure in these instances may have less to do with the skill of the executive than with the complexity and intransparence of the situation in which he finds himself. I think of recent cases such as Yahoo, in which a seemingly outstanding candidate has struggled to succeed not so much because of her lack of skill but, perhaps, because the dynamics and opaqueness of the strategic position were not understood. Indeed, Dorner notes that in response to such intransparence we all build what he calls reality models, which are inherently flawed:
An individual’s reality model can be right or wrong, complete or incomplete. As a rule, it will be both incomplete and wrong, and one would do well to keep that probability in mind.
Dorner makes another point relevant to the Brexit outcome: humans are good at some kinds of analyses and bad at others. For example, we are generally good at dealing with spatial configurations. Even children can spot shapes that don’t belong in certain groups, and adults are generally skilled at understanding visual patterns and anomalies. However, we are generally bad at what Dorner calls temporal configurations, i.e., understanding the sequence in which events have unfolded or will unfold. As he writes:
Even when we think in terms of time configurations, our intuition is very limited. In particular, our ability to guess at missing pieces (in this case, future developments) is much less than for space configurations. In contrast to the rich set of spatial concepts we can use to understand patterns in space, we seem to rely on only a few mechanisms of prognostication to gain insight into the future.
The result of this cognitive weakness, Dorner notes, is that people tend to have two reactions to problems involving temporal configurations: “first, limited focus on a notable feature of the present and, second, extension of the perceived trend in a more or less linear and ‘monotone’ fashion (that is, without allowing for any change in direction).” It does not take a lot of imagination to see Dorner’s description at work in the Brexit decision. A British voter does not like what is happening in the EU now, so he concludes that (a) this present negative EU state outweighs all pre-EU realities and post-EU possibilities and that (b) the future of the EU will be a linear extrapolation of the current (disliked) state. Rather than consider with equal weight the negative aspects of life outside the EU (which is not his reality today) or the possibility that the EU’s current issues could be remedied, this voter behaves just as Dorner predicts and votes only on the basis of the here and now. “We human beings are creatures of the present,” Dorner writes, and it’s very likely many people feeling Brexit regret are doing so because they failed to recognize, in his fascinating phrasing, “shapes in time.” Of course, the morning after the vote, that same voter wakes up to see the emerging (negative) aspects of his new post-EU reality, and remorse arrives just as quickly and forcefully. My guess is that as more and more people see the “temporal shape” of life outside the EU, Brexit regret will only increase.
After considering all of the above, what I take away from the study of failure, and the Brexit example, is a reminder to focus just as much on situational analysis — and the reality models we build in response — as on our own skills in a given situation. The latter are no match for a devastatingly complex position in many cases, and focusing as much on the decision as on the decision-maker is something we should all keep in mind. Of course, this is a difficult challenge. Indeed, with characteristic German understatement, Dorner notes that keeping this last point in mind is “easier said than done.” My guess is that there are a lot of people in the UK just now who have discovered just how true that statement really is.
(LinkedIn Version Here)