Heads I’m Right, Tails It Was Chance: The Strange Case of Irrationality in Program Risk Management


Author: Lt. Col. Christopher W. Parry, USAF

There’s a difference between having lung cancer (an issue) and living in a way that increases the probability of contracting lung cancer (a risk). The former requires treatment; the latter requires actions to lower the probability, such as exercising, eating healthier food, or quitting smoking. Yet we all know people who smoke, don’t exercise, or consistently eat one too many desserts despite knowing the risks. Irrational? Yes. Explainable? Largely.

Like those who irrationally persist in risky life choices, we acquisition professionals sometimes persist, consciously or not, in managing programs without adequately “rationalizing” our understanding of programmatic risks. Many times we place ourselves in the “thick of thin things” at the expense of long-term program success. Often, though, we allow our internal biases and fallacious thinking to skew our objective assessment of risks.

Behavioral economists and psychologists have made great strides in understanding some of these biases and thinking errors. Many insightful studies have shown how seemingly irrational decisions can be explained. With this knowledge of how we humans process information, we can take steps to correct the biases and fallacious thinking that, left alone, can severely undermine the objectivity of our risk management. As Sun Tzu so simply stated in The Art of War, “If you know the enemy [the risks] and know yourself [your own biases and fallacious thinking], you need not fear the results of a hundred battles” … or program management reviews. Below is a sampling of the biases and fallacious thinking that may negatively impact our programs’ success. I share them as a first step in understanding and channeling our irrationality.

Irrational Biases

“Seventy percent of people think they’re above average.”

An often encountered bias in our world is termed the “Planning Optimism Bias.” In my domestic “program management” experience, I told my wife and family that I could build a playhouse in the backyard in 2 weekends, no problem. After taking 2.5 weeks of leave and 4 weekends, the playhouse was completed … just 8 hours before I left for a yearlong deployment. Does this sound familiar to anyone, or am I alone in being below average?

A 1995 study found that only 13 percent of a group of students completed their projects within their most-likely time estimates. Furthermore, only 45 percent of those students finished within their own absolute worst-case projections. These results have been replicated in other populations. Have you experienced schedule slips in programs you’ve managed despite an ardent belief at the program’s start that delivering on schedule would be a “slam dunk”? Beware of planning optimism bias and mitigate the risks that are surely there.
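One common countermeasure, drawn from general scheduling practice rather than from this article, is to plan with three-point estimates instead of the “most likely” guess alone. The sketch below is illustrative only; the task names and durations are hypothetical, and the PERT-style weighting is just one way to pull a plan toward the worst case written down at planning time.

```python
# Illustrative sketch (not the author's method): a PERT-style three-point
# estimate as one hedge against planning optimism. All task names and
# durations below are hypothetical.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted average that pulls the plan away from the 'most likely' guess."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

tasks = {
    "design review":   (2, 3, 8),    # weeks: optimistic, most likely, pessimistic
    "build prototype": (4, 6, 14),
    "flight test":     (3, 5, 12),
}

naive_plan = sum(most_likely for _, most_likely, _ in tasks.values())
hedged_plan = sum(pert_estimate(o, m, p) for o, m, p in tasks.values())

print(f"Plan built on 'most likely' guesses: {naive_plan:.1f} weeks")   # 14.0
print(f"Plan with worst cases weighted in:   {hedged_plan:.1f} weeks")  # ~16.5
```

The point is not the particular formula but the discipline: the worst case gets recorded up front, before optimism has a chance to argue it away.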

Have you ever bought a timeshare? Do you look back and think “that was the best decision I’ve ever made”? Do you remember the positive reasons you bought the timeshare but not the negative aspects you considered? If so, you too have fallen victim to the “Choice-Supportive Bias.” In studies, researchers have found that people tend to embellish the positive aspects of previous decisions while neglecting the negatives. In one study, 99 college freshmen, when asked about their high school grades, erred systematically toward higher grades than they had actually obtained.

Experience in program management is invaluable to program success. However, we must base our future programmatic decisions on an unbiased view of what we’ve learned in the past. We must remember the negative aspects of the decisions we’ve made. Arguably, from a risk management perspective, we need to remember more of the negative aspects of former decisions and then apply that learning to better manage the current program’s risks.

A close cousin of the Choice-Supportive Bias is the “Confirmation Bias,” the tendency to give more weight to evidence that supports your current belief. Combined with the Planning Optimism Bias, this bias can leave the acquisition professional failing to pay timely attention to warning flags or signals.

Even more interesting within this Confirmation Bias is the finding that we give greater weight to information that we hear or see first. For example, “people form a more positive impression of someone described as ‘intelligent, industrious, impulsive, critical, stubborn, envious’ than when they are given the same words in reverse order.”

First impressions count, as do first looks at the program’s execution data. And isn’t the program nearly always on schedule at the beginning? And when should the acquisition team be most actively engaged in risk identification? Could we as a profession be lulled early in our programs by the “Dark Sith Lord” of Confirmation Bias? One must actively fight this bias by being aware of it and by pessimistically overcompensating to address programmatic risks adequately.

As our program begins to execute, we may fall into another insidious bias trap. As Dan Ariely describes in his book Predictably Irrational: The Hidden Forces That Shape Our Decisions, we suffer from the “Endowment Effect Bias”: the tendency to overvalue what we own or have spent time on. This overvaluation (setting aside sentimental items that sometimes can’t be priced) can at times exceed twice the amount one would spend to buy the item today. The bias often expresses itself as a reluctance to write off the “sunk costs” of a project or to abandon a course of action already under way.

Over time, as involved program team members, we gain a feeling of “ownership” of our projects. We care how the project performs. We want the program to succeed. We want to deliver success. However, are we willing to defer our project to a different project (managed by somebody else) when the data show our program no longer is the best value? Or do we insist on an inflated value of the program (on average, twice the value, according to studies) and minimize the risks our program faces?

Even internal to our programs, are we willing to objectively look at the risks of our current courses of action and rationally weigh the benefits of pursuing another course that would reduce the risks overall, despite our investment in the previous path?

Closely linked to this Endowment Bias is what is called the “Availability Snowball Bias” or “Availability Cascade.” Briefly stated, the bias is shown when your belief becomes stronger and stronger through publicly sharing it again and again and seeing people adopt or believe what you say—for example, “We are going to deliver on schedule!”

Previously I have convinced myself through public proclamations that our program was on schedule … and it was, at the time. And the more I said it, consciously or not, the more I believed it and the more I became vested in this position. Team members and supervisors also seemed to believe my explanations, furthering my belief that I was right. I’m not suggesting we all became hypnotized by predicted success, but rather that we became a little more complacent than we should have been. This bias is insidious because it tends to lull the program manager into a false sense of security, a security that can whitewash risks the team could and should be actively managing.

Fallacious Logic

“I shot an elephant in my pajamas. How he got in my pajamas, I’ll never know!”
— ”Animal Crackers,” Marx Brothers film

Closely related to thinking biases, fallacious logic clouds our ability to arrive at proper risk assessments. Biases tilt our thinking in one direction or another. Fallacious logic doesn’t just tilt one’s thinking; it completely undermines it.

A common fallacy often witnessed in program management is the “Wishful Thinking Fallacy.” It occurs when we believe, despite evidence to the contrary, that a program is going well because we want it to. Columnist Christopher Booker, in the April 9, 2011, Telegraph, described wishful thinking in terms of “the fantasy cycle”:

When we embark on a course of action which is unconsciously driven by wishful thinking, all may seem to go well for a time, in what may be called the “dream stage.” But because this make-believe can never be reconciled with reality, it leads to a “frustration stage” as things start to go wrong, prompting a more determined effort to keep the fantasy in being. As reality presses in, it leads to a “nightmare stage” as everything goes wrong, culminating in an “explosion into reality,” when the fantasy finally falls apart.

Risk management is the exercise of “easing into reality” instead of “exploding into it.” In many ways, an acquisition professional should be the pessimist when pushing the team to ensure program success. I have often told program teams who have worked for me that I don’t want to see an “issue” briefed that I haven’t seen briefed as a “risk” several months earlier.

Although the acquisition team needs to be optimistic about the chances for success, they cannot do so by focusing only on the positive data or reports that they receive. If they do, they’ll fall into the “Texas Sharpshooter Fallacy.” This fallacy is named after a Texas sharpshooter (I guess the sharpshooter could be from any other state, but the name just seems to fit) who shoots a bunch of bullets at the side of a barn, walks up to the barn and draws the target around where most of his bullets hit.

Effective risk managers determine what “right looks like” before the data and reports start flowing in and before they become enamored with selected success stories. As the staff of the DoD inspector general in Afghanistan said of their boss, “He doesn’t get down in the weeds; he gets under the roots and digs them up!” That’s what risk managers need to do: root out things that could go wrong and mitigate either the causal mechanisms or the effects.

Sometimes in our efforts to mitigate risks, different positions may be offered by the contractor, the government’s engineering team, or any other interested party. As the program manager, you have to make the call, sometimes a hard call, on what you deem the probability and consequence of the risk to be. But be aware of the “Argumentum ad Temperantiam Fallacy,” in which one picks the middle ground between two extreme positions as the correct position just because it’s “in the middle.” Sometimes the extreme position may be the more correct assessment of a risk. Maybe slightly left or right of center is better. But splitting the “baby,” as in the story of Solomon, benefits neither side nor the program!
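As an illustration only, risk exposure is often scored as probability times consequence; every figure in the sketch below is hypothetical. It simply shows why splitting the difference between two assessments is not the same as weighing the evidence behind each one.

```python
# Illustrative sketch (hypothetical numbers): expected exposure of a risk
# scored as probability x consequence, and what the "middle ground" costs
# if the more extreme assessment turns out to be the better-supported one.

def exposure(probability, consequence_dollars):
    """Expected cost of a risk: probability times consequence."""
    return probability * consequence_dollars

contractor_view  = exposure(0.10, 1_000_000)   # "minor integration risk"
engineering_view = exposure(0.60, 5_000_000)   # "major integration risk"
middle_ground    = exposure(0.35, 3_000_000)   # split the difference, only because it's the middle

print(f"Contractor assessment:  ${contractor_view:,.0f}")    # $100,000
print(f"Engineering assessment: ${engineering_view:,.0f}")   # $3,000,000
print(f"Middle-ground figure:   ${middle_ground:,.0f}")      # $1,050,000

# If the engineering team's evidence is the stronger of the two, the
# middle-ground figure understates the exposure by nearly $2 million,
# and the mitigation effort will be sized accordingly, i.e., too small.
```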

The Reward of Now

Hard work pays off eventually, but procrastination pays off now!
—Paraphrase of sociologist Larry Kersten

Honestly, in college, how many times did you complete your term paper or project weeks ahead of the due date? You likely had those due dates listed in the syllabus when the class started, yet you probably were still working the night before they were due. If so, you fell victim to the “Student Syndrome,” which occurs when people fully apply themselves only at the last possible moment before a deadline. Commonly, you’ll hear that a task will take only an hour if you have only an hour!

Unfortunately, effective risk management is highly susceptible to the Student Syndrome for three major reasons. First, unlike a term paper, a risk may never “come due.” All risks have a probability of happening, and so, by definition, many never will actually become an issue.

Second, many of these risks may not occur for years. Given the proximate threats of daily issues, it is very tempting to put risk management on the back burner, since we are rated annually on how well we solve our problems (i.e., issues). Short-term gains are often given precedence over long-term growth, as is sometimes evident in American corporations’ dealings with stockholders.

Finally, there is a dilution of accountability and rewards within government service. This dilution can pervert incentives, steering you away from risk management and toward successes that are clearly attributable to you and can be captured in an annual appraisal or military report. Similarly, because risks typically materialize in the future, the annual appraisal cycle and frequent personnel moves effectively shield leaders from ever seeing those risks realized. All these reasons are perfectly rational from an individual’s perspective!

“Embrace the Madness”

Embrace your irrational and biased self! Understand yourself and how and why you make decisions. Only through critical self-awareness can you become an objective and effective risk manager. By cleaning up our logic and zeroing out our biases, we can come closer to an objective view of the real risks within our programs and their probabilities.

Many cognitive and economic studies show how experience trains an ancient structure within our brains called the amygdala. This small, walnut-size portion of the brain recognizes situational patterns, can unconsciously process enormous amounts of data, and gives us our “gut feeling.” In the Army, soldiers perform battle drills to commit combat skills to “muscle memory.” Training is a first step, but practice, practice, practice is what eventually trains our instinct. As we consciously recognize our biases and fallacious thinking, we can train ourselves to be more objective and critical thinkers.

Biases and fallacies tend to lure us onto the path of least resistance. As stated in James Womack and John Shook’s book Gemba Walks, “Humans will try anything easy that doesn’t work before they will try anything hard that does work.” Effective risk management is hard. It takes time, lots of it, and the majority of the risks you track and mitigate may never occur. But the programmatic discipline required by risk management provides structure to the program’s effective management. Through effective risk management, we can move from “firefighting” to “fire prevention,” a much more cost-effective and less traumatic approach. Then we can truly say, “Heads or tails … it doesn’t matter, because we were prepared for either.”



Parry is chief of foreign military sales in Afghanistan.

The author can be contacted at christopher.w.parry@afghan.swa.army.mil.
