Managing the Bias in Enterprise Risk Management Tools

The following article is from The RMA Journal, February 2017 issue.

“Overconfident professionals sincerely believe they have expertise, act as experts, and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.”
– Daniel Kahneman, psychologist and Nobel laureate in economics

Imagine a business leader within your firm calling you and saying, “I just got asked by a board member what the probability of a major internal fraud is at our firm. You’re the risk professional; please provide an answer.”

How do you approach a question like this? Do you pick the first number that comes to mind? Do you base it off the internal loss experience of your firm? Your industry? Estimating uncertainties is difficult, and realizing that our biases can taint the effectiveness of our estimations is critical to the success of any risk program. Get it wrong and your company may misallocate resources, expose itself to a blind spot or, worse, create unwanted exposure.

The fascinating field of behavioral economics tells us that humans are not rational and at times make suboptimal decisions based on emotions and past experience. As risk managers, we must consider the biases and preferences reflected in our risk assessment tools. Could these tools and resulting reports lead our organizations to make suboptimal decisions? Do we impart a false sense of precision that could lead to blind spots in our risk management programs? Let’s examine some of the possible biases in our commonly used tools and approaches.

Examples of Bias in Decision Making

The overconfidence bias results in a false sense of comfort in our ability to judge probabilities and severities. Psychologists have shown that even the “experts” in a field of study are not safe from this bias and that it can lead to unintended outcomes. The Alpert-Raiffa experiment demonstrates our inability to select intervals with appropriately high confidence. The experiment asks respondents to estimate ranges for factual questions such as, how many square miles is Alaska? The respondents are then asked how confident they are that their ranges contain the true answer. Most say above 90%. But when the responses are graded, fewer than half of the ranges actually contain the correct value.

More relevant to risk managers is the study “Managerial Overconfidence and Corporate Policies,” by Itzhak Ben-David, Campbell R. Harvey, and John R. Graham. The authors polled CFOs over a 10-year period for their predictions of future S&P 500 returns, expressed as 80% confidence intervals. The realized returns fell within the CFOs’ intervals only 36% of the time. The study concluded that estimation error or overconfidence can lead to improper managerial decisions. Investment and leverage could be overstated based on poor predictions of expected returns.
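The calibration check behind both studies is simple to run on your own forecasts. The sketch below, with hypothetical forecast data, grades a set of interval estimates against realized outcomes: if your stated confidence is 80% but your hit rate comes in far lower, you have evidence of overconfidence.

```python
def interval_hit_rate(intervals, outcomes):
    """Fraction of realized outcomes that fall inside the stated intervals."""
    hits = sum(lo <= x <= hi for (lo, hi), x in zip(intervals, outcomes))
    return hits / len(outcomes)

# Hypothetical 80%-confidence forecasts of annual returns, and what happened.
forecasts = [(-0.05, 0.10), (0.00, 0.12), (-0.02, 0.08), (0.02, 0.15), (-0.10, 0.05)]
realized = [0.13, 0.04, 0.21, -0.07, 0.02]

rate = interval_hit_rate(forecasts, realized)
print(f"Nominal confidence: 80%, realized hit rate: {rate:.0%}")  # 40% here
```

A well-calibrated forecaster's hit rate should track the nominal confidence level over many forecasts; a persistent gap, like the 80%-versus-36% gap the CFO study found, is the signature of overconfidence.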

Is Risk Management Risky?

The theories from behavioral economics should force risk managers to contemplate how their methods may add variability and uncertainty to a firm’s profile. As risk managers, we have developed numerous tools to support our discussions with management. But we may be placing too much confidence in the precision of these tools and we may not be considering the biases that could creep into the process and lead to unintended outcomes. Let’s look at the tools where these biases may be occurring and review some practices to consider their impacts.

Many firms use a framework to rank and prioritize their risks. These tools are familiar, usable, and effective at explaining a risk story to any audience. But how many of us consider the biases embedded in these frameworks? Could these biases influence perceptions and cause unintended outcomes? Most risk assessment tools require two or more inputs to rank and prioritize exposures. Many firms use probability, impact, and velocity. We should ask ourselves how reliable these methods are for capturing, challenging, and reflecting those measures.
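To make the bias concrete, consider what such a framework actually computes. The sketch below is a minimal, hypothetical scoring model using the three inputs named above; the risk names, 1-to-5 ordinal scales, and multiplicative scoring are all illustrative assumptions, not a prescribed method. Every input is a subjective judgment, so every ranking inherits the estimator's biases.

```python
# Minimal sketch of a multi-factor risk ranking. Scores are ordinal 1-5
# judgments supplied by assessors, so they carry the assessors' biases.
risks = {
    "vendor outage":     {"probability": 2, "impact": 5, "velocity": 4},
    "internal fraud":    {"probability": 1, "impact": 5, "velocity": 2},
    "data entry errors": {"probability": 4, "impact": 2, "velocity": 3},
}

def score(factors):
    # Multiplicative scoring: a common convention, but note it treats
    # ordinal labels as if they were ratio-scale measurements.
    return factors["probability"] * factors["impact"] * factors["velocity"]

for name, factors in sorted(risks.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(factors)}")
```

Note what the model hides: a one-notch shift in any subjective input can reorder the entire list, yet the output reads as a precise ranking.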

Whether we like it or not, operational risk is hamstrung by a lack of robust data, requiring risk managers to use expert judgment and experience to assign values to the exposures in their organization. This can lead to bias unintentionally impacting our results, given the limitations in our understanding or overconfidence in our ability to envision all possible outcomes.

Let’s use vendor risk as an example to demonstrate how biases can impact our conclusions about perceived risk. Many organizations use a range of vendors to support aspects of their value chain, providing expertise, cost savings, and operational leverage. As risk managers, we are concerned with the adverse effects of a vendor-related event. Our colleagues in credit risk, using a robust set of default and recovery data, can reliably estimate the probability of default for most, if not all, of the firm’s vendors. Our operational risk teams, however, need to use subjective probabilities and estimates to determine the likelihood of a major operational event, such as a system failure, service delivery interruption, or data breach.

In these instances, we tend to look at nominal rankings for the likelihood and severity of a supplier outage—in other words, a very low likelihood and high impact. As risk managers, we look at the contingency and resiliency plans for our suppliers, track service-agreement performance, and conduct onsite audits. All of this gives us support for and comfort in our estimates, but we have to recognize that the estimate is most likely wrong owing to imperfect information and our own perceptions and biases.

In the extreme, if you just witnessed one of your peers face a significant supplier outage, would that influence how your organization rates the exposure? Should it? Herein lies the problem, and it is twofold. First, the nominal use of ratings does not provide adequate measures in the absolute sense, but it allows for relative rankings across other exposures. Second, our bias may lead to a possibility effect, where we overstate the likelihood because we just witnessed one of our competitors being impacted by an event with an extremely low likelihood.

What’s a Risk Manager to Do?

A good starting point is to be mindful of the potential for bias and preference influencing our analysis. Emphasis should be placed on using multiple dimensions, leveraging scenarios that highlight expected outcomes, and then considering outliers. Another tactic is to employ different mechanisms to capture and inform the estimation of risk exposures, especially for risks at the top of the watch list.

Risk managers should use a two-pronged approach. First, the risk team should interview the stakeholders individually and ask them to list, rank, and describe their exposures. In that session, the role of the risk manager is to listen, challenge, and develop the view. Second, the risk team should then bring all the stakeholders together for a working session to review and debate the aggregate analysis. It’s important for one or more participants to play devil’s advocate and challenge the assumptions and potential biases that informed the analysis.
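The first prong can be supported mechanically: if individual scores are collected and combined before the group convenes, no single senior voice anchors the aggregate. The sketch below, with hypothetical risks and scores, uses the median of individually elicited likelihood ratings as the starting point for the group debate.

```python
from statistics import median

# Illustrative: likelihood scores (1-5) elicited from four stakeholders
# in separate interviews, aggregated before the group working session.
individual_scores = {
    "vendor outage":  [2, 3, 2, 4],
    "data breach":    [3, 3, 2, 3],
    "internal fraud": [1, 2, 1, 2],
}

aggregate = {risk: median(scores) for risk, scores in individual_scores.items()}
for risk, value in sorted(aggregate.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{risk}: median likelihood {value}")
```

The median is deliberately insensitive to a single outlying voice; the spread of the individual scores is itself useful input for the devil's advocate, since wide disagreement flags an estimate worth challenging.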

Many firms have found this method effective for surfacing the assumptions and biases that may have influenced the analysis and for deciding whether or not the conclusions are appropriate. It also counteracts the potential for conformity bias, where no one wants to challenge the group view, especially if it’s supported by senior leaders, for fear of being wrong or perceived as a contrarian.

There’s another validation step that risk managers should employ. After the risks have been sized, ranked, and plotted, the facilitators should challenge the team and ask, “Are you willing to devote time, energy, and resources to the items that are presented?” This validation step leads to many items being challenged, especially when the team considers the likely velocity of the risk manifesting itself and the margin of error in estimating likelihood and severity. It also reinforces the remainder of the items when there’s commitment to taking action and expending resources.

Lastly, it is critical that risk leaders acknowledge and confront sacred cows and other matters that may lead to bias or blind spots. In the famous children’s book The Emperor’s New Clothes, no one admits that the emperor is naked. How do we, as risk leaders, encourage a culture and the necessary mechanisms to allow sacred cows to be identified and discussed openly?

We have to accept that biases do influence our assessments and risk conclusions. We should acknowledge and highlight the potential for bias among the consumers of our frameworks and information. Taking a step back and challenging the assumptions, biases, and perceptions present in risk analyses could help your firm identify and avoid a blind spot that may have gone unnoticed, helping to avoid some painful outcomes.


The above is from The RMA Journal, February 2017 article “Managing the Bias in Enterprise Risk Management Tools” by Adam Rosenthal, head of operational risk management at Vanguard.
