
Highlights and Analysis from RMA’s Second Model Risk Management Survey


‘We’ve decided to run the survey because the field of model risk management continues to develop at a rapid pace.’

An RMA survey report out this summer details the current state of model risk management, from the typical ratio of model development to model validation staffers, to challenges regarding vendors, to the growing prevalence of artificial intelligence in models. Key findings from RMA’s second Model Risk Management Survey, which drew responses from 53 institutions of varying sizes, include:

  • About two-thirds of firms reported using some form of model risk management IT application to manage the model lifecycle.
  • Nearly three-quarters of firms reported using artificial intelligence and machine learning models or tools.
  • The median ratio of full-time equivalent developers to validators was 2:1.
  • Top vendor model challenges included documentation regarding model design and analytics, theory and conceptual soundness, and assumptions and limitations.

For some analysis of the results and a look at model risk management trends, The RMA Journal recently spoke with Thomas Gregory, an RMA risk consultant and architect of the survey. The following interview has been edited for length and clarity. Read the executive summary here.

RMA JOURNAL: Why does RMA’s Model Validation Consortium conduct this survey each year?  

GREGORY: We’ve decided to run the survey because the field of model risk management continues to develop at a rapid pace. First, there are new applications, like artificial intelligence [AI], machine learning [ML], and cybersecurity and climate risk modeling, which the survey questions covered in greater depth this year. Second, processes continue to mature, even at the largest firms that have been at this game the longest. That observation applies doubly for smaller firms. They came to feel regulatory pressure several years after the largest banks and are advancing their capabilities in model validation and governance. We asked a number of questions that go into some depth about the management information system and workflow tools that firms use to keep their houses in order around model risk management. This will be interesting to track going forward. 

RMA JOURNAL: Big picture, what are the takeaways from the latest survey?   

GREGORY: In questions that we repeated from the prior year, we primarily see stability of answers, and a few signs of progress. For example, there is wider reporting of non-model quantitative tools. It will take some more data over time to make stronger inferences about progress, but the data we’ve accumulated so far should be very useful for each firm to assess how it compares to its peers. From the questions about new model types, it was striking, but maybe not too surprising, that 73% of respondents use AI/ML models or tools, while only 25% [and only the larger firms] have climate-risk models or tools. AI/ML is now being put to mainstream uses in banks of all sizes. Climate risk management is increasing from a very small base, but is concentrated in the largest, most internationally active banks. We do think the use of climate risk models will expand significantly in the future, but it will take some time to permeate the ranks.

RMA JOURNAL: What did the survey find out about exactly how banks are using AI models?  

GREGORY: Fraud detection is by far the largest application of AI/ML models. Fully 84% of the respondents that reported using AI/ML models or tools use them for fraud detection. In second place is marketing [41%], followed by underwriting [32%], and customer interaction [30%]. 

RMA JOURNAL: Why are AI models so useful in these applications you mention? 

GREGORY: AI/ML models are particularly useful for finding patterns in masses of data. That’s why fraud detection is the most common use case. It’s similar for marketing and underwriting, particularly in the retail markets, where there are many customers and a need for efficient screening and scoring processes.
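
To make the pattern-finding idea concrete, here is a minimal sketch of one common anomaly-detection approach, an isolation forest, applied to hypothetical transaction data. The features and numbers are illustrative assumptions, not drawn from the survey, and real fraud systems use far richer inputs.

```python
# A minimal sketch of ML-based fraud screening using anomaly detection.
# Feature names and data are hypothetical; real systems use far richer inputs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated transaction features: [amount, hour_of_day, merchant_risk_score]
normal = rng.normal(loc=[50.0, 14.0, 0.2], scale=[20.0, 4.0, 0.1], size=(1000, 3))
suspicious = rng.normal(loc=[900.0, 3.0, 0.8], scale=[200.0, 1.0, 0.1], size=(10, 3))
transactions = np.vstack([normal, suspicious])

# Fit an isolation forest: points that are easy to isolate score as anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# predict() returns -1 for transactions flagged as anomalous (fraud-review candidates).
labels = model.predict(transactions)
print(f"Flagged {np.sum(labels == -1)} of {len(transactions)} transactions for review")
```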

RMA JOURNAL: The executive summary of the survey paper says concerns about fairness in models, which have always existed, have been “turbocharged by AI.” Can you explain why, and what can be done about it? 

GREGORY: Early on, it became evident that models that are calibrated or “trained” on past decisions can actually replicate the bias of the prior human assessors. Rather than making rational, optimal decisions, the models can build in the biases of prior decisions, biases based on factors that have no causal relation to the creditworthiness or suitability of potential customers. Ethically, these are biases that we need to overcome. Fair banking regulations also require it. So model validation processes have been expanded to include testing to detect biases that could be based on geography, race, religion, national origin, sex, marital status, or age, among other things.  
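
As a rough illustration of what such bias testing can look like, the sketch below computes an adverse impact ratio (the approval rate of one group relative to a reference group), one widely used fairness check. The data, group labels, and threshold here are hypothetical; the survey does not prescribe specific tests.

```python
# A minimal sketch of one common fairness check: the adverse impact ratio
# (approval rate of a protected group relative to a reference group).
# Decisions and group labels are hypothetical illustrations.
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, group: np.ndarray,
                         protected: str, reference: str) -> float:
    """Ratio of approval rates; values well below 1.0 suggest potential bias."""
    rate_protected = approved[group == protected].mean()
    rate_reference = approved[group == reference].mean()
    return rate_protected / rate_reference

# Hypothetical model decisions (1 = approved) and group membership.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratio = adverse_impact_ratio(approved, group, protected="B", reference="A")
print(f"Adverse impact ratio: {ratio:.2f}")  # the common "80% rule" flags ratios below 0.8
```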

RMA JOURNAL: What are some other challenges and concerns caused by a rise of AI models and tools at financial institutions?  

GREGORY: AI/ML models don’t start from the top down with a strong set of theories about what drives outcomes. Rather, they simply seek out the patterns that exist in the data. Sometimes the statistical relationships that surface are intuitive, but often they are not. As a result, to many users, AI/ML models seem like “black boxes.” Top concerns listed by survey respondents include explainability, theoretical and conceptual soundness, fairness, and interpretability.
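
One family of techniques used to peer inside these black boxes is permutation importance, sketched below: it estimates a feature’s influence by measuring how much model performance degrades when that feature’s values are shuffled. The model and data are hypothetical stand-ins; respondents flagged explainability as a concern, but the survey does not endorse particular tools.

```python
# A minimal sketch of one common explainability technique: permutation
# importance, which measures how much a model's accuracy drops when a
# feature's values are shuffled. Data and model choice are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic classification data standing in for, say, credit decisions.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle each feature 10 times and record the mean drop in score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")  # larger drops imply more influence
```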

RMA JOURNAL: For the first time, the RMA model risk survey explored the use of cybersecurity models, and about two-thirds of respondents reported having them. Why did the survey take up this topic? What is happening in that area, and what are some challenges and opportunities?

GREGORY: If you were to ask 10 CROs what their top concerns were, I would bet that most would put cybersecurity in their top three. Private hackers, along with state and non-state entities, regularly probe for weaknesses in corporations’ and financial firms’ data and IT security. They are after the fruits of larceny and extortion, or act for purposes of espionage, offensive military operations, and perhaps even terrorism.

RMA JOURNAL: For the first time this year, RMA also asked about model development staffing vs. model validation staffing. It found that the median ratio of model development staff to model validation staff is 2.0x overall. Does this ratio surprise you? Why or why not?

GREGORY: It is not so much that it’s surprising as that it is interesting to see that number in print for the first time. It was also curiously stable across the bank sizes. I suppose what it most indicates is that model controls require very significant resources. If we think back two decades, say to the year 2000, I am sure that that ratio would be very significantly higher, meaning that there were even fewer model validators in proportion to model developers at that time. We can debate the optimal level, but I think the industry is significantly better off for the investment in a professional “second set of eyes.” 

RMA JOURNAL: Why did you add a question this year about the use of model risk management “solutions,” and what did you find out?  

GREGORY: This was a set of questions about the IT tools that are used by the whole model development “ecosystem” at each firm, including model developers, users, validators, and people engaged in model governance. Honestly, these were included at participants’ request. After we compiled the 2021 survey responses, we interviewed about two-thirds of the respondents. At the end of each interview, we would ask: What other questions would you ask your peers if you had the chance? This was the number one request for additional information.

So, we asked what tools people were using to manage their own operations, and we asked which elements of the model lifecycle were managed using these tools. The number one answer was model inventory management, covering 87% of the 40 respondents that have implemented a tool. Next was issue management, which includes the storage and tracking of validation findings [73%]. Then, a series of answers came in at 50% to 60% of respondents: model rating/scorecard process, model identification, model risk reporting, and model validation.

RMA JOURNAL: You asked several questions about vendor models in 2021 and 2022. Can you explain what the survey found?  

GREGORY: Responses for both years show that transparency around vendor models continues to be substandard. Firms are really wishing for more. Only 4% of respondents said that vendors described “black box components” very well; 45% said moderately well; 36% said slightly well; and 15% said not well at all. I think anyone would have to say that the “slightly well” and “not well at all” responses are not up to the standards expected by model validators, generally, and by the regulators. Asked what aspects, if any, of vendor documentation they found to be deficient, two-thirds said model design and analytics, and more than half said theory and conceptual soundness and assumptions and limitations. These deficiencies are persistent.

RMA JOURNAL: Are black boxes in vendor models an area where industry groups like the MVC could have an impact? If so, how?  

GREGORY: First off, I think that firms would have a positive impact collectively if they put more attention into the terms of their contracts with model vendors. Some contracts include provisions requiring adequate documentation, testing results, and general transparency to meet a model validation standard. I understand that having terms is not the same as enforcing them in all cases. But if more specification of documentation requirements went on upfront, before the check is signed, then buyers would have more grounds to argue for remediation later on. 

On models that have widespread use across firms, the Model Validation Consortium is exploring ways that coordinated effort might bear fruit. For example, a majority of survey respondents signaled that they would be willing to coordinate with other institutions using the same model to improve the quality and substance of information received for validation purposes. Likewise, a majority signaled that there would be value in an industry-provided comparative scorecard evaluating model vendors on features, functionality, and customer satisfaction. The MVC is exploring these ideas in dialogue with a number of the respondents.

 
