In RMA’s Model Risk Management Survey, A Picture of Banks’ Diligence and Frustrations

RMA’s third annual Model Risk Management (MRM) Survey explores the issues facing banks as they attempt to measure and manage risks, and the ways they’re structuring their model risk function to achieve this. While emphasizing the high priority banks continue to place on model risk management and its importance for meeting regulatory expectations, the survey surfaced shortcomings in areas like vendor transparency and headcount that hamper banks’ best efforts to create an ideal model risk function. The survey’s 100 respondents, all senior risk leaders from predominantly North America-based banks of all sizes, also shared feedback on their approach to incorporating new areas like generative AI and climate risk into their modeling activities. 

Results from the survey show that bank models on average split almost evenly among high-, moderate-, and low-risk categories. The mean number of quantitative models in banks’ model inventories is 175, though naturally the biggest banks skew the distribution much higher. Current expected credit losses (CECL), asset/liability management, and Bank Secrecy Act/anti-money laundering are, by far, the areas where most banks’ high-risk models reside. 

The main challenge to banks expanding their model validation capabilities is cost, 69% of survey respondents said. Talent, at 56%, was the second-biggest challenge, while technology, at 19%, was a larger obstacle to expansion this year than it was in 2022 (12%).

These are just a few of the survey’s insights. It also sheds light on other practices, from MRM’s reporting lines to model validation frequency, through a deep and detailed set of inquiries. Read on for more highlights. 

Reporting Lines and Staffing 

Recognizing MRM’s importance to the institution, more organizations are ring-fencing the validation function within an independent control group in 2024 (85%) than did in 2022 (64%). Only 3% of institutions in 2024 run a hybrid model, housing MRM both in the business and within its own function, versus 21% two years ago. That said, most model developers (74%) sit within the lines of business.

Overwhelmingly (88%), and consistent with the last survey, the function continues to report to the senior-most risk managers within these organizations, including the chief risk officer, the head of enterprise risk management, and the head of operational risk management.

Some banks indicated that despite the function’s criticality they remain understaffed or under-skilled to manage MRM to its ideal standard. Respondents at banks with more than $250 billion in assets, for instance, have a mean of 115 personnel dedicated to internal model risk validation, but report a mean shortfall of 18 to reach an ideal-sized staff.

To compensate, current staff are sometimes tasked with responsibilities beyond their core competencies. Even on appropriately staffed teams, knowledge gaps and skill deficiencies exist. Quantitative talent remains in high demand, and asset/liability management modelers are also highly sought after. Unsurprisingly, cost was the main barrier to full staffing.

Frequency of Model Review and Validation 

The landscape of risks shifts quickly, forcing model teams to play catch up on their assumptions and calculations. This pace of change can create potential pitfalls not always identified in the review and validation process, as models lag the latest conditions that they hope to capture. In its report on the failure of Silicon Valley Bank, for instance, the Federal Reserve Board of Governors concluded that SVB failed to assess and manage its interest rate risk—typically assessed using models—as rates rose sharply and the technology sector slowed.  

More than 90% of survey respondents, representing all sizes of banks, said they formally review their highest-risk tier models once a year, with that share dropping to just above 80% for moderate- and low-risk tier models. The frequency of model validation—a more rigorous deep-dive than a formal review—is a mixed bag among banks of all sizes. Half validate their highest-risk tier models every two years, while a quarter do it as frequently as once a year and another quarter do it less frequently. As for moderate- and lower-tier risk models, just over half of banks validate these models every three and five years, respectively. 

The goal of a formal review is to find potential issues with a model and resolve them so it better reflects risks on the ground. Sometimes, 68% of respondents said, those reviews trigger an immediate validation, though 32% said they’ve never had to escalate a review to a validation. Even so, almost 90% have found issues through their review process, though most (58%) said that has occurred less than half the time.

Emerging, and Important, New Model Risk Management Applications  

Banks have years of experience modeling various types of operational, market, and financial risks. But as new classes of risks emerge and grow, they are searching for ways to quantify and model them to better predict their effects on business. 

Take climate risk. More intense and frequent weather events pose substantial threats to bank assets, operations, and collateral. Because these outsized risks from climate change are just beginning to be recognized by the industry, banks lack the data and have yet to figure out how to account for them in their scenario planning (see “Why Predicting Climate Impact Is a Daunting Task for Banks” for more). Most survey respondents (84%) said they had no climate risk models or tools, though 90% of those that do have climate models or non-model analytical tools said they include them in their model inventory.

Cybersecurity, among the top concerns expressed in RMA’s 2024 CRO Outlook Survey, is another area where banks are still identifying their needs in model risk management. Almost half said they had no models or tools for modeling cybersecurity risk. 

Artificial intelligence and machine learning have created complex challenges for modelers. Banks have taken different approaches to classifying AI/ML tools as models. In 2022, 90% of survey respondents with AI/ML tools included them in their model inventory. In 2024, that number dropped to 66%, influenced, in part, by ongoing learning about generative AI and its similarities to and differences from quantitative and qualitative models. Just 44% said they “always” validated their AI/ML tools, while another 12% said they “never” validate them. Are AI/ML tools “different” from a classic model when it comes to validation? Almost two-thirds (61%) said “no.” 

Vendor Solutions and Transparency 

Generative AI highlights one of banks’ biggest frustrations in understanding and controlling their universe of models—third-party transparency. With 70% of respondents saying they use some form of AI/ML, some vendor-supplied, the explainability of third-party solutions remains a top priority.  

It seems, though, that vendors have not made the necessary adjustments to explain how the black-box components of their solutions work. Just 3% of respondents said vendors describe these “very well,” while 97% said they did so “moderately well” to “not well at all.” That’s about the same distribution of responses (4% and 96%) as in the 2022 survey.

Some banks try to preemptively minimize blind spots in vendor products by incorporating contract language that requires vendors to meet model risk management standards for documentation, testing, and validation. Only a third, though, do it always or most of the time. More respondents (55%) do it less than half the time or never. 

Where respondents found vendor documentation most lacking was around the assumptions and limitations of their products (70%). The data, inputs, and parameter values category was a close second (68%), a marked increase from the 2022 survey (43%). Explainability of model design and analytics also ranked as highly deficient (66%). Engaging the vendor directly rarely led to further insights, respondents said. Instead, roughly 65% mitigate gaps in model transparency by benchmarking against other vendors or an internal model, leaning on industry publications and academic research, implementing more frequent or rigorous performance monitoring, or placing additional constraints on model use.

Final Takeaways 

Judging by the results of the 2024 survey, banks bear a heavier load in model risk management each year. At the same time, they are constantly evaluating how the discipline of model risk management can be applied to more areas so that risks can be better understood and managed. Meeting acceptable staffing and expertise requirements is an important step toward a solution. Given rapid shifts in the risk landscape, the survey also reveals how banks’ review and validation processes may need to happen more often to keep pace with these changes. And for those third parties serving the banking industry with development support and solutions of their own, the message from banks was clear: Transparency and explainability matter—a lot—and are essential to understanding a bank’s model risk management profile.