Outsourcing Models, Onboarding Risk: Findings From ProSight’s 2025 Model Risk Management Survey

As banks expand their oversight of financial and operating risks, their risk management teams are making greater use of risk models, analytical tools, and diverse data sets, including those sourced from third-party vendors. They often supplement internal models with third-party ones, sometimes for efficiency and sometimes to gain access to more sophisticated data and analytics.

Using third-party models presents banks with unique regulatory, analytical, and operating challenges. These models must be suitable for their intended application. They must deliver meaningful insight and support decision making. And they must withstand robust validation for both risk management and regulatory oversight.

ProSight’s 2025 Model Risk Management (MRM) Survey reveals that model outsourcing is widespread: 70% of respondents said they use some combination of model development, implementation, or validation services from third parties (see Figure 1). These respondents said they rely on third parties for model validation, either fully or in collaboration with internal resources.

Figure 1. Outsourcing model development and management is widespread

Still, a majority of study participants who outsource to third parties voiced concern about the cost and quality of external validation (Figure 2). Fully 75% viewed the cost of independent validation as an obstacle to using it, and 64% questioned the quality of third-party validation. The mechanics of such validations—vendor management, timeliness, and transparency—were less pressing concerns.

Figure 2. The cost and quality of independent validation are common concerns among risk management professionals

Lack of transparency between external model developers and their clients is a persistent issue in model development and implementation. When a third party develops a model, it usually designs it for broad use, leaving clients to tailor it to their particular needs. Vendors share as little information about their data and methods as possible, seeking to protect their intellectual property from piracy and competitor scrutiny. The resulting “black box” product obscures information vital to those implementing the model and evaluating its risk.

The 2025 study also reveals that third-party vendors of analytical tools seldom provide robust descriptions of the black-box components of their offerings. Just 38% of respondents said their suppliers describe their models extremely, very, or moderately well, while a majority (62%) rated such disclosures only slightly well or not well at all (Figure 3).

“When we ask for any type of technical documentation, performance, or even an internal validation report with redacted information, that’s where it usually comes to a complete stop with a lot of them,” said a risk management leader at a midsized bank. “Some of the larger market participants with market power tend to shoot down our language asking for reasonable support for model risk management for audit or regulatory purposes.”

A third of respondents said their contract language with model providers usually requires transparent disclosure of data necessary for robust validation; two-thirds said their contracts include such obligations about half the time or less.

Figure 3. Amid uneven contract terms, descriptions of analytical tools often fall short

As vendors fall short on transparency, their clients are often unable to validate how such models work and must accept the results at least partly on faith. That can introduce new kinds of risk.

“Our MRM department asked for model documentation. But a lot of information we received was redacted,” said one risk manager. “Once I was able to look at the model documentation, it was clear that there was something obviously missing from this origination model in terms of a technique that they should have applied to it. I argued that it was probably inappropriate and against regulation to even use it.”

The model’s shortcomings were both material and apparent, and as a result, he and his risk management team abandoned their efforts to deploy this model. “And as the person with a fiduciary duty to my institution, I could easily observe and call out [the model’s failure] with very basic redacted documentation. They had a lot of technology folks, a lot of energy, a lot of fancy code and data processing, but they did not understand bank regulation,” said the risk manager.

Data from the ProSight survey reveals that respondents with strong contract language calling for disclosure of models’ inner workings are nearly three times as likely to rate their vendors’ explanations highly (Figure 4). Nearly two-thirds of the subset of respondents who require robust disclosures rate their vendors’ descriptions of black-box components moderately, very, or extremely well. Conversely, those with weaker contracts are far less likely (23%) to report such strong vendor disclosures.

Figure 4. Strong contract language delivers better disclosures from third-party vendors

Of course, structuring strong contract language is a difficult proposition. On one hand, banks need accurate and transparent information on the models they license. On the other hand, model developers are eager to protect their intellectual property and the strategies they use to solve difficult technical and analytic problems. Vendors often agree to “some of the contractual language, but it could be negotiable,” said the VP of risk management at a New England-based bank. In his experience, “they seem to say, ‘this is proprietary,’ which is rarely true. There’s normally a middle ground, or if they have a lion's share of the market, they may just say no.”

While banks understand vendors’ desire to protect their IP, they assume substantial risk when third-party disclosure is limited. Those selling advanced technology to banks need to better understand their clients’ business and regulatory requirements. “They’re not regulated like banks are, and so in order to do business in this sector, they need to abide by these contracts or banks will have to accept the risk themselves,” the VP of risk said.

While strong contract language tracks closely with higher-quality vendor disclosures, further analysis of the survey data reveals that strong contracts may offer only slight improvements in ensuring delivery of the granular data required for testing and validation (see Figure 5). Still, those with weak contracting practices are several percentage points more likely to find fault with their vendors’ disclosures on aspects such as model design and analytical methods, theoretical basis and conceptual soundness, assumptions and limitations, and testing methods. Notably, those with strong contracts are 10 percentage points more likely to have difficulty with governance and change management, perhaps due to the complexity and restrictions tied to their own contracting practices.

Figure 5. Those with strong contracts are less likely to find fault with vendors’ technical documentation

AI Has Arrived in Risk Management

An overwhelming majority of respondents (91%) say their institutions use artificial intelligence technology in some form (see Figure 6). Especially popular are machine learning (75%), Microsoft Copilot (65%), deep learning (51%), and generative AI (47%). In last year’s study, 70% of respondents said their institution was using AI and machine learning technologies.

Figure 6. AI adoption is well under way among study participants

While applications of technology vary according to institutions’ business requirements and use cases, study participants are especially likely to deploy AI for risk management in fraud detection (79%), security monitoring (43%), document processing (43%), and similar applications, as shown in Figure 7.

Figure 7. AI establishes a strong position in fraud detection

“In the past year and a half, we’ve introduced true artificial intelligence models into various risk functions at the bank, especially in fraud detection,” said one risk manager. AI supplements the labor-intensive manual processes his bank uses to fight check fraud. “AI and OCR [optical character recognition] looks at check images and detects any type of behavioral or pattern differences, check stock differences, and identifies anything that’s kind of odd—and then flags them for human review,” he said.
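As a rough illustration of that flag-for-review pattern, the sketch below routes a check either to auto-clearance or to a human reviewer based on a composite anomaly score. The features, weights, and threshold are hypothetical; a production system would use a trained model rather than hand-set weights.

```python
# Minimal sketch of the flag-for-human-review pattern described above.
# All feature names, weights, and thresholds are illustrative assumptions,
# not any bank's actual fraud system.
from dataclasses import dataclass

@dataclass
class CheckFeatures:
    amount: float            # OCR-extracted check amount
    stock_score: float       # similarity to the account's known check stock (0-1)
    signature_score: float   # similarity to signatures on file (0-1)

def anomaly_score(f: CheckFeatures, history_mean_amount: float) -> float:
    """Combine simple behavioral and physical signals into one score.

    Hand-set weights stand in for a trained model; they only serve to
    illustrate the flag-and-review control flow.
    """
    amount_ratio = f.amount / max(history_mean_amount, 1.0)
    score = 0.0
    score += 0.4 * min(amount_ratio / 10.0, 1.0)   # unusually large amount
    score += 0.3 * (1.0 - f.stock_score)           # unfamiliar check stock
    score += 0.3 * (1.0 - f.signature_score)       # signature mismatch
    return score

def route_check(f: CheckFeatures, history_mean_amount: float,
                review_threshold: float = 0.5) -> str:
    """Auto-clear low-risk items; send anything odd to a human reviewer."""
    if anomaly_score(f, history_mean_amount) >= review_threshold:
        return "human_review"
    return "auto_clear"

# Example: a large check drawn on unfamiliar stock gets flagged.
suspect = CheckFeatures(amount=9500.0, stock_score=0.35, signature_score=0.6)
print(route_check(suspect, history_mean_amount=420.0))  # -> "human_review"
```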

Looking to the future, he anticipates that AI deployments may well expand from decision support to decision making. “As AI gets more sophisticated, perhaps first at larger institutions, they may flip the switch and allow AI applications to do some type of business decisioning,” he said. “But certainly, as a midsized institution and a newcomer to AI, we’re not comfortable with that yet.”

Banks agree that using AI requires developing a well-documented framework (Figure 8). Such frameworks typically include policies for AI governance, mapping of risks and benefits from AI applications, measurement methods and metrics, and management protocols for responding to and recovering from AI risks.
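For illustration only, the skeleton below captures those four components in machine-readable form. Every key, owner, metric, and threshold is a hypothetical example of what such a framework might contain, not a prescribed standard or any respondent’s actual policy.

```python
# Illustrative skeleton of the framework components named above
# (governance policies, risk/benefit mapping, measurement, management).
# All keys and values are assumptions for illustration only.
ai_policy_framework = {
    "governance": {
        "owners": ["model_risk_management", "it_risk_management"],
        "approval_required_for": ["new_ai_use_case", "material_model_change"],
    },
    "mapping": {  # inventory of AI applications with risks and benefits
        "check_fraud_detection": {
            "benefits": ["faster review", "fewer manual checks"],
            "risks": ["false positives", "model drift", "vendor opacity"],
        },
    },
    "measurement": {  # metrics and how often they are reviewed
        "fraud_model_precision": {"target": 0.90, "review": "monthly"},
        "score_distribution_drift": {"alert_above": 0.25, "review": "daily"},
    },
    "management": {  # response and recovery protocols
        "on_drift_alert": ["suspend_auto_decisions", "escalate_to_mrm"],
        "on_incident": ["fall_back_to_manual_process", "notify_governance"],
    },
}

print(sorted(ai_policy_framework))  # the four top-level components
```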

Figure 8. IT organizations are the primary architects of AI policy frameworks

AI is, of course, both inherently technical and broadly applicable throughout banks’ operations. The largest share of study participants (39%) said their organizations rely on the IT/information security group to develop their AI policy framework.

Further complicating the question of governance is the fact that AI is dynamic and changes as it is used. Respondents interviewed for this report said AI warrants especially close scrutiny from both technologists and risk managers. “I think AI policy and governance should be co-owned by model risk management and IT risk management, which in some banks may be information security or general IT,” said the New England-based risk manager. “I like co-ownership because AI is so technical. Of course, all third-party models have technology involved, but in my view, particularly rigorous ongoing monitoring is required for AI, and I like having my IT risk management partners with me while we’re governing and managing the risk.”

He contrasted a conventional model with an AI-driven model to illustrate the vigilance required to capture the full value, and avoid the risks, of this new class of technology. “You [should] consistently monitor model output, conduct back tests, and seek other key indicators to determine whether the model is still operating as appropriate,” he said. “In my opinion, red flags are more prevalent with AI models driven by machine learning, and the models are constantly evolving and changing. If I have a conventional model that is not as sophisticated, I’ll do a validation, and in fact I might not validate that model for three years. But a machine learning model may change several times in one day, and as a result, I like having the IT folks because then they understand both the underlying technology and risk.”
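One way to make that ongoing monitoring concrete is a distribution-drift statistic such as the population stability index (PSI), a common screen in bank model monitoring. The sketch below is illustrative only; the thresholds follow widely cited rules of thumb, and the data are simulated, so nothing here reflects any institution’s actual controls.

```python
# Minimal sketch of ongoing-monitoring logic for a frequently changing
# ML model: compare current score distributions against a validated
# baseline using the population stability index (PSI). Thresholds and
# data are illustrative assumptions.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI = sum over bins of (cur% - base%) * ln(cur% / base%)."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range scores
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    cur_pct = np.histogram(current, edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)       # avoid log(0)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - base_pct) * np.log(cur_pct / base_pct)))

# Common rule-of-thumb bands: < 0.10 stable, 0.10-0.25 watch, > 0.25 alert.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 10_000)   # scores at last validation
current_scores = rng.normal(0.5, 1.3, 10_000)    # today's production scores
value = psi(baseline_scores, current_scores)
status = "alert" if value > 0.25 else "watch" if value > 0.10 else "stable"
print(f"PSI = {value:.3f} -> {status}")          # this shift lands in "alert"
```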

Closing Thoughts

Banks embrace technology from artificial intelligence developers and third parties to save time and money, to manage their businesses with a keen eye on a broad array of risks, and to gain an advantage over their competitors. As technology grows increasingly sophisticated, its reach into an organization expands, as do the potential business benefits and risks. To capture these benefits and avoid these risks, banks and their risk management teams are increasingly vigilant when deploying, testing, validating, and running AI and other third-party IT systems. To support this vigilance, they are looking for better information and collaboration from their external technology vendors.

About This Research

Data cited in this report is sourced from the fourth survey on model risk conducted by the Model Validation Consortium (MVC). In preparing the questionnaire, a subcommittee of the MVC Advisory Board reviewed questions from prior years’ studies and recommended new questions for this year’s survey. In total, ProSight collected 94 unique institutional responses to the questionnaire between early February and mid-May 2025.