The OCC’s current supervision priorities make clear that model risk management is a key area of focus. Whether it’s asset and liability management, credit risk management, or the allowance for credit losses, banks need models that are properly developed and validated.
How can financial institutions improve their model risk management (MRM), particularly as it relates to models in key risk measurement and management areas?
My experience teaching RMA’s model validation and governance course—and my conversations with the attendees from community, midsize, and large banks—point to five key ways banks can improve their MRM.
1. Implement standards and procedures for model development.
Model developers are the first line of defense for model risk in an organization. However, without any standards or procedures, developers may inadequately or inconsistently apply methodologies for building and testing models—and fall short in their critical risk management role.
Models are sometimes submitted to the second line of defense for validation without critical tests that might have shown them to be inadequate. The result is unnecessary delays in model implementation, friction between the lines of defense, and wasted resources. What’s the fix? A model submission should include developmental evidence that the required standards and procedures were followed in building the model and in determining it is fit for use. These standards and procedures should reflect the organization’s model risk appetite, and they should apply equally to models developed internally and to those that are outsourced. The latter point is especially important because development methodology can vary significantly from vendor to vendor, so the standards and procedures should be written into the vendor requirements.
2. Define what a model is.
Without a clear definition of what constitutes a model, the model inventory will be incomplete, and model risk will rise accordingly. For example, a bank could fail to address an underperforming or misused financial reporting model simply because it was never classified as a model and so never fell under model risk review. In that case the bank may be unknowingly taking on material risk because an analytical construct was misclassified. The argument “this is not a model because it is only for accounting and it is subject to a high level of qualitative adjustments” may sound familiar. How can you know for sure? In the course we establish a working definition of a model and examine several examples of “models,” including ones used for calculating the allowance for credit losses, which tend to be subject to significant qualitative adjustments.
According to the supervisory guidance on model risk management (Federal Reserve SR 11-7, adopted by the OCC as Bulletin 2011-12), the definition of a model covers quantitative approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided that the output is quantitative in nature. Uncertainty in qualitative models stems from the use of judgment and assumptions, so the rigor of their validation should be commensurate with the associated risks, such as the financial reporting risk in our example.
It is important to establish a model definition and apply it consistently across your organization so that the model inventory is complete; any exceptions should be recorded. Each inventory record should include the model risk score assigned at the original validation as well as ongoing scores. Any change in model risk factors over time, such as a change in materiality, will affect the model risk score. These dynamic scores should be an integral part of regular model risk reporting to the board of directors.
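To make this concrete, a model inventory record with a dynamic risk score might be sketched as below. This is a minimal illustration only; the field names and the high/medium/low scoring scale are assumptions, not prescribed by any guidance, and a real inventory would carry many more attributes (use cases, validation dates, owners of compensating controls, and so on).

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryRecord:
    """One entry in the enterprise model inventory (illustrative fields only)."""
    model_id: str
    name: str
    owner: str
    is_exception: bool = False          # flags recorded exceptions to the model definition
    original_risk_score: str = "high"   # score assigned at the original validation
    score_history: list = field(default_factory=list)  # (date, score) pairs over time

    def rescore(self, date: str, score: str) -> None:
        """Log a new risk score when a risk factor (e.g., materiality) changes."""
        self.score_history.append((date, score))

    def current_score(self) -> str:
        """Latest score, for board-level model risk reporting."""
        return self.score_history[-1][1] if self.score_history else self.original_risk_score

# Example: an allowance model whose materiality grows, prompting a rescore
record = ModelInventoryRecord("M-001", "CECL allowance model", "Finance")
record.rescore("2024-06-30", "high")
```

A report to the board could then aggregate `current_score()` across the inventory and highlight any records whose score has changed since the prior period.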
3. Enforce model validation procedures.
Having procedures in place, defined by model use case, is the only way for an organization to ensure consistent validation of internally and externally developed models. Following these procedures, a model validator within the organization can provide effective challenge to a specific model through conceptual soundness review and outcomes analysis, and internal audit can check whether the procedures were followed and the challenge was effective.
Similar to the model development procedures, the validation procedures for the second line of defense should reflect the model risk appetite of the organization. They should also serve as part of the requirements when outsourcing model validation to a third party.
4. Monitor your models.
Model risk management is not a one-and-done exercise. Model validation should follow the model life cycle and include ongoing monitoring to test whether model performance remains within an acceptable range. The frequency of revalidation typically depends on the model risk score: a CECL model, for example, is usually considered high risk and should be monitored at least annually, with full revalidation every two years if there has been no material change. Lower-risk models can be revalidated less frequently.
During its use a model may be modified or applied to a new product, either of which triggers new monitoring and validation. The argument “the model did not change, so it does not need to be revalidated” will likely not pass muster. Monitoring and model modification procedures should be in place to govern the whole model life cycle.
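One common ongoing-monitoring test is the population stability index (PSI), which measures how far the current input or score distribution has drifted from the distribution at model development. The sketch below uses conventional industry rules of thumb for the thresholds (below 0.10 stable, 0.10 to 0.25 monitor, above 0.25 investigate); these are assumptions, not regulatory requirements, and each bank’s monitoring procedures would set its own triggers.

```python
import math

def psi(expected: list, actual: list) -> float:
    """Population stability index over matching bins of two distributions.
    Each list holds the share of observations per bin and should sum to 1."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

# Score distribution at development vs. the current quarter (illustrative shares)
dev = [0.10, 0.20, 0.40, 0.20, 0.10]
cur = [0.12, 0.22, 0.36, 0.20, 0.10]

value = psi(dev, cur)
# Conventional rules of thumb: < 0.10 stable, 0.10-0.25 monitor, > 0.25 investigate
status = "stable" if value < 0.10 else ("monitor" if value <= 0.25 else "investigate")
```

A breach of the “monitor” or “investigate” threshold would be exactly the kind of event that triggers early revalidation under the monitoring procedures described above.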
5. Give your MRM function proper support.
MRM becomes effective when it is supported by governance: a model risk policy, standards and procedures, and ongoing model risk reporting to the board against the stated model risk appetite. In the course we provide a rubric for assessing the effectiveness of MRM within an organization of any size.
The takeaway? MRM is about more than just performing model validation. At its best, MRM goes beyond compliance by flagging model risks early and triggering model risk mitigants. To find out how it’s done, join us November 6-9 for the next run of the RMA model validation and governance course.