
MRM Best Practices: A Q&A with a Validator from RMA’s MVC

Diego Hernán Alvarez recently interviewed Jeison Gil, both of whom are lead quantitative analysts and validators with RMA’s Model Validation Consortium (MVC). Gil offered best practices regarding the model validation process.

The MVC was created to better manage the model validations, revalidations, and annual reviews of mid-tier and community bank members at reduced cost, while sharing model risk management best practices.

Built by bankers, for bankers, the MVC is available exclusively to RMA members. The MVC works with members to provide validations appropriate for the size and risk profile of the bank, and helps to manage capacity through its in-house, highly qualified team. If you would like to learn more or become a member, please email rmamvc@rmahq.org or visit www.rmahq.org/mvc.

Alvarez: How would you determine whether a “tool” has modeling components?

Gil: First, I thoroughly read the model documentation several times. During the last read, I break the documentation into the main validation components and create a personal flowchart linking the model's purposes and objectives, systems, tools, assumptions and limitations, inputs, outputs, and so on. This lets me track all of the model's interconnections, both theoretically and practically. In my opinion, before diving into the more complex parts of the validation, the validator must first be clear about the model's purpose and objectives (not all validators do this thoroughly), since they set the landscape for the validation. Sufficiently good model documentation should allow the validator to easily understand what the methodologies are, and how and where they are implemented. Since this is not always the case, I usually follow the steps below.

  • Is the model internally developed? If yes, then one usually has access to the development code, and it is easier to identify the modeling components once the model's objectives and purposes are understood. The idea is to follow the principles of SR 11-7, which defines a model in terms of three main components: inputs, processing (usually a mathematical model or function), and reporting (a minimal sketch of this decomposition appears after this list).

    Inputs are most commonly tied to the model assumptions, and their complexity might give rise to further modeling components, so properly identifying them also clears the validator's path toward identifying and separating the modeling components of the different tools that may be found in a full model.

    The processing component generally gathers the model assumptions and data inputs and applies one or more processing functions that tie them together to produce the final model outcome or estimates. It may be a single component, or a set of processing functions, each depending on the previous one (a waterfall scheme).

    The reporting component might be as simple as the raw output provided by the processing function(s) or it might be further modified by model owners.

    Thus, fully understanding the input and output data sets, and the information that enters and leaves a tool, will also help identify whether the tool has a modeling component.

  • Is the model implemented by a vendor? If yes, then access to the code will likely be partially or completely restricted, and it is harder to determine whether the tool contains a modeling component.
    All the previous steps still apply, but they require the vendor's assistance in providing documentation so that the items above can be easily identified. Another way of identifying the modeling components is to review what information enters the tool and what comes out.

  • Finally, there may be other modeling components pertaining to the input data management processes, such as data transformations, cleaning, and exploratory analysis. These tend to be more explicit and easier to track, since they are usually performed internally by developers, both in internally developed models and in vendor models (where they are part of the bank's customization of the tool).
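
A minimal sketch of the SR 11-7 input/processing/reporting decomposition referenced in the first bullet, assuming a toy expected-loss calculation purely for illustration (the class, functions, and figures are not drawn from any particular bank's model):

    from dataclasses import dataclass

    @dataclass
    class ModelInputs:
        balances: list        # exposure per loan (input component)
        default_rates: list   # assumed probability of default per loan

    def processing(inputs):
        """Processing component: expected loss per loan = balance * PD."""
        return [b * pd for b, pd in zip(inputs.balances, inputs.default_rates)]

    def reporting(expected_losses):
        """Reporting component: aggregate the raw processing output."""
        return {"total_expected_loss": sum(expected_losses),
                "per_loan": expected_losses}

    inputs = ModelInputs(balances=[100_000, 250_000], default_rates=[0.02, 0.01])
    print(reporting(processing(inputs)))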

Alvarez: What is the role of the validation process in assessing the impact of such modeling components on the final model output?

Gil: Every modeling component needs to be validated from both the theoretical and the practical point of view, and the distinction matters. The validator might find errors in the model documentation where a component is wrongly specified (a bad formula), which is a theoretical error. Another theoretical error occurs when a component is applied to something it is not compatible with; one example is applying the VaR scaling factor that is only valid for stable distributions (such as the normal) to a non-stable distribution.
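
As a hedged illustration of that kind of theoretical error, the sketch below assumes the common square-root-of-time reading of the VaR scaling factor and illustrative parameters (99% level, 10-day horizon, Student-t returns with 3 degrees of freedom). It compares the scaled one-day VaR with a directly estimated multi-day VaR for a normal and a fat-tailed return distribution; the mismatch in the second case is the sort of discrepancy a validator should flag:

    import numpy as np

    rng = np.random.default_rng(0)
    n, horizon, level = 100_000, 10, 0.99

    def empirical_var(losses, level):
        # Empirical VaR: the loss quantile at the given confidence level.
        return np.quantile(losses, level)

    draws = {
        "normal (stable)":  lambda size: rng.normal(0.0, 0.01, size),
        "student-t (df=3)": lambda size: 0.01 * rng.standard_t(3, size),
    }

    for name, draw in draws.items():
        one_day_losses = -draw(n)                         # 1-day losses
        ten_day_losses = -draw((n, horizon)).sum(axis=1)  # 10-day losses
        scaled = np.sqrt(horizon) * empirical_var(one_day_losses, level)
        direct = empirical_var(ten_day_losses, level)
        print(f"{name:18s} sqrt-scaled VaR={scaled:.4f}  direct 10-day VaR={direct:.4f}")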

Practical errors tend to be tied more to the system implementations or code. The validator is expected to cross-check the code implementation against the model documentation: there can be no differences between what the documentation says is done and what the code actually does.

Errors are also expected to be measured in terms of materiality. This is not always easy, but it should be done in at least an estimated way. When an explicit measure is not available, the validator can fall back on the most basic materiality measure in the model that ties back to the error. For example, I might not be able to measure the actual impact of an error, but I know the exposure of the related instruments (balance, VaR, RWA, etc.) where the error occurs.
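
A small hedged sketch of that fallback, bounding an error's materiality by the exposure of the affected positions against an assumed bank-wide base (all figures are illustrative):

    # Bound the error's impact by the exposure of the affected instruments
    # when the exact impact cannot be computed.
    affected_exposures = {"portfolio_A": 12_500_000, "portfolio_B": 3_750_000}
    total_rwa = 950_000_000   # assumed bank-wide RWA used as the materiality base

    affected = sum(affected_exposures.values())
    print(f"Affected exposure: {affected:,.0f} ({affected / total_rwa:.2%} of total RWA)")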

Alvarez: Is it necessary to check the conceptual soundness of such modeling components during the validation?

Gil: Yes, each component needs to be assessed for conceptual soundness. Conceptual soundness encompasses all the activities that start from the theoretical foundation of the model, including research on industry standards and on the model methodologies, data management, data cleaning, data processing, and data manipulation, and end with the model output and its interpretation. Conceptual soundness is commonly mistaken for covering only the theoretical side.

Alvarez: If some of the model's uses change, how important is it to have those changes validated?

Gil: Model changes of any sort can never be taken lightly. They need to be fully developed and explained by developers or model owners. Appropriate documentation is key.

And there has to be an appropriate governance policy for model changes. Understanding the model limitations is key to the validation of these items.

Validation is important because, as was seen in the 2008 crisis, some models were used incorrectly, expanding a model's use beyond what was intended. That cannot go unvalidated and should not generally be accepted.

Alvarez: Could the scenario and sensitivity analyses performed during validation exercises help provide guidance during high-volatility (or very disruptive) environments?

Gil: It is important to separate each of the components described above.

Sensitivity tests: These are generally performed on model parameters or inputs. Their purpose is to evaluate whether the model is responsive to changes in those parameters or inputs. In this way, the validator can assess whether the changes are captured by the model, which sheds light on whether the model can and should be used in highly volatile periods.
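
A minimal sketch of such a one-at-a-time sensitivity test, assuming a toy expected-loss model (EL = PD * LGD * EAD) and illustrative +/-10% shocks:

    def model_output(pd, lgd, exposure):
        """Toy expected-loss model: EL = PD * LGD * EAD (illustrative only)."""
        return pd * lgd * exposure

    base = {"pd": 0.02, "lgd": 0.45, "exposure": 1_000_000}
    base_out = model_output(**base)

    # Shock each parameter one at a time and record the change in output.
    for param in base:
        for shock in (-0.10, 0.10):
            shocked = dict(base, **{param: base[param] * (1 + shock)})
            delta = model_output(**shocked) - base_out
            print(f"{param:9s} {shock:+.0%} -> output change {delta:+,.2f}")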

Scenarios: These are usually part of the stress-testing framework. The main purpose is to develop plausible but sufficiently adverse scenarios that shed light on the institution's unknown risks, or on interrelations between modeling components. Once the impacts are identified, the aim is for the institution to create a remediation or contingency plan in light of the results. Their appropriate development is therefore quite important in helping the institution prepare for highly volatile periods.
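
A companion sketch for scenarios, reusing the same toy expected-loss model with illustrative, uncalibrated stress multipliers:

    scenarios = {
        "baseline":        {"pd_mult": 1.0, "lgd_mult": 1.0},
        "moderate_stress": {"pd_mult": 1.5, "lgd_mult": 1.1},
        "severe_stress":   {"pd_mult": 2.5, "lgd_mult": 1.3},
    }
    base_pd, base_lgd, exposure = 0.02, 0.45, 1_000_000

    for name, s in scenarios.items():
        # Apply the scenario multipliers to the baseline assumptions.
        el = (base_pd * s["pd_mult"]) * (base_lgd * s["lgd_mult"]) * exposure
        print(f"{name:16s} expected loss = {el:,.0f}")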

Alvarez: Under what circumstances are model overrides necessary? What mechanisms should be used to track them appropriately in order to enhance transparency?

Gil: Model overrides are often the product of identified model limitations. Another reason for overrides is a discrepancy between modeling component outputs and expert knowledge or the business's strategic direction.

The best way to track them is to include them in the model documentation and, if possible, in the implementation systems. They need to be well developed and explained in the model documentation, and the awareness and approval of senior management should be made explicit.
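
As a hedged sketch of how an override might be tracked in an implementation system, the record below captures the elements discussed above (affected component, rationale, and senior-management approval); the field names and values are illustrative, not a prescribed schema:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ModelOverride:
        model_id: str
        component: str         # which modeling component is overridden
        original_value: float
        override_value: float
        rationale: str         # e.g., identified limitation or expert judgment
        approved_by: str       # senior-management approval made explicit
        effective_date: date
        review_date: date      # when the override should be revisited

    override = ModelOverride(
        model_id="CECL-RETAIL-01",
        component="prepayment_rate",
        original_value=0.08,
        override_value=0.05,
        rationale="Identified model limitation: rate-shock behavior not captured",
        approved_by="Model Risk Committee",
        effective_date=date(2024, 1, 1),
        review_date=date(2024, 7, 1),
    )
    print(override)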