October 18, 2018 - Alex Niu

SR 11-7 Compliance: 3 Core Elements for Model Validation

An SR 11-7 compliant validation framework includes 3 core elements: an evaluation of conceptual soundness, ongoing monitoring, and outcomes analysis.

Model validation is a defined set of processes and activities intended to verify that models are performing as expected and in line with their design objectives and business uses. It also identifies potential limitations and assumptions, and assesses their possible impact. Whether you are a data scientist or a risk analyst, model validation is critical not only for meeting regulatory expectations, such as those outlined in the SR 11-7 guidance, but also for working with confidence that your results are accurate. Systematic validation procedures help banks understand both internally built models and vendor products, along with their capabilities, applicability, and limitations.

Banks that fail to validate their models run an increased risk of regulatory criticism, fines, and penalties.

I’ve had the good fortune of working with several banks over the last few years, guiding them in the development of machine learning models to serve their customers better and enhance risk management. Armed with information on what to look for when conducting a reliability assessment of a given model, banks can practice due diligence and stay compliant with regulatory requirements.

Three Key Elements of SR 11-7

So what does an effective and comprehensive validation framework look like? According to the SR 11-7 guidance issued by the Federal Reserve, here are its three core elements:

• Evaluation of conceptual soundness, including developmental evidence: This step involves assessing the quality of the model’s design and construction. It entails a review of the documentation and empirical evidence supporting the methods used and the variables selected for the model; a lack of documentation is a red flag. The documentation should capture all assumptions about data, usage, purpose, criticality, validation, and redundancy, as well as the data sources and the business case. Documentation and testing should convey an understanding of model limitations and assumptions, and validation should ensure that any judgment exercised in model design and construction is carefully considered. (A simple documentation-completeness check is sketched in the first example after this list.)

• Ongoing monitoring, including process verification and benchmarking: This step is essential to evaluate whether changes in products, exposures, activities, clients, or market conditions necessitate adjustment, redevelopment, or replacement of the model, and to verify that any extension of the model beyond its original scope is valid. Process verification checks that all model components are functioning as designed, including verifying that internal and external data inputs remain accurate, complete, and consistent with the model’s purpose and design. Sensitivity analysis and other checks for robustness and stability should be repeated periodically, at a frequency appropriate to the model’s category; they can be as useful during ongoing monitoring as they are during model development. If a model only works well for certain ranges of input values or market conditions, it should be monitored to identify situations where those constraints are approached. (See the input-drift sketch after this list.)

• Outcomes analysis: This analysis compares model outputs to corresponding actual outcomes. It helps evaluate model performance by establishing expected ranges for those actual outcomes in relation to the intended objectives, and by assessing the reasons for observed variation between the two. If outcomes analysis produces evidence of poor performance, the bank should take action to address those issues with the vendor. Outcomes analysis typically relies on statistical tests or other quantitative measures, but it can also include expert judgment to check the intuition behind the outcomes and confirm that the results make sense. It should involve a range of tests, because any individual test has weaknesses. Models with long forecast horizons should be back-tested, but given the time it would take to accumulate the necessary data, that testing should be supplemented by evaluation over shorter periods. (See the back-testing sketch that closes this list.)
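To make the documentation review concrete, here is a minimal sketch of an automated completeness check over a model’s documentation metadata. The field names and the `missing_documentation` helper are hypothetical illustrations, not an SR 11-7 requirement list; they simply mirror the items mentioned above.

```python
# Hypothetical documentation checklist for a conceptual-soundness review.
# These field names are illustrative, not mandated by SR 11-7.
REQUIRED_DOC_FIELDS = [
    "purpose", "business_case", "data_sources", "assumptions",
    "methodology", "variable_selection", "limitations",
    "validation_results", "criticality", "redundancy_plan",
]

def missing_documentation(model_doc: dict) -> list:
    """Return the documentation fields that are absent or empty."""
    return [f for f in REQUIRED_DOC_FIELDS if not model_doc.get(f)]

doc = {"purpose": "card fraud scoring", "data_sources": "core banking + device data"}
gaps = missing_documentation(doc)
if gaps:
    print("Documentation red flags:", ", ".join(gaps))
```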
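For ongoing monitoring, one widely used input check is the Population Stability Index (PSI), which flags drift between the input distribution the model was developed on and what it sees in production. PSI is common industry practice rather than something SR 11-7 prescribes; the sketch below assumes NumPy and uses synthetic data.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a development-time sample and a recent production sample.

    Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
    """
    # Bin edges come from the development (expected) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values

    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Guard against log(0) on empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
dev = rng.normal(0, 1, 10_000)        # development-time input distribution
prod = rng.normal(0.3, 1.1, 10_000)   # drifted production distribution
print(f"PSI = {population_stability_index(dev, prod):.3f}")
```

A check like this would run on a schedule tied to the model’s category, with breaches routed to the model owner for the adjust/redevelop/replace decision described above.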
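For outcomes analysis, a simple back-test is to band accounts by model score and compare the mean predicted probability in each band against the observed outcome rate. This is a generic calibration check, not DataVisor’s specific method; the 0.05 tolerance and the synthetic data below are purely illustrative.

```python
import numpy as np

def outcomes_by_band(scores, outcomes, n_bands=5):
    """Compare mean predicted probability to observed outcome rate per score band."""
    order = np.argsort(scores)
    for band in np.array_split(order, n_bands):
        predicted = scores[band].mean()
        observed = outcomes[band].mean()
        # Illustrative tolerance; real thresholds are set per model category.
        flag = "REVIEW" if abs(predicted - observed) > 0.05 else "ok"
        print(f"predicted={predicted:.3f}  observed={observed:.3f}  [{flag}]")

rng = np.random.default_rng(1)
scores = rng.uniform(0, 1, 20_000)  # model-predicted fraud probabilities
# Synthetic "actuals" that are deliberately miscalibrated versus the scores.
outcomes = (rng.uniform(0, 1, 20_000) < scores * 0.9).astype(float)
outcomes_by_band(scores, outcomes)
```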

Bringing it All Together

At DataVisor, we understand the importance of being SR 11-7 compliant and collaborate with our customers to provide model validation for fraud models. DataVisor’s Unsupervised Machine Learning platform uses a transparent algorithm and offers end-to-end processing with robust documentation.

Our platform is built for fraud prevention and follows industry best practices in theory, methodology, and performance.

So the next time you evaluate a vendor, be sure to ask the right questions and cover all the ground, because your organization’s growth depends on it.

About Alex Niu
Alex Niu is Director of Solution Engineering at DataVisor. He brings a decade of experience in the financial industry to his role, with a focus on risk management analytics. He was previously Director of Decision Science at American Express, where he led a team of data scientists developing and implementing advanced machine learning solutions.