Introduction to Model Risk Management
Understand what model risk means in the context of AI and machine learning, why regulatory bodies require formal model risk management, and the foundational concepts that underpin effective model governance.
What is Model Risk?
Model risk is the potential for adverse consequences from decisions based on incorrect or misused model outputs. In financial services and regulated industries, models drive critical decisions on credit, pricing, fraud detection, and capital allocation. Errors in these models can lead to significant financial losses, regulatory penalties, and reputational damage.
Sources of Model Risk
| Source | Description | AI/ML Example |
|---|---|---|
| Data Quality | Errors, biases, or gaps in training data | Biased training data leading to discriminatory lending models |
| Methodology | Flawed assumptions or algorithms | Overfitting to historical patterns that no longer hold |
| Implementation | Coding errors or integration defects | Feature engineering bugs in production pipeline |
| Misuse | Using models outside their intended scope | Applying a credit model trained on prime borrowers to subprime applicants |
| Environment Change | Shift in the conditions the model was built for | COVID-era drift in economic forecasting models |
Regulatory Context
SR 11-7 (Federal Reserve, 2011)
The foundational guidance on model risk management for banking organizations. Establishes expectations for model development, validation, and governance.
OCC 2011-12
The OCC's companion guidance aligning with SR 11-7 requirements for national banks and federal savings associations.
EU AI Act
European regulation classifying AI systems by risk level, with high-risk systems requiring formal risk management analogous to SR 11-7.
NIST AI RMF
Voluntary framework providing AI risk management guidance applicable across industries, complementing SR 11-7 for AI/ML models.
Why AI/ML Amplifies Model Risk
Complexity
Deep learning models are inherently less interpretable than traditional statistical models, making validation and error detection harder.
Data Dependency
ML models are highly sensitive to training data quality and distribution. Data drift can silently degrade model performance.
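One widely used screen for this kind of drift is the Population Stability Index (PSI), which compares a feature's current distribution against the training-time distribution. The sketch below is illustrative, not prescribed by any of the guidance above; the function name, bin count, and the conventional 0.10/0.25 alert thresholds are assumptions, not requirements.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample (e.g., training data) and a
    current sample. Bin edges come from the reference quantiles;
    a small epsilon guards against empty bins."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values

    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)

    eps = 1e-6
    expected_pct = np.clip(expected_pct, eps, None)
    actual_pct = np.clip(actual_pct, eps, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)    # reference distribution
stable = rng.normal(0.0, 1.0, 10_000)   # same distribution: PSI near 0
shifted = rng.normal(0.5, 1.0, 10_000)  # mean shift: PSI noticeably larger

print(population_stability_index(train, stable))
print(population_stability_index(train, shifted))
```

A common rule of thumb treats PSI below 0.10 as stable and above 0.25 as significant drift warranting investigation; the point of the "silent degradation" risk is that such a check must run continuously in production, not only at validation time.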
Scale
Large organizations often deploy hundreds of ML models, creating inventory-management and monitoring challenges far beyond those of traditional model portfolios.
Velocity
Rapid model iteration and automated retraining pipelines compress the development cycle, requiring faster validation processes.
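One way to keep validation in step with automated retraining is to gate promotion on explicit checks rather than manual sign-off alone. The sketch below is a hypothetical example of such a gate; the function, metric names, and thresholds are all assumptions for illustration, not part of SR 11-7 or any framework cited above.

```python
def promote_if_valid(challenger_auc, champion_auc, input_psi,
                     min_auc=0.70, max_psi=0.25, tolerance=0.005):
    """Hypothetical pre-promotion gate for a retrained (challenger) model.

    Returns (decision, checks) so the individual check results can be
    logged for the model's audit trail.
    """
    checks = {
        # Absolute performance floor for the use case
        "meets_min_performance": challenger_auc >= min_auc,
        # No material regression versus the current champion
        "no_regression": challenger_auc >= champion_auc - tolerance,
        # Input distributions stable relative to training data
        "inputs_stable": input_psi <= max_psi,
    }
    return all(checks.values()), checks

decision, checks = promote_if_valid(challenger_auc=0.75,
                                    champion_auc=0.74,
                                    input_psi=0.10)
print(decision, checks)
```

Encoding the checks in the pipeline makes each retraining cycle produce the same evidence a human validator would ask for, which is what allows validation to keep pace with deployment velocity.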
Lilly Tech Systems