# Risk Assessment for AI Systems
HIPAA requires covered entities and business associates to conduct regular risk assessments. AI systems introduce novel threat vectors that demand specialized assessment methodologies.
## HIPAA Risk Assessment Requirements
The Security Rule requires organizations to:
- Identify all ePHI the organization creates, receives, maintains, or transmits
- Identify and assess reasonably anticipated threats to the confidentiality, integrity, and availability of ePHI
- Identify and assess current security measures
- Determine the likelihood and impact of threat occurrence
- Assign risk levels and implement appropriate mitigation
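These requirements can be operationalized as a simple risk register. The sketch below is illustrative only: the field names, the 1–5 rating scales, and the example asset are assumptions, not anything mandated by the Security Rule.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    """One row of a risk register: an ePHI asset paired with one threat."""
    ephi_asset: str                   # where ePHI is created, received, maintained, or transmitted
    threat: str                       # a reasonably anticipated threat
    current_measures: list[str] = field(default_factory=list)
    likelihood: int = 0               # filled in during assessment (e.g., 1-5)
    impact: int = 0                   # filled in during assessment (e.g., 1-5)
    mitigation: str = ""              # assigned once a risk level is determined

register = [
    RiskRegisterEntry(
        ephi_asset="clinical notes store",
        threat="unauthorized access",
        current_measures=["role-based access control", "encryption at rest"],
    ),
]
```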
## AI-Specific Threat Vectors
AI systems face threats that traditional software risk assessments may not cover:
| Threat | Description | Risk Level |
|---|---|---|
| Model inversion | Attackers reconstruct training data from model outputs | High |
| Membership inference | Determining if a specific record was in the training data | High |
| Data poisoning | Corrupting training data to manipulate AI behavior | Medium |
| Prompt injection | Manipulating LLMs to reveal PHI from context | High |
| Model theft | Stealing models trained on PHI through API extraction | Medium |
| Inadvertent disclosure | AI outputs accidentally containing PHI from training data | High |
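For automation, the same catalog can live in code. The mapping below restates the table above with illustrative key names; the risk levels are the ratings assigned in the table, not universal constants.

```python
# Machine-readable restatement of the threat table: name -> (description, risk level).
AI_THREATS = {
    "model_inversion":        ("Reconstruct training data from model outputs", "High"),
    "membership_inference":   ("Determine whether a record was in the training data", "High"),
    "data_poisoning":         ("Corrupt training data to manipulate AI behavior", "Medium"),
    "prompt_injection":       ("Manipulate an LLM into revealing PHI from its context", "High"),
    "model_theft":            ("Extract a PHI-trained model through its API", "Medium"),
    "inadvertent_disclosure": ("Outputs leak PHI memorized during training", "High"),
}

# Pull out the threats rated High, e.g. to drive review priority.
high_risk = [name for name, (_, level) in AI_THREATS.items() if level == "High"]
```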
## Risk Assessment Framework for AI
Follow this structured approach for assessing AI-specific HIPAA risks:
### 1. Inventory AI Systems
Catalog all AI systems that access, process, or could potentially expose PHI. Include training pipelines, inference endpoints, data preprocessing systems, and monitoring tools.
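A minimal inventory record might look like the following sketch; the system names and component labels are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AISystem:
    name: str
    component: str      # e.g. "training pipeline", "inference endpoint", "monitoring tool"
    touches_phi: bool   # does this component access, process, or potentially expose PHI?

inventory = [
    AISystem("readmission-model", "training pipeline", touches_phi=True),
    AISystem("readmission-model", "inference endpoint", touches_phi=True),
    AISystem("ops-chatbot", "monitoring tool", touches_phi=False),
]

# Only PHI-touching components proceed to the later assessment steps.
in_scope = [s for s in inventory if s.touches_phi]
```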
### 2. Map Data Flows
Document how PHI flows through each AI system — from ingestion to model training to inference to output storage. Identify all points where PHI is transformed, transmitted, or stored.
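One way to make the flow map queryable is to record it as directed edges between pipeline stages and compute reachability: every stage downstream of an ingestion point is a potential PHI exposure point. The stage names below are assumptions for illustration.

```python
# PHI data-flow map as (source, destination) edges between pipeline stages.
flows = [
    ("ehr_export", "preprocessing"),
    ("preprocessing", "training"),
    ("training", "model_registry"),
    ("inference_input", "inference"),
    ("inference", "output_store"),
]

def downstream(stage: str, edges: list[tuple[str, str]]) -> set[str]:
    """All stages reachable from `stage` -- each one may hold or expose PHI."""
    seen: set[str] = set()
    frontier = [stage]
    while frontier:
        node = frontier.pop()
        for src, dst in edges:
            if src == node and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen
```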
### 3. Identify Threats
For each AI system, enumerate all reasonably anticipated threats, including the AI-specific vectors listed above as well as traditional IT threats (unauthorized access, malware, insider threats).
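A sketch of that enumeration, assuming a simple mapping from component type to relevant AI vectors; which vectors apply to which component is a judgment call the assessor makes, so the mapping below is illustrative.

```python
TRADITIONAL_THREATS = ["unauthorized access", "malware", "insider threat"]

# Illustrative relevance mapping; a real assessment decides this per system.
AI_VECTORS_BY_COMPONENT = {
    "inference endpoint": ["model inversion", "membership inference",
                           "prompt injection", "inadvertent disclosure",
                           "model theft"],
    "training pipeline":  ["data poisoning", "membership inference"],
}

def enumerate_threats(component: str) -> list[str]:
    """AI-specific vectors for this component type, plus the traditional IT threats."""
    return AI_VECTORS_BY_COMPONENT.get(component, []) + TRADITIONAL_THREATS
```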
### 4. Assess Vulnerabilities
Evaluate each AI system for vulnerabilities: insufficient access controls, lack of encryption, missing audit logs, model robustness gaps, and inadequate data isolation.
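The vulnerability review can be driven by a checklist. The control names below mirror the list above; they are illustrative labels, not a standardized control set.

```python
# Controls every in-scope AI system should have (illustrative names).
CHECKLIST = [
    "access_controls",
    "encryption",
    "audit_logging",
    "model_robustness_testing",
    "data_isolation",
]

def find_gaps(controls_in_place: set[str]) -> list[str]:
    """Checklist items a system is missing; each gap is a vulnerability to assess."""
    return [c for c in CHECKLIST if c not in controls_in_place]

gaps = find_gaps({"access_controls", "encryption"})
```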
### 5. Calculate Risk
For each threat-vulnerability pair, assess the likelihood of exploitation and the potential impact on PHI confidentiality, integrity, and availability. Assign a risk score.
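A common (but not HIPAA-mandated) convention is multiplicative scoring on 1–5 scales. The band thresholds below are assumptions that an organization would set for itself.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Likelihood x impact, each rated 1 (lowest) to 5 (highest)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be on a 1-5 scale")
    return likelihood * impact

def risk_band(score: int) -> str:
    # Example thresholds; tune these to organizational risk appetite.
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"
```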
### 6. Plan Mitigation
Develop mitigation strategies for all risks above acceptable thresholds. Prioritize by risk score and implement controls in order of criticality.
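Prioritization then reduces to filtering and sorting the assessed risks. The acceptable-risk threshold and the example scores below are assumed values for illustration.

```python
ACCEPTABLE_SCORE = 8  # assumed organizational threshold, set by policy

risks = [
    {"threat": "prompt injection", "score": 20},
    {"threat": "model theft", "score": 9},
    {"threat": "data poisoning", "score": 6},
]

# Mitigate everything above the acceptable threshold, most critical first.
mitigation_queue = sorted(
    (r for r in risks if r["score"] > ACCEPTABLE_SCORE),
    key=lambda r: r["score"],
    reverse=True,
)
```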
## Mitigation Strategies
- Differential privacy: Add calibrated mathematical noise during training to prevent extraction of individual records
- Federated learning: Train models without centralizing PHI
- Output filtering: Scan AI outputs for potential PHI before delivering to users
- Rate limiting: Prevent model extraction attacks through API rate controls
- Regular penetration testing: Test AI systems specifically for healthcare data exposure
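As one concrete example, output filtering can start as a redaction pass over model responses before delivery. The sketch below is deliberately naive: real deployments need much richer PHI detection (names, MRNs, dates, addresses), and the two regexes here are illustrative only.

```python
import re

# PHI-shaped patterns to redact from model output (illustrative, not exhaustive).
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-shaped
    re.compile(r"\b\d{3}[.-]\d{3}[.-]\d{4}\b"),  # US-phone-shaped
]

def filter_output(text: str) -> str:
    """Redact PHI-like substrings from a model response before returning it."""
    for pattern in PHI_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```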
Lilly Tech Systems