Security Frameworks and Standards

Lesson 6 of 7 in the AI Security Fundamentals course.

Industry Frameworks for AI Security

Security frameworks provide structured approaches to identifying, assessing, and mitigating risks in AI systems. They offer a common language for security teams, auditors, and regulators to discuss AI security posture. Understanding these frameworks is essential for building compliant, defensible AI security programs.

NIST AI Risk Management Framework (AI RMF)

The NIST AI RMF, released in January 2023, is among the most comprehensive government frameworks for AI risk management. It is organized around four core functions:

  1. Govern: Establish organizational policies, roles, and accountability structures for AI risk management
  2. Map: Identify and document the context, scope, and risks of AI systems within the organization
  3. Measure: Analyze, assess, and monitor AI risks using quantitative and qualitative methods
  4. Manage: Prioritize and act on AI risks through mitigation, transfer, or acceptance
💡 Key insight: NIST AI RMF is voluntary for most organizations but increasingly referenced in government contracts, procurement requirements, and regulatory guidance. Adopting it proactively positions your organization well for future compliance requirements.
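The four core functions can double as a coverage checklist. A minimal sketch (the function names come from the framework; the one-line summaries and the gap-report helper are illustrative assumptions, not NIST artifacts):

```python
# Track NIST AI RMF core-function coverage for an AI security program.
AI_RMF_FUNCTIONS = {
    "Govern": "Policies, roles, and accountability for AI risk",
    "Map": "Context, scope, and risks of each AI system",
    "Measure": "Quantitative and qualitative risk assessment",
    "Manage": "Mitigation, transfer, or acceptance of risks",
}

def rmf_gap_report(completed: set[str]) -> list[str]:
    """Return the core functions not yet addressed for a system."""
    return [f for f in AI_RMF_FUNCTIONS if f not in completed]

# Example: an organization with governance and mapping in place
print(rmf_gap_report({"Govern", "Map"}))  # ['Measure', 'Manage']
```

A report like this makes it easy to see, per system, which functions still need attention before an audit.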

OWASP Machine Learning Security Top 10

OWASP maintains a list of the top 10 security risks for machine learning systems:

  • ML01 - Input Manipulation: Adversarial examples and input-based attacks
  • ML02 - Data Poisoning: Corrupting training data to compromise models
  • ML03 - Model Inversion: Extracting sensitive training data from models
  • ML04 - Membership Inference: Determining if specific data was used in training
  • ML05 - Model Theft: Stealing model intellectual property through extraction
  • ML06 - AI Supply Chain: Vulnerabilities in third-party models, data, and dependencies
  • ML07 - Transfer Learning Attack: Exploiting pre-trained models used for fine-tuning
  • ML08 - Model Skewing: Manipulating model behavior through biased data or training
  • ML09 - Output Integrity: Manipulating or intercepting model outputs
  • ML10 - Model Poisoning: Directly modifying model parameters or architecture
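The Top 10 works well as a review checklist. A small sketch of that idea, assuming a simple in-memory register (the risk IDs and names mirror the list above; the `review_gaps` helper is illustrative, not part of OWASP):

```python
# OWASP ML Top 10 entries as a security-review checklist.
ML_TOP_10 = {
    "ML01": "Input Manipulation",
    "ML02": "Data Poisoning",
    "ML03": "Model Inversion",
    "ML04": "Membership Inference",
    "ML05": "Model Theft",
    "ML06": "AI Supply Chain",
    "ML07": "Transfer Learning Attack",
    "ML08": "Model Skewing",
    "ML09": "Output Integrity",
    "ML10": "Model Poisoning",
}

def review_gaps(mitigated: set[str]) -> list[str]:
    """List Top 10 entries with no documented mitigation yet."""
    return [f"{rid} - {name}" for rid, name in ML_TOP_10.items()
            if rid not in mitigated]

# Example: a team that has addressed input validation, data
# provenance, and model-extraction rate limiting
print(review_gaps({"ML01", "ML02", "ML05"}))
```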

MITRE ATLAS

MITRE ATLAS (Adversarial Threat Landscape for AI Systems) extends the MITRE ATT&CK framework to AI. It catalogs adversarial tactics, techniques, and procedures (TTPs) specific to ML systems:

Python
# MITRE ATLAS tactic mapping for ML threat assessment
ATLAS_TACTICS = {
    "Reconnaissance": [
        "Discover ML model type via API probing",
        "Identify training data sources from documentation",
        "Enumerate model API capabilities and limitations"
    ],
    "Resource Development": [
        "Develop adversarial examples for target model",
        "Create poisoned training datasets",
        "Build model extraction tools"
    ],
    "Initial Access": [
        "Compromise ML data pipeline",
        "Supply chain compromise of ML library",
        "Exploit public model API"
    ],
    "ML Attack Staging": [
        "Craft adversarial inputs",
        "Prepare data poisoning payloads",
        "Generate model extraction queries"
    ],
    "ML Attack": [
        "Execute adversarial example attack",
        "Perform model extraction",
        "Inject poisoned training data",
        "Execute prompt injection against LLM"
    ],
    "Impact": [
        "Evade ML-based detection system",
        "Steal proprietary model IP",
        "Cause model to produce harmful outputs",
        "Exfiltrate training data via model inversion"
    ]
}

# Print tactic overview
for tactic, techniques in ATLAS_TACTICS.items():
    print(f"\n[{tactic}]")
    for t in techniques:
        print(f"  - {t}")

The EU AI Act Security Requirements

The EU AI Act, which entered into force in 2024 with obligations phasing in over the following years, imposes security requirements based on risk classification:

  • Unacceptable risk: Banned AI systems (e.g., social scoring, real-time remote biometric identification in public spaces, with narrow exceptions)
  • High risk: Must implement risk management, data governance, technical documentation, transparency, human oversight, and cybersecurity measures
  • Limited risk: Transparency obligations (users must know they are interacting with AI)
  • Minimal risk: No specific requirements but voluntary codes of conduct encouraged
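The tiering above can be sketched as a simple triage helper. This is a hedged illustration for planning purposes only: the use-case lists are simplified examples, not the Act's legal criteria, and real classification requires legal review:

```python
# Illustrative EU AI Act risk-tier triage; tier names follow the Act,
# trigger lists are simplified examples.
PROHIBITED = {"social scoring", "untargeted biometric scraping"}
HIGH_RISK = {"credit scoring", "hiring", "medical diagnosis",
             "critical infrastructure control"}
LIMITED_RISK = {"chatbot", "deepfake generation"}

def classify_use_case(use_case: str) -> str:
    """Return a first-pass EU AI Act risk tier for an AI use case."""
    if use_case in PROHIBITED:
        return "Unacceptable risk (banned)"
    if use_case in HIGH_RISK:
        return "High risk"
    if use_case in LIMITED_RISK:
        return "Limited risk (transparency obligations)"
    return "Minimal risk"

print(classify_use_case("hiring"))       # High risk
print(classify_use_case("spam filter"))  # Minimal risk
```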

ISO/IEC Standards for AI

Several ISO standards address AI security:

  • ISO/IEC 23894: AI risk management guidance
  • ISO/IEC 42001: AI management system requirements
  • ISO/IEC 27001 + 27002: Information security management, applicable to AI infrastructure
  • ISO/IEC 24029: Assessment of robustness of neural networks

Choosing and Implementing Frameworks

Selecting the right frameworks depends on your organization's context:

  • Regulated industries: Start with NIST AI RMF and layer industry-specific requirements (FDA for medical AI, SEC for financial AI)
  • Global companies: Map controls to both NIST AI RMF and EU AI Act to cover major jurisdictions
  • Startups: Begin with OWASP ML Top 10 as a practical checklist, then adopt more comprehensive frameworks as you mature
  • Government contractors: NIST AI RMF compliance is increasingly expected or required
⚠️ Warning: Frameworks are guides, not guarantees. Following a framework does not make your system secure — it ensures you have considered the right categories of risk. You still need to implement specific technical controls appropriate to your system's threat model.

Building a Compliance Matrix

Create a mapping between framework requirements and your actual security controls. This compliance matrix serves as both a planning tool and an audit artifact. Track each requirement's implementation status, responsible team, and evidence of compliance.

Review and update the matrix quarterly, or whenever significant changes are made to your AI systems, frameworks are updated, or new regulations come into effect.
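A minimal sketch of such a matrix, assuming a simple in-memory model. The field names (status, owner, evidence) follow the text above; the entries and the `audit_summary` helper are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    framework: str
    requirement: str
    status: str = "not_started"  # not_started | in_progress | implemented
    owner: str = ""
    evidence: list[str] = field(default_factory=list)

def audit_summary(matrix: list[Requirement]) -> dict[str, int]:
    """Count requirements by implementation status for audit reporting."""
    counts: dict[str, int] = {}
    for req in matrix:
        counts[req.status] = counts.get(req.status, 0) + 1
    return counts

matrix = [
    Requirement("NIST AI RMF", "Map: inventory all AI systems",
                status="implemented", owner="security",
                evidence=["ai-inventory.xlsx"]),
    Requirement("EU AI Act", "High risk: human oversight controls",
                status="in_progress", owner="ml-platform"),
]
print(audit_summary(matrix))  # {'implemented': 1, 'in_progress': 1}
```

In practice teams often keep this in a spreadsheet or GRC tool; the point is that each row ties a framework requirement to an owner and auditable evidence.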

Summary

Security frameworks provide the structure for building comprehensive AI security programs. By understanding NIST AI RMF, OWASP ML Top 10, MITRE ATLAS, the EU AI Act, and relevant ISO standards, you can build a defensible security posture that meets regulatory requirements and protects against real threats. The next lesson ties everything together into a complete security strategy.