Advanced

Abnormally Dangerous Activity

A practical guide to abnormally dangerous activity for AI risk management practitioners.

What This Lesson Covers

Abnormally Dangerous Activity is a key topic within Strict Liability for AI. In this lesson you will learn the underlying liability framework, the controlling legal authorities, how to evaluate exposure and procure matching insurance, and the common pitfalls. By the end you will be able to apply the abnormally dangerous activity doctrine in real risk-management work.

This lesson belongs to the Tort Liability category of the AI Liability & Insurance track. AI liability is now one of the fastest-evolving areas of law, and the insurance market is racing to catch up. Practitioners who understand both sides ship faster, win bigger deals, and avoid existential incidents.

Why It Matters

Master strict liability for AI. Learn the abnormally dangerous activity doctrine, statutory strict liability, the EU's strict product liability regime, and emerging strict liability proposals.

The reason abnormally dangerous activity deserves dedicated attention is that the gap between teams that take AI liability seriously and teams that don't is widening every quarter. A single uninsured loss or successful class action can dwarf a year of revenue. Understanding the liability landscape and the insurance products available is no longer optional — it is core risk management.

💡
Mental model: Treat abnormally dangerous activity as engineering risk management, not paperwork. The teams that ship AI fastest and most safely are the ones who design liability allocation, insurance procurement, and operational controls into the product from day one — not bolted on after the first regulatory letter arrives.

How It Works in Practice

Below is a practical framework for abnormally dangerous activity. Read it once, then apply it to a real AI use case you are advising on or operating today.

# Strict liability theories for AI

STRICT_LIABILITY_DOCTRINES = {
    "abnormally_dangerous_activity": {
        "test_restatement_3rd_section_20": [
            "1. Activity creates a foreseeable and highly significant risk of harm even when reasonable care is exercised",
            "2. Activity is not one of common usage",
        ],
        "ai_application": (
            "Could apply to: autonomous lethal weapons, fully autonomous vehicles in early "
            "deployment, certain frontier AI applications. Generally a high bar."
        ),
    },
    "product_liability_strict": "Most common strict liability for AI products",
    "statutory_strict_liability": [
        "BIPA (Illinois) - strict liability for biometric privacy",
        "VPPA - strict liability for video viewing data",
        "Some state autonomous vehicle laws",
    ],
}

EU_PRODUCT_LIABILITY_DIRECTIVE_2024 = {
    "scope": "Now includes software (incl. AI) as a 'product'",
    "burden_shifting": [
        "Presumption of defectiveness in some circumstances",
        "Disclosure of evidence by defendant",
    ],
    "covered_damages": [
        "Death and personal injury",
        "Damage to property (with thresholds)",
        "Destruction or corruption of data",
        "Medically recognized psychological harm (NEW)",
    ],
    "limitation_periods": "3 years from discovery / 25 years from product placed on market",
}
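
The limitation periods above interact: a claim can be timely under the 3-year discovery rule yet barred by the longstop. Below is a simplified sketch of that interaction; the date arithmetic and rule shape are simplifications (it ignores suspension rules, the 25-year latent-injury extension, and national implementations).

# Simplified illustration of the limitation logic above. Real analysis must
# account for suspension, the 25-year latent-injury extension, and how each
# member state implements the Directive.

from datetime import date

def is_time_barred(discovered: date, placed_on_market: date, today: date) -> bool:
    """True if either the 3-year discovery period or the 10-year longstop has run."""
    three_year_bar = (today - discovered).days > 3 * 365
    ten_year_longstop = (today - placed_on_market).days > 10 * 365
    return three_year_bar or ten_year_longstop

# Claim discovered mid-2024 on a product placed on market in 2019: still timely
print(is_time_barred(date(2024, 6, 1), date(2019, 1, 1), date(2026, 1, 1)))  # False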

DEFENSES_TO_STRICT_LIABILITY = [
    "Substantial change after sale",
    "Misuse / abnormal use",
    "Compliance with mandatory regulations",
    "State of the art defense (limited - varies by jurisdiction)",
]
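
To make the two-prong test operational, here is a minimal screening sketch. The function name and boolean inputs are illustrative scaffolding rather than a legal test; treat any "potentially dangerous" output as a prompt to escalate to counsel, not a conclusion.

# Hypothetical screening helper for the Restatement (Third) § 20 two-prong
# test. Illustrative only: it structures the inquiry, it does not decide it.

def screen_abnormally_dangerous(
    highly_significant_risk_despite_care: bool,  # prong 1
    common_usage: bool,                          # prong 2
) -> str:
    if highly_significant_risk_despite_care and not common_usage:
        return "Potentially abnormally dangerous - escalate to counsel"
    return "Unlikely to qualify - analyze under negligence or product liability"

# Example: early-deployment fully autonomous vehicles in dense urban traffic
print(screen_abnormally_dangerous(
    highly_significant_risk_despite_care=True,  # residual risk even with due care
    common_usage=False,                         # not yet common usage
))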

Step-by-Step Walkthrough

  1. Identify the parties and exposure — Who could be sued? For what? Map the AI value chain (data provider, model provider, fine-tuner, deployer, integrator, end user) and the legal theories applicable to each.
  2. Quantify the potential exposure — Use damages models, statutory ranges, and class action multipliers to estimate worst-case loss (see the exposure sketch after this list). This drives both insurance limits and contractual caps.
  3. Allocate risk via contract — Who bears each risk via indemnification, limitations of liability, insurance requirements, and warranty provisions? Reduce to writing in every AI agreement.
  4. Procure matching insurance — Layer Tech E&O, cyber, product liability, D&O, and specialty AI products to cover the residual risk. Read AI exclusions VERY carefully.
  5. Build operational controls — Logs, audit trails, evals, monitoring, and incident response. These reduce both liability and premium — insurers reward documented governance.
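
As referenced in step 2, here is a minimal worst-case exposure sketch for a statutory claim. The BIPA liquidated damages figures ($1,000 per negligent violation, $5,000 per reckless or intentional violation) come from the Illinois statute; the class size and accrual assumptions are hypothetical.

# Illustrative statutory-damages ceiling: class size x violations per member
# x per-violation damages. All inputs below are hypothetical examples.

def worst_case_exposure(class_size: int,
                        violations_per_member: int,
                        per_violation_damages: int) -> int:
    return class_size * violations_per_member * per_violation_damages

bipa_reckless = worst_case_exposure(
    class_size=50_000,            # hypothetical class
    violations_per_member=1,      # accrual rules have shifted with recent amendments
    per_violation_damages=5_000,  # BIPA reckless/intentional liquidated damages
)
print(f"BIPA worst case: ${bipa_reckless:,}")  # $250,000,000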

When To Use It (and When Not To)

Abnormally Dangerous Activity applies when:

  • You operate, advise on, or insure AI systems that could cause measurable harm
  • You are negotiating AI vendor or customer contracts at any scale
  • You face regulatory scrutiny or are preparing for it
  • You need to disclose AI risk to investors, lenders, or your board

It is the wrong move when:

  • The use case is so low-risk that the cost of analysis exceeds the residual exposure
  • A different framework (pure compliance, pure ethics, pure engineering) better fits the question
  • You are still iterating on the use case — lock in the scope first, then layer liability/insurance
  • You are using liability concerns as a smokescreen to delay shipping a feature you should delay for other reasons

Common pitfall: Teams treat AI insurance as a generic checkbox, only to discover that key AI risks (algorithmic bias, hallucinations, prompt injection, training-data IP) are EXCLUDED from their existing policies. Always read AI exclusions carefully — the gap between standard tech E&O and your actual AI exposure is wider than most assume.
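
One way to keep that gap visible is to maintain a structured comparison of the AI risks you actually carry against what each policy affirmatively covers and expressly excludes. The sketch below is purely illustrative; every risk name and policy entry is hypothetical, not drawn from any real policy form.

# Hypothetical coverage-gap check. Flag risks no policy affirmatively covers,
# and risks any policy expressly excludes. All data below is made up.

AI_RISKS = {"algorithmic_bias", "hallucination", "prompt_injection",
            "training_data_ip"}

POLICIES = {
    "tech_eo": {"covers": {"negligent_services", "hallucination"},
                "excludes": {"algorithmic_bias", "training_data_ip"}},
    "cyber":   {"covers": {"data_breach"},
                "excludes": {"prompt_injection"}},
}

covered = set().union(*(p["covers"] for p in POLICIES.values()))
excluded = set().union(*(p["excludes"] for p in POLICIES.values()))

print("Not affirmatively covered:", sorted(AI_RISKS - covered))
print("Expressly excluded somewhere:", sorted(AI_RISKS & excluded))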

Practitioner Checklist

  • Have you identified all parties potentially liable in this AI use case?
  • Have you quantified worst-case exposure (statutory damages, class action math, regulatory fines)?
  • Are your contracts allocating risk explicitly via indemnification and limitations?
  • Does your insurance stack actually cover the AI-specific risks (read exclusions)?
  • Have you documented operational controls so you can defend a "due care" position?
  • Is there a tested incident response playbook for AI-related incidents?

Disclaimer

This educational content is provided for general informational purposes only. It does not constitute legal advice or insurance advice, does not create an attorney-client or broker relationship, and should not be relied on for any specific matter. Consult qualified counsel and licensed insurance professionals for advice on your specific situation.

Next Steps

The other lessons in Strict Liability for AI build directly on this one. Once you are comfortable with abnormally dangerous activity, the natural next step is to combine it with the patterns in the surrounding lessons — that is where AI liability practice goes from one-off analyses to an operating system. Liability and insurance work is most useful as a system, not as isolated checks.