Learn Data Poisoning & Training Attacks

Explore how adversaries manipulate training data to insert backdoors, bias models, and compromise AI systems. Learn detection techniques, data validation strategies, and defense frameworks to protect the ML training pipeline.

6 Lessons · Hands-On Examples · Self-Paced · 100% Free

Your Learning Path

Follow these lessons in order, or jump to any topic that interests you.

What You'll Learn

By the end of this course, you'll be able to:

Identify Poisoning Attacks

Recognize the signs of data poisoning, including label corruption, backdoor triggers, and distribution shifts in training data.
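As a taste of what the course covers, label corruption can often be surfaced with a simple neighborhood check: a sample whose label disagrees with most of its nearest neighbors is worth reviewing. This sketch uses synthetic data and an illustrative k-NN disagreement heuristic; the threshold and flipped indices are assumptions for the demo, not a prescribed method:

```python
# Illustrative sketch: flag possible label corruption via k-NN label
# disagreement. Points whose labels disagree with most of their
# neighbours are candidates for manual review. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated clusters: class 0 near (0,0), class 1 near (5,5).
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Simulate a poisoning attack: flip the labels of three samples.
flipped = [3, 7, 60]
y[flipped] = 1 - y[flipped]

def knn_disagreement(X, y, k=5):
    """Fraction of each point's k nearest neighbours with a different label."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)           # exclude self from neighbours
    nn = np.argsort(d, axis=1)[:, :k]     # indices of k nearest neighbours
    return (y[nn] != y[:, None]).mean(axis=1)

scores = knn_disagreement(X, y)
suspects = np.where(scores > 0.5)[0]      # majority of neighbours disagree
print(sorted(suspects.tolist()))
```

On clean, well-separated data this flags exactly the flipped samples; real datasets need a tuned threshold and a human in the loop.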

💻 Detect Backdoors

Use spectral signatures, activation analysis, and Neural Cleanse-style methods to find hidden backdoors in trained models.
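The core idea behind spectral signatures is that poisoned samples leave an unusually strong trace along the top singular direction of a model's centered feature representations. The sketch below demonstrates this on synthetic "penultimate-layer" features; the feature dimensions, the trigger shift, and the number of flagged samples are all illustrative assumptions:

```python
# Illustrative sketch of the spectral-signature idea: project centered
# features onto the top singular direction and flag the largest scores.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "penultimate layer" features: 95 clean, 5 poisoned samples
# sharing a common shift that stands in for a backdoor trigger's trace.
clean = rng.normal(0, 1, (95, 16))
trigger_direction = np.zeros(16)
trigger_direction[0] = 8.0                # assumed trigger trace
poisoned = rng.normal(0, 1, (5, 16)) + trigger_direction
feats = np.vstack([clean, poisoned])      # indices 95..99 are poisoned

centered = feats - feats.mean(axis=0)
# Top right-singular vector = direction of maximum variance.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = (centered @ vt[0]) ** 2          # spectral outlier score

# Flag the samples with the largest scores for removal / inspection.
k = 5
flagged = np.argsort(scores)[-k:]
print(sorted(flagged.tolist()))
```

In practice the features come from a trained network rather than synthetic draws, and scores are computed per class, since backdoors typically target a single label.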

🛠 Validate Training Data

Build robust data pipelines with provenance tracking, outlier detection, and integrity verification at every stage.
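One building block of such a pipeline is integrity verification: record a cryptographic digest for each record at ingestion, then re-verify before training so silent tampering is caught. This is a minimal sketch; the record names and manifest format are illustrative, not part of any specific tool:

```python
# Minimal sketch of integrity verification for a training-data pipeline:
# record a SHA-256 digest for each record at ingestion, then re-verify
# before training. Record ids and the manifest format are illustrative.
import hashlib

def digest(record: bytes) -> str:
    return hashlib.sha256(record).hexdigest()

def build_manifest(records: dict) -> dict:
    """Map record id -> digest at ingestion time (provenance snapshot)."""
    return {rid: digest(data) for rid, data in records.items()}

def verify(records: dict, manifest: dict) -> list:
    """Return ids of records that were added, removed, or modified."""
    bad = [rid for rid, data in records.items()
           if manifest.get(rid) != digest(data)]
    bad += [rid for rid in manifest if rid not in records]
    return sorted(set(bad))

records = {"img_001": b"cat pixels", "img_002": b"dog pixels"}
manifest = build_manifest(records)

records["img_002"] = b"dog pixels + trigger patch"   # simulated tampering
print(verify(records, manifest))
```

A real pipeline would sign the manifest itself and store it separately from the data, so an attacker who can modify records cannot also rewrite the digests.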

🎯 Secure the ML Pipeline

Implement end-to-end security controls across data collection, preprocessing, training, and deployment phases.
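One way to structure such controls is as ordered gates: each phase runs only after the previous phase's security check passes. The sketch below is a toy illustration of that pattern; every check name and state flag here is hypothetical:

```python
# Illustrative sketch (all names hypothetical): chain pipeline stages so
# each phase only proceeds after its security check passes.
from typing import Callable, Dict, List, Tuple

Check = Callable[[Dict], bool]

PIPELINE: List[Tuple[str, Check]] = [
    ("collection",    lambda s: s.get("source_signed", False)),       # provenance
    ("preprocessing", lambda s: s.get("outliers_removed", False)),    # sanitization
    ("training",      lambda s: s.get("data_hash_verified", False)),  # integrity
    ("deployment",    lambda s: s.get("backdoor_scan_clean", False)), # model audit
]

def run_pipeline(state: Dict) -> str:
    """Run security gates in order; stop at the first failing phase."""
    for phase, check in PIPELINE:
        if not check(state):
            return f"blocked at {phase}"
    return "deployed"

state = {"source_signed": True, "outliers_removed": True,
         "data_hash_verified": True, "backdoor_scan_clean": False}
print(run_pipeline(state))
```

The point of the pattern is fail-closed behavior: a model that has not passed a backdoor scan never reaches deployment, regardless of upstream results.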