Learn AWS Inferentia & Trainium

Master AWS custom silicon for machine learning. Learn how Inferentia accelerates inference workloads and how Trainium powers cost-effective model training, delivering up to 50% cost savings over comparable GPU-based instances.

6 Lessons · Hands-On Labs · Self-Paced · 100% Free

Your Learning Path

Follow these lessons in order, or jump to any topic that interests you.

What You'll Learn

By the end of this course, you'll be able to:

💻 Understand Custom Silicon: Know when and why to choose Inferentia or Trainium over traditional GPU instances for your ML workloads.

🚀 Deploy on Inferentia: Compile, optimize, and deploy models on Inf2 instances for high-throughput, low-latency inference.

🔄 Train on Trainium: Set up distributed training jobs on Trn1 instances with the Neuron SDK and popular ML frameworks.

📈 Optimize Costs: Achieve significant cost savings by leveraging AWS custom chips for inference and training workloads.