Learn Hugging Face Transformers
Master the most popular open-source machine learning library. Access 400,000+ pre-trained models for NLP, computer vision, audio, and multimodal tasks — all through a unified Python API.
Your Learning Path
Follow these lessons in order, or jump to any topic that interests you.
1. Introduction
Discover the Hugging Face ecosystem, the transformers library, and the 400K+ pre-trained models available at your fingertips.
2. Pipeline API
Use the pipeline() function for text classification, image recognition, audio processing, and zero-shot tasks in just a few lines of code.
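As a taste of what this lesson covers, here is a minimal sketch of the pipeline() API. It assumes the transformers library is installed; when no model name is given, the library downloads a default checkpoint for the task on first use.

```python
from transformers import pipeline

# Sentiment analysis with the task's default model
# (downloaded automatically on first use).
classifier = pipeline("sentiment-analysis")
result = classifier("Hugging Face makes NLP remarkably easy.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The same one-liner pattern works for other tasks by swapping the task string, e.g. pipeline("zero-shot-classification") or pipeline("image-classification").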
3. Models & Tokenizers
Work with AutoModel, AutoTokenizer, and understand different model architectures like BERT, GPT, T5, and Vision Transformers.
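A quick sketch of the Auto classes covered here, using bert-base-uncased purely for illustration; any checkpoint name from the Hub works the same way.

```python
from transformers import AutoModel, AutoTokenizer

# The Auto* classes pick the right architecture from the checkpoint's config.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Transformers are versatile.", return_tensors="pt")
outputs = model(**inputs)

# BERT-base emits a 768-dimensional hidden state per token.
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```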
4. Fine-tuning
Fine-tune models with the Trainer API, prepare datasets, run evaluation, and push your custom models to the Hugging Face Hub.
5. Inference
Optimize inference performance with quantization, ONNX export, and Text Generation Inference (TGI) for production deployment.
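One of the simplest optimizations previewed here is post-training dynamic quantization with plain PyTorch; bitsandbytes, ONNX export, and TGI are heavier-weight options the lesson covers in depth. The checkpoint below is a small sentiment model used for illustration.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

# Convert Linear layers to int8 on the fly; activations stay in float.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Quantization shrinks models.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```

Dynamic quantization roughly quarters the size of the quantized weights and typically speeds up CPU inference, at a small cost in accuracy.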
6. Best Practices
Production tips, model selection strategies, performance optimization, and common pitfalls to avoid.
What You'll Learn
By the end of this course, you'll be able to:
Use Pre-trained Models
Load and run any of the 400K+ models on Hugging Face Hub for text, image, audio, and multimodal tasks.
Fine-tune Custom Models
Train models on your own data using the Trainer API and push them to the Hub for sharing.
Deploy to Production
Optimize models with quantization, ONNX, and TGI for fast, cost-effective inference at scale.
Navigate the Ecosystem
Understand the full Hugging Face platform including Hub, Spaces, Datasets, and community tools.
Lilly Tech Systems