Best Practices
Optimization tips, recommended hardware, ethical guidelines, and solutions to common problems when working with Stable Diffusion.
Hardware Recommendations
- Minimum: NVIDIA GPU with 4 GB VRAM (e.g., GTX 1650) — runs SD 1.5 at 512×512 with memory optimizations
- Recommended: NVIDIA GPU with 8 GB VRAM (e.g., RTX 3060/4060) — runs SD 1.5 and SDXL comfortably
- Ideal: NVIDIA GPU with 12–24 GB VRAM (e.g., RTX 4070 Ti/4090) — all models, ControlNet, high resolutions
- CPU/Apple Silicon: possible but slow; Apple M1/M2/M3 chips work via the MPS (Metal Performance Shaders) backend
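The hardware tiers above boil down to a simple device-selection order: CUDA if an NVIDIA GPU is present, then Apple's MPS backend, then CPU as a last resort. A minimal sketch of that logic (the helper name `pick_device` is our own; in real code you would pass `torch.cuda.is_available()` and `torch.backends.mps.is_available()` as the two flags):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the best device string for running Stable Diffusion.

    Mirrors the hardware guidance above: prefer NVIDIA CUDA, then
    Apple Silicon's MPS backend, and fall back to the (slow) CPU.
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"
```

You would then move the pipeline to the chosen device with `pipe.to(pick_device(...))`.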
Optimization Tips
- Use half precision (FP16) to halve VRAM usage with negligible quality loss
- Enable xformers or Flash Attention for faster generation and lower memory
- Use VAE tiling for generating images larger than your VRAM allows
- Lower inference steps to 20-25 (diminishing returns beyond 30 for most schedulers)
- Use fast schedulers like DPM++ 2M Karras or Euler a for good results in fewer steps
- Generate at native resolution (512 for SD1.5, 1024 for SDXL), then upscale
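Most of these tips map directly onto one-line calls in Hugging Face diffusers. The sketch below shows how they might be combined; it assumes diffusers and PyTorch are installed, a CUDA GPU is available, and uses `runwayml/stable-diffusion-v1-5` as a placeholder model ID. The function name is our own, not a library API:

```python
def build_optimized_pipeline(model_id="runwayml/stable-diffusion-v1-5"):
    # Heavy imports are kept inside the function so the sketch can be
    # read and imported without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

    # Tip 1: FP16 halves VRAM usage with negligible quality loss.
    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    )
    # Tips 4-5: DPM++ 2M Karras gives good results in ~20-25 steps.
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(
        pipe.scheduler.config, use_karras_sigmas=True
    )
    pipe = pipe.to("cuda")
    # Tip 2: memory-efficient attention; fall back to attention
    # slicing if xformers is not installed.
    try:
        pipe.enable_xformers_memory_efficient_attention()
    except Exception:
        pipe.enable_attention_slicing()
    # Tip 3: VAE tiling decodes large images in chunks that fit in VRAM.
    pipe.enable_vae_tiling()
    return pipe
```

With this setup, `pipe(prompt, num_inference_steps=25, height=512, width=512)` generates at SD 1.5's native resolution, per the last tip.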
Ethical Considerations
Respect copyright: Be thoughtful about using artist names in prompts. The training data includes copyrighted work, and mimicking specific artists' styles raises ethical questions.
Transparency: Label AI-generated images when sharing them publicly. Do not present AI-generated images as photographs or original artwork without disclosure.
Deepfakes: Never use these tools to create non-consensual imagery of real people. Many jurisdictions have laws against this.
Troubleshooting
- CUDA out of memory: Reduce resolution, enable FP16, use xformers, or lower batch size
- Blurry or distorted faces: Add face-specific quality terms to prompt, use a face restoration model (CodeFormer, GFPGAN), or use ADetailer extension
- Wrong composition: Use ControlNet for structural guidance, or try img2img with a rough sketch
- Oversaturated colors: Lower CFG scale (try 5-7), or add "natural colors" to prompt
- Repeated patterns/artifacts: Change the scheduler, adjust step count, or try a different seed
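Several of these fixes are just parameters on the generation call. A hedged sketch, assuming `pipe` is an already-loaded diffusers pipeline on a CUDA device (the function name and prompt text are illustrative, not a library API):

```python
def generate_with_fixes(pipe, prompt, seed=1234):
    import torch

    # A fixed seed makes results reproducible, so you can compare a
    # scheduler or step-count change fairly before blaming the prompt.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    result = pipe(
        prompt,
        # Negative prompt targets the common failure modes above.
        negative_prompt="blurry, distorted face, oversaturated",
        # Lower CFG (5-7) tames oversaturated colors.
        guidance_scale=6.0,
        num_inference_steps=25,
        generator=generator,
    )
    return result.images[0]
```

If this still runs out of memory, combine it with the FP16/xformers/VAE-tiling optimizations above or reduce the output resolution.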
Workflow Tips
- Start with low steps (15-20) and small batches to iterate on prompts quickly
- Once you find a good prompt, increase steps and generate multiple seeds
- Use img2img to refine promising results
- Use inpainting to fix specific areas rather than regenerating everything
- Save your favorite prompts, seeds, and settings for reproducibility
- Upscale final images with a dedicated upscaler (Real-ESRGAN, 4x-UltraSharp)
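The "refine with img2img" step can be sketched with diffusers' img2img pipeline. This assumes an installed diffusers/PyTorch stack, a CUDA GPU, and the placeholder model ID `runwayml/stable-diffusion-v1-5`; the function name is our own:

```python
def refine(init_image, prompt, model_id="runwayml/stable-diffusion-v1-5"):
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    # Low strength (~0.3-0.4) preserves the promising composition and
    # only re-renders details; higher values drift from the original.
    result = pipe(
        prompt, image=init_image, strength=0.35, num_inference_steps=30
    )
    return result.images[0]
```

For localized fixes (a bad hand, a distorted face), prefer the inpainting pipeline with a mask over re-running img2img on the whole image.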
Congratulations! You have completed the Stable Diffusion course. You now understand how diffusion works and can craft effective prompts, guide composition with ControlNet, fine-tune custom models, and work with professional tools.
Lilly Tech Systems