Gen-3 Alpha for Beginners
Gen-3 Alpha is Runway's most advanced video generation model. It produces remarkably coherent, high-fidelity video from text prompts or reference images. This lesson covers prompt engineering techniques, input modes, style control, and settings that give you creative control over the output.
Input Modes
| Mode | Input | Best For |
|---|---|---|
| Text to Video | Text prompt only | Pure creative generation, concept exploration |
| Image to Video | Reference image + text prompt | Animating specific visuals, consistent style |
| Image + Text | Both inputs combined | Maximum control over content and motion |
Prompt Engineering for Video
Video prompts differ from image prompts because they must describe motion and temporal change. Effective video prompts include:
- Subject: Who or what is in the scene
- Action: What movement or change occurs
- Camera: Camera angle and movement (dolly, pan, zoom, static)
- Style: Visual aesthetic (cinematic, anime, documentary)
- Atmosphere: Lighting, mood, time of day
Prompt Formula: "[Camera movement] of [subject] [action] in [setting], [style/mood], [lighting]." Example: "Slow dolly shot of a woman walking through a neon-lit Tokyo street at night, cinematic, volumetric fog, blade runner aesthetic."
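The formula above can be sketched as a small helper that assembles the pieces in order. The function and field names here are illustrative only, not part of any Runway API:

```python
def build_video_prompt(camera, subject, action, setting, style, lighting):
    # Follows the formula: "[Camera movement] of [subject] [action] in
    # [setting], [style/mood], [lighting]."
    return f"{camera} of {subject} {action} in {setting}, {style}, {lighting}."

prompt = build_video_prompt(
    camera="Slow dolly shot",
    subject="a woman",
    action="walking",
    setting="a neon-lit Tokyo street at night",
    style="cinematic, blade runner aesthetic",
    lighting="volumetric fog",
)
print(prompt)
```

Keeping each element in its own variable makes it easy to swap one component at a time (for example, only the camera movement) while holding the rest of the prompt constant.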
Generation Settings
| Setting | Options | Impact |
|---|---|---|
| Duration | 5s or 10s | Longer = more credits, more motion |
| Resolution | 720p, 1080p | Higher = more credits, better detail |
| Seed | Random or fixed | Fixed seed for reproducible results |
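The allowed values from the table can be checked before you spend credits on a job. This dataclass is a sketch of that validation, not part of Runway's SDK; the option values mirror the table above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GenerationSettings:
    duration_s: int = 5          # 5 or 10 seconds; longer costs more credits
    resolution: str = "720p"     # "720p" or "1080p"; higher costs more credits
    seed: Optional[int] = None   # None = random; a fixed int = reproducible

    def __post_init__(self):
        # Reject values outside the options the model accepts.
        if self.duration_s not in (5, 10):
            raise ValueError("duration must be 5 or 10 seconds")
        if self.resolution not in ("720p", "1080p"):
            raise ValueError("resolution must be '720p' or '1080p'")

# A fixed seed reproduces a result while you tweak only the prompt.
settings = GenerationSettings(duration_s=10, resolution="1080p", seed=42)
```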
Iterating on Generations
AI video generation is iterative. Generate multiple variations, identify the best outputs, and refine your prompt based on what works. Use the "Extend" feature to continue a successful generation beyond its initial duration.
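One way to structure that iteration is to hold the seed fixed and vary a single prompt element per generation, so differences in the output come from the prompt change alone. In this sketch, `generate` is a hypothetical stand-in for whatever API call or manual UI step you actually use:

```python
# Vary only the lighting/atmosphere clause while the seed stays fixed.
def generate(prompt: str, seed: int) -> str:
    # Placeholder: a real workflow would submit this to Runway and
    # return a video; here we just echo the request for illustration.
    return f"video(seed={seed}, prompt={prompt!r})"

base = "Slow dolly shot of a woman walking through a neon-lit Tokyo street at night, cinematic"
variants = ["volumetric fog", "harsh neon glare", "soft rain and reflections"]

results = [generate(f"{base}, {v}", seed=42) for v in variants]
for r in results:
    print(r)
```

Comparing the three outputs side by side shows which atmosphere clause works, and the winning prompt can then be extended with the "Extend" feature.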
Lilly Tech Systems