Gen-3 Alpha Beginner

Gen-3 Alpha is Runway's most advanced video generation model. It produces remarkably coherent, high-fidelity video from text prompts or reference images. This lesson covers prompt engineering techniques, input modes, style control, and settings that give you creative control over the output.

Input Modes

| Mode | Input | Best For |
|---|---|---|
| Text to Video | Text prompt only | Pure creative generation, concept exploration |
| Image to Video | Reference image + text prompt | Animating specific visuals, consistent style |
| Image + Text | Both inputs combined | Maximum control over content and motion |
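The mode selection above can be sketched as a small helper. This is an illustrative function, not part of any Runway SDK; the mode names simply mirror the table.

```python
from typing import Optional

def choose_mode(prompt_text: Optional[str], reference_image: Optional[str]) -> str:
    """Pick the Gen-3 Alpha input mode matching the inputs you have on hand."""
    if prompt_text and reference_image:
        return "Image + Text"    # maximum control over content and motion
    if reference_image:
        return "Image to Video"  # animate a specific visual
    if prompt_text:
        return "Text to Video"   # pure creative generation
    raise ValueError("Provide a text prompt, a reference image, or both")
```

In practice, start with Text to Video for concept exploration, then move to image-driven modes once you have a visual worth animating.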

Prompt Engineering for Video

Video prompts differ from image prompts because they must describe motion and temporal change. Effective video prompts include:

  • Subject: Who or what is in the scene
  • Action: What movement or change occurs
  • Camera: Camera angle and movement (dolly, pan, zoom, static)
  • Style: Visual aesthetic (cinematic, anime, documentary)
  • Atmosphere: Lighting, mood, time of day

Prompt Formula: "[Camera movement] of [subject] [action] in [setting], [style/mood], [lighting]."

Example: "Slow dolly shot of a woman walking through a neon-lit Tokyo street at night, cinematic, volumetric fog, blade runner aesthetic."
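If you generate many prompts, it can help to assemble them programmatically from the formula's slots. This is a minimal sketch; the parameter names are illustrative and the output is just a string you would paste into Runway.

```python
def build_prompt(camera: str, subject: str, action: str, setting: str,
                 style: str, lighting: str) -> str:
    """Assemble a video prompt following the formula:
    [Camera movement] of [subject] [action] in [setting], [style/mood], [lighting]."""
    return f"{camera} of {subject} {action} in {setting}, {style}, {lighting}."

prompt = build_prompt(
    camera="Slow dolly shot",
    subject="a woman",
    action="walking",
    setting="a neon-lit Tokyo street at night",
    style="cinematic",
    lighting="volumetric fog, blade runner aesthetic",
)
```

Keeping the slots separate makes it easy to vary one element (say, camera movement) while holding the rest of the prompt constant across generations.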

Generation Settings

| Setting | Options | Impact |
|---|---|---|
| Duration | 5s or 10s | Longer = more credits, more motion |
| Resolution | 720p, 1080p | Higher = more credits, better detail |
| Seed | Random or fixed | Fixed seed for reproducible results |
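The settings in the table can be captured in a small container that rejects invalid values before you spend credits. This is an illustrative sketch, not the Runway SDK; the field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GenerationSettings:
    duration_s: int = 5          # 5 or 10 seconds; longer costs more credits
    resolution: str = "720p"     # "720p" or "1080p"; higher costs more credits
    seed: Optional[int] = None   # None = random; a fixed int reproduces results

    def __post_init__(self) -> None:
        if self.duration_s not in (5, 10):
            raise ValueError("Duration must be 5s or 10s")
        if self.resolution not in ("720p", "1080p"):
            raise ValueError("Resolution must be 720p or 1080p")
```

A fixed seed is most useful when iterating: it lets you change one prompt element and attribute the difference in output to that change rather than to random variation.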

Iterating on Generations

AI video generation is iterative. Generate multiple variations, identify the best outputs, and refine your prompt based on what works. Use the "Extend" feature to continue a successful generation beyond its initial duration.
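The generate-compare-refine loop can be sketched as a best-of-n workflow. Here `generate` and `score` are hypothetical stand-ins for your generation call and your own judgment of each clip; nothing below is a real Runway API.

```python
import random
from typing import Callable, Tuple

def best_of_n(generate: Callable[[str, int], str],
              score: Callable[[str], float],
              prompt: str, n: int = 4) -> Tuple[str, int]:
    """Generate n variations with different seeds and keep the best one."""
    results = []
    for _ in range(n):
        seed = random.randrange(2**31)
        results.append((score(generate(prompt, seed)), seed))
    _, best_seed = max(results)
    # Re-running with best_seed reproduces the winning clip (fixed seed),
    # which you could then continue with the "Extend" feature.
    return generate(prompt, best_seed), best_seed
```

The point of the sketch is the workflow, not the code: keep the seed of any generation you like, so you can reproduce it before refining the prompt or extending the clip.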