Reference

OpenAI Models

The complete directory of every OpenAI model — from the GPT family to reasoning models, plus DALL-E, Whisper, and specialized tools. Updated with the latest releases.

GPT-4o Family

GPT-4o ("omni") is OpenAI's flagship multimodal model, natively processing text, images, and audio.

| Model       | Released | Context | Max Output | Multimodal          | Pricing (Input/Output per 1M) |
|-------------|----------|---------|------------|---------------------|-------------------------------|
| GPT-4o      | May 2024 | 128K    | 16,384     | Text, Vision, Audio | $2.50 / $10.00                |
| GPT-4o mini | Jul 2024 | 128K    | 16,384     | Text, Vision        | $0.15 / $0.60                 |
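Per-token prices like those in the table translate into per-request costs in a straightforward way. A minimal sketch using the figures above (prices are illustrative and may change):

```python
# Estimate request cost from token counts, using the per-1M-token
# prices listed in the table above (subject to change).
PRICES_PER_1M = {
    "gpt-4o":      {"input": 2.50, "output": 10.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    p = PRICES_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A 10K-token prompt with a 1K-token reply on GPT-4o:
# 10_000 * $2.50/1M + 1_000 * $10.00/1M = $0.025 + $0.010 = $0.035
print(f"${estimate_cost('gpt-4o', 10_000, 1_000):.4f}")
```

The same request on GPT-4o mini costs $0.0021, roughly 17x less, which is why the mini tier dominates high-volume workloads.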

GPT-4o

OpenAI's most versatile model. GPT-4o processes text, images, and audio natively within a single model rather than using separate pipelines. It offers GPT-4-level intelligence with significantly faster responses and lower costs.

  • Best for: General-purpose tasks, multimodal applications, conversational AI, content generation, code assistance
  • Key features: Native multimodal, 2x faster than GPT-4 Turbo, 50% cheaper, supports function calling and JSON mode
  • Parameters: Not publicly disclosed (estimated 200B+)
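JSON mode, mentioned above, constrains the model to emit syntactically valid JSON. A sketch of the Chat Completions request body that enables it; only payload construction is shown here, since an actual call needs the openai SDK (or any HTTP client) plus an API key:

```python
import json

# Sketch of a Chat Completions request body using JSON mode.
# Sending it requires an API key; the payload shape is the point here.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "Extract the city and country as JSON."},
        {"role": "user", "content": "I flew into Lyon, France last week."},
    ],
    # JSON mode: constrains the model to emit valid JSON.
    "response_format": {"type": "json_object"},
}

body = json.dumps(payload)
print(body)
```

Note that with JSON mode the prompt itself should still mention JSON, otherwise the request is rejected.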

GPT-4o mini

A smaller, faster, and cheaper version of GPT-4o. Designed for high-volume, latency-sensitive applications where GPT-4o quality is not strictly necessary.

  • Best for: High-volume APIs, chatbots, classification, summarization, lightweight coding tasks
  • Key features: Extremely low cost, fast responses, strong for its size class, supports vision

GPT-4 Family (Legacy)

| Model         | Released | Context  | Max Output | Multimodal               | Pricing (Input/Output per 1M) |
|---------------|----------|----------|------------|--------------------------|-------------------------------|
| GPT-4 Turbo   | Apr 2024 | 128K     | 4,096      | Text, Vision             | $10.00 / $30.00               |
| GPT-4         | Mar 2023 | 8K / 32K | 8,192      | Text (Vision via GPT-4V) | $30.00 / $60.00               |
| GPT-3.5 Turbo | Nov 2022 | 16K      | 4,096      | Text only                | $0.50 / $1.50                 |

GPT-4 Turbo

An improved version of GPT-4 with a 128K context window, a December 2023 knowledge cutoff, and significantly reduced pricing. Largely superseded by GPT-4o.

  • Best for: Legacy applications already built on GPT-4 Turbo
  • Note: GPT-4o is recommended over GPT-4 Turbo for new projects

GPT-4

The original GPT-4 model that established the frontier in March 2023. Available in 8K and 32K context variants. Multimodal vision was added later as GPT-4V.

  • Parameters: Not disclosed (rumored ~1.8T MoE)
  • Best for: Legacy use only; newer models are better and cheaper

GPT-3.5 Turbo

The model behind the original ChatGPT. Still available and extremely cost-effective for simple tasks, though outperformed by GPT-4o mini on most benchmarks.

  • Best for: Simple classification, basic Q&A, cost-sensitive high-volume applications

Reasoning Models (o-series)

OpenAI's reasoning models use extended "chain-of-thought" processing before generating a response. They spend more time thinking, which makes them better at complex reasoning tasks but slower and more expensive.

| Model   | Released | Context | Max Output | Pricing (Input/Output per 1M) |
|---------|----------|---------|------------|-------------------------------|
| o4-mini | Apr 2025 | 200K    | 100,000    | $1.10 / $4.40                 |
| o3      | Apr 2025 | 200K    | 100,000    | $10.00 / $40.00               |
| o3-mini | Jan 2025 | 200K    | 100,000    | $1.10 / $4.40                 |
| o1      | Sep 2024 | 200K    | 100,000    | $15.00 / $60.00               |
| o1-mini | Sep 2024 | 128K    | 65,536     | $3.00 / $12.00                |
| o1-pro  | Dec 2024 | 200K    | 100,000    | $150.00 / $600.00             |

o3

OpenAI's most capable reasoning model. Excels at complex multi-step problems in math, science, and coding. Uses extended thinking to work through problems methodically.

  • Best for: Advanced math, scientific reasoning, complex coding challenges, competition-level problems
  • Key features: Tool use, image understanding, configurable reasoning effort
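The configurable reasoning effort noted above is exposed through a request parameter. A sketch of the request body, with the `reasoning_effort` values as documented for the o-series ("low", "medium", "high"); an actual call needs an API key:

```python
import json

# Sketch of a request body selecting a reasoning effort level for o3.
# "reasoning_effort" accepts "low", "medium", or "high"; higher
# settings spend more thinking tokens for harder problems.
request = {
    "model": "o3",
    "messages": [
        {"role": "user", "content": "Prove that the sum of two odd integers is even."},
    ],
    "reasoning_effort": "high",  # trade latency and cost for deeper reasoning
}
print(json.dumps(request, indent=2))
```

Because thinking tokens are billed as output tokens, raising the effort level raises cost even when the visible answer stays short.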

o4-mini

A fast, cost-effective reasoning model. Provides strong reasoning capabilities at a fraction of o3's cost, making it practical for production reasoning applications.

  • Best for: Production reasoning tasks, code generation with complex logic, math-heavy applications
  • Key features: Tool use on par with o3, fast responses, accepts text and image inputs

o1

The first reasoning model from OpenAI. Introduced the concept of "thinking tokens" where the model reasons internally before responding.

  • Best for: Complex reasoning, research analysis, advanced code generation
  • Note: o3 is generally recommended over o1 for new projects

o1-pro

A premium version of o1 that applies more compute per query for enhanced reasoning. Available through ChatGPT Pro ($200/month subscription) and, at premium rates, via the API. Designed for the hardest problems in math, science, and engineering.

When to use reasoning models: Use o-series models when the task involves complex multi-step reasoning, math, logic, or code debugging. For straightforward tasks like summarization, translation, or Q&A, GPT-4o is faster and cheaper.
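That guidance can be captured as a simple routing heuristic. The keyword lists below are illustrative assumptions for the sketch, not an official taxonomy:

```python
# Illustrative model router following the guidance above: reasoning
# models for multi-step math/logic/debugging, GPT-4o for routine tasks.
# The keyword set is an assumption for this sketch, not an official list.
REASONING_HINTS = {"prove", "derive", "debug", "optimize", "theorem"}

def pick_model(task_description: str) -> str:
    words = set(task_description.lower().split())
    if words & REASONING_HINTS:
        return "o4-mini"   # cost-effective reasoning tier
    return "gpt-4o"        # faster and cheaper for straightforward tasks

print(pick_model("Prove the triangle inequality"))  # o4-mini
print(pick_model("Summarize this article"))         # gpt-4o
```

In production such routing is usually done by a cheap classifier model rather than keywords, but the cost logic is the same: reserve thinking tokens for tasks that need them.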

Image, Audio & Specialized Models

DALL-E 3

OpenAI's latest image generation model. Integrated directly into ChatGPT and available via API. Generates high-quality images from text descriptions with significantly improved prompt following compared to DALL-E 2.

  • Released: October 2023
  • Resolutions: 1024x1024, 1024x1792, 1792x1024
  • Pricing: $0.040 – $0.120 per image (depending on quality and size)
  • Best for: Creative assets, illustrations, concept art, marketing visuals

Whisper

OpenAI's automatic speech recognition (ASR) model. Open-source and available via API. Supports transcription in 99+ languages and translation into English.

  • Released: September 2022 (open-source), November 2023 (Whisper V3)
  • Pricing: $0.006 per minute (API)
  • Best for: Transcription, subtitling, voice interfaces, meeting notes
  • Open source: Yes — can be self-hosted

TTS (Text-to-Speech)

OpenAI's text-to-speech models. Two variants: tts-1 (optimized for speed) and tts-1-hd (optimized for quality). Six built-in voices.

  • Pricing: $15.00 per 1M characters (tts-1), $30.00 per 1M characters (tts-1-hd)
  • Best for: Voice assistants, audio content, accessibility, narration
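Since TTS is billed per character, the quality tier roughly doubles the cost of a given script. A minimal sketch using the prices above (subject to change):

```python
# Estimate TTS cost from character count, using the per-1M-character
# prices listed above (subject to change).
TTS_PRICE_PER_1M_CHARS = {"tts-1": 15.00, "tts-1-hd": 30.00}

def tts_cost(model: str, text: str) -> float:
    """Return the estimated USD cost of synthesizing `text`."""
    return len(text) * TTS_PRICE_PER_1M_CHARS[model] / 1_000_000

script = "A" * 5_000  # stand-in for a ~5,000-character narration script
# tts-1:    5_000 * $15 / 1M = $0.075
# tts-1-hd: 5_000 * $30 / 1M = $0.150
print(tts_cost("tts-1", script), tts_cost("tts-1-hd", script))
```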

Embeddings

OpenAI offers embedding models for converting text into numerical vectors for search, clustering, and classification.

  • text-embedding-3-large: 3,072 dimensions, $0.13 per 1M tokens
  • text-embedding-3-small: 1,536 dimensions, $0.02 per 1M tokens
  • Best for: Semantic search, RAG, document clustering, recommendation systems
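Embedding vectors are typically compared with cosine similarity. A stdlib-only sketch; in a real application the vectors would come from text-embedding-3-small or -large via the API, while the short vectors here are stand-ins:

```python
import math

# Cosine similarity between two embedding vectors: 1.0 means the same
# direction, 0.0 orthogonal, negative values opposing. Real embeddings
# have 1,536 or 3,072 dimensions; these short vectors are stand-ins.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

doc = [0.1, 0.3, 0.5]
query = [0.1, 0.3, 0.5]
unrelated = [0.5, -0.3, 0.0]
print(cosine_similarity(doc, query))      # identical vectors, score ~1.0
print(cosine_similarity(doc, unrelated))  # dissimilar vectors, much lower
```

This is the core operation behind semantic search and RAG: embed the query, then rank documents by similarity to it.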

Codex (Legacy)

OpenAI's original code-specialized model. Powered GitHub Copilot in its early versions. Now deprecated in favor of GPT-4o and o-series models which offer superior coding capabilities.

💡 API Access: All current OpenAI models are accessible through the OpenAI API at platform.openai.com. Pricing shown is as of early 2025 and may change; check the OpenAI pricing page for current rates.