Structured Outputs and Function Calling
A practical guide to structured outputs and function calling in the OpenAI API.
What This Lesson Covers
Structured Outputs and Function Calling is a key topic within the OpenAI API. In this lesson you will learn what it is, why it matters in production, the mechanics behind it, and the patterns experienced engineers use. By the end you will be able to apply structured outputs and function calling in real systems with confidence.
This lesson belongs to the Frontier LLM APIs category of the AI APIs track. The right API choice (and how you call it) compounds across every feature you ship — pick well and you ship faster, cheaper, more reliably; pick poorly and you fight your provider's quirks every release.
Why It Matters
Master the OpenAI API: GPT-5, GPT-4o, o-series reasoning models, structured outputs, function calling, batch API, and the patterns that ship the most production AI today.
The reason structured outputs and function calling deserves dedicated attention is that the difference between a snappy, cheap, reliable AI feature and a slow, expensive, flaky one usually comes down to small decisions made at the API layer. Two teams using the same API can ship at very different cost and quality based on how well they execute on this technique. Understanding the underlying mechanics — not just copying the quick-start — is what lets you adapt when the defaults stop working at your scale.
How It Works in Practice
Below is a worked example showing how to apply structured outputs and function calling in real code. Read through it once, then experiment by changing the parameters and observing the effect on quality, latency, and cost.
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

# Structured outputs with a Pydantic schema
class Invoice(BaseModel):
    vendor: str
    total: float
    line_items: list[str]

response = client.responses.parse(
    model="gpt-5",
    input="Extract: Acme Inc invoice, $1,250 total, 3 line items",
    text_format=Invoice,
)
invoice: Invoice = response.output_parsed
# Streaming with tool calling
stream = client.responses.create(
    model="gpt-5",
    input="What is the weather in Boston?",
    tools=[{"type": "function", "name": "get_weather", "parameters": {...}}],
    stream=True,
)
for event in stream:
    print(event)
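The `parameters` field elided above takes a JSON Schema describing the arguments the model should produce when it calls the tool. A hypothetical full definition for the `get_weather` tool (the description, property names, and enum values here are illustrative assumptions, not a fixed API contract) might look like:

```python
# Hypothetical full tool definition for the get_weather tool above.
# "parameters" is standard JSON Schema for the tool's arguments.
get_weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Boston"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```

Marking `city` as required while leaving `unit` optional lets the model omit the unit when the user does not specify one.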
Step-by-Step Walkthrough
- Set up authentication — Get an API key, store it as an environment variable (never in code), and confirm a basic call works before integrating into your app.
- Pick the right model — Frontier models cost 10-100x more than smaller ones. Route easy tasks to small models and reserve frontier models for the hard cases.
- Implement timeout, retry, and rate-limit handling — All three will bite you in production if you ignore them. Build them into your API client from day one.
- Track tokens and cost per request — Without per-request cost tracking you cannot tell where money is going. Most APIs return usage data — log it.
- Use caching, batching, and streaming when applicable — Prompt caching cuts cost. Batch APIs cut cost further. Streaming improves perceived latency. All three usually pay for themselves quickly.
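The timeout/retry step above can be sketched as a small backoff helper. This is a minimal sketch, not a production client: `with_retries` and `flaky` are hypothetical names, and a real implementation would catch provider-specific exceptions (timeouts, 429s) rather than bare `Exception`.

```python
import random
import time

# Retry with exponential backoff and jitter. `call` is any zero-argument
# function that raises on transient errors and returns a result on success.
def with_retries(call, max_attempts=5, base_delay=0.5):
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Exponential backoff: base, 2*base, 4*base, ... plus jitter
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Usage with a simulated flaky call that fails twice, then succeeds
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("transient failure")
    return "ok"

result = with_retries(flaky, base_delay=0.01)  # → "ok" on the third attempt
```

Jitter matters here: without it, many clients that failed at the same moment all retry at the same moment, re-creating the spike that caused the failures.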
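The cost-tracking step reduces to simple arithmetic over the usage data the API returns. A minimal sketch, with placeholder per-million-token prices (these are assumptions for illustration, not real OpenAI rates):

```python
# Per-million-token prices in USD -- hypothetical placeholder values;
# substitute your model's actual rates from the provider's pricing page.
PRICE_PER_1M = {"input": 2.00, "output": 8.00}

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request from its usage counts."""
    return (input_tokens / 1_000_000) * PRICE_PER_1M["input"] + \
           (output_tokens / 1_000_000) * PRICE_PER_1M["output"]

# e.g. a request with 1,200 input tokens and 300 output tokens
cost = request_cost(1_200, 300)  # → 0.0048 at these placeholder rates
```

Log this figure per request alongside model name and route, and cost spikes become a query instead of a surprise on the invoice.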
When To Use It (and When Not To)
Structured Outputs and Function Calling is the right tool when:
- The use case fits the provider's strengths (read the model card before integrating)
- The pricing model matches your workload (per-token vs subscription vs reserved)
- The rate limits and SLAs match your traffic patterns
- You can live with the provider's terms of service, data retention, and privacy posture
It is the wrong tool when:
- A simpler API (or a self-hosted model) already meets your needs
- The use case is at odds with the provider's pricing or rate-limit shape
- The provider's terms of service conflict with your data policies
- You are still iterating on what you actually need — pick the API after you know the shape of the problem
Production Checklist
- Are API keys stored as environment variables (or secrets manager), never in code?
- Do you have timeouts, retries with exponential backoff, and circuit breakers around every call?
- Are you tracking token usage and cost per request, with alerts on spikes?
- Do you have a fallback model or provider if the primary fails?
- Are you handling rate limits gracefully (queue, backoff, retry-after header)?
- Have you load-tested at 2-3x your projected peak to find your provider's breaking point?
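The fallback item on the checklist can be sketched as simple primary/fallback routing. `call_with_fallback`, `primary`, and `fallback` are hypothetical names for illustration; a real version would also distinguish retryable errors from hard failures and record which path served the request.

```python
# Route to a fallback model or provider when the primary call fails.
def call_with_fallback(call_primary, call_fallback):
    try:
        return call_primary()
    except Exception:
        # Primary is down or rate-limited; degrade to the fallback.
        return call_fallback()

# Usage with a simulated outage on the primary provider
def primary():
    raise ConnectionError("primary provider unavailable")

def fallback():
    return "fallback response"

answer = call_with_fallback(primary, fallback)  # → "fallback response"
```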
Next Steps
The other lessons in the OpenAI API track build directly on this one. Once you are comfortable with structured outputs and function calling, the natural next step is to combine it with the patterns in the surrounding lessons — that is where compound returns kick in. API skills are most useful as a system, not as isolated tricks.
Lilly Tech Systems