Prompts & Templates
LangChain's prompt system lets you build dynamic, reusable prompts with variables, message roles, few-shot examples, and structured output parsing.
PromptTemplate
The simplest prompt template takes a string with {variables} and fills them in at runtime:
```python
from langchain_core.prompts import PromptTemplate

# Simple string template
prompt = PromptTemplate.from_template(
    "Explain {concept} in {style} for a {audience}."
)

# Fill in variables
result = prompt.invoke({
    "concept": "machine learning",
    "style": "simple terms",
    "audience": "10-year-old",
})
print(result.text)
# "Explain machine learning in simple terms for a 10-year-old."
```
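PromptTemplate's default format is f-string style, so the substitution itself behaves much like Python's built-in `str.format` — a minimal sketch of the idea, not LangChain's actual implementation:

```python
# Plain-Python sketch of what template filling does conceptually
template = "Explain {concept} in {style} for a {audience}."

filled = template.format(
    concept="machine learning",
    style="simple terms",
    audience="10-year-old",
)
print(filled)
# Explain machine learning in simple terms for a 10-year-old.
```

What the template class adds on top is input validation, composition with other templates, and integration with the runnable (`invoke`) interface.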
ChatPromptTemplate
For chat models, use ChatPromptTemplate to define system, human, and AI messages:
```python
from langchain_core.prompts import ChatPromptTemplate

# Define messages with roles
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a {role}. Always respond in {language}."),
    ("human", "{question}"),
])

# Invoke with variables
messages = prompt.invoke({
    "role": "helpful coding tutor",
    "language": "English",
    "question": "What is a decorator in Python?",
})
print(messages)  # [SystemMessage(...), HumanMessage(...)]
```
System / Human / AI Message Templates
You can include AI messages for multi-turn conversation templates:
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a Python expert. Be concise."),
    ("human", "What is a list comprehension?"),
    ("ai", "A list comprehension is a concise way to create lists using a single line: [expr for item in iterable if condition]."),
    ("human", "{followup}"),
])

messages = prompt.invoke({
    "followup": "Can you give me 3 examples?",
})
```
Few-Shot Prompt Templates
Few-shot prompts include examples to guide the model's output format and style:
```python
from langchain_core.prompts import FewShotChatMessagePromptTemplate, ChatPromptTemplate

# Define examples
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "fast", "output": "slow"},
]

# Template for each example
example_prompt = ChatPromptTemplate.from_messages([
    ("human", "{input}"),
    ("ai", "{output}"),
])

# Few-shot prompt
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)

# Full prompt with few-shot examples
final_prompt = ChatPromptTemplate.from_messages([
    ("system", "You give the antonym of every word."),
    few_shot_prompt,
    ("human", "{word}"),
])

messages = final_prompt.invoke({"word": "bright"})
```
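Conceptually, the few-shot template just splices each formatted example pair between the system message and the final human message. A plain-Python sketch of that expansion, with role/content tuples standing in for LangChain's message objects:

```python
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "fast", "output": "slow"},
]

def build_messages(word):
    # System message, then each example as a human/ai pair, then the real query
    messages = [("system", "You give the antonym of every word.")]
    for ex in examples:
        messages.append(("human", ex["input"]))
        messages.append(("ai", ex["output"]))
    messages.append(("human", word))
    return messages

msgs = build_messages("bright")
print(len(msgs))  # 8: 1 system + 3 human/ai example pairs + 1 query
```

The model sees the examples as if they were a real prior conversation, which is why few-shot prompting steers output format so effectively.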
Partial Prompts
Fill in some variables early and the rest later:
```python
from langchain_core.prompts import PromptTemplate
from datetime import datetime

# Partial with a string value
prompt = PromptTemplate.from_template(
    "Tell me a {adjective} joke about {topic}."
)
partial_prompt = prompt.partial(adjective="funny")
result = partial_prompt.invoke({"topic": "AI"})

# Partial with a function (evaluated at invoke time)
prompt = PromptTemplate(
    template="Today is {date}. Answer: {question}",
    input_variables=["question"],
    partial_variables={"date": lambda: datetime.now().strftime("%Y-%m-%d")},
)
```
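A callable partial is deferred evaluation: the function runs each time the prompt is invoked, not when the template is defined. A plain-Python sketch of the pattern (the `render` helper is hypothetical, not LangChain code):

```python
from datetime import datetime

def render(template, deferred_vars, **runtime_vars):
    # Call each deferred variable now, at "invoke" time
    resolved = {name: fn() for name, fn in deferred_vars.items()}
    return template.format(**resolved, **runtime_vars)

text = render(
    "Today is {date}. Answer: {question}",
    deferred_vars={"date": lambda: datetime.now().strftime("%Y-%m-%d")},
    question="What day is it?",
)
print(text)  # e.g. "Today is 2025-01-15. Answer: What day is it?"
```

This is why the lambda form is useful for values like timestamps that must be fresh on every call.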
Output Parsers
Output parsers transform raw LLM text into structured data. LangChain provides several built-in parsers:
StrOutputParser
```python
from langchain_core.output_parsers import StrOutputParser

# Extracts the string content from a ChatMessage
parser = StrOutputParser()
chain = prompt | model | parser  # returns a plain string
```
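StrOutputParser's job is small: pull the text content out of the message object a chat model returns, so downstream steps receive a plain string. A minimal sketch using a stand-in message class (not LangChain's actual types):

```python
class FakeAIMessage:
    # Stand-in for the message object a chat model returns
    def __init__(self, content):
        self.content = content

def parse_str(message):
    # Roughly what StrOutputParser does with chat model output
    return message.content

msg = FakeAIMessage("Hello from the model")
print(parse_str(msg))  # "Hello from the model"
```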
JsonOutputParser
```python
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()

prompt = ChatPromptTemplate.from_messages([
    ("system", "Return a JSON object with keys: name, age, city."),
    ("human", "Generate a fictional person from {country}."),
])

chain = prompt | model | parser
result = chain.invoke({"country": "Japan"})
print(result)  # {"name": "Yuki", "age": 28, "city": "Tokyo"}
```
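Models frequently wrap JSON in a markdown code fence, and a robust parser has to tolerate that. A simplified plain-Python stand-in for the core idea (not the library's implementation, which also handles partial JSON for streaming):

```python
import json

def parse_json_output(text):
    # Strip an optional ```json ... ``` fence before parsing
    text = text.strip()
    if text.startswith("```"):
        text = text.split("\n", 1)[1]    # drop the opening fence line
        text = text.rsplit("```", 1)[0]  # drop the closing fence
    return json.loads(text)

raw = '```json\n{"name": "Yuki", "age": 28, "city": "Tokyo"}\n```'
print(parse_json_output(raw))  # {'name': 'Yuki', 'age': 28, 'city': 'Tokyo'}
```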
PydanticOutputParser
```python
from langchain_core.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class Movie(BaseModel):
    title: str = Field(description="The movie title")
    year: int = Field(description="Release year")
    genre: str = Field(description="Movie genre")
    rating: float = Field(description="Rating out of 10")

parser = PydanticOutputParser(pydantic_object=Movie)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a movie database. {format_instructions}"),
    ("human", "Tell me about the movie {title}"),
])

# Inject format instructions into the prompt
prompt = prompt.partial(format_instructions=parser.get_format_instructions())

chain = prompt | model | parser
movie = chain.invoke({"title": "Inception"})
print(movie.title)   # "Inception"
print(movie.year)    # 2010
print(movie.rating)  # 8.8
```
Structured Output (with_structured_output)
The modern, preferred approach for structured output uses with_structured_output directly on the model:
```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class Sentiment(BaseModel):
    sentiment: str = Field(description="positive, negative, or neutral")
    confidence: float = Field(description="Confidence score 0-1")
    reasoning: str = Field(description="Brief explanation")

# Model returns structured Pydantic objects directly
llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(Sentiment)

result = structured_llm.invoke("I absolutely love this product!")
print(result.sentiment)   # "positive"
print(result.confidence)  # 0.95
print(result.reasoning)   # "Strong positive language with 'absolutely love'"
```
Prefer with_structured_output over manual output parsers when your model supports it (OpenAI, Anthropic, Google). It uses native function calling / tool use under the hood, which is more reliable than asking the model to generate JSON text.

What's Next?
Now that you can build dynamic prompts and parse structured output, the next lesson covers Chains — how to compose multi-step pipelines with LCEL.