
Step 1: Build a Single Agent

Before building a multi-agent system, you need to master the building block: a single ReAct agent. In this lesson, you will build three specialist agents (a researcher, a coder, and an analyst), each a complete ReAct agent with tool calling, a focused system prompt, and error handling.

The ReAct Pattern

ReAct (Reason + Act) is the dominant pattern for tool-using agents. The agent follows a loop: it reasons about what to do, calls a tool, observes the result, and repeats until the task is complete.

# The ReAct loop:
# 1. THINK  - The LLM reasons about the current state and decides what to do
# 2. ACT    - The LLM calls a tool with specific arguments
# 3. OBSERVE - The tool returns a result
# 4. REPEAT  - Go back to step 1 with the new information
# 5. RESPOND - When the task is complete, return the final answer
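The loop above can be sketched in plain, framework-free Python. This is a toy skeleton: `fake_llm` and `TOOLS` are illustrative stand-ins, not part of any library, and a real agent would call an LLM where the stub is. LangGraph's `create_react_agent` implements this loop for you.

```python
# A minimal, framework-free sketch of the ReAct loop.
# fake_llm and TOOLS are hypothetical stand-ins for a real model and toolset.

def add(a: int, b: int) -> int:
    """A toy tool."""
    return a + b

TOOLS = {"add": add}

def fake_llm(messages: list[dict]) -> dict:
    """Stub model: requests one tool call, then answers from the observation."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}        # decide to ACT
    observation = messages[-1]["content"]
    return {"answer": f"The result is {observation}"}           # RESPOND

def react_loop(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = fake_llm(messages)                           # THINK
        if "answer" in decision:
            return decision["answer"]                           # RESPOND
        result = TOOLS[decision["tool"]](**decision["args"])    # ACT
        messages.append({"role": "tool", "content": str(result)})  # OBSERVE
    return "Stopped: step limit reached."

print(react_loop("What is 2 + 3?"))  # The result is 5
```

Note the `max_steps` cap: without it, a confused model could loop forever, which is the same reason the shared state below tracks an `iteration` counter.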

Define the Shared State

First, define the state schema that all agents will share. This is the data structure that flows through the LangGraph graph.

# agents/state.py
"""Shared state schema for the multi-agent workflow."""
from typing import TypedDict, Annotated, Literal
from operator import add
from langchain_core.messages import BaseMessage


class AgentState(TypedDict):
    """State that flows through the entire multi-agent graph.

    Attributes:
        messages: Accumulated chat messages (human, AI, tool results).
        next_agent: Which agent the supervisor wants to call next.
        task: The current high-level task description.
        results: Dictionary of results collected from each agent.
        status: Current workflow status.
        iteration: Number of supervisor routing iterations (prevents infinite loops).
    """
    messages: Annotated[list[BaseMessage], add]
    next_agent: str
    task: str
    results: dict
    status: Literal["in_progress", "needs_review", "completed", "error"]
    iteration: int
💡
Why Annotated[list, add]? The add operator tells LangGraph to append new messages to the existing list instead of replacing it. Without this, each node would overwrite previous messages. This is how conversation history accumulates across agent calls.
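You can see the reducer's behavior with plain `operator.add`, which on lists is concatenation. Conceptually, this is what LangGraph does when it merges a node's partial update into the existing state:

```python
from operator import add

# State before the node runs, and the partial update the node returns.
existing = ["msg-1", "msg-2"]   # messages already accumulated in state
update = ["msg-3"]              # a node's return value: {"messages": [msg-3]}

merged = add(existing, update)  # concatenation, not replacement
print(merged)                   # ['msg-1', 'msg-2', 'msg-3']
```

Plain fields like `next_agent` or `task` have no reducer annotation, so each node's value simply overwrites the previous one.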

Build the Research Agent

Let's build a complete research agent that can search the web and summarize its findings.

# agents/researcher.py
"""Research agent - specialized in web search and information synthesis."""
import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

from agents.state import AgentState


# --- Define tools for this agent ---

@tool
def web_search(query: str) -> str:
    """Search the web for current information on a topic.

    Args:
        query: The search query string.

    Returns:
        A summary of the top search results.
    """
    try:
        from tavily import TavilyClient

        client = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))
        response = client.search(query, max_results=5)
        results = []
        for r in response.get("results", []):
            results.append(f"Title: {r['title']}\nURL: {r['url']}\nSnippet: {r['content']}\n")
        return "\n---\n".join(results) if results else "No results found."
    except Exception as e:
        return f"Search failed: {str(e)}. Provide your best answer from training data."


@tool
def summarize_text(text: str, max_length: int = 500) -> str:
    """Summarize a long text into a concise version.

    Args:
        text: The text to summarize.
        max_length: Maximum character length for the summary.

    Returns:
        A concise summary of the input text.
    """
    if len(text) <= max_length:
        return text
    # Simple truncation with ellipsis - in production, use an LLM call
    return text[:max_length].rsplit(" ", 1)[0] + "..."


# --- System prompt ---

RESEARCHER_PROMPT = """You are a Research Agent specialized in finding and synthesizing information.

Your capabilities:
- Search the web for current, accurate information
- Summarize long documents into concise insights
- Cross-reference multiple sources for accuracy
- Provide citations and source URLs

Rules:
1. Always search before answering questions about current events or facts.
2. Cite your sources with URLs when available.
3. If search fails, clearly state you are using training data.
4. Keep responses focused and factual - no speculation.
5. If a question is outside your expertise, say so clearly.

Output format:
- Start with a direct answer to the question
- Follow with supporting details and sources
- End with confidence level: HIGH / MEDIUM / LOW"""


# --- Create the agent ---

def create_researcher_agent():
    """Create a research agent with web search and summarization tools."""
    llm = ChatOpenAI(
        model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
        temperature=0
    )

    agent = create_react_agent(
        model=llm,
        tools=[web_search, summarize_text],
        state_modifier=RESEARCHER_PROMPT
    )
    return agent


# --- Node function for LangGraph ---

def researcher_node(state: AgentState) -> dict:
    """LangGraph node that runs the research agent.

    Takes the current state, extracts the task, runs the agent,
    and returns updated state with the research results.
    """
    agent = create_researcher_agent()

    # Build the input message from the current task
    task_message = HumanMessage(content=f"Research task: {state['task']}")

    # Run the agent
    result = agent.invoke({"messages": [task_message]})

    # Extract the final response
    final_message = result["messages"][-1]

    # Update results dictionary
    current_results = state.get("results", {})
    current_results["researcher"] = final_message.content

    return {
        "messages": [final_message],
        "results": current_results
    }
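The word-boundary truncation in `summarize_text` is easy to verify standalone: `rsplit(" ", 1)[0]` drops the last, possibly half-cut word before the ellipsis is appended.

```python
# Verify the truncation logic used in summarize_text.
text = "LangGraph lets you wire multiple specialized agents into one graph"
max_length = 30

truncated = text[:max_length]                   # "LangGraph lets you wire multip"
summary = truncated.rsplit(" ", 1)[0] + "..."   # drop the cut-off word
print(summary)                                  # "LangGraph lets you wire..."
```

One edge case to be aware of: if the first `max_length` characters contain no space, `rsplit` returns the whole truncated string, so a single very long token is kept as-is.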

Build the Coder Agent

The coder agent can write and execute Python code. Execution happens in the local process with captured stdout, which is convenient for a tutorial but is not a true sandbox; in production, run untrusted code in a subprocess or container.

# agents/coder.py
"""Coder agent - specialized in writing and executing Python code."""
import os
import sys
import io
import traceback
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

from agents.state import AgentState


@tool
def execute_python(code: str) -> str:
    """Execute Python code and return its output.

    Args:
        code: Python code to execute.

    Returns:
        The stdout output of the code, or an error message if execution fails.
    """
    # Capture stdout
    old_stdout = sys.stdout
    sys.stdout = captured_output = io.StringIO()

    try:
        # NOTE: exec() with full builtins is NOT a real sandbox. For untrusted
        # code, execute in a subprocess or container instead.
        exec_globals = {"__builtins__": __builtins__}
        exec(code, exec_globals)
        output = captured_output.getvalue()
        return output if output else "Code executed successfully (no output)."
    except Exception as e:
        return f"Error: {type(e).__name__}: {str(e)}\n{traceback.format_exc()}"
    finally:
        sys.stdout = old_stdout


@tool
def write_file(filepath: str, content: str) -> str:
    """Write content to a file.

    Args:
        filepath: Path where the file should be written.
        content: Content to write to the file.

    Returns:
        Confirmation message with the file path.
    """
    try:
        # Safety check - only allow writing to workspace directory
        if ".." in filepath or os.path.isabs(filepath):
            return "Error: Only relative paths within the workspace are allowed."
        os.makedirs(os.path.dirname(filepath) if os.path.dirname(filepath) else ".", exist_ok=True)
        with open(filepath, "w") as f:
            f.write(content)
        return f"File written successfully: {filepath} ({len(content)} characters)"
    except Exception as e:
        return f"Error writing file: {str(e)}"


@tool
def read_file(filepath: str) -> str:
    """Read the contents of a file.

    Args:
        filepath: Path to the file to read.

    Returns:
        The file contents, or an error message.
    """
    try:
        if ".." in filepath or os.path.isabs(filepath):
            return "Error: Only relative paths within the workspace are allowed."
        with open(filepath, "r") as f:
            content = f.read()
        return content if content else "(empty file)"
    except FileNotFoundError:
        return f"Error: File not found: {filepath}"
    except Exception as e:
        return f"Error reading file: {str(e)}"


CODER_PROMPT = """You are a Coder Agent specialized in writing and executing Python code.

Your capabilities:
- Write clean, well-documented Python code
- Execute code and analyze the output
- Debug errors and fix code issues
- Read and write files in the workspace

Rules:
1. Always test your code by executing it before returning it as a final answer.
2. Handle errors gracefully - if code fails, debug and retry.
3. Write clean code with docstrings and comments.
4. Never execute destructive operations (rm, delete, format, etc.).
5. Keep code focused on the task - avoid unnecessary complexity.

Output format:
- Show the final working code in a code block
- Include the execution output
- Explain what the code does in plain language"""


def create_coder_agent():
    """Create a coder agent with code execution and file I/O tools."""
    llm = ChatOpenAI(
        model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
        temperature=0
    )

    agent = create_react_agent(
        model=llm,
        tools=[execute_python, write_file, read_file],
        state_modifier=CODER_PROMPT
    )
    return agent


def coder_node(state: AgentState) -> dict:
    """LangGraph node that runs the coder agent."""
    agent = create_coder_agent()

    task_message = HumanMessage(content=f"Coding task: {state['task']}")
    result = agent.invoke({"messages": [task_message]})

    final_message = result["messages"][-1]

    current_results = state.get("results", {})
    current_results["coder"] = final_message.content

    return {
        "messages": [final_message],
        "results": current_results
    }
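The manual stdout swap in `execute_python` can also be written with `contextlib.redirect_stdout` from the standard library, which restores `sys.stdout` automatically even if the executed code raises. A sketch of the same capture technique:

```python
import io
from contextlib import redirect_stdout


def run_code(code: str) -> str:
    """Capture everything the code prints; report exceptions as text."""
    buffer = io.StringIO()
    try:
        with redirect_stdout(buffer):   # sys.stdout is restored on exit
            exec(code, {"__builtins__": __builtins__})
    except Exception as e:
        return f"Error: {type(e).__name__}: {e}"
    output = buffer.getvalue()
    return output if output else "Code executed successfully (no output)."


print(run_code("print(sum(range(5)))"))  # "10\n"
print(run_code("1 / 0"))                 # "Error: ZeroDivisionError: ..."
```

The context manager removes the `finally` bookkeeping and cannot leave `sys.stdout` pointing at a dead buffer, which is an easy bug to introduce when the swap is done by hand.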

Build the Analyst Agent

The analyst agent processes data, generates summaries, and creates structured reports.

# agents/analyst.py
"""Analyst agent - specialized in data analysis and report generation."""
import os
import json
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

from agents.state import AgentState


@tool
def analyze_data(data: str, analysis_type: str = "summary") -> str:
    """Analyze data and return insights.

    Args:
        data: JSON string or plain text data to analyze.
        analysis_type: Type of analysis - 'summary', 'trends', 'comparison'.

    Returns:
        Analysis results as structured text.
    """
    try:
        # Try to parse as JSON
        parsed = json.loads(data)
        if isinstance(parsed, list):
            return (
                f"Dataset: {len(parsed)} records\n"
                f"Fields: {list(parsed[0].keys()) if parsed else 'N/A'}\n"
                f"Analysis type: {analysis_type}\n"
                f"Sample: {json.dumps(parsed[:3], indent=2)}"
            )
        elif isinstance(parsed, dict):
            return (
                f"Object with {len(parsed)} keys: {list(parsed.keys())}\n"
                f"Analysis type: {analysis_type}"
            )
    except (json.JSONDecodeError, AttributeError, IndexError, KeyError):
        pass

    # Plain text analysis
    words = data.split()
    return (
        f"Text analysis ({analysis_type}):\n"
        f"- Length: {len(data)} characters, {len(words)} words\n"
        f"- Preview: {data[:200]}..."
    )


@tool
def generate_report(title: str, sections: str) -> str:
    """Generate a structured report from section data.

    Args:
        title: Report title.
        sections: JSON string with section titles and content.

    Returns:
        A formatted markdown report.
    """
    try:
        section_list = json.loads(sections)
    except json.JSONDecodeError:
        section_list = [{"title": "Content", "body": sections}]

    report = f"# {title}\n\n"
    for i, section in enumerate(section_list, 1):
        # Guard against non-dict entries (e.g. a JSON array of strings)
        if not isinstance(section, dict):
            section = {"body": str(section)}
        s_title = section.get("title", f"Section {i}")
        s_body = section.get("body", "No content provided.")
        report += f"## {i}. {s_title}\n\n{s_body}\n\n"

    report += "---\n*Report generated by Analyst Agent*\n"
    return report


ANALYST_PROMPT = """You are an Analyst Agent specialized in data analysis and report generation.

Your capabilities:
- Analyze structured and unstructured data
- Identify trends, patterns, and anomalies
- Generate clear, structured reports
- Synthesize information from multiple sources

Rules:
1. Always structure your analysis with clear sections.
2. Use data to support every claim - no unsupported speculation.
3. Highlight key findings and actionable insights.
4. Present numbers and comparisons clearly.
5. Use the generate_report tool for final deliverables.

Output format:
- Start with an executive summary (2-3 sentences)
- Follow with detailed findings organized by section
- End with recommendations or next steps"""


def create_analyst_agent():
    """Create an analyst agent with data analysis and reporting tools."""
    llm = ChatOpenAI(
        model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
        temperature=0
    )

    agent = create_react_agent(
        model=llm,
        tools=[analyze_data, generate_report],
        state_modifier=ANALYST_PROMPT
    )
    return agent


def analyst_node(state: AgentState) -> dict:
    """LangGraph node that runs the analyst agent."""
    agent = create_analyst_agent()

    # Include results from other agents if available
    context_parts = [f"Analysis task: {state['task']}"]
    results = state.get("results", {})
    if results:
        context_parts.append("\nPrevious agent results:")
        for agent_name, result in results.items():
            context_parts.append(f"\n--- {agent_name} ---\n{result}")

    task_message = HumanMessage(content="\n".join(context_parts))
    result = agent.invoke({"messages": [task_message]})

    final_message = result["messages"][-1]

    current_results = state.get("results", {})
    current_results["analyst"] = final_message.content

    return {
        "messages": [final_message],
        "results": current_results
    }
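The report assembly in `generate_report` is plain string formatting. A standalone run with illustrative section data (the titles and bodies here are made up for demonstration) shows the shape of the markdown it produces:

```python
import json

# Hypothetical section data, as the LLM would pass it to generate_report.
sections = json.dumps([
    {"title": "Findings", "body": "Revenue grew steadily through the quarter."},
    {"title": "Risks", "body": "Growth slowed slightly in the final month."},
])

# Mirror the assembly logic from generate_report.
section_list = json.loads(sections)
report = "# Q1 Sales Report\n\n"
for i, section in enumerate(section_list, 1):
    report += f"## {i}. {section['title']}\n\n{section['body']}\n\n"
report += "---\n*Report generated by Analyst Agent*\n"

print(report)
```

Because the tool takes `sections` as a JSON string rather than a Python list, the LLM can supply arbitrarily many sections through a single string argument, which keeps the tool schema simple.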

Test the Agents Individually

Before wiring agents together, test each one in isolation:

# tests/test_agents.py
"""Test each agent individually before wiring them together."""
import os
from dotenv import load_dotenv

load_dotenv()


def test_researcher():
    """Test the research agent."""
    from agents.researcher import researcher_node

    state = {
        "messages": [],
        "next_agent": "",
        "task": "What are the key features of LangGraph 0.2?",
        "results": {},
        "status": "in_progress",
        "iteration": 0
    }

    result = researcher_node(state)
    print("=== RESEARCHER ===")
    print(result["results"]["researcher"][:500])
    print()


def test_coder():
    """Test the coder agent."""
    from agents.coder import coder_node

    state = {
        "messages": [],
        "next_agent": "",
        "task": "Write a Python function that calculates the Fibonacci sequence up to n terms and print the first 10 terms.",
        "results": {},
        "status": "in_progress",
        "iteration": 0
    }

    result = coder_node(state)
    print("=== CODER ===")
    print(result["results"]["coder"][:500])
    print()


def test_analyst():
    """Test the analyst agent."""
    from agents.analyst import analyst_node

    state = {
        "messages": [],
        "next_agent": "",
        "task": "Analyze this sales data and provide insights: [{'month': 'Jan', 'revenue': 50000}, {'month': 'Feb', 'revenue': 62000}, {'month': 'Mar', 'revenue': 58000}]",
        "results": {},
        "status": "in_progress",
        "iteration": 0
    }

    result = analyst_node(state)
    print("=== ANALYST ===")
    print(result["results"]["analyst"][:500])
    print()


if __name__ == "__main__":
    test_researcher()
    test_coder()
    test_analyst()
    print("All individual agent smoke tests completed.")
# Run the tests
python -m tests.test_agents

# Each agent should:
# 1. Receive the task
# 2. Use its tools (search, execute code, analyze data)
# 3. Return a structured response in its results
📝
Checkpoint: Each agent should complete its task independently. The researcher should return search results with citations, the coder should produce working code with output, and the analyst should generate a structured analysis. If any agent fails, check the error message — it usually indicates a missing API key or tool configuration issue.

Key Takeaways

  • Each agent is built with create_react_agent from LangGraph, which handles the ReAct loop automatically.
  • The shared AgentState schema defines what data flows through the entire workflow.
  • Every agent has a focused system prompt, a curated set of tools, and a node function that plugs into LangGraph.
  • Testing agents individually before orchestration saves hours of debugging later.
  • Tool functions use the @tool decorator with detailed docstrings — the LLM reads these to decide when and how to use each tool.

What's Next

In the next lesson, you will build the tool infrastructure — a shared toolkit with web search, code execution, file I/O, and API integrations that all agents can use.