Creating Tasks
The quality of Codex's output depends directly on the quality of your task description. This lesson covers how to write effective tasks, provide context, monitor progress, review generated code, and provide feedback for iterations.
Writing Tasks via Chat
The primary way to create tasks is through the Codex chat interface at codex.openai.com. Write your task in natural language, as if you were describing it to a developer.
```
Task: Add a rate limiting middleware to the Express API.

Requirements:
- Limit to 100 requests per minute per IP address
- Return 429 status with a JSON error message when exceeded
- Include Retry-After header in the 429 response
- Apply to all routes under /api/
- Skip rate limiting for health check endpoint /api/health
- Use an in-memory store (no Redis dependency)
- Add unit tests covering the limit, the skip, and the headers
```
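To make the requirements above concrete, here is a minimal sketch of the kind of middleware such a task might produce. This is illustrative, not Codex's actual output: it uses a fixed-window counter keyed by IP, and Express is stubbed with minimal types so the snippet stays self-contained.

```typescript
// Hypothetical sketch of a fixed-window, per-IP rate limiter.
// Express types are stubbed so the snippet has no dependencies.
type Req = { ip: string; path: string };
type Res = {
  status: (code: number) => Res;
  set: (header: string, value: string) => Res;
  json: (body: object) => void;
};
type Next = () => void;

const WINDOW_MS = 60_000;   // one-minute window
const MAX_REQUESTS = 100;   // per IP per window

const hits = new Map<string, { count: number; windowStart: number }>();

export function rateLimit(req: Req, res: Res, next: Next): void {
  if (req.path === "/api/health") return next(); // skip health checks

  const now = Date.now();
  const entry = hits.get(req.ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(req.ip, { count: 1, windowStart: now }); // new window
    return next();
  }
  entry.count += 1;
  if (entry.count > MAX_REQUESTS) {
    const retryAfterSec = Math.ceil((entry.windowStart + WINDOW_MS - now) / 1000);
    res
      .status(429)
      .set("Retry-After", String(retryAfterSec))
      .json({ error: "Too many requests" });
    return;
  }
  next();
}
```

Notice how each requirement in the task maps to a specific, checkable piece of the implementation; that one-to-one mapping is what a well-specified task buys you at review time.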
Tasks from GitHub Issues
You can also reference GitHub issues when creating tasks. Codex will read the issue title, description, labels, and comments for context.
```
# In the Codex chat:

Fix the bug described in issue #42. Make sure to add a
regression test so it doesn't happen again.

# Codex will read issue #42 from your GitHub repo
# and use its description as context for the fix.
```
Crafting Clear Task Descriptions
Follow this structure for the best results:
- **State the objective clearly.** Start with a one-sentence summary of what you want done. Be specific about the outcome.
- **List requirements.** Break down the task into specific, testable requirements. Use bullet points for clarity.
- **Provide file context.** Tell Codex which files to modify, where to add new files, and what existing patterns to follow.
- **Specify test expectations.** Describe what tests should verify. If you have a preferred testing framework, mention it.
- **Mention constraints.** Note any restrictions: no new dependencies, specific coding style, backward compatibility requirements.
Good vs Bad Task Examples
Bad: "Make the API faster."

This gives Codex no actionable direction. What should be optimized? Which endpoints? What is the performance target?

Good: "Add database query caching to the GET /api/products endpoint using an in-memory LRU cache with a 5-minute TTL. The endpoint currently queries the database on every request, which is slow under load. Cache invalidation should happen when POST/PUT/DELETE /api/products is called. Add tests to verify cache hits, misses, and invalidation."
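The good task above names a specific data structure: an in-memory LRU cache with a TTL. A minimal sketch of that structure, assuming JavaScript's `Map` insertion-order iteration (class and parameter names are illustrative, not Codex's actual output):

```typescript
// Minimal LRU cache with TTL, built on Map's insertion-order iteration.
// Hypothetical sketch -- names and defaults are illustrative.
class LruCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private maxSize = 100, private ttlMs = 5 * 60_000) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) { // expired: evict, report a miss
      this.store.delete(key);
      return undefined;
    }
    this.store.delete(key);              // re-insert to mark the entry
    this.store.set(key, entry);          // as most recently used
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.delete(key);
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    if (this.store.size > this.maxSize) {
      // Map iterates in insertion order, so the first key is the LRU one.
      const oldest = this.store.keys().next().value as string;
      this.store.delete(oldest);
    }
  }

  clear(): void {                        // invalidation on writes
    this.store.clear();
  }
}
```

In the task as written, the GET handler would consult `get()` before hitting the database, and the POST/PUT/DELETE handlers would call `clear()` to invalidate.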
| Bad Task | Good Task | Why It's Better |
|---|---|---|
| "Fix the login bug" | "Fix the login bug where users with + in their email get a 500 error on POST /auth/login" | Identifies the specific bug, endpoint, and reproduction steps |
| "Add tests" | "Add unit tests for src/utils/date.ts covering formatDate, parseDate, and isWeekend functions" | Specifies exactly which functions need tests |
| "Refactor the code" | "Refactor src/api/orders.ts to extract validation logic into src/validators/order.ts. Keep the same API behavior." | Defines the target structure and constraints |
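The second row of the table asks for tests of named functions. A sketch of the shape such a test might take, with `isWeekend` defined inline as a stand-in so the snippet is self-contained (in the real task it would be imported from src/utils/date.ts):

```typescript
// Stand-in implementation -- in the real task this would come from
// src/utils/date.ts rather than being defined here.
function isWeekend(d: Date): boolean {
  const day = d.getUTCDay(); // 0 = Sunday, 6 = Saturday
  return day === 0 || day === 6;
}

// The kind of focused unit test the task asks for.
function testIsWeekend(): void {
  const saturday = new Date(Date.UTC(2024, 0, 6)); // 2024-01-06, a Saturday
  const monday = new Date(Date.UTC(2024, 0, 8));   // 2024-01-08, a Monday
  if (!isWeekend(saturday)) throw new Error("Saturday should be a weekend");
  if (isWeekend(monday)) throw new Error("Monday should not be a weekend");
}

testIsWeekend();
```

Naming the functions in the task is what lets Codex write tests this targeted instead of guessing at coverage.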
Monitoring Progress
After submitting a task, you can monitor Codex's progress in real time:
- Terminal output: See the commands Codex runs, including dependency installation, test execution, and linting
- File changes: Watch which files Codex creates, modifies, or deletes as it works
- Logs: Read the agent's reasoning about what it is doing and why
- Status indicators: Track whether the task is in progress, waiting for tests, or completed
Reviewing Generated Code
When Codex finishes a task, it creates a pull request on GitHub. Review it like any other PR:
- Check the diff: Review all changed files for correctness, style, and completeness
- Run the tests: Verify that CI passes and all tests are green
- Test locally: For important changes, pull the branch and test manually
- Check edge cases: Think about scenarios Codex may not have considered
- Verify no regressions: Ensure existing functionality is not broken
Requesting Changes
If the generated code needs adjustments, you can provide feedback directly in the Codex interface or on the PR:
```
# In the Codex chat or PR comments:

Changes needed:
1. The rate limit should be configurable via environment variable
   RATE_LIMIT_MAX, defaulting to 100
2. Add a log message when a request is rate-limited, including
   the IP address
3. The test for the skip route is using the wrong endpoint -
   should be /api/health, not /health
```
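The first two change requests above are concrete enough that you can predict the shape of the fix. A sketch of what they imply, with the environment variable name taken from the request and the log format as an illustrative assumption:

```typescript
// Read the limit from RATE_LIMIT_MAX, falling back to 100 when the
// variable is unset or not a positive integer. Illustrative sketch.
function getRateLimitMax(
  env: Record<string, string | undefined> = process.env
): number {
  const raw = env.RATE_LIMIT_MAX;
  const parsed = raw === undefined ? NaN : Number(raw);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : 100;
}

// Log when a request is rejected, including the client IP (change #2).
// The message format here is an assumption, not a required one.
function logRateLimited(ip: string): void {
  console.warn(`rate limit exceeded for ${ip}`);
}
```

Feedback written at this level of precision is cheap to act on, which is why numbered, specific change requests work better than "this isn't quite right."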