Agent Permission Models
Understanding how AI coding agents handle permissions is the foundation of agent safety. Each agent has a different approach to controlling what actions it can take — and knowing these models helps you configure the right level of protection.
How Different Agents Handle Permissions
AI coding agents vary significantly in how they manage the boundary between "suggest" and "execute." Here's a comparison of the major agents:
| Agent | Permission Model | Confirmation Required? | Configurable? |
|---|---|---|---|
| Claude Code | Tool-level allow/deny with CLAUDE.md restrictions | Yes, per tool call (unless allow-listed) | Highly configurable |
| Codex CLI | Three modes: suggest, auto-edit, full-auto | Depends on mode | Yes, via mode selection |
| GitHub Copilot | IDE-integrated with inline accept/reject | Yes, for terminal commands | Limited |
| Cursor | IDE agent with diff review before apply | Yes, for file changes | Moderate |
| Windsurf | Cascade flow with step-by-step confirmation | Yes, at each step | Limited |
| Aider | Auto-commit with git integration | Optional (--yes flag) | Yes, via flags |
Claude Code's Permission System
Claude Code has one of the most granular permission models among current AI coding agents. It operates on a tool-level allow/deny system:
Tool Categories
```
# Read-only tools (low risk - can be auto-allowed)
- Read files
- List directory contents
- Search/grep codebase
- View git log/status/diff

# Write tools (medium risk - require confirmation by default)
- Edit/write files
- Create directories

# Execute tools (high risk - always require confirmation)
- Run bash commands
- Execute shell scripts
- Run arbitrary programs

# Dangerous patterns (should be denied or require extra caution)
- Commands containing: rm -rf, drop database, terraform destroy
- Force pushes: git push --force
- Credential access: cat ~/.aws/credentials
```
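These tiers can be encoded in a project-level settings file. The snippet below is a sketch of a `.claude/settings.json` generated from the shell; the exact rule syntax is an approximation and may vary between Claude Code versions, so check the current documentation before relying on it:

```shell
# Sketch: write a project-level .claude/settings.json mirroring the
# risk tiers above. Rule syntax is approximate, not authoritative.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "allow": [
      "Read",
      "Grep",
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(git log:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)",
      "Bash(terraform destroy:*)",
      "Bash(git push --force:*)",
      "Read(~/.aws/credentials)"
    ]
  }
}
EOF
```

Anything not matched by `allow` or `deny` falls back to the default behavior of prompting for confirmation.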
CLAUDE.md Restrictions
The CLAUDE.md file at your project root can define safety rules for Claude Code to follow. Note that these are instructions the model reads, not hard technical enforcement, so pair them with permission settings and scoped credentials for anything critical:
```markdown
# Safety Rules

## Forbidden Commands
- NEVER run `terraform destroy` or `terraform apply` without plan
- NEVER run `kubectl delete namespace` on production contexts
- NEVER run `git push --force` to main or master branches
- NEVER run `rm -rf /` or any recursive delete on root paths
- NEVER run `docker system prune -af` without confirmation
- NEVER access or display credentials, secrets, or API keys

## Required Patterns
- ALWAYS run `terraform plan` before `terraform apply`
- ALWAYS use the `--dry-run` flag when available
- ALWAYS create a backup before modifying databases
- ALWAYS work on feature branches, never directly on main

## Infrastructure Rules
- Only interact with the dev/staging AWS account (123456789012)
- Never modify resources tagged with `env:production`
- Use read-only credentials for cloud exploration
```
GitHub Copilot Agent Mode Safeguards
GitHub Copilot's agent mode in VS Code operates with several built-in safeguards:
- Terminal command confirmation: Every terminal command the agent wants to run is shown to the user for approval before execution
- Diff preview: File changes are shown as diffs that the user can accept or reject
- Workspace scope: The agent operates within the current workspace, limiting its reach
- Extension sandboxing: Runs within VS Code's extension sandbox, limiting system access
The Principle of Least Privilege for AI Agents
Just as you wouldn't give a junior developer admin access on day one, AI agents should operate with the minimum permissions needed for their task:
Identify the Task Scope
Before starting an agent session, define what the agent needs to do. "Fix the CSS bug in the login page" doesn't need AWS credentials or database access.
Create Task-Specific Credentials
Use short-lived, scoped credentials for each agent session rather than your personal admin credentials.
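One way to mint such credentials is a small wrapper around `aws sts assume-role`. This is a sketch: the `AgentReadOnly` role and the `creds_to_exports` helper are hypothetical names, not part of the AWS CLI.

```shell
# Sketch: turn `aws sts assume-role` JSON output into export statements
# for a short-lived agent session. Helper name is illustrative.
creds_to_exports() {
  python3 -c "
import json, sys
c = json.load(sys.stdin)['Credentials']
print('export AWS_ACCESS_KEY_ID=' + c['AccessKeyId'])
print('export AWS_SECRET_ACCESS_KEY=' + c['SecretAccessKey'])
print('export AWS_SESSION_TOKEN=' + c['SessionToken'])
"
}

# Usage (requires AWS access; shown for illustration):
# aws sts assume-role \
#   --role-arn arn:aws:iam::123456789012:role/AgentReadOnly \
#   --role-session-name "agent-$(date +%s)" \
#   --duration-seconds 3600 | creds_to_exports
```

Because the session token expires after the requested duration, a leaked or misused credential has a bounded blast radius.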
Restrict Tool Access
If the agent only needs to edit files, disable shell execution. If it needs to run tests, allow only specific test commands.
Set Environment Boundaries
Point the agent at development environments, not production. Use separate kubeconfig contexts, AWS profiles, and database connections.
Define a restricted AWS profile for AI agent sessions:

```ini
# ~/.aws/config
[profile agent-readonly]
role_arn = arn:aws:iam::123456789012:role/AgentReadOnly
source_profile = default
region = us-east-1
# 1-hour session max
duration_seconds = 3600
```

```shell
# Before starting an agent session:
export AWS_PROFILE=agent-readonly
export AWS_DEFAULT_REGION=us-east-1

# Verify the agent sees only read-only permissions
aws sts get-caller-identity
# Should show the AgentReadOnly role, not your admin identity
```
Configuring Confirmation for Destructive Commands
Most agents let you configure which commands require human confirmation. The policy below is a generic illustration of the three tiers; the exact schema and key names vary by agent, so translate it into your agent's own configuration format:
```json
{
  "permissions": {
    "auto_allow": [
      "read_file",
      "list_directory",
      "search_codebase",
      "git_status",
      "git_diff",
      "git_log"
    ],
    "require_confirmation": [
      "write_file",
      "execute_command",
      "git_commit",
      "git_push"
    ],
    "always_deny": [
      "commands_matching: rm -rf /",
      "commands_matching: terraform destroy",
      "commands_matching: kubectl delete namespace",
      "commands_matching: git push.*--force.*main",
      "commands_matching: DROP DATABASE",
      "commands_matching: aws .* delete-"
    ]
  }
}
```
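A deny list like this can also be enforced with a simple pre-execution hook. The sketch below is illustrative: the `is_denied` function and its pattern list are assumptions, not any agent's built-in API.

```shell
# Sketch of a pre-execution guard for an "always_deny" list.
# Function name and patterns are illustrative, not an agent API.
DENY_PATTERNS=(
  'rm -rf /'
  'terraform destroy'
  'kubectl delete namespace'
  'git push.*--force.*(main|master)'
  'DROP DATABASE'
  'aws .* delete-'
)

# Return 0 (deny) if the command matches any pattern, 1 otherwise.
is_denied() {
  local cmd="$1" pat
  for pat in "${DENY_PATTERNS[@]}"; do
    if echo "$cmd" | grep -Eq "$pat"; then
      echo "DENIED: matches pattern '$pat'" >&2
      return 0
    fi
  done
  return 1
}
```

A wrapper like this can sit between the agent and the shell, refusing to forward any command that matches, regardless of what the model decides to attempt.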
Sandboxed vs Unsandboxed Execution
AI agents can operate in different execution modes that fundamentally change their risk profile:
| Mode | Description | Risk Level | Use Case |
|---|---|---|---|
| Fully Sandboxed | Agent runs in a container with no network access and ephemeral filesystem | Very Low | Code generation, refactoring |
| Network Sandboxed | Agent can modify local files but cannot reach external services | Low | Local development, testing |
| Scoped Access | Agent has network access but limited to specific services/accounts | Medium | Integration testing, staging deploys |
| Full Access | Agent has the same access as the developer | High | Only with comprehensive guardrails |
```shell
#!/bin/bash
# Run your AI agent inside a sandboxed Docker container:
# no network access, read-only root filesystem, small writable /tmp,
# only the project directory mounted, memory and CPU capped.
docker run --rm -it \
  --name agent-sandbox \
  --network none \
  --read-only \
  --tmpfs /tmp:rw,size=100m \
  -v "$(pwd)":/workspace:rw \
  -w /workspace \
  --memory 2g \
  --cpus 2 \
  node:20-slim \
  bash
```
Key Takeaways
- Every AI coding agent has a different permission model — understand yours
- Use CLAUDE.md or equivalent config to define project-specific safety rules
- Apply the principle of least privilege: agents should only have what they need
- Create task-specific, time-limited credentials for agent sessions
- Prefer sandboxed execution modes when the task doesn't require real infrastructure access
- Configure destructive commands to always require human confirmation
Lilly Tech Systems