```
-q, --quiet              Suppress info messages (only show warnings/errors)
-d, --debug              Enable verbose debug logging
--no-tty                 Disable TUI output (plain logs for piping/automation)
-v, --verbose            Show detailed execution information
--timeout <seconds>      Maximum execution time in seconds (default: 300)
-C, --directory <path>   Run as if agentuse was started in <path> instead of the current directory
--env-file <path>        Path to custom .env file
-m, --model <model>      Override the model specified in the agent file
```
The --quiet and --debug options cannot be used together.
You can also set NO_TTY=true to force plain logging without spinners.
The -C or --directory option sets the starting directory for the command:
```bash
# Without -C: runs from current directory
agentuse run agent.agentuse

# With -C: starts from the specified directory
agentuse run agent.agentuse -C /path/to/project-or-subdir
```
AgentUse searches from the current directory upward for `.agentuse/`, `.git/`, or `package.json`. If none are found, the starting directory is the project root.

The project root is used for:
Loading .env files
Finding plugins in .agentuse/plugins/
Owning shared stores in .agentuse/store/
Grouping session state
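The upward search can be pictured as a small shell function. This is an illustrative sketch of the behavior described above, not AgentUse's actual implementation:

```bash
# Illustrative sketch of the project-root search described above
# (not AgentUse's actual code).
find_project_root() {
  local dir
  dir=$(cd "${1:-.}" && pwd) || return 1
  while [ "$dir" != "/" ]; do
    # A directory containing any of these markers is the project root
    if [ -d "$dir/.agentuse" ] || [ -d "$dir/.git" ] || [ -f "$dir/package.json" ]; then
      printf '%s\n' "$dir"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  # No marker found anywhere above: the starting directory is the root
  cd "${1:-.}" && pwd
}
```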
For agentuse serve, -C also defines the served scope: only .agentuse files under that directory are exposed, while state still belongs to the detected project root.

This is useful for:
Running agents from different projects
CI/CD environments with complex directory structures
```bash
# Override with OpenAI GPT-5
agentuse run agent.agentuse --model openai:gpt-5

# Use Anthropic Claude Haiku for quick testing
agentuse run agent.agentuse -m anthropic:claude-haiku-4-5

# Use different API key via environment suffix
agentuse run agent.agentuse --model openai:gpt-5.2:dev
# This will use OPENAI_API_KEY_DEV instead of OPENAI_API_KEY

# Use specific environment variable
agentuse run agent.agentuse --model anthropic:claude-sonnet-4-6:ANTHROPIC_API_KEY_PERSONAL
```
When overriding models, provider-specific options (like OpenAI’s reasoningEffort)
will be ignored if you switch to a different provider.
Model overrides propagate to all sub-agents. When you use --model, both the parent
agent and all its sub-agents will use the specified model, regardless of what models
are defined in their individual agent files.
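The examples above suggest the override string has the shape `provider:model[:key]`, where a short lowercase suffix selects `PROVIDER_API_KEY_<SUFFIX>` and a full variable name is used as-is. A sketch of that mapping, inferred from the examples rather than from AgentUse's source:

```bash
# Sketch of the provider:model[:key] mapping inferred from the examples above;
# this mirrors the documented behavior, it is not AgentUse's actual code.
model="openai:gpt-5.2:dev"
provider=${model%%:*}    # openai
rest=${model#*:}         # gpt-5.2:dev
if [ "$rest" != "${rest#*:}" ]; then
  key=${rest#*:}         # dev, or a full env var name
  case $key in
    *_API_KEY*) env_var=$key ;;  # full name (e.g. ANTHROPIC_API_KEY_PERSONAL) used as-is
    *) env_var=$(printf '%s_API_KEY_%s' "$provider" "$key" | tr '[:lower:]' '[:upper:]') ;;
  esac
else
  env_var=$(printf '%s_API_KEY' "$provider" | tr '[:lower:]' '[:upper:]')
fi
echo "$env_var"   # OPENAI_API_KEY_DEV
```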
HTTPS Only: Only HTTPS URLs are allowed for security
File Extension: Remote agents must have .agentuse extension
Interactive Confirmation: AgentUse will show a security warning with options:
[p]review - Fetch and display the agent content before running
[y]es - Execute the agent directly
[N]o - Abort execution (default)
Example security prompt:
```
⚠️ WARNING: You are about to execute an agent from:
https://example.com/agent.agentuse

Only continue if you trust the source and have audited the agent.

[p]review / [y]es / [N]o:
```
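The first two checks can be sketched as a small shell function. This illustrates the rules above; the function name is hypothetical and this is not AgentUse's implementation:

```bash
# Illustration of the HTTPS-only and .agentuse-extension rules above;
# the function name is hypothetical, not part of AgentUse.
check_remote_agent() {
  case "$1" in
    https://*.agentuse) return 0 ;;                                              # passes both checks
    http://*)  echo "error: only HTTPS URLs are allowed" >&2; return 1 ;;
    https://*) echo "error: remote agents must have a .agentuse extension" >&2; return 1 ;;
    *)         echo "error: not a remote agent URL" >&2; return 1 ;;
  esac
}

check_remote_agent "https://example.com/agent.agentuse" && echo ok   # ok
```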
```bash
# Run agent from current directory
agentuse run hello.agentuse

# Run agent from subdirectory
cd agents/subfolder
agentuse run my-agent.agentuse  # Finds .env and plugins at project root

# Change to different directory first with -C
agentuse run agent.agentuse -C /my/project  # Same as: cd /my/project && agentuse run agent.agentuse

# Run agent from a different project
agentuse run newsletter/writer.agentuse -C ~/projects/content

# Use custom environment file
agentuse run agent.agentuse --env-file .env.production

# Run with additional prompt
agentuse run assistant.agentuse "Help me write an email"

# Run with multi-word prompt
agentuse run code-review.agentuse "focus on security and performance issues"

# Run remote agent (requires HTTPS and .agentuse extension)
# Shows security warning and requires confirmation
agentuse run https://example.com/agent.agentuse

# Run remote agent with additional prompt
agentuse run https://example.com/agent.agentuse "use production settings"

# Run with debug output
agentuse run agent.agentuse --debug

# Run with additional prompt and verbose output
agentuse run agent.agentuse "be thorough" --verbose

# Run with custom timeout
agentuse run agent.agentuse --timeout 600

# Run quietly (only warnings/errors)
agentuse run agent.agentuse --quiet

# Disable TUI output for piping/automation
agentuse run agent.agentuse --no-tty

# Override model at runtime
agentuse run agent.agentuse --model openai:gpt-5-mini

# Override model and add prompt
agentuse run agent.agentuse "be concise" --model anthropic:claude-haiku-4-5

# Use development API key with model override
agentuse run agent.agentuse -m openai:gpt-5.2:dev

# Override model for agent with sub-agents (all will use the same model)
agentuse run orchestrator.agentuse --model openai:gpt-5-mini
```
```bash
agentuse sessions list [options]
agentuse sessions show <id> [options]
agentuse sessions resume <id> [options]
agentuse sessions path [options]
```
agentuse sessions is a shortcut for agentuse sessions list.

Session data is stored globally and partitioned by project. CLI session commands default to the current project; use --all or --all-search when you are not sure which project owns a session.
```bash
agentuse sessions list
agentuse sessions list --all
agentuse sessions list --project ../other-project
```
List options:
```
-s, --subagents     Include subagent sessions
-n, --limit <n>     Limit number of sessions to show
-j, --json          Output as JSON
--all               Show sessions across all projects
--project [path]    Show sessions for a project path; defaults to current project
```
```bash
agentuse sessions show 01K...
agentuse sessions show 01K... --all-search
agentuse sessions show 01K... --project ../other-project
```
Show options:
```
-j, --json          Output as JSON
-f, --full          Show full tool input/output
--project [path]    Search a project path; defaults to current project
--all-search        Search all projects if not found in the selected project
```
For a completed or errored session, AgentUse starts a new run from the original agent file with continuation context:
agentuse sessions resume 01K... --prompt "Continue from here and add tests"
Resume options:
```
-C, --directory <path>   Run as if agentuse was started in <path>
--project [path]         Search a project path; defaults to current project
--all-search             Search all projects if not found in the selected project
-d, --debug              Enable debug logging for resume
```
Start the AgentUse HTTP daemon to run agents via API. This enables integration with external applications, webhooks, approvals, Slack notifications, or any system that can make HTTP requests.
```
-p, --port <number>      Port to listen on (default: 12233)
-H, --host <string>      Host to bind to (default: 127.0.0.1)
-C, --directory <path>   Directory to serve agents from; project state is detected upward
--default <id>           Default project for POST /run when project is omitted
-d, --debug              Enable debug mode
--no-auth                Disable API key requirement for exposed hosts (dangerous)
--no-log-file            Disable writing serve logs to a file
```
AgentUse runs one serve daemon at a time. Repeat -C on that daemon to serve multiple scopes/projects:
agentuse serve -C ./projA -C ./projB
Project ids default to the -C directory basename. Duplicate ids fail startup. Each scope detects its project root independently, and each project loads its own .env / .env.local in its worker process.

Use the request `project` field to choose a project. In multi-project mode, omitting it returns 400 PROJECT_REQUIRED unless --default <id> is set.

Starting a second daemon fails with the PID, address, projects, and log path of the daemon that is already running. This keeps approval links, Slack replies, session resumes, and API traffic routed through one owner.
```bash
# Start server with defaults (127.0.0.1:12233)
agentuse serve

# Start on custom port
agentuse serve --port 8080

# Bind to all interfaces (for external access)
agentuse serve --host 0.0.0.0 --port 3000

# Set the public URL used in approval review links
agentuse serve --public-url https://agentuse.mycompany.com

# Provide Slack credentials for agents that enable Slack channels
SLACK_BOT_TOKEN=xoxb-... SLACK_APPROVAL_CHANNEL=C0123456789 agentuse serve

# Serve agents from a project or subdirectory
agentuse serve -C /path/to/project-or-subdir

# Host multiple projects from one process
agentuse serve -C ./projA -C ./projB

# Default POST /run to projA when `project` is omitted
agentuse serve -C ./projA -C ./projB --default projA

# Start with debug logging
agentuse serve --debug
```
API Usage Examples:
```bash
# Run an agent (JSON response)
curl -X POST http://127.0.0.1:12233/run \
  -H "Content-Type: application/json" \
  -d '{"agent": "agents/assistant.agentuse"}'

# Run with additional prompt
curl -X POST http://127.0.0.1:12233/run \
  -H "Content-Type: application/json" \
  -d '{"agent": "agents/writer.agentuse", "prompt": "Write about AI"}'

# Override model
curl -X POST http://127.0.0.1:12233/run \
  -H "Content-Type: application/json" \
  -d '{"agent": "agents/helper.agentuse", "model": "openai:gpt-5-mini"}'

# Streaming response
curl -N -X POST http://127.0.0.1:12233/run \
  -H "Content-Type: application/json" \
  -H "Accept: application/x-ndjson" \
  -d '{"agent": "agents/analyzer.agentuse"}'

# Multi-project: pick which project runs the agent
curl -X POST http://127.0.0.1:12233/run \
  -H "Content-Type: application/json" \
  -d '{"project": "projA", "agent": "assistant.agentuse"}'

# Discovery: list served projects
curl http://127.0.0.1:12233/
```
Shows PID, port, project summary, total agent/schedule counts, and uptime. Multi-project daemons show the first project id plus +N additional projects.
Amazon Bedrock authenticates via standard AWS environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, optional AWS_SESSION_TOKEN) or AWS_BEARER_TOKEN_BEDROCK. There is no provider login bedrock command.
```bash
agentuse provider list
```

---

## agentuse skills

List available skills discovered from the project and user skill directories.

### Syntax

```bash
agentuse skills [options]
```
```
Found 2 skill(s):

.agentuse/skills
  code-review     Review code for quality, security, and maintainability...

~/.agentuse/skills
  seo-analyzer    Analyze content for SEO optimization...
```
With verbose mode (-v):
```
Found 2 skill(s):

.agentuse/skills
  code-review     Review code for quality, security, and maintainability...
    /path/to/project/.agentuse/skills/code-review/SKILL.md
    tools: Read, Bash(git:*)
```
For detailed information on creating and using skills, see the Skills Guide.
```
Found 3 agent(s):

agents/
  assistant        General-purpose AI assistant
  code-reviewer    Reviews code for quality and security
  researcher       Researches topics and summarizes findings
```
```
--force              Overwrite existing skills/agents without prompting
--all                Install all skills and agents without prompting
--list               List available skills and agents without installing
-s, --skill <name>   Install specific skill(s) by name (repeatable)
-a, --agent <path>   Install specific agent(s) by path (repeatable)
```
```bash
agentuse run <file> [prompt...]            # Run an agent with optional prompt
agentuse serve [options]                   # Start the HTTP daemon to run agents via API
agentuse serve ps [--json]                 # Show the running serve daemon
agentuse provider login [provider]         # Authenticate with a provider
agentuse provider add <name> --url <url>   # Add custom provider (Ollama, LM Studio, etc.)
agentuse provider logout [provider]        # Remove stored credentials
agentuse provider remove <name>            # Remove a custom provider
agentuse provider list                     # Show all providers and credentials
agentuse provider help                     # Show detailed auth help
agentuse skills [options]                  # List available skills
agentuse agents [options]                  # Discover and list project agents
agentuse add <source> [options]            # Add skills/agents from GitHub or local path
agentuse --version                         # Show version
agentuse --help                            # Show help
```
The agentuse auth command still works as a backward-compatible alias for agentuse provider.
```bash
# Pipe input to agent
echo "Translate this to Spanish" | agentuse run translator.agentuse

# Pipe agent output
agentuse run generator.agentuse | tee output.txt

# Chain agents
agentuse run analyzer.agentuse < data.txt | agentuse run summarizer.agentuse
```
If you see authentication errors, AgentUse will show specific guidance:
```
[ERROR] No API key found for anthropic

To authenticate, run: agentuse provider login
Or set your API key: export ANTHROPIC_API_KEY='your-key-here'
For more options: agentuse provider --help
```
Timeout issues
Increase timeout for long-running agents:
```bash
agentuse run agent.agentuse --timeout 600

# or via environment
MAX_STEPS=2000 agentuse run agent.agentuse
```