

Overview

AgentUse supports multiple AI providers. You need to authenticate with at least one provider to run agents.
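For example, a minimal first session (assuming an agent file named agent.agentuse) looks like:
# Authenticate with one provider, then run an agent
agentuse provider login anthropic
agentuse run agent.agentuse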

Supported Providers

Anthropic

Claude models (Opus, Sonnet, Haiku). Supports OAuth and API keys.

OpenAI

GPT models including GPT-5, GPT-4, GPT-4o. Supports OAuth and API keys.

OpenRouter

Access to 100+ models via a unified API. API key authentication.

Amazon Bedrock

Claude, Llama, Mistral, Nova, and more via AWS. Supports AWS SigV4 or Bearer token authentication.

Custom / Local

Any OpenAI-compatible endpoint: Ollama, LM Studio, vLLM, llama.cpp, etc.

Authentication Methods

1. Provider Login

The simplest way to authenticate:
# Interactive menu
agentuse provider login

# Or login to specific provider
agentuse provider login anthropic
agentuse provider login openai
agentuse provider login openrouter

2. Environment Variables

Set API keys as environment variables:
# Add to ~/.bashrc, ~/.zshrc, or ~/.profile
export ANTHROPIC_API_KEY="sk-ant-api03-..."
export OPENAI_API_KEY="sk-proj-..."
export OPENROUTER_API_KEY="sk-or-v1-..."

3. Configuration File

Create a .env file in your project:
# .env
ANTHROPIC_API_KEY=sk-ant-api03-...
OPENAI_API_KEY=sk-proj-...
OPENROUTER_API_KEY=sk-or-v1-...

Advanced Environment Variable Configuration

AgentUse supports flexible environment variable patterns for multiple API keys:
# Using environment suffixes for different keys
export ANTHROPIC_API_KEY_DEV=sk-ant-api03-dev-key
export ANTHROPIC_API_KEY_PROD=sk-ant-api03-prod-key
export OPENAI_API_KEY_PERSONAL=sk-proj-personal-key

# Then specify in model string
agentuse run agent.md --model anthropic:claude-sonnet-4-6:dev
agentuse run agent.md --model openai:gpt-5.2:OPENAI_API_KEY_PERSONAL
Never commit .env files to version control! Add to .gitignore:
.env
.env.local
.env.*.local

Amazon Bedrock

Bedrock authenticates with standard AWS credentials rather than agentuse provider login. Three modes are supported (in priority order):

1. Static IAM access keys (SigV4)
export AWS_REGION=us-east-1
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
# Optional, for STS / assumed-role temporary credentials
export AWS_SESSION_TOKEN="..."
2. Bedrock API key (Bearer token)
export AWS_REGION=us-east-1
export AWS_BEARER_TOKEN_BEDROCK="..."
3. AWS SDK credential provider chain (used automatically when neither of the above is set). Resolves AWS_PROFILE, ~/.aws/credentials, SSO cache, EC2/ECS/EKS instance roles, etc.
# Refresh SSO / assume-role credentials for your profile, then run:
export AWS_REGION=eu-west-1
export AWS_PROFILE=my-bedrock-profile
agentuse run agent.agentuse -m bedrock:us.anthropic.claude-sonnet-4-5-20250929-v1:0
Use the full Bedrock model ID (which contains colons) as-is, or an inference profile ARN:
agentuse run agent.agentuse -m bedrock:us.anthropic.claude-sonnet-4-5-20250929-v1:0
agentuse run agent.agentuse -m bedrock:meta.llama3-70b-instruct-v1:0
agentuse run agent.agentuse -m bedrock:arn:aws:bedrock:eu-west-1:123456789012:application-inference-profile/abcd1234
Your IAM user/role needs AmazonBedrockFullAccess (or an equivalent custom policy) and you must have requested access to the foundation model in the AWS console. The bedrock: prefix bypasses the static model registry, so any Bedrock model ID or inference-profile ARN is accepted.
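If you prefer a narrower custom policy over AmazonBedrockFullAccess, a minimal sketch might grant only the invoke actions (illustrative, not an official policy; scope Resource to your specific models or inference profiles as needed):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}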

Custom Providers (Local LLMs)

Connect to any OpenAI-compatible endpoint: Ollama, LM Studio, vLLM, llama.cpp, and more.

Adding a Custom Provider

# Ollama (default port 11434)
agentuse provider add ollama --url http://localhost:11434/v1

# LM Studio (default port 1234)
agentuse provider add lmstudio --url http://localhost:1234/v1

# With optional API key
agentuse provider add myserver --url https://my-gpu-server.example.com/v1 --key sk-mykey123

Using Custom Providers

In your agent file:
---
model: ollama:glm-4.7-flash:q4_K_M
---
---
model: lmstudio:qwen/qwen3.5-9b
---
Or override at runtime:
agentuse run agent.agentuse -m ollama:glm-4.7-flash:q4_K_M
agentuse run agent.agentuse -m lmstudio:qwen/qwen3.5-9b
Custom providers support colons in model names: ollama:qwen3.5:0.8b is parsed as provider ollama, model qwen3.5:0.8b.

Environment Variable Overrides

Custom providers support env var overrides using the uppercased provider name:
# Override base URL and API key for a custom provider named "ollama"
export OLLAMA_BASE_URL=http://192.168.1.100:11434/v1
export OLLAMA_API_KEY=sk-optional-key

Managing Custom Providers

# List all providers (including custom)
agentuse provider list

# Remove a custom provider
agentuse provider remove ollama
Local model limitations:
  • Quality: Small models (under 30B parameters) often struggle with complex tool use, multi-step reasoning, and following detailed agent instructions. For best results, use 30B+ parameter models.
  • Context window: Ensure your model is loaded with a context size of at least 8192 tokens; AgentUse's system prompts can exceed 4096 tokens. One way to do this with Ollama is shown after this list.
  • Tool calling: Many local models have limited or no native tool/function calling support, which may cause agents with tools to fail or behave unpredictably.
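As one illustration (assuming Ollama as the local runtime; the base model name below is just a placeholder), you can bake a larger context window into a derived model with a Modelfile:
# Modelfile
FROM qwen3:8b
PARAMETER num_ctx 8192

# Build the derived model and point AgentUse at it
ollama create qwen3-8k -f Modelfile
agentuse run agent.agentuse -m ollama:qwen3-8k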

Managing Credentials

List Stored Credentials

agentuse provider list
Output:
πŸ“ Credentials stored in: ~/.local/share/agentuse/auth.json

Stored credentials:
  πŸ”‘ anthropic (oauth) β†’ Use as: anthropic:claude-sonnet-4-6
  🎫 openai (api) β†’ Use as: openai:gpt-5.2

Environment variables:
  🌍 openrouter (OPENROUTER_API_KEY) β†’ Use as: openrouter:z-ai/glm-4.7

Custom Providers:
  πŸ”Œ ollama β†’ http://localhost:11434/v1
  πŸ”Œ lmstudio β†’ http://localhost:1234/v1

Remove Credentials

# Remove specific provider
agentuse provider logout anthropic

# Remove a custom provider
agentuse provider remove ollama

Rotate API Keys

# Logout and login again
agentuse provider logout openai
agentuse provider login openai

Getting API Keys

Get API keys from provider consoles:
  • Anthropic: console.anthropic.com → API Keys → Create Key (starts with sk-ant-api03-)
  • OpenAI: platform.openai.com → API Keys → Create new secret key (starts with sk-proj-)
  • OpenRouter: openrouter.ai → Keys → Create Key (starts with sk-or-v1-)
Keys are shown only once. Store them securely and never commit to version control.

Authentication Priority Order

AgentUse checks authentication sources in this order:
  1. OAuth tokens - Checked first and refreshed automatically
  2. Stored API keys (via agentuse provider login) - Stored in ~/.local/share/agentuse/auth.json
  3. Environment variables - ANTHROPIC_API_KEY, OPENAI_API_KEY, OPENROUTER_API_KEY
  4. Custom environment variables - Using suffix patterns (e.g., ANTHROPIC_API_KEY_DEV) or full variable names
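For example, if you have both logged in with agentuse provider login anthropic and exported ANTHROPIC_API_KEY, the stored credential wins; removing it falls back to the environment variable (illustrative only):
# The stored credential (priority 2) is used even though the env var (priority 3) is set
agentuse provider login anthropic
export ANTHROPIC_API_KEY="sk-ant-api03-..."

# Log out to fall back to the environment variable
agentuse provider logout anthropic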

Runtime Model Override

Override the model at runtime using the --model flag:
agentuse run agent.agentuse --model anthropic:claude-haiku-4-5
agentuse run agent.agentuse -m openai:gpt-5-mini

CLI Commands - Model Override

See the complete reference for model override format, environment-specific keys, CI/CD examples, and sub-agent inheritance behavior.

Multi-Provider Setup

Use different providers for different agents:
---
name: fast-agent
model: anthropic:claude-haiku-4-5  # Uses Anthropic
---
---
name: powerful-agent
model: openai:gpt-5  # Uses OpenAI GPT-5
openai:
  reasoningEffort: high    # More thorough reasoning
  textVerbosity: medium    # Balanced response length
---
---
name: specialized-agent
model: openrouter:z-ai/glm-4.7  # Uses OpenRouter
---
---
name: local-agent
model: ollama:glm-4.7-flash:q4_K_M  # Uses local Ollama
---

Provider Options

Configure provider-specific settings for fine-tuned model behavior:

OpenAI Provider Options

For OpenAI models (especially GPT-5), you can control thinking effort and verbosity:
---
model: openai:gpt-5
openai:
  reasoningEffort: high  # Options: 'none', 'minimal', 'low', 'medium', 'high', 'xhigh'
  textVerbosity: low     # Options: 'low', 'medium', 'high'
---
reasoningEffort: Controls thinking effort for reasoning
  • none: Disable reasoning when the selected model supports it
  • minimal: Use the lightest reasoning mode
  • low: Faster responses with less thorough reasoning
  • medium: Balanced performance (default)
  • high: More comprehensive reasoning, slower responses
  • xhigh: Maximum reasoning for models that support it
textVerbosity: Controls response length and detail
  • low: Concise, minimal prose
  • medium: Balanced detail (default)
  • high: Verbose, detailed explanations
If you omit reasoningEffort or textVerbosity, AgentUse leaves the field unset and uses the OpenAI/AI SDK defaults. Reasoning effort is typically medium on reasoning models. none and xhigh are model-specific OpenAI options; if a model does not support a selected effort level, OpenAI will reject the request.
These options help you optimize for:
  • Speed vs Quality: Lower reasoning effort for faster responses
  • Conciseness vs Detail: Lower verbosity for more direct answers
  • Cost Optimization: Lower settings reduce token usage

Troubleshooting

Invalid API key:
  • Verify the API key is correct (you can also test it directly against the provider API; see the example after this list)
  • Check the key hasn't expired
  • Ensure the key has the required permissions
  • Try logging out and back in
Rate limit errors:
  • Check your API tier and limits
  • Implement exponential backoff
  • Consider upgrading your plan
  • Use different keys for different projects
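One quick way to sanity-check a key outside of AgentUse is to call the provider's model-listing endpoint directly (OpenAI shown here as an example; other providers expose similar endpoints):
# A successful JSON model list means the key is accepted
curl -s https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"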

CI/CD Authentication

For automated environments:

GitHub Actions

name: Run Agent
on: [push]
jobs:
  agent:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g pnpm && pnpm install -g agentuse
      - run: agentuse run my-agent.agentuse
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}

Docker

FROM node:20
RUN npm install -g pnpm && pnpm install -g agentuse
# Pass API keys at runtime rather than baking them into the image:
#   docker run -e ANTHROPIC_API_KEY -e OPENAI_API_KEY -e OPENROUTER_API_KEY <image>
CMD ["agentuse", "run", "agent.agentuse"]

Next Steps

Quick Start

Start using your authenticated providers

Creating Agents

Build agents with your providers