Overview
AgentUse supports multiple AI providers. You need to authenticate with at least one provider to run agents.

Supported Providers
Anthropic
Claude models (Opus, Sonnet, Haiku)
Supports OAuth and API keys
OpenAI
GPT models including GPT-5, GPT-4, GPT-4o
Supports OAuth and API keys
OpenRouter
Access to 100+ models via unified API
API key authentication
Amazon Bedrock
Claude, Llama, Mistral, Nova and more via AWS
AWS SigV4 or Bearer token
Custom / Local
Any OpenAI-compatible endpoint: Ollama, LM Studio, vLLM, llama.cpp, etc.
Authentication Methods
1. Interactive Login (Recommended)
The simplest way to authenticate:

2. Environment Variables
Set API keys as environment variables:

3. Configuration File
Create a .env file in your project:
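A minimal .env might look like this (the key values are placeholders; the variable names appear later on this page):

```shell
# .env file, picked up from the project directory (placeholder values)
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
OPENAI_API_KEY=sk-proj-your-key-here
```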
Advanced Environment Variable Configuration
AgentUse supports flexible environment variable patterns for multiple API keys:

Amazon Bedrock
Bedrock authenticates with standard AWS credentials rather than agentuse provider login. Three modes are supported (in priority order):
1. Static IAM access keys (SigV4)
AWS_PROFILE, ~/.aws/credentials, SSO cache, EC2/ECS/EKS instance roles, etc.
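For mode 1, static IAM keys are supplied through the standard AWS environment variables (the values below are placeholders; AWS_REGION is the standard AWS variable, not specific to AgentUse):

```shell
# Static IAM access keys used for SigV4 signing (placeholder values)
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
# Region to send Bedrock requests to
export AWS_REGION="us-east-1"
```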
Your IAM user/role needs AmazonBedrockFullAccess (or an equivalent custom policy), and you must have requested access to the foundation model in the AWS console. The bedrock: prefix bypasses the static model registry, so any Bedrock model ID or inference-profile ARN is accepted.

Custom Providers (Local LLMs)
Connect to any OpenAI-compatible endpoint: Ollama, LM Studio, vLLM, llama.cpp, and more.

Adding a Custom Provider
Using Custom Providers
In your agent file:

Custom providers support colons in model names: ollama:qwen3.5:0.8b is parsed as provider ollama, model qwen3.5:0.8b.

Environment Variable Overrides
Custom providers support env var overrides using the uppercased provider name:

Managing Custom Providers
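The management commands themselves were not preserved here. As a hypothetical sketch (only `agentuse provider login` is confirmed elsewhere on this page; the `list` and `remove` subcommand names below are assumptions):

```shell
# Assumed subcommand names, not confirmed CLI syntax:
agentuse provider list            # show configured custom providers
agentuse provider remove ollama   # drop a custom provider entry
```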
Managing Credentials
List Stored Credentials
Remove Credentials
Rotate API Keys
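As a sketch of the credential workflow (the `login` subcommand appears elsewhere on this page; `list` and `logout` are assumed names):

```shell
# Store or refresh a credential interactively (command confirmed on this page)
agentuse provider login

# Inspect and remove stored credentials (assumed subcommand names)
agentuse provider list
agentuse provider logout

# Rotate a key: create a new key in the provider console, then log in again
agentuse provider login
```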
Getting API Keys
Get API keys from provider consoles:
- Anthropic: console.anthropic.com → API Keys → Create Key (starts with sk-ant-api03-)
- OpenAI: platform.openai.com → API Keys → Create new secret key (starts with sk-proj-)
- OpenRouter: openrouter.ai → Keys → Create Key (starts with sk-or-v1-)
Authentication Priority Order
AgentUse checks authentication sources in this order:
- OAuth tokens - Checked first and refreshed automatically
- Stored API keys (via agentuse provider login) - Stored in ~/.local/share/agentuse/auth.json
- Environment variables - ANTHROPIC_API_KEY, OPENAI_API_KEY, OPENROUTER_API_KEY
- Custom environment variables - Using suffix patterns (e.g., ANTHROPIC_API_KEY_DEV) or full variable names
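The suffix pattern mentioned above (e.g., ANTHROPIC_API_KEY_DEV) lets you keep several keys for one provider; for example (key values are placeholders):

```shell
# Base key plus an environment-specific variant (suffix pattern from this page)
export ANTHROPIC_API_KEY="sk-ant-api03-prod-key"
export ANTHROPIC_API_KEY_DEV="sk-ant-api03-dev-key"
```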
Runtime Model Override
Override the model at runtime using the --model flag:
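For example (the `run` subcommand and the agent name are assumptions; the --model flag and the provider:model format come from this page):

```shell
# Override whatever model the agent file specifies ('run' subcommand assumed)
agentuse run my-agent --model ollama:qwen3.5:0.8b
```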
CLI Commands - Model Override
See the complete reference for model override format, environment-specific keys, CI/CD examples, and sub-agent inheritance behavior.
Multi-Provider Setup
Use different providers for different agents:

Provider Options
Configure provider-specific settings for fine-tuned model behavior:

OpenAI Provider Options
For OpenAI models (especially GPT-5), you can control thinking effort and verbosity:

reasoningEffort: Controls thinking effort for reasoning
- none: Disable reasoning when the selected model supports it
- minimal: Use the lightest reasoning mode
- low: Faster responses with less thorough reasoning
- medium: Balanced performance (default)
- high: More comprehensive reasoning, slower responses
- xhigh: Maximum reasoning for models that support it

textVerbosity: Controls response verbosity
- low: Concise, minimal prose
- medium: Balanced detail (default)
- high: Verbose, detailed explanations
If you omit reasoningEffort or textVerbosity, AgentUse leaves the field unset and uses the OpenAI/AI SDK defaults. Reasoning effort is typically medium on reasoning models. none and xhigh are model-specific OpenAI options; if a model does not support a selected effort level, OpenAI will reject the request.

- Speed vs Quality: Lower reasoning effort for faster responses
- Conciseness vs Detail: Lower verbosity for more direct answers
- Cost Optimization: Lower settings reduce token usage
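A hypothetical configuration sketch: the reasoningEffort and textVerbosity field names come from this page, while the surrounding structure (a providerOptions block keyed by provider, inside the agent file) is an assumption:

```yaml
# Sketch only: keys other than reasoningEffort and textVerbosity are assumed
model: openai:gpt-5
providerOptions:
  openai:
    reasoningEffort: low   # faster, less thorough reasoning
    textVerbosity: low     # concise, minimal prose
```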
Troubleshooting
Authentication failed
- Verify API key is correct
- Check key hasn't expired
- Ensure key has required permissions
- Try logging out and back in
Rate limiting
- Check your API tier and limits
- Implement exponential backoff
- Consider upgrading your plan
- Use different keys for different projects
CI/CD Authentication
For automated environments:

GitHub Actions
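A minimal GitHub Actions sketch, assuming the key is stored as a repository secret (the job layout and the `run` subcommand are illustrative; ANTHROPIC_API_KEY is from this page):

```yaml
# Expose the provider key to the job via the secrets context
jobs:
  run-agent:
    runs-on: ubuntu-latest
    env:
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
    steps:
      - uses: actions/checkout@v4
      - run: agentuse run my-agent   # 'run' subcommand assumed
```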
Docker
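For Docker, pass the key through the environment rather than baking it into the image (the image name is a placeholder):

```shell
# Forward the host's ANTHROPIC_API_KEY into the container
docker run --rm -e ANTHROPIC_API_KEY my-agentuse-image
```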
Next Steps
Quick Start
Start using your authenticated providers
Creating Agents
Build agents with your providers
