Model Configuration and Multi-Model Strategy
The Concept of Model Providers
OpenClaw itself is only a task-execution framework; it has no reasoning ability of its own. Once the system is installed, we need to connect it to a "brain": a Large Language Model (LLM).
Unlike closed-source applications that are tied to a single vendor, OpenClaw adopts a decoupled architecture, allowing us to flexibly configure and dynamically switch between various Model Provider combinations.
Configuring the Foundation via Config Files
To modify or add models, you need to edit the core system file: ~/.openclaw/openclaw.json.
Open it with an editor and locate the "models" structure. This is the neural hub for model configuration:
{
  "models": {
    "providers": {
      "openai": {
        "apiKey": "sk-your-openai-api-key"
      },
      "anthropic": {
        "apiKey": "sk-ant-your-claude-key"
      },
      "ollama": {
        "baseUrl": "http://localhost:11434"
      }
    },
    "defaults": {
      "model": {
        "primary": "anthropic/claude-3-5-sonnet-20241022"
      }
    }
  }
}
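Because a malformed openclaw.json can stop the gateway from starting, it is worth sanity-checking the file before restarting. A small sketch, assuming the structure shown above (the helper function is ours, not part of OpenClaw):

```python
import json
from pathlib import Path

def primary_model(config_path) -> str:
    """Return the default primary model from an openclaw.json-style file.

    The "models" -> "defaults" -> "model" -> "primary" path follows the
    structure shown above; adjust if your config differs.
    """
    config = json.loads(Path(config_path).read_text())
    return config["models"]["defaults"]["model"]["primary"]

# Example: print the current default before editing it.
# print(primary_model(Path.home() / ".openclaw" / "openclaw.json"))
```

If the file contains a syntax error, `json.loads` raises a clear exception instead of the gateway failing silently later.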
Cloud Model Configuration
For cloud-based models like OpenAI (GPT-4o) and Anthropic (Claude 3.5 Sonnet), you only need to provide the authentication key (apiKey) as shown above.
Best Practice: To avoid the security risk of hardcoded keys, reference system environment variables instead (e.g., using the "apiKey": "${OPENAI_API_KEY}" syntax).
Local Model Integration (Ollama)
If you prioritize privacy or want to save on API costs, you can run Ollama on the same machine.
After installing and starting Ollama, pull a model (e.g., ollama pull qwen2.5:14b). Then configure the baseUrl in OpenClaw to point to port 11434. This gives your system a zero-cost local brain for routine tasks.
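Before restarting the gateway, it helps to confirm that Ollama is actually listening. Ollama exposes a small HTTP API on its port; its /api/tags endpoint lists locally pulled models, so a quick reachability probe can look like this (the helper is our own, not part of OpenClaw):

```python
import json
import urllib.error
import urllib.request

def ollama_reachable(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers on base_url.

    GET /api/tags lists locally pulled models; any valid HTTP response
    means the server is up.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            models = json.load(resp).get("models", [])
            print(f"Ollama is up with {len(models)} model(s) pulled.")
            return True
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, check that the Ollama service is running before blaming the OpenClaw configuration.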
Applying Configuration Changes:
Any modification involving openclaw.json requires a restart of the underlying service to take effect:
openclaw gateway restart
Three Ways to Switch Models
Although multiple models can be configured, the system still needs to know which one to use at any given moment. OpenClaw's switching philosophy is command-driven.
Method A: Temporary Chat Command (Most common, session-specific)
Suppose you are chatting with the Agent about a tricky algorithmic problem and feel the current default low-cost model isn't up to the task. You can issue a command directly in the chat input:
/model anthropic/claude-3-5-sonnet-20241022
Now, subsequent messages and agent tasks in this chat window will use Claude 3.5 Sonnet. If the window is closed and reopened, it usually reverts to the system default.
Method B: Global Change via CLI (Persistent)
Modify the default primary model permanently using the built-in CLI management commands:
openclaw models set anthropic/claude-3-5-sonnet-20241022
Method C: Editing openclaw.json (Manual Persistence)
Manually edit the "defaults" -> "primary" field in the JSON structure as shown earlier. Restart the Gateway, and all new sessions will use this model.
Advanced: Why Use a "Multi-Model Collaboration Strategy"?
As your Agent's workload increases, relying solely on a top-tier cloud model (like Claude 3.5 Sonnet) for everything becomes expensive and inefficient. A layered strategy can cut costs while improving throughput:
1. Task Decomposition and Cost-Effectiveness
- Heavy Logic Tasks (e.g., debugging code, cross-document summarization, generating complex shell scripts): explicitly switch to Claude 3.5 Sonnet for high-quality execution.
- Lightweight or Frequent Polling Tasks (e.g., cron jobs checking whether a webpage has updated, or sending hourly time announcements to Telegram): these tasks don't require high intelligence but generate many tokens. Delegating them to a local Ollama model (like Qwen 2.5 or Llama 3) allows unlimited execution at zero marginal cost.
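This split can be expressed as a small routing rule: map each task category to a model identifier and fall back to the cheap local model by default. The category names here are illustrative and not part of any OpenClaw API:

```python
# Hypothetical task-to-model routing table; the task categories are
# our own labels, not OpenClaw configuration keys.
MODEL_FOR_TASK = {
    "debug_code": "anthropic/claude-3-5-sonnet-20241022",      # heavy logic
    "summarize_docs": "anthropic/claude-3-5-sonnet-20241022",  # heavy logic
    "poll_webpage": "ollama/qwen2.5:14b",                      # cheap polling
    "hourly_announce": "ollama/qwen2.5:14b",                   # cheap polling
}

def pick_model(task: str) -> str:
    """Route heavy tasks to the cloud model; everything else stays local."""
    return MODEL_FOR_TASK.get(task, "ollama/qwen2.5:14b")
```

The default branch matters: any task you forget to classify costs nothing rather than burning cloud tokens.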
2. Multi-Agent Decoupling
In the upcoming "Creating an Agent" chapter, we will discuss how to bind different Agents to different underlying engines. You can have a "Scout" Agent with a lightweight model process raw information, then forward the data to a "Commander" Agent with an expensive model for final decision-making.
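The scout/commander division of labor can be sketched as a two-stage pipeline, with the model calls stubbed out as plain callables. This is a conceptual illustration only; the real Agent wiring is covered in the chapter mentioned above:

```python
from typing import Callable

# Stand-ins for a cheap local model and an expensive cloud model;
# in a real setup each callable would invoke its configured provider.
ModelFn = Callable[[str], str]

def scout_then_command(raw: str, scout: ModelFn, commander: ModelFn) -> str:
    """Two-stage pipeline: the cheap model condenses raw input, and the
    expensive model decides based only on the condensed brief."""
    brief = scout(f"Summarize the key facts:\n{raw}")
    return commander(f"Decide what to do given this brief:\n{brief}")
```

The expensive model never sees the raw input, so its token usage scales with the size of the brief rather than the size of the data being monitored.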
By combining these configurations, you can build an AI system that possesses superior reasoning capabilities without incurring high daily polling bills.