How to Switch Models in OpenClaw

OpenClaw supports multiple AI models — from local Ollama models to remote APIs. Here's how to configure and switch between them for different tasks.

Install OmniScriber — Free

Save your AI model evaluation conversations from ChatGPT or Claude

Why Model Selection Matters for AI Agents

Not all AI models are equally good at all tasks. A model that excels at creative writing may struggle with precise code execution. A model optimized for speed may sacrifice accuracy on complex reasoning tasks. For an AI agent like OpenClaw, which needs to handle diverse tasks — from shell commands to web browsing to code generation — model selection has a significant impact on performance.

OpenClaw is model-agnostic by design. It doesn't lock you into a single provider or model. This flexibility is one of its key advantages: you can use the best model for each type of task, switch between local and cloud models based on privacy requirements, and update to newer models as they become available without changing your workflow.

Understanding how to configure and switch models in OpenClaw is essential for getting the most out of the tool.

Which Models Work with OpenClaw

OpenClaw supports a wide range of models through different providers:

Cloud models via API: Claude (Anthropic), GPT-4 and GPT-4o (OpenAI), Gemini (Google), and others. These require API keys and charge per token.

Local models via Ollama: Any model available through Ollama, including Llama 3.2, Qwen 2.5, Mistral, Phi-3, and dozens more. These run on your hardware with no per-token costs.

Custom endpoints: OpenClaw can connect to any OpenAI-compatible API endpoint, which includes many self-hosted model servers and alternative providers.
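As an illustration, pointing OpenClaw at a self-hosted OpenAI-compatible server might look like the sketch below. The `model`, `provider`, and `baseUrl` field names come from this guide; the endpoint URL and model name are placeholders, and the exact schema OpenClaw expects may differ, so check its documentation.

```json
{
  "provider": "openai",
  "model": "my-self-hosted-model",
  "baseUrl": "http://localhost:8000/v1"
}
```

Many self-hosted servers (vLLM, LM Studio, llama.cpp's server mode) expose their OpenAI-compatible API under a `/v1` path, which is why the base URL above includes it.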

The best model for OpenClaw depends on your use case: Claude and GPT-4 for complex reasoning, Qwen 2.5 Coder for coding tasks, smaller local models for quick tasks where privacy matters.

Step-by-Step Guide

1. Open the OpenClaw configuration file

Run `openclaw config` to open the configuration file in your default editor, or navigate directly to `~/.openclaw/config.json`.

2. Update the model field

Change the `model` field to your desired model identifier (e.g., `claude-3-5-sonnet-20241022`, `gpt-4o`, `llama3.2`). For Ollama models, use the model name exactly as shown in `ollama list`.
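For a cloud model, this can be a one-line change. The sketch below uses only the field names mentioned in this guide; OpenClaw's actual config file may contain additional fields you should leave in place.

```json
{
  "provider": "anthropic",
  "model": "claude-3-5-sonnet-20241022"
}
```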

3. Update the provider and API key if needed

If switching between providers (e.g., from Anthropic to OpenAI), update the `provider` field and ensure the correct API key environment variable is set.
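On macOS or Linux, you would typically export the key in your shell profile. The variable names below follow the common Anthropic and OpenAI SDK conventions (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`); whether OpenClaw reads exactly these names is an assumption here, so confirm in its documentation. The values are placeholders.

```shell
# Assumed variable names (Anthropic/OpenAI SDK conventions); values are placeholders.
export ANTHROPIC_API_KEY="your-anthropic-key"
export OPENAI_API_KEY="your-openai-key"
```

Add these lines to `~/.bashrc` or `~/.zshrc` so they persist across sessions.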

4. For Ollama models, set the base URL

Add `"baseUrl": "http://localhost:11434"` to your config when using Ollama. This tells OpenClaw to connect to your local Ollama instance instead of a cloud API.
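Putting steps 2 through 4 together, a local Ollama setup might look like this sketch (the field names and Ollama's default port 11434 are taken from this guide; verify the schema against OpenClaw's docs):

```json
{
  "provider": "ollama",
  "model": "llama3.2",
  "baseUrl": "http://localhost:11434"
}
```

Since Ollama runs locally, no API key is needed. Pull the model first with `ollama pull llama3.2` if it doesn't already appear in `ollama list`.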

5. Test the new model

Run a test task: `openclaw run 'What model are you?'` The response should reflect the new model, though some models don't reliably report their own name. Follow up with a more complex task to verify that performance meets your expectations.

Why Pair with OmniScriber?

Save your model evaluation research

When you're comparing models for OpenClaw, you'll have many conversations with AI about their capabilities. OmniScriber saves those conversations so your research is permanently accessible.

Export model configuration guides

When you ask ChatGPT or Claude how to configure a specific model in OpenClaw, export that conversation with OmniScriber and keep it as a reference.

Archive your benchmark results

As you test different models on your tasks, OmniScriber helps you archive the conversations that capture your findings — building a personal model evaluation library.

Share model recommendations

Export your model evaluation conversations and share them with teammates who are making the same decisions — saving everyone the time of running their own evaluations.

Export Model Comparison Chats Before You Lose Them
