This sanitized config shows the key settings referenced in the OpenClaw guide.
- Copy `config-example.json` to `~/.openclaw/openclaw.json`
- Replace all `YOUR_*` placeholders with real values
- Run `openclaw doctor --fix` to validate
- Run `openclaw security audit --deep` to check for issues
The coordinator vs worker pattern:
- Keep expensive models (Opus, Sonnet) out of the `primary` slot
- Use capable but cheap models as your default
- Strong models go in `fallbacks` or pinned to specific agents
Why this matters: Expensive defaults = burned quotas on routine work. Cheap defaults with scoped fallbacks = predictable costs.
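A minimal sketch of that split, following the `agents.defaults.model` shape this guide describes; the model IDs here are illustrative, not recommendations:

```json
"agents": {
  "defaults": {
    "model": {
      "primary": "ollama/llama3.2:3b",
      "fallbacks": [
        "deepseek/deepseek-chat",
        "deepseek/deepseek-reasoner"
      ]
    }
  }
}
```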
Memory search uses cheap embeddings (`text-embedding-3-small`) to search your memory files.
Cost comparison:
- Thousands of searches: ~$0.10
- Using premium models for the same: $5-10+
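A sketch of the block that enables this, using the standard `memorySearch` keys; the `model` value assumes an OpenAI embeddings endpoint:

```json
"memorySearch": {
  "enabled": true,
  "sources": ["memory", "sessions"],
  "provider": "openai",
  "model": "text-embedding-3-small"
}
```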
`cache-ttl` mode:
- Keeps prompt cache valid for 6 hours
- Automatically drops old messages when cache expires
- `keepLastAssistants: 3` preserves recent continuity
Why TTL matters: Without this, you'll hit token limits faster and pay for re-processing the same context repeatedly.
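Roughly, the pruning block looks like this; `contextPruning`, `cache-ttl`, and `keepLastAssistants` come from this guide, while the exact `ttl` key name is an assumption, so validate with `openclaw doctor --fix`:

```json
"contextPruning": {
  "mode": "cache-ttl",
  "ttl": "6h",
  "keepLastAssistants": 3
}
```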
What it does:
When context hits `softThresholdTokens` (40k), the agent distills the session into `memory/YYYY-MM-DD.md`.
The prompt matters: The flush prompt tells the agent what to remember. Focus on decisions, state changes, and lessons—not routine exchanges.
When it writes `NO_FLUSH`:
If nothing worth storing happened, the agent skips the write. No clutter.
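A hedged sketch of the compaction block, based on the `compaction` section shown in the config later in this page; `softThresholdTokens` is the 40k threshold named above, and the `prompt` key is an assumption about the schema:

```json
"compaction": {
  "mode": "safeguard",
  "memoryFlush": {
    "enabled": true,
    "softThresholdTokens": 40000,
    "prompt": "Record decisions, state changes, and lessons. Reply NO_FLUSH if nothing is worth storing."
  }
}
```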
Use the cheapest model you have access to.
Heartbeats run often but do simple checks (read a file, check a condition). No reason to burn premium models here.
Example costs:
- GPT-5 Nano: ~$0.0001 per heartbeat
- Claude Sonnet: ~$0.005 per heartbeat
At 48 heartbeats/day, that's $0.005/day vs $0.24/day.
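One way to scope this is a dedicated agent pinned to a cheap model. This sketch assumes per-agent model overrides in `agents.list` mirror the defaults block; the `heartbeat` agent ID and model ID are illustrative:

```json
"list": [
  { "id": "main", "default": true },
  { "id": "heartbeat", "model": { "primary": "openai/gpt-5-nano" } }
]
```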
"maxConcurrent": 4,
"subagents": {
"maxConcurrent": 8
}Why this matters: Prevents one bad task from spawning 50 retries and burning your quota in minutes.
"gateway": {
"bind": "loopback"
}Critical: This binds the gateway to 127.0.0.1 (localhost only), not 0.0.0.0 (all interfaces).
Check it:
```bash
netstat -an | grep 18789 | grep LISTEN
# You want: 127.0.0.1:18789
# NOT:      0.0.0.0:18789
```
If you see 0.0.0.0, your gateway is exposed to the network. Fix it immediately.
"logging": {
"redactSensitive": "tools"
}Redacts sensitive data (API keys, tokens) from tool output in logs.
Options:
"off"- no redaction (dangerous)"tools"- redact tool output only (recommended)"all"- aggressive redaction (can make debugging harder)
The `models.providers.synthetic` section shows how to add a custom provider.
Why Synthetic:
- Free access to GLM 4.7 and Kimi K2.5
- Hosted models, no local hardware needed
- Good fallback options when Anthropic quotas are exhausted
See the full guide for referral links and setup details.
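As a sketch, a custom provider entry follows the same shape as any `models.providers` block (`baseUrl`, `apiKey`, `api`, `models`); the base URL placeholder and model entry here are illustrative assumptions, not Synthetic's actual values:

```json
"models": {
  "providers": {
    "synthetic": {
      "baseUrl": "YOUR_SYNTHETIC_BASE_URL",
      "apiKey": "YOUR_SYNTHETIC_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "synthetic/glm-4.7",
          "name": "GLM 4.7",
          "input": ["text"],
          "contextWindow": 130000,
          "maxTokens": 8000
        }
      ]
    }
  }
}
```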
Your workspace should look like this:
```
~/.openclaw/
├── openclaw.json        # Main config (this file, sanitized)
├── credentials/         # API keys (chmod 600)
│   ├── openrouter
│   ├── anthropic
│   └── synthetic
└── workspace/           # Your working directory
    ├── AGENTS.md
    ├── SOUL.md
    ├── USER.md
    ├── TOOLS.md
    ├── HEARTBEAT.md
    ├── memory/
    │   ├── 2026-02-07.md
    │   └── ...
    └── skills/
        └── your-skills/
```
Before running OpenClaw in production:
```bash
# 1. Validate config
openclaw doctor --fix

# 2. Security audit
openclaw security audit --deep

# 3. Lock down permissions
chmod 700 ~/.openclaw
chmod 600 ~/.openclaw/openclaw.json
chmod 700 ~/.openclaw/credentials

# 4. Verify localhost binding
netstat -an | grep 18789 | grep LISTEN

# 5. Check for exposed secrets
grep -r "sk-" ~/.openclaw/  # Should find nothing in logs
```
1. Leaving expensive models as default
- Opus/Sonnet in `primary` = quota burnout
- Move them to fallbacks or agent-specific configs
2. No context pruning
- Token usage climbs, costs spiral
- Add `contextPruning` with `cache-ttl`
3. Gateway exposed to network
bind: "0.0.0.0"= anyone can access your agent- Always use
bind: "loopback"unless you know what you're doing
4. No concurrency limits
- One stuck task spawns 50 retries
- Set `maxConcurrent` to something sane (4-8)
5. Skipping security audit
- Run `openclaw security audit --deep` after every config change
- Address critical issues immediately
- Set up your channels (Telegram, Discord, etc.)
- Configure role-specific agents (monitor, researcher, communicator)
- Add skills to `workspace/skills/`
- Set up heartbeat checks in `HEARTBEAT.md`
- Test in a local session before enabling 24/7 mode
- Full guide: https://gist.github.com/digitalknk/ec360aab27ca47cb4106a183b2c25a98
- Official docs: https://docs.openclaw.ai
- GitHub issues: https://github.com/openclaw/openclaw/issues
- Discord community: https://discord.com/invite/clawd
- Skill directory: https://clawhub.com
After you're running, check usage regularly:
```bash
# Check quotas (if you have the script)
check-quotas

# Monitor costs in provider dashboards
# - OpenRouter: https://openrouter.ai/activity
# - Anthropic: https://console.anthropic.com/settings/usage
# - OpenAI: https://platform.openai.com/usage
```
Target: $45-50/month for moderate usage (main session + occasional subagents).
If costs climb above $100/month, check for:
- Expensive model in default config
- Runaway agent retries (no concurrency limits)
- Memory flush running too often
- Heartbeat using premium model
I have a connection error that appears when sending messages from my Telegram bot.
I am using this configuration. Can you please review it and show me the problem and recommended fixes?
```json
{
"meta": {
"lastTouchedVersion": "2026.2.6-3",
"lastTouchedAt": "2026-02-09T22:51:47.449Z"
},
"gateway": {
"port": 18789,
"mode": "local",
"bind": "loopback",
"auth": {
"mode": "token",
"token": "47e0a6cfb971cd35bc5e174711e15af9a11e82546ea5aad5",
"allowTailscale": true
}
},
"wizard": {
"lastRunAt": "2026-02-09T22:51:47.447Z",
"lastRunVersion": "2026.2.6-3",
"lastRunCommand": "doctor",
"lastRunMode": "local"
},
"models": {
"mode": "merge",
"providers": {
"ollama": {
"baseUrl": "http://localhost:11434",
"apiKey": "ollama-local",
"api": "openai-completions",
"models": [
{
"id": "ollama/deepseek-coder:6.7b",
"name": "deepseek-coder:6.7b",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 130000,
"maxTokens": 8000
},
{
"id": "ollama/phi:latest",
"name": "phi:latest",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 130000,
"maxTokens": 8000
},
{
"id": "ollama/llama3.2:3b",
"name": "llama3.2:3b",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 130000,
"maxTokens": 8000
},
{
"id": "ollama/mistral:7b-instruct",
"name": "mistral:7b-instruct",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 130000,
"maxTokens": 8000
},
{
"id": "ollama/llama2:13b",
"name": "llama2:13b",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 130000,
"maxTokens": 8000
},
{
"id": "ollama/mistral:7b",
"name": "mistral:7b",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 130000,
"maxTokens": 8000
},
{
"id": "ollama/llama2:7b",
"name": "llama2:7b",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 130000,
"maxTokens": 8000
}
]
},
"deepseek": {
"baseUrl": "https://api.deepseek.com/v1",
"apiKey": "MY DEEPSEEK API",
"api": "openai-completions",
"models": [
{
"id": "deepseek/deepseek-chat",
"name": "DeepSeek-Chat",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 200000,
"maxTokens": 8192
},
{
"id": "deepseek/deepseek-reasoner",
"name": "deepseek-reasoner",
"reasoning": true,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 200000,
"maxTokens": 8192
}
]
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "ollama/deepseek-coder:6.7b",
"fallbacks": [
"ollama/phi:latest",
"ollama/llama3.2:3b",
"ollama/mistral:7b-instruct",
"ollama/llama2:13b",
"ollama/mistral:7b",
"ollama/llama2:7b",
"deepseek/deepseek-chat",
"deepseek/deepseek-reasoner"
]
},
"models": {
"ollama/deepseek-coder:6.7b": {},
"ollama/phi:latest": {},
"ollama/llama3.2:3b": {},
"ollama/mistral:7b-instruct": {},
"ollama/llama2:13b": {},
"ollama/mistral:7b": {},
"ollama/llama2:7b": {},
"deepseek/deepseek-chat": {},
"deepseek/deepseek-reasoner": {}
},
"workspace": "/root/.openclaw/workspace",
"memorySearch": {
"enabled": true,
"sources": [
"memory",
"sessions"
],
"experimental": {
"sessionMemory": true
},
"provider": "openai",
"remote": {
"baseUrl": "http://127.0.0.1:11434/v1",
"apiKey": "ollama-local"
},
"model": "nomic-embed-text:latest"
},
"compaction": {
"mode": "safeguard",
"memoryFlush": {
"enabled": true
}
},
"maxConcurrent": 4,
"subagents": {
"maxConcurrent": 8
}
},
"list": [
{
"id": "main",
"default": true
}
]
},
"messages": {
"ackReactionScope": "group-mentions"
},
"commands": {
"native": "auto",
"nativeSkills": "auto"
},
"channels": {
"telegram": {
"enabled": true,
"dmPolicy": "pairing",
"botToken": "MY TELEGRAM TOKEN",
"groupPolicy": "allowlist",
"streamMode": "partial"
}
},
"plugins": {
"entries": {
"telegram": {
"enabled": true
}
}
},
"auth": {
"profiles": {
"ollama:default": {
"provider": "ollama",
"mode": "api_key"
},
"deepseek:default": {
"provider": "deepseek",
"mode": "api_key"
}
}
},
"bindings": [
{
"agentId": "main",
"match": {
"channel": "telegram"
}
}
]
}
```