FAQ
Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS, multi-agent, OAuth/API keys, model failover). For runtime diagnostics, see Troubleshooting. For the full config reference, see Configuration.

First 60 seconds if something is broken
- Quick status (first check): fast local summary of OS + update, gateway/service reachability, agents/sessions, and provider config + runtime issues (when the gateway is reachable).
- Pasteable report (safe to share): read-only diagnosis with log tail (tokens redacted).
- Daemon + port state: shows supervisor runtime vs RPC reachability, the probe target URL, and which config the service likely used.
- Deep probes: runs a live gateway health probe, including channel probes when supported (requires a reachable gateway). See Health.
- Tail the latest log: if RPC is down, fall back to the on-disk log file. File logs are separate from service logs; see Logging and Troubleshooting.
- Run the doctor (repairs): repairs/migrates config/state and runs health checks. See Doctor.
- Gateway snapshot: asks the running gateway for a full snapshot (WS-only). See Health.
Quick start and first-run setup
I am stuck, fastest way to get unstuck
- Claude Code: https://www.anthropic.com/claude-code/
- OpenAI Codex: https://openai.com/codex/
--install-method git.

Tip: ask the agent to plan and supervise the fix (step-by-step), then execute only the necessary commands. That keeps changes small and easier to audit.

If you discover a real bug or fix, please file a GitHub issue or send a PR:
https://github.com/openclaw/openclaw/issues
https://github.com/openclaw/openclaw/pulls

Start with these commands (share outputs when asking for help):

- openclaw status: quick snapshot of gateway/agent health + basic config.
- openclaw models status: checks provider auth + model availability.
- openclaw doctor: validates and repairs common config/state issues.

For more detail: openclaw status --all, openclaw logs --follow, openclaw gateway status, openclaw health --verbose.

Quick debug loop: First 60 seconds if something is broken.
Install docs: Install, Installer flags, Updating.
Heartbeat keeps skipping. What do the skip reasons mean?
- quiet-hours: outside the configured active-hours window
- empty-heartbeat-file: HEARTBEAT.md exists but only contains blank/header-only scaffolding
- no-tasks-due: HEARTBEAT.md task mode is active but none of the task intervals are due yet
- alerts-disabled: all heartbeat visibility is disabled (showOk, showAlerts, and useIndicator are all off)
Recommended way to install and set up OpenClaw
pnpm openclaw onboard.
How do I open the dashboard after onboarding?
How do I authenticate the dashboard on localhost vs remote?
- Open http://127.0.0.1:18789/.
- If it asks for shared-secret auth, paste the configured token or password into Control UI settings.
- Token source: gateway.auth.token (or OPENCLAW_GATEWAY_TOKEN).
- Password source: gateway.auth.password (or OPENCLAW_GATEWAY_PASSWORD).
- If no shared secret is configured yet, generate a token with openclaw doctor --generate-gateway-token.
- Tailscale Serve (recommended): keep bind loopback, run openclaw gateway --tailscale serve, open https://<magicdns>/. If gateway.auth.allowTailscale is true, identity headers satisfy Control UI/WebSocket auth (no pasted shared secret; assumes a trusted gateway host); HTTP APIs still require shared-secret auth unless you deliberately use private-ingress none or trusted-proxy HTTP auth. Bad concurrent Serve auth attempts from the same client are serialized before the failed-auth limiter records them, so the second bad retry can already show "retry later".
- Tailnet bind: run openclaw gateway --bind tailnet --token "<token>" (or configure password auth), open http://<tailscale-ip>:18789/, then paste the matching shared secret in dashboard settings.
- Identity-aware reverse proxy: keep the Gateway behind a non-loopback trusted proxy, configure gateway.auth.mode: "trusted-proxy", then open the proxy URL.
- SSH tunnel: ssh -N -L 18789:127.0.0.1:18789 user@host, then open http://127.0.0.1:18789/. Shared-secret auth still applies over the tunnel; paste the configured token or password if prompted.
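For the token path, a minimal sketch in ~/.openclaw/openclaw.json (JSON5; the token value is a placeholder) could look like:

```json5
{
  gateway: {
    auth: {
      // Shared secret the dashboard prompts for.
      // Can also come from OPENCLAW_GATEWAY_TOKEN.
      token: "replace-with-a-long-random-token",
    },
  },
}
```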
Why are there two exec approval configs for chat approvals?
- approvals.exec: forwards approval prompts to chat destinations.
- channels.<channel>.execApprovals: makes that channel act as a native approval client for exec approvals.
- If the chat already supports commands and replies, same-chat /approve works through the shared path.
- If a supported native channel can infer approvers safely, OpenClaw now auto-enables DM-first native approvals when channels.<channel>.execApprovals.enabled is unset or "auto".
- When native approval cards/buttons are available, that native UI is the primary path; the agent should only include a manual /approve command if the tool result says chat approvals are unavailable or manual approval is the only path.
- Use approvals.exec only when prompts must also be forwarded to other chats or explicit ops rooms.
- Use channels.<channel>.execApprovals.target: "channel" or "both" only when you explicitly want approval prompts posted back into the originating room/topic.
- Plugin approvals are separate again: they use same-chat /approve by default, optional approvals.plugin forwarding, and only some native channels keep plugin-approval-native handling on top.
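Putting the keys above together, a hedged sketch (the channel name and target value are illustrative, not a recommendation):

```json5
{
  channels: {
    discord: {
      execApprovals: {
        // "auto" lets OpenClaw enable DM-first native approvals
        // when it can infer approvers safely.
        enabled: "auto",
        // Only set "channel"/"both" if you want prompts posted
        // back into the originating room/topic.
        target: "channel",
      },
    },
  },
}
```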
What runtime do I need?
pnpm is recommended. Bun is not recommended for the Gateway.
Does it run on Raspberry Pi?
Any tips for Raspberry Pi installs?
- Use a 64-bit OS and keep Node >= 22.
- Prefer the hackable (git) install so you can see logs and update fast.
- Start without channels/skills, then add them one by one.
- If you hit weird binary issues, it is usually an ARM compatibility problem.
It is stuck on wake up my friend / onboarding will not hatch. What now?
- Restart the Gateway:
- Check status + auth:
- If it still hangs, run:
Can I migrate my setup to a new machine (Mac mini) without redoing onboarding?
- Install OpenClaw on the new machine.
- Copy $OPENCLAW_STATE_DIR (default: ~/.openclaw) from the old machine.
- Copy your workspace (default: ~/.openclaw/workspace).
- Run openclaw doctor and restart the Gateway service.

Sessions live under ~/.openclaw/ (for example ~/.openclaw/agents/<agentId>/sessions/).

Related: Migrating, Where things live on disk, Agent workspace, Doctor, Remote mode.
Where do I see what is new in the latest version?
Cannot access docs.openclaw.ai (SSL error)
This is typically Xfinity's Advanced Security blocking docs.openclaw.ai. Disable it or allowlist docs.openclaw.ai, then retry.
Please help us unblock it by reporting here: https://spa.xfinity.com/check_url_status.

If you still can't reach the site, the docs are mirrored on GitHub:
https://github.com/openclaw/openclaw/tree/main/docs
Difference between stable and beta
- latest = stable
- beta = early build for testing

Beta builds are promoted to latest. Maintainers can also publish straight to latest when needed. That's why beta and stable can point at the same version after promotion.

See what changed:
https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md

For install one-liners and the difference between beta and dev, see the accordion below.
How do I install the beta version and what is the difference between beta and dev?
Beta uses the npm dist-tag beta (may match latest after promotion). Dev is the moving head of main (git); when published, it uses the npm dist-tag dev.

One-liners (macOS/Linux):
How do I try the latest bits?
- Dev channel (git checkout): tracks the main branch and updates from source.
- Hackable install (from the installer site):
How long does install and onboarding usually take?
- Install: 2-5 minutes
- Onboarding: 5-15 minutes depending on how many channels/models you configure
Installer stuck? How do I get more feedback?
Windows install says git not found or openclaw not recognized
- Install Git for Windows and make sure git is on your PATH.
- Close and reopen PowerShell, then re-run the installer.

If openclaw is not recognized, your npm global bin folder is not on PATH:

- Check the path:
- Add that directory to your user PATH (no \bin suffix needed on Windows; on most systems it is %AppData%\npm).
- Close and reopen PowerShell after updating PATH.
Windows exec output shows garbled Chinese text - what should I do?
- system.run/exec output renders Chinese as mojibake
- The same command looks fine in another terminal profile
The docs did not answer my question - how do I get a better answer?
How do I install OpenClaw on Linux?
- Linux quick path + service install: Linux.
- Full walkthrough: Getting Started.
- Installer + updates: Install & updates.
How do I install OpenClaw on a VPS?
Where are the cloud/VPS install guides?
- VPS hosting (all providers in one place)
- Fly.io
- Hetzner
- exe.dev
Can I ask OpenClaw to update itself?
What does onboarding actually do?
openclaw onboard is the recommended setup path. In local mode it walks you through:

- Model/auth setup (provider OAuth, API keys, Anthropic legacy setup-token, plus local model options such as LM Studio)
- Workspace location + bootstrap files
- Gateway settings (bind/port/auth/tailscale)
- Channels (WhatsApp, Telegram, Discord, Mattermost, Signal, iMessage, plus bundled channel plugins like QQ Bot)
- Daemon install (LaunchAgent on macOS; systemd user unit on Linux/WSL2)
- Health checks and skills selection
Do I need a Claude or OpenAI subscription to run this?
- Anthropic API key: normal Anthropic API billing
- Claude subscription auth in OpenClaw: Anthropic told OpenClaw users on April 4, 2026 at 12:00 PM PT / 8:00 PM BST that this requires Extra Usage billed separately from the subscription
claude -p --append-system-prompt ... can hit the same Extra Usage guard when the appended prompt identifies OpenClaw, while the same prompt string does not reproduce that block on the Anthropic SDK + API-key path. OpenAI Codex OAuth is explicitly supported for external tools like OpenClaw.

OpenClaw also supports other hosted subscription-style options including Qwen Cloud Coding Plan, MiniMax Coding Plan, and Z.AI / GLM Coding Plan.

Docs: Anthropic, OpenAI, Qwen Cloud, MiniMax, GLM Models, Local models, Models.
Can I use Claude Max subscription without an API key?
Do you support Claude subscription auth (Claude Pro or Max)?
- Anthropic in OpenClaw with a subscription means Extra Usage.
- Anthropic in OpenClaw without that path means an API key.

Anthropic has blocked claude -p --append-system-prompt ... usage when the appended prompt identifies OpenClaw, while the same prompt string did not reproduce on the Anthropic SDK + API-key path.

For production or multi-user workloads, Anthropic API key auth is the safer, recommended choice. If you want other subscription-style hosted options in OpenClaw, see OpenAI, Qwen / Model Cloud, MiniMax, and GLM Models.
Why am I seeing HTTP 429 rate_limit_error from Anthropic?
If the error says Extra usage is required for long context requests, the request is trying to use Anthropic's 1M context beta (context1m: true). That only works when your credential is eligible for long-context billing (API key billing or the OpenClaw Claude-login path with Extra Usage enabled).

Tip: set a fallback model so OpenClaw can keep replying while a provider is rate-limited. See Models, OAuth, and /gateway/troubleshooting#anthropic-429-extra-usage-required-for-long-context.
Is AWS Bedrock supported?
Yes. Discovery can enable the amazon-bedrock provider automatically; otherwise you can explicitly enable plugins.entries.amazon-bedrock.config.discovery.enabled or add a manual provider entry. See Amazon Bedrock and Model providers. If you prefer a managed key flow, an OpenAI-compatible proxy in front of Bedrock is still a valid option.
How does Codex auth work?
openai-codex/gpt-5.4 when appropriate. See Model providers and Onboarding (CLI).
Do you support OpenAI subscription auth (Codex OAuth)?
How do I set up Gemini CLI OAuth?
Gemini CLI OAuth is not configured in openclaw.json. Use the Gemini API provider instead:

- Enable the plugin: openclaw plugins enable google
- Run openclaw onboard --auth-choice gemini-api-key
- Set a Google model such as google/gemini-3.1-pro-preview
Is a local model OK for casual chats?
How do I keep hosted model traffic in a specific region?
Use models.mode: "merge" so fallbacks stay available while respecting the regioned provider you select.
Do I have to buy a Mac Mini to install this?
Do I need a Mac mini for iMessage support?
- Run the Gateway on Linux/VPS, and run the BlueBubbles server on any Mac signed into Messages.
- Run everything on the Mac if you want the simplest single-machine setup.
If I buy a Mac mini to run OpenClaw, can I connect it to my MacBook Pro?
Yes - pair the MacBook Pro as a node so the agent can use system.run on that device.

Common pattern:

- Gateway on the Mac mini (always-on).
- MacBook Pro runs the macOS app or a node host and pairs to the Gateway.
- Use openclaw nodes status / openclaw nodes list to see it.
Can I use Bun?
Telegram: what goes in allowFrom?
channels.telegram.allowFrom is the human sender's Telegram user ID (numeric). It is not the bot username. Onboarding accepts @username input and resolves it to a numeric ID, but OpenClaw authorization uses numeric IDs only.

Safer (no third-party bot):

- DM your bot, then run openclaw logs --follow and read from.id.
- DM your bot, then call https://api.telegram.org/bot<bot_token>/getUpdates and read message.from.id.

Alternative (third-party bot):

- DM @userinfobot or @getidsbot.
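A sketch of the resulting config (the numeric ID is a placeholder):

```json5
{
  channels: {
    telegram: {
      // Numeric user IDs of allowed human senders - not the bot's @username.
      allowFrom: [123456789],
    },
  },
}
```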
Can multiple people use one WhatsApp number with different OpenClaw instances?
Yes. Route each person's DM (kind: "direct", sender E.164 like +15551234567) to a different agentId, so each person gets their own workspace and session store. Replies still come from the same WhatsApp account, and DM access control (channels.whatsapp.dmPolicy / channels.whatsapp.allowFrom) is global per WhatsApp account. See Multi-Agent Routing and WhatsApp.
Can I run a "fast chat" agent and an "Opus for coding" agent?
Does Homebrew work on Linux?
Yes. OpenClaw's service PATH includes /home/linuxbrew/.linuxbrew/bin (or your brew prefix) so brew-installed tools resolve in non-login shells. Recent builds also prepend common user bin dirs on Linux systemd services (for example ~/.local/bin, ~/.npm-global/bin, ~/.local/share/pnpm, ~/.bun/bin) and honor PNPM_HOME, NPM_CONFIG_PREFIX, BUN_INSTALL, VOLTA_HOME, ASDF_DATA_DIR, NVM_DIR, and FNM_DIR when set.
Difference between the hackable git install and npm install
- Hackable (git) install: full source checkout, editable, best for contributors. You run builds locally and can patch code/docs.
- npm install: global CLI install, no repo, best for “just run it.” Updates come from npm dist-tags.
Can I switch between npm and git installs later?
Yes - your state dir (~/.openclaw) and workspace (~/.openclaw/workspace) stay untouched.

From npm to git: --repair in automation).

Backup tips: see Backup strategy.
Should I run the Gateway on my laptop or a VPS?
Laptop:
- Pros: no server cost, direct access to local files, live browser window.
- Cons: sleep/network drops = disconnects, OS updates/reboots interrupt, must stay awake.

VPS:
- Pros: always-on, stable network, no laptop sleep issues, easier to keep running.
- Cons: often run headless (use screenshots), remote file access only, you must SSH for updates.
How important is it to run OpenClaw on a dedicated machine?
- Dedicated host (VPS/Mac mini/Pi): always-on, fewer sleep/reboot interruptions, cleaner permissions, easier to keep running.
- Shared laptop/desktop: totally fine for testing and active use, but expect pauses when the machine sleeps or updates.
What are the minimum VPS requirements and recommended OS?
- Absolute minimum: 1 vCPU, 1GB RAM, ~500MB disk.
- Recommended: 1-2 vCPU, 2GB RAM or more for headroom (logs, media, multiple channels). Node tools and browser automation can be resource hungry.
Can I run OpenClaw in a VM and what are the requirements?
- Absolute minimum: 1 vCPU, 1GB RAM.
- Recommended: 2GB RAM or more if you run multiple channels, browser automation, or media tools.
- OS: Ubuntu LTS or another modern Debian/Ubuntu.
What is OpenClaw?
What is OpenClaw, in one paragraph?
Value proposition
- Your devices, your data: run the Gateway wherever you want (Mac, Linux, VPS) and keep the workspace + session history local.
- Real channels, not a web sandbox: WhatsApp/Telegram/Slack/Discord/Signal/iMessage/etc, plus mobile voice and Canvas on supported platforms.
- Model-agnostic: use Anthropic, OpenAI, MiniMax, OpenRouter, etc., with per-agent routing and failover.
- Local-only option: run local models so all data can stay on your device if you want.
- Multi-agent routing: separate agents per channel, account, or task, each with its own workspace and defaults.
- Open source and hackable: inspect, extend, and self-host without vendor lock-in.
I just set it up - what should I do first?
- Build a website (WordPress, Shopify, or a simple static site).
- Prototype a mobile app (outline, screens, API plan).
- Organize files and folders (cleanup, naming, tagging).
- Connect Gmail and automate summaries or follow ups.
What are the top five everyday use cases for OpenClaw?
- Personal briefings: summaries of inbox, calendar, and news you care about.
- Research and drafting: quick research, summaries, and first drafts for emails or docs.
- Reminders and follow ups: cron or heartbeat driven nudges and checklists.
- Browser automation: filling forms, collecting data, and repeating web tasks.
- Cross device coordination: send a task from your phone, let the Gateway run it on a server, and get the result back in chat.
Can OpenClaw help with lead gen, outreach, ads, and blogs for a SaaS?
What are the advantages vs Claude Code for web development?
- Persistent memory + workspace across sessions
- Multi-platform access (WhatsApp, Telegram, TUI, WebChat)
- Tool orchestration (browser, files, scheduling, hooks)
- Always-on Gateway (run on a VPS, interact from anywhere)
- Nodes for local browser/screen/camera/exec
Skills and automation
How do I customize skills without keeping the repo dirty?
Keep your copies in ~/.openclaw/skills/<name>/SKILL.md (or add a folder via skills.load.extraDirs in ~/.openclaw/openclaw.json). Precedence is <workspace>/skills → <workspace>/.agents/skills → ~/.agents/skills → ~/.openclaw/skills → bundled → skills.load.extraDirs, so managed overrides still win over bundled skills without touching git. If you need the skill installed globally but only visible to some agents, keep the shared copy in ~/.openclaw/skills and control visibility with agents.defaults.skills and agents.list[].skills. Only upstream-worthy edits should live in the repo and go out as PRs.
Can I load skills from a custom folder?
Yes - set skills.load.extraDirs in ~/.openclaw/openclaw.json (lowest precedence). Default precedence is <workspace>/skills → <workspace>/.agents/skills → ~/.agents/skills → ~/.openclaw/skills → bundled → skills.load.extraDirs. clawhub installs into ./skills by default, which OpenClaw treats as <workspace>/skills on the next session. If the skill should only be visible to certain agents, pair that with agents.defaults.skills or agents.list[].skills.
How can I use different models for different tasks?
- Cron jobs: isolated jobs can set a model override per job.
- Sub-agents: route tasks to separate agents with different default models.
- On-demand switch: use /model to switch the current session model at any time.
The bot freezes while doing heavy work. How do I offload that?
Offload heavy work to sub-agents via /subagents. Use /status in chat to see what the Gateway is doing right now (and whether it is busy).

Token tip: long tasks and sub-agents both consume tokens. If cost is a concern, set a cheaper model for sub-agents via agents.defaults.subagents.model.

Docs: Sub-agents, Background Tasks.
How do thread-bound subagent sessions work on Discord?
- Spawn with sessions_spawn using thread: true (and optionally mode: "session" for persistent follow-up).
- Or manually bind with /focus <target>.
- Use /agents to inspect binding state.
- Use /session idle <duration|off> and /session max-age <duration|off> to control auto-unfocus.
- Use /unfocus to detach the thread.

Config:

- Global defaults: session.threadBindings.enabled, session.threadBindings.idleHours, session.threadBindings.maxAgeHours.
- Discord overrides: channels.discord.threadBindings.enabled, channels.discord.threadBindings.idleHours, channels.discord.threadBindings.maxAgeHours.
- Auto-bind on spawn: set channels.discord.threadBindings.spawnSubagentSessions: true.
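Putting the keys above together, a hedged example (the hour values are illustrative):

```json5
{
  session: {
    threadBindings: {
      enabled: true,
      idleHours: 24,     // auto-unfocus after 24h idle
      maxAgeHours: 168,  // hard cap regardless of activity
    },
  },
  channels: {
    discord: {
      threadBindings: {
        // Bind newly spawned subagent sessions to their Discord thread.
        spawnSubagentSessions: true,
      },
    },
  },
}
```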
A subagent finished, but the completion update went to the wrong place or never posted. What should I check?
- Completion-mode subagent delivery prefers any bound thread or conversation route when one exists.
- If the completion origin only carries a channel, OpenClaw falls back to the requester session's stored route (lastChannel/lastTo/lastAccountId) so direct delivery can still succeed.
- If neither a bound route nor a usable stored route exists, direct delivery can fail and the result falls back to queued session delivery instead of posting immediately to chat.
- Invalid or stale targets can still force queue fallback or final delivery failure.
- If the child's last visible assistant reply is the exact silent token NO_REPLY/no_reply, or exactly ANNOUNCE_SKIP, OpenClaw intentionally suppresses the announce instead of posting stale earlier progress.
- If the child timed out after only tool calls, the announce can collapse that into a short partial-progress summary instead of replaying raw tool output.
Cron or reminders do not fire. What should I check?
- Confirm cron is enabled (cron.enabled) and OPENCLAW_SKIP_CRON is not set.
- Check the Gateway is running 24/7 (no sleep/restarts).
- Verify timezone settings for the job (--tz vs host timezone).
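The first bullet in config form, as a minimal sketch:

```json5
{
  cron: {
    // Must be true for scheduled jobs to fire; also make sure the
    // service environment does not set OPENCLAW_SKIP_CRON.
    enabled: true,
  },
}
```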
Cron fired, but nothing was sent to the channel. Why?
- --no-deliver / delivery.mode: "none" means no external message is expected.
- Missing or invalid announce target (channel/to) means the runner skipped outbound delivery.
- Channel auth failures (unauthorized, Forbidden) mean the runner tried to deliver but credentials blocked it.
- A silent isolated result (NO_REPLY/no_reply only) is treated as intentionally non-deliverable, so the runner also suppresses queued fallback delivery.
--no-deliver keeps that result internal; it does not let the agent send directly with the message tool instead.

Debug:
Why did an isolated cron run switch models or retry once?
If the session switched models mid-run, the job retries once after a LiveSessionModelSwitchError. The retry keeps the switched provider/model, and if the switch carried a new auth profile override, cron persists that too before retrying.

Related selection rules:

- Gmail hook model override wins first when applicable.
- Then per-job model.
- Then any stored cron-session model override.
- Then the normal agent/default model selection.
How do I install skills on Linux?
Use the openclaw skills commands or drop skills into your workspace. The macOS Skills UI isn't available on Linux. Browse skills at https://clawhub.ai.

openclaw skills install writes into the active workspace skills/ directory. Install the separate clawhub CLI only if you want to publish or sync your own skills. For shared installs across agents, put the skill under ~/.openclaw/skills and use agents.defaults.skills or agents.list[].skills if you want to narrow which agents can see it.
Can OpenClaw run tasks on a schedule or continuously in the background?
- Cron jobs for scheduled or recurring tasks (persist across restarts).
- Heartbeat for “main session” periodic checks.
- Isolated jobs for autonomous agents that post summaries or deliver to chats.
Can I run Apple macOS-only skills from Linux?
Skill gating uses metadata.openclaw.os plus required binaries, and skills only appear in the system prompt when they are eligible on the Gateway host. On Linux, darwin-only skills (like apple-notes, apple-reminders, things-mac) will not load unless you override the gating.

You have three supported patterns:

Option A - run the Gateway on a Mac (simplest).
Run the Gateway where the macOS binaries exist, then connect from Linux in remote mode or over Tailscale. The skills load normally because the Gateway host is macOS.

Option B - use a macOS node (no SSH).
Run the Gateway on Linux, pair a macOS node (menubar app), and set Node Run Commands to "Always Ask" or "Always Allow" on the Mac. OpenClaw can treat macOS-only skills as eligible when the required binaries exist on the node. The agent runs those skills via the nodes tool. If you choose "Always Ask", approving "Always Allow" in the prompt adds that command to the allowlist.

Option C - proxy macOS binaries over SSH (advanced).
Keep the Gateway on Linux, but make the required CLI binaries resolve to SSH wrappers that run on a Mac. Then override the skill to allow Linux so it stays eligible.

- Create an SSH wrapper for the binary (example: memo for Apple Notes).
- Put the wrapper on PATH on the Linux host (for example ~/bin/memo).
- Override the skill metadata (workspace or ~/.openclaw/skills) to allow Linux.
- Start a new session so the skills snapshot refreshes.
Do you have a Notion or HeyGen integration?
- Custom skill / plugin: best for reliable API access (Notion/HeyGen both have APIs).
- Browser automation: works without code but is slower and more fragile.
- One Notion page per client (context + preferences + active work).
- Ask the agent to fetch that page at the start of a session.
Custom skills go in your workspace skills/ directory. For shared skills across agents, place them in ~/.openclaw/skills/<name>/SKILL.md. If only some agents should see a shared install, configure agents.defaults.skills or agents.list[].skills. Some skills expect binaries installed via Homebrew; on Linux that means Linuxbrew (see the Homebrew Linux FAQ entry above). See Skills, Skills config, and ClawHub.
How do I use my existing signed-in Chrome with OpenClaw?
Use the user browser profile, which attaches through Chrome DevTools MCP. Limits of existing-session / user:

- actions are ref-driven, not CSS-selector driven
- uploads require ref/inputRef and currently support one file at a time
- response body capture, PDF export, download interception, and batch actions still need a managed browser or raw CDP profile
Sandboxing and memory
Is there a dedicated sandboxing doc?
Docker feels limited - how do I enable full features?
The default image runs as the node user, so it does not include system packages, Homebrew, or bundled browsers. For a fuller setup:

- Persist /home/node with OPENCLAW_HOME_VOLUME so caches survive.
- Bake system deps into the image with OPENCLAW_DOCKER_APT_PACKAGES.
- Install Playwright browsers via the bundled CLI: node /app/node_modules/playwright-core/cli.js install chromium
- Set PLAYWRIGHT_BROWSERS_PATH and ensure the path is persisted.
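The bullets above as environment variables, a sketch only: the volume name, package list, and browsers path are assumptions, and how your container setup consumes them is deployment-specific.

```shell
# Persist /home/node so caches survive container restarts (name is a placeholder).
export OPENCLAW_HOME_VOLUME="openclaw-home"
# Extra apt packages to bake into the image (list is illustrative).
export OPENCLAW_DOCKER_APT_PACKAGES="ffmpeg jq"
# Keep Playwright browsers somewhere under the persisted home volume.
export PLAYWRIGHT_BROWSERS_PATH="/home/node/.cache/ms-playwright"
```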
Can I keep DMs personal but make groups public/sandboxed with one agent?
Yes - set agents.defaults.sandbox.mode: "non-main" so group/channel sessions (non-main keys) run in Docker, while the main DM session stays on-host. Then restrict what tools are available in sandboxed sessions via tools.sandbox.tools.

Setup walkthrough + example config: Groups: personal DMs + public groups
Key config reference: Gateway configuration
How do I bind a host folder into the sandbox?
Set agents.defaults.sandbox.docker.binds to ["host:path:mode"] (e.g., "/home/user/src:/src:ro"). Global + per-agent binds merge; per-agent binds are ignored when scope: "shared". Use :ro for anything sensitive, and remember binds bypass the sandbox filesystem walls.

OpenClaw validates bind sources against both the normalized path and the canonical path resolved through the deepest existing ancestor. That means symlink-parent escapes still fail closed even when the last path segment does not exist yet, and allowed-root checks still apply after symlink resolution.

See Sandboxing and Sandbox vs Tool Policy vs Elevated for examples and safety notes.
How does memory work?
- Daily notes in memory/YYYY-MM-DD.md
- Curated long-term notes in MEMORY.md (main/private sessions only)
Memory keeps forgetting things. How do I make it stick?
Long-term facts belong in MEMORY.md; short-term context goes into memory/YYYY-MM-DD.md.

This is still an area we are improving. It helps to remind the model to store memories; it will know what to do. If it keeps forgetting, verify the Gateway is using the same workspace on every run.

Docs: Memory, Agent workspace.
Does memory persist forever? What are the limits?
Does semantic memory search require an OpenAI API key?
No - an OpenAI key is just one option (OPENAI_API_KEY or models.providers.openai.apiKey).

If you don't set a provider explicitly, OpenClaw auto-selects a provider when it can resolve an API key (auth profiles, models.providers.*.apiKey, or env vars). It prefers OpenAI if an OpenAI key resolves, otherwise Gemini if a Gemini key resolves, then Voyage, then Mistral. If no remote key is available, memory search stays disabled until you configure it. If you have a local model path configured and present, OpenClaw prefers local. Ollama is supported when you explicitly set memorySearch.provider = "ollama".

If you'd rather stay local, set memorySearch.provider = "local" (and optionally memorySearch.fallback = "none"). If you want Gemini embeddings, set memorySearch.provider = "gemini" and provide GEMINI_API_KEY (or memorySearch.remote.apiKey). We support OpenAI, Gemini, Voyage, Mistral, Ollama, or local embedding models - see Memory for the setup details.

Where things live on disk
Is all data used with OpenClaw saved locally?
- Local by default: sessions, memory files, config, and workspace live on the Gateway host (~/.openclaw + your workspace directory).
- Remote by necessity: messages you send to model providers (Anthropic/OpenAI/etc.) go to their APIs, and chat platforms (WhatsApp/Telegram/Slack/etc.) store message data on their servers.
- You control the footprint: using local models keeps prompts on your machine, but channel traffic still goes through the channel's servers.
Where does OpenClaw store its data?
$OPENCLAW_STATE_DIR (default: ~/.openclaw):

| Path | Purpose |
|---|---|
| $OPENCLAW_STATE_DIR/openclaw.json | Main config (JSON5) |
| $OPENCLAW_STATE_DIR/credentials/oauth.json | Legacy OAuth import (copied into auth profiles on first use) |
| $OPENCLAW_STATE_DIR/agents/<agentId>/agent/auth-profiles.json | Auth profiles (OAuth, API keys, and optional keyRef/tokenRef) |
| $OPENCLAW_STATE_DIR/secrets.json | Optional file-backed secret payload for file SecretRef providers |
| $OPENCLAW_STATE_DIR/agents/<agentId>/agent/auth.json | Legacy compatibility file (static api_key entries scrubbed) |
| $OPENCLAW_STATE_DIR/credentials/ | Provider state (e.g. whatsapp/<accountId>/creds.json) |
| $OPENCLAW_STATE_DIR/agents/ | Per-agent state (agentDir + sessions) |
| $OPENCLAW_STATE_DIR/agents/<agentId>/sessions/ | Conversation history & state (per agent) |
| $OPENCLAW_STATE_DIR/agents/<agentId>/sessions/sessions.json | Session metadata (per agent) |
Legacy files may still live under ~/.openclaw/agent/* (migrated by openclaw doctor). Your workspace (AGENTS.md, memory files, skills, etc.) is separate and configured via agents.defaults.workspace (default: ~/.openclaw/workspace).
Where should AGENTS.md / SOUL.md / USER.md / MEMORY.md live?
- Workspace (per agent): AGENTS.md, SOUL.md, IDENTITY.md, USER.md, MEMORY.md (or legacy fallback memory.md when MEMORY.md is absent), memory/YYYY-MM-DD.md, optional HEARTBEAT.md.
- State dir (~/.openclaw): config, channel/provider state, auth profiles, sessions, logs, and shared skills (~/.openclaw/skills).
The default workspace is ~/.openclaw/workspace, configurable via agents.defaults.workspace.
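As a sketch (the path shown is the default named above):

```json5
{
  agents: {
    defaults: {
      workspace: "~/.openclaw/workspace"
    }
  }
}
```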
Recommended backup strategy
Back up ~/.openclaw (credentials, sessions, tokens, or encrypted secrets payloads). If you need a full restore, back up both the workspace and the state directory separately (see the migration question above). Docs: Agent workspace.
How do I completely uninstall OpenClaw?
Can agents work outside the workspace?
Yes, unless restricted by agents.defaults.sandbox or per-agent sandbox settings. If you want a repo to be the default working directory, point that agent's workspace to the repo root. The OpenClaw repo is just source code; keep the workspace separate unless you intentionally want the agent to work inside it. Example (repo as default cwd):
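A hypothetical per-agent sketch of that pattern - the agents.list entry shape and the "coder" id are assumptions for illustration:

```json5
{
  agents: {
    list: [
      {
        id: "coder",               // hypothetical agent id
        workspace: "~/src/my-repo" // repo root becomes the default cwd
      }
    ]
  }
}
```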
Remote mode: where is the session store?
Config basics
What format is the config? Where is it?
JSON5, at $OPENCLAW_CONFIG_PATH (default: ~/.openclaw/openclaw.json).
I set gateway.bind: "lan" (or "tailnet") and now nothing listens / the UI says unauthorized
Why do I need a token on localhost now?
New setups provision a gateway.auth.token, so local WS clients must authenticate. This blocks other local processes from calling the Gateway. If you prefer a different auth path, you can explicitly choose password mode (or, for non-loopback identity-aware reverse proxies, trusted-proxy). If you really want open loopback, set gateway.auth.mode: "none" explicitly in your config. Doctor can generate a token for you any time: openclaw doctor --generate-gateway-token.
Do I have to restart after changing config?
Usually not: gateway.reload.mode: "hybrid" (default) hot-applies safe changes and restarts for critical ones; oneshot, restart, and off are also supported.
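For example (keys as named above):

```json5
{
  gateway: {
    reload: {
      mode: "hybrid" // or "oneshot", "restart", "off"
    }
  }
}
```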
How do I disable funny CLI taglines?
Set cli.banner.taglineMode in config:
- off: hides tagline text but keeps the banner title/version line.
- default: uses "All your chats, one OpenClaw." every time.
- random: rotating funny/seasonal taglines (default behavior).
If you want no banner at all, set env OPENCLAW_HIDE_BANNER=1.
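For example:

```json5
{
  cli: {
    banner: {
      taglineMode: "off" // "default" and "random" are the other modes
    }
  }
}
```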
How do I enable web search (and web fetch)?
web_fetch works without an API key. web_search depends on your selected
provider:
- API-backed providers such as Brave, Exa, Firecrawl, Gemini, Grok, Kimi, MiniMax Search, Perplexity, and Tavily require their normal API key setup.
- Ollama Web Search is key-free, but it uses your configured Ollama host and requires ollama signin.
- DuckDuckGo is key-free, but it is an unofficial HTML-based integration.
- SearXNG is key-free/self-hosted; configure SEARXNG_BASE_URL or plugins.entries.searxng.config.webSearch.baseUrl.
Run openclaw configure --section web and choose a provider.
Environment alternatives:
- Brave: BRAVE_API_KEY
- Exa: EXA_API_KEY
- Firecrawl: FIRECRAWL_API_KEY
- Gemini: GEMINI_API_KEY
- Grok: XAI_API_KEY
- Kimi: KIMI_API_KEY or MOONSHOT_API_KEY
- MiniMax Search: MINIMAX_CODE_PLAN_KEY, MINIMAX_CODING_API_KEY, or MINIMAX_API_KEY
- Perplexity: PERPLEXITY_API_KEY or OPENROUTER_API_KEY
- SearXNG: SEARXNG_BASE_URL
- Tavily: TAVILY_API_KEY
Provider-specific settings live under plugins.entries.<plugin>.config.webSearch.*. Legacy tools.web.search.* provider paths still load temporarily for compatibility, but they should not be used for new configs. Firecrawl web-fetch fallback config lives under plugins.entries.firecrawl.config.webFetch.*.

Notes:
- If you use allowlists, add web_search/web_fetch/x_search or group:web.
- web_fetch is enabled by default (unless explicitly disabled).
- If tools.web.fetch.provider is omitted, OpenClaw auto-detects the first ready fetch fallback provider from available credentials. Today the bundled provider is Firecrawl.
- Daemons read env vars from ~/.openclaw/.env (or the service environment).
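A sketch of that global env file (keys from the list above; values are placeholders):

```
# ~/.openclaw/.env
BRAVE_API_KEY=your-brave-key
TAVILY_API_KEY=your-tavily-key
```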
config.apply wiped my config. How do I recover and avoid this?
config.apply replaces the entire config. If you send a partial object, everything else is removed.

Recover:
- Restore from backup (git or a copied ~/.openclaw/openclaw.json).
- If you have no backup, re-run openclaw doctor and reconfigure channels/models.
- If this was unexpected, file a bug and include your last known config or any backup.
- A local coding agent can often reconstruct a working config from logs or history.

Avoid it next time:
- Use openclaw config set for small changes.
- Use openclaw configure for interactive edits.
- Use config.schema.lookup first when you are not sure about an exact path or field shape; it returns a shallow schema node plus immediate child summaries for drill-down.
- Use config.patch for partial RPC edits; keep config.apply for full-config replacement only.
- If you are using the owner-only gateway tool from an agent run, it will still reject writes to tools.exec.ask / tools.exec.security (including legacy tools.bash.* aliases that normalize to the same protected exec paths).
How do I run a central Gateway with specialized workers across devices?
- Gateway (central): owns channels (Signal/WhatsApp), routing, and sessions.
- Nodes (devices): Macs/iOS/Android connect as peripherals and expose local tools (system.run, canvas, camera).
- Agents (workers): separate brains/workspaces for special roles (e.g. "Hetzner ops", "Personal data").
- Sub-agents: spawn background work from a main agent when you want parallelism.
- TUI: connect to the Gateway and switch agents/sessions.
Can the OpenClaw browser run headless?
Yes, but the default is false (headful). Headless is more likely to trigger anti-bot checks on some sites. See Browser. Headless uses the same Chromium engine and works for most automation (forms, clicks, scraping, logins). The main differences:
- No visible browser window (use screenshots if you need visuals).
- Some sites are stricter about automation in headless mode (CAPTCHAs, anti-bot). For example, X/Twitter often blocks headless sessions.
How do I use Brave for browser control?
Set browser.executablePath to your Brave binary (or any Chromium-based browser) and restart the Gateway.
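For example (a typical macOS Brave path; adjust for your OS):

```json5
{
  browser: {
    executablePath: "/Applications/Brave Browser.app/Contents/MacOS/Brave Browser"
  }
}
```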
See the full config examples in Browser.

Remote gateways and nodes
How do commands propagate between Telegram, the gateway, and nodes?
node.* → Node → Gateway → Telegram. Nodes don't see inbound provider traffic; they only receive node RPC calls.
How can my agent access my computer if the Gateway is hosted remotely?
node.* tools (screen, camera, system) on your local machine over the Gateway WebSocket.Typical setup:- Run the Gateway on the always-on host (VPS/home server).
- Put the Gateway host + your computer on the same tailnet.
- Ensure the Gateway WS is reachable (tailnet bind or SSH tunnel).
- Open the macOS app locally and connect in Remote over SSH mode (or direct tailnet) so it can register as a node.
- Approve the node on the Gateway.

Once approved, the agent can run system.run on that machine. Only pair devices you trust, and review Security. Docs: Nodes, Gateway protocol, macOS remote mode, Security.
Tailscale is connected but I get no replies. What now?
- Gateway is running: openclaw gateway status
- Gateway health: openclaw status
- Channel health: openclaw channels status
- If you use Tailscale Serve, make sure gateway.auth.allowTailscale is set correctly.
- If you connect via SSH tunnel, confirm the local tunnel is up and points at the right port.
- Confirm your allowlists (DM or group) include your account.
Can two OpenClaw instances talk to each other (local + VPS)?
Yes - use openclaw agent --message ... --deliver, targeting a chat where the other bot listens. If one bot is on a remote VPS, point your CLI at that remote Gateway via SSH/Tailscale (see Remote access). Example pattern (run from a machine that can reach the target Gateway):
Do I need separate VPSes for multiple agents?
Is there a benefit to using a node on my personal laptop instead of SSH from a VPS?
- No inbound SSH required. Nodes connect out to the Gateway WebSocket and use device pairing.
- Safer execution controls. system.run is gated by node allowlists/approvals on that laptop.
- More device tools. Nodes expose canvas, camera, and screen in addition to system.run.
- Local browser automation. Keep the Gateway on a VPS, but run Chrome locally through a node host on the laptop, or attach to local Chrome on the host via Chrome MCP.
Do nodes run a gateway service?
No - they connect out to the Gateway and react to gateway, discovery, and canvasHost changes.
Is there an API / RPC way to apply config?
Yes:
- config.schema.lookup: inspect one config subtree with its shallow schema node, matched UI hint, and immediate child summaries before writing
- config.get: fetch the current snapshot + hash
- config.patch: safe partial update (preferred for most RPC edits)
- config.apply: validate + replace the full config, then restart
- The owner-only gateway runtime tool still refuses to rewrite tools.exec.ask / tools.exec.security; legacy tools.bash.* aliases normalize to the same protected exec paths
Minimal sane config for a first install
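A hypothetical minimal sketch built only from keys referenced elsewhere in this FAQ (the model id is illustrative; most other settings have workable defaults):

```json5
{
  gateway: { port: 18789 },
  agents: {
    defaults: {
      workspace: "~/.openclaw/workspace",
      model: { primary: "anthropic/claude-opus-4-6" }
    }
  }
}
```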
How do I set up Tailscale on a VPS and connect from my Mac?
1. Install + login on the VPS.
2. Install + login on your Mac - use the Tailscale app and sign in to the same tailnet.
3. Enable MagicDNS (recommended) - in the Tailscale admin console, enable MagicDNS so the VPS has a stable name.
4. Use the tailnet hostname:
   - SSH: ssh user@your-vps.tailnet-xxxx.ts.net
   - Gateway WS: ws://your-vps.tailnet-xxxx.ts.net:18789
How do I connect a Mac node to a remote Gateway (Tailscale Serve)?
- Make sure the VPS + Mac are on the same tailnet.
- Use the macOS app in Remote mode (SSH target can be the tailnet hostname). The app will tunnel the Gateway port and connect as a node.
- Approve the node on the gateway.
Should I install on a second laptop or just add a node?
Env vars and .env loading
How does OpenClaw load environment variables?
- a local .env from the current working directory
- a global fallback .env from ~/.openclaw/.env (aka $OPENCLAW_STATE_DIR/.env)

Neither .env file overrides existing env vars. You can also define inline env vars in config (applied only if missing from the process env):
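A sketch of such an inline env block (the flat key/value shape is an assumption; the value is a placeholder):

```json5
{
  env: {
    // applied only if missing from the process env
    ANTHROPIC_API_KEY: "sk-ant-placeholder"
  }
}
```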
I started the Gateway via the service and my env vars disappeared. What now?
- Put the missing keys in ~/.openclaw/.env so they're picked up even when the service doesn't inherit your shell env.
- Enable shell import (opt-in convenience): OPENCLAW_LOAD_SHELL_ENV=1, OPENCLAW_SHELL_ENV_TIMEOUT_MS=15000.
I set COPILOT_GITHUB_TOKEN, but models status shows "Shell env: off." Why?
openclaw models status reports whether shell env import is enabled. "Shell env: off" does not mean your env vars are missing - it just means OpenClaw won't load your login shell automatically.

If the Gateway runs as a service (launchd/systemd), it won't inherit your shell environment. Fix by doing one of these:
- Put the token in ~/.openclaw/.env.
- Or enable shell import (env.shellEnv.enabled: true).
- Or add it to your config env block (applies only if missing).

Copilot auth uses COPILOT_GITHUB_TOKEN (also GH_TOKEN / GITHUB_TOKEN). See /concepts/model-providers and /environment.

Sessions and multiple chats
How do I start a fresh conversation?
Send /new or /reset as a standalone message. See Session management.
Do sessions reset automatically if I never send /new?
session.idleMinutes, but this is disabled by default (default 0).
Set it to a positive value to enable idle expiry. When enabled, the next
message after the idle period starts a fresh session id for that chat key.
This does not delete transcripts - it just starts a new session.
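For example, to expire idle sessions after two hours:

```json5
{
  session: {
    idleMinutes: 120 // 0 (default) disables idle expiry
  }
}
```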
Is there a way to make a team of OpenClaw instances (one CEO and many agents)?
Why did context get truncated mid-task? How do I prevent it?
- Ask the bot to summarize the current state and write it to a file.
- Use /compact before long tasks, and /new when switching topics.
- Keep important context in the workspace and ask the bot to read it back.
- Use sub-agents for long or parallel work so the main chat stays smaller.
- Pick a model with a larger context window if this happens often.
How do I completely reset OpenClaw but keep it installed?
- Onboarding also offers Reset if it sees an existing config. See Onboarding (CLI).
- If you used profiles (--profile / OPENCLAW_PROFILE), reset each state dir (defaults are ~/.openclaw-<profile>).
- Dev reset: openclaw gateway --dev --reset (dev-only; wipes dev config + credentials + sessions + workspace).
I am getting "context too large" errors - how do I reset or compact?
- Compact (keeps the conversation but summarizes older turns): send /compact, or /compact <instructions> to guide the summary.
- Reset (fresh session ID for the same chat key): send /new.
- Enable or tune session pruning (agents.defaults.contextPruning) to trim old tool output.
- Use a model with a larger context window.
Why am I seeing "LLM request rejected: messages.content.tool_use.input field required"?
The provider received a tool_use block without the required input. It usually means the session history is stale or corrupted (often after long threads or a tool/schema change). Fix: start a fresh session with /new (standalone message).
Why am I getting heartbeat messages every 30 minutes?
If HEARTBEAT.md exists but is effectively empty (only blank lines and markdown headers like # Heading), OpenClaw skips the heartbeat run to save API calls. If the file is missing, the heartbeat still runs and the model decides what to do. Per-agent overrides use agents.list[].heartbeat. Docs: Heartbeat.
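A hypothetical per-agent override sketch - only agents.list[].heartbeat is named above; the list-entry shape and field value are assumptions:

```json5
{
  agents: {
    list: [
      {
        id: "main",                   // hypothetical agent id
        heartbeat: { enabled: false } // field shape is an assumption
      }
    ]
  }
}
```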
Do I need to add a "bot account" to a WhatsApp group?
groupPolicy: "allowlist").If you want only you to be able to trigger group replies:How do I get the JID of a WhatsApp group?
How do I get the JID of a WhatsApp group?
Why does OpenClaw not reply in a group?
- Mention gating is on (default). You must @mention the bot (or match mentionPatterns).
- You configured channels.whatsapp.groups without "*" and the group isn't allowlisted.
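An allowlist sketch (the group JID is a placeholder; "*" would allow all groups):

```json5
{
  channels: {
    whatsapp: {
      groupPolicy: "allowlist",
      groups: ["1234567890-1234567890@g.us"]
    }
  }
}
```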
Do groups/threads share context with DMs?
How many workspaces and agents can I create?
Costs to watch:
- Disk growth: sessions + transcripts live under ~/.openclaw/agents/<agentId>/sessions/.
- Token cost: more agents means more concurrent model usage.
- Ops overhead: per-agent auth profiles, workspaces, and channel routing.

Tips:
- Keep one active workspace per agent (agents.defaults.workspace).
- Prune old sessions (delete JSONL or store entries) if disk grows.
- Use openclaw doctor to spot stray workspaces and profile mismatches.
Can I run multiple bots or chats at the same time (Slack), and how should I set that up?
- Always-on Gateway host (VPS/Mac mini).
- One agent per role (bindings).
- Slack channel(s) bound to those agents.
- Local browser via Chrome MCP or a node when needed.
Models: defaults, selection, aliases, switching
What is the "default model"?
What is the "default model"?
The default model is specified as provider/model (example: openai/gpt-5.4). If you omit the provider, OpenClaw first tries an alias, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider as a deprecated compatibility path. If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default. You should still explicitly set provider/model.
What model do you recommend?
How do I switch models without wiping my config?
- /model in chat (quick, per-session)
- openclaw models set ... (updates just model config)
- openclaw configure --section model (interactive)
- edit agents.defaults.model in ~/.openclaw/openclaw.json

Avoid config.apply with a partial object unless you intend to replace the whole config. For RPC edits, inspect with config.schema.lookup first and prefer config.patch for partial updates; the lookup payload gives you the normalized path, shallow schema docs/constraints, and immediate child summaries. If you did overwrite config, restore from backup or re-run openclaw doctor to repair. Docs: Models, Configure, Config, Doctor.
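For the config-file route, a sketch (model ids are illustrative):

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "openai/gpt-5.4",
        fallbacks: ["anthropic/claude-sonnet-4-6"]
      }
    }
  }
}
```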
Can I use self-hosted models (llama.cpp, vLLM, Ollama)?
1. Install Ollama from https://ollama.com/download
2. Pull a local model such as ollama pull glm-4.7-flash
3. If you want cloud models too, run ollama signin
4. Run openclaw onboard and choose Ollama
5. Pick Local or Cloud + Local

Notes:
- Cloud + Local gives you cloud models plus your local Ollama models
- cloud models such as kimi-k2.5:cloud do not need a local pull
- for manual switching, use openclaw models list and openclaw models set ollama/<model>
What do OpenClaw, Flawd, and Krill use for models?
- These deployments can differ and may change over time; there is no fixed provider recommendation.
- Check the current runtime setting on each gateway with openclaw models status.
- For security-sensitive/tool-enabled agents, use the strongest latest-generation model available.
How do I switch models on the fly (without restarting)?
Use the /model command as a standalone message; allowed models come from agents.defaults.models. You can list available models with /model, /model list, or /model status. /model (and /model list) shows a compact, numbered picker; select by number. /model status shows which agent is active, which auth-profiles.json file is being used, and which auth profile will be tried next. It also shows the configured provider endpoint (baseUrl) and API mode (api) when available.

How do I unpin a profile I set with @profile?

Re-run /model without the @profile suffix: /model (or send /model <default provider/model>). Use /model status to confirm which auth profile is active.
Can I use GPT 5.2 for daily tasks and Codex 5.3 for coding?
- Quick switch (per session): /model gpt-5.4 for daily tasks, /model openai-codex/gpt-5.4 for coding with Codex OAuth.
- Default + switch: set agents.defaults.model.primary to openai/gpt-5.4, then switch to openai-codex/gpt-5.4 when coding (or the other way around).
- Sub-agents: route coding tasks to sub-agents with a different default model.
Why do I see "Model ... is not allowed" and then no reply?
Why do I see "Model ... is not allowed" and then no reply?
If agents.defaults.models is set, it becomes the allowlist for /model and any session overrides. Choosing a model that isn't in that list returns an error. Fix: add the model to agents.defaults.models, remove the allowlist, or pick a model from /model list.
Why do I see "Unknown model: minimax/MiniMax-M2.7"?
1. Upgrade to a current OpenClaw release (or run from source main), then restart the gateway.
2. Make sure MiniMax is configured (wizard or JSON), or that MiniMax auth exists in env/auth profiles so the matching provider can be injected (MINIMAX_API_KEY for minimax, MINIMAX_OAUTH_TOKEN or stored MiniMax OAuth for minimax-portal).
3. Use the exact model id (case-sensitive) for your auth path: minimax/MiniMax-M2.7 or minimax/MiniMax-M2.7-highspeed for API-key setup, or minimax-portal/MiniMax-M2.7 / minimax-portal/MiniMax-M2.7-highspeed for OAuth setup.
4. Run openclaw models list and pick from the list (or /model list in chat).
Can I use MiniMax as my default and OpenAI for complex tasks?
Yes - via /model or a separate agent.

Option A: switch per session with /model.

Option B: two agents
- Agent A default: MiniMax
- Agent B default: OpenAI
- Route by agent or use /agent to switch
Are opus / sonnet / gpt built-in shortcuts?
Yes - they are default aliases (overridable via agents.defaults.models):
- opus → anthropic/claude-opus-4-6
- sonnet → anthropic/claude-sonnet-4-6
- gpt → openai/gpt-5.4
- gpt-mini → openai/gpt-5.4-mini
- gpt-nano → openai/gpt-5.4-nano
- gemini → google/gemini-3.1-pro-preview
- gemini-flash → google/gemini-3-flash-preview
- gemini-flash-lite → google/gemini-3.1-flash-lite-preview
How do I define/override model shortcuts (aliases)?
Set agents.defaults.models.<modelId>.alias. Example: /model sonnet (or /<alias> when supported) resolves to that model ID.
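For instance, a sketch that maps the sonnet alias shown above:

```json5
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-sonnet-4-6": { alias: "sonnet" }
      }
    }
  }
}
```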
How do I add models from other providers like OpenRouter or Z.AI?
Configure the provider's key; otherwise requests fail (e.g. No API key found for provider "zai").

No API key found for provider after adding a new agent

This usually means the new agent has an empty auth store. Auth is per-agent and stored in $OPENCLAW_STATE_DIR/agents/<agentId>/agent/auth-profiles.json.
- Run openclaw agents add <id> and configure auth during the wizard.
- Or copy auth-profiles.json from the main agent's agentDir into the new agent's agentDir.

Do not share one agentDir across agents; it causes auth/session collisions.

Model failover and "All models failed"
How does failover work?
- Auth profile rotation within the same provider.
- Model fallback to the next model in agents.defaults.model.fallbacks.
Rate-limit failover is keyed on provider 429 responses. OpenClaw also treats messages like Too many concurrent requests, ThrottlingException, concurrency limit reached, workers_ai ... quota limit exceeded, resource exhausted, and periodic usage-window limits (weekly/monthly limit reached) as failover-worthy rate limits.

Some billing-looking responses are not 402, and some HTTP 402 responses also stay in that transient bucket. If a provider returns explicit billing text on 401 or 403, OpenClaw can still keep that in the billing lane, but provider-specific text matchers stay scoped to the provider that owns them (for example OpenRouter Key limit exceeded). If a 402 message instead looks like a retryable usage-window or organization/workspace spend limit (daily limit reached, resets tomorrow, organization spending limit exceeded), OpenClaw treats it as rate_limit, not a long billing disable.

Context-overflow errors are different: signatures such as request_too_large, input exceeds the maximum number of tokens, input token count exceeds the maximum number of input tokens, input is too long for the model, or ollama error: context length exceeded stay on the compaction/retry path instead of advancing model fallback.

Generic server-error text is intentionally narrower than "anything with unknown/error in it". OpenClaw does treat provider-scoped transient shapes such as Anthropic bare An unknown error occurred, OpenRouter bare Provider returned error, stop-reason errors like Unhandled stop reason: error, JSON api_error payloads with transient server text (internal server error, unknown error, 520, upstream error, backend error), and provider-busy errors such as ModelNotReadyException as failover-worthy timeout/overloaded signals when the provider context matches. Generic internal fallback text like LLM request failed with an unknown error. stays conservative and does not trigger model fallback by itself.
What does "No credentials found for profile anthropic:default" mean?
The model router tried the pinned profile anthropic:default but could not find credentials for it in the expected auth store.

Fix checklist:
- Confirm where auth profiles live (new vs legacy paths)
  - Current: ~/.openclaw/agents/<agentId>/agent/auth-profiles.json
  - Legacy: ~/.openclaw/agent/* (migrated by openclaw doctor)
- Confirm your env var is loaded by the Gateway
  - If you set ANTHROPIC_API_KEY in your shell but run the Gateway via systemd/launchd, it may not inherit it. Put it in ~/.openclaw/.env or enable env.shellEnv.
- Make sure you're editing the correct agent
  - Multi-agent setups mean there can be multiple auth-profiles.json files.
- Sanity-check model/auth status
  - Use openclaw models status to see configured models and whether providers are authenticated.
- Use Claude CLI
  - Run openclaw models auth login --provider anthropic --method cli --set-default on the gateway host.
- If you want to use an API key instead
  - Put ANTHROPIC_API_KEY in ~/.openclaw/.env on the gateway host.
  - Clear any pinned order that forces a missing profile.
- Confirm you're running commands on the gateway host
  - In remote mode, auth profiles live on the gateway machine, not your laptop.
Why did it also try Google Gemini and fail?
A fallback or alias routed to a Google model without Google auth, so it failed with No API key found for provider "google". Fix: either provide Google auth, or remove/avoid Google models in agents.defaults.model.fallbacks / aliases so fallback doesn't route there.

LLM request rejected: thinking signature required (Google Antigravity)

Cause: the session history contains thinking blocks without signatures (often from an aborted/partial stream). Google Antigravity requires signatures for thinking blocks. Fix: OpenClaw now strips unsigned thinking blocks for Google Antigravity Claude. If it still appears, start a new session or set /thinking off for that agent.

Auth profiles: what they are and how to manage them
Related: /concepts/oauth (OAuth flows, token storage, multi-account patterns)
What is an auth profile?
What are typical profile IDs?
- anthropic:default (common when no email identity exists)
- anthropic:<email> for OAuth identities
- custom IDs you choose (e.g. anthropic:work)
Can I control which auth profile is tried first?
Can I control which auth profile is tried first?
Yes - set a pinned order (auth.order.<provider>). This does not store secrets; it maps IDs to provider/mode and sets rotation order.

OpenClaw may temporarily skip a profile if it's in a short cooldown (rate limits/timeouts/auth failures) or a longer disabled state (billing/insufficient credits). To inspect this, run openclaw models status --json and check auth.unusableProfiles. Tuning: auth.cooldowns.billingBackoffHours*.

Rate-limit cooldowns can be model-scoped. A profile that is cooling down for one model can still be usable for a sibling model on the same provider, while billing/disabled windows still block the whole profile.

You can also set a per-agent order override (stored in that agent's auth-profiles.json) via the CLI; status then reports excluded_by_auth_order for that profile instead of trying it silently.
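A pinned-order sketch (profile ids are examples):

```json5
{
  auth: {
    order: {
      anthropic: ["anthropic:work", "anthropic:default"]
    }
  }
}
```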
OAuth vs API key - what is the difference?
- OAuth often leverages subscription access (where applicable).
- API keys use pay-per-token billing.
Gateway: ports, “already running”, and remote mode
What port does the Gateway use?
Default 18789 - gateway.port controls the single multiplexed port for WebSocket + HTTP (Control UI, hooks, etc.). Precedence:
Why does openclaw gateway status say "Runtime: running" but "RPC probe: failed"?
The supervisor sees a running process, but the RPC probe failed. Use openclaw gateway status and trust these lines:
- Probe target: (the URL the probe actually used)
- Listening: (what's actually bound on the port)
- Last gateway error: (common root cause when the process is alive but the port isn't listening)
Why does openclaw gateway status show "Config (cli)" and "Config (service)" different?
The CLI and the service resolved different configs (--profile / OPENCLAW_STATE_DIR mismatch). Fix: reinstall or restart the service with the --profile / environment you want the service to use.
What does "another gateway instance is already listening" mean?
On startup the Gateway binds its WS endpoint (default ws://127.0.0.1:18789). If the bind fails with EADDRINUSE, it throws GatewayLockError indicating another instance is already listening. Fix: stop the other instance, free the port, or run with openclaw gateway --port <port>.
How do I run OpenClaw in remote mode (client connects to a Gateway elsewhere)?
Set gateway.mode: "remote" and point the client at a remote WebSocket URL, optionally with shared-secret remote credentials. Notes:
- openclaw gateway only starts when gateway.mode is local (or you pass the override flag).
- The macOS app watches the config file and switches modes live when these values change.
- gateway.remote.token / .password are client-side remote credentials only; they do not enable local gateway auth by themselves.
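A config sketch for remote mode (JSON5-style). The URL and secret are placeholders, and the gateway.remote.url key name is an assumption here; check Configuration for the exact schema:

```json5
{
  gateway: {
    mode: "remote",
    remote: {
      url: "wss://gateway.example.com:18789", // placeholder; assumed key name
      token: "your-shared-secret"             // client-side credential only
    }
  }
}
```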
The Control UI says "unauthorized" (or keeps reconnecting). What now?
I set gateway.bind tailnet but it cannot bind and nothing listens
A tailnet bind picks a Tailscale IP from your network interfaces (100.64.0.0/10). If the machine isn’t on Tailscale (or the interface is down), there’s nothing to bind to. Fix:
- Start Tailscale on that host (so it has a 100.x address), or
- Switch to gateway.bind: "loopback" / "lan".
tailnet is explicit. auto prefers loopback; use gateway.bind: "tailnet" when you want a tailnet-only bind.
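A config sketch of the bind setting (JSON5-style; see Configuration for the authoritative schema):

```json5
{
  gateway: {
    bind: "tailnet" // or "loopback" / "lan"; "auto" prefers loopback
  }
}
```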
Can I run multiple Gateways on the same host?
Yes, as long as each instance keeps these unique:
- OPENCLAW_CONFIG_PATH (per-instance config)
- OPENCLAW_STATE_DIR (per-instance state)
- agents.defaults.workspace (workspace isolation)
- gateway.port (unique ports)
- Use openclaw --profile <name> ... per instance (auto-creates ~/.openclaw-<name>).
- Set a unique gateway.port in each profile config (or pass --port for manual runs).
- Install a per-profile service: openclaw --profile <name> gateway install.
Service names are per-profile (macOS: ai.openclaw.<profile>, legacy com.openclaw.*; Linux: openclaw-gateway-<profile>.service; Windows: OpenClaw Gateway (<profile>)).
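As an illustration, the config for a second, hypothetical profile named "work" (state in ~/.openclaw-work) might differ from the default instance only in port and workspace; paths here are placeholders:

```json5
{
  gateway: {
    port: 18790 // unique per instance; the default profile keeps 18789
  },
  agents: {
    defaults: {
      workspace: "~/workspaces/work" // placeholder; isolate per instance
    }
  }
}
```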
Full guide: Multiple gateways.
What does "invalid handshake" / code 1008 mean?
The Gateway expects the first message on a new WebSocket connection to be a connect frame. If it receives anything else, it closes the connection with code 1008 (policy violation).

Common causes:
- You opened the HTTP URL in a browser (http://...) instead of a WS client.
- You used the wrong port or path.
- A proxy or tunnel stripped auth headers or sent a non-Gateway request.
Fix:
- Use the WS URL: ws://<host>:18789 (or wss://... if HTTPS).
- Don’t open the WS port in a normal browser tab.
- If auth is on, include the token/password in the connect frame.
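Purely as an illustration of "the first message must be a connect frame carrying credentials", a hypothetical frame might look like the sketch below. This is not the actual schema; consult the Gateway protocol docs for the real field names:

```json5
// Hypothetical shape, for illustration only - not the actual schema.
{
  type: "connect",
  token: "your-gateway-token"
}
```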
Logging and debugging
Where are logs?
The log file path is controlled by logging.file. File log level is controlled by logging.level. Console verbosity is controlled by --verbose and logging.consoleLevel.

Fastest log tail:
- macOS: $OPENCLAW_STATE_DIR/logs/gateway.log and gateway.err.log (default: ~/.openclaw/logs/...; profiles use ~/.openclaw-<profile>/logs/...)
- Linux: journalctl --user -u openclaw-gateway[-<profile>].service -n 200 --no-pager
- Windows: schtasks /Query /TN "OpenClaw Gateway (<profile>)" /V /FO LIST
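The logging knobs named above can be set in config; a sketch (JSON5-style, values are examples only):

```json5
{
  logging: {
    file: "~/.openclaw/logs/gateway.log", // example path (the default location)
    level: "debug",        // file log level
    consoleLevel: "info"   // console verbosity (also raised per-run by --verbose)
  }
}
```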
How do I start/stop/restart the Gateway service?
Use openclaw gateway restart for the managed service (launchd/systemd); if the port is stuck, openclaw gateway --force can reclaim the port. See Gateway.
I closed my terminal on Windows - how do I restart OpenClaw?
The Gateway is up but replies never arrive. What should I check?
- Model auth not loaded on the gateway host (check openclaw models status).
- Channel pairing/allowlist blocking replies (check channel config + logs).
- WebChat/Dashboard is open without the right token.
"Disconnected from gateway: no reason" - what now?
- Is the Gateway running? openclaw gateway status
- Is the Gateway healthy? openclaw status
- Does the UI have the right token? openclaw dashboard
- If remote, is the tunnel/Tailscale link up?
Telegram setMyCommands fails. What should I check?
- BOT_COMMANDS_TOO_MUCH: the Telegram menu has too many entries. OpenClaw already trims to the Telegram limit and retries with fewer commands, but some menu entries still need to be dropped. Reduce plugin/skill/custom commands, or disable channels.telegram.commands.native if you do not need the menu.
- TypeError: fetch failed, Network request for 'setMyCommands' failed!, or similar network errors: if you are on a VPS or behind a proxy, confirm outbound HTTPS is allowed and DNS works for api.telegram.org.
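If you don’t need the native menu at all, the toggle mentioned above looks roughly like this in config (JSON5-style sketch; verify the exact nesting in Configuration):

```json5
{
  channels: {
    telegram: {
      commands: {
        native: false // skip the Telegram command menu (setMyCommands) entirely
      }
    }
  }
}
```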
TUI shows no output. What should I check?
Send /status to see the current state. If you expect replies in a chat channel, make sure delivery is enabled (/deliver on). Docs: TUI, Slash commands.
How do I completely stop then start the Gateway?
ELI5: openclaw gateway restart vs openclaw gateway
- openclaw gateway restart: restarts the background service (launchd/systemd).
- openclaw gateway: runs the gateway in the foreground for this terminal session.
Use openclaw gateway restart for the managed service; use openclaw gateway when you want a one-off, foreground run.
Fastest way to get more details when something fails
Add --verbose to get more console detail. Then inspect the log file for channel auth, model routing, and RPC errors.

Media and attachments
My skill generated an image/PDF, but nothing was sent
Make sure the reply emits a MEDIA:<path-or-url> line (on its own line). See OpenClaw assistant setup and Agent send (including CLI sending). Then check that:
- The target channel supports outbound media and isn’t blocked by allowlists.
- The file is within the provider’s size limits (images are resized to max 2048px).
- tools.fs.workspaceOnly=true keeps local-path sends limited to workspace, temp/media-store, and sandbox-validated files.
- tools.fs.workspaceOnly=false lets MEDIA: send host-local files the agent can already read, but only for media plus safe document types (images, audio, video, PDF, and Office docs). Plain text and secret-like files are still blocked.
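For example, a reply that should trigger an outbound image might look like this (the path is illustrative):

```
Here is the chart you asked for.
MEDIA:/path/to/chart.png
```

The MEDIA: line must sit on its own line; the rest of the reply is delivered as normal text.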
Security and access control
Is it safe to expose OpenClaw to inbound DMs?
- Default behavior on DM-capable channels is pairing:
- Unknown senders receive a pairing code; the bot does not process their message.
- Approve with: openclaw pairing approve --channel <channel> [--account <id>] <code>
- Pending requests are capped at 3 per channel; check openclaw pairing list --channel <channel> [--account <id>] if a code didn’t arrive.
- Opening DMs publicly requires explicit opt-in (dmPolicy: "open" and allowlist "*").
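A sketch of that explicit opt-in, using the keys named above (the per-channel nesting is an assumption; see Configuration for the real schema):

```json5
{
  channels: {
    telegram: {
      dmPolicy: "open", // default is pairing; "open" accepts unknown senders
      allowlist: ["*"]  // explicit wildcard opt-in
    }
  }
}
```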
Run openclaw doctor to surface risky DM policies.
Is prompt injection only a concern for public bots?
No; any untrusted content the agent reads can carry an injection. Mitigations include:
- using a read-only or tool-disabled “reader” agent to summarize untrusted content
- keeping web_search / web_fetch / browser off for tool-enabled agents
- treating decoded file/document text as untrusted too: OpenResponses input_file and media-attachment extraction both wrap extracted text in explicit external-content boundary markers instead of passing raw file text
- sandboxing and strict tool allowlists
Should my bot have its own email, GitHub account, or phone number?
Can I give it autonomy over my text messages and is that safe?
- Keep DMs in pairing mode or a tight allowlist.
- Use a separate number or account if you want it to message on your behalf.
- Let it draft, then approve before sending.
Can I use cheaper models for personal assistant tasks?
I ran /start in Telegram but did not get a pairing code
A pairing code is only sent when dmPolicy: "pairing" is enabled; /start by itself doesn’t generate a code. Check pending requests with openclaw pairing list --channel telegram, or set dmPolicy: "open" for that account.
WhatsApp: will it message my contacts? How does pairing work?
See channels.whatsapp.selfChatMode.

Chat commands, aborting tasks, and “it will not stop”
How do I stop internal system messages from showing in chat?
Check whether verboseDefault is set to on in config. Docs: Thinking and verbose, Security.
How do I stop/cancel a running task?
Chat commands are standalone messages starting with /, but a few shortcuts (like /status) also work inline for allowlisted senders.
How do I send a Discord message from Telegram? ("Cross-context messaging denied")
Why does it feel like the bot "ignores" rapid-fire messages?
Send /queue to change modes:
- steer - new messages redirect the current task
- followup - run messages one at a time
- collect - batch messages and reply once (default)
- steer-backlog - steer now, then process backlog
- interrupt - abort current run and start fresh
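For example, to switch the current chat to one-at-a-time processing, send:

```
/queue followup
```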
Defaults: debounce:2s cap:25 drop:summarize for followup modes.

Miscellaneous
What is the default model for Anthropic with an API key?
ANTHROPIC_API_KEY (or storing an Anthropic API key in auth profiles) enables authentication, but the actual default model is whatever you configure in agents.defaults.model.primary (for example, anthropic/claude-sonnet-4-6 or anthropic/claude-opus-4-6). If you see No credentials found for profile "anthropic:default", it means the Gateway couldn’t find Anthropic credentials in the expected auth-profiles.json for the agent that’s running.

Still stuck? Ask in Discord or open a GitHub discussion.