OpenClaw Security
OpenClaw is designed with a local-first architecture that keeps your data, API keys, and conversations on your own machine. Here's how it handles security, what risks to watch for, and how Clawly adds managed security on top.
> Local-first by design
Unlike cloud-based AI assistants, OpenClaw runs entirely on your own infrastructure. Your conversation history, configuration files, and API keys never leave your machine unless you explicitly send them to an AI provider for inference.
This means there's no central server collecting your data, no account to hack, and no third-party database storing your chat logs. The tradeoff is that you're responsible for securing your own environment — your server, your network, your API keys.
> API key management
OpenClaw requires API keys for two things: the AI model provider (Anthropic, OpenAI, etc.) and the messaging channels (Telegram bot token, Discord bot token, etc.). These are stored in a local .env file on the machine running your agent.
Best practices for key management:
- ✓ Never commit keys to version control. The .env file is gitignored by default, but always double-check before pushing.
- ✓ Use per-agent API keys. Create dedicated keys for your OpenClaw instance so you can rotate or revoke them independently.
- ✓ Set spending limits. Most AI providers let you configure monthly spend caps. Enable these to prevent runaway costs if your agent encounters unexpected traffic.
- ✓ Restrict file system permissions. Ensure the .env file is readable only by the user running the agent (chmod 600).
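As a concrete sketch of that last point, the following commands (assuming the agent runs from a directory containing a file named `.env`) tighten the file's permissions and verify the result:

```shell
# Restrict the .env file so only the owning user can read or write it
chmod 600 .env

# Verify: should print 600 (i.e. rw-------)
stat -c '%a' .env    # GNU stat; on macOS use: stat -f '%Lp' .env
```

Run this once after creating the file, and again after any deploy script touches it, since some tools recreate files with looser default permissions.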
> Sandboxing & isolation
When self-hosting, OpenClaw runs as a Node.js process with whatever permissions your user account has. For production deployments, running inside Docker containers is strongly recommended to limit the blast radius of any issue.
Recommended Docker hardening flags:
docker run \
  --read-only \
  --cap-drop=ALL \
  --memory=256m \
  --cpus=0.25 \
  --security-opt=no-new-privileges \
  --user 1000:1000 \
  openclaw-agent
These flags ensure the container runs as a non-root user with a read-only filesystem, no Linux capabilities, and bounded CPU/memory — meaning even if the agent process is compromised, the damage is contained.
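If you manage the deployment with Docker Compose instead of a raw `docker run` command, the same hardening can be expressed declaratively. This is a sketch, assuming an image named `openclaw-agent`:

```yaml
services:
  agent:
    image: openclaw-agent
    read_only: true          # read-only root filesystem
    cap_drop:
      - ALL                  # drop all Linux capabilities
    security_opt:
      - no-new-privileges    # block privilege escalation via setuid binaries
    user: "1000:1000"        # run as a non-root user
    mem_limit: 256m          # bound memory
    cpus: 0.25               # bound CPU
```

The compose form is easier to keep in version control and review, which matters when the security posture lives in these flags.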
> Data privacy
OpenClaw stores conversation history and memory notes locally on disk. This data is never sent to any server other than the AI model provider you've configured. When a message comes in, OpenClaw builds a prompt from local context and sends it to the AI API — the response comes back and is stored locally.
What to be aware of:
- ✓ AI providers see your prompts. The messages you send to Claude, GPT, etc. are processed by their servers. Review each provider's data retention and training policies.
- ✓ Messaging platforms see message content. Telegram, Discord, etc. handle message delivery — they have access to the plaintext of messages in transit.
- ✓ Local storage is unencrypted by default. Conversation logs are stored as plain files. Use full-disk encryption on your server for at-rest protection.
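Full-disk encryption (e.g. LUKS) is the most robust at-rest protection, but as a lighter-weight sketch, individual log files can be encrypted with `openssl`. The filename `conversation.log` and the inline passphrase below are illustrative only; in practice the passphrase should come from a secret store, not the command line:

```shell
# Encrypt the log with AES-256, deriving the key via PBKDF2 from a passphrase
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in conversation.log -out conversation.log.enc \
  -pass pass:example-passphrase   # hypothetical; use a real secret store

# Remove the plaintext once the encrypted copy exists
rm conversation.log

# Decrypt when the agent needs the history again
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in conversation.log.enc -out conversation.log \
  -pass pass:example-passphrase
```

Note this only protects files you explicitly encrypt; anything the agent writes while running is still plaintext until the next encryption pass, which is why full-disk encryption remains the recommended baseline.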
> How Clawly handles security
When you deploy an agent through Clawly, we handle the security hardening for you:
- ✓ Isolated Docker containers with read-only filesystems, dropped capabilities, and memory limits — every agent runs in its own sandbox.
- ✓ Encrypted credential storage. Your API keys and channel tokens are encrypted at rest using Laravel's encryption — never stored in plaintext.
- ✓ AI API proxy on managed plans. Your agent never sees the raw AI provider key — requests go through our proxy, so your Anthropic/OpenAI keys stay on our servers, not in the container.
- ✓ Automatic updates. We keep the agent runtime updated with the latest OpenClaw releases and security patches.
Skip the security setup
Clawly deploys your OpenClaw agent in an isolated, hardened container with encrypted credentials and managed updates. No DevOps required.
Deploy Your Agent