
February 4, 2026

AI · 13 min read

Running OpenClaw Securely with Cloudflare Workers

OpenClaw running on Cloudflare Workers

OpenClaw has quickly become one of the most popular open-source projects on GitHub, crossing 100k stars in record time. It's a self-hosted personal AI assistant that connects to your messaging platforms and does things for you autonomously. Think of it as your own private AI agent that lives in WhatsApp, Telegram, Slack, or Discord and can browse the web, summarize PDFs, manage your emails, and handle scheduling.

Originally created by Peter Steinberger under the name Clawdbot (later renamed to Moltbot, and now OpenClaw), the project is model-agnostic. You can wire it up to Claude, OpenAI, or even run local LLMs through Ollama. It has persistent memory, so it remembers context across conversations. And because of Molthub, its community module ecosystem, you can extend it with plugins for just about anything.

But here's the thing. Running a powerful autonomous agent that can browse the web and send emails on your behalf introduces serious security concerns. In this article, we'll walk through how to run OpenClaw securely using Cloudflare Workers and their Sandbox Containers so you get all the power without leaving your front door wide open.

The security problem

Let's be honest about what OpenClaw actually is from a security perspective: it's a remotely accessible service that can execute arbitrary tasks, browse the internet, read your emails, and interact with external APIs using your credentials. That's an incredibly attractive target.

A recent security audit found roughly 1,000 unprotected OpenClaw gateways exposed directly on the internet. No authentication, no encryption, just sitting there waiting for someone to stumble onto them. If an attacker finds your instance, they can instruct your AI assistant to do whatever they want using your API keys and connected accounts.

The risks break down into a few categories:

  • Remote Code Execution - OpenClaw executes code as part of its normal operation. An unprotected instance means anyone can trigger that execution
  • API Key Theft - Your LLM provider keys, email credentials, and messaging tokens are all accessible if the instance is compromised
  • Supply Chain Attacks - Molthub modules are community-contributed. A malicious module could exfiltrate data or establish a backdoor
  • Lateral Movement - If you're running OpenClaw on a VPS alongside other services, a compromised instance gives attackers a foothold into your broader infrastructure

Running OpenClaw on a bare VPS with a reverse proxy and basic auth is better than nothing, but it's not enough. You need proper isolation, access control, and defense in depth.

Why Cloudflare Workers

Cloudflare Workers run at the edge across Cloudflare's global network. But the key feature for our use case is Sandbox Containers, which give you isolated execution environments that are ephemeral and locked down by default.

Here's why this architecture works well for OpenClaw:

  • Sandbox Isolation - Each container runs in its own sandbox. Even if OpenClaw or a Molthub module is compromised, the blast radius is contained. There's no host OS to pivot to, no other services sharing the same environment
  • Zero Trust Access - Cloudflare Access sits in front of everything. No one touches your OpenClaw instance without passing through identity verification first
  • Edge Routing - The Worker handles routing and proxying at the edge, meaning your actual OpenClaw process is never directly exposed to the internet
  • Built-in Browser Rendering - Cloudflare's Browser Rendering service provides a managed Chromium instance for web browsing tasks, so you don't need to run a headless browser alongside OpenClaw
  • R2 Storage - Persistent storage for configuration and conversation history without managing a database
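To make the edge-routing point concrete, here's a minimal sketch of the gating decision a Worker like this makes before anything reaches the container. The names (`routeDecision`, `PUBLIC_ROUTES`) are illustrative, not Moltworker's actual API:

```typescript
// Sketch of the edge gate: a request only reaches the sandboxed container
// if it passes this decision. Illustrative only -- not Moltworker's API.
type Decision = { allowed: boolean; reason: string };

// Hypothetical unauthenticated paths (e.g. a health check).
const PUBLIC_ROUTES = new Set(["/healthz"]);

function routeDecision(pathname: string, hasAccessJwt: boolean): Decision {
  if (PUBLIC_ROUTES.has(pathname)) {
    return { allowed: true, reason: "public route" };
  }
  if (!hasAccessJwt) {
    // Cloudflare Access normally blocks these upstream; checking again in
    // the Worker is defense in depth.
    return { allowed: false, reason: "missing Access JWT" };
  }
  return { allowed: true, reason: "authenticated" };
}
```

The point of the sketch is the shape of the architecture: the container never sees a request the Worker hasn't already vetted.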

The trade-off is that you're coupling yourself to Cloudflare's ecosystem. If that bothers you, this approach isn't for you. But if you're already using Cloudflare or you value security over vendor neutrality, it's a solid fit.

Setting up Moltworker

Cloudflare published Moltworker (github.com/cloudflare/moltworker), a purpose-built project that wraps OpenClaw in a Cloudflare Worker with Sandbox Containers. Let's set it up.

Prerequisites

You need the following before starting:

  • A Cloudflare account with a Workers paid plan ($5/month)
  • Node.js 18+ installed locally
  • Wrangler CLI installed and authenticated
  • An LLM provider API key (Claude or OpenAI)

First, make sure Wrangler is installed and logged in:

npm install -g wrangler
wrangler login

Cloning and configuring

Clone the Moltworker repository:

git clone https://github.com/cloudflare/moltworker.git
cd moltworker
npm install

Now open wrangler.toml. This is where the core configuration lives. The default looks something like this:

name = "moltworker"
main = "src/index.ts"
compatibility_date = "2024-12-01"

[containers]
max_instances = 1

[[containers.config]]
image = "ghcr.io/openclaw/openclaw:latest"
port = 8080
memory_mb = 512

[vars]
OPENCLAW_MODE = "production"

A few things to pay attention to:

  • max_instances controls how many Sandbox Containers can run simultaneously. For personal use, 1 is fine. If you have multiple users or heavy workloads, bump it up
  • memory_mb defaults to 512 which is adequate for most use cases. If you're running larger local models through Ollama, you'll need more
  • image points to the official OpenClaw container image. Pin this to a specific version tag in production instead of using latest

Let's pin that version:

[[containers.config]]
image = "ghcr.io/openclaw/openclaw:v2.4.1"
port = 8080
memory_mb = 512

Configuring Cloudflare R2 for storage

OpenClaw needs persistent storage for its configuration, conversation history, and memory. Moltworker uses Cloudflare R2 for this, which is S3-compatible object storage.

Create an R2 bucket:

wrangler r2 bucket create openclaw-storage

Then add the R2 binding to your wrangler.toml:

[[r2_buckets]]
binding = "STORAGE"
bucket_name = "openclaw-storage"

Moltworker maps this bucket to OpenClaw's data directory inside the container. Your conversation history, persistent memory, and configuration files all live here. This means your container can be destroyed and recreated without losing state, which is exactly how Sandbox Containers are designed to work.
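As an illustration of that mapping, a helper along these lines would translate container data paths into R2 object keys. The `/data/` prefix and the key scheme here are hypothetical; the real layout is an internal detail of Moltworker:

```typescript
// Hypothetical mapping between the container's data directory and R2 object
// keys. The actual scheme is internal to Moltworker.
const DATA_PREFIX = "/data/";

function r2Key(containerPath: string): string {
  if (!containerPath.startsWith(DATA_PREFIX)) {
    // Refuse anything outside the data directory rather than guessing a key.
    throw new Error(`path outside data directory: ${containerPath}`);
  }
  // "/data/memory/notes.json" -> "memory/notes.json"
  return containerPath.slice(DATA_PREFIX.length);
}
```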

One thing to keep in mind: if you run multiple instances pointing at the same bucket, concurrent writes to the same object resolve as last-write-wins, so two containers could clobber each other's state. For a single-user OpenClaw instance with max_instances = 1, this is irrelevant.

Setting up Zero Trust Access

This is arguably the most important step. Without it, you're just running OpenClaw on someone else's infrastructure instead of your own, which doesn't meaningfully improve security.

Cloudflare Zero Trust Access lets you put an identity-aware proxy in front of your Worker. Here's how to configure it.

Protecting admin routes

Go to the Cloudflare Zero Trust dashboard and create a new Access Application:

  1. Navigate to Access > Applications > Add an application
  2. Select Self-hosted
  3. Set the application domain to your Worker's route (e.g., openclaw.yourdomain.com)
  4. Under Policies, create an Allow policy that requires email matching your identity

Your policy should look something like:

Policy name: Owner Only
Action: Allow
Include: Emails - you@yourdomain.com

For additional security, enable device posture checks. This ensures that even if someone steals your credentials, they can't access the instance from an unmanaged device:

Require: Gateway - Warp enabled
Require: Device posture - Disk encryption enabled

Device pairing

OpenClaw uses device pairing for messaging platform connections. When you first connect WhatsApp or Telegram, you scan a QR code or enter a pairing code. This pairing flow needs to be protected behind Zero Trust as well.

In your wrangler.toml, make sure the pairing endpoint is routed through the Worker and not exposed directly:

[vars]
PAIRING_ROUTE = "/pair"
REQUIRE_AUTH = "true"

The Worker will enforce the Zero Trust policy before allowing access to the pairing flow. Without this, anyone who discovers your Worker's URL could pair their own device.
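As defense in depth, the Worker can re-check the Access JWT itself rather than assuming Access is always in front of it. Access attaches the token in the Cf-Access-Jwt-Assertion header; real validation checks the signature and audience against your Zero Trust team's certs endpoint with a JWT library. In this sketch that validation is injected as a callback so the gate logic stands on its own:

```typescript
// Cloudflare Access sets Cf-Access-Jwt-Assertion on authenticated requests.
// verify() stands in for real signature/audience validation against your
// Zero Trust team's certs endpoint (e.g. via a JWT library).
type Verify = (jwt: string) => boolean;

function accessGate(headers: Map<string, string>, verify: Verify): boolean {
  const jwt = headers.get("cf-access-jwt-assertion");
  if (!jwt) return false; // no JWT means the request somehow bypassed Access
  return verify(jwt);
}
```

If the header is absent, something upstream is misconfigured and the only safe response is to refuse the request.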

Connecting your LLM provider

OpenClaw needs API keys for your LLM provider. Never put these in wrangler.toml or commit them to your repository. Use Cloudflare's secrets management instead.

wrangler secret put ANTHROPIC_API_KEY

You'll be prompted to enter the value interactively. The secret is encrypted at rest and only available to your Worker at runtime. Do the same for any other provider keys:

wrangler secret put OPENAI_API_KEY

Then reference them in your OpenClaw configuration through environment variables. Moltworker automatically passes Worker secrets to the container as environment variables:

[vars]
LLM_PROVIDER = "anthropic"
LLM_MODEL = "claude-sonnet-4-20250514"

The actual API key is resolved from the secret, not from vars. This separation matters because vars are visible in your wrangler.toml (which you might commit to a repo), while secrets are stored securely in Cloudflare's infrastructure.

If you want to use multiple providers, OpenClaw supports fallback chains:

[vars]
LLM_PROVIDER = "anthropic"
LLM_MODEL = "claude-sonnet-4-20250514"
LLM_FALLBACK_PROVIDER = "openai"
LLM_FALLBACK_MODEL = "gpt-4o"
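Conceptually, OpenClaw resolves these vars into an ordered chain and tries each provider in turn when the previous one errors. A sketch of that resolution (the variable names match the config above; the logic itself is an assumption about OpenClaw's internals):

```typescript
type ProviderConfig = { provider: string; model: string };

// Build the ordered fallback chain from the [vars] above. How OpenClaw
// resolves these internally is an assumption here.
function resolveChain(vars: Record<string, string | undefined>): ProviderConfig[] {
  const chain: ProviderConfig[] = [];
  if (vars.LLM_PROVIDER && vars.LLM_MODEL) {
    chain.push({ provider: vars.LLM_PROVIDER, model: vars.LLM_MODEL });
  }
  if (vars.LLM_FALLBACK_PROVIDER && vars.LLM_FALLBACK_MODEL) {
    chain.push({ provider: vars.LLM_FALLBACK_PROVIDER, model: vars.LLM_FALLBACK_MODEL });
  }
  return chain;
}
```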

Connecting messaging platforms

Once your Moltworker is deployed, you can pair messaging platforms through the protected pairing route.

Deploy first:

wrangler deploy

Then visit your pairing URL (e.g., https://openclaw.yourdomain.com/pair). You'll be challenged by Cloudflare Access first, then presented with the OpenClaw pairing interface.

For WhatsApp, you'll get a QR code to scan with the WhatsApp app on your phone. This uses the WhatsApp Web protocol, so your phone needs to stay connected.

For Telegram, you'll register a bot through BotFather and provide the bot token. Store this as a secret:

wrangler secret put TELEGRAM_BOT_TOKEN

For Slack and Discord, you'll need to create apps in their respective developer portals and configure webhook URLs pointing to your Worker. The Worker handles incoming webhooks and routes them to the OpenClaw container.

A word of caution: each messaging platform connection is another attack surface. Only connect the platforms you actually use. Every additional integration is another set of credentials to protect and another protocol to worry about.
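Slack is a good example of that protocol burden: incoming webhooks must be verified with Slack's documented v0 signing scheme, an HMAC-SHA256 over `v0:{timestamp}:{body}` keyed with your signing secret, or anyone who finds the webhook URL can forge events. A sketch using Node's crypto:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Compute Slack's v0 request signature:
// "v0=" + hex(HMAC-SHA256(secret, "v0:timestamp:body"))
function slackSignature(secret: string, timestamp: string, body: string): string {
  const base = `v0:${timestamp}:${body}`;
  return "v0=" + createHmac("sha256", secret).update(base).digest("hex");
}

function verifySlack(secret: string, timestamp: string, body: string, header: string): boolean {
  const expected = Buffer.from(slackSignature(secret, timestamp, body));
  const actual = Buffer.from(header);
  // timingSafeEqual throws on length mismatch, so compare lengths first.
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}
```

Production code should additionally reject requests whose timestamp is more than a few minutes old, to block replay of captured requests.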

Browser automation with CDP proxy

One of OpenClaw's most impressive features is its ability to browse the web autonomously. It can research topics, fill out forms, and extract information from web pages. Under the hood, this uses the Chrome DevTools Protocol (CDP) to control a headless browser.

Running a headless Chromium instance alongside OpenClaw is a significant security risk in traditional deployments. If the browser sandbox is escaped, the attacker has access to everything on your host.

Moltworker solves this by using Cloudflare Browser Rendering, a managed headless browser service. Instead of running Chromium in your container, OpenClaw connects to Cloudflare's browser pool through a CDP proxy.

Add the Browser Rendering binding to your wrangler.toml:

[browser]
binding = "BROWSER"

Then configure OpenClaw to use the CDP proxy:

[vars]
BROWSER_MODE = "cdp_proxy"
CDP_ENDPOINT = "wss://browser.cloudflare.com"

The Worker proxies CDP connections from OpenClaw to Cloudflare's Browser Rendering service. Each browsing session gets a fresh, isolated browser instance. No persistent state, no cookies leaking between sessions, no risk of browser exploits compromising your container.

The one limitation is performance. There's added latency from proxying CDP through the Worker to Cloudflare's browser pool compared to running a local Chromium instance. For most tasks this is imperceptible, but if you're doing heavy scraping with hundreds of page loads, you'll notice it.

Production hardening

The setup above is solid, but there are a few more things you should do before calling it production-ready.

Rate limiting

Add rate limiting to prevent abuse, even behind Zero Trust. A compromised session token could be used to flood your instance:

[vars]
RATE_LIMIT_RPM = "60"
RATE_LIMIT_BURST = "10"
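Those two vars map naturally onto a token bucket: RATE_LIMIT_BURST is the bucket capacity and RATE_LIMIT_RPM the refill rate. A sketch with an explicit clock so the behavior is deterministic (whether Moltworker implements its limiter exactly this way is an assumption):

```typescript
// Token bucket: capacity = burst, refilling at ratePerMinute tokens/minute.
class TokenBucket {
  private tokens: number;
  private lastMs: number;

  constructor(private ratePerMinute: number, private burst: number, nowMs = 0) {
    this.tokens = burst;
    this.lastMs = nowMs;
  }

  // Returns true if a request at time nowMs is allowed.
  allow(nowMs: number): boolean {
    const elapsedMin = (nowMs - this.lastMs) / 60_000;
    this.tokens = Math.min(this.burst, this.tokens + elapsedMin * this.ratePerMinute);
    this.lastMs = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

The burst value lets legitimate short spikes through while the per-minute rate caps sustained abuse.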

You can also configure rate limiting at the Cloudflare level through the dashboard, which gives you more granular control.

Audit logging

Enable audit logging to track every request to your OpenClaw instance. Moltworker supports logging to Cloudflare Logpush:

[logpush]
enabled = true
destination = "r2://openclaw-logs"

This gives you a full audit trail of who accessed your instance, what commands were issued, and when. If something goes wrong, you'll know exactly what happened.

Module allowlisting

Molthub modules are the biggest supply chain risk in the OpenClaw ecosystem. Any module you install runs with the same permissions as OpenClaw itself. Mitigate this by maintaining an explicit allowlist:

[vars]
MOLTHUB_ALLOWLIST = "official/web-search,official/email,official/calendar"
MOLTHUB_ALLOW_UNOFFICIAL = "false"
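The enforcement these two vars imply is simple to reason about. A sketch (the variable names come from the config above; the gate logic is an assumption about how such a check would work):

```typescript
// Parse the comma-separated MOLTHUB_ALLOWLIST value into a set.
function parseAllowlist(value: string): Set<string> {
  return new Set(value.split(",").map((s) => s.trim()).filter(Boolean));
}

// A module loads only if it's on the allowlist, and -- unless unofficial
// modules are explicitly allowed -- only if it's in the official/ namespace.
function moduleAllowed(name: string, allowlist: Set<string>, allowUnofficial: boolean): boolean {
  if (!allowlist.has(name)) return false;
  if (!allowUnofficial && !name.startsWith("official/")) return false;
  return true;
}
```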

Only install modules you've reviewed. Stick to official modules when possible. If you need a community module, read the source code first. This isn't paranoia; it's basic supply chain security.

Container pinning

We already pinned the OpenClaw image version, but you should also verify the image digest:

[[containers.config]]
image = "ghcr.io/openclaw/openclaw:v2.4.1@sha256:abc123..."

This ensures that even if the tag is repointed to a different image (compromised registry), your deployment won't pull the tampered version.

Costs breakdown

Let's be transparent about what this costs monthly:

  • Cloudflare Workers Paid Plan - $5/month (includes 10 million requests and Sandbox Containers)
  • Cloudflare R2 - Free tier covers 10 GB storage and 10 million Class B operations per month. For a personal OpenClaw instance, you'll stay well within this
  • Cloudflare Zero Trust - Free for up to 50 users. More than enough for personal use
  • Cloudflare Browser Rendering - Included in the Workers paid plan with usage limits
  • LLM API Costs - This is the variable part. Claude API usage for a personal assistant typically runs $10-30/month depending on how heavily you use it. OpenAI is similar. If you use Ollama with a local model, this drops to zero but you lose the cloud convenience

Total realistic cost: $15-35/month for a fully secured, production-grade personal AI assistant. Compare this to running a VPS ($5-20/month) where you still have to handle security, updates, monitoring, and backups yourself.

The Workers paid plan is the one non-negotiable cost. You can't run Sandbox Containers on the free tier. But for $5/month, you get sandboxed execution, edge routing, and a global network. That's hard to beat.

Conclusion

OpenClaw is a remarkable project. Having a personal AI assistant that connects to all your messaging platforms and can autonomously handle tasks is genuinely useful. But the security implications of running it are not trivial.

The Moltworker approach gives you defense in depth: Cloudflare Access handles authentication, the Worker provides edge routing without exposing your instance directly, Sandbox Containers isolate execution, R2 provides persistent storage without a database to manage, and Browser Rendering eliminates the risk of running headless Chromium alongside your agent.

Is it perfect? No. You're trusting Cloudflare with your data and execution environment, there's a recurring $5/month minimum, and the extra proxying layers add latency. But compared to the alternative of running an exposed OpenClaw instance on a VPS with nothing but nginx basic auth between you and the internet, this is a significant step up.

If you're going to run a personal AI agent that has access to your email, calendar, and messaging platforms, run it behind proper security infrastructure. The few hours of setup time and $5/month are a small price to pay for not becoming one of those 1,000 unprotected gateways.