OpenClaw MCP Guide

What is OpenClaw? The complete guide for 2026

OpenClaw crossed 250,000 GitHub stars in early March 2026, surpassing React. React took over a decade to get there. OpenClaw did it in about 60 days.

Those numbers are hard to ignore. But the star count doesn't tell you what OpenClaw actually is, whether you should use it, or what the risks are. I keep seeing the same confused questions in forums: Is it a chatbot? Something for developers? Why do people keep bringing up security problems?

This guide covers all of it, including the MCP layer that makes the whole thing extensible and the security mess you should know about before installing.

The short version: what is OpenClaw?

OpenClaw is a free, open-source AI assistant that runs on your own hardware. You install it, connect it to an LLM like Claude or GPT, and it becomes a persistent agent that you talk to through WhatsApp, Telegram, Slack, Discord, or about 20 other messaging platforms.

What makes it different from just chatting with ChatGPT or Claude? OpenClaw does things. It can manage your email, query a database, control your smart lights, or automate whatever file-shuffling task you hate doing manually. It remembers what you told it last week. And it runs 24/7 on your machine, not someone else's server.

All of that external connectivity runs through MCP (Model Context Protocol). More on that in a minute.

How OpenClaw got here: the ClawdBot saga

The backstory is genuinely strange. Peter Steinberger, an Austrian developer, published a project called ClawdBot in November 2025. It was a personal AI assistant built on top of Anthropic's Claude API. The name was a play on "Claude" with a lobster theme.

Anthropic's lawyers didn't love that. They sent a cease-and-desist over the name, arguing it was too close to their "Claude" trademark. So on January 27, 2026, Steinberger renamed it to Moltbot. Three days later, he renamed it again to OpenClaw because "Moltbot never quite rolled off the tongue."

Then things got weirder. On February 14, 2026, Steinberger announced he was joining OpenAI. He made it a condition that OpenClaw would move to an independent 501(c)(3) foundation so it wouldn't become an OpenAI product. The code stays MIT-licensed. The foundation governance is still taking shape, but the legal structure protects the open-source status.

So to be clear: OpenClaw is not owned by OpenAI, Anthropic, or anyone else. It's community-governed open source software that happens to have been created by someone who now works at OpenAI.

How OpenClaw actually works

The core architecture

OpenClaw is a Node.js application that runs as a persistent background process. Messages come in from whatever chat app you use, and the assistant runtime handles the actual thinking and tool execution. Everything lives on your hardware.

You need an API key for at least one LLM provider. OpenClaw supports Anthropic, OpenAI, Google, Fireworks, and any OpenAI-compatible endpoint. So you can run it with Claude, GPT-4o, Gemini, or a local model through something like Ollama. You bring the LLM; OpenClaw handles everything else.

The memory system

Most chatbots forget you the moment the window closes. OpenClaw has a four-layer memory system:

  • Session context: JSONL transcripts of the current conversation
  • Daily logs: markdown files organized by date
  • Long-term memory: a curated MEMORY.md file with facts the agent should always know
  • Semantic search: SQLite with embeddings, used to surface relevant past conversations

Everything is file-based. The AI agent only retains what gets written to disk, which means you can read, edit, or delete the agent's memory with a text editor. I find that reassuring in a way that cloud-based assistants aren't.
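Because memory is just files, you can script against it with ordinary tools. Here is a minimal sketch of reading a session transcript; the exact JSONL field names (`role`, `content`) are an assumption for illustration, since the document only specifies that transcripts are JSONL:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def summarize_session(jsonl_path: Path) -> list[str]:
    """Extract one 'role: content' line per message from a JSONL transcript.

    Field names are assumed; check your actual session files."""
    lines = []
    for raw in jsonl_path.read_text().splitlines():
        if not raw.strip():
            continue
        msg = json.loads(raw)  # one JSON object per line
        lines.append(f"{msg['role']}: {msg['content']}")
    return lines

# Demo with a throwaway transcript:
with TemporaryDirectory() as tmp:
    session = Path(tmp) / "session.jsonl"
    session.write_text(
        '{"role": "user", "content": "My tax deadline is Thursday"}\n'
        '{"role": "assistant", "content": "Noted, I will remind you."}\n'
    )
    print(summarize_session(session)[0])  # -> user: My tax deadline is Thursday
```

The same approach works for auditing: grep the daily logs, diff MEMORY.md over time, or bulk-delete anything you'd rather the agent forget.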

Where MCP fits in

Model Context Protocol is how OpenClaw connects to the outside world. When the agent needs to hit a GitHub API, query a database, or read your local files, it calls an MCP server.

You register MCP servers in openclaw.json (or .mcp.json). Each server exposes tools that the agent can discover and call. The setup looks like this:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/documents"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}

MCP was originally created by Anthropic and donated to the Linux Foundation's Agentic AI Foundation in December 2025. It's now an open standard, and OpenClaw's support for it is one of the main reasons people build on it instead of rolling a custom agent from scratch.

If you want to build your own MCP server (it takes about 15 lines of Python), check out our Python MCP server tutorial.
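To make the protocol less abstract, here is a toy sketch of the request/response shape an MCP server handles. It deliberately skips the official SDK, the initialization handshake, and transport details; the `add` tool and its payloads are made up for illustration, though the `tools/list` and `tools/call` method names follow the MCP spec:

```python
import json

def handle(request: dict) -> dict:
    """Answer the two core MCP tool methods (JSON-RPC 2.0) for one toy tool."""
    if request["method"] == "tools/list":
        # Advertise the tools this server exposes so the agent can discover them.
        result = {"tools": [{"name": "add", "description": "Add two integers"}]}
    elif request["method"] == "tools/call" and request["params"]["name"] == "add":
        # Execute the named tool with the supplied arguments.
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text", "text": str(args["a"] + args["b"])}]}
    else:
        result = {"error": "unknown method or tool"}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# Simulate one exchange, as a client would over stdio:
wire = '{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}'
response = handle(json.loads(wire))
print(response["result"]["tools"][0]["name"])  # -> add
```

A real server would also negotiate capabilities at startup and stream requests over stdio or HTTP, which is exactly the boilerplate the official SDK handles for you.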

What you can actually do with it

Personal automation

The messaging integration is the thing that hooks most people. You text your OpenClaw agent on WhatsApp the same way you'd text a friend. "Remind me to file taxes Thursday." "Summarize the last 5 emails from Sarah." "Turn off the living room lights." It responds, takes action, and remembers the context for next time.

OpenClaw integrates with 50+ platforms without any plugins: WhatsApp, Telegram, Discord, Slack, Signal, iMessage, Gmail, Google Chat, Microsoft Teams, and more. You pick the channels you already use.

Developer workflows

MCP is doing the heavy lifting here. Connect the GitHub MCP server and your agent can review pull requests, create issues, or search code. Hook up a database server and it queries your production data directly.

I've talked to developers who have agents monitoring CI pipelines and triaging GitHub notifications while they sleep. Because the agent remembers previous conversations, it already knows their codebase layout and coding style.

The skills marketplace

ClawHub is OpenClaw's community skill marketplace. As of late February 2026, it hosts over 13,700 skills across categories like productivity, developer tools, smart home, social media, and AI models. About 40-60 new skills get added daily.

A skill is essentially a SKILL.md file that tells the agent how to behave in a specific context, plus optional MCP server configurations for tool access. You install one with a single command or by dropping the skill folder into your workspace.
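The exact SKILL.md layout isn't fixed here, so the following is a purely hypothetical example (the skill name, headings, and paths are all illustrative, not a real ClawHub skill) just to make the idea concrete:

```markdown
# SKILL.md — hypothetical "standup-notes" skill

You help the user write daily standup notes.

## Behavior
- Each morning, ask for yesterday's progress and today's plan.
- Save the result to notes/standup-YYYY-MM-DD.md.
- Keep entries under 10 bullet points.

## Tools
Requires the filesystem MCP server with access to the notes/ directory.
```

The point is that a skill is mostly natural-language instructions plus declared tool needs, which is exactly why reading the file before installing is both feasible and worth doing.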

But there's a catch. A big one.

The security situation you need to understand

I'm not going to sugarcoat this. OpenClaw has had serious security problems, and you need to understand them before installing it.

The ClawHavoc supply chain attack

In late January 2026, researchers discovered a campaign called ClawHavoc that planted at least 1,184 malicious skills in ClawHub. These skills looked legitimate but contained hidden code. Some stole credentials. Others installed Atomic Stealer, a macOS malware variant, or opened persistent backdoors.

The attack worked through social engineering. Skills had professional-looking README files with "prerequisites" sections that tricked users into running malicious terminal commands. At one point, roughly 20% of the ClawHub ecosystem was compromised, according to Bitdefender's analysis.

The OpenClaw team responded by removing 2,419 suspicious skills and partnering with VirusTotal for automated scanning. But the incident exposed how vulnerable a skill-based ecosystem can be when there's no mandatory code review process.

CVEs and other vulnerabilities

Beyond ClawHavoc, OpenClaw has disclosed multiple CVEs. The most severe was CVE-2026-25253, rated CVSS 8.8, which allowed external websites to hijack local OpenClaw agents via WebSocket. It was patched in version 2026.1.29. Additional vulnerabilities (CVE-2026-25593, CVE-2026-24763, and others) covered remote code execution, command injection, and path traversal.

Version 2026.2.14, released on Valentine's Day, included 50+ security fixes. Keep your installation updated.

How to run it safely

If you're going to use OpenClaw, use Docker. The Docker image runs as a non-root user and provides container-level isolation that the bare-metal install doesn't. For a deeper look at this, read our Docker sandboxing guide.

Beyond Docker, vet every skill before installing it. Check the author's profile, read the SKILL.md, look at the actual code. Our skills guide has a 5-step vetting checklist you can follow.

OpenClaw runs code on your machine with whatever permissions you give it. If you skip the security steps, you're handing an AI agent the keys to your system. Treat skill installation the way you'd treat running a random npm package from the internet, because that's exactly what it is.
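In that spirit, here is a sketch of the extra container hardening you might layer on top of the Docker route. The image name, user ID, and volume path are assumptions, not the project's documented defaults; adapt them to whatever docker-setup.sh actually builds:

```shell
# Illustrative hardening flags: non-root user, read-only root filesystem,
# no Linux capabilities, a memory cap, and a single writable data volume.
docker run -d \
  --name openclaw \
  --user 1000:1000 \
  --read-only \
  --cap-drop ALL \
  --memory 2g \
  -v "$HOME/openclaw-data:/data" \
  openclaw/openclaw:latest
```

Even if the official compose file already sets some of these, it's worth confirming each one is in effect rather than assuming it.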

How OpenClaw compares to alternatives

OpenClaw isn't the only option.

NanoClaw launched in late January 2026 as a security-focused alternative. It has 5 files, one process, and runs inside Linux containers with filesystem isolation by default. If security is your top priority and you don't need 13,000+ skills, NanoClaw is the more conservative choice. The trade-off is a smaller ecosystem (it's about a month old).

Claude Code is Anthropic's official coding agent. It's purpose-built for software development: reading codebases, writing code, running tests, managing git. If all you need is an AI coding assistant, Claude Code is better at that specific job. But it won't manage your WhatsApp messages or turn off your lights. Different problem entirely.

OpenClaw's advantage is breadth. 50+ messaging integrations, thousands of skills, multi-LLM support, persistent memory. Its disadvantage is that breadth comes with a bigger attack surface and more complexity (nearly 500K lines of code, 70+ dependencies).

Getting started in 5 minutes

What you need

  • Node.js 22 or higher (24 is recommended)
  • macOS 12+, Windows with WSL2, or Linux
  • About 500 MB of disk space
  • An API key from Anthropic, OpenAI, or another supported provider

Install and configure

# Install OpenClaw globally
npm install -g openclaw@latest

# Run the onboarding wizard
openclaw onboard

The onboarding wizard handles the initial gateway setup and messaging channel connections. If you'd rather use Docker (and you should, for security):

# Clone and run with Docker Compose
git clone https://github.com/openclaw/openclaw.git
cd openclaw
./docker-setup.sh

Connect your first MCP server

Once OpenClaw is running, add an MCP server to give it access to external tools. The filesystem server is the simplest starting point. In your openclaw.json:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/files"]
    }
  }
}

Restart OpenClaw and your agent can now access files in that directory. From here you can add more MCP servers, such as GitHub or a database, or build your own.

Frequently asked questions

Is OpenClaw free?

The software is free and MIT-licensed. You pay for LLM API usage, which varies by provider. Claude's API, for example, charges per token. Running a local model through Ollama has no per-query cost but requires decent hardware.

Can OpenClaw run offline?

Partially. If you use a local LLM backend (like Ollama or LM Studio), the core agent works offline. But any MCP server that hits a web API obviously needs internet, and messaging integrations like WhatsApp require connectivity to the platform's servers.

Is OpenClaw safe for production use?

It depends on your definition of production. OpenClaw is a community project with 614 contributors and an active security response team. But the ClawHavoc incident and the multiple CVEs show that the security posture is still maturing. If you run it, use Docker, keep it updated, and vet your skills carefully. Our security guide covers the details.

Where this is heading

OpenClaw is the most widely adopted open-source AI agent right now. That part isn't debatable. What I'm less sure about is whether the project can maintain quality as it scales under a new foundation structure, with its creator working at a competitor, and with an ecosystem that's already been targeted by serious supply chain attacks.

The MCP layer is what makes OpenClaw interesting to me long-term. It means the agent isn't locked into any specific set of tools. New MCP servers are being published daily. And because MCP is an open standard (not an OpenClaw-specific protocol), your skills and servers work with Claude Desktop, other agents, and custom setups too.

If you're going to try it, start with Docker, connect one or two MCP servers, and see if the messaging-based workflow fits how you actually want to use an AI assistant. What would you build with a persistent agent that lives in your WhatsApp?