OpenClaw skills are the reason a single open-source AI agent can manage your calendar, then turn around and deploy your code. The project has 250,000+ GitHub stars, and skills are what make it actually useful beyond basic chat.
But I keep thinking about February 2026. The ClawHavoc supply chain attack poisoned ClawHub with 1,184 malicious skills that looked like normal productivity tools. One attacker uploaded 677 packages. The payloads included credential stealers and reverse shells, all hiding behind professional READMEs and convincing descriptions.
So how do you use an ecosystem of 13,700+ community-built skills without accidentally giving someone your SSH keys?
This guide covers how OpenClaw skills work, how they connect to MCP servers, how to find and vet them without getting burned, and how to build your own from scratch.
What are OpenClaw skills, and how do they relate to MCP?
If you've spent any time in the OpenClaw community, you've probably seen three terms used interchangeably: skills, MCP servers, and integrations.
They're different things. Mixing them up leads to confusion, and sometimes security mistakes.
Skills vs MCP servers vs integrations
Skills are markdown instruction files that tell your OpenClaw agent how to do a specific job. Every skill is a folder with a SKILL.md file inside it. The file has YAML frontmatter for metadata, then step-by-step instructions in plain English: what to do, which tools to call, and what rules to follow.
MCP servers are separate processes that expose tools through the Model Context Protocol. They connect your AI agent to external systems like databases, search APIs, file storage, or third-party services like GitHub.
Integrations are messaging channels. WhatsApp, Telegram, Slack, Discord, and 50+ other platforms where your agent can receive and respond to messages.
Here's the thing that took me a while to grasp: over 65% of active OpenClaw skills now wrap underlying MCP servers. When you install something like serpapi-mcp, you're getting an MCP server that handles search requests plus a SKILL.md that tells the agent when and how to use it.
Think of it this way: skills tell the agent what to do. MCP servers give it the ability to actually do it.
One important distinction: MCP servers are portable. They work with Claude Desktop, Cursor, VS Code, and any MCP-compatible host. Skills are OpenClaw-specific. If you want the best of both, build an MCP server for the tool capabilities and wrap it in a skill for the workflow logic.
The ClawHub marketplace
ClawHub is OpenClaw's public skill registry. As of late February 2026, it lists over 13,700 community-built skills covering development tools, productivity, communication, smart home control, and AI model management.
The search uses vector embeddings rather than just keyword matching, so you can search by what you want to do, not just exact names. Every listing includes version history, community reviews, and, since the ClawHavoc cleanup, automated VirusTotal scan badges.
If you want a curated starting point instead of browsing the full catalog, VoltAgent's awesome-openclaw-skills repo on GitHub maintains a categorized list of 5,400+ skills filtered from the official registry.
Popularity numbers are easy to find. Whether a skill is safe is harder to figure out.
How to find and install OpenClaw skills without getting burned
The scariest part of ClawHavoc wasn't the malware itself. According to The Hacker News, the malicious skills had professional READMEs and legitimate-sounding names. Some even had working features. The malware was a side payload hiding alongside real functionality.
Finding skills is easy. Knowing whether to trust them is the actual problem.
Browsing ClawHub
You can browse on the web or use the CLI. The CLI is faster if you know roughly what you're looking for:
# List skills that match your environment
clawhub list --eligible
# Search for a specific capability
clawhub search "github"
# Inspect a skill before installing
clawhub inspect github-manager
Use the --eligible flag. It filters to skills that actually run in your current setup, matching your OS, installed binaries, and configured MCP servers. Without it, you'll get results for skills you can't even use.
Vetting skills before installation
After spending time digging through the ClawHavoc aftermath, here's the process I use before installing anything:
- Check the VirusTotal badge. Every ClawHub skill shows a scan status: approved, suspicious, or blocked. A "benign" result is a good sign but not a guarantee. VirusTotal catches known malware patterns, not new ones.
- Read the
SKILL.mdyourself. During ClawHavoc, attackers used ClickFix social engineering, burying malicious instructions inside long documentation sections that tricked users into running terminal commands. If a skill asks you tocurlsomething from a URL you don't recognize, skip it. - Look at the publisher's other work. A developer who has published multiple skills over months is a better bet than an account created last week with one package. GitHub accounts under one week old can't publish to ClawHub anymore, but older accounts can still be compromised.
- Check what it asks for. If a weather skill requests file system write access, something is off. The
SKILL.mdfrontmatter declares its requirements underopenclaw.requires. Look at what environment variables, binaries, and permissions it expects. - Run a scanner. clawvet runs 6 independent analysis passes on any
SKILL.mdfile and catches obfuscated patterns that basic regex misses.
OpenClaw ships with 53 built-in skills maintained by the core team. If one of those covers what you need, use it. For everything else, treat third-party skills like code from a stranger, because that's what they are.
Installing via the ClawHub CLI
Once a skill passes your checks, installation is one command:
# Install from ClawHub
npx clawhub@latest install <skill-slug>
# Install a specific version
clawhub install github-manager --version 2.1.0
# Install to a custom directory
clawhub install github-manager --dir ./my-skills
After installing, make sure it's picked up:
# List all installed skills
clawhub list
# Update all skills to latest versions
clawhub update --all
# Remove a skill
clawhub uninstall <skill-slug>
If the skill wraps an MCP server, you also need to register that server in your openclaw.json:
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_TOKEN": "${GITHUB_TOKEN}"
}
}
}
}
Notice the ${GITHUB_TOKEN} syntax. That pulls the value from your environment variables instead of hardcoding the token in the config file. Do this. Your openclaw.json might end up in a git repo or backup, and you don't want secrets in it.
Set
chmod 600 openclaw.jsonto restrict file permissions. And if you're testing skills from unknown publishers, run OpenClaw inside a Docker sandbox. We have a full guide on that.
Building your own OpenClaw skill with MCP
Building a custom skill gives you full control over what the agent does and which MCP servers it talks to. The barrier to entry is low. You need a folder with one file.
Skill file structure
A valid skill is a directory with a SKILL.md file inside it. That's the minimum. For more complex skills, the recommended layout looks like this:
my-skill/
SKILL.md # Required — main instructions
README.md # Optional — documentation
scripts/ # Optional — helper scripts
fetch-data.sh
references/ # Optional — context files
api-docs.md
examples/ # Optional — usage examples
sample-output.md
The SKILL.md starts with YAML frontmatter for metadata, followed by markdown instructions:
---
name: my-custom-skill
description: Short summary of what this skill does
version: 1.0.0
openclaw:
emoji: "🔧"
requires:
env:
- MY_API_KEY
bins:
- node
- curl
primaryEnv: MY_API_KEY
---
# My Custom Skill
## Purpose
This skill does X when the user asks for Y.
## Workflow
1. Parse the user's request for...
2. Call the MCP tool `tool_name` with...
3. Format the response as...
## Rules
- Always confirm before destructive operations
- Never expose API keys in output
- If the MCP server is unreachable, suggest alternatives
The frontmatter tells the registry and security scanners what your skill needs. requires.env lists environment variables. requires.bins lists CLI tools that must be installed. The body is the actual instructions.
One thing to know: the agent treats your instructions as guidance. It fills gaps with its own judgment. If a particular behavior matters, be explicit about it in a Rules section. Vague instructions get vague results.
A practical example: GitHub PR reviewer with MCP
I'll walk through building a skill that reviews pull requests using the GitHub MCP server. This shows how the two pieces fit together: a skill for the review workflow, an MCP server for the actual GitHub API access.
Create the SKILL.md:
---
name: pr-reviewer
description: Review GitHub pull requests with automated analysis
version: 1.0.0
openclaw:
emoji: "🔍"
requires:
env:
- GITHUB_TOKEN
bins:
- npx
primaryEnv: GITHUB_TOKEN
---
# PR Reviewer Skill
## Purpose
Analyze GitHub pull requests and provide structured code
reviews when the user shares a PR link or number.
## Workflow
1. Extract the repo owner, name, and PR number from the user's message
2. Use the GitHub MCP server's `get_pull_request` tool to fetch PR details
3. Use `get_pull_request_diff` to retrieve the actual code changes
4. Analyze the diff for:
- Potential bugs or logic errors
- Security concerns (hardcoded secrets, SQL injection)
- Style consistency and naming conventions
- Missing tests for new functionality
5. Present findings grouped by severity: critical, warning, suggestion
## Output Format
- PR title, author, and branch info
- Summary of changes (files modified, lines added/removed)
- Findings grouped by severity with line references
- Overall assessment: approve, request changes, or needs discussion
## Rules
- Never approve PRs that add hardcoded credentials
- Flag any file larger than 500 lines of changes for manual review
- If the GitHub MCP server is unavailable, tell the user to check their token
Then register the GitHub MCP server in openclaw.json:
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_TOKEN": "${GITHUB_TOKEN}"
}
}
}
}
That's the whole thing. Two files. When a user says "review PR #42 in openclaw/openclaw," the agent matches the request to your skill, calls the GitHub MCP server's tools, and delivers a structured code review in the format you specified.
The MCP server is portable. It works with Claude Desktop, Cursor, or any other MCP host. But your skill's review logic and rules are yours. That combination is what makes this worth doing.
Publishing to ClawHub
If you want to share your skill, the process goes like this:
- Fork the registry. Go to github.com/openclaw/clawhub, fork it, add your skill folder, and open a pull request. Your GitHub account needs to be at least one week old.
- Use semantic versioning. Start with
1.0.0in your frontmatter. Increment for fixes, features, and breaking changes. - Wait for the automated scan. Every submission goes through VirusTotal. Skills flagged as "benign" get approved. "Suspicious" results go to manual review.
- Keep it maintained. ClawHub re-scans active skills periodically. If a dependency gets compromised, your skill could get flagged. Keep dependencies minimal.
Honestly, building your own skill is the single best thing you can do for your OpenClaw security. You wrote it. You know what it does. Nobody slipped a reverse shell into paragraph 47 of the README.
Bridging skills and MCP with mcporter
What if you want to use an existing MCP server with OpenClaw but don't want to write a full skill for it?
mcporter handles that. It's a CLI tool that discovers, installs, and configures MCP servers across multiple AI hosts, including OpenClaw, Claude Desktop, and Cursor.
# Install mcporter globally
npm install -g mcporter
# Search for available MCP servers
mcporter search "database"
# Install and configure for OpenClaw automatically
mcporter install @modelcontextprotocol/server-postgres --target openclaw
The --target openclaw flag adds the server to your openclaw.json automatically. No manual JSON editing.
This is the fastest way to connect OpenClaw to the 1,000+ community-built MCP servers covering Google Drive, Slack, databases, and enterprise systems, without writing any skill code at all.
Frequently asked questions
Are OpenClaw skills safe to install?
Not by default. The ClawHavoc attack exposed 1,184 malicious skills on ClawHub, and roughly one in five packages were flagged as suspicious. OpenClaw's VirusTotal partnership catches known malware but doesn't stop new attacks.
Layer your defenses: vet skills manually, use scanners like clawvet, run OpenClaw inside a Docker sandbox, and stick with the 53 built-in skills when they do what you need.
Can OpenClaw skills access my local files?
Yes, unless you've configured sandboxing. By default, OpenClaw runs with no permission restrictions and no command allowlist. A skill can access anything the OpenClaw process can reach: your home directory, SSH keys, browser credential stores.
To lock this down, configure filesystem restrictions in openclaw.json with explicit read paths, write paths, and deny paths. Always deny ~/.ssh and ~/.gnupg. For full isolation, run with Docker using --read-only --cap-drop=ALL flags. Our OpenClaw MCP Security guide has the complete checklist.
What's the difference between a skill and an MCP server?
A skill is a SKILL.md text file that tells your agent how to do a workflow. An MCP server is a running process that gives the agent actual tool capabilities through the Model Context Protocol. Most modern OpenClaw skills combine both: MCP for the tools, skill for the logic. The main difference is portability. MCP servers work with any compatible host. Skills only work with OpenClaw.
Do I need to know how to code to build a skill?
No. A SKILL.md is plain markdown with YAML frontmatter. If you can write step-by-step instructions, you can build a skill. The agent interprets natural language. Complex skills might include helper scripts, but the core file is always human-readable text.
What I'd actually recommend
OpenClaw skills work because they pair human-readable instructions with MCP server capabilities. You write the workflow in English. The MCP server does the heavy lifting with external APIs. It's a good model.
But the ClawHavoc attack showed what happens when 13,700+ community packages go unvetted. I'd honestly start with the built-in skills and build custom ones for anything they don't cover. That way you know what's running on your machine.
If you've already built a skill, I'm curious: did you wrap an MCP server, or did you go pure markdown? What was the tricky part? Reach out on X/Twitter if you want to share.