The Debrief

Malware in ClawHub: When AI Agent Skills Turn Against You

4 min read

The Short Version

Remember the previous article about OpenClaw, where I said skills let it learn new tricks? Well, attackers figured that out too.

Security researchers just found 341 malicious skills in ClawHub, the community marketplace where people share OpenClaw skills. These weren't subtle bugs. They were malware designed to steal your passwords, crypto wallets, and browser data.

What Actually Happened

In late January, someone started uploading skills to ClawHub that looked totally legit. Professional descriptions. Clean documentation. Names like "Solana Wallet Manager" or "Twitter Integration" or "YouTube Downloader."

The trick was in the setup instructions. Each skill had a "Prerequisites" section that said something like: "First, install openclaw-core" with a helpful link.

That link? It didn't go to any real software. It led to a page that tricked you (or your AI agent) into running something dangerous on your computer. Both Mac and Windows were targeted. The malware is called Atomic Stealer, and it's sold on the black market for $500–$1,000/month. Yes, malware is a business.

The Damage

Security firm Koi Security audited all 2,857 skills on ClawHub. 335 were part of a single coordinated campaign they named ClawHavoc. Another security firm found that 7.1% of all skills were leaking passwords and credentials, even ones that weren't intentionally malicious.

If you ran any of the bad ones, the malware could grab every password saved in your browser, your crypto wallets, files from your Desktop and Documents, chat logs, and your Mac's saved credentials.

One skill called "buy-anything" collected credit card details. Think about that for a second.

Why a Markdown File Can Be Dangerous

This is the part that's wild to me.

OpenClaw skills are often just markdown files with instructions. Nobody thinks a markdown file is dangerous. But when your AI agent reads that file, it doesn't just read it. It follows the instructions. Including the part that says "run this command first."

As 1Password's Jason Meller put it: "Markdown isn't content in an agent ecosystem. Markdown is an installer."

Every skill you install is basically giving someone else's instructions to your AI agent and hoping they're not malicious.

It Wasn't Just the Skills

Around the same time, a security researcher found a separate bug in OpenClaw itself (CVE-2026-25253). You click a link, and that one click lets someone else take over your OpenClaw and run whatever they want on your computer. No installs, no approvals. One click. (This was patched on January 29.)

On top of that, Trend Micro showed that attackers can hide instructions inside a Google Doc or an email. When your agent reads that document, it follows the hidden instructions. Remember the example from the first article about an email forwarding all your messages to an attacker? Researchers demonstrated it actually working.

What You Should Do

If you use OpenClaw and have installed skills from ClawHub, check if any skill asked you to install something extra or visit an external link. That's the red flag. If you ran anything suspicious, change your passwords and update OpenClaw to the latest version.

If you're running OpenClaw on a work computer, don't. As Meller said: "Do not do it on a company device. Full stop." If you already have, talk to your IT team.

Going forward, only install skills from people you trust, and read what a skill does before installing it. If it asks you to run commands from external links, skip it.

The Big Picture

This is the flip side of the magic I described in the first article. The same system that can teach itself to transcribe your voice messages can also be tricked into stealing your passwords.

OpenClaw's team is adding a safety scanner and a reporting system. But the real problem is bigger than one tool. Security researchers are finding similar issues across the entire AI ecosystem. AI tools that can act on your behalf are becoming targets.

The question from the first article still stands: how do we stay in control when AI does things for us?

We're still figuring it out.