State of AI Agent Security: Q1 2026
6,943 AI agent skills have security flaws. We scanned all 40,059 across 14,808 publishers. The largest published agent skill security analysis.
Our Journal
Research, analysis, and practical guidance on AI agent security, backed by data from scanning thousands of agent skills across the ecosystem.
We evaluated Firmis Monitor against 278 CVE-derived attack scenarios and 49 published CVE reproductions. Here is every number, including the gaps.
On March 30, compromised versions of axios hit npm. A 2-layer obfuscation chain deployed platform-specific RATs to iOS, Windows, and Linux.
42 public repositories. 10 scanners. One judge model. We built the first independent benchmark for AI agent security tools.
Every agent platform ships some security. None ship enough. We mapped the built-in defenses and where the gaps are.
Individual findings are noise. Exploit chains are signal. We traced 9 complete attack paths across 8 major agent frameworks.
An honest guide to every AI agent security tool available, from free open-source scanners to enterprise platforms.
OpenClaw's built-in audit is a solid first line of defense. But config-level checks and VirusTotal hashes miss what static analysis catches.
Over 180,000 developers deployed an AI agent that could read their emails and execute code. Then the vulnerabilities appeared.
mcp-scan is a solid MCP-focused scanner. Firmis scans your entire agent stack. When to use each, and why you might want both.
Gitleaks finds secrets in your code. It doesn't understand that your MCP config just exposed those secrets to 5 connected AI tools.
Tool poisoning is the attack where a helpful-looking AI skill secretly steals your data. Here's how it works, why it's spreading, and how to detect it.
An AI Bill of Materials is a machine-readable inventory of every component in your agent stack. Compliance auditors are starting to ask for one.
One command. 30 seconds. Free.
Open source · Apache-2.0 scanner · No sign-up required