Blog
News, insights, and ecosystem reports on MCP server security.
AgentEscape: How MCP Servers Let AI Agents Read Your Private Keys
We found a vulnerability in a 49,000-star project that lets an attacker trick your AI agent into reading SSH keys, .env files, and database passwords. The fix is merged — but the pattern exists in hundreds of other MCP servers.
We Found and Fixed Security Vulnerabilities in 5 Popular Open-Source Projects
SpiderShield's automated scanner identified real vulnerabilities in projects with 86K+ combined GitHub stars — including context7 (49K), airi (35K), and mcp-server-kubernetes (1.3K). All 5 fixes were merged by maintainers.
We Scanned 5,928 MCP Servers, Then Manually Audited the Worst Ones
Our scanner flagged 114 MCP servers as Grade F. We manually reviewed the source code of the most popular ones. Some had real vulnerabilities, like a readBase64File() with zero path validation. Others were false positives. We found a 14% false positive rate and corrected 16 ratings.
We Rated 5,928 MCP Servers. Zero Scored an A.
Not a single MCP server in our database of 5,928 achieves Grade A (9.0+/10). The average score is 4.81/10. 22% score D or F. Here's the full grade distribution — calibrated with a 14% FP correction — and what it means for AI agent security.
How to Secure OpenClaw Agents with SpiderShield
Add three-phase runtime security to your OpenClaw agents. One install, zero config. Every tool call checked, every secret caught, every decision logged.
How to Secure Claude Code with SpiderShield (3-Minute Setup)
Add automatic security checks to every MCP tool call in Claude Code. One script, zero dependencies, 3 minutes. Grade F tools get blocked automatically.
98% of MCP Tools Don't Tell AI Agents When to Use Them — Deep Dive
We analyzed 78,849 MCP tool descriptions. Only 2% include a scenario trigger. When AI picks the wrong tool, the real problem isn't AI — it's documentation.
State of MCP Security 2026: We Scanned 15,923 AI Tools. Here's What We Found.
The largest independent security analysis of the MCP/AI tool ecosystem. 15,923 tools scanned. 36% of MCP servers fail. Token leakage is the #1 vulnerability. 42 skills confirmed malicious after LLM verification.
98% of MCP Tools Don't Tell AI Agents When to Use Them
We analyzed 78,849 tools across 15,923 MCP servers and skills. 98% don't specify when to use them. Only 3% document parameters. Only 2% explain failures.
OpenClaw 2026.3.1 Security Evaluation: Grade B
We evaluated OpenClaw v2026.3.1 — scanning 3,566 source files and 202 tool definitions. Security is clean, but tool descriptions are holding it back.
We Scanned 200+ OpenClaw Skills. Here's What We Found.
The first independent security audit of the OpenClaw skill ecosystem. Most skills score C or below: missing sandboxing, unrestricted shell access, and unclear scope are systemic issues.
Introducing SpiderRating: Independent Security Ratings for MCP Servers
Today we launch SpiderRating, an open-source security rating system for the MCP ecosystem. Every server gets a transparent, reproducible score across three dimensions.
How We Score MCP Servers: A Deep Dive into the SpiderScore Model
A detailed look at our 3-layer scoring model: what we measure, why it matters, and how we calibrate scores to be fair and actionable.