Blog

News, insights, and ecosystem reports on MCP server security.

6 min read

AgentEscape: How MCP Servers Let AI Agents Read Your Private Keys

We found a vulnerability in a 49,000-star project that lets an attacker trick your AI agent into reading SSH keys, .env files, and database passwords. The fix is merged — but the pattern exists in hundreds of other MCP servers.

MCP · Security · AgentEscape · CWE-22 · Path Traversal · AI Agents

5 min read

We Found and Fixed Security Vulnerabilities in 5 Popular Open-Source Projects

SpiderShield's automated scanner identified real vulnerabilities in projects with 86K+ combined GitHub stars — including context7 (49K), airi (35K), and mcp-server-kubernetes (1.3K). All 5 fixes were merged by maintainers.

MCP · Security · Open Source · CWE-208 · CWE-22 · CWE-78 · Vulnerability

8 min read

We Scanned 5,928 MCP Servers, Then Manually Audited the Worst Ones

Our scanner flagged 114 MCP servers as Grade F. We manually reviewed the source code of the most popular ones. Some had real vulnerabilities — a readBase64File() with zero path validation. Others were false positives: a 14% false positive rate overall, with 16 ratings corrected.

MCP · Security · Audit · AI Agents · Open Source · Vulnerability · Scanner
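The zero-path-validation pattern called out above is classic CWE-22: a file-reading tool resolves whatever path the agent supplies. A minimal guard (sketched here in TypeScript; the function name and root directory are hypothetical, not taken from any audited server) resolves the requested path against an allowed root and rejects anything that escapes it:

```typescript
import * as path from "path";

// Resolve `requested` relative to `root` and refuse paths that escape the root.
// Guards against traversal inputs like "../../etc/passwd" (CWE-22).
function resolveWithinRoot(root: string, requested: string): string {
  const rootResolved = path.resolve(root);
  const resolved = path.resolve(rootResolved, requested);
  // Allow the root itself, or any path strictly inside it.
  if (resolved !== rootResolved && !resolved.startsWith(rootResolved + path.sep)) {
    throw new Error(`Path escapes allowed root: ${requested}`);
  }
  return resolved;
}

// resolveWithinRoot("/srv/files", "notes/a.txt") → allowed
// resolveWithinRoot("/srv/files", "../../etc/passwd") → throws
```

Note the comparison uses the fully resolved absolute path plus a trailing separator, so a sibling directory like `/srv/files-public` cannot slip past a naive prefix check.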
5 min read

We Rated 5,928 MCP Servers. Zero Scored an A.

Not a single MCP server in our database of 5,928 achieves Grade A (9.0+/10). The average score is 4.81/10. 22% score D or F. Here's the full grade distribution — calibrated with a 14% FP correction — and what it means for AI agent security.

MCP · Security · Data · AI Agents · Ecosystem Report · Grade Distribution

4 min read

How to Secure OpenClaw Agents with SpiderShield

Add three-phase runtime security to your OpenClaw agents. One install, zero config. Every tool call checked, every secret caught, every decision logged.

OpenClaw · Security · MCP · Runtime · Plugin · Tutorial

4 min read

How to Secure Claude Code with SpiderShield (3-Minute Setup)

Add automatic security checks to every MCP tool call in Claude Code. One script, zero dependencies, 3 minutes. Grade F tools get blocked automatically.

Claude Code · Security · MCP · Runtime · Tutorial

6 min read

98% of MCP Tools Don't Tell AI Agents When to Use Them — Deep Dive

We analyzed 78,849 MCP tool descriptions. Only 2% include a scenario trigger. When AI picks the wrong tool, the real problem isn't AI — it's documentation.

MCP · AI Agents · Documentation · Tool Design · Developer Experience

8 min read

State of MCP Security 2026: We Scanned 15,923 AI Tools. Here's What We Found.

The largest independent security analysis of the MCP/AI tool ecosystem. 15,923 tools scanned. 36% of MCP servers fail. Token leakage is the #1 vulnerability. 42 skills confirmed malicious after LLM verification.

MCP · Security · Research · AI Tools · OpenClaw · Skills · Vulnerability

5 min read

98% of MCP Tools Don't Tell AI Agents When to Use Them

We analyzed 78,849 tools across 15,923 MCP servers and skills. 98% don't specify when to use them. Only 3% document parameters. Only 2% explain failures.

MCP · Documentation · Research · Statistics

6 min read

OpenClaw 2026.3.1 Security Evaluation: Grade B

We evaluated OpenClaw v2026.3.1 — scanning 3,566 source files and 202 tool definitions. Security is clean, but tool descriptions are holding it back.

OpenClaw · Evaluation · Security · SpiderScore

7 min read

We Scanned 200+ OpenClaw Skills. Here's What We Found.

The first independent security audit of the OpenClaw skill ecosystem. Most skills score C or below: missing sandboxing, shell access, and unclear scope are systemic issues.

OpenClaw · Skills · Security · Audit

4 min read

Introducing SpiderRating: Independent Security Ratings for MCP Servers

Today we launch SpiderRating, an open-source security rating system for the MCP ecosystem. Every server gets a transparent, reproducible score across three dimensions.

Announcement · MCP · Security

6 min read

How We Score MCP Servers: A Deep Dive into the SpiderScore Model

A detailed look at our 3-layer scoring model: what we measure, why it matters, and how we calibrate scores to be fair and actionable.

Methodology · Security · Scoring