McpVanguard 🛡️
Titan-Grade AI Firewall for MCP Agents (v1.7.0)
An open-source security proxy and active firewall for the Model Context Protocol (MCP)
MCP (Model Context Protocol) enables AI agents to interact with host-level tools. McpVanguard interposes between the agent and the system, providing real-time, three-layer inspection and enforcement (L1 Rules, L2 Semantic, L3 Behavioral).
Transparent integration: no configuration changes are required for existing servers.
Tests PyPI version License: Apache 2.0 Python 3.11+
Part of the Provnai Open Research Initiative — Building the Immune System for AI.
⚡ Quickstart
pip install mcp-vanguard
Local stdio wrap (no network):
vanguard start --server "npx @modelcontextprotocol/server-filesystem ."
Cloud Security Gateway (SSE, deploy on Railway):
export VANGUARD_API_KEY="your-secret-key"
vanguard sse --server "npx @modelcontextprotocol/server-filesystem ."
Deploy on Railway
📖 Full Railway Deployment Guide
🛡️ Getting Started (New Users)
Bootstrap your security workspace with a single command:
# 1. Initialize safe zones and .env template
vanguard init
# 2. (Optional) Protect your Claude Desktop servers
vanguard configure-claude
# 3. Launch the visual security dashboard
vanguard ui --port 4040
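The safe zones created by `vanguard init` are, conceptually, directory allow-lists: a requested path is resolved first, then tested for containment. A minimal sketch of how such a check can work (the helper name is illustrative, not McpVanguard's actual implementation):

```python
import os

def is_within_zone(requested_path: str, zone: str) -> bool:
    """Return True only if requested_path resolves inside the safe zone.

    os.path.realpath() resolves symlinks and '..' segments first, so
    lexical tricks like '/safe/../etc/passwd' cannot escape the zone.
    """
    resolved = os.path.realpath(requested_path)
    zone_root = os.path.realpath(zone)
    return os.path.commonpath([resolved, zone_root]) == zone_root

is_within_zone("/safe/notes.txt", "/safe")      # inside the zone
is_within_zone("/safe/../etc/passwd", "/safe")  # traversal attempt, rejected
```

Resolving before comparing is the key ordering; comparing raw strings would miss both `..` segments and symlink escapes.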
🧠 How it works
Every time an AI agent calls a tool (e.g. read_file, run_command), McpVanguard inspects the request across three layers before it reaches the underlying server:
| Layer | What it checks | Latency |
| --- | --- | --- |
| L1 — Safe Zones & Rules | Kernel-level isolation (openat2 / Windows canonicalization) and 50+ deterministic signatures | ~16ms |
| L2 — Semantic | LLM-based intent scoring via OpenAI, MiniMax, Ollama, or any OpenAI-compatible endpoint (DeepSeek, Groq) | Async |
| L3 — Behavioral | Shannon entropy ($H(X)$) scoring and sliding-window anomaly detection | Stateful |
Performance Note: The 16ms overhead is measured at peak concurrent load. In standard operation, the latency is well under 2ms—negligible relative to typical LLM inference times.
If a request is blocked, the agent receives a standard JSON-RPC error response. The underlying server never sees it.
Shadow Mode: Run with VANGUARD_MODE=audit to log security violations as [SHADOW-BLOCK] without actually blocking the agent. Perfect for assessing risk in existing production workflows.
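The behavior described above can be pictured as a short-circuiting pipeline: each layer either passes the request or raises a violation, and audit mode downgrades a block to a log line. A simplified sketch, with illustrative function names and a toy L1 rule (not McpVanguard's actual API):

```python
from typing import Callable, Optional

# Each layer returns a violation string, or None to pass the request through.
Layer = Callable[[dict], Optional[str]]

def l1_rules(request: dict) -> Optional[str]:
    # Toy deterministic signature: block obvious path traversal.
    if ".." in str(request.get("params", {}).get("path", "")):
        return "L1: path traversal"
    return None

def inspect(request: dict, layers: list[Layer], mode: str = "enforce") -> dict:
    for layer in layers:
        violation = layer(request)
        if violation:
            if mode == "audit":  # shadow mode: log, don't block
                print(f"[SHADOW-BLOCK] {violation}")
                continue
            # Enforce mode: a JSON-RPC error goes back to the agent;
            # the underlying MCP server never sees the request.
            return {"jsonrpc": "2.0", "id": request.get("id"),
                    "error": {"code": -32000, "message": f"Blocked: {violation}"}}
    return {"forward": True}  # ALLOW: hand off to the real server

req = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
       "params": {"name": "read_file", "path": "../../etc/passwd"}}
blocked = inspect(req, [l1_rules])  # JSON-RPC error, never forwarded
```

The same request under `mode="audit"` is forwarded normally, with the violation recorded only as a `[SHADOW-BLOCK]` log entry.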
🛡️ What gets blocked
Sandbox Escapes: TOCTOU symlink attacks, Windows 8.3 shortnames (PROGRA~1), DOS device namespaces
Data Exfiltration: High-entropy payloads (H > 7.5, e.g. cryptographic keys) and velocity-based secret scraping
Filesystem attacks: Path traversal (../../etc/passwd), null bytes, restricted paths (~/.ssh), Unicode homograph evasion
Command injection: Pipe-to-shell, reverse shells, command chaining via ; && \n, expansion bypasses
SSRF & Metadata Protection: Blocks access to cloud metadata endpoints (AWS/GCP/Azure) and hex/octal encoded IPs.
Jailbreak Detection: Actively identifies prompt injection patterns and instruction-ignore sequences.
Continuous Monitoring: Visualize all of the above in real-time with the built-in Security Dashboard.
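The exfiltration heuristic above is ordinary byte-level Shannon entropy: random key material approaches 8 bits/byte, while natural language sits far lower. A self-contained sketch (the 7.5 threshold comes from the list above; the function name is illustrative):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Byte-level Shannon entropy H(X) = -sum p(x) * log2 p(x), in bits/byte."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

THRESHOLD = 7.5  # threshold from the exfiltration rule above

prose = b"The quick brown fox jumps over the lazy dog. " * 200
key_material = os.urandom(4096)  # stand-in for leaked cryptographic key bytes

shannon_entropy(prose) > THRESHOLD         # low entropy: allowed
shannon_entropy(key_material) > THRESHOLD  # high entropy: flagged
```

Because English text rarely exceeds ~5 bits/byte, a 7.5 cutoff separates secrets from prose with few false positives on ordinary file contents.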
📊 Security Dashboard
Launch the visual monitor to see your agent's activity and security status in real-time.
vanguard ui --port 4040
The dashboard provides a low-latency, HTMX-powered feed of:
Real-time Blocks: Instantly see which rule or layer triggered a rejection.
Entropy Scores: Pulse-check the $H(X)$ levels of your agent's data streams.
Audit History: Contextual log fragments for rapid incident response.
VEX Protocol — Deterministic Audit Log
When McpVanguard blocks an attack, it creates an OPA/Cerbos-compatible Secure Tool Manifest detailing the Principal, Action, Resource, and environmental snapshot.
This manifest is then sent as a cryptographically-signed report to the VEX Protocol. VEX anchors that report to the Bitcoin blockchain via the CHORA Gate.
This means an auditor can independently verify exactly what was blocked, the entropy score, and why — without relying on your local logs.
export VANGUARD_VEX_URL="https://api.vexprotocol.com"
export VANGUARD_VEX_KEY="your-agent-jwt"
export VANGUARD_AUDIT_FORMAT="json" # Optional: Route JSON logs directly into SIEM (ELK, Splunk)
vanguard sse --server "..." --behavioral
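The manifest described above can be pictured as a signed JSON document carrying Principal, Action, Resource, and the decision context. A hypothetical sketch of building and signing such a report with HMAC-SHA256 (field names follow the description above; the actual VEX wire format and signing scheme are not specified here):

```python
import hashlib
import hmac
import json
import time

def build_manifest(principal: str, action: str, resource: str,
                   entropy: float, rule: str) -> dict:
    # Principal / Action / Resource plus a decision snapshot, mirroring
    # the OPA/Cerbos-style shape described above (illustrative fields).
    return {
        "principal": principal,
        "action": action,
        "resource": resource,
        "decision": "BLOCK",
        "rule": rule,
        "entropy_score": entropy,
        "timestamp": int(time.time()),
    }

def sign_manifest(manifest: dict, key: bytes) -> str:
    """Deterministic serialization + HMAC-SHA256 (illustrative scheme)."""
    payload = json.dumps(manifest, sort_keys=True,
                         separators=(",", ":")).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

m = build_manifest("agent:claude", "tools/call:read_file",
                   "file:///home/user/.ssh/id_rsa", 7.9, "restricted-path")
signature = sign_manifest(m, key=b"agent-signing-key")
```

Sorting keys before serializing is what makes the signature deterministic: any two parties serializing the same manifest produce the same bytes, so an auditor can re-verify the report independently.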
Architecture
┌─────────────────────────────────────────────────┐
AI Agent │ McpVanguard Proxy │
(Claude, GPT) │ │
│ │ ┌───────────────────────────────────────────┐ │
│ JSON-RPC │ │ L1 — Rules Engine │ │
│──────────────▶│ │ 50+ YAML signatures (path, cmd, net...) │ │
│ (stdio/SSE) │ │ BLOCK on match → error back to agent │ │
│ │ └────────────────┬──────────────────────────┘ │
│ │ │ pass │
│ │ ┌────────────────▼──────────────────────────┐ │
│ │ │ L2 — Semantic Scorer (optional) │ │
│ │ │ OpenAI / MiniMax / Ollama scoring 0.0→1.0│ │
│ │ │ Async — never blocks the proxy loop │ │
│ │ └────────────────┬──────────────────────────┘ │
│ │ │ pass │
│ │ ┌────────────────▼──────────────────────────┐ │
│ │ │ L3 — Behavioral Analysis (optional) │ │
│ │ │ Sliding window: scraping, enumeration │ │
│ │ │ In-memory or Redis (multi-instance) │ │
│ │ └────────────────┬──────────────────────────┘ │
│ │ │ │
│◀── BLOCK ─────│───────────────────┤ (any layer) │
│ (JSON-RPC │ │ ALLOW │
│ error) │ ▼ │
│ │ MCP Server Process │
│ │ (filesystem, shell, APIs...) │
│ └──────────────────┬──────────────────────────────┘
│ │
│◀─────────────── response ────────┘
│
│ (on BLOCK)
└──────────────▶ VEX API ──▶ CHORA Gate ──▶ Bitcoin Anchor
(async, fire-and-forget audit receipt)
L2 Semantic Backend Options
The Layer 2 semantic scorer supports a Universal Provider Architecture. Set the corresponding API keys to activate a backend — the first available key wins (priority: Custom > OpenAI > MiniMax > Ollama):
Backend Env Vars Notes
Universal Custom (DeepSeek, Groq, Mistral, vLLM) VANGUARD_SEMANTIC_CUSTOM_KEY, VANGUARD_SEMANTIC_CUSTOM_MODEL, VANGUARD_SEMANTIC_CUSTOM_URL Fast, cheap inference. Examples:
Groq: https://api.groq.com/openai/v1
DeepSeek: https://api.deepseek.com/v1
OpenAI VANGUARD_OPENAI_API_KEY, VANGUARD_OPENAI_MODEL Default model: gpt-4o-mini
MiniMax VANGUARD_MINIMAX_API_KEY, VANGUARD_MINIMAX_MODEL, VANGUARD_MINIMAX_BASE_URL Default model: MiniMax-M2.5
Ollama (local) VANGUARD_OLLAMA_URL, VANGUARD_OLLAMA_MODEL Default model: phi4-mini. No API key required
# Example: Use Groq for ultra-fast L2 scoring
export VANGUARD_SEMANTIC_ENABLED=true
export VANGUARD_SEMANTIC_CUSTOM_KEY="your-groq-key"
export VANGUARD_SEMANTIC_CUSTOM_MODEL="llama3-8b-8192"
export VANGUARD_SEMANTIC_CUSTOM_URL="https://api.groq.com/openai/v1"
vanguard start --server "npx @modelcontextprotocol/server-filesystem ."
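The "first available key wins" rule above can be sketched as a simple cascade over the documented environment variables (the selection function is illustrative; McpVanguard's internals may differ):

```python
def pick_semantic_backend(env: dict[str, str]) -> str:
    """Resolve the L2 backend: Custom > OpenAI > MiniMax > Ollama."""
    if env.get("VANGUARD_SEMANTIC_CUSTOM_KEY"):
        return "custom"
    if env.get("VANGUARD_OPENAI_API_KEY"):
        return "openai"
    if env.get("VANGUARD_MINIMAX_API_KEY"):
        return "minimax"
    # Ollama is local and needs no API key, so it is the final fallback.
    return "ollama"

pick_semantic_backend({"VANGUARD_OPENAI_API_KEY": "sk-..."})  # "openai"
pick_semantic_backend({})                                     # "ollama"
```

In practice you would pass `os.environ`; keeping the environment a parameter just makes the priority order easy to exercise in isolation.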