- 🧠 AgentNull: AI System Security Threat Catalog + Proof-of-Concepts
🧠 AgentNull: AI System Security Threat Catalog + Proof-of-Concepts
🧠 AgentNull: AI System Security Threat Catalog + Proof-of-Concepts
This repository contains a red team-oriented catalog of attack vectors targeting AI systems including autonomous agents (MCP, LangGraph, AutoGPT), RAG pipelines, vector databases, and embedding-based retrieval systems, along with individual proof-of-concepts (PoCs) for each.
📘 Structure
catalog/AgentNull_Catalog.md— Human-readable threat catalogcatalog/AgentNull_Catalog.json— Structured version for SOC/SIEM ingestionpocs/— One directory per attack vector, each with its own README, code, and sample input/output
⚠️ Disclaimer
This repository is for educational and internal security research purposes only. Do not deploy any techniques or code herein in production or against systems you do not own or have explicit authorization to test.
🔧 Usage
Navigate into each pocs/<attack_name>/ folder and follow the README to replicate the attack scenario.
🤖 Testing with Local LLMs (Recommended)
For enhanced PoC demonstrations without API costs, use Ollama with local models:
Install Ollama
# Linux/macOS
curl -fsSL https://ollama.ai/install.sh | sh
# Or download from https://ollama.ai/download
Setup Local Model
# Pull a lightweight model (recommended for testing)
ollama pull gemma3
# Or use a more capable model
ollama pull deepseek-r1
ollama pull qwen3
Run PoCs with Local LLM
# Advanced Tool Poisoning with real LLM
cd pocs/AdvancedToolPoisoning
python3 advanced_tool_poisoning_agent.py local
# Other PoCs work with simulation mode
cd pocs/ContextPackingAttacks
python3 context_packing_agent.py
Ollama Configuration
- Default endpoint:
http://localhost:11434 - Model selection: Edit the model name in PoC files if needed
- Performance: Llama2 (~4GB RAM), Mistral (~4GB RAM), CodeLlama (~4GB RAM)
🧩 Attack Vectors Covered
🤖 MCP & Agent Systems
- ⭐ Full-Schema Poisoning (FSP) - Exploit any field in tool schema beyond descriptions
- ⭐ Advanced Tool Poisoning Attack (ATPA) - Manipulate tool outputs to trigger secondary actions
- ⭐ MCP Rug Pull Attack - Swap benign descriptions for malicious ones after approval
- ⭐ Schema Validation Bypass - Exploit client validation implementation differences
- Tool Confusion Attack - Trick agents into using wrong tools via naming similarity
- Nested Function Call Hijack - Use JSON-like data to trigger dangerous function calls
- Subprompt Extraction - Induce agents to reveal system instructions or tools
- Backdoor Planning - Inject future intent into multi-step planning for exfiltration
🧠 Memory & Context Systems
- Recursive Leakage - Secrets leak through context summarization
- Token Gaslighting - Push safety instructions out of context via token spam
- Heuristic Drift Injection - Poison agent logic with repeated insecure patterns
- ⭐ Context Packing Attacks - Overflow context windows to truncate safety instructions
🔍 RAG & Vector Systems
- ⭐ Cross-Embedding Poisoning - Manipulate embeddings to increase malicious content retrieval
- ⭐ Index Skew Attacks - Bias vector indices to favor malicious content (theoretical)
- ⭐ Zero-Shot Vector Beaconing - Embed latent activation patterns for covert signaling (theoretical)
- ⭐ Embedding Feedback Loops - Poison continual learning systems (theoretical)
💻 Code & File Systems
- Hidden File Exploitation - Get agents to modify
.env,.git, or internal config files
⚡ Resource & Performance
- Function Flooding - Generate recursive tool calls to overwhelm budgets/APIs
- Semantic DoS - Trigger infinite generation or open-ended tasks
📚 Related Research & Attribution
Novel Attack Vectors (⭐)
The attack vectors marked with ⭐ represent novel concepts primarily developed within the AgentNull project, extending beyond existing documented attack patterns.
Known Attack Patterns with Research Links
- Recursive Leakage: Lost in the Middle: How Language Models Use Long Contexts
- Heuristic Drift Injection: Poisoning Web-Scale Training Data is Practical
- Tool Confusion Attack: LLM-as-a-judge
- Token Gaslighting: RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture
- Function Flooding: Denial-of-Service Attack on Test-Time-Tuning Models
- Subprompt Extraction: Prompt-Hacking: An Attack on NLP-based Applications
- Hidden File Exploitation: OWASP Top 10 for Large Language Model Applications
- Backdoor Planning: Backdoor Attacks on Language Models
- Nested Function Call Hijack: OWASP Top 10 for Large Language Model Applications
- Semantic DoS: The Rise of AI-Powered Denial-of-Service Attacks and How to Mitigate Them