🧠 Memory MCP Server - Orchestrator
🚀 Your AI Agent's Persistent Brain - A Comprehensive Memory & Task Management System
Features • Installation • Configuration • Workflow • Tools • Architecture • Development
🚨 CRITICAL: This MCP Server requires `workflow.md` to function properly!

The `workflow.md` file is not optional - it's the AI Driver that transforms this collection of tools into an intelligent system. Without it, your AI agent will have tools but no structured way to use them effectively.

Before using this server:
- ✅ Install and configure the MCP server
- ✅ Load `workflow.md` into your AI agent's system prompt
- ✅ Ensure your agent follows the 6-mode operational structure
📋 Table of Contents
- 🌟 Overview
- ✨ Features
- 🚀 Installation
- ⚙️ Configuration
- 🎮 The AI Driver: Understanding workflow.md
- 🛠️ Available Tools
- 🏗️ Architecture
- 💻 Development
- 📚 Documentation
- 🤝 Contributing
- 📄 License
🌟 Overview
The Memory MCP Server (Orchestrator) is a powerful Model Context Protocol (MCP) server that provides AI agents with persistent memory, advanced task planning, and comprehensive knowledge management capabilities. Built with TypeScript and SQLite, it transforms your AI agents from stateless assistants into intelligent systems with long-term memory and structured workflows.
🚨 Critical Component: The AI Driver (workflow.md)
The workflow.md file is the brain of this system! It contains the operational protocols and behavioral rules that transform a collection of tools into an intelligent, coordinated system. Think of it as the "AI Driver" that:
- 🎯 Defines 6 Operational Modes: From prompt refinement to task execution
- 🛡️ Enforces Safety Protocols: Prevents unauthorized actions and overeager behavior
- 📋 Structures Workflows: Ensures systematic approach to every task
- 🔄 Manages State Transitions: Controls how the AI moves between different modes
- ✅ Validates Actions: Requires user approval before executing changes
Without workflow.md, this is just a toolbox. With it, it becomes an intelligent agent system.
🎯 Key Benefits
- 🧠 Persistent Memory: Never lose context between sessions
- 📊 Structured Planning: Break complex tasks into manageable steps
- 🔍 Knowledge Graph: Build and query relationships between entities
- 🤖 AI-Enhanced: Leverage Gemini AI for intelligent task suggestions
- 📈 Performance Tracking: Monitor success metrics and learn from corrections
- 🔗 External Integrations: Connect with web search and AI services
✨ Features
💾 Memory Management
- Conversation History: Track multi-turn dialogues with full context
- Dynamic Context Storage: Version-controlled storage for agent state, preferences, and parameters
- Knowledge Graph: Create, query, and manage entity relationships
- Vector Embeddings: Semantic search capabilities for code and documentation
📝 Task & Planning System
- AI-Powered Planning: Generate comprehensive plans from refined prompts
- Hierarchical Tasks: Support for tasks, subtasks, and dependencies
- Progress Tracking: Real-time monitoring of task execution
- Review System: Built-in task and plan review mechanisms
🤖 AI Integration
- Google Gemini Integration:
- Prompt refinement and structuring
- Context summarization
- Entity extraction
- Code analysis
- Task suggestions
- Tavily Web Search: Advanced web search capabilities
- Semantic Search: Vector-based content retrieval
🛡️ Reliability & Compliance
- Data Validation: JSON schema validation for all inputs
- Comprehensive Logging: Track all operations and errors
- Backup & Restore: Full database backup capabilities
- MCP Compliant: Seamless integration with MCP-compatible clients
🚀 Installation
Prerequisites
| Requirement | Version | Required |
|---|---|---|
| Node.js | 18.x or higher | ✅ |
| npm | Latest | ✅ |
| Git | Any | ✅ |
Step-by-Step Installation
```bash
# 1. Clone the repository
git clone https://github.com/yourusername/memory-mcp-server.git
cd memory-mcp-server

# 2. Install dependencies
npm install

# 3. Build the project
npm run build

# 4. Verify installation
npm run test
```
🎯 Quick Start for AI Agents
CRITICAL STEP: Load the AI Driver. Your AI agent MUST load the `workflow.md` file as part of its system prompt - this file contains the operational protocols that make the system work.

To load it in your AI agent (a minimal sketch follows below):
1. Read the `workflow.md` file
2. Include it in your system prompt or rules
3. Follow the 6-mode operational structure
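As a minimal sketch of that wiring (the surrounding agent runtime and how it accepts a system prompt are assumptions; only `workflow.md` itself comes from this repository):

```typescript
import { readFileSync } from "node:fs";

// 1. Read the AI Driver from the repository root.
const workflowRules = readFileSync("workflow.md", "utf8");

// 2. Prepend it to whatever system prompt your agent framework expects.
const systemPrompt = [
  "You are an agent connected to the Memory MCP Server.",
  workflowRules, // enforces the 6-mode operational structure
].join("\n\n");

// 3. Hand `systemPrompt` to your agent runtime (framework-specific, not shown here).
```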
🐳 Docker Installation (Alternative)
Coming soon - Docker support is planned.
⚙️ Configuration
🔑 API Keys Setup
The server requires API keys for external services. These should be configured in your MCP client settings.
| Service | Environment Variable | Required | Get API Key |
|---|---|---|---|
| Google Gemini | GEMINI_API_KEY | ✅ | Get Key |
| Tavily Search | TAVILY_API_KEY | ✅ | Get Key |
📝 MCP Client Configuration
For VS Code Cline Extension
1. Locate the settings file:
   - Windows: `%APPDATA%\Code\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json`
   - macOS: `~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`
   - Linux: `~/.config/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`

2. Add the server configuration:
```json
{
  "memory-mcp-server": {
    "disabled": false,
    "autoApprove": [],
    "timeout": 120,
    "transportType": "stdio",
    "command": "node",
    "args": [
      "/absolute/path/to/memory-mcp-server/build/index.js"
    ],
    "env": {
      "GEMINI_API_KEY": "your-gemini-api-key-here",
      "TAVILY_API_KEY": "your-tavily-api-key-here"
    }
  }
}
```
⚠️ Important: Replace `/absolute/path/to/memory-mcp-server/` with the actual path where you cloned the repository.
For Other MCP Clients
Adapt the configuration format according to your client's requirements. The key parameters are:
- Command: `node`
- Arguments: `["path/to/build/index.js"]`
- Transport: `stdio`
- Environment: API keys
🎮 The AI Driver: Understanding workflow.md
The workflow.md file is THE MOST IMPORTANT COMPONENT of this system. It's not just documentation - it's the operational manual that AI agents must follow to use this server effectively.
📋 The 6 Operational Modes
| Mode | Purpose | Key Responsibility |
|---|---|---|
| MODE 0: PROMPT_REFINE | 🎯 Entry point for ALL tasks | Disambiguates requests, checks past errors, creates structured goals |
| MODE 1: THINK | 🧠 Analysis & Strategy | Builds mental model, gathers information, forms strategy |
| MODE 2: CODE_ANALYSIS | 🔍 Deep Code Examination | Analyzes code structure, dependencies, quality |
| MODE 3: INNOVATE | 💡 Creative Problem Solving | Generates novel solutions, breaks through impasses |
| MODE 4: PLAN | 📋 Detailed Planning | Creates step-by-step execution plans with AI assistance |
| MODE 5: EXECUTE | ⚡ Controlled Action | Implements approved plans with comprehensive logging |
| MODE 6: REVIEW | ✅ Validation & Learning | Validates outcomes, synthesizes lessons learned |
🔄 Workflow State Machine
```mermaid
graph TD
    A[User Request] -->|MANDATORY| B[MODE 0: PROMPT_REFINE]
    B -->|Auto transition| C[MODE 1: THINK]
    C -->|Need analysis| D[MODE 2: CODE_ANALYSIS]
    C -->|Need creativity| E[MODE 3: INNOVATE]
    C -->|Ready to plan| F[MODE 4: PLAN]
    D --> C
    E --> C
    F -->|User approval required| G[MODE 5: EXECUTE]
    G -->|Completion| H[MODE 6: REVIEW]
    G -->|Error/Halt| I[HALTED STATE]
    I -->|User instruction| F
    H --> J[Await Next Request]
```
🛡️ Critical Safety Rules
- No Unauthorized Actions: The agent CANNOT modify files or execute commands without an approved plan
- Mandatory Mode Declaration: Every response MUST start with `[MODE: MODE_NAME]`
- Tool-Centric Operations: All significant actions MUST use official tools
- No Post-Task Solicitation: Agent must NOT ask "what's next?" after completing tasks
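For example, a compliant reply produced while planning might open like this (illustrative wording; only the `[MODE: ...]` prefix is mandated by workflow.md):

```text
[MODE: PLAN]
Here is the proposed step-by-step plan. Please approve it before I move to EXECUTE.
```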
🚀 How to Use workflow.md
For AI Agents:
- Load `workflow.md` into your system prompt or rules
- Follow the mode progression strictly
- Use only the tools allowed in each mode
- Respect user authorization requirements
For Developers:
- Review `workflow.md` to understand the intended agent behavior
- Ensure your prompts align with the workflow structure
- Monitor agent compliance with the protocols
⚠️ Warning: AI agents may not always follow these rules perfectly. The workflow.md provides guidelines, not guarantees. Monitor agent behavior and provide corrections as needed.
🛠️ Available Tools
The server provides 65+ tools organized into categories:
📚 Memory & Context Tools
Conversation Management (4 tools)
- `store_conversation_message` - Store messages in conversation history
- `get_conversation_history` - Retrieve past conversations
- `search_conversation_by_keywords` - Search conversations by keywords
- `summarize_conversation` - AI-powered conversation summarization
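A minimal sketch of how these calls compose, following the call style of the Quick Start examples further down; the parameter names other than `agent_id` are assumptions, so check the API documentation for the authoritative schemas:

```typescript
// Record a user turn, then pull recent history back for context.
await store_conversation_message({
  agent_id: "my-agent",
  role: "user",
  message: "Please add pagination to the user list endpoint."
});

const history = await get_conversation_history({
  agent_id: "my-agent",
  limit: 20 // assumed optional paging parameter
});

// Keyword search across stored turns.
const hits = await search_conversation_by_keywords({
  agent_id: "my-agent",
  keywords: "pagination user list"
});
```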
Context Management (9 tools)
- `store_context` - Store dynamic contextual data
- `get_context` - Retrieve stored context
- `get_all_contexts` - Get all contexts for an agent
- `search_context_by_keywords` - Keyword search in contexts
- `prune_old_context` - Clean up old context entries
- `summarize_context` - AI summarization of context
- `extract_entities` - Extract entities from context
- `semantic_search_context` - Vector-based semantic search
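A hedged sketch combining storage, retrieval, and semantic search; parameter names beyond `agent_id` are assumptions modeled on the tool names, not confirmed schemas:

```typescript
// Persist agent state under a named context type, then read it back later.
await store_context({
  agent_id: "my-agent",
  context_type: "user_preferences", // illustrative type name
  context_data: { language: "TypeScript", test_framework: "jest" }
});

const prefs = await get_context({
  agent_id: "my-agent",
  context_type: "user_preferences"
});

// Vector-based lookup over everything stored for this agent.
const related = await semantic_search_context({
  agent_id: "my-agent",
  query_text: "preferred testing setup" // parameter name is an assumption
});
```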
📋 Planning & Task Tools
Plan Management (15 tools)
- `create_task_plan` - Create plans (manual or AI-generated)
- `get_task_plan_details` - Get detailed plan information
- `list_task_plans` - List all plans
- `update_task_plan_status` - Update plan status
- `delete_task_plan` - Remove plans
- `ai_analyze_plan` - AI analysis of plan quality
- `ai_suggest_subtasks` - AI-generated subtask suggestions
- `ai_suggest_task_details` - AI-enhanced task details
- `ai_summarize_task_progress` - AI progress summaries
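Creating a plan is covered in the Quick Start examples below; the sketch here covers the inspection and status side, with illustrative IDs and an assumed status string (the real values live in the API documentation):

```typescript
// Inspect an existing plan (IDs are illustrative).
const details = await get_task_plan_details({
  agent_id: "my-agent",
  plan_id: "plan-123"
});

// Move the plan forward once work starts; the status value is an assumption.
await update_task_plan_status({
  agent_id: "my-agent",
  plan_id: "plan-123",
  new_status: "IN_PROGRESS"
});

// Ask the AI layer to critique plan quality.
const review = await ai_analyze_plan({
  agent_id: "my-agent",
  plan_id: "plan-123"
});
```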
🔍 Knowledge & Attribution Tools
Knowledge Graph (9 operations)
- `knowledge_graph_memory` - Comprehensive KG operations:
  - Create/read/update/delete entities
  - Manage relationships
  - Add observations
  - Natural language queries
  - Infer relationships
  - Generate visualizations
📊 Logging & Performance Tools
Comprehensive Logging (23 tools)
- Tool execution logging
- Task progress tracking
- Error logging and management
- Correction tracking
- Success metrics
- Review logs (task and plan level)
🔧 Utility & Integration Tools
Git Operations (16 tools)
- Complete Git workflow support
- Clone, pull, push, commit
- Branch management
- Stash operations
- Remote management
External Services (5 tools)
- `tavily_web_search` - Advanced web search
- `ask_gemini` - Direct Gemini AI queries
- `analyze_code_file_with_gemini` - AI code analysis
- `refine_user_prompt` - AI prompt enhancement
- `ingest_codebase_embeddings` - Vector embedding generation
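A short sketch of the external integrations; parameter names besides `agent_id` are assumptions modeled on the Quick Start examples:

```typescript
// Web research through Tavily.
const searchResults = await tavily_web_search({
  agent_id: "my-agent",
  query: "SQLite WAL mode trade-offs"
});

// Direct question to Gemini.
const answer = await ask_gemini({
  agent_id: "my-agent",
  query: "Summarize the trade-offs of enabling WAL mode in SQLite."
});

// AI-assisted review of a single file (the path points at this repo's entry point).
const analysis = await analyze_code_file_with_gemini({
  agent_id: "my-agent",
  file_path: "src/index.ts" // parameter name is an assumption
});
```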
🏗️ Architecture
📁 Project Structure
```text
memory-mcp-server/
├── 📂 src/
│   ├── 📂 database/              # Database schemas and managers
│   │   ├── 📂 managers/          # Entity-specific managers
│   │   ├── 📂 services/          # Business logic services
│   │   ├── schema.sql            # Main database schema
│   │   └── vector_store_schema.sql
│   ├── 📂 tools/                 # MCP tool implementations
│   │   ├── conversation_tools.ts
│   │   ├── plan_management_tools.ts
│   │   ├── ai_task_enhancement_tools.ts
│   │   └── ... (60+ tool files)
│   ├── 📂 utils/                 # Utility functions
│   ├── 📂 types/                 # TypeScript definitions
│   └── index.ts                  # Main entry point
├── 📂 docs/                      # Documentation
├── 📄 workflow.md                # Agent workflow rules
├── 📄 package.json               # Dependencies
└── 📄 README.md                  # This file
```
🗄️ Database Schema
The server uses two SQLite databases:
1. Main Database (`memory.db`):
   - Conversation history
   - Context information
   - Task plans and progress
   - Knowledge graph
   - Logs and metrics

2. Vector Store (`vector_store.db`):
   - Code embeddings
   - Semantic search indices
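Both files are ordinary SQLite databases, so they can be inspected directly. A minimal sketch using the better-sqlite3 package (whether the project itself uses that driver is an assumption; any SQLite client will do):

```typescript
import Database from "better-sqlite3";

// Open the main database read-only and list its tables.
const db = new Database("memory.db", { readonly: true });
const tables = db
  .prepare("SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")
  .all() as { name: string }[];

console.log(tables.map((t) => t.name)); // conversation, plan, log tables, etc.
db.close();
```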
🔄 Data Flow
```mermaid
graph LR
    A[AI Agent] -->|MCP Protocol| B[Memory MCP Server]
    B --> C[SQLite Databases]
    B --> D[External Services]
    D --> E[Gemini AI]
    D --> F[Tavily Search]
    B --> G[Knowledge Graph]
    B --> H[Task Planner]
```
💻 Development
🛠️ Development Setup
```bash
# Install dependencies
npm install

# Run in development mode with auto-rebuild
npm run watch

# Run tests
npm test

# Start the MCP Inspector for debugging
npm run inspector
```
🧪 Testing
The project uses Jest for testing:
```bash
# Run all tests
npm test

# Run tests in watch mode
npm test -- --watch

# Run tests with coverage
npm test -- --coverage
```
🐛 Debugging
Since MCP servers communicate over stdio, use the MCP Inspector:
```bash
npm run inspector
# Opens a browser-based debugging interface
```
📝 Code Style
- Language: TypeScript 5.3+
- Style: ESLint configuration included
- Format: Prettier compatible
📚 Documentation
📖 Key Documents
- 🚨 Workflow Rules - CRITICAL: The AI Driver that makes everything work!
  - Defines the 6 operational modes
  - Enforces safety protocols
  - Structures agent behavior
  - MUST be loaded into AI agent's system prompt
- API Documentation - Detailed tool schemas and parameters
- Implementation Notes - Technical details
- Future Implementations - Roadmap
🎯 Quick Start Examples
Example 1: Creating an AI-Generated Plan
```typescript
// 1. Refine the user prompt
const refinedPrompt = await refine_user_prompt({
  agent_id: "my-agent",
  raw_user_prompt: "Build a REST API for user management"
});

// 2. Create a plan from the refined prompt
const plan = await create_task_plan({
  agent_id: "my-agent",
  refined_prompt_id: refinedPrompt.refined_prompt_id
});

// 3. Get AI suggestions for subtasks
const subtasks = await ai_suggest_subtasks({
  agent_id: "my-agent",
  plan_id: plan.plan_id,
  parent_task_id: plan.task_ids[0]
});
```
Example 2: Knowledge Graph Operations
```typescript
// Create entities
await knowledge_graph_memory({
  agent_id: "my-agent",
  operation: "create_entities",
  entities: [
    {
      name: "UserController",
      entityType: "class",
      observations: ["Handles user CRUD operations"]
    }
  ]
});

// Query with natural language
const results = await kg_nl_query({
  agent_id: "my-agent",
  query: "What classes handle user operations?"
});
```
🤝 Contributing
We welcome contributions! Please see our Contributing Guidelines for details.
🐛 Reporting Issues
- Check existing issues first
- Use issue templates
- Provide reproduction steps
- Include error logs
🔧 Pull Requests
- Fork the repository
- Create a feature branch
- Add tests for new features
- Ensure all tests pass
- Submit PR with clear description
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ for AI Agents