
Created by rashee1997

🧠 Memory MCP Server - Orchestrator

License: MIT · Node.js · TypeScript

🚀 Your AI Agent's Persistent Brain - A Comprehensive Memory & Task Management System

Features · Installation · Configuration · Workflow · Tools · Architecture · Development


🚨 CRITICAL: This MCP Server requires workflow.md to function properly!

The workflow.md file is not optional - it's the AI Driver that transforms this collection of tools into an intelligent system. Without it, your AI agent will have tools but no structured way to use them effectively.

Before using this server:

  1. ✅ Install and configure the MCP server
  2. ✅ Load workflow.md into your AI agent's system prompt
  3. ✅ Ensure your agent follows the 6-mode operational structure

📖 Jump to workflow.md documentation


📋 Table of Contents

  • 🌟 Overview
  • ✨ Features
  • 🚀 Installation
  • ⚙️ Configuration
  • 🎮 The AI Driver: Understanding workflow.md
  • 🛠️ Available Tools
  • 🏗️ Architecture
  • 💻 Development
  • 📚 Documentation
  • 🤝 Contributing
  • 📄 License


🌟 Overview

The Memory MCP Server (Orchestrator) is a powerful Model Context Protocol (MCP) server that provides AI agents with persistent memory, advanced task planning, and comprehensive knowledge management capabilities. Built with TypeScript and SQLite, it transforms your AI agents from stateless assistants into intelligent systems with long-term memory and structured workflows.

🚨 Critical Component: The AI Driver (workflow.md)

The workflow.md file is the brain of this system! It contains the operational protocols and behavioral rules that transform a collection of tools into an intelligent, coordinated system. Think of it as the "AI Driver" that:

  • 🎯 Defines 6 Operational Modes: From prompt refinement to task execution
  • 🛡️ Enforces Safety Protocols: Prevents unauthorized actions and overeager behavior
  • 📋 Structures Workflows: Ensures a systematic approach to every task
  • 🔄 Manages State Transitions: Controls how the AI moves between different modes
  • ✅ Validates Actions: Requires user approval before executing changes

Without workflow.md, this is just a toolbox. With it, it becomes an intelligent agent system.

🎯 Key Benefits

  • 🧠 Persistent Memory: Never lose context between sessions
  • 📊 Structured Planning: Break complex tasks into manageable steps
  • 🔍 Knowledge Graph: Build and query relationships between entities
  • 🤖 AI-Enhanced: Leverage Gemini AI for intelligent task suggestions
  • 📈 Performance Tracking: Monitor success metrics and learn from corrections
  • 🔗 External Integrations: Connect with web search and AI services

✨ Features

💾 Memory Management

  • Conversation History: Track multi-turn dialogues with full context
  • Dynamic Context Storage: Version-controlled storage for agent state, preferences, and parameters
  • Knowledge Graph: Create, query, and manage entity relationships
  • Vector Embeddings: Semantic search capabilities for code and documentation

📝 Task & Planning System

  • AI-Powered Planning: Generate comprehensive plans from refined prompts
  • Hierarchical Tasks: Support for tasks, subtasks, and dependencies
  • Progress Tracking: Real-time monitoring of task execution
  • Review System: Built-in task and plan review mechanisms

🤖 AI Integration

  • Google Gemini Integration:
    • Prompt refinement and structuring
    • Context summarization
    • Entity extraction
    • Code analysis
    • Task suggestions
  • Tavily Web Search: Advanced web search capabilities
  • Semantic Search: Vector-based content retrieval

🛡️ Reliability & Compliance

  • Data Validation: JSON schema validation for all inputs
  • Comprehensive Logging: Track all operations and errors
  • Backup & Restore: Full database backup capabilities
  • MCP Compliant: Seamless integration with MCP-compatible clients

🚀 Installation

Prerequisites

| Requirement | Version        | Required |
|-------------|----------------|----------|
| Node.js     | 18.x or higher | ✅       |
| npm         | Latest         | ✅       |
| Git         | Any            | ✅       |

Step-by-Step Installation

# 1. Clone the repository
git clone https://github.com/yourusername/memory-mcp-server.git
cd memory-mcp-server

# 2. Install dependencies
npm install

# 3. Build the project
npm run build

# 4. Verify installation
npm run test

🎯 Quick Start for AI Agents

# CRITICAL STEP: Load the AI Driver
# Your AI agent MUST load the workflow.md file as part of its system prompt
# This file contains the operational protocols that make the system work

# Example for loading in your AI agent:
# 1. Read the workflow.md file
# 2. Include it in your system prompt or rules
# 3. Follow the 6-mode operational structure
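
As a concrete illustration, loading the AI Driver in a Node.js host might look like the sketch below. The `buildSystemPrompt` helper and the prompt layout are assumptions for this example, not part of the server's API:

```typescript
// Illustrative sketch: prepend workflow.md to the agent's system prompt.
// The helper name and layout are assumptions, not part of this server.
import { readFileSync } from "node:fs";

function buildSystemPrompt(basePrompt: string, workflowPath: string): string {
  const workflow = readFileSync(workflowPath, "utf8");
  // The AI Driver rules come first so the 6-mode structure governs
  // everything that follows in the prompt.
  return `${workflow}\n\n---\n\n${basePrompt}`;
}
```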

🐳 Docker Installation (Alternative)

# Coming soon - Docker support planned

⚙️ Configuration

🔑 API Keys Setup

The server requires API keys for external services. These should be configured in your MCP client settings.

| Service       | Environment Variable | Required |
|---------------|----------------------|----------|
| Google Gemini | `GEMINI_API_KEY`     | ✅       |
| Tavily Search | `TAVILY_API_KEY`     | ✅       |

📝 MCP Client Configuration

For VS Code Cline Extension

  1. Locate the settings file:

    • Windows: %APPDATA%\Code\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json
    • macOS: ~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
    • Linux: ~/.config/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
  2. Add the server configuration:

{
  "memory-mcp-server": {
    "disabled": false,
    "autoApprove": [],
    "timeout": 120,
    "transportType": "stdio",
    "command": "node",
    "args": [
      "/absolute/path/to/memory-mcp-server/build/index.js"
    ],
    "env": {
      "GEMINI_API_KEY": "your-gemini-api-key-here",
      "TAVILY_API_KEY": "your-tavily-api-key-here"
    }
  }
}

⚠️ Important: Replace /absolute/path/to/memory-mcp-server/ with the actual path where you cloned the repository.

For Other MCP Clients

Adapt the configuration format according to your client's requirements. The key parameters are:

  • Command: node
  • Arguments: ["path/to/build/index.js"]
  • Transport: stdio
  • Environment: API keys

🎮 The AI Driver: Understanding workflow.md

The workflow.md file is THE MOST IMPORTANT COMPONENT of this system. It's not just documentation - it's the operational manual that AI agents must follow to use this server effectively.

📋 The 6 Operational Modes

| Mode | Purpose | Key Responsibility |
|------|---------|--------------------|
| MODE 0: PROMPT_REFINE | 🎯 Entry point for ALL tasks | Disambiguates requests, checks past errors, creates structured goals |
| MODE 1: THINK | 🧠 Analysis & Strategy | Builds mental model, gathers information, forms strategy |
| MODE 2: CODE_ANALYSIS | 🔍 Deep Code Examination | Analyzes code structure, dependencies, quality |
| MODE 3: INNOVATE | 💡 Creative Problem Solving | Generates novel solutions, breaks through impasses |
| MODE 4: PLAN | 📋 Detailed Planning | Creates step-by-step execution plans with AI assistance |
| MODE 5: EXECUTE | ⚡ Controlled Action | Implements approved plans with comprehensive logging |
| MODE 6: REVIEW | ✅ Validation & Learning | Validates outcomes, synthesizes lessons learned |

🔄 Workflow State Machine

```mermaid
graph TD
    A[User Request] -->|MANDATORY| B[MODE 0: PROMPT_REFINE]
    B -->|Auto transition| C[MODE 1: THINK]
    C -->|Need analysis| D[MODE 2: CODE_ANALYSIS]
    C -->|Need creativity| E[MODE 3: INNOVATE]
    C -->|Ready to plan| F[MODE 4: PLAN]
    D --> C
    E --> C
    F -->|User approval required| G[MODE 5: EXECUTE]
    G -->|Completion| H[MODE 6: REVIEW]
    G -->|Error/Halt| I[HALTED STATE]
    I -->|User instruction| F
    H --> J[Await Next Request]
```
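
One way to read the diagram is as a transition table. The TypeScript sketch below is an illustrative encoding of it, not part of the server's code:

```typescript
// Illustrative encoding of the workflow state machine shown above.
const transitions: Record<string, string[]> = {
  PROMPT_REFINE: ["THINK"],                      // auto transition
  THINK: ["CODE_ANALYSIS", "INNOVATE", "PLAN"],
  CODE_ANALYSIS: ["THINK"],
  INNOVATE: ["THINK"],
  PLAN: ["EXECUTE"],                             // requires user approval
  EXECUTE: ["REVIEW", "HALTED"],                 // completion, or error/halt
  HALTED: ["PLAN"],                              // resumes on user instruction
  REVIEW: [],                                    // await next request
};

function isAllowedTransition(from: string, to: string): boolean {
  return (transitions[from] ?? []).includes(to);
}
```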

🛡️ Critical Safety Rules

  1. No Unauthorized Actions: The agent CANNOT modify files or execute commands without an approved plan
  2. Mandatory Mode Declaration: Every response MUST start with [MODE: MODE_NAME]
  3. Tool-Centric Operations: All significant actions MUST use official tools
  4. No Post-Task Solicitation: Agent must NOT ask "what's next?" after completing tasks
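
Rule 2 is mechanically checkable. A hypothetical compliance check (not part of this server) might look like:

```typescript
// Sketch: verify that a response begins with a mode declaration such as
// "[MODE: PLAN]". The helper is illustrative, not part of this server.
const MODE_TAG =
  /^\[MODE: (PROMPT_REFINE|THINK|CODE_ANALYSIS|INNOVATE|PLAN|EXECUTE|REVIEW)\]/;

function hasModeDeclaration(response: string): boolean {
  return MODE_TAG.test(response);
}
```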

🚀 How to Use workflow.md

For AI Agents:

  1. Load workflow.md into your system prompt or rules
  2. Follow the mode progression strictly
  3. Use only the tools allowed in each mode
  4. Respect user authorization requirements

For Developers:

  1. Review workflow.md to understand the intended agent behavior
  2. Ensure your prompts align with the workflow structure
  3. Monitor agent compliance with the protocols

⚠️ Warning: AI agents may not always follow these rules perfectly. The workflow.md file provides guidelines, not guarantees. Monitor agent behavior and provide corrections as needed.


🛠️ Available Tools

The server provides 65+ tools organized into categories:

📚 Memory & Context Tools

Conversation Management (4 tools)
  • store_conversation_message - Store messages in conversation history
  • get_conversation_history - Retrieve past conversations
  • search_conversation_by_keywords - Search conversations by keywords
  • summarize_conversation - AI-powered conversation summarization
Context Management (9 tools)
  • store_context - Store dynamic contextual data
  • get_context - Retrieve stored context
  • get_all_contexts - Get all contexts for an agent
  • search_context_by_keywords - Keyword search in contexts
  • prune_old_context - Clean up old context entries
  • summarize_context - AI summarization of context
  • extract_entities - Extract entities from context
  • semantic_search_context - Vector-based semantic search
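
Like all MCP tools, these are invoked through the protocol's `tools/call` method. The payload below sketches what a `store_context` call might carry; the `context_type` and `context_data` argument names are assumptions, so check the server's tool schemas for the authoritative shape:

```typescript
// Illustrative MCP tools/call request for store_context.
// Argument names here are assumptions, not the verified schema.
const storeContextRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "store_context",
    arguments: {
      agent_id: "my-agent",
      context_type: "user_preferences",               // hypothetical field
      context_data: { editor: "vscode", tabWidth: 2 }, // hypothetical field
    },
  },
};
```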

📋 Planning & Task Tools

Plan Management (15 tools)
  • create_task_plan - Create plans (manual or AI-generated)
  • get_task_plan_details - Get detailed plan information
  • list_task_plans - List all plans
  • update_task_plan_status - Update plan status
  • delete_task_plan - Remove plans
  • ai_analyze_plan - AI analysis of plan quality
  • ai_suggest_subtasks - AI-generated subtask suggestions
  • ai_suggest_task_details - AI-enhanced task details
  • ai_summarize_task_progress - AI progress summaries

🔍 Knowledge & Attribution Tools

Knowledge Graph (9 operations)
  • knowledge_graph_memory - Comprehensive KG operations
    • Create/read/update/delete entities
    • Manage relationships
    • Add observations
    • Natural language queries
    • Infer relationships
    • Generate visualizations

📊 Logging & Performance Tools

Comprehensive Logging (23 tools)
  • Tool execution logging
  • Task progress tracking
  • Error logging and management
  • Correction tracking
  • Success metrics
  • Review logs (task and plan level)

🔧 Utility & Integration Tools

Git Operations (16 tools)
  • Complete Git workflow support
  • Clone, pull, push, commit
  • Branch management
  • Stash operations
  • Remote management
External Services (5 tools)
  • tavily_web_search - Advanced web search
  • ask_gemini - Direct Gemini AI queries
  • analyze_code_file_with_gemini - AI code analysis
  • refine_user_prompt - AI prompt enhancement
  • ingest_codebase_embeddings - Vector embedding generation

🏗️ Architecture

📁 Project Structure

memory-mcp-server/
├── 📂 src/
│   ├── 📂 database/          # Database schemas and managers
│   │   ├── 📂 managers/      # Entity-specific managers
│   │   ├── 📂 services/      # Business logic services
│   │   ├── schema.sql        # Main database schema
│   │   └── vector_store_schema.sql
│   ├── 📂 tools/             # MCP tool implementations
│   │   ├── conversation_tools.ts
│   │   ├── plan_management_tools.ts
│   │   ├── ai_task_enhancement_tools.ts
│   │   └── ... (60+ tool files)
│   ├── 📂 utils/             # Utility functions
│   ├── 📂 types/             # TypeScript definitions
│   └── index.ts              # Main entry point
├── 📂 docs/                  # Documentation
├── 📄 workflow.md            # Agent workflow rules
├── 📄 package.json           # Dependencies
└── 📄 README.md              # This file

🗄️ Database Schema

The server uses two SQLite databases:

  1. Main Database (memory.db):

    • Conversation history
    • Context information
    • Task plans and progress
    • Knowledge graph
    • Logs and metrics
  2. Vector Store (vector_store.db):

    • Code embeddings
    • Semantic search indices

🔄 Data Flow

```mermaid
graph LR
    A[AI Agent] -->|MCP Protocol| B[Memory MCP Server]
    B --> C[SQLite Databases]
    B --> D[External Services]
    D --> E[Gemini AI]
    D --> F[Tavily Search]
    B --> G[Knowledge Graph]
    B --> H[Task Planner]
```

💻 Development

🛠️ Development Setup

# Install dependencies
npm install

# Run in development mode with auto-rebuild
npm run watch

# Run tests
npm test

# Start the MCP Inspector for debugging
npm run inspector

🧪 Testing

The project uses Jest for testing:

# Run all tests
npm test

# Run tests in watch mode
npm test -- --watch

# Run tests with coverage
npm test -- --coverage

🐛 Debugging

Since MCP servers communicate over stdio, use the MCP Inspector:

npm run inspector
# Opens a browser-based debugging interface

📝 Code Style

  • Language: TypeScript 5.3+
  • Style: ESLint configuration included
  • Format: Prettier compatible

📚 Documentation

📖 Key Documents

🎯 Quick Start Examples

Example 1: Creating an AI-Generated Plan
// 1. Refine the user prompt
const refinedPrompt = await refine_user_prompt({
  agent_id: "my-agent",
  raw_user_prompt: "Build a REST API for user management"
});

// 2. Create a plan from the refined prompt
const plan = await create_task_plan({
  agent_id: "my-agent",
  refined_prompt_id: refinedPrompt.refined_prompt_id
});

// 3. Get AI suggestions for subtasks
const subtasks = await ai_suggest_subtasks({
  agent_id: "my-agent",
  plan_id: plan.plan_id,
  parent_task_id: plan.task_ids[0]
});
Example 2: Knowledge Graph Operations
// Create entities
await knowledge_graph_memory({
  agent_id: "my-agent",
  operation: "create_entities",
  entities: [
    {
      name: "UserController",
      entityType: "class",
      observations: ["Handles user CRUD operations"]
    }
  ]
});

// Query with natural language
const results = await kg_nl_query({
  agent_id: "my-agent",
  query: "What classes handle user operations?"
});

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

🐛 Reporting Issues

  1. Check existing issues first
  2. Use issue templates
  3. Provide reproduction steps
  4. Include error logs

🔧 Pull Requests

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new features
  4. Ensure all tests pass
  5. Submit PR with clear description

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Built with ❤️ for AI Agents

