# MCP Synaptic

A memory-enhanced MCP (Model Context Protocol) server with local RAG (Retrieval-Augmented Generation) database and expiring memory capabilities.
## Features

### 🧠 Memory Management

- **Expiring Memories**: Store temporary memories with a configurable TTL (time to live)
- **Memory Types**: Support for different memory categories (short-term, long-term, ephemeral)
- **Automatic Cleanup**: Background processes remove expired memories
- **Redis Integration**: Optional Redis backend for distributed memory storage

### 📚 RAG Database

- **Local Vector Storage**: ChromaDB-based vector database for document storage
- **Embedding Models**: Built-in support for sentence-transformers models
- **Semantic Search**: Similarity-based document retrieval
- **Document Management**: Add, update, and delete documents with versioning

### 🔄 Real-time Communication

- **Server-Sent Events (SSE)**: Real-time updates for memory and RAG operations
- **MCP Protocol**: Full Model Context Protocol implementation
- **WebSocket Support**: Alternative real-time communication channel
- **Event Streaming**: Live updates for memory expiration and document changes

### 🐳 Docker Ready

- **Containerized Deployment**: Ready-to-use Docker containers
- **Docker Compose**: Multi-service orchestration with Redis and database
- **Environment Configuration**: Flexible configuration through environment variables
## Quick Start

### Prerequisites

- Python 3.11 or higher
- uv package manager
- Docker (optional, for containerized deployment)
### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/your-org/mcp-synaptic.git
   cd mcp-synaptic
   ```

2. Install dependencies:

   ```bash
   # For API-based embeddings (recommended - lightweight)
   uv sync

   # For local embeddings (includes PyTorch - heavy)
   uv sync --extra local-embeddings
   ```

3. Initialize the project:

   ```bash
   uv run mcp-synaptic init
   ```

4. Start the server:

   ```bash
   uv run mcp-synaptic server
   ```

The server starts on http://localhost:8000 by default.
### Docker Deployment

Build and run with Docker Compose:

```bash
docker-compose up --build
```

Or build and run an individual container:

```bash
docker build -t mcp-synaptic .
docker run -p 8000:8000 mcp-synaptic
```
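For a multi-service setup, a minimal `docker-compose.yml` might look like the sketch below. Service names, the Redis image tag, and the `/app/data` volume path are assumptions for illustration, not taken from the project's own compose file:

```yaml
services:
  mcp-synaptic:
    build: .
    ports:
      - "8000:8000"
    environment:
      REDIS_ENABLED: "true"
      REDIS_URL: redis://redis:6379/0
    volumes:
      - ./data:/app/data   # assumed persistence path for SQLite and ChromaDB
    depends_on:
      - redis

  redis:
    image: redis:7-alpine
```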
## Configuration

### Environment Variables

Create a `.env` file in the project root (use `.env.example` as a template):

```bash
# Server Configuration
SERVER_HOST=localhost
SERVER_PORT=8000
DEBUG=false
LOG_LEVEL=INFO

# Database Configuration
SQLITE_DATABASE_PATH=./data/synaptic.db
CHROMADB_PERSIST_DIRECTORY=./data/chroma

# Memory Configuration
DEFAULT_MEMORY_TTL_SECONDS=3600
MAX_MEMORY_ENTRIES=10000
MEMORY_CLEANUP_INTERVAL_SECONDS=300

# RAG Configuration
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_PROVIDER=api
EMBEDDING_API_BASE=http://localhost:4000
EMBEDDING_API_KEY=your-api-key-here
MAX_RAG_RESULTS=10
RAG_SIMILARITY_THRESHOLD=0.7

# Redis (Optional)
REDIS_URL=redis://localhost:6379/0
REDIS_ENABLED=false
```
### Memory Types

- **Ephemeral**: Very short-lived memories (seconds to minutes)
- **Short-term**: Session-based memories (minutes to hours)
- **Long-term**: Persistent memories (days to weeks)
- **Permanent**: Never-expiring memories
### Embedding Configuration

#### API-based Embeddings (Recommended)

- Lightweight deployment without PyTorch dependencies
- Works with LiteLLM, the OpenAI API, or any OpenAI-compatible endpoint
- Set `EMBEDDING_PROVIDER=api` and configure `EMBEDDING_API_BASE`

#### Local Embeddings

- Includes full PyTorch and sentence-transformers
- No external API dependency, but a much larger container
- Set `EMBEDDING_PROVIDER=local` and install with `--extra local-embeddings`
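To verify that your `EMBEDDING_API_BASE` endpoint is reachable before starting the server, you can call it directly. The sketch below assumes a standard OpenAI-compatible `/v1/embeddings` route; the constants mirror the `.env` values above:

```python
import json
import urllib.request

API_BASE = "http://localhost:4000"    # EMBEDDING_API_BASE
API_KEY = "your-api-key-here"         # EMBEDDING_API_KEY
MODEL = "text-embedding-3-small"      # EMBEDDING_MODEL


def build_embedding_request(texts: list[str], model: str = MODEL,
                            api_base: str = API_BASE) -> tuple[str, dict]:
    """Build the URL and JSON payload for an OpenAI-compatible embeddings call."""
    return f"{api_base}/v1/embeddings", {"model": model, "input": texts}


def embed(texts: list[str]) -> list[list[float]]:
    """Send the request and return one embedding vector per input text."""
    url, payload = build_embedding_request(texts)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return [item["embedding"] for item in body["data"]]
```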
## Usage Examples

### Python API

```python
import asyncio

from mcp_synaptic import SynapticServer, Settings


async def main():
    settings = Settings()
    server = SynapticServer(settings)

    # Add a memory with 1-hour expiration
    await server.memory_manager.add(
        key="user_preference",
        data={"theme": "dark", "language": "en"},
        ttl_seconds=3600,
    )

    # Store a document in the RAG database
    await server.rag_database.add_document(
        content="MCP Synaptic is a memory-enhanced server",
        metadata={"source": "documentation", "version": "1.0"},
    )

    # Search for similar documents
    results = await server.rag_database.search(
        query="memory enhanced server",
        limit=5,
    )

    await server.start()


if __name__ == "__main__":
    asyncio.run(main())
```
### CLI Usage

```bash
# Start server with custom configuration
uv run mcp-synaptic server --host 0.0.0.0 --port 9000 --debug

# Initialize a new project
uv run mcp-synaptic init ./my-project

# Show version
uv run mcp-synaptic version
```
### SSE Client Example

```javascript
const eventSource = new EventSource('http://localhost:8000/events');

eventSource.onmessage = function(event) {
  const data = JSON.parse(event.data);
  console.log('Event:', data);
};

// Listen for memory expiration events
eventSource.addEventListener('memory_expired', function(event) {
  const data = JSON.parse(event.data);
  console.log('Memory expired:', data.key);
});

// Listen for RAG document updates
eventSource.addEventListener('document_added', function(event) {
  const data = JSON.parse(event.data);
  console.log('Document added:', data.id);
});
```
## API Endpoints

### Memory Management

- `POST /memory` - Add new memory
- `GET /memory/{key}` - Retrieve memory by key
- `DELETE /memory/{key}` - Delete memory
- `GET /memory` - List all memories

### RAG Database

- `POST /rag/documents` - Add document
- `GET /rag/documents/{id}` - Get document by ID
- `POST /rag/search` - Search documents
- `DELETE /rag/documents/{id}` - Delete document

### Real-time Events

- `GET /events` - SSE endpoint for real-time updates
- `GET /ws` - WebSocket endpoint (alternative)
## Development

### Setup Development Environment

```bash
# Install development dependencies
uv sync --group dev

# Install pre-commit hooks
pre-commit install

# Run tests
uv run pytest

# Run type checking
uv run mypy mcp_synaptic

# Run linting
uv run ruff check mcp_synaptic
uv run black mcp_synaptic

# Run all checks
uv run pytest && uv run mypy mcp_synaptic && uv run ruff check mcp_synaptic
```
### Project Structure

```
mcp-synaptic/
├── mcp_synaptic/          # Main package
│   ├── core/              # Core server functionality
│   ├── mcp/               # MCP protocol implementation
│   ├── sse/               # Server-Sent Events
│   ├── rag/               # RAG database
│   ├── memory/            # Memory management
│   ├── config/            # Configuration
│   └── utils/             # Utilities
├── tests/                 # Test suite
│   ├── unit/              # Unit tests
│   └── integration/       # Integration tests
├── data/                  # Data storage
├── docker/                # Docker configuration
└── docs/                  # Documentation
```
## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

### Development Guidelines

- Follow PEP 8 style guidelines
- Add type hints to all functions
- Write comprehensive tests
- Update documentation for new features
- Use conventional commit messages
### Testing

```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=mcp_synaptic --cov-report=html

# Run specific test file
uv run pytest tests/unit/test_memory.py

# Run integration tests only
uv run pytest tests/integration/
```
## Performance

### Benchmarks

- **Memory Operations**: 10,000+ ops/sec
- **RAG Search**: Sub-100ms response time
- **Concurrent Connections**: 1,000+ SSE connections
- **Memory Footprint**: <100MB baseline

### Optimization Tips

- Use Redis for distributed setups
- Tune the embedding model for your use case
- Configure appropriate TTL values
- Monitor memory cleanup intervals
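As a concrete example, a Redis-backed deployment with short-lived memories might tighten the knobs from the Configuration section like this in `.env` (the values are illustrative, not recommendations shipped with the project):

```bash
REDIS_ENABLED=true
REDIS_URL=redis://redis:6379/0
DEFAULT_MEMORY_TTL_SECONDS=1800      # shorter default lifetime
MEMORY_CLEANUP_INTERVAL_SECONDS=60   # reclaim expired entries faster
MAX_RAG_RESULTS=5                    # smaller result sets, quicker searches
```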
## Deployment

### Production Deployment

```bash
# Using Docker Compose
docker-compose -f docker-compose.prod.yml up -d

# Using a systemd service
sudo systemctl enable mcp-synaptic
sudo systemctl start mcp-synaptic
```

### Monitoring

- Health check endpoint: `GET /health`
- Metrics endpoint: `GET /metrics`
- Admin interface: `GET /admin`
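The health endpoint lends itself to a simple liveness probe. A minimal sketch, assuming `GET /health` returns HTTP 200 when the server is up:

```python
import urllib.request


def healthy(base: str = "http://localhost:8000", timeout: float = 2.0) -> bool:
    """Return True if GET /health answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(f"{base}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, HTTP error, etc.
        return False
```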
## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- Model Context Protocol for the MCP specification
- ChromaDB for vector database capabilities
- FastAPI for the web framework
- Sentence Transformers for embeddings

---

*MCP Synaptic - Bridging memories and knowledge for intelligent AI systems.*