# A-MEM MCP Server
A Model Context Protocol (MCP) server for Agentic Memory (A-MEM), a flexible, dynamic memory system for LLM agents.
## Overview
The A-MEM MCP Server provides a RESTful API wrapper around the core Agentic Memory (A-MEM) system, enabling easy integration with any LLM agent framework. The server exposes endpoints for memory creation, retrieval, updating, deletion, and search operations.
A-MEM is a novel agentic memory system for LLM agents that can dynamically organize memories without predetermined operations, drawing inspiration from the Zettelkasten method of knowledge management.
## Key Features
- 🔄 RESTful API for memory operations
- 🧠 Dynamic memory organization based on Zettelkasten principles
- 🔍 Intelligent indexing and linking of memories
- 📝 Comprehensive note generation with structured attributes
- 🌐 Interconnected knowledge networks
- 🧬 Continuous memory evolution and refinement
- 🤖 Agent-driven decision making for adaptive memory management
## Installation

1. Clone the repository:

```bash
git clone https://github.com/Titan-co/amem-mcp-server.git
cd amem-mcp-server
```

2. Install dependencies:

```bash
pip install -r requirements.txt
```

3. Start the server:

```bash
uvicorn server:app --host 0.0.0.0 --port 8000 --reload
```
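Once the server is running, you can confirm it responds; a minimal check in Python using the `requests` library (assuming the default host and port from the command above):

```python
import requests

# Assumes the server is running locally on port 8000 as started above.
resp = requests.get("http://localhost:8000/docs")
print("Server is up" if resp.ok else f"Unexpected status: {resp.status_code}")
```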
## API Endpoints
### Create Memory
- Endpoint: `POST /memories`
- Description: Create a new memory note
- Request Body:
{ "content": "string", "tags": ["string"], "category": "string", "timestamp": "string" }
### Get Memory
- Endpoint: `GET /memories/{id}`
- Description: Retrieve a memory by ID
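Retrieving a note is a single GET; `memory_id` below is a placeholder for an ID returned by the create call:

```python
import requests

BASE_URL = "http://localhost:8000"
memory_id = "YOUR_MEMORY_ID"  # placeholder; use an ID from POST /memories

resp = requests.get(f"{BASE_URL}/memories/{memory_id}")
print(resp.json())
```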
### Update Memory
- Endpoint: `PUT /memories/{id}`
- Description: Update an existing memory
- Request Body:
{ "content": "string", "tags": ["string"], "category": "string", "context": "string", "keywords": ["string"] }
### Delete Memory
- Endpoint: `DELETE /memories/{id}`
- Description: Delete a memory by ID
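Deletion follows the same pattern:

```python
import requests

BASE_URL = "http://localhost:8000"
memory_id = "YOUR_MEMORY_ID"  # placeholder

resp = requests.delete(f"{BASE_URL}/memories/{memory_id}")
print(resp.status_code)  # expect a success status once the note is gone
```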
### Search Memories
- Endpoint: `GET /memories/search?query={query}&k={k}`
- Description: Search for memories based on query
- Query Parameters:
  - `query`: Search query string
  - `k`: Number of results to return (default: 5)
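A search call passes both parameters in the query string; for example (assuming the response is a JSON list of matching notes):

```python
import requests

BASE_URL = "http://localhost:8000"

resp = requests.get(
    f"{BASE_URL}/memories/search",
    params={"query": "design roadmap", "k": 3},
)
for hit in resp.json():  # assumed response shape; see /docs for the schema
    print(hit)
```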
## Configuration
The server can be configured through environment variables:
- `OPENAI_API_KEY`: API key for OpenAI services
- `LLM_BACKEND`: LLM backend to use (`openai` or `ollama`, default: `openai`)
- `LLM_MODEL`: LLM model to use (default: `gpt-4`)
- `EMBEDDING_MODEL`: Embedding model for semantic search (default: `all-MiniLM-L6-v2`)
- `EVO_THRESHOLD`: Number of memories before triggering evolution (default: `3`)
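The list above maps to settings along these lines; this is a sketch of how the variables might be read, mirroring the documented defaults rather than the server's actual source:

```python
import os

# Documented variables and their defaults (see the list above).
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")              # required for the OpenAI backend
LLM_BACKEND = os.getenv("LLM_BACKEND", "openai")          # "openai" or "ollama"
LLM_MODEL = os.getenv("LLM_MODEL", "gpt-4")
EMBEDDING_MODEL = os.getenv("EMBEDDING_MODEL", "all-MiniLM-L6-v2")
EVO_THRESHOLD = int(os.getenv("EVO_THRESHOLD", "3"))      # memories before evolution triggers
```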
## Documentation
Interactive API documentation is available at:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
## References
Based on the research paper: *A-MEM: Agentic Memory for LLM Agents*