
Memtrace

Created by syncable-dev, 15 days ago
Memtrace — Structural Memory for AI Coding Agents

The Problem

Every AI coding agent — Claude Code, Cursor, Codex, Copilot — starts each turn completely blank. It re-reads raw source files and re-derives the full call graph, type hierarchy, and import tree from scratch on every single invocation. That structural rework burns 60–90% of the context window before any real reasoning begins. Less than 5% of tokens in a typical agentic coding session contribute genuine new intelligence. The rest is expensive, redundant noise — and it compounds: accuracy drops 40% as sessions grow, stale context crowds out signal, and summaries strip out the structural relationships agents need most.

The Solution

Memtrace is a bi-temporal structural memory layer that turns your codebase into a live, queryable knowledge graph — compiled from the AST, not guessed from embeddings. Every function, class, interface, and API endpoint becomes a typed node with deterministic relationships. Every file save becomes a queryable episode with timestamps, so agents can reason about structure, detect regressions, and time-travel through their own work without re-reading anything. One Rust binary. Zero configuration. Five-minute install.

What agents can do with it

  • Find callers, callees, and dependencies instantly — no file scanning, no token waste
  • Compute blast radius before making a change — know exactly what breaks before anything is touched
  • Detect structural drift between sessions — catch regressions the moment they happen, not at PR review
  • Time-travel through code evolution — query any prior state of any symbol, not just git commits
  • Search across the full codebase with hybrid retrieval — BM25 full-text + HNSW vector + graph traversal fused in one query
  • Map API topology across services — cross-repo HTTP call graphs, dependency chains, dead endpoint detection

Benefits

  • −90% token cost on structural queries (Mem0)
  • +26% accuracy on multi-step agentic tasks (Mem0)
  • −91% p95 latency on structural lookups vs. RAG baselines
  • +32.8% SWE-bench bug-fix success rate when agents have graph context (RepoGraph)
  • 200–800ms per-save re-indexing — every file save is a queryable episode in under a second
  • 40+ MCP tools covering indexing, search, relationships, impact analysis, temporal evolution, API topology, graph algorithms, and direct Cypher queries
  • 12 languages + 3 IaC formats supported via Tree-sitter grammars
  • Local-first, closed-source Rust — code never leaves the machine, no account required, no telemetry

Overview

What is Memtrace?

Memtrace is a structural memory layer for AI coding agents, delivered as an MCP server. It compiles any codebase into a live, bi-temporal knowledge graph from the AST — so agents like Claude Code, Cursor, Windsurf, and Zed can instantly query callers, callees, blast radius, dependency chains, and code evolution without re-reading source files on every turn. Instead of burning 60–90% of the context window re-deriving structure from scratch, agents get deterministic, millisecond-resolution answers from a graph that updates with every file save.
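To make the idea of a "structural query" concrete, here is a toy sketch of what answering caller/callee questions from a precomputed graph looks like, instead of re-reading source files. The graph shape and function names below are illustrative assumptions, not Memtrace's actual (closed-source) data model.

```python
# Toy directed call graph: caller -> set of direct callees.
# Illustrative data only; Memtrace compiles this from the AST.
CALLS = {
    "handler": {"validate", "save"},
    "save": {"db_write"},
    "validate": set(),
    "db_write": set(),
}

def callees(fn):
    """Functions that `fn` calls directly."""
    return sorted(CALLS.get(fn, set()))

def callers(fn):
    """Functions that call `fn` directly."""
    return sorted(c for c, targets in CALLS.items() if fn in targets)

print(callers("save"))    # who calls save?
print(callees("handler")) # what does handler call?
```

Once the graph exists, these lookups are cheap dictionary traversals rather than a full re-parse of the repository, which is the core of the token-cost claim.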

How to use Memtrace

Install Memtrace with a single command, start the graph server, and index your repository. The MCP server starts automatically and connects to any MCP-compatible agent. From that point, your agent has persistent structural memory of the entire codebase — no configuration, no cloud account, no telemetry required.

npm install -g memtrace
memtrace start
memtrace index .

Key features of Memtrace

  • Structural knowledge graph compiled from the AST — every function, class, interface, and API endpoint as a typed node with deterministic relationships across 12 languages and 3 IaC formats.
  • Bi-temporal episodic memory — every file save becomes a queryable episode with valid-time and transaction-time, enabling agents to time-travel through code history beyond git commits.
  • Blast radius analysis — agents can compute exactly which files, functions, and services break before making any change.
  • Hybrid retrieval engine — BM25 full-text, HNSW vector search, and graph traversal fused in a single query surface with RRF rank fusion.
  • Six traversal strategies — Impact, Novel, Recent, Compound, Directional, and Overview — so agents can ask "what matters here?" rather than guessing.
  • API topology mapping — cross-repo HTTP call graphs, cross-service dependency chains, and dead endpoint detection across microservice architectures.
  • 40+ MCP tools covering indexing, search, relationships, impact analysis, temporal evolution, graph algorithms, and direct Cypher queries.
  • Local-first, closed-source Rust — code never leaves the machine, no account required, no telemetry.
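The hybrid retrieval feature names Reciprocal Rank Fusion (RRF) for merging BM25, vector, and graph results. Here is a minimal sketch of standard RRF — each retriever contributes 1/(k + rank) per result, summed across retrievers. The ranked lists and k=60 are conventional illustrative choices, not Memtrace's actual internals.

```python
# Reciprocal Rank Fusion: fuse several ranked result lists into one
# ordering by summing 1/(k + rank) contributions per document.
def rrf(ranked_lists, k=60):
    scores = {}
    for results in ranked_lists:
        for rank, doc in enumerate(results, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from three retrievers over the same query.
bm25   = ["parse.rs", "lexer.rs", "ast.rs"]
vector = ["ast.rs", "parse.rs", "types.rs"]
graph  = ["parse.rs", "types.rs"]
print(rrf([bm25, vector, graph]))  # "parse.rs" ranks first
```

Because RRF only uses ranks, not raw scores, it sidesteps the problem that BM25 scores, cosine similarities, and graph-distance scores live on incomparable scales.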

Use cases of Memtrace

  1. Giving AI coding agents persistent structural memory so they stop re-deriving call graphs and type hierarchies on every turn, reducing token waste by up to 90%.
  2. Computing blast radius before a refactor — knowing exactly which callers, services, and tests break before any file is touched.
  3. Detecting structural drift and regressions mid-session, catching breaking changes at the moment they happen rather than at PR review.
  4. Onboarding agents to large, unfamiliar codebases instantly — the full symbol graph, dependency tree, and API topology are queryable from the first turn.
  5. Time-travelling through code evolution to understand how a function, type, or module changed over time, beyond what git log can express.
  6. Mapping cross-service API topology in microservice architectures to identify dead endpoints, dependency cycles, and blast radius across repos.
  7. Running multiple AI agents concurrently on the same codebase with shared structural context, amortizing graph computation across the fleet.
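The blast-radius use case above can be read as reverse reachability in a dependency graph: everything that transitively depends on the changed symbol. A minimal sketch, assuming a made-up dependency graph (not Memtrace output or its actual algorithm):

```python
from collections import deque

# Edges: dependent -> dependencies ("X uses Y"). Illustrative data only.
DEPS = {
    "api_handler": {"auth", "orders"},
    "orders": {"db"},
    "auth": {"db"},
    "report_job": {"orders"},
}

def blast_radius(changed):
    """All symbols that transitively depend on `changed` (BFS over reverse edges)."""
    rdeps = {}  # invert to: dependency -> dependents
    for dependent, deps in DEPS.items():
        for d in deps:
            rdeps.setdefault(d, set()).add(dependent)
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in rdeps.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

print(blast_radius("db"))  # everything that could break if db changes
```

The useful property is that the answer is deterministic: a change to `db` here reaches `auth`, `orders`, `api_handler`, and `report_job`, and nothing else, which is what lets an agent check impact before touching a file.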

Memtrace FAQ

  • What agents and IDEs does Memtrace work with? Memtrace works with any MCP-compatible agent or IDE, including Claude Code, Cursor, Zed, VS Code, Windsurf, Continue, Aider, and OpenAI Codex CLI.

  • Does Memtrace send my code to the cloud? No. Memtrace is fully local-first. Your code never leaves your machine. There is no account, no telemetry, and no cloud dependency.

  • How long does indexing take? Initial indexing typically completes in seconds for most repositories. After that, each file save triggers an incremental re-index that completes in 200–800ms, keeping the graph current in real time.
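The incremental model described here — replace only the saved file's entries and record each save as a timestamped episode — can be sketched as follows. The data shapes and names are illustrative assumptions, not Memtrace's internals.

```python
import time

index = {}      # file path -> set of symbol names currently in that file
episodes = []   # (timestamp, file, symbols): append-only save history

def reindex_file(path, symbols):
    """On save: replace only this file's index entry and log an episode."""
    index[path] = set(symbols)
    episodes.append((time.time(), path, tuple(symbols)))

reindex_file("auth.py", ["login", "logout"])
reindex_file("auth.py", ["login", "logout", "refresh_token"])  # a later save
print(index["auth.py"])
print(len(episodes))  # each save is its own episode
```

The index always reflects the latest save, while the episode log preserves every prior state, which is the mechanism behind the "time-travel beyond git commits" claim.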

  • Which programming languages are supported? Memtrace supports 12 programming languages and 3 IaC formats via Tree-sitter grammars, including TypeScript, JavaScript, Python, Rust, Go, Java, C, C++, Ruby, PHP, C#, and Swift, plus Terraform, Kubernetes, and Docker.

  • How is Memtrace different from RAG-based code search tools? RAG tools return lexically or semantically similar snippets. Memtrace returns structurally meaningful answers — the actual callers of a function, the real blast radius of a change, the deterministic dependency chain — because it reasons over a compiled knowledge graph, not over embeddings of text chunks.

  • Does Memtrace replace my existing tools like LSP or GitHub Copilot? No. Memtrace is additive. It gives any agent a structural memory layer that LSP servers and embedding-based tools don't provide. Your existing tools keep working; agents simply gain access to 40+ additional MCP tools for graph-based reasoning.

  • What is the performance impact of running Memtrace alongside my agent? Memtrace runs as a background daemon with minimal resource footprint. The graph database is a single binary with no external dependencies. Query response times are millisecond-class at P95.

Server Config

{
  "mcpServers": {
    "memtrace": {
      "command": "memtrace",
      "args": [
        "mcp"
      ],
      "env": {
        "MEMTRACE_ARCADEDB_BOLT_URL": "bolt://localhost:7687"
      }
    }
  }
}