Memtrace
Overview
What is Memtrace?
Memtrace is a structural memory layer for AI coding agents, delivered as an MCP server. It compiles any codebase into a live, bi-temporal knowledge graph from the AST — so agents like Claude Code, Cursor, Windsurf, and Zed can instantly query callers, callees, blast radius, dependency chains, and code evolution without re-reading source files on every turn. Instead of burning 60–90% of the context window re-deriving structure from scratch, agents get deterministic, millisecond-resolution answers from a graph that updates with every file save.
How to use Memtrace?
Install Memtrace with a single command, start the graph server, and index your repository. The MCP server starts automatically and connects to any MCP-compatible agent. From that point, your agent has persistent structural memory of the entire codebase — no configuration, no cloud account, no telemetry required.
```shell
npm install -g memtrace
memtrace start
memtrace index .
```
Key features of Memtrace?
- Structural knowledge graph compiled from the AST — every function, class, interface, and API endpoint as a typed node with deterministic relationships across 12 languages and 3 IaC formats.
- Bi-temporal episodic memory — every file save becomes a queryable episode with valid-time and transaction-time, enabling agents to time-travel through code history beyond git commits.
- Blast radius analysis — agents can compute exactly which files, functions, and services break before making any change.
- Hybrid retrieval engine — BM25 full-text, HNSW vector search, and graph traversal fused in a single query surface with RRF rank fusion.
- Six traversal strategies — Impact, Novel, Recent, Compound, Directional, and Overview — so agents can ask "what matters here?" rather than guessing.
- API topology mapping — cross-repo HTTP call graphs, cross-service dependency chains, and dead endpoint detection across microservice architectures.
- 40+ MCP tools covering indexing, search, relationships, impact analysis, temporal evolution, graph algorithms, and direct Cypher queries.
- Local-first, closed-source Rust — code never leaves the machine, no account required, no telemetry.
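The hybrid retrieval feature above fuses BM25, vector, and graph-traversal results with Reciprocal Rank Fusion (RRF). A minimal sketch of how RRF combines ranked lists in general, not Memtrace's actual implementation (the function name, the symbol IDs, and the conventional constant k=60 are illustrative):

```python
def rrf_fuse(ranked_lists, k=60):
    """Fuse several best-first ranked lists with Reciprocal Rank Fusion.

    An item's fused score is the sum of 1 / (k + rank) over every list
    it appears in, where rank is 1-based. Items ranked highly by several
    retrievers float to the top even if no single retriever ranked them first.
    """
    scores = {}
    for results in ranked_lists:
        for rank, item in enumerate(results, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings of symbol IDs from three retrievers
bm25 = ["parse_config", "load_file", "main"]
vector = ["load_file", "parse_config", "read_env"]
graph = ["main", "parse_config"]
fused = rrf_fuse([bm25, vector, graph])
# parse_config wins: it appears near the top of all three lists
```

Because RRF only uses ranks, it needs no score normalization across the three very different retrieval backends, which is why it is a common choice for this kind of fusion.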
Use cases of Memtrace?
- Giving AI coding agents persistent structural memory so they stop re-deriving call graphs and type hierarchies on every turn, reducing token waste by up to 90%.
- Computing blast radius before a refactor — knowing exactly which callers, services, and tests break before any file is touched.
- Detecting structural drift and regressions mid-session, catching breaking changes at the moment they happen rather than at PR review.
- Onboarding agents to large, unfamiliar codebases instantly — the full symbol graph, dependency tree, and API topology are queryable from the first turn.
- Time-travelling through code evolution to understand how a function, type, or module changed over time, beyond what git log can express.
- Mapping cross-service API topology in microservice architectures to identify dead endpoints, dependency cycles, and blast radius across repos.
- Running multiple AI agents concurrently on the same codebase with shared structural context, amortizing graph computation across the fleet.
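The blast-radius use case above amounts to a transitive closure over the reverse call graph: starting from the changed symbol, collect every direct and indirect caller. A minimal BFS sketch of that idea, independent of Memtrace's internals (the graph shape and all names are hypothetical):

```python
from collections import deque

def blast_radius(callers, changed):
    """Transitive set of functions affected by changing `changed`.

    `callers` maps each function to the set of functions that call it
    (a reverse call graph); BFS collects every direct and indirect caller.
    """
    affected, queue = set(), deque([changed])
    while queue:
        fn = queue.popleft()
        for caller in callers.get(fn, ()):
            if caller not in affected:
                affected.add(caller)
                queue.append(caller)
    return affected

# Hypothetical reverse call graph: parse_config is called by
# load_settings, which is called by main and a health endpoint handler.
callers = {
    "parse_config": {"load_settings"},
    "load_settings": {"main", "health_handler"},
}
impacted = blast_radius(callers, "parse_config")
# Changing parse_config impacts all three transitive callers
```

In a real graph database this traversal runs as a single query over typed CALLS edges rather than an in-memory BFS, but the result set is the same.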
FAQ about Memtrace
- What agents and IDEs does Memtrace work with? Memtrace works with any MCP-compatible agent or IDE, including Claude Code, Cursor, Zed, VS Code, Windsurf, Continue, Aider, and OpenAI Codex CLI.
- Does Memtrace send my code to the cloud? No. Memtrace is fully local-first. Your code never leaves your machine. There is no account, no telemetry, and no cloud dependency.
- How long does indexing take? Initial indexing typically completes in seconds for most repositories. After that, each file save triggers an incremental re-index that completes in 200–800ms, keeping the graph current in real time.
- Which programming languages are supported? Memtrace supports 12 programming languages and 3 IaC formats via Tree-sitter grammars, including TypeScript, JavaScript, Python, Rust, Go, Java, C, C++, Ruby, PHP, C#, and Swift, plus Terraform, Kubernetes, and Docker.
- How is Memtrace different from RAG-based code search tools? RAG tools return lexically or semantically similar snippets. Memtrace returns structurally meaningful answers — the actual callers of a function, the real blast radius of a change, the deterministic dependency chain — because it reasons over a compiled knowledge graph, not over embeddings of text chunks.
- Does Memtrace replace my existing tools like LSP or GitHub Copilot? No. Memtrace is additive. It gives any agent a structural memory layer that LSP servers and embedding-based tools don't provide. Your existing tools keep working; agents simply gain access to 40+ additional MCP tools for graph-based reasoning.
- What is the performance impact of running Memtrace alongside my agent? Memtrace runs as a background daemon with a minimal resource footprint. The graph database is a single binary with no external dependencies. Query response times are millisecond-class at P95.
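Several answers above rest on Memtrace's bi-temporal model: every save becomes an episode stamped with both valid-time (when the version became true in the code) and transaction-time (when the indexer recorded it). Under that model, an "as-of" lookup reduces to filtering on both timestamps. A minimal in-memory sketch of the idea; the Episode shape and field names are illustrative, not Memtrace's schema:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    symbol: str          # e.g. a function's fully qualified name
    body: str            # snapshot of the symbol at this episode
    valid_from: int      # when this version became true in the code
    recorded_at: int     # when the indexer learned about it

def as_of(episodes, symbol, valid_time, tx_time):
    """Latest version of `symbol` that was valid at `valid_time`
    and already recorded by `tx_time` (a bi-temporal point query)."""
    candidates = [
        e for e in episodes
        if e.symbol == symbol
        and e.valid_from <= valid_time
        and e.recorded_at <= tx_time
    ]
    return max(candidates, key=lambda e: e.valid_from, default=None)

history = [
    Episode("app.main", "def main(): v1", valid_from=100, recorded_at=101),
    Episode("app.main", "def main(): v2", valid_from=200, recorded_at=205),
]
snapshot = as_of(history, "app.main", valid_time=210, tx_time=202)
# tx_time=202 predates the v2 episode's recording, so v1 is returned
```

Separating the two time axes is what lets an agent ask both "what did this function look like at time T?" and "what did the index believe about it at time T?", which a single git timeline cannot distinguish.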
Server Config
```json
{
  "mcpServers": {
    "memtrace": {
      "command": "memtrace",
      "args": ["mcp"],
      "env": {
        "MEMTRACE_ARCADEDB_BOLT_URL": "bolt://localhost:7687"
      }
    }
  }
}
```