
Tag: #agent-memory (20 results)

Scratchpad MCP

scratchpad-mcp is an MCP server that gives AI agents persistent, token-efficient storage. It solves a specific waste problem: agents constantly re-read files they've already seen, re-summarize documents they've already processed, and re-load context they've already understood. Every one of those round-trips burns tokens for no new information. This server fixes that with eight tools designed around how agents actually work:

- Versioned writes. write_file automatically versions every write and keeps the 10 most recent versions per file. Storage is append-only on success and atomic on failure, so partial writes can't corrupt state.
- Structured diffs. read_file accepts a since_version parameter and returns a JSON line-diff against that prior version instead of the full content. An agent that has already seen v1 can ask "what changed in v3?" and get back a small structured payload to reason about, not the entire file again (see the first sketch below).
- Append-only logs. append_log and read_log give agents an event stream they can replay. Cursor-based pagination (since_entry + last_entry_id + has_more) means an agent can checkpoint where it left off and resume cheaply (see the second sketch below).
- On-demand summaries. summarize_file calls Claude Haiku to summarize files over ~2,000 estimated tokens. Summaries are cached per file version, so repeat calls on an unchanged file cost nothing. The threshold is enforced server-side, so you can't accidentally pay to summarize something small.
- Per-agent isolation. Every operation is scoped by an agent_id parameter, so one server instance can serve many agents without leaking state between them.
- Storage limits. 1 MB per file write, 64 KB per log entry, and 1,000 files / 100k log entries / 100 MB total per agent: sane multi-tenant guardrails out of the box.

Backed by a single SQLite file (a Postgres migration is on the roadmap). All SQL is parameterized, paths are validated against a strict allowlist, and the security model is documented honestly: it's safe for one-user-per-process deployments today, and the V2 plan derives agent_id from the caller's API key for true multi-tenancy. Build agents that remember what they've already seen.
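A minimal sketch of the write-then-diff loop from the agent's side, using the MCP TypeScript client SDK. The tool names and parameters (write_file, read_file, since_version, agent_id) come from the description above; the launch command, the example path, and the shape of the returned payload are assumptions.

```typescript
// Sketch: write a file twice, then fetch only the line-diff between versions.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["scratchpad-mcp"], // hypothetical launch command; use your own setup
});
const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(transport);

// v1: initial write (the server versions every write automatically)
await client.callTool({
  name: "write_file",
  arguments: { agent_id: "agent-1", path: "notes/plan.md", content: "step 1" },
});

// v2: updated write
await client.callTool({
  name: "write_file",
  arguments: {
    agent_id: "agent-1",
    path: "notes/plan.md",
    content: "step 1\nstep 2",
  },
});

// Instead of re-reading the whole file, ask only for what changed since v1.
// The response is a JSON line-diff, so token cost scales with the change,
// not with the file size.
const diff = await client.callTool({
  name: "read_file",
  arguments: { agent_id: "agent-1", path: "notes/plan.md", since_version: 1 },
});
console.log(diff.content);
```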
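And the checkpoint-and-resume pattern on the log tools, reusing the connected client from the sketch above. since_entry, last_entry_id, and has_more are the documented cursor fields; the log-name parameter and the exact response shape are assumptions, so treat the parsing as illustrative.

```typescript
// Sketch: replay the event log in pages, checkpointing the cursor so a
// restarted agent resumes where it left off instead of re-reading old entries.
let checkpoint: number | undefined; // persist this wherever the agent keeps state

for (;;) {
  const result = await client.callTool({
    name: "read_log",
    arguments: { agent_id: "agent-1", log: "events", since_entry: checkpoint },
  });
  // Assumes the tool returns its JSON payload as a single text content item.
  const page = JSON.parse((result.content as any)[0].text);

  for (const entry of page.entries) {
    console.log("event:", entry); // application-specific handling goes here
  }

  checkpoint = page.last_entry_id; // save the cursor after each page
  if (!page.has_more) break; // caught up: nothing newer to replay
}
```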

Memtrace

Memtrace: Structural Memory for AI Coding Agents

The Problem

Every AI coding agent (Claude Code, Cursor, Codex, Copilot) starts each turn completely blank. It re-reads raw source files and re-derives the full call graph, type hierarchy, and import tree from scratch on every single invocation. That structural rework burns 60–90% of the context window before any real reasoning begins. Less than 5% of tokens in a typical agentic coding session contribute genuine new intelligence. The rest is expensive, redundant noise, and it compounds: accuracy drops 40% as sessions grow, stale context crowds out signal, and summaries strip out the structural relationships agents need most.

The Solution

Memtrace is a bi-temporal structural memory layer that turns your codebase into a live, queryable knowledge graph, compiled from the AST rather than guessed from embeddings. Every function, class, interface, and API endpoint becomes a typed node with deterministic relationships. Every file save becomes a queryable episode with timestamps, so agents can reason about structure, detect regressions, and time-travel through their own work without re-reading anything. One Rust binary. Zero configuration. Five-minute install.

What agents can do with it

- Find callers, callees, and dependencies instantly: no file scanning, no token waste
- Compute blast radius before making a change: know exactly what breaks before anything is touched (see the first sketch below)
- Detect structural drift between sessions: catch regressions the moment they happen, not at PR review (see the second sketch below)
- Time-travel through code evolution: query any prior state of any symbol, not just git commits
- Search across the full codebase with hybrid retrieval: BM25 full-text + HNSW vector + graph traversal fused in one query
- Map API topology across services: cross-repo HTTP call graphs, dependency chains, dead endpoint detection

Benefits

- −90% token cost on structural queries (Mem0)
- +26% accuracy on multi-step agentic tasks (Mem0)
- −91% p95 latency on structural lookups vs. RAG baselines
- +32.8% SWE-bench bug-fix success rate when agents have graph context (RepoGraph)
- 200–800 ms per-save re-indexing: every file save becomes a queryable episode in under a second
- 40+ MCP tools covering indexing, search, relationships, impact analysis, temporal evolution, API topology, graph algorithms, and direct Cypher queries
- 12 languages + 3 IaC formats supported via Tree-sitter grammars
- Local-first, closed-source Rust: code never leaves the machine, no account required, no telemetry
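To make the blast-radius and direct-Cypher claims concrete, here is a hedged sketch of what such a query could look like through an MCP client (the same connected-client pattern as in the sketches above). The tool name cypher_query, the Function label, and the CALLS relationship are assumptions about Memtrace's graph schema, not documented identifiers.

```typescript
// Sketch: blast radius of a change to parse_config, as a direct Cypher query.
const blast = await client.callTool({
  name: "cypher_query", // hypothetical: the docs only promise "direct Cypher queries"
  arguments: {
    query: `
      // every function that transitively calls the target, up to 3 hops out
      MATCH (caller:Function)-[:CALLS*1..3]->(target:Function {name: $name})
      RETURN DISTINCT caller.name AS caller, caller.file AS file
    `,
    params: { name: "parse_config" },
  },
});
console.log(blast.content); // the set of symbols a change could break
```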
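The bi-temporal episodes are what make drift detection between sessions expressible at all: query a symbol's callers at two timestamps and diff the sets. The tool name find_callers and the as_of parameter below are hypothetical stand-ins for whichever of the 40+ tools exposes time-travel lookups.

```typescript
// Sketch: surface structural drift by comparing a symbol's caller set at the
// start and end of a session. All identifiers here are illustrative.
const callersAt = async (asOf: string): Promise<Set<string>> => {
  const res = await client.callTool({
    name: "find_callers", // hypothetical relationship tool
    arguments: { symbol: "parse_config", as_of: asOf },
  });
  const rows = JSON.parse((res.content as any)[0].text) as { name: string }[];
  return new Set(rows.map((r) => r.name));
};

const before = await callersAt("2025-01-10T09:00:00Z"); // session start
const after = await callersAt("2025-01-10T17:00:00Z"); // session end
const added = [...after].filter((n) => !before.has(n));
const dropped = [...before].filter((n) => !after.has(n));
console.log({ added, dropped }); // drift caught before PR review, not during it
```

Because every file save is an episode, the comparison window can be arbitrarily fine-grained; the unit of history is the save, not the git commit.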