
Tag: #memo (213 results found)

Scratchpad MCP

scratchpad-mcp is an MCP server that gives AI agents persistent, token-efficient storage. It solves a specific waste problem: agents constantly re-read files they've already seen, re-summarize documents they've already processed, and re-load context they've already understood. Every one of those round-trips burns tokens for no new information. This server fixes that with eight tools designed around how agents actually work:

- Versioned writes. write_file automatically versions every write and keeps the 10 most recent versions per file. Storage is append-only on success and atomic on failure, so partial writes can't corrupt state.
- Structured diffs. read_file accepts a since_version parameter and returns a JSON line-diff against that prior version instead of the full content. Agents that have already seen v1 can ask "what changed in v3?" and get a small structured payload they can reason about, not the entire file again.
- Append-only logs. append_log and read_log give agents an event stream they can replay. Cursor-based pagination (since_entry + last_entry_id + has_more) means an agent can checkpoint where it left off and resume cheaply.
- On-demand summaries. summarize_file calls Claude Haiku to summarize files over ~2000 estimated tokens. Summaries are cached per file version, so repeat calls on an unchanged file cost nothing. The threshold is enforced server-side, so you can't accidentally pay to summarize something small.
- Per-agent isolation. Every operation is scoped by an agent_id parameter, so one server instance can serve many agents without leaking state between them.
- Storage limits. 1 MB per file write, 64 KB per log entry, and 1000 files / 100k log entries / 100 MB total per agent: sane multi-tenant guardrails out of the box.

Backed by a single SQLite file (a Postgres migration is on the roadmap). All SQL is parameterized, paths are validated against a strict allowlist, and the security model is documented honestly: it's safe for one-user-per-process deployments today, and the V2 plan derives agent_id from the caller's API key for true multi-tenancy. Build agents that remember what they've already seen.
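The tool names and parameters used below (write_file, read_file with since_version, append_log, read_log with a since_entry cursor, and the agent_id scope) come from the description above; the launch command, the remaining argument names, and the shape of the returned payloads are assumptions. A minimal sketch of the write-once, diff-later workflow using the official MCP Python SDK:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# How the server is launched is a guess; point this at your scratchpad-mcp install.
server = StdioServerParameters(command="scratchpad-mcp", args=[])


async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Two versioned writes: each call stores a new version (10 kept per file).
            for draft in ("v1 of the plan", "v2 of the plan"):
                await session.call_tool(
                    "write_file",
                    {"agent_id": "agent-1", "path": "notes/plan.md", "content": draft},
                )

            # Structured diff: ask only for what changed since version 1,
            # instead of re-reading the whole file.
            diff = await session.call_tool(
                "read_file",
                {"agent_id": "agent-1", "path": "notes/plan.md", "since_version": 1},
            )

            # Append-only log plus a cursor-based read; the response is expected
            # to carry last_entry_id / has_more so a later read can resume there.
            await session.call_tool(
                "append_log", {"agent_id": "agent-1", "entry": "finished step 1"}
            )
            page = await session.call_tool(
                "read_log", {"agent_id": "agent-1", "since_entry": 0}
            )
            print(diff, page)


asyncio.run(main())
```

The second read_file call is where the savings come from: an agent that already holds v1 receives a small JSON line-diff rather than the whole file again.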

Memtrace

Memtrace: Structural Memory for AI Coding Agents

The Problem

Every AI coding agent (Claude Code, Cursor, Codex, Copilot) starts each turn completely blank. It re-reads raw source files and re-derives the full call graph, type hierarchy, and import tree from scratch on every single invocation. That structural rework burns 60–90% of the context window before any real reasoning begins. Less than 5% of tokens in a typical agentic coding session contribute genuine new intelligence. The rest is expensive, redundant noise, and it compounds: accuracy drops 40% as sessions grow, stale context crowds out signal, and summaries strip out the structural relationships agents need most.

The Solution

Memtrace is a bi-temporal structural memory layer that turns your codebase into a live, queryable knowledge graph, compiled from the AST rather than guessed from embeddings. Every function, class, interface, and API endpoint becomes a typed node with deterministic relationships. Every file save becomes a queryable episode with timestamps, so agents can reason about structure, detect regressions, and time-travel through their own work without re-reading anything. One Rust binary. Zero configuration. Five-minute install.

What agents can do with it

- Find callers, callees, and dependencies instantly: no file scanning, no token waste
- Compute blast radius before making a change: know exactly what breaks before anything is touched
- Detect structural drift between sessions: catch regressions the moment they happen, not at PR review
- Time-travel through code evolution: query any prior state of any symbol, not just git commits
- Search across the full codebase with hybrid retrieval: BM25 full-text + HNSW vector + graph traversal fused in one query
- Map API topology across services: cross-repo HTTP call graphs, dependency chains, dead endpoint detection

Benefits

- −90% token cost on structural queries (Mem0)
- +26% accuracy on multi-step agentic tasks (Mem0)
- −91% p95 latency on structural lookups vs. RAG baselines
- +32.8% SWE-bench bug-fix success rate when agents have graph context (RepoGraph)
- 200–800 ms per-save re-indexing: every file save becomes a queryable episode in under a second
- 40+ MCP tools covering indexing, search, relationships, impact analysis, temporal evolution, API topology, graph algorithms, and direct Cypher queries
- 12 languages + 3 IaC formats supported via Tree-sitter grammars
- Local-first, closed-source Rust: code never leaves the machine, no account required, no telemetry
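Memtrace's 40+ MCP tools aren't enumerated above, so the tool names and argument shapes in this sketch (find_callers, blast_radius, symbol_at_time) are hypothetical stand-ins. The point is only to show the agent-side pattern the entry describes: asking the graph for callers, blast radius, and prior states over an already-connected MCP session instead of re-reading source files.

```python
from mcp import ClientSession


async def impact_of_change(session: ClientSession, symbol: str) -> dict:
    """Query a connected Memtrace MCP session about `symbol` before editing it.

    Tool names and arguments here are hypothetical; substitute whichever
    tools the memtrace binary actually registers.
    """
    # Who calls this symbol today? (graph lookup, no file scanning)
    callers = await session.call_tool("find_callers", {"symbol": symbol})

    # What would break if it changed? (transitive dependents, bounded depth)
    radius = await session.call_tool("blast_radius", {"symbol": symbol, "depth": 2})

    # Time-travel: what did the symbol look like in an earlier episode?
    before = await session.call_tool(
        "symbol_at_time", {"symbol": symbol, "as_of": "2025-06-01T00:00:00Z"}
    )
    return {"callers": callers, "blast_radius": radius, "before": before}
```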

MCP Server RABEL

🧠 RABEL MCP Server
Recidive Active Brain Environment Layer

Local-first AI memory with semantic search, graph relations, and soft pipelines. Mem0 inspired, HumoticaOS evolved. By Jasper & Root AI from HumoticaOS 💙

🚀 Quick Start

```bash
# Install
pip install mcp-server-rabel

# For full features (vector search)
pip install mcp-server-rabel[full]

# Add to Claude CLI
claude mcp add rabel -- python -m mcp_server_rabel

# Verify
claude mcp list
# rabel: ✓ Connected
```

🤔 What is RABEL?

RABEL gives AI assistants persistent memory that works 100% locally.

Before RABEL:
AI: "Who is Storm?" → "I don't know, you haven't told me"

After RABEL:
You: "Remember: Storm is Jasper's 7-year-old son"
AI: *saves to RABEL*
Later...
You: "Who is Storm?"
AI: *searches RABEL* → "Storm is Jasper's 7-year-old son!"

No cloud. No API keys. No data leaving your machine.

🛠️ Available Tools

| Tool | Description |
| --- | --- |
| rabel_hello | Test if RABEL is working |
| rabel_add_memory | Add a memory (fact, experience, knowledge) |
| rabel_search | Semantic search through memories |
| rabel_add_relation | Add a graph relation (A --rel--> B) |
| rabel_get_relations | Query the knowledge graph |
| rabel_get_guidance | Get soft pipeline hints (EN/NL) |
| rabel_next_step | What should I do next? |
| rabel_stats | Memory statistics |

📖 Examples

Adding Memories

```python
# Remember facts
rabel_add_memory(content="Jasper is the founder of HumoticaOS", scope="user")
rabel_add_memory(content="TIBET handles trust and provenance", scope="team")
rabel_add_memory(content="Always validate input before processing", scope="agent")
```

Searching Memories

```python
# Semantic search - ask questions naturally
rabel_search(query="Who founded HumoticaOS?")
# → Returns: "Jasper is the founder of HumoticaOS"

rabel_search(query="What handles trust?")
# → Returns: "TIBET handles trust and provenance"
```

Knowledge Graph

```python
# Add relations
rabel_add_relation(subject="Jasper", predicate="father_of", object="Storm")
rabel_add_relation(subject="TIBET", predicate="part_of", object="HumoticaOS")
rabel_add_relation(subject="RABEL", predicate="part_of", object="HumoticaOS")

# Query relations
rabel_get_relations(subject="Jasper")
# → Jasper --father_of--> Storm

rabel_get_relations(predicate="part_of")
# → TIBET --part_of--> HumoticaOS
# → RABEL --part_of--> HumoticaOS
```

Soft Pipelines (Bilingual!)

```python
# Get guidance in English
rabel_get_guidance(intent="solve_puzzle", lang="en")
# → "Puzzle: Read → Analyze → Attempt → Verify → Document"

# Get guidance in Dutch
rabel_get_guidance(intent="solve_puzzle", lang="nl")
# → "Puzzel: Lezen → Analyseren → Proberen → Verifiëren → Documenteren"

# What's next?
rabel_next_step(intent="solve_puzzle", completed=["read", "analyze"])
# → Suggested next step: "attempt"
```

🏗️ Architecture

Memory Layer → semantic facts with embeddings
Graph Layer → relations between entities
Soft Pipelines → guidance without enforcement (EN/NL)

Storage: SQLite + sqlite-vec (optional)
Embeddings: Ollama nomic-embed-text (optional)

100% LOCAL, zero cloud dependencies.

Graceful Degradation

RABEL works with minimal dependencies (a sketch of the fallback pattern follows this entry):

| Feature | Without extras | With [full] |
| --- | --- | --- |
| Text memories | ✅ | ✅ |
| Text search | ✅ (LIKE query) | ✅ (semantic) |
| Graph relations | ✅ | ✅ |
| Soft pipelines | ✅ | ✅ |
| Vector search | ❌ | ✅ |
| Embeddings | ❌ | ✅ (Ollama) |

🌍 Philosophy

"LOKAAL EERST - het systeem MOET werken zonder internet"
(LOCAL FIRST - the system MUST work without internet)

RABEL is built on the belief that:
- Your data stays yours: no cloud, no tracking, no API keys
- Soft guidance beats hard rules: pipelines suggest, they don't enforce
- Bilingual by default: Dutch & English, more coming
- Graceful degradation: works with minimal deps, better with more

🙏 Credits

Inspired by Mem0, thank you for the architecture insights! We took their ideas and made them:
- 100% local-first
- Bilingual (EN/NL)
- With soft pipelines
- With graph relations

🏢 Part of HumoticaOS

RABEL is part of a larger ecosystem:

| Package | Purpose | Status |
| --- | --- | --- |
| mcp-server-tibet | Trust & Provenance | ✅ Available |
| mcp-server-rabel | Memory & Knowledge | ✅ Available |
| mcp-server-betti | Complexity Management | 🔜 Coming |

📞 Contact

HumoticaOS
Website: humotica.com
GitHub: github.com/jaspertvdm

📜 License

MIT License - One love, one fAmIly 💙

Built with love in Den Dolder, Netherlands
By Jasper & Root AI - December 2025
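The Graceful Degradation table above says text search falls back from semantic matching to a plain SQLite LIKE query when the [full] extras aren't installed. The sketch below illustrates only that fallback pattern; it is not RABEL's actual code, and the memories table/column names are assumptions made for the example.

```python
import sqlite3


def semantic_search_available() -> bool:
    """True when the optional [full] extras (sqlite-vec) are importable."""
    try:
        import sqlite_vec  # noqa: F401  # installed by mcp-server-rabel[full]
        return True
    except ImportError:
        return False


def search_memories(conn: sqlite3.Connection, query: str) -> list[str]:
    """Substring fallback search; always works on a minimal install.

    Table and column names ("memories", "content") are assumptions.
    """
    rows = conn.execute(
        "SELECT content FROM memories WHERE content LIKE ?",
        (f"%{query}%",),
    ).fetchall()
    return [row[0] for row in rows]
```

On a full install, a check like semantic_search_available() would route the query through embeddings (the entry names Ollama nomic-embed-text) and a sqlite-vec nearest-neighbour lookup instead of the LIKE match.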