
Xrpl Utilities

MCP server for the XRPL-Utilities portfolio: 11 tools across four services.

- XR-Sentinel: classifies any XRPL wallet by its on-chain activity pattern. Returns a 0–100 activity_score, a Low/Medium/High/Dormant level, behavioral signals drawn from a 22-entry catalog, top counterparties with XRPScan labels, and an AI-generated reasoning narrative.
- XR-Pulse: normalized signal feed mixing public-source news (regulatory press, central banks, and crypto media filtered for XRP/RLUSD/XRPL), on-chain whale activity, and XLS-70/80/81 permissioned-domain lifecycle events. Each row carries a 4-hour XRPL price correlation, institutional watchlist labels, and Sentinel cross-references.
- XR-Telemetry: XRPL macro snapshot covering total, circulating, escrowed, and dormant supply, AMM-locked balances, exchange omnibus holdings, and DEX orderbook depth, plus a derived Active Float model with the full additive mathematical bridge. Two payment flows: inline x402, or an async invoice (deeplink + QR).
- XR-Trust: directory and drill-down for the XRPL permissioned-asset stack: PermissionedDomain (XLS-80) enumeration, credential-issuer aggregation, XLS-81 permissioned-DEX trade economics, and an XLS-40 DID identity bridge with .well-known/xrp-ledger.toml resolution.

Stateless passthrough proxy: every paid call uses the caller's own x402 v2 payment header (XRP or RLUSD), settled on XRPL mainnet via the t54 facilitator, at $0.10 USD per query. The MCP server holds no wallets and takes no cut.
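To make the "additive mathematical bridge" idea concrete, here is a minimal sketch of how such a supply bridge could be structured. The specific terms, their signs, and the figures are assumptions for illustration; the real XR-Telemetry service defines its own bridge.

```python
# Hypothetical sketch of an "Active Float" additive bridge, in the spirit of
# the XR-Telemetry description. All terms, signs, and figures are assumptions.

def active_float_bridge(total, escrowed, dormant, amm_locked, exchange_omnibus):
    """Walk from total supply down to an assumed active float, returning each
    intermediate step so the bridge is fully additive and auditable."""
    circulating = total - escrowed
    steps = [
        ("total_supply", total),
        ("minus_escrowed", -escrowed),
        ("minus_dormant", -dormant),
        ("minus_amm_locked", -amm_locked),
        ("minus_exchange_omnibus", -exchange_omnibus),
    ]
    active = total - escrowed - dormant - amm_locked - exchange_omnibus
    return circulating, steps, active

# Illustrative (made-up) figures, in billions of XRP:
circulating, steps, active = active_float_bridge(100.0, 40.0, 10.0, 1.0, 9.0)
print(circulating)  # 60.0
print(active)       # 40.0
# "Additive" means summing every step reproduces the final figure exactly:
assert abs(sum(value for _, value in steps) - active) < 1e-9
```

The point of the bridge form is auditability: each deduction is a named line item, so a consumer can verify that no supply is double-counted or dropped between the headline total and the derived float.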

Scratchpad Mcp

scratchpad-mcp is an MCP server that gives AI agents persistent, token-efficient storage. It solves a specific waste problem: agents constantly re-read files they've already seen, re-summarize documents they've already processed, and re-load context they've already understood. Every one of those round-trips burns tokens for no new information. This server fixes that with eight tools designed around how agents actually work:

- Versioned writes. write_file automatically versions every write and keeps the 10 most recent versions per file. Storage is append-only on success and atomic on failure, so partial writes can't corrupt state.
- Structured diffs. read_file accepts a since_version parameter and returns a JSON line-diff against that prior version instead of the full content. An agent that has already seen v1 can ask "what changed in v3?" and get a small structured payload it can reason about, not the entire file again.
- Append-only logs. append_log and read_log give agents an event stream they can replay. Cursor-based pagination (since_entry + last_entry_id + has_more) means an agent can checkpoint where it left off and resume cheaply.
- On-demand summaries. summarize_file calls Claude Haiku to summarize files over ~2000 estimated tokens. Summaries are cached per file version, so repeat calls on an unchanged file cost nothing. The threshold is enforced server-side, so you can't accidentally pay to summarize something small.
- Per-agent isolation. Every operation is scoped by an agent_id parameter, so one server instance can serve many agents without leaking state between them.
- Storage limits. 1 MB per file write, 64 KB per log entry, and 1000 files / 100k log entries / 100 MB total per agent: sane multi-tenant guardrails out of the box.

Backed by a single SQLite file (a Postgres migration is on the roadmap). All SQL is parameterized, paths are validated against a strict allowlist, and the security model is documented honestly: it's safe for one-user-per-process deployments today, and the V2 plan derives agent_id from the caller's API key for true multi-tenancy. Build agents that remember what they've already seen.

Memtrace

Memtrace: Structural Memory for AI Coding Agents

The Problem. Every AI coding agent (Claude Code, Cursor, Codex, Copilot) starts each turn completely blank. It re-reads raw source files and re-derives the full call graph, type hierarchy, and import tree from scratch on every single invocation. That structural rework burns 60–90% of the context window before any real reasoning begins. Less than 5% of tokens in a typical agentic coding session contribute genuine new intelligence. The rest is expensive, redundant noise, and it compounds: accuracy drops 40% as sessions grow, stale context crowds out signal, and summaries strip out the structural relationships agents need most.

The Solution. Memtrace is a bi-temporal structural memory layer that turns your codebase into a live, queryable knowledge graph, compiled from the AST rather than guessed from embeddings. Every function, class, interface, and API endpoint becomes a typed node with deterministic relationships. Every file save becomes a queryable episode with timestamps, so agents can reason about structure, detect regressions, and time-travel through their own work without re-reading anything. One Rust binary. Zero configuration. Five-minute install.
What agents can do with it:

- Find callers, callees, and dependencies instantly: no file scanning, no token waste
- Compute blast radius before making a change: know exactly what breaks before anything is touched
- Detect structural drift between sessions: catch regressions the moment they happen, not at PR review
- Time-travel through code evolution: query any prior state of any symbol, not just git commits
- Search across the full codebase with hybrid retrieval: BM25 full-text + HNSW vector + graph traversal fused in one query
- Map API topology across services: cross-repo HTTP call graphs, dependency chains, dead endpoint detection

Benefits:

- −90% token cost on structural queries (Mem0)
- +26% accuracy on multi-step agentic tasks (Mem0)
- −91% p95 latency on structural lookups vs. RAG baselines
- +32.8% SWE-bench bug-fix success rate when agents have graph context (RepoGraph)
- 200–800 ms per-save re-indexing: every file save is a queryable episode in under a second
- 40+ MCP tools covering indexing, search, relationships, impact analysis, temporal evolution, API topology, graph algorithms, and direct Cypher queries
- 12 languages + 3 IaC formats supported via Tree-sitter grammars
- Local-first, closed-source Rust: code never leaves the machine, no account required, no telemetry
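The "blast radius" query above is, conceptually, a traversal of the reverse call graph. Here is a minimal Python sketch of that idea; the graph, function name, and schema are illustrative only (Memtrace is closed-source Rust and exposes this via MCP tools, not this API).

```python
from collections import deque

# Conceptual sketch of a "blast radius" query over a call graph, the kind
# of question Memtrace answers from its knowledge graph. The example graph
# and function are illustrative; Memtrace's real tools and schema differ.

def blast_radius(callers: dict[str, set[str]], changed: str) -> set[str]:
    """Return every symbol that transitively calls `changed`, i.e.
    everything that could break if `changed` is edited."""
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        for caller in callers.get(node, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

# callers[x] = set of functions that call x (the reverse call graph)
callers = {
    "parse": {"load_config", "lint"},
    "load_config": {"main"},
    "lint": {"main", "ci_hook"},
}
print(sorted(blast_radius(callers, "parse")))
# ['ci_hook', 'lint', 'load_config', 'main']
```

Because the graph is precomputed at index time, answering this is a handful of edge lookups rather than a scan of every source file, which is where the token and latency savings come from.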

Chain.Love MCP

## Overview

### What is Chain.Love MCP?

Chain.Love MCP is a hosted remote MCP server and gateway for AI agents. It provides a single endpoint for discovering and comparing Web3 infrastructure services across 50+ blockchain networks, including RPCs, indexing, oracles, storage, compute, and developer tools.

### How to use Chain.Love MCP

Add the hosted endpoint to your MCP client and connect to `https://app.chain.love/mcp` over Streamable HTTP. For public use cases, the basic MCP server URL is enough. For private downstream MCPs, add credentials only when required, using `x-chainlove-cred-<credentialKey>` headers.

### Key features of Chain.Love MCP

- Hosted remote MCP gateway for AI agents
- Single endpoint for Web3 infrastructure discovery across 50+ blockchain networks
- Aggregates infrastructure options across RPCs, indexing, oracles, storage, compute, and developer tools
- Streamable HTTP transport
- Public documentation and onboarding resources available online

### Use cases of Chain.Love MCP

- Discovering and comparing Web3 infrastructure providers across many blockchain networks
- Finding RPC, indexing, oracle, storage, compute, and developer tooling options through one MCP server
- Giving AI agents a single hosted integration surface for Web3 infrastructure discovery
- Reducing the need to integrate many separate provider-specific endpoints

### FAQ about Chain.Love MCP

- Can Chain.Love MCP be used as a hosted remote MCP server? Yes. Chain.Love MCP is designed to be consumed as a hosted remote MCP endpoint at `https://app.chain.love/mcp`.
- Does Chain.Love MCP require credentials? Not always. Some downstream integrations may require credentials, which can be passed using `x-chainlove-cred-<credentialKey>` headers when needed.
- How do I know which credential header to use? Check the open-source Chain.Love registry at `https://github.com/Chain-Love/chain-love/blob/main/references/offers/mcpservers.csv` or browse `https://app.chain.love/toolbox/mcpservers` and look for the relevant `credentialKey` value.
- Where can I learn more? Landing page: `https://www.chain.love/mcp-gateway`. Documentation: `https://chain-love.gitbook.io/mcp-module`.
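As a concrete sketch of the credential-header convention, the snippet below builds the headers a client would attach when connecting to the gateway. The gateway URL and `x-chainlove-cred-<credentialKey>` pattern come from the description above; the example credential key and token are made up, and real keys come from the registry's `credentialKey` column.

```python
# Sketch of wiring credentials for a private downstream MCP behind the
# Chain.Love gateway. The credential key "alchemy-api-key" and the token
# are hypothetical examples; look up real credentialKey values in the
# Chain.Love registry.

GATEWAY_URL = "https://app.chain.love/mcp"

def chainlove_headers(credentials: dict[str, str]) -> dict[str, str]:
    """Map each credentialKey to the x-chainlove-cred-<credentialKey>
    header the gateway expects. For public use, no headers are needed."""
    return {f"x-chainlove-cred-{key}": value
            for key, value in credentials.items()}

headers = chainlove_headers({"alchemy-api-key": "sk-example-123"})
# headers == {"x-chainlove-cred-alchemy-api-key": "sk-example-123"}
# Pass these headers to your MCP client's Streamable HTTP transport
# when opening the connection to GATEWAY_URL.
```

For public discovery queries the headers dict is simply omitted; the gateway only needs credentials when it must forward your call to a private downstream MCP.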