
Tag: #ast (360 results found)

Memtrace

**Memtrace — Structural Memory for AI Coding Agents**

## The Problem

Every AI coding agent — Claude Code, Cursor, Codex, Copilot — starts each turn completely blank. It re-reads raw source files and re-derives the full call graph, type hierarchy, and import tree from scratch on every single invocation. That structural rework burns 60–90% of the context window before any real reasoning begins. Less than 5% of tokens in a typical agentic coding session contribute genuinely new intelligence. The rest is expensive, redundant noise, and it compounds: accuracy drops 40% as sessions grow, stale context crowds out signal, and summaries strip out the structural relationships agents need most.

## The Solution

Memtrace is a bi-temporal structural memory layer that turns your codebase into a live, queryable knowledge graph — compiled from the AST, not guessed from embeddings. Every function, class, interface, and API endpoint becomes a typed node with deterministic relationships. Every file save becomes a queryable episode with timestamps, so agents can reason about structure, detect regressions, and time-travel through their own work without re-reading anything. One Rust binary. Zero configuration. Five-minute install.

## What Agents Can Do with It

- Find callers, callees, and dependencies instantly — no file scanning, no token waste
- Compute blast radius before making a change — know exactly what breaks before anything is touched
- Detect structural drift between sessions — catch regressions the moment they happen, not at PR review
- Time-travel through code evolution — query any prior state of any symbol, not just git commits
- Search across the full codebase with hybrid retrieval — BM25 full-text + HNSW vector + graph traversal fused in one query
- Map API topology across services — cross-repo HTTP call graphs, dependency chains, dead endpoint detection

## Benefits

- −90% token cost on structural queries (Mem0)
- +26% accuracy on multi-step agentic tasks (Mem0)
- −91% p95 latency on structural lookups vs. RAG baselines
- +32.8% SWE-bench bug-fix success rate when agents have graph context (RepoGraph)
- 200–800ms per-save re-indexing — every file save is a queryable episode in under a second
- 40+ MCP tools covering indexing, search, relationships, impact analysis, temporal evolution, API topology, graph algorithms, and direct Cypher queries (see the sketch below)
- 12 languages + 3 IaC formats supported via Tree-sitter grammars
- Local-first, closed-source Rust — code never leaves the machine, no account required, no telemetry
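As a minimal sketch of what the "direct Cypher queries" tooling could look like at the protocol level, here is an MCP `tools/call` request. The tool name `cypher_query`, the `Function` node label, and the `CALLS` relationship type are illustrative assumptions, not Memtrace's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "cypher_query",
    "arguments": {
      "query": "MATCH (caller)-[:CALLS*1..3]->(f:Function {name: 'save_user'}) RETURN DISTINCT caller.name"
    }
  }
}
```

Because the relationships are compiled from the AST, a query like this can return a deterministic caller set rather than a similarity-ranked guess, which is what makes blast-radius computation possible without re-reading source files.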

Chain.Love MCP

## Overview

### What is Chain.Love MCP?

Chain.Love MCP is a hosted remote MCP server and gateway for AI agents. It provides a single endpoint for discovering and comparing Web3 infrastructure services across 50+ blockchain networks, including RPCs, indexing, oracles, storage, compute, and developer tools.

### How to use Chain.Love MCP?

To use Chain.Love MCP, add the hosted endpoint to your MCP client and connect to `https://app.chain.love/mcp` over Streamable HTTP. For public use cases, the basic MCP server URL is enough. For private downstream MCPs, add credentials only when required, using `x-chainlove-cred-<credentialKey>` headers. A sample client config appears at the end of this entry.

### Key features of Chain.Love MCP

- Hosted remote MCP gateway for AI agents
- Single endpoint for Web3 infrastructure discovery across 50+ blockchain networks
- Aggregates infrastructure options across RPCs, indexing, oracles, storage, compute, and developer tools
- Streamable HTTP transport
- Public documentation and onboarding resources available online

### Use cases of Chain.Love MCP

- Discovering and comparing Web3 infrastructure providers across many blockchain networks
- Finding RPC, indexing, oracle, storage, compute, and developer tooling options through one MCP server
- Giving AI agents a single hosted integration surface for Web3 infrastructure discovery
- Reducing the need to integrate many separate provider-specific endpoints

### FAQ about Chain.Love MCP

- **Can Chain.Love MCP be used as a hosted remote MCP server?** Yes. Chain.Love MCP is designed to be consumed as a hosted remote MCP endpoint at `https://app.chain.love/mcp`.
- **Does Chain.Love MCP require credentials?** Not always. Some downstream integrations may require credentials, which can be passed using `x-chainlove-cred-<credentialKey>` headers when needed.
- **How do I know which credential header to use?** Check the open-source Chain.Love registry at `https://github.com/Chain-Love/chain-love/blob/main/references/offers/mcpservers.csv` or browse `https://app.chain.love/toolbox/mcpservers` and look for the relevant `credentialKey` value.
- **Where can I learn more?** Landing page: `https://www.chain.love/mcp-gateway`. Documentation: `https://chain-love.gitbook.io/mcp-module`.
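Client configuration formats vary by MCP client; as a minimal sketch, a Claude Code-style `.mcp.json` entry for the gateway could look like the following. The server name `chain-love`, the credential key `exampleProvider`, and the token value are placeholders, and the `headers` block is only needed for private downstream MCPs:

```json
{
  "mcpServers": {
    "chain-love": {
      "type": "http",
      "url": "https://app.chain.love/mcp",
      "headers": {
        "x-chainlove-cred-exampleProvider": "YOUR_CREDENTIAL"
      }
    }
  }
}
```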

Pro Tools MCP

# protools-mcp

**A natural language interface for Avid Pro Tools, powered by Claude and the Model Context Protocol.**

protools-mcp is a local [MCP (Model Context Protocol)](https://modelcontextprotocol.io) server that connects Claude — or any MCP-compatible AI assistant — to a live Pro Tools session via the PTSL (Pro Tools Scripting Library) API. Instead of navigating menus or writing scripts, you describe what you need in plain language and Claude handles it directly in your session. Built for podcast post-production workflows, it works equally well across music production, broadcast, and audio post.

---

## What It Looks Like in Practice

Once connected, you can have conversations like:

> *"What's in this session?"*
> *"Search the transcript for everywhere the guest mentions climate change."*
> *"Set a marker at 00:14:32:00 called 'Act Two'."*
> *"Mute all the music tracks and solo the host."*
> *"What clips are on the timeline between 22 and 35 minutes?"*
> *"Save a new version of this session called EP47_mix_v2."*
> *"Export the dialogue tracks as an AAF to the delivery folder."*

Claude reads your live session state, answers questions about your timeline, and executes write operations directly in Pro Tools — no scripting, no keyboard shortcuts, no menu diving.

---

## Capabilities

25+ tools across 7 functional groups:

| Group | Tools | Description |
|-------|-------|-------------|
| **Session** | `get_session_info`, `get_markers`, `get_track_list`, `get_session_snapshot`, `get_show_profile` | Session metadata, tracks, markers, show profiles |
| **Tracks** | `get_track_edl`, `get_track_playlists`, `get_clips_in_range` | Clip-level detail, playlists, time-range queries |
| **Transcript** | `get_transcript`, `search_transcript`, `get_transcript_for_range` | Speech-to-text CSV search with context and speaker labels |
| **Navigation** | `get_playhead_position`, `get_current_selection`, `set_playhead` | Playhead and selection state |
| **Edit** | `select_region`, `create_marker`, `mute_track`, `unmute_track`, `solo_track`, `consolidate_clip` | Session modifications (Claude confirms before executing) |
| **Session Mgmt** | `save_session`, `close_session`, `open_session`, `save_session_as` | Save, close, open, and version sessions |
| **Export** | `export_tracks_as_aaf` | AAF export with configurable format, bit depth, and copy option |

An optional **Show Profile** system lets you define per-show configuration — host names, track layouts, naming conventions — so Claude has the context it needs to work intelligently across multiple shows.

---

## Prerequisites

- **macOS** with Pro Tools running (PTSL listens on `localhost:31416`)
- **Python 3.11+** (tested with 3.11)
- **py-ptsl** installed system-wide or in a virtual environment
- **Claude Desktop** or **Claude Code** (for MCP integration)
- **Accessibility permission** for Claude/terminal (required for AAF export dialog automation)

---

## Setup

1. **Clone this repository:**

   ```bash
   git clone https://github.com/BlueElevatorProductions/protools-mcp.git
   cd protools-mcp
   ```

2. **Create a virtual environment:**

   ```bash
   python3 -m venv venv --system-site-packages
   source venv/bin/activate
   pip install -r requirements.txt --no-cache-dir
   ```

   The `--system-site-packages` flag reuses your system-wide `py-ptsl` and `grpcio` installs.

3. **Configure `.env`** (optional — defaults shown):

   ```
   PTSL_HOST=localhost
   PTSL_PORT=31416
   ```

4. **Add show profiles** (optional): Place JSON files in `show_profiles/`. See `show_profiles/holy_uncertain.json` for the format.
5. **Register with Claude Desktop** — add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

   ```json
   {
     "mcpServers": {
       "protools-mcp": {
         "command": "/path/to/protools-mcp/venv/bin/python",
         "args": ["/path/to/protools-mcp/server.py"]
       }
     }
   }
   ```

   This makes the server available in Claude Desktop Chat, Cowork, and Code sessions. For **Claude Code CLI only**, use:

   ```bash
   claude mcp add protools-mcp -s user -- /path/to/protools-mcp/venv/bin/python /path/to/protools-mcp/server.py
   ```

6. **Grant Accessibility access** (required for AAF export automation): System Settings > Privacy & Security > Accessibility — enable Claude Desktop and/or your terminal app.

7. **Open Pro Tools** with a session loaded. The MCP server connects lazily on the first tool call.

---

## Tool Reference

### Session Context (read-only)

- **`get_session_snapshot()`** — Composite: session info + markers + tracks + auto-matched show profile. **Start here.** Best tool to call at the beginning of any session conversation.
- **`get_session_info()`** — Session name, path, sample rate, bit depth, timecode format, track count, audio file count.
- **`get_markers()`** — All memory location markers with index, name, timecode, and comment.
- **`get_track_list(filter="all")`** — Tracks with active, muted, soloed, and hidden state. Filter: `all`, `active`, `audio`, `inactive`.
- **`get_show_profile(show_id?)`** — Returns show profile config. Auto-infers from the session name prefix if no ID is given.

### Track Detail (read-only)

- **`get_track_edl(track_name)`** — Full clip list for a track: clip name, start/end timecodes, duration, state.
- **`get_track_playlists(track_name)`** — All playlists on a track, including inactive alternates.
- **`get_clips_in_range(start_timecode, end_timecode, track_filter?)`** — All clips across tracks within a timecode range.

### Transcript (read-only)

- **`get_transcript()`** — Full transcript from a Pro Tools Speech-to-Text CSV export.
- **`search_transcript(query, track_filter?, start_timecode?, end_timecode?)`** — Keyword search with a 2-row context window.
- **`get_transcript_for_range(start_timecode, end_timecode)`** — Transcript rows in a time range, formatted as `SPEAKER: text` dialogue.

### Navigation

- **`get_playhead_position()`** — Current playhead timecode.
- **`get_current_selection()`** — Start, end, duration, and selected track names.
- **`set_playhead(timecode)`** — Moves the playhead to a specified timecode.

### Edit Operations (write)

All write tools are labeled `[WRITE]`. Claude will describe the operation and confirm before executing.

- **`select_region(start_timecode, end_timecode, track_names?)`** — Sets the timeline selection. Non-destructive.
- **`create_marker(name, timecode, comment?)`** — Adds a memory location marker at the specified timecode.
- **`mute_track(track_name)`** / **`unmute_track(track_name)`** — Toggles track mute state.
- **`solo_track(track_name)`** — Solos a track.
- **`consolidate_clip(track_name, start_timecode, end_timecode)`** — Consolidates a region into a single clip. **Creates a new audio file on disk.**

### Session Management (write)

- **`save_session()`** — Saves the current session to disk.
- **`save_session_as(session_name, session_location)`** — Saves with a new name. `session_name` is the filename without extension; `session_location` is the target directory.
- **`close_session(save_before_close=True)`** — Closes the session, optionally saving first.
- **`open_session(session_path)`** — Opens a `.ptx` or `.ptf` session file.
### Export (write)

- **`export_tracks_as_aaf(...)`** — Exports selected tracks as an AAF. Handles the Pro Tools folder dialog automatically via osascript.
  - `audio_format`: `WAV` (default), `AIFF`, `MXF`, `Embedded`
  - `bit_depth`: `24` (default), `16`
  - `copy_option`: `copy` (default), `consolidate`, `link`
  - `quantize_to_frame`: `true` (default)
  - `avid_compatible`: `false` (default) — enforce Media Composer compatibility
  - `stereo_as_multichannel`: `false` (default)
  - `sequence_name`: defaults to `file_name`

---

## Transcript Support

The transcript tools expect a Pro Tools Speech-to-Text CSV export alongside the session. Place the CSV in your session directory (or set `transcript_export_path` in your show profile) and the server will discover it automatically. It reloads on file modification, so the data stays current as you iterate on transcripts.

---

## Show Profile Format

Show profiles let Claude understand the structure of a specific show — which tracks belong to which speakers, naming conventions, and where exports live. Place JSON files in `show_profiles/`. Profiles are auto-matched by session name prefix.

```json
{
  "show_id": "HU",
  "show_name": "Holy Uncertain",
  "session_name_prefix": "HU-",
  "hosts": ["Chris", "Lauren"],
  "dialogue_tracks": ["Chris", "Lauren Int R", "Chris Int R"],
  "guest_tracks": ["Randy Int R"],
  "music_tracks": ["Music"],
  "transcript_export_path": "/path/to/episodes/",
  "naming_conventions": {
    "session": "HU-{episode_number}-{guest_last_name}-V{version}",
    "export": "HU-{episode_number}-{guest_last_name}-MIX-V{version}"
  }
}
```

---

## Architecture

```
Claude Desktop ──stdio──▶ server.py (FastMCP)
                                │
                  ┌─────────────┼─────────────┐
                  ▼             ▼             ▼
             PTSLBridge    Transcript    ShowProfile
               (gRPC)        Watcher        Loader
                  │             │             │
                  ▼             ▼             ▼
             Pro Tools      CSV files    JSON files
               :31416
```

- **PTSLBridge** — Lazy gRPC connection with auto-reconnect. The `@ptsl_command` decorator handles errors uniformly. Custom `Operation` subclasses cover PTSL commands not in py-ptsl's ops module.
- **TranscriptWatcher** — Stat-based CSV cache. Reloads only when the file's `mtime` changes. Auto-discovers the CSV by searching the session directory.
- **ShowProfileLoader** — Reads `show_profiles/*.json` once at startup, matches sessions by name prefix.
- **osascript integration** — For PTSL commands that trigger Pro Tools dialogs (e.g., AAF export), the bridge runs the command in a background thread and uses System Events to dismiss the dialog automatically. Requires Accessibility permission.

---

## Error Handling

All PTSL errors return structured dicts before being raised as `ToolError`:

| Error Key | Meaning |
|-----------|---------|
| `ptsl_unavailable` | Pro Tools not running or gRPC connection lost |
| `no_session` | No session is open in Pro Tools |
| `ptsl_command_error` | PTSL command failed (details in message) |
| `no_transcript` | No transcript CSV found or configured |
| `dialog_waiting` | AAF export dialog needs manual confirmation (Accessibility not granted) |

---

## Implementation Notes

- **Timecode format**: Pro Tools uses `HH:MM:SS:FF`. Markers return raw sample positions internally; the bridge converts using `samples_to_timecode(samples, sample_rate, fps)` (see the sketch after this list).
- **Track `active` field**: Derived from `is_inactive == TAState_None` on `TrackAttributes`. Distinct from muted/hidden.
- **EDL text**: Parsed from Pro Tools' tab-delimited text export with columns: `CHANNEL`, `EVENT`, `CLIP NAME`, `START TIME`, `END TIME`, `DURATION`, `STATE`.
- **Pro Tools quirks**: `SaveSessionAs` and directory paths require a trailing `/`. Some commands (`GetTrackPlaylists`, `GetPlaylistElements`) need `CId_`-prefixed command IDs. Empty `track_id` fields must be stripped from JSON to avoid "only one of track_id/track_name" errors.
- **Connection management**: gRPC connections can go stale between calls. The `@ptsl_command` decorator catches `grpc.RpcError` and resets the connection automatically.
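The sample-to-timecode conversion follows directly from the sample rate and frame rate. A minimal sketch, assuming non-drop-frame timecode (this is not the project's actual implementation, just the underlying arithmetic):

```python
def samples_to_timecode(samples: int, sample_rate: int, fps: float) -> str:
    """Convert a raw sample position to HH:MM:SS:FF (non-drop-frame)."""
    total_seconds = samples / sample_rate           # position in seconds
    hours = int(total_seconds // 3600)
    minutes = int((total_seconds % 3600) // 60)
    seconds = int(total_seconds % 60)
    frames = int((total_seconds % 1) * fps)         # fractional second -> frame count
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# e.g. 48000 samples at 48 kHz and 24 fps is exactly one second:
print(samples_to_timecode(48000, 48000, 24.0))      # "00:00:01:00"
```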
---

## Troubleshooting

- **"Pro Tools is not running"** — Make sure Pro Tools is open with a session loaded. PTSL listens on port `31416`.
- **Transcript not found** — Set `transcript_export_path` in your show profile, or place the CSV next to the session file.
- **Stale data** — The EDL cache expires after 30 seconds. Transcripts reload on file modification. Call tools again for fresh data.
- **AAF export hangs** — Grant Accessibility access in System Settings > Privacy & Security > Accessibility for the app running the MCP server.
- **"only one of track_id and track_name"** — Handled internally by `json_messup()` overrides on custom Operations.

---

## Contributing

Issues and pull requests welcome. If you're using this in a specific workflow and run into edge cases, open an issue — Pro Tools has many quirks and real-world sessions surface them fast.

---

## License

MIT

---

*Built by [Blue Elevator Productions](https://blueelevatorproductions.com)*

Intelligence Aeternum Data Portal

AI training dataset marketplace — 2M+ museum artworks across 7 world-class institutions with on-demand 111-field Golden Codex AI enrichment. x402 USDC micropayments on Base L2. First monetized art/provenance MCP server. Research-backed: dense metadata improves VLM capability by +25.5% (DOI: 10.5281/zenodo.18667735).

The complete creative AI pipeline exposed as MCP tools. From generation to permanent storage — every stage available via x402 USDC micropayments on Base L2.

- **Generation — SD 3.5 Large + T5-XXL**: Stable Diffusion 3.5 Large with T5-XXL text encoder on an NVIDIA L4 GPU. High-fidelity image generation with superior prompt adherence. LoRA support (Artiswa v2 style transfer).
- **Upscaling — ESRGAN x4 Upscaler**: Real-ESRGAN x4plus on an NVIDIA L4 GPU (24 GB VRAM). Takes 1024px to 4096px in ~1.15s. Production-grade super-resolution for print and archival quality.
- **AI Enrichment — Golden Codex Metadata Creation (Nova)**: 111-field deep visual analysis powered by Gemini VLM. Color harmony, composition, symbolism, emotional journey, provenance chain, archetypal resonance. 2,000–6,000 tokens per artwork. Research-backed: +25.5% VLM improvement (DOI: 10.5281/zenodo.18667735).
- **Metadata Infusion — Atlas XMP/IPTC/C2PA Infusion**: Embed Golden Codex metadata directly into image files via ExifTool. XMP-gc namespace, gzip+base64 compressed payload, SHA-256 Soulmark hash, C2PA Content Credentials. Strip-proof: metadata is recoverable via the hash registry even if the XMP is removed (see the payload sketch after this list).
- **Verification — Aegis Provenance Verification**: "Shazam for Art." Perceptual hash lookup against a 100K+ scale LSH index (16x4 bands). Verify any image's provenance chain in <500ms. Free tier available.
- **Dataset Access — Alexandria Aeternum**: 2M+ museum artworks across 7 world-class institutions (Met, Rijksmuseum, Smithsonian, NGA, Chicago, Cleveland, Paris). Search, preview, and purchase enriched training data. Human_Standard and Hybrid_Premium tiers with auto-generated AB 2013 + EU AI Act compliance manifests.
- **Permanent Storage — Arweave Permanent Storage**: Store artifacts on Arweave L1 for 200+ year permanence. No AR tokens needed — pay in USDC via x402 and we handle the rest. Native AR SDK, direct L1 posting, transaction ID returned for on-chain verification. Your art outlives every server.
- **NFT Minting — Mintra Blockchain Minting**: Mint provenance-tracked NFTs on Polygon. Metadata-rich tokens with full Golden Codex schema on-chain. Archivus (Arweave) + Mintra (Polygon) pipeline: permanent storage → immutable ownership in one call.

**Pricing** — Genesis Epoch: 20% off all services for 90 days. Volume discounts auto-apply per wallet (100+: 25% off; 500+: 37% off; 2000+: 50% off). Enterprise packages from $8,000.
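To make the Metadata Infusion payload format concrete, here is a minimal standard-library sketch of producing a gzip+base64 compressed metadata payload and a SHA-256 content digest. The field names are placeholders and this is not the service's actual code, only an illustration of the described encoding:

```python
import base64
import gzip
import hashlib
import json

# Hypothetical record standing in for a Golden Codex metadata payload.
metadata = {"title": "Example Artwork", "color_harmony": "analogous"}

# gzip + base64 compression, as described for the XMP-gc payload.
raw = json.dumps(metadata, sort_keys=True).encode("utf-8")
payload = base64.b64encode(gzip.compress(raw)).decode("ascii")

# SHA-256 digest over the canonical JSON (a "Soulmark"-style content hash);
# registering this digest is what lets metadata survive XMP stripping.
digest = hashlib.sha256(raw).hexdigest()

print(payload[:40], "...")
print("sha256:", digest)
```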


Codegraph Rust

## 🎯 Overview

CodeGraph is a powerful CLI tool that combines MCP (Model Context Protocol) server management with sophisticated code analysis capabilities. It provides a unified interface for indexing projects, managing embeddings, and running MCP servers with multiple transport options. All you need now is an agent (or several) to create your very own deep code and project knowledge synthesizer system!

### Key Capabilities

- 🔍 **Advanced Code Analysis**: Parse and analyze code across multiple languages using Tree-sitter
- 🚄 **Dual Transport Support**: Run MCP servers with STDIO, HTTP, or both simultaneously (see the config sketch at the end of this entry)
- 🎯 **Vector Search**: Semantic code search using FAISS-powered vector embeddings
- 📊 **Graph-Based Architecture**: Navigate code relationships with RocksDB-backed graph storage
- ⚡ **High Performance**: Optimized for large codebases with parallel processing and batched embeddings
- 🔧 **Flexible Configuration**: Extensive configuration options for embedding models and performance tuning

### Raw Performance ✨✨✨

170K lines of Rust code parsed in 0.49s; 21,024 embeddings in 3:24min. Measured on an M3 Pro (32 GB) with Qdrant/all-MiniLM-L6-v2-onnx on CPU — no Metal acceleration used.

```
Parsing completed: 353/353 files, 169397 lines in 0.49s (714.5 files/s, 342852 lines/s)
[00:03:24] [########################################] 21024/21024 Embeddings complete
```

## ✨ Features

### Core Features

**Project Indexing**

- Multi-language support (Rust, Python, JavaScript, TypeScript, Go, Java, C++)
- Incremental indexing with file watching
- Parallel processing with configurable workers
- Smart caching for improved performance

**MCP Server Management**

- STDIO transport for direct communication
- HTTP streaming with SSE support
- Dual transport mode for maximum flexibility
- Background daemon mode with PID management

**Code Search**

- Semantic search using embeddings
- Exact match and fuzzy search
- Regex and AST-based queries
- Configurable similarity thresholds

**Architecture Analysis**

- Component relationship mapping
- Dependency analysis
- Code pattern detection
- Architecture visualization support
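Since CodeGraph supports a STDIO transport, registering it with an MCP client could look like the following Claude Desktop-style config. The `codegraph` binary name and the `serve --transport stdio` arguments are assumptions for illustration, not documented commands; consult the project's own help output for the real invocation:

```json
{
  "mcpServers": {
    "codegraph": {
      "command": "codegraph",
      "args": ["serve", "--transport", "stdio"]
    }
  }
}
```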