

Pro Tools Mcp

# protools-mcp

**A natural language interface for Avid Pro Tools, powered by Claude and the Model Context Protocol.**

protools-mcp is a local [MCP (Model Context Protocol)](https://modelcontextprotocol.io) server that connects Claude — or any MCP-compatible AI assistant — to a live Pro Tools session via the PTSL (Pro Tools Scripting Library) API. Instead of navigating menus or writing scripts, you describe what you need in plain language and Claude handles it directly in your session. Built for podcast post-production workflows, it works equally well across music production, broadcast, and audio post.

---

## What It Looks Like in Practice

Once connected, you can have conversations like:

> *"What's in this session?"*
> *"Search the transcript for everywhere the guest mentions climate change."*
> *"Set a marker at 00:14:32:00 called 'Act Two'."*
> *"Mute all the music tracks and solo the host."*
> *"What clips are on the timeline between 22 and 35 minutes?"*
> *"Save a new version of this session called EP47_mix_v2."*
> *"Export the dialogue tracks as an AAF to the delivery folder."*

Claude reads your live session state, answers questions about your timeline, and executes write operations directly in Pro Tools — no scripting, no keyboard shortcuts, no menu diving.
---

## Capabilities

25+ tools across 7 functional groups:

| Group | Tools | Description |
|-------|-------|-------------|
| **Session** | `get_session_info`, `get_markers`, `get_track_list`, `get_session_snapshot`, `get_show_profile` | Session metadata, tracks, markers, show profiles |
| **Tracks** | `get_track_edl`, `get_track_playlists`, `get_clips_in_range` | Clip-level detail, playlists, time-range queries |
| **Transcript** | `get_transcript`, `search_transcript`, `get_transcript_for_range` | Speech-to-text CSV search with context and speaker labels |
| **Navigation** | `get_playhead_position`, `get_current_selection`, `set_playhead` | Playhead and selection state |
| **Edit** | `select_region`, `create_marker`, `mute_track`, `unmute_track`, `solo_track`, `consolidate_clip` | Session modifications (Claude confirms before executing) |
| **Session Mgmt** | `save_session`, `close_session`, `open_session`, `save_session_as` | Save, close, open, and version sessions |
| **Export** | `export_tracks_as_aaf` | AAF export with configurable format, bit depth, and copy option |

An optional **Show Profile** system lets you define per-show configuration — host names, track layouts, naming conventions — so Claude has the context it needs to work intelligently across multiple shows.

---

## Prerequisites

- **macOS** with Pro Tools running (PTSL listens on `localhost:31416`)
- **Python 3.11+** (tested with 3.11)
- **py-ptsl** installed system-wide or in a virtual environment
- **Claude Desktop** or **Claude Code** (for MCP integration)
- **Accessibility permission** for Claude/terminal (required for AAF export dialog automation)

---

## Setup

1. **Clone this repository:**

   ```bash
   git clone https://github.com/BlueElevatorProductions/protools-mcp.git
   cd protools-mcp
   ```

2. **Create a virtual environment:**

   ```bash
   python3 -m venv venv --system-site-packages
   source venv/bin/activate
   pip install -r requirements.txt --no-cache-dir
   ```

   The `--system-site-packages` flag reuses your system-wide `py-ptsl` and `grpcio` installs.

3. **Configure `.env`** (optional — defaults shown):

   ```
   PTSL_HOST=localhost
   PTSL_PORT=31416
   ```

4. **Add show profiles** (optional): Place JSON files in `show_profiles/`. See `show_profiles/holy_uncertain.json` for the format.

5. **Register with Claude Desktop** — add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

   ```json
   {
     "mcpServers": {
       "protools-mcp": {
         "command": "/path/to/protools-mcp/venv/bin/python",
         "args": ["/path/to/protools-mcp/server.py"]
       }
     }
   }
   ```

   This makes the server available in Claude Desktop Chat, Cowork, and Code sessions. For **Claude Code CLI only**, use:

   ```bash
   claude mcp add protools-mcp -s user -- /path/to/protools-mcp/venv/bin/python /path/to/protools-mcp/server.py
   ```

6. **Grant Accessibility access** (required for AAF export automation): System Settings > Privacy & Security > Accessibility — enable Claude Desktop and/or your terminal app.

7. **Open Pro Tools** with a session loaded. The MCP server connects lazily on first tool call.

---

## Tool Reference

### Session Context (read-only)

- **`get_session_snapshot()`** — Composite: session info + markers + tracks + auto-matched show profile. **Start here.** Best tool to call at the beginning of any session conversation.
- **`get_session_info()`** — Session name, path, sample rate, bit depth, timecode format, track count, audio file count.
- **`get_markers()`** — All memory location markers with index, name, timecode, and comment.
- **`get_track_list(filter="all")`** — Tracks with active, muted, soloed, and hidden state. Filter: `all`, `active`, `audio`, `inactive`.
- **`get_show_profile(show_id?)`** — Returns show profile config. Auto-infers from session name prefix if no ID given.
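Tools throughout the reference exchange positions as `HH:MM:SS:FF` timecode strings, while PTSL reports raw sample counts internally; the implementation notes name a `samples_to_timecode(samples, sample_rate, fps)` helper for this conversion. As a rough sketch of what such a conversion could look like (an illustration assuming an integer, non-drop frame rate — not the project's actual implementation):

```python
def samples_to_timecode(samples: int, sample_rate: int, fps: int) -> str:
    """Convert a raw sample position to HH:MM:SS:FF timecode.

    Illustrative only: assumes an integer, non-drop frame rate.
    """
    total_seconds, remainder = divmod(samples, sample_rate)
    # Whole frames elapsed within the current second.
    frames = remainder * fps // sample_rate
    hours, rest = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"


# e.g. at 48 kHz / 25 fps, one second is exactly 48,000 samples:
print(samples_to_timecode(48_000, 48_000, 25))  # → 00:00:01:00
```

Drop-frame rates (29.97 fps) need extra handling that this sketch deliberately omits.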
### Track Detail (read-only)

- **`get_track_edl(track_name)`** — Full clip list for a track: clip name, start/end timecodes, duration, state.
- **`get_track_playlists(track_name)`** — All playlists on a track, including inactive alternates.
- **`get_clips_in_range(start_timecode, end_timecode, track_filter?)`** — All clips across tracks within a timecode range.

### Transcript (read-only)

- **`get_transcript()`** — Full transcript from a Pro Tools Speech-to-Text CSV export.
- **`search_transcript(query, track_filter?, start_timecode?, end_timecode?)`** — Keyword search with 2-row context window.
- **`get_transcript_for_range(start_timecode, end_timecode)`** — Transcript rows in a time range, formatted as `SPEAKER: text` dialogue.

### Navigation

- **`get_playhead_position()`** — Current playhead timecode.
- **`get_current_selection()`** — Start, end, duration, and selected track names.
- **`set_playhead(timecode)`** — Moves the playhead to a specified timecode.

### Edit Operations (write)

All write tools are labeled `[WRITE]`. Claude will describe the operation and confirm before executing.

- **`select_region(start_timecode, end_timecode, track_names?)`** — Sets timeline selection. Non-destructive.
- **`create_marker(name, timecode, comment?)`** — Adds a memory location marker at the specified timecode.
- **`mute_track(track_name)`** / **`unmute_track(track_name)`** — Toggles track mute state.
- **`solo_track(track_name)`** — Solos a track.
- **`consolidate_clip(track_name, start_timecode, end_timecode)`** — Consolidates a region into a single clip. **Creates a new audio file on disk.**

### Session Management (write)

- **`save_session()`** — Saves the current session to disk.
- **`save_session_as(session_name, session_location)`** — Saves with a new name. `session_name` is the filename without extension; `session_location` is the target directory.
- **`close_session(save_before_close=True)`** — Closes the session, optionally saving first.
- **`open_session(session_path)`** — Opens a `.ptx` or `.ptf` session file.

### Export (write)

- **`export_tracks_as_aaf(...)`** — Exports selected tracks as an AAF. Handles the Pro Tools folder dialog automatically via osascript.
  - `audio_format`: `WAV` (default), `AIFF`, `MXF`, `Embedded`
  - `bit_depth`: `24` (default), `16`
  - `copy_option`: `copy` (default), `consolidate`, `link`
  - `quantize_to_frame`: `true` (default)
  - `avid_compatible`: `false` (default) — enforce Media Composer compatibility
  - `stereo_as_multichannel`: `false` (default)
  - `sequence_name`: defaults to `file_name`

---

## Transcript Support

The transcript tools expect a Pro Tools Speech-to-Text CSV export alongside the session. Place the CSV in your session directory (or set `transcript_export_path` in your show profile) and the server will discover it automatically. It reloads on file modification, so the data stays current as you iterate on transcripts.

---

## Show Profile Format

Show profiles let Claude understand the structure of a specific show — which tracks belong to which speakers, naming conventions, and where exports live. Place JSON files in `show_profiles/`. Profiles are auto-matched by session name prefix.

```json
{
  "show_id": "HU",
  "show_name": "Holy Uncertain",
  "session_name_prefix": "HU-",
  "hosts": ["Chris", "Lauren"],
  "dialogue_tracks": ["Chris", "Lauren Int R", "Chris Int R"],
  "guest_tracks": ["Randy Int R"],
  "music_tracks": ["Music"],
  "transcript_export_path": "/path/to/episodes/",
  "naming_conventions": {
    "session": "HU-{episode_number}-{guest_last_name}-V{version}",
    "export": "HU-{episode_number}-{guest_last_name}-MIX-V{version}"
  }
}
```

---

## Architecture

```
Claude Desktop ──stdio──▶ server.py (FastMCP)
                              │
                ┌─────────────┼─────────────┐
                ▼             ▼             ▼
           PTSLBridge    Transcript    ShowProfile
             (gRPC)        Watcher       Loader
                │             │             │
                ▼             ▼             ▼
           Pro Tools      CSV files    JSON files
             :31416
```

- **PTSLBridge** — Lazy gRPC connection with auto-reconnect. The `@ptsl_command` decorator handles errors uniformly. Custom `Operation` subclasses cover PTSL commands not in py-ptsl's ops module.
- **TranscriptWatcher** — Stat-based CSV cache. Reloads only when the file's `mtime` changes. Auto-discovers the CSV by searching the session directory.
- **ShowProfileLoader** — Reads `show_profiles/*.json` once at startup, matches sessions by name prefix.
- **osascript integration** — For PTSL commands that trigger Pro Tools dialogs (e.g., AAF export), the bridge runs the command in a background thread and uses System Events to dismiss the dialog automatically. Requires Accessibility permission.

---

## Error Handling

All PTSL errors return structured dicts before being raised as `ToolError`:

| Error Key | Meaning |
|-----------|---------|
| `ptsl_unavailable` | Pro Tools not running or gRPC connection lost |
| `no_session` | No session is open in Pro Tools |
| `ptsl_command_error` | PTSL command failed (details in message) |
| `no_transcript` | No transcript CSV found or configured |
| `dialog_waiting` | AAF export dialog needs manual confirmation (Accessibility not granted) |

---

## Implementation Notes

- **Timecode format**: Pro Tools uses `HH:MM:SS:FF`. Markers return raw sample positions internally; the bridge converts using `samples_to_timecode(samples, sample_rate, fps)`.
- **Track `active` field**: Derived from `is_inactive == TAState_None` on `TrackAttributes`. Distinct from muted/hidden.
- **EDL text**: Parsed from Pro Tools' tab-delimited text export with columns: `CHANNEL`, `EVENT`, `CLIP NAME`, `START TIME`, `END TIME`, `DURATION`, `STATE`.
- **Pro Tools quirks**: `SaveSessionAs` and directory paths require a trailing `/`. Some commands (`GetTrackPlaylists`, `GetPlaylistElements`) need `CId_`-prefixed command IDs. Empty `track_id` fields must be stripped from JSON to avoid "only one of track_id/track_name" errors.
- **Connection management**: gRPC connections can go stale between calls. The `@ptsl_command` decorator catches `grpc.RpcError` and resets the connection automatically.

---

## Troubleshooting

- **"Pro Tools is not running"** — Make sure Pro Tools is open with a session loaded. PTSL listens on port `31416`.
- **Transcript not found** — Set `transcript_export_path` in your show profile, or place the CSV next to the session file.
- **Stale data** — The EDL cache expires after 30 seconds; transcripts reload on file modification. Call tools again for fresh data.
- **AAF export hangs** — Grant Accessibility access in System Settings > Privacy & Security > Accessibility for the app running the MCP server.
- **"only one of track_id and track_name"** — Handled internally by `json_messup()` overrides on custom Operations.

---

## Contributing

Issues and pull requests welcome. If you're using this in a specific workflow and run into edge cases, open an issue — Pro Tools has many quirks and real-world sessions surface them fast.

---

## License

MIT

---

*Built by [Blue Elevator Productions](https://blueelevatorproductions.com)*
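The stat-based reload described for TranscriptWatcher is a common pattern worth illustrating: re-read a file only when `os.stat()` reports a newer modification time. A minimal sketch with a hypothetical class name (`TranscriptCache`), not the project's actual code:

```python
import csv
import os


class TranscriptCache:
    """Illustrative mtime-gated CSV cache (hypothetical name and API).

    Mirrors the TranscriptWatcher idea: the file is parsed again only
    when its modification time changes between calls.
    """

    def __init__(self, path: str):
        self.path = path
        self._mtime = None   # mtime of the last successful load
        self._rows = []

    def rows(self) -> list:
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:  # first call, or the file changed on disk
            with open(self.path, newline="") as f:
                self._rows = list(csv.reader(f))
            self._mtime = mtime
        return self._rows
```

Between unchanged calls this costs one `stat()` syscall, which is why the data can stay current without a background watcher thread.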

Qwen Coding Engine

Stop letting AI hallucinations eat your hours. With this engine, your work flows smoothly while a full SRE squad of models codes and debugs on your behalf.

Are you building complex applications, only to find that AI hallucinations are eating your entire afternoon? You know the loop: you ask Claude or Cursor to fix a bug. It gives you a snippet. It breaks something else. You paste the error back. It forgets the original architecture and responds with "// ... rest of your code here". What started as a 5-minute feature turns into a 3-hour circular debugging nightmare.

The Qwen Engineering Engine (powered by the Lachman Protocol) ends that loop and stops the "two steps forward, one step back" dance. Instead of relying on a single, forgetful LLM to do everything, this MCP server deploys a dedicated, specialized squad of Qwen models against your local codebase:

- Zero placeholders: the dedicated qwen_coder tool writes 100% complete, production-grade files. No lazy snipping.
- Deep debugging: instead of pasting logs to Claude, the qwen_audit tool unleashes QwQ (Qwen's reasoning model) to act as your senior auditor. It reads the files, finds the memory leak, and tells you exactly what failed.
- Architectural immunity: before writing code, qwen_architect drafts a JSON roadmap and self-verifies it against your stack. If it's a bad idea, it rejects it *before* breaking your app.

Why Qwen? Because running an entire squad of GPT-4o or Claude 3 Opus models to constantly rewrite files would cost you $50 a day. By routing this heavy lifting through Alibaba's DashScope API (Qwen 3.5 Plus & Qwen 2.5 Coder 32B), the cost is literal fractions of a cent.

Let your main assistant (Claude/Antigravity/Cursor) be the commander. Let the Qwen Engine do the heavy lifting in the trenches. Stop chatting. Start shipping.
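The architect → coder → audit hand-off described above can be sketched as a plain pipeline. Everything below — the function names' behavior, the approval rule, the placeholder check — is a hypothetical illustration of the control flow, not the engine's actual code:

```python
def qwen_architect(task: str) -> dict:
    """Hypothetical: draft a roadmap and self-verify it before any code is written."""
    plan = {"task": task, "steps": ["write module", "add tests"]}
    plan["approved"] = bool(plan["steps"])  # stand-in for the self-verification pass
    return plan


def qwen_coder(plan: dict) -> str:
    """Hypothetical: emit a complete file for the approved plan, never a snippet."""
    body = "\n".join(f"# step: {s}" for s in plan["steps"])
    return f"# implements: {plan['task']}\n{body}\n"


def qwen_audit(code: str) -> list:
    """Hypothetical: flag lazy elisions like '// ... rest of your code here'."""
    return ["placeholder found"] if "..." in code else []


def run(task: str) -> str:
    plan = qwen_architect(task)
    if not plan["approved"]:
        # Reject before touching the app, as the blurb describes.
        raise ValueError("architect rejected the plan")
    code = qwen_coder(plan)
    issues = qwen_audit(code)
    if issues:
        raise ValueError(f"audit failed: {issues}")
    return code
```

The point of the shape: each stage gates the next, so a bad plan or an incomplete file fails loudly instead of silently reaching your codebase.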

Instagit - Let Your Agents Instantly Understand Any Github Repo

Works with Claude Code, Claude Desktop, Cursor, OpenClaw, and any MCP-compatible client. The @latest tag ensures you always get the most recent version.

Why

Agents that integrate with external libraries are flying blind. They read docs (if they exist), guess at APIs, and hallucinate patterns that don't match the actual code. The result: broken integrations, wrong function signatures, outdated usage patterns, hours of debugging.

When an agent can actually analyze the source code of a library or service it's integrating with, everything changes. It sees the real function signatures, the actual data flow, the patterns the maintainers intended. Integration becomes dramatically easier and less error-prone because the agent is working from ground truth, not guesses.

What Agents Can Do With This

- Integrate with any library correctly the first time — "How do I set up authentication with this SDK?" gets answered from the actual code, not outdated docs or training data. Your agent sees the real constructors, the real config options, the real error types.
- Migrate between versions without the guesswork — Point your agent at both the old and new versions of a library. It can diff the actual implementations and generate a migration plan that accounts for every breaking change.
- Debug issues across repository boundaries — When a bug spans your code and a dependency, your agent can read both codebases and trace the issue to its root cause — even into libraries you've never opened.
- Generate integration code that actually works — Instead of producing plausible-looking code that fails at runtime, your agent writes integration code based on the real API surface: actual method names, actual parameter types, actual return values.
- Evaluate libraries before committing — "Should we use library A or B?" Your agent can analyze both implementations, compare their approaches to error handling, test coverage, and architectural quality, and give you a grounded recommendation.
- Onboard to unfamiliar codebases in minutes — Point your agent at any repo and ask how things work. It answers from the code itself, with file paths and line numbers, not from memory that may be months out of date.

Agent Smith

Auto-generate AGENTS.md from your codebase

Stop writing AGENTS.md by hand. Run agentsmith and it scans your codebase to generate a comprehensive context file that AI coding tools read automatically.

What is AGENTS.md?

AGENTS.md is an open standard for giving AI coding assistants context about your project. It's adopted by 60,000+ projects and supported by:

- Cursor
- GitHub Copilot
- Claude Code
- VS Code
- Gemini CLI
- And 20+ more tools

AI tools automatically discover and read AGENTS.md files - no configuration needed.

What agentsmith does

Instead of writing AGENTS.md manually, agentsmith scans your codebase and generates it:

npx @jpoindexter/agent-smith

agentsmith
Scanning /Users/you/my-project...
✓ Found 279 components
✓ Found 5 components with CVA variants
✓ Found 37 color tokens
✓ Found 14 custom hooks
✓ Found 46 API routes (8 with schemas)
✓ Found 87 environment variables
✓ Detected Next.js (App Router)
✓ Detected shadcn/ui (26 Radix packages)
✓ Found cn() utility
✓ Found mode/design-system
✓ Detected 6 code patterns
✓ Found existing CLAUDE.md
✓ Found .ai/ folder (12 files)
✓ Found prisma schema (28 models)
✓ Scanned 1572 files (11.0 MB, 365,599 lines)
✓ Found 17 barrel exports
✓ Found 15 hub files (most imported)
✓ Found 20 Props types
✓ Found 40 test files (12% component coverage)
✓ Generated AGENTS.md ~11K tokens (9% of 128K context)

Install

# Run directly (no install needed)
npx @jpoindexter/agent-smith

# Or install globally
npm install -g @jpoindexter/agent-smith

Usage

# Generate AGENTS.md in current directory
agentsmith

# Generate for a specific directory
agentsmith ./my-project

# Preview without writing (dry run)
agentsmith --dry-run

# Custom output file
agentsmith --output CONTEXT.md

# Force overwrite existing file
agentsmith --force

Output Modes

# Default - comprehensive output (~11K tokens)
agentsmith

# Compact - fewer details (~20% smaller)
agentsmith --compact

# Compress - signatures only (~40% smaller)
agentsmith --compress

# Minimal - ultra-compact (~3K tokens)
agentsmith --minimal

# XML format (industry standard, matches Repomix)
agentsmith --xml

# Include file tree visualization
agentsmith --tree

MCP-MESSENGER

**SlashMCP** is a production-grade AI workspace that connects LLMs to real-world data and tools through an intuitive chat interface. Built on the Model Context Protocol (MCP), it enables seamless interaction with multiple AI providers (OpenAI, Claude, Gemini) while providing powerful capabilities for document analysis, financial data queries, web scraping, and multi-agent workflow orchestration.

### Key Features:

- **Multi-LLM Support**: Switch between GPT-4, Claude, and Gemini at runtime—no restart needed
- **Smart Command Autocomplete**: Type `/` to discover and execute MCP server commands instantly
- **Document Intelligence**: Drag-and-drop documents with automatic OCR extraction and vision analysis
- **Financial Data Integration**: Real-time stock quotes, charts, and prediction market data via Alpha Vantage and Polymarket
- **Browser Automation**: Web scraping and navigation using Playwright MCP
- **Multi-Agent Orchestration**: Intelligent routing with specialized agents for command discovery, tool execution, and response synthesis
- **Dynamic MCP Registry**: Add and use any MCP server on the fly without code changes
- **Voice Interaction**: Browser-based transcription and text-to-speech support

### Use Cases:

- Research and analysis workflows
- Document processing and extraction
- Financial market monitoring
- Web data collection and comparison
- Multi-step task automation

**Live Demo:** [slashmcp.vercel.app](https://slashmcp.vercel.app)
**GitHub:** [github.com/mcpmessenger/slashmcp](https://github.com/mcpmessenger/slashmcp)
**Website:** [slashmcp.com](https://slashmcp.com)