Pro Tools Mcp

# protools-mcp

**A natural language interface for Avid Pro Tools, powered by Claude and the Model Context Protocol.**

protools-mcp is a local [MCP (Model Context Protocol)](https://modelcontextprotocol.io) server that connects Claude — or any MCP-compatible AI assistant — to a live Pro Tools session via the PTSL (Pro Tools Scripting Library) API. Instead of navigating menus or writing scripts, you describe what you need in plain language and Claude handles it directly in your session. Built for podcast post-production workflows, it works equally well across music production, broadcast, and audio post.

---

## What It Looks Like in Practice

Once connected, you can have conversations like:

> *"What's in this session?"*
> *"Search the transcript for everywhere the guest mentions climate change."*
> *"Set a marker at 00:14:32:00 called 'Act Two'."*
> *"Mute all the music tracks and solo the host."*
> *"What clips are on the timeline between 22 and 35 minutes?"*
> *"Save a new version of this session called EP47_mix_v2."*
> *"Export the dialogue tracks as an AAF to the delivery folder."*

Claude reads your live session state, answers questions about your timeline, and executes write operations directly in Pro Tools — no scripting, no keyboard shortcuts, no menu diving.
---

## Capabilities

25+ tools across 7 functional groups:

| Group | Tools | Description |
|-------|-------|-------------|
| **Session** | `get_session_info`, `get_markers`, `get_track_list`, `get_session_snapshot`, `get_show_profile` | Session metadata, tracks, markers, show profiles |
| **Tracks** | `get_track_edl`, `get_track_playlists`, `get_clips_in_range` | Clip-level detail, playlists, time-range queries |
| **Transcript** | `get_transcript`, `search_transcript`, `get_transcript_for_range` | Speech-to-text CSV search with context and speaker labels |
| **Navigation** | `get_playhead_position`, `get_current_selection`, `set_playhead` | Playhead and selection state |
| **Edit** | `select_region`, `create_marker`, `mute_track`, `unmute_track`, `solo_track`, `consolidate_clip` | Session modifications (Claude confirms before executing) |
| **Session Mgmt** | `save_session`, `close_session`, `open_session`, `save_session_as` | Save, close, open, and version sessions |
| **Export** | `export_tracks_as_aaf` | AAF export with configurable format, bit depth, and copy option |

An optional **Show Profile** system lets you define per-show configuration — host names, track layouts, naming conventions — so Claude has the context it needs to work intelligently across multiple shows.

---

## Prerequisites

- **macOS** with Pro Tools running (PTSL listens on `localhost:31416`)
- **Python 3.11+** (tested with 3.11)
- **py-ptsl** installed system-wide or in a virtual environment
- **Claude Desktop** or **Claude Code** (for MCP integration)
- **Accessibility permission** for Claude/terminal (required for AAF export dialog automation)

---

## Setup

1. **Clone this repository:**

   ```bash
   git clone https://github.com/BlueElevatorProductions/protools-mcp.git
   cd protools-mcp
   ```

2.
   **Create a virtual environment:**

   ```bash
   python3 -m venv venv --system-site-packages
   source venv/bin/activate
   pip install -r requirements.txt --no-cache-dir
   ```

   The `--system-site-packages` flag reuses your system-wide `py-ptsl` and `grpcio` installs.

3. **Configure `.env`** (optional — defaults shown):

   ```
   PTSL_HOST=localhost
   PTSL_PORT=31416
   ```

4. **Add show profiles** (optional): Place JSON files in `show_profiles/`. See `show_profiles/holy_uncertain.json` for the format.

5. **Register with Claude Desktop** — add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

   ```json
   {
     "mcpServers": {
       "protools-mcp": {
         "command": "/path/to/protools-mcp/venv/bin/python",
         "args": ["/path/to/protools-mcp/server.py"]
       }
     }
   }
   ```

   This makes the server available in Claude Desktop Chat, Cowork, and Code sessions. For **Claude Code CLI only**, use:

   ```bash
   claude mcp add protools-mcp -s user -- /path/to/protools-mcp/venv/bin/python /path/to/protools-mcp/server.py
   ```

6. **Grant Accessibility access** (required for AAF export automation): System Settings > Privacy & Security > Accessibility — enable Claude Desktop and/or your terminal app.

7. **Open Pro Tools** with a session loaded. The MCP server connects lazily on first tool call.

---

## Tool Reference

### Session Context (read-only)

- **`get_session_snapshot()`** — Composite: session info + markers + tracks + auto-matched show profile. **Start here.** Best tool to call at the beginning of any session conversation.
- **`get_session_info()`** — Session name, path, sample rate, bit depth, timecode format, track count, audio file count.
- **`get_markers()`** — All memory location markers with index, name, timecode, and comment.
- **`get_track_list(filter="all")`** — Tracks with active, muted, soloed, and hidden state. Filter: `all`, `active`, `audio`, `inactive`.
- **`get_show_profile(show_id?)`** — Returns show profile config. Auto-infers from session name prefix if no ID given.
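The read-only tools above all follow the same registration pattern: a named function exposed to the MCP client, returning a plain dict. A toy sketch of that pattern — this mimics the shape of a FastMCP-style `@tool` decorator without depending on the real library, and the `SESSION` data is purely illustrative (the actual server queries PTSL over gRPC):

```python
from typing import Callable, Dict

# Toy tool registry; FastMCP does the real version of this internally.
TOOLS: Dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function as a callable tool under its own name."""
    TOOLS[fn.__name__] = fn
    return fn

# Stand-in for live Pro Tools state (illustrative values only).
SESSION = {"name": "HU-12-Smith-V1", "sample_rate": 48000, "tracks": ["Chris", "Music"]}

@tool
def get_session_info() -> dict:
    """Read-only: session name, sample rate, and track count."""
    return {
        "name": SESSION["name"],
        "sample_rate": SESSION["sample_rate"],
        "track_count": len(SESSION["tracks"]),
    }

# An MCP client dispatches by tool name:
result = TOOLS["get_session_info"]()
```

The real server layers PTSL error handling and lazy connection logic on top of this shape, but the tool surface the client sees is just named functions returning structured data.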
### Track Detail (read-only)

- **`get_track_edl(track_name)`** — Full clip list for a track: clip name, start/end timecodes, duration, state.
- **`get_track_playlists(track_name)`** — All playlists on a track, including inactive alternates.
- **`get_clips_in_range(start_timecode, end_timecode, track_filter?)`** — All clips across tracks within a timecode range.

### Transcript (read-only)

- **`get_transcript()`** — Full transcript from a Pro Tools Speech-to-Text CSV export.
- **`search_transcript(query, track_filter?, start_timecode?, end_timecode?)`** — Keyword search with 2-row context window.
- **`get_transcript_for_range(start_timecode, end_timecode)`** — Transcript rows in a time range, formatted as `SPEAKER: text` dialogue.

### Navigation

- **`get_playhead_position()`** — Current playhead timecode.
- **`get_current_selection()`** — Start, end, duration, and selected track names.
- **`set_playhead(timecode)`** — Moves the playhead to a specified timecode.

### Edit Operations (write)

All write tools are labeled `[WRITE]`. Claude will describe the operation and confirm before executing.

- **`select_region(start_timecode, end_timecode, track_names?)`** — Sets timeline selection. Non-destructive.
- **`create_marker(name, timecode, comment?)`** — Adds a memory location marker at the specified timecode.
- **`mute_track(track_name)`** / **`unmute_track(track_name)`** — Toggles track mute state.
- **`solo_track(track_name)`** — Solos a track.
- **`consolidate_clip(track_name, start_timecode, end_timecode)`** — Consolidates a region into a single clip. **Creates a new audio file on disk.**

### Session Management (write)

- **`save_session()`** — Saves the current session to disk.
- **`save_session_as(session_name, session_location)`** — Saves with a new name. `session_name` is the filename without extension; `session_location` is the target directory.
- **`close_session(save_before_close=True)`** — Closes the session, optionally saving first.
- **`open_session(session_path)`** — Opens a `.ptx` or `.ptf` session file.

### Export (write)

- **`export_tracks_as_aaf(...)`** — Exports selected tracks as an AAF. Handles the Pro Tools folder dialog automatically via osascript.
  - `audio_format`: `WAV` (default), `AIFF`, `MXF`, `Embedded`
  - `bit_depth`: `24` (default), `16`
  - `copy_option`: `copy` (default), `consolidate`, `link`
  - `quantize_to_frame`: `true` (default)
  - `avid_compatible`: `false` (default) — enforce Media Composer compatibility
  - `stereo_as_multichannel`: `false` (default)
  - `sequence_name`: defaults to `file_name`

---

## Transcript Support

The transcript tools expect a Pro Tools Speech-to-Text CSV export alongside the session. Place the CSV in your session directory (or set `transcript_export_path` in your show profile) and the server will discover it automatically. It reloads on file modification, so the data stays current as you iterate on transcripts.

---

## Show Profile Format

Show profiles let Claude understand the structure of a specific show — which tracks belong to which speakers, naming conventions, and where exports live. Place JSON files in `show_profiles/`. Profiles are auto-matched by session name prefix.

```json
{
  "show_id": "HU",
  "show_name": "Holy Uncertain",
  "session_name_prefix": "HU-",
  "hosts": ["Chris", "Lauren"],
  "dialogue_tracks": ["Chris", "Lauren Int R", "Chris Int R"],
  "guest_tracks": ["Randy Int R"],
  "music_tracks": ["Music"],
  "transcript_export_path": "/path/to/episodes/",
  "naming_conventions": {
    "session": "HU-{episode_number}-{guest_last_name}-V{version}",
    "export": "HU-{episode_number}-{guest_last_name}-MIX-V{version}"
  }
}
```

---

## Architecture

```
Claude Desktop ──stdio──▶ server.py (FastMCP)
                               │
                 ┌─────────────┼─────────────┐
                 ▼             ▼             ▼
            PTSLBridge    Transcript    ShowProfile
              (gRPC)        Watcher       Loader
                 │             │             │
                 ▼             ▼             ▼
            Pro Tools     CSV files     JSON files
              :31416
```

- **PTSLBridge** — Lazy gRPC connection with auto-reconnect. The `@ptsl_command` decorator handles errors uniformly.
  Custom `Operation` subclasses cover PTSL commands not in py-ptsl's ops module.
- **TranscriptWatcher** — Stat-based CSV cache. Reloads only when the file's `mtime` changes. Auto-discovers CSV by searching the session directory.
- **ShowProfileLoader** — Reads `show_profiles/*.json` once at startup, matches sessions by name prefix.
- **osascript integration** — For PTSL commands that trigger Pro Tools dialogs (e.g., AAF export), the bridge runs the command in a background thread and uses System Events to dismiss the dialog automatically. Requires Accessibility permission.

---

## Error Handling

All PTSL errors return structured dicts before being raised as `ToolError`:

| Error Key | Meaning |
|-----------|---------|
| `ptsl_unavailable` | Pro Tools not running or gRPC connection lost |
| `no_session` | No session is open in Pro Tools |
| `ptsl_command_error` | PTSL command failed (details in message) |
| `no_transcript` | No transcript CSV found or configured |
| `dialog_waiting` | AAF export dialog needs manual confirmation (Accessibility not granted) |

---

## Implementation Notes

- **Timecode format**: Pro Tools uses `HH:MM:SS:FF`. Markers return raw sample positions internally; the bridge converts using `samples_to_timecode(samples, sample_rate, fps)`.
- **Track `active` field**: Derived from `is_inactive == TAState_None` on `TrackAttributes`. Distinct from muted/hidden.
- **EDL text**: Parsed from Pro Tools' tab-delimited text export with columns: `CHANNEL`, `EVENT`, `CLIP NAME`, `START TIME`, `END TIME`, `DURATION`, `STATE`.
- **Pro Tools quirks**: `SaveSessionAs` and directory paths require a trailing `/`. Some commands (`GetTrackPlaylists`, `GetPlaylistElements`) need `CId_`-prefixed command IDs. Empty `track_id` fields must be stripped from JSON to avoid "only one of track_id/track_name" errors.
- **Connection management**: gRPC connections can go stale between calls.
  The `@ptsl_command` decorator catches `grpc.RpcError` and resets the connection automatically.

---

## Troubleshooting

- **"Pro Tools is not running"** — Make sure Pro Tools is open with a session loaded. PTSL listens on port `31416`.
- **Transcript not found** — Set `transcript_export_path` in your show profile, or place the CSV next to the session file.
- **Stale data** — EDL cache expires after 30 seconds. Transcripts reload on file modification. Call tools again for fresh data.
- **AAF export hangs** — Grant Accessibility access in System Settings > Privacy & Security > Accessibility for the app running the MCP server.
- **"only one of track_id and track_name"** — Handled internally by `json_messup()` overrides on custom Operations.

---

## Contributing

Issues and pull requests welcome. If you're using this in a specific workflow and run into edge cases, open an issue — Pro Tools has many quirks and real-world sessions surface them fast.

---

## License

MIT

---

*Built by [Blue Elevator Productions](https://blueelevatorproductions.com)*
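The sample-to-timecode conversion mentioned in the implementation notes can be sketched as follows. This is a minimal non-drop-frame version for illustration; the project's actual `samples_to_timecode` may differ in rounding and drop-frame handling:

```python
def samples_to_timecode(samples: int, sample_rate: int, fps: int) -> str:
    """Convert a raw sample position to HH:MM:SS:FF (non-drop-frame)."""
    total_seconds, rem = divmod(samples, sample_rate)
    frames = (rem * fps) // sample_rate          # leftover samples -> frame count
    hours, rest = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# 14 minutes 30 seconds at 48 kHz, 30 fps:
print(samples_to_timecode(870 * 48000, 48000, 30))  # → 00:14:30:00
```

Drop-frame timecodes (29.97 fps) need the usual frame-dropping correction and are not handled by this sketch.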

Intelligence Aeternum Data Portal

AI training dataset marketplace — 2M+ museum artworks across 7 world-class institutions with on-demand 111-field Golden Codex AI enrichment. x402 USDC micropayments on Base L2. First monetized art/provenance MCP server. Research-backed: dense metadata improves VLM capability by +25.5% (DOI: 10.5281/zenodo.18667735).

The complete creative AI pipeline exposed as MCP tools. From generation to permanent storage — every stage available via x402 USDC micropayments on Base L2.

- Generation — SD 3.5 Large + T5-XXL: Stable Diffusion 3.5 Large with T5-XXL text encoder on NVIDIA L4 GPU. High-fidelity image generation with superior prompt adherence. LoRA support (Artiswa v2 style transfer).
- Upscaling — ESRGAN x4 Upscaler: Real-ESRGAN x4plus on NVIDIA L4 GPU (24GB VRAM). Takes 1024px to 4096px in ~1.15s. Production-grade super-resolution for print and archival quality.
- AI Enrichment — Golden Codex Metadata Creation (Nova): 111-field deep visual analysis powered by Gemini VLM. Color harmony, composition, symbolism, emotional journey, provenance chain, archetypal resonance. 2,000-6,000 tokens per artwork. Research-backed: +25.5% VLM improvement (DOI: 10.5281/zenodo.18667735).
- Metadata Infusion — Atlas XMP/IPTC/C2PA Infusion: Embed Golden Codex metadata directly into image files via ExifTool. XMP-gc namespace, gzip+base64 compressed payload, SHA-256 Soulmark hash, C2PA Content Credentials. Strip-proof: metadata recoverable via hash registry even if XMP is removed.
- Verification — Aegis Provenance Verification: "Shazam for Art." Perceptual hash lookup against 100K+ scale LSH index (16x4 bands). Verify any image's provenance chain in <500ms. Free tier available.
- Dataset Access — Alexandria Aeternum: 2M+ museum artworks across 7 world-class institutions (Met, Rijksmuseum, Smithsonian, NGA, Chicago, Cleveland, Paris). Search, preview, and purchase enriched training data. Human_Standard and Hybrid_Premium tiers with auto-generated AB 2013 + EU AI Act compliance manifests.
- Permanent Storage — Arweave Permanent Storage: Store artifacts on Arweave L1 for 200+ year permanence. No AR tokens needed — pay in USDC via x402 and we handle the rest. Native AR SDK, direct L1 posting, transaction ID returned for on-chain verification. Your art outlives every server.
- NFT Minting — Mintra Blockchain Minting: Mint provenance-tracked NFTs on Polygon. Metadata-rich tokens with full Golden Codex schema on-chain. Archivus (Arweave) + Mintra (Polygon) pipeline: permanent storage → immutable ownership in one call.

Pricing — Genesis Epoch: 20% off all services for 90 days. Volume discounts auto-apply per wallet (100+ 25% off, 500+ 37% off, 2000+ 50% off). Enterprise packages from $8,000.
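The metadata-infusion step above describes a gzip+base64 compressed payload paired with a SHA-256 content hash. A rough stdlib-only illustration of that packing scheme — field names are hypothetical, and the actual XMP-gc embedding via ExifTool is not shown:

```python
import base64
import gzip
import hashlib
import json

def pack_payload(metadata: dict) -> tuple:
    """Compress metadata to a base64 string and compute its SHA-256 digest."""
    raw = json.dumps(metadata, sort_keys=True).encode("utf-8")
    packed = base64.b64encode(gzip.compress(raw)).decode("ascii")
    digest = hashlib.sha256(raw).hexdigest()  # content hash over the uncompressed bytes
    return packed, digest

def unpack_payload(packed: str) -> dict:
    """Recover the original metadata from the base64+gzip string."""
    return json.loads(gzip.decompress(base64.b64decode(packed)))

# Hypothetical enrichment record:
meta = {"title": "Irises", "palette": ["blue", "green"], "fields": 111}
packed, digest = pack_payload(meta)
assert unpack_payload(packed) == meta  # round-trips losslessly
```

Hashing the uncompressed, key-sorted JSON (rather than the gzip stream) keeps the digest stable across compressor versions, which is what a "recoverable even if XMP is stripped" registry lookup would need.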

Instagit - Let Your Agents Instantly Understand Any Github Repo

Works with Claude Code, Claude Desktop, Cursor, OpenClaw, and any MCP-compatible client. The @latest tag ensures you always get the most recent version.

Why

Agents that integrate with external libraries are flying blind. They read docs (if they exist), guess at APIs, and hallucinate patterns that don't match the actual code. The result: broken integrations, wrong function signatures, outdated usage patterns, hours of debugging. When an agent can actually analyze the source code of a library or service it's integrating with, everything changes. It sees the real function signatures, the actual data flow, the patterns the maintainers intended. Integration becomes dramatically easier and less error-prone because the agent is working from ground truth, not guesses.

What Agents Can Do With This

- Integrate with any library correctly the first time — "How do I set up authentication with this SDK?" gets answered from the actual code, not outdated docs or training data. Your agent sees the real constructors, the real config options, the real error types.
- Migrate between versions without the guesswork — Point your agent at both the old and new version of a library. It can diff the actual implementations and generate a migration plan that accounts for every breaking change.
- Debug issues across repository boundaries — When a bug spans your code and a dependency, your agent can read both codebases and trace the issue to its root cause — even into libraries you've never opened.
- Generate integration code that actually works — Instead of producing plausible-looking code that fails at runtime, your agent writes integration code based on the real API surface: actual method names, actual parameter types, actual return values.
- Evaluate libraries before committing — "Should we use library A or B?" Your agent can analyze both implementations, compare their approaches to error handling, test coverage, and architectural quality, and give you a grounded recommendation.
- Onboard to unfamiliar codebases in minutes — Point your agent at any repo and ask how things work. It answers from the code itself, with file paths and line numbers, not from memory that may be months out of date.

Agent Smith

Auto-generate AGENTS.md from your codebase

Stop writing AGENTS.md by hand. Run agentsmith and it scans your codebase to generate a comprehensive context file that AI coding tools read automatically.

What is AGENTS.md?

AGENTS.md is an open standard for giving AI coding assistants context about your project. It's adopted by 60,000+ projects and supported by Cursor, GitHub Copilot, Claude Code, VS Code, Gemini CLI, and 20+ more tools. AI tools automatically discover and read AGENTS.md files — no configuration needed.

What agentsmith does

Instead of writing AGENTS.md manually, agentsmith scans your codebase and generates it:

```
npx @jpoindexter/agent-smith

agentsmith
Scanning /Users/you/my-project...
✓ Found 279 components
✓ Found 5 components with CVA variants
✓ Found 37 color tokens
✓ Found 14 custom hooks
✓ Found 46 API routes (8 with schemas)
✓ Found 87 environment variables
✓ Detected Next.js (App Router)
✓ Detected shadcn/ui (26 Radix packages)
✓ Found cn() utility
✓ Found mode/design-system
✓ Detected 6 code patterns
✓ Found existing CLAUDE.md
✓ Found .ai/ folder (12 files)
✓ Found prisma schema (28 models)
✓ Scanned 1572 files (11.0 MB, 365,599 lines)
✓ Found 17 barrel exports
✓ Found 15 hub files (most imported)
✓ Found 20 Props types
✓ Found 40 test files (12% component coverage)
✓ Generated AGENTS.md ~11K tokens (9% of 128K context)
```

Install

```bash
# Run directly (no install needed)
npx @jpoindexter/agent-smith

# Or install globally
npm install -g @jpoindexter/agent-smith
```

Usage

```bash
# Generate AGENTS.md in current directory
agentsmith

# Generate for a specific directory
agentsmith ./my-project

# Preview without writing (dry run)
agentsmith --dry-run

# Custom output file
agentsmith --output CONTEXT.md

# Force overwrite existing file
agentsmith --force
```

Output Modes

```bash
# Default - comprehensive output (~11K tokens)
agentsmith

# Compact - fewer details (~20% smaller)
agentsmith --compact

# Compress - signatures only (~40% smaller)
agentsmith --compress

# Minimal - ultra-compact (~3K tokens)
agentsmith --minimal

# XML format (industry standard, matches Repomix)
agentsmith --xml

# Include file tree visualization
agentsmith --tree
```
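As a toy illustration of the kind of file/line tally the scan output above reports (this is not agentsmith's actual implementation, and the extension list is an assumption):

```python
import os

def scan(root: str, exts=(".ts", ".tsx", ".js", ".jsx")) -> dict:
    """Walk a project tree and tally files, bytes, and lines for given extensions."""
    files = total_bytes = total_lines = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip directories a codebase scanner would normally ignore.
        dirnames[:] = [d for d in dirnames if d not in ("node_modules", ".git")]
        for name in filenames:
            if name.endswith(exts):
                with open(os.path.join(dirpath, name), "rb") as f:
                    data = f.read()
                files += 1
                total_bytes += len(data)
                total_lines += data.count(b"\n")
    return {"files": files, "bytes": total_bytes, "lines": total_lines}
```

The real tool layers framework detection, schema parsing, and token budgeting on top of a walk like this before emitting AGENTS.md.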

Mcp Server For Bitrix24

# mcp-bitrix24

MCP server for Bitrix24 Tasks, Workgroups, and Users. Implements MCP/JSON-RPC over STDIO.

## Features

- Tasks: create, update, close, reopen, list
- Workgroups: create, list
- Users: list, current user, available fields
- Task fields: available fields + validation for `create_task.fields`

## Requirements

- Node.js >= 18
- Bitrix24 webhook URL

## Install / Build

```bash
npm install
npm run build
```

Run via npm:

```bash
npx mcp-bitrix24
```

## Configuration

Set the Bitrix24 webhook URL via environment variable:

```
BITRIX24_WEBHOOK_URL=https://<your-domain>/rest/<user_id>/<webhook>/
```

Example Codex MCP config:

```toml
[mcp_servers.bitrix24]
command = "npx"
args = ["-y", "mcp-bitrix24"]

[mcp_servers.bitrix24.env]
BITRIX24_WEBHOOK_URL = "https://<your-domain>/rest/<user_id>/<webhook>/"
```

## Tools

### Tasks

- `create_task`
  - Input: `title` (string, required), `description?` (string), `responsible_id?` (number), `group_id?` (number), `fields?` (object)
  - Output: `{ task_id: number }`
  - Note: if `fields` is provided, keys are validated against `get_task_fields`.
- `update_task`
  - Input: `task_id` (number, required) + at least one of: `title?`, `description?`, `responsible_id?`, `group_id?`
  - Output: `{ task_id: number }`
- `close_task`
  - Input: `task_id` (number, required)
  - Output: `{ task_id: number }`
- `reopen_task`
  - Input: `task_id` (number, required)
  - Output: `{ task_id: number }`
- `list_tasks`
  - Input: `responsible_id?` (number), `group_id?` (number), `start?` (number), `limit?` (number)
  - Output: `{ tasks: [{ id, title, status }] }`
- `get_task_fields`
  - Input: `{}`
  - Output: `{ fields: { [field: string]: object } }`
- `list_task_history`
  - Input: `task_id` (number, required), `filter?` (object), `order?` (object)
  - Output: `{ list: [ { id, createdDate, field, value, user } ] }`

### Workgroups

- `create_group`
  - Input: `name` (string, required), `description?` (string)
  - Output: `{ group_id: number }`
- `list_groups`
  - Input: `limit?` (number)
  - Output: `{ groups: [{ id, name }] }`

### Users

- `list_users`
  - Input: `filter?` (object), `sort?` (string), `order?` ("ASC" | "DESC"), `admin_mode?` (boolean), `start?` (number), `limit?` (number)
  - Output: `{ users: [{ id, name, last_name, email?, active }] }`
  - Note: `filter` supports Bitrix24 `user.get` filters (including prefixes like `>=`, `%`, `@`, etc.). `start` controls paging (Bitrix returns 50 records per page); `limit` is a local slice applied after the API response.
- `get_user_fields`
  - Input: `{}`
  - Output: `{ fields: { [field: string]: string } }`
- `get_current_user`
  - Input: `{}`
  - Output: `{ user: { id, name, last_name, email?, active } }`

## Architecture

Clean architecture layers:

- `mcp/` — protocol, transport, server
- `adapters/` — MCP tools mapping to domain
- `domain/` — entities, services, ports
- `infrastructure/` — Bitrix24 REST client

## Development Notes

- Input validation uses `zod`.
- Transport: STDIO only.
- Build: `tsc` (`npm run build`).

## Contributing

See `CONTRIBUTING.md` for guidelines.
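Since the server speaks MCP/JSON-RPC over STDIO, a `create_task` invocation arrives as a standard MCP `tools/call` request. A representative payload (all values illustrative; `DEADLINE` stands in for whatever Bitrix24 task field key you pass through `fields`):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_task",
    "arguments": {
      "title": "Prepare Q3 report",
      "responsible_id": 7,
      "fields": { "DEADLINE": "2025-09-30T18:00:00+03:00" }
    }
  }
}
```

The server validates `arguments` with `zod`, checks the `fields` keys against `get_task_fields`, and replies with a result containing `{ task_id: number }`.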

Greb Mcp

GREB MCP Server

Semantic code search for AI agents without indexing your codebase or storing any data. Fast and accurate. Available on npm (cheetah-greb) and PyPI (cheetah-greb).

FEATURES

- Natural Language Search: Describe what you're looking for in plain English
- High-Precision Results: Smart ranking returns the most relevant code first
- Works with Any MCP Client: Claude Desktop, Cursor, Windsurf, Cline, Kiro, and more
- No Indexing Required: Search any codebase instantly without setup
- Fast: Results in under 5 seconds even for large repositories

INSTALLATION

Install Greb globally using pip or npm.

Python: pip install cheetah-greb
Node.js: npm install -g cheetah-greb

GET YOUR API KEY

1. Go to Dashboard > API Keys at https://grebmcp.com/dashboard/api-keys
2. Click "Create API Key"
3. Copy the key (starts with grb_)

CONFIGURATION

Add to your MCP client config (Cursor, Windsurf, Claude Desktop, Kiro, etc.):

Python installation:

{
  "mcpServers": {
    "greb-mcp": {
      "command": "greb-mcp",
      "env": { "GREB_API_KEY": "grb_your_api_key_here" }
    }
  }
}

Node.js installation:

{
  "mcpServers": {
    "greb-mcp": {
      "command": "greb-mcp-js",
      "env": { "GREB_API_KEY": "grb_your_api_key_here" }
    }
  }
}

CLAUDE CODE SETUP

Mac/Linux (Python):
claude mcp add --transport stdio greb-mcp --env GREB_API_KEY=grb_your_api_key_here -- greb-mcp

Windows PowerShell (Python):
claude mcp add greb-mcp greb-mcp --transport stdio --env "GREB_API_KEY=grb_your_api_key_here"

Mac/Linux (Node.js):
claude mcp add --transport stdio greb-mcp --env GREB_API_KEY=grb_your_api_key_here -- greb-mcp-js

Windows PowerShell (Node.js):
claude mcp add greb-mcp greb-mcp-js --transport stdio --env "GREB_API_KEY=grb_your_api_key_here"

TOOL: code_search

Search code using natural language queries powered by AI.
Parameters:

- query (string, required): Natural language search query
- keywords (object, required): Search configuration
  - keywords.primary_terms (string array, required): High-level semantic terms (e.g., "authentication", "database")
  - keywords.code_patterns (string array, optional): Literal code patterns to grep for
  - keywords.file_patterns (string array, required): File extensions to search (e.g., ["*.ts", "*.js"])
  - keywords.intent (string, required): Brief description of what you're looking for
- directory (string, required): Full absolute path to directory to search

Example:

{
  "query": "find authentication middleware",
  "keywords": {
    "primary_terms": ["authentication", "middleware", "jwt"],
    "code_patterns": ["authenticate(", "isAuthenticated"],
    "file_patterns": ["*.js", "*.ts"],
    "intent": "find auth middleware implementation"
  },
  "directory": "/Users/dev/my-project"
}

Response includes:

- File paths
- Line numbers
- Relevance scores
- Code content
- Reasoning for each match

USAGE EXAMPLES

Ask your AI assistant to search code naturally:

"Use greb mcp to find authentication middleware"
"Use greb mcp to find all API endpoints"
"Use greb mcp to look for database connection setup"
"Use greb mcp to find where user validation happens"
"Use greb mcp to search for error handling patterns"

LINKS

Website: https://grebmcp.com
Documentation: https://grebmcp.com/docs
Get API Key: https://grebmcp.com/dashboard/api-keys