

Greb MCP

GREB MCP Server — semantic code search for AI agents without indexing your codebase or storing any data. Fast and accurate. Available on npm (cheetah-greb) and PyPI (cheetah-greb).

FEATURES
- Natural Language Search: Describe what you're looking for in plain English
- High-Precision Results: Smart ranking returns the most relevant code first
- Works with Any MCP Client: Claude Desktop, Cursor, Windsurf, Cline, Kiro, and more
- No Indexing Required: Search any codebase instantly without setup
- Fast: Results in under 5 seconds, even for large repositories

INSTALLATION
Install Greb globally using pip or npm.

Python:
```
pip install cheetah-greb
```

Node.js:
```
npm install -g cheetah-greb
```

GET YOUR API KEY
1. Go to Dashboard > API Keys at https://grebmcp.com/dashboard/api-keys
2. Click "Create API Key"
3. Copy the key (starts with grb_)

CONFIGURATION
Add to your MCP client config (Cursor, Windsurf, Claude Desktop, Kiro, etc.):

Python installation:
```json
{
  "mcpServers": {
    "greb-mcp": {
      "command": "greb-mcp",
      "env": { "GREB_API_KEY": "grb_your_api_key_here" }
    }
  }
}
```

Node.js installation:
```json
{
  "mcpServers": {
    "greb-mcp": {
      "command": "greb-mcp-js",
      "env": { "GREB_API_KEY": "grb_your_api_key_here" }
    }
  }
}
```

CLAUDE CODE SETUP

Mac/Linux (Python):
```
claude mcp add --transport stdio greb-mcp --env GREB_API_KEY=grb_your_api_key_here -- greb-mcp
```

Windows PowerShell (Python):
```
claude mcp add greb-mcp greb-mcp --transport stdio --env "GREB_API_KEY=grb_your_api_key_here"
```

Mac/Linux (Node.js):
```
claude mcp add --transport stdio greb-mcp --env GREB_API_KEY=grb_your_api_key_here -- greb-mcp-js
```

Windows PowerShell (Node.js):
```
claude mcp add greb-mcp greb-mcp-js --transport stdio --env "GREB_API_KEY=grb_your_api_key_here"
```

TOOL: code_search
Search code using natural language queries powered by AI.

Parameters:
- query (string, required): Natural language search query
- keywords (object, required): Search configuration
  - keywords.primary_terms (string array, required): High-level semantic terms (e.g., "authentication", "database")
  - keywords.code_patterns (string array, optional): Literal code patterns to grep for
  - keywords.file_patterns (string array, required): File extensions to search (e.g., ["*.ts", "*.js"])
  - keywords.intent (string, required): Brief description of what you're looking for
- directory (string, required): Full absolute path to the directory to search

Example:
```json
{
  "query": "find authentication middleware",
  "keywords": {
    "primary_terms": ["authentication", "middleware", "jwt"],
    "code_patterns": ["authenticate(", "isAuthenticated"],
    "file_patterns": ["*.js", "*.ts"],
    "intent": "find auth middleware implementation"
  },
  "directory": "/Users/dev/my-project"
}
```

The response includes:
- File paths
- Line numbers
- Relevance scores
- Code content
- Reasoning for each match

USAGE EXAMPLES
Ask your AI assistant to search code naturally:
- "Use greb mcp to find authentication middleware"
- "Use greb mcp to find all API endpoints"
- "Use greb mcp to look for database connection setup"
- "Use greb mcp to find where user validation happens"
- "Use greb mcp to search for error handling patterns"

LINKS
- Website: https://grebmcp.com
- Documentation: https://grebmcp.com/docs
- Get API Key: https://grebmcp.com/dashboard/api-keys
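For developers who want to exercise the tool outside a chat client, here is a minimal sketch assuming the official `mcp` Python SDK; the SDK usage is generic MCP client code, and the API key and project path are placeholders rather than values from the Greb documentation.

```python
# A minimal sketch, assuming the official `mcp` Python SDK (`pip install mcp`);
# the API key and the searched directory below are placeholders.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="greb-mcp",  # provided by `pip install cheetah-greb`
    env={**os.environ, "GREB_API_KEY": "grb_your_api_key_here"},
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "code_search",
                arguments={
                    "query": "find authentication middleware",
                    "keywords": {
                        "primary_terms": ["authentication", "middleware", "jwt"],
                        "code_patterns": ["authenticate(", "isAuthenticated"],
                        "file_patterns": ["*.js", "*.ts"],
                        "intent": "find auth middleware implementation",
                    },
                    "directory": "/Users/dev/my-project",
                },
            )
            # The result content carries file paths, line numbers, scores, and reasoning.
            for item in result.content:
                print(item)

asyncio.run(main())
```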

MCP-MESSENGER

**SlashMCP** is a production-grade AI workspace that connects LLMs to real-world data and tools through an intuitive chat interface. Built on the Model Context Protocol (MCP), it enables seamless interaction with multiple AI providers (OpenAI, Claude, Gemini) while providing powerful capabilities for document analysis, financial data queries, web scraping, and multi-agent workflow orchestration.

### Key Features:
- **Multi-LLM Support**: Switch between GPT-4, Claude, and Gemini at runtime with no restart needed
- **Smart Command Autocomplete**: Type `/` to discover and execute MCP server commands instantly
- **Document Intelligence**: Drag-and-drop documents with automatic OCR extraction and vision analysis
- **Financial Data Integration**: Real-time stock quotes, charts, and prediction market data via Alpha Vantage and Polymarket
- **Browser Automation**: Web scraping and navigation using Playwright MCP
- **Multi-Agent Orchestration**: Intelligent routing with specialized agents for command discovery, tool execution, and response synthesis
- **Dynamic MCP Registry**: Add and use any MCP server on the fly without code changes
- **Voice Interaction**: Browser-based transcription and text-to-speech support

### Use Cases:
- Research and analysis workflows
- Document processing and extraction
- Financial market monitoring
- Web data collection and comparison
- Multi-step task automation

**Live Demo:** [slashmcp.vercel.app](https://slashmcp.vercel.app)
**GitHub:** [github.com/mcpmessenger/slashmcp](https://github.com/mcpmessenger/slashmcp)
**Website:** [slashmcp.com](https://slashmcp.com)

Crawleo MCP Server

Real-time web search and crawling capabilities for AI assistants through the Model Context Protocol (MCP).
## Overview

Crawleo MCP enables AI assistants to access live web data through two powerful tools:
- web.search - Real-time web search with multiple output formats
- web.crawl - Deep content extraction from any URL

## Features
✅ Real-time web search from any country/language
✅ Multiple output formats - Enhanced HTML, Raw HTML, Markdown, Plain Text
✅ Device-specific results - Desktop, mobile, or tablet view
✅ Deep content extraction with JavaScript rendering
✅ Zero data retention - Complete privacy
✅ Auto-crawling option for search results

## Getting Your API Key
1. Visit crawleo.dev
2. Sign up for a free account
3. Navigate to your dashboard
4. Copy your API key (starts with sk_)

## Setup Instructions

### 1. Claude Desktop

Location of config file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json

Configuration:
```json
{
  "mcpServers": {
    "crawleo": {
      "url": "https://api.crawleo.dev/mcp",
      "transport": "http",
      "headers": { "Authorization": "Bearer YOUR_API_KEY_HERE" }
    }
  }
}
```
Replace `YOUR_API_KEY_HERE` with your actual API key from crawleo.dev.

**Steps:**
1. Open the config file in a text editor
2. Add the Crawleo MCP configuration
3. Save the file
4. Restart Claude Desktop completely (quit and reopen)
5. Start a new conversation and ask Claude to search the web!

**Example usage:**
```
"Search for the latest AI news and summarize the top 5 articles"
"Find Python web scraping tutorials and extract code examples"
```

### 2. Cursor IDE

Location of config file:
- macOS: ~/.cursor/config.json or ~/Library/Application Support/Cursor/config.json
- Windows: %APPDATA%\Cursor\config.json
- Linux: ~/.config/Cursor/config.json

Configuration:
```json
{
  "mcpServers": {
    "crawleo": {
      "url": "https://api.crawleo.dev/mcp",
      "transport": "http",
      "headers": { "Authorization": "Bearer YOUR_API_KEY_HERE" }
    }
  }
}
```

**Steps:**
1. Locate and open your Cursor config file
2. Add the Crawleo MCP configuration
3. Save the file
4. Restart Cursor
5. The MCP tools will be available in your AI assistant

**Example usage in Cursor:**
```
"Search for React best practices and add them to my code comments"
"Find the latest documentation for this API endpoint"
```

### 3. Windsurf IDE

Location of config file:
- macOS: ~/Library/Application Support/Windsurf/config.json
- Windows: %APPDATA%\Windsurf\config.json
- Linux: ~/.config/Windsurf/config.json

Configuration:
```json
{
  "mcpServers": {
    "crawleo": {
      "url": "https://api.crawleo.dev/mcp",
      "transport": "http",
      "headers": { "Authorization": "Bearer YOUR_API_KEY_HERE" }
    }
  }
}
```

**Steps:**
1. Open the Windsurf config file
2. Add the Crawleo MCP server configuration
3. Save and restart Windsurf
4. Start using web search in your coding workflow

### 4. GitHub Copilot

For GitHub Copilot in VS Code or compatible editors, configure MCP servers in your MCP config file.

Configuration (create or edit your MCP config file and add):
```json
{
  "servers": {
    "Crawleo": {
      "url": "https://api.crawleo.dev/mcp",
      "transport": "http",
      "headers": { "Authorization": "Bearer YOUR_API_KEY_HERE" }
    }
  }
}
```

**Steps:**
1. Open your GitHub Copilot MCP configuration
2. Add the Crawleo server configuration
3. Save the file
4. Restart VS Code or your IDE
5. GitHub Copilot can now use Crawleo for web searches!
**Example usage:**
```
Ask Copilot: "Search for the latest Python best practices"
Ask Copilot: "Find documentation for this library"
```

### 5. OpenAI Platform (Direct Integration)

OpenAI now supports MCP servers directly. Here's how to use Crawleo with OpenAI's API.

**Python example:**
```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "search for latest news about openai models"}
            ]
        }
    ],
    text={"format": {"type": "text"}, "verbosity": "medium"},
    reasoning={"effort": "medium"},
    tools=[
        {
            "type": "mcp",
            "server_label": "Crawleo",
            "server_url": "https://api.crawleo.dev/mcp",
            "server_description": "Crawleo MCP Server - Real-Time Web Knowledge for AI",
            "authorization": "YOUR_API_KEY_HERE",
            "allowed_tools": ["web.search", "web.crawl"],
            "require_approval": "always"
        }
    ],
    store=True,
    include=["reasoning.encrypted_content", "web_search_call.action.sources"]
)

print(response)
```

**Key parameters:**
- `server_url` - Crawleo MCP endpoint
- `authorization` - Your Crawleo API key
- `allowed_tools` - Enable web.search and/or web.crawl
- `require_approval` - Set to "always", "never", or "conditional"

**Node.js example:**
```javascript
import OpenAI from 'openai';

const client = new OpenAI();

const response = await client.responses.create({
  model: 'gpt-4',
  input: [
    {
      role: 'user',
      content: [{ type: 'input_text', text: 'search for latest AI developments' }]
    }
  ],
  tools: [
    {
      type: 'mcp',
      server_label: 'Crawleo',
      server_url: 'https://api.crawleo.dev/mcp',
      server_description: 'Crawleo MCP Server - Real-Time Web Knowledge for AI',
      authorization: 'YOUR_API_KEY_HERE',
      allowed_tools: ['web.search', 'web.crawl'],
      require_approval: 'always'
    }
  ]
});

console.log(response);
```

---

## Available Tools

### web.search

Search the web in real-time with customizable parameters.

**Parameters:**
- `query` *(required)* - Search term
- `max_pages` - Number of result pages (default: 1)
- `setLang` - Language code (e.g., "en", "ar")
- `cc` - Country code (e.g., "US", "EG")
- `device` - Device type: "desktop", "mobile", "tablet" (default: "desktop")
- `enhanced_html` - Get clean HTML (default: true)
- `raw_html` - Get raw HTML (default: false)
- `markdown` - Get Markdown format (default: true)
- `page_text` - Get plain text (default: false)
- `auto_crawling` - Auto-crawl result URLs (default: false)

**Example:**
```
Ask your AI: "Search for 'Python web scraping' and return results in Markdown"
```

### web.crawl

Extract content from specific URLs.

**Parameters:**
- `urls` *(required)* - List of URLs to crawl
- `rawHtml` - Return raw HTML (default: false)
- `markdown` - Convert to Markdown (default: false)
- `screenshot` - Capture screenshot (optional)
- `country` - Geographic location

**Example:**
```
Ask your AI: "Crawl https://example.com and extract the main content in Markdown"
```

---

## Troubleshooting

### MCP server not appearing
1. **Check config file location** - Make sure you're editing the correct file
2. **Verify JSON syntax** - Use a JSON validator to check for syntax errors (see the sketch at the end of this entry)
3. **Restart the application** - Completely quit and reopen (not just reload)
4. **Check API key** - Ensure your API key is valid and active at crawleo.dev

### Authentication errors
- Verify your API key is correct (it should start with `sk_`)
- Make sure the key is wrapped in quotes
- Check that the "Bearer " prefix is included in the Authorization header (for Claude/Cursor/Windsurf)
- For the OpenAI Platform, use the key directly in the `authorization` field
- Confirm your account has available credits at crawleo.dev

### No results returned
- Check your internet connection
- Verify the search query is not empty
- Try a simpler search query first
- Check API status at crawleo.dev

### Tool names not recognized
Make sure you're using the correct tool names:
- Use `web.search` (not `search_web`)
- Use `web.crawl` (not `crawl_web`)

---

## Usage Examples

### Research Assistant
```
"Search for recent developments in quantum computing and summarize the key findings"
```

### Content Analysis
```
"Search for competitor pricing pages and extract their pricing tiers"
```

### Code Documentation
```
"Find the official documentation for FastAPI and extract the quickstart guide"
```

### News Monitoring
```
"Search for today's news about artificial intelligence from US sources"
```

### Market Research
```
"Search for customer reviews of iPhone 15 and analyze sentiment"
```

## Pricing

Crawleo MCP uses the same affordable pricing as our API:
- 10,000 searches → $20
- 100,000 searches → $100
- 250,000 searches → $200

Check your usage and manage your subscription at crawleo.dev

## Privacy & Security
✅ Zero data retention - We never store your search queries or results
✅ Secure authentication - API keys transmitted over HTTPS
✅ No tracking - Your usage patterns remain private

## Support
- Documentation: crawleo.dev/docs
- API Status: crawleo.dev/status
- Contact: support@crawleo.dev

## Links
- 🌐 Website: crawleo.dev
- 📚 Documentation: crawleo.dev/docs
- 🔑 Get API Key: crawleo.dev

Built with ❤️ by Ahmed Ellaban. Empowering AI with real-time web knowledge.
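As a quick way to run the "Verify JSON syntax" troubleshooting check above, here is a small, hedged Python sketch that parses a Claude Desktop config and confirms the Crawleo entry looks like the examples in this README. The file path is the macOS location from the setup section (adjust for your OS), and the key names are taken from the example config rather than an official schema.

```python
import json
import pathlib

# Quick check for the "Verify JSON syntax" troubleshooting step. The path below
# is the macOS Claude Desktop location from the setup section; adjust for your OS.
config_path = pathlib.Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"

try:
    config = json.loads(config_path.read_text())
except json.JSONDecodeError as err:
    raise SystemExit(f"Invalid JSON in {config_path}: {err}")

crawleo = config.get("mcpServers", {}).get("crawleo")
if not crawleo or crawleo.get("url") != "https://api.crawleo.dev/mcp":
    raise SystemExit("Crawleo entry missing or pointing at the wrong URL")
if not crawleo.get("headers", {}).get("Authorization", "").startswith("Bearer "):
    print("Warning: Authorization header should look like 'Bearer sk_...'")
print("Config parsed OK; restart Claude Desktop to pick up the changes.")
```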

AgentPMT Agent Payment

AgentPMT - Empowering AI Agents with Secure Payment Capabilities

AgentPMT is the essential infrastructure layer that connects autonomous AI agents to the global digital economy. As businesses increasingly rely on AI agents to handle complex tasks, we solve a critical challenge: enabling these agents to securely transact and pay for services while maintaining human oversight and control.

Our Platform Features:
- Secure Digital Wallets: Automatically deployed wallets using Circle's enterprise infrastructure with institutional-grade security
- USDC Integration: Leverage the world's largest regulated digital dollar with 1:1 USD backing for stable, reliable transactions
- Granular Budget Controls: Create multiple budgets from a single wallet with customizable spending limits, vendor whitelists, and service restrictions
- Instant Settlement: Near-instantaneous blockchain payments on Base (Layer 2 Ethereum) with minimal fees and complete transparency
- Easy Integration: Connect in under 10 minutes via the MCP installer for Claude Desktop or direct API integration with any LLM
- Verifiable Records: Every transaction is recorded on-chain, providing immutable audit trails and complete accountability

Use Cases:
Whether your AI agent needs to purchase data feeds, pay for API calls, order supplies, or access premium services, AgentPMT provides the secure payment layer that makes it possible. Our vendor marketplace connects agents to a growing ecosystem of AI-enabled services without requiring separate accounts or subscriptions.

Built for the Agentic Economy:
As we enter an era where AI agents become essential business partners, AgentPMT ensures these digital workers can operate effectively in the real world. We're not just processing payments – we're enabling a future where human creativity and AI capability combine to achieve unprecedented productivity.

Tomba: Find, Verify, and Enrich Emails for MCP

A Model Context Protocol (MCP) server for integrating with the Tomba.io API. This server provides comprehensive email discovery, verification, and enrichment capabilities through a standardized MCP interface.

Features

Tools (8 available):
- Domain Search: Find all email addresses associated with a domain
- Email Finder: Generate likely email addresses from names and domains
- Email Verifier: Verify email deliverability and check database presence
- Email Enrichment: Enrich emails with additional contact data
- Author Finder: Discover email addresses of article authors
- LinkedIn Finder: Find emails from LinkedIn profile URLs
- Phone Finder: Search phone numbers by email, domain, or LinkedIn
- Phone Validator: Validate phone numbers and check carrier info

Resources (5 available):
- tomba://api/status - API status and account info
- tomba://domain/{domain} - Domain information
- tomba://email/{email} - Email information
- tomba://docs/api - API documentation
- tomba://docs/tools - Tools documentation

Prompts (7 pre-built workflows):
- find_contact - Find complete contact info for a person
- verify_email_list - Batch verify email addresses
- research_company - Research company contacts and structure
- enrich_lead - Enrich a lead with all available data
- find_journalists - Find journalist contacts from articles
- finder_phone - Find phone numbers for contacts
- validate_phone - Validate a phone number

Transport Options:
- stdio - Standard input/output (default, for Claude Desktop)
- http - HTTP server with REST endpoints

Installation

Prerequisites:
- Node.js 18 or higher
- npm or yarn
- Tomba API account (sign up at Tomba.io)
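As a hedged illustration of how an MCP client could discover these capabilities programmatically, the sketch below uses the `mcp` Python SDK to launch a stdio server, list its tools, and read the tomba://api/status resource. The launch command, package name, and credential variable names are placeholders for illustration only and are not taken from Tomba's documentation.

```python
# Illustrative sketch only, assuming the `mcp` Python SDK (`pip install mcp`).
# The server command, package name, and credential variables are hypothetical
# placeholders -- consult the Tomba MCP README for the real values.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="npx",
    args=["-y", "tomba-mcp-server"],  # hypothetical package name
    env={**os.environ, "TOMBA_API_KEY": "ta_xxx", "TOMBA_SECRET": "ts_xxx"},  # hypothetical variable names
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Tools:", [tool.name for tool in tools.tools])
            status = await session.read_resource("tomba://api/status")
            print("Account status:", status)

asyncio.run(main())
```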

Smart AI Bridge

Smart AI Bridge is a production-ready Model Context Protocol (MCP) server that orchestrates AI-powered development operations across multiple backends with automatic failover, smart routing, and advanced error-prevention capabilities.

Key Features

🤖 Multi-AI Backend Orchestration
- Pre-configured 4-Backend System: 1 local model + 3 cloud AI backends (fully customizable - bring your own providers)
- Fully Expandable: Add unlimited backends via the EXTENDING.md guide
- Intelligent Routing: Automatic backend selection based on task complexity and content analysis
- Health-Aware Failover: Circuit breakers with automatic fallback chains
- Bring Your Own Models: Configure any AI provider (local models, cloud APIs, custom endpoints)

🎨 Bring Your Own Backends: The system ships with an example configuration using local LM Studio and NVIDIA cloud APIs, but supports ANY AI providers - OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, custom APIs, or local models via Ollama/vLLM/etc. See EXTENDING.md for the integration guide.

🎯 Advanced Fuzzy Matching
- Three-Phase Matching: Exact (<5ms) → Fuzzy (<50ms) → Suggestions (<100ms)
- Error Prevention: 80% reduction in "text not found" errors
- Levenshtein Distance: Industry-standard similarity calculation
- Security Hardened: 9.7/10 security score with DoS protection
- Cross-Platform: Automatic Windows/Unix line-ending handling

🛠️ Comprehensive Toolset
- 19 Total Tools: 9 core tools + 10 intelligent aliases
- Code Review: AI-powered analysis with security auditing
- File Operations: Advanced read, edit, write with atomic transactions
- Multi-Edit: Batch operations with automatic rollback
- Validation: Pre-flight checks with fuzzy matching support

🔒 Enterprise Security
- Security Score: 9.7/10 with comprehensive controls
- DoS Protection: Complexity limits, iteration caps, timeout enforcement
- Input Validation: Type checking, structure validation, sanitization
- Metrics Tracking: Operation monitoring and abuse detection
- Audit Trail: Complete logging with error sanitization

🏆 Production Ready: 100% test coverage, enterprise-grade reliability, MIT licensed

🚀 Multi-Backend Architecture

Flexible 4-backend system pre-configured with 1 local + 3 cloud backends for maximum development efficiency. The architecture is fully expandable - see EXTENDING.md for adding additional backends.

🎯 Pre-configured AI Backends

The system comes with 4 specialized backends (fully expandable via EXTENDING.md):

Cloud Backend 1 - Coding Specialist (Priority 1)
- Specialization: Advanced coding, debugging, implementation
- Optimal For: JavaScript, Python, API development, refactoring, game development
- Routing: Automatic for coding patterns and task_type: 'coding'
- Example Providers: OpenAI GPT-4, Anthropic Claude, Qwen via NVIDIA API, Codestral, etc.

Cloud Backend 2 - Analysis Specialist (Priority 2)
- Specialization: Mathematical analysis, research, strategy
- Features: Advanced reasoning capabilities with thinking process
- Optimal For: Game balance, statistical analysis, strategic planning
- Routing: Automatic for analysis patterns and math/research tasks
- Example Providers: DeepSeek via NVIDIA/custom API, Claude Opus, GPT-4 Advanced, etc.

Local Backend - Unlimited Tokens (Priority 3)
- Specialization: Large context processing, unlimited capacity
- Optimal For: Processing large files (>50KB), extensive documentation, massive codebases
- Routing: Automatic for large prompts and unlimited token requirements
- Example Providers: Any local model via LM Studio, Ollama, vLLM - DeepSeek, Llama, Mistral, Qwen, etc.
Cloud Backend 3 - General Purpose (Priority 4)
- Specialization: General-purpose tasks, additional fallback capacity
- Optimal For: Diverse tasks, backup routing, multi-modal capabilities
- Routing: Fallback and general-purpose queries
- Example Providers: Google Gemini, Azure OpenAI, AWS Bedrock, Anthropic Claude, etc.

🎨 Example Configuration: The default setup uses LM Studio (local) + NVIDIA API (cloud), but you can configure ANY providers. See EXTENDING.md for step-by-step instructions on integrating OpenAI, Anthropic, Azure, AWS, or custom APIs.

🧠 Smart Routing Intelligence

Advanced content analysis with empirical learning:

```
// Smart Routing Decision Tree
if (prompt.length > 50,000)                → Local Backend (unlimited capacity)
else if (math/analysis patterns detected)  → Cloud Backend 2 (analysis specialist)
else if (coding patterns detected)         → Cloud Backend 1 (coding specialist)
else                                       → Default to Cloud Backend 1 (highest priority)
```

Pattern Recognition:
- Coding Patterns: function|class|debug|implement|javascript|python|api|optimize
- Math/Analysis Patterns: analyze|calculate|statistics|balance|metrics|research|strategy
- Large Context: File size >100KB or prompt length >50,000 characters
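To make the decision tree concrete, here is a small illustrative Python sketch of the routing rules described above. The thresholds and regexes mirror the "Pattern Recognition" list, but the function itself and the backend label strings are assumptions for illustration, not Smart AI Bridge's actual implementation.

```python
import re

# Patterns quoted from the "Pattern Recognition" list above.
CODING_PATTERNS = re.compile(r"function|class|debug|implement|javascript|python|api|optimize", re.I)
ANALYSIS_PATTERNS = re.compile(r"analyze|calculate|statistics|balance|metrics|research|strategy", re.I)

def route_prompt(prompt: str) -> str:
    """Illustrative routing: returns a backend label per the decision tree above."""
    if len(prompt) > 50_000:               # large context -> unlimited-token local model
        return "local-backend"
    if ANALYSIS_PATTERNS.search(prompt):   # math/analysis -> Cloud Backend 2
        return "cloud-backend-2-analysis"
    if CODING_PATTERNS.search(prompt):     # coding -> Cloud Backend 1
        return "cloud-backend-1-coding"
    return "cloud-backend-1-coding"        # default: highest-priority backend

# Example:
print(route_prompt("debug this python function"))    # -> cloud-backend-1-coding
print(route_prompt("analyze game balance metrics"))  # -> cloud-backend-2-analysis
```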