
Tag: #code (348 results found)

Greb Mcp

GREB MCP Server: semantic code search for AI agents, without indexing your codebase or storing any data. Fast and accurate. Available on npm (cheetah-greb) and PyPI (cheetah-greb).

FEATURES
- Natural Language Search: Describe what you're looking for in plain English
- High-Precision Results: Smart ranking returns the most relevant code first
- Works with Any MCP Client: Claude Desktop, Cursor, Windsurf, Cline, Kiro, and more
- No Indexing Required: Search any codebase instantly, without setup
- Fast: Results in under 5 seconds, even for large repositories

INSTALLATION
Install Greb globally using pip or npm:

```
# Python
pip install cheetah-greb

# Node.js
npm install -g cheetah-greb
```

GET YOUR API KEY
1. Go to Dashboard > API Keys at https://grebmcp.com/dashboard/api-keys
2. Click "Create API Key"
3. Copy the key (starts with grb_)

CONFIGURATION
Add to your MCP client config (Cursor, Windsurf, Claude Desktop, Kiro, etc.).

Python installation:

```json
{
  "mcpServers": {
    "greb-mcp": {
      "command": "greb-mcp",
      "env": {
        "GREB_API_KEY": "grb_your_api_key_here"
      }
    }
  }
}
```

Node.js installation:

```json
{
  "mcpServers": {
    "greb-mcp": {
      "command": "greb-mcp-js",
      "env": {
        "GREB_API_KEY": "grb_your_api_key_here"
      }
    }
  }
}
```

CLAUDE CODE SETUP

```
# Mac/Linux (Python)
claude mcp add --transport stdio greb-mcp --env GREB_API_KEY=grb_your_api_key_here -- greb-mcp

# Windows PowerShell (Python)
claude mcp add greb-mcp greb-mcp --transport stdio --env "GREB_API_KEY=grb_your_api_key_here"

# Mac/Linux (Node.js)
claude mcp add --transport stdio greb-mcp --env GREB_API_KEY=grb_your_api_key_here -- greb-mcp-js

# Windows PowerShell (Node.js)
claude mcp add greb-mcp greb-mcp-js --transport stdio --env "GREB_API_KEY=grb_your_api_key_here"
```

TOOL: code_search
Search code using natural language queries powered by AI.

Parameters:
- query (string, required): Natural language search query
- keywords (object, required): Search configuration
- keywords.primary_terms (string array, required): High-level semantic terms (e.g., "authentication", "database")
- keywords.code_patterns (string array, optional): Literal code patterns to grep for
- keywords.file_patterns (string array, required): File extensions to search (e.g., ["*.ts", "*.js"])
- keywords.intent (string, required): Brief description of what you're looking for
- directory (string, required): Full absolute path to the directory to search

Example:

```json
{
  "query": "find authentication middleware",
  "keywords": {
    "primary_terms": ["authentication", "middleware", "jwt"],
    "code_patterns": ["authenticate(", "isAuthenticated"],
    "file_patterns": ["*.js", "*.ts"],
    "intent": "find auth middleware implementation"
  },
  "directory": "/Users/dev/my-project"
}
```

The response includes file paths, line numbers, relevance scores, code content, and the reasoning for each match (a sketch of this shape appears at the end of this entry).

USAGE EXAMPLES
Ask your AI assistant to search code naturally:
- "Use greb mcp to find authentication middleware"
- "Use greb mcp to find all API endpoints"
- "Use greb mcp to look for database connection setup"
- "Use greb mcp to find where user validation happens"
- "Use greb mcp to search for error handling patterns"

LINKS
Website: https://grebmcp.com
Documentation: https://grebmcp.com/docs
Get API Key: https://grebmcp.com/dashboard/api-keys
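The response fields above are listed only in prose, so here is a minimal, illustrative sketch of what a single code_search result might look like. The field names (results, file, line, score, content, reasoning) are assumptions made for illustration, not the documented Greb schema; see https://grebmcp.com/docs for the real one.

```jsonc
// Illustrative sketch only: field names are assumptions, not Greb's documented schema.
{
  "results": [
    {
      "file": "/Users/dev/my-project/src/middleware/auth.js",           // file path of the match
      "line": 42,                                                       // line number
      "score": 0.93,                                                    // relevance score
      "content": "function authenticate(req, res, next) { /* ... */ }", // matched code content
      "reasoning": "Defines the JWT-checking middleware the query asked for." // why it matched
    }
  ]
}
```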

Codegraph Mcp

# Transform any MCP-compatible LLM into a codebase expert through semantic intelligence

A blazingly fast graphRAG implementation, written in 100% Rust, for indexing and querying large codebases with natural language. Supports multiple embedding providers/modes:

- cpu: no graph, AST parsing only
- onnx: blazingly fast, medium-quality embeddings with Qdrant/all-MiniLM-L6-v2-onnx
- ollama: time-consuming but SOTA embeddings with hf.co/nomic-ai/nomic-embed-code-GGUF:Q4_K_M

I would argue this is the fastest codebase indexer on GitHub at the moment. It includes a stdio MCP server built with the Rust SDK, so your agents can query the indexed code graph with natural language and get deep insights into your codebase before starting development or making changes. Currently supports TypeScript, JavaScript, Rust, Go, Python, and C++ codebases.

📊 Performance Benchmarking (M4 Max 128GB)

Production codebase results (1,505 files, 2.5M lines; Python, JavaScript, TypeScript, and Go):

🎉 INDEXING COMPLETE! 📊 Performance Summary
- 📄 Files: 1,505 indexed
- 📝 Lines: 2,477,824 processed
- 🔧 Functions: 30,669 extracted
- 🏗️ Classes: 880 extracted
- 💾 Embeddings: 538,972 generated

Embedding Provider Performance Comparison

| Provider | Time | Quality | Use Case |
| --- | --- | --- | --- |
| 🧠 Ollama nomic-embed-code | ~15-18h | SOTA retrieval accuracy | Production, smaller codebases |
| ⚡ ONNX all-MiniLM-L6-v2 | 32m 22s | Good general embeddings | Large codebases, lunch-break indexing |
| 📚 LEANN | ~4h | | The next best thing I could find on GitHub |

CodeGraph Advantages
- ✅ Incremental Updates: Only reprocess changed files (LEANN can't do this)
- ✅ Provider Choice: Speed vs. quality optimization based on needs
- ✅ Memory Optimization: Automatic optimizations based on your system
- ✅ Production Ready: Index 2.5M lines while having lunch

Read the README.md carefully: installation is complex and requires downloading the embedding model in ONNX format, installing Ollama, and setting several environment variables (I recommend putting these in your bash configuration; a client-config sketch follows this entry).
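The installation note above mentions environment variables and a stdio MCP server, but not how a client connects to it. Below is a minimal sketch in the generic mcpServers shape most MCP clients use; the codegraph command, its arguments, and the variable names are placeholders I am assuming for illustration, so take the real ones from the project's README.

```jsonc
// Sketch only: the command, args, and environment variable names are assumptions;
// the actual names are documented in the CodeGraph README.
{
  "mcpServers": {
    "codegraph": {
      "command": "codegraph",
      "args": ["start", "mcp", "--stdio"],
      "env": {
        "CODEGRAPH_EMBEDDING_PROVIDER": "onnx",
        "CODEGRAPH_MODEL_PATH": "/path/to/all-MiniLM-L6-v2-onnx"
      }
    }
  }
}
```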

Codegraph Rust

🎯 Overview

CodeGraph is a powerful CLI tool that combines MCP (Model Context Protocol) server management with sophisticated code analysis capabilities. It provides a unified interface for indexing projects, managing embeddings, and running MCP servers with multiple transport options. All you need now is an agent (or several) to create your very own deep code and project knowledge synthesizer system!

Key Capabilities
- 🔍 Advanced Code Analysis: Parse and analyze code across multiple languages using Tree-sitter
- 🚄 Dual Transport Support: Run MCP servers with STDIO, HTTP, or both simultaneously
- 🎯 Vector Search: Semantic code search using FAISS-powered vector embeddings
- 📊 Graph-Based Architecture: Navigate code relationships with RocksDB-backed graph storage
- ⚡ High Performance: Optimized for large codebases with parallel processing and batched embeddings
- 🔧 Flexible Configuration: Extensive configuration options for embedding models and performance tuning

RAW PERFORMANCE ✨✨✨

170K lines of Rust code parsed in 0.49 s! 21,024 embeddings in 3:24 min! On an M3 Pro 32GB with Qdrant/all-MiniLM-L6-v2-onnx on CPU, no Metal acceleration used!

```
Parsing completed: 353/353 files, 169397 lines in 0.49s (714.5 files/s, 342852 lines/s)
[00:03:24] [########################################] 21024/21024 Embeddings complete
```

✨ Features

Core Features

Project Indexing
- Multi-language support (Rust, Python, JavaScript, TypeScript, Go, Java, C++)
- Incremental indexing with file watching
- Parallel processing with configurable workers
- Smart caching for improved performance

MCP Server Management
- STDIO transport for direct communication
- HTTP streaming with SSE support
- Dual transport mode for maximum flexibility (a config sketch follows this entry)
- Background daemon mode with PID management

Code Search
- Semantic search using embeddings
- Exact match and fuzzy search
- Regex and AST-based queries
- Configurable similarity thresholds

Architecture Analysis
- Component relationship mapping
- Dependency analysis
- Code pattern detection
- Architecture visualization support
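To illustrate the dual-transport point above, here is a sketch of how one client might reach the server over STDIO while another uses HTTP/SSE. The command, arguments, port, and URL path are assumptions, and the exact config keys vary between MCP clients; treat this as a shape, not the documented setup.

```jsonc
// Sketch only: command, args, port, and URL are assumptions; config keys vary by client.
{
  "mcpServers": {
    "codegraph-stdio": {
      "command": "codegraph",
      "args": ["start", "mcp", "--stdio"]
    },
    "codegraph-http": {
      "url": "http://localhost:3000/sse"
    }
  }
}
```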

Xcode Mcp Server (drewster99)

An MCP (Model Context Protocol) server for controlling and interacting with Xcode from AI assistants and LLMs such as Claude Code, Cursor, Claude Desktop, LM Studio, etc. This server significantly improves the build cycle: Claude (or your favorite tool) can command Xcode to build your project directly. Because Xcode itself does the building (rather than the xcodebuild command line or similar), the build happens exactly the same way as when you build in Xcode. xcode-mcp-server returns the relevant build errors and warnings to your coding tool (such as Cursor or Claude Code), so the LLM sees exactly the same errors you do (a minimal client-config sketch follows this entry).

Included tool functions (you don't really need to know this, because your coding LLM gets this info, and more detail, automatically, but it's here for the curious):

- version - Returns xcode-mcp-server's version string
- get_xcode_projects - Finds all .xcodeproj and .xcworkspace projects in the given search_path. If search_path is empty, all paths to which the tool has been granted access are searched
- get_project_hierarchy - Returns the path hierarchy of the project or workspace
- get_project_schemes - Returns a list of build schemes for the specified project
- build_project - Commands Xcode to build. This is the workhorse that builds your project again and again, returning success or build errors
- run_project - Commands Xcode to run your project
- get_build_errors - Returns the most recent build errors from the given project
- clean_project - Cleans the build
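The entry lists the tools but not how the server is registered with a client. Below is a minimal client-config sketch in the usual mcpServers shape; the xcode-mcp-server command name (and whether it takes arguments, for example to grant directory access) is an assumption, so follow the project's README for the actual launch command.

```jsonc
// Sketch only: the command name is an assumption; see the project's README for the
// real launch command and how access to project directories is granted.
{
  "mcpServers": {
    "xcode-mcp-server": {
      "command": "xcode-mcp-server"
    }
  }
}
```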