
Tag: #semantic

59 results found

Greb MCP

GREB MCP Server: semantic code search for AI agents, without indexing your codebase or storing any data. Fast and accurate. Available on npm (cheetah-greb) and PyPI (cheetah-greb).

FEATURES

- Natural Language Search: Describe what you're looking for in plain English
- High-Precision Results: Smart ranking returns the most relevant code first
- Works with Any MCP Client: Claude Desktop, Cursor, Windsurf, Cline, Kiro, and more
- No Indexing Required: Search any codebase instantly, with no setup
- Fast: Results in under 5 seconds, even for large repositories

INSTALLATION

Install Greb globally using pip or npm.

Python:

```bash
pip install cheetah-greb
```

Node.js:

```bash
npm install -g cheetah-greb
```

GET YOUR API KEY

1. Go to Dashboard > API Keys at https://grebmcp.com/dashboard/api-keys
2. Click "Create API Key"
3. Copy the key (it starts with grb_)

CONFIGURATION

Add to your MCP client config (Cursor, Windsurf, Claude Desktop, Kiro, etc.).

Python installation:

```json
{
  "mcpServers": {
    "greb-mcp": {
      "command": "greb-mcp",
      "env": {
        "GREB_API_KEY": "grb_your_api_key_here"
      }
    }
  }
}
```

Node.js installation:

```json
{
  "mcpServers": {
    "greb-mcp": {
      "command": "greb-mcp-js",
      "env": {
        "GREB_API_KEY": "grb_your_api_key_here"
      }
    }
  }
}
```

CLAUDE CODE SETUP

Mac/Linux (Python):

```bash
claude mcp add --transport stdio greb-mcp --env GREB_API_KEY=grb_your_api_key_here -- greb-mcp
```

Windows PowerShell (Python):

```bash
claude mcp add greb-mcp greb-mcp --transport stdio --env "GREB_API_KEY=grb_your_api_key_here"
```

Mac/Linux (Node.js):

```bash
claude mcp add --transport stdio greb-mcp --env GREB_API_KEY=grb_your_api_key_here -- greb-mcp-js
```

Windows PowerShell (Node.js):

```bash
claude mcp add greb-mcp greb-mcp-js --transport stdio --env "GREB_API_KEY=grb_your_api_key_here"
```

TOOL: code_search

Search code using natural language queries powered by AI.

Parameters:

- query (string, required): Natural language search query
- keywords (object, required): Search configuration
  - keywords.primary_terms (string array, required): High-level semantic terms (e.g., "authentication", "database")
  - keywords.code_patterns (string array, optional): Literal code patterns to grep for
  - keywords.file_patterns (string array, required): File extensions to search (e.g., ["*.ts", "*.js"])
  - keywords.intent (string, required): Brief description of what you're looking for
- directory (string, required): Full absolute path of the directory to search

Example:

```json
{
  "query": "find authentication middleware",
  "keywords": {
    "primary_terms": ["authentication", "middleware", "jwt"],
    "code_patterns": ["authenticate(", "isAuthenticated"],
    "file_patterns": ["*.js", "*.ts"],
    "intent": "find auth middleware implementation"
  },
  "directory": "/Users/dev/my-project"
}
```

The response includes:

- File paths
- Line numbers
- Relevance scores
- Code content
- Reasoning for each match

USAGE EXAMPLES

Ask your AI assistant to search code naturally:

- "Use greb mcp to find authentication middleware"
- "Use greb mcp to find all API endpoints"
- "Use greb mcp to look for database connection setup"
- "Use greb mcp to find where user validation happens"
- "Use greb mcp to search for error handling patterns"

LINKS

- Website: https://grebmcp.com
- Documentation: https://grebmcp.com/docs
- Get API Key: https://grebmcp.com/dashboard/api-keys
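If you want to exercise the code_search tool outside of an AI assistant, the sketch below drives it through the official MCP Python SDK (pip install mcp). This is a minimal illustration, not part of Greb's documentation: it assumes the Python install (so the greb-mcp command is on PATH), and the API key and project directory are placeholders.

```python
# Minimal sketch: call Greb's code_search tool over stdio using the official
# MCP Python SDK. The API key and directory below are placeholders.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the greb-mcp server as a subprocess, passing the API key via env.
    server = StdioServerParameters(
        command="greb-mcp",
        env={**os.environ, "GREB_API_KEY": "grb_your_api_key_here"},
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Arguments mirror the tool schema documented above.
            result = await session.call_tool(
                "code_search",
                arguments={
                    "query": "find authentication middleware",
                    "keywords": {
                        "primary_terms": ["authentication", "middleware", "jwt"],
                        "code_patterns": ["authenticate(", "isAuthenticated"],
                        "file_patterns": ["*.js", "*.ts"],
                        "intent": "find auth middleware implementation",
                    },
                    "directory": "/Users/dev/my-project",
                },
            )
            # Each content block carries matches: paths, line numbers,
            # relevance scores, code, and reasoning.
            for block in result.content:
                print(block)


asyncio.run(main())
```

In normal use your MCP client (Claude Desktop, Cursor, etc.) performs this handshake for you; a direct session like this is mainly useful for debugging the server or scripting searches.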

CodeGraph MCP

# Transform any MCP-compatible LLM into a codebase expert through semantic intelligence

A blazingly fast GraphRAG implementation, 100% Rust, for indexing and querying large codebases with natural language. Supports multiple embedding modes:

- cpu: no graph, just AST parsing
- onnx: blazingly fast, medium-quality embeddings (Qdrant/all-MiniLM-L6-v2-onnx)
- ollama: time-consuming, SOTA embeddings (hf.co/nomic-ai/nomic-embed-code-GGUF:Q4_K_M)

I would argue this is the fastest codebase indexer on GitHub at the moment. It includes a stdio MCP server built with the Rust SDK, so your agents can query the indexed code graph in natural language and get deep insights from your codebase before starting development or making changes. Currently supports TypeScript, JavaScript, Rust, Go, Python, and C++ codebases.

πŸ“Š Performance Benchmarking (M4 Max, 128 GB)

Production codebase results (1,505 files, 2.5M lines; Python, JavaScript, TypeScript, and Go):

πŸ“Š Performance Summary

- πŸ“„ Files: 1,505 indexed
- πŸ“ Lines: 2,477,824 processed
- πŸ”§ Functions: 30,669 extracted
- πŸ—οΈ Classes: 880 extracted
- πŸ’Ύ Embeddings: 538,972 generated

Embedding Provider Performance Comparison

| Provider | Time | Quality | Use Case |
| --- | --- | --- | --- |
| 🧠 Ollama nomic-embed-code | ~15-18 h | SOTA retrieval accuracy | Production, smaller codebases |
| ⚑ ONNX all-MiniLM-L6-v2 | 32 m 22 s | Good general embeddings | Large codebases, lunch-break indexing |
| πŸ“š LEANN | ~4 h | The next best thing I could find on GitHub | |

CodeGraph Advantages

- βœ… Incremental Updates: Only reprocesses changed files (LEANN can't do this); see the sketch after this entry
- βœ… Provider Choice: Optimize for speed vs. quality based on your needs
- βœ… Memory Optimization: Automatic tuning based on your system
- βœ… Production Ready: Index 2.5M lines while having lunch

Read the README.md carefully: installation is complex. It requires downloading the embedding model in ONNX format, installing Ollama, and setting multiple environment variables (I would recommend setting these in your bash configuration).
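To make the incremental-updates advantage concrete, here is a minimal, generic sketch of change detection via content hashes: reprocess only files whose digest differs from the previous run. This illustrates the technique, not CodeGraph's actual implementation; the manifest path and helper names are hypothetical.

```python
# Generic incremental-indexing sketch: hash each source file and compare
# against a manifest saved by the previous run; only changed files need
# re-parsing and re-embedding. Not CodeGraph's actual code.
import hashlib
import json
from pathlib import Path

MANIFEST = Path(".index-manifest.json")  # hypothetical state file


def file_digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def changed_files(root: Path, suffixes: tuple[str, ...]) -> list[Path]:
    previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    current: dict[str, str] = {}
    dirty: list[Path] = []
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix not in suffixes:
            continue
        digest = file_digest(path)
        current[str(path)] = digest
        if previous.get(str(path)) != digest:
            dirty.append(path)  # new or modified since the last index run
    MANIFEST.write_text(json.dumps(current, indent=2))
    return dirty


if __name__ == "__main__":
    # Re-embed only what changed; everything else keeps its cached embeddings.
    for path in changed_files(Path("."), (".py", ".ts", ".js", ".rs", ".go", ".cpp")):
        print(f"reindex: {path}")
```

A real indexer would also evict manifest entries (and cached embeddings) for deleted files; the sketch omits that for brevity.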