
CodeGraph MCP

Created by Jakedismo · 3 months ago

Transform any MCP-compatible LLM into a codebase expert through semantic intelligence.

A blazingly fast graphRAG implementation, 100% Rust, for indexing and querying large codebases with natural language. Three embedding modes are supported: cpu (no graph, AST parsing only), onnx (fast, medium-quality embeddings with Qdrant/all-MiniLM-L6-v2-onnx), and ollama (slower, SOTA embeddings with hf.co/nomic-ai/nomic-embed-code-GGUF:Q4_K_M). I would argue this is the fastest codebase indexer on GitHub at the moment.

Includes a stdio MCP server built with the Rust SDK, so your agents can query the indexed code graph with natural language and get deep insights from your codebase before starting development or making changes. Currently supports TypeScript, JavaScript, Rust, Go, Python, and C++ codebases.

Full benchmark numbers (M4 Max 128GB; 1,505 files, 2.5M lines across Python, JavaScript, TypeScript, and Go) and the embedding provider comparison are given in the Performance Benchmarking section below. Read the README carefully: installation is involved and requires downloading the embedding model in ONNX format, installing Ollama, and setting several environment variables (best added to your shell configuration).

CodeGraph MCP Intelligence Platform

๐Ÿš€ Revolutionary AI development intelligence platform with Qwen2.5-Coder-14B-128K integration

Transform any MCP-compatible LLM into a codebase expert through semantic intelligence

๐ŸŽฏ Revolutionary Overview

CodeGraph is an MCP-based codebase intelligence platform that transforms any compatible LLM (Claude 4 with 1M context, GPT-5, custom agents) into a codebase expert through advanced semantic analysis enhanced by Qwen2.5-Coder-14B-128K.

๐Ÿง  Core Innovation: MCP-First Intelligence

Architecture: Cloud LLMs โ†” MCP Protocol โ†” CodeGraph Server โ†” Qwen2.5-Coder-14B-128K

Any MCP-compatible AI agent can now:

  • Understand your specific codebase like a senior team member
  • Predict change impacts before modifications are made
  • Generate code following your team's exact patterns
  • Provide architectural insights impossible with generic AI

๐Ÿš€ Revolutionary Capabilities

  • ๐Ÿง  Semantic Intelligence: Qwen2.5-Coder-14B with 128K context for complete codebase understanding
  • โšก Impact Prediction: Shows what breaks BEFORE you make changes
  • ๐ŸŽฏ Team Intelligence: Learns and shares your team's coding patterns and conventions
  • ๐Ÿ’พ Intelligent Caching: Semantic similarity matching for 50-80% cache hit rates
  • ๐Ÿ“Š Pattern Detection: Analyzes team conventions with semantic analysis
  • ๐Ÿ”— MCP Protocol: Works with Claude Code, Codex CLI, Gemini CLI, Crush, Qwen-Code, and any MCP-compatible agent

๐ŸŒ Universal Programming Language Support

CodeGraph provides revolutionary AI intelligence across 11 programming languages, making it the most comprehensive local-first AI development platform available.

๐Ÿš€ Tier 1: Advanced Semantic Analysis (8 Languages)

Complete framework-aware semantic extractors with language-specific intelligence:

  • ๐Ÿฆ€ Rust - Complete ownership/borrowing analysis, trait relationships, async patterns, lifetimes
  • ๐Ÿ Python - Type hints, docstrings, dynamic analysis, framework detection
  • โšก JavaScript - Modern ES6+, async/await, functional patterns, React/Node.js intelligence
  • ๐Ÿ“˜ TypeScript - Type system analysis, generics, interface relationships, Angular/React patterns
  • ๐ŸŽ Swift - iOS/macOS development, SwiftUI patterns, protocol-oriented programming, Combine
  • ๐Ÿ”ท C# - .NET patterns, LINQ analysis, async/await, dependency injection, Entity Framework
  • ๐Ÿ’Ž Ruby - Rails patterns, metaprogramming, dynamic typing, gem analysis
  • ๐Ÿ˜ PHP - Laravel/Symfony patterns, namespace analysis, modern PHP features, Composer

๐Ÿ›  Tier 2: Basic Semantic Analysis (3 Languages)

Tree-sitter parsing with generic semantic extraction:

  • ๐Ÿน Go - Goroutines, interfaces, package management, concurrency patterns
  • โ˜• Java - OOP patterns, annotations, Spring framework detection, Maven/Gradle
  • โš™๏ธ C++ - Modern C++, templates, memory management patterns, CMake

๐Ÿ”ฎ Future Language Roadmap

Note: The gap between Tier 1 and Tier 2 will be eliminated in future updates. We're actively working on advanced semantic extractors for:

  • Kotlin (Android/JVM development) - In progress, version compatibility being resolved
  • Dart (Flutter/mobile development) - In progress, version compatibility being resolved
  • Zig (Systems programming)
  • Elixir (Functional/concurrent programming)
  • Haskell (Pure functional programming)

Adding new languages is now streamlined - each new language takes approximately 1-4 hours to implement with full semantic analysis.

๐ŸŽฏ Revolutionary MCP Tools (11 Available)

โœ… Available Immediately (No Model Required)

  • codegraph.pattern_detection: Team intelligence and coding convention analysis
  • vector.search: Advanced semantic search using FAISS + 90K lines of analysis
  • graph.neighbors & graph.traverse: Code relationship exploration
  • codegraph.performance_metrics: Real-time system monitoring
  • tools/list: MCP protocol compliance

๐Ÿง  Available Once Qwen2.5-Coder Downloads

  • codegraph.enhanced_search: Semantic search + AI analysis (2-3 seconds)
  • codegraph.semantic_intelligence: Comprehensive codebase analysis (4-6 seconds)
  • codegraph.impact_analysis: Revolutionary change impact prediction (3-5 seconds)
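
For reference, these tools follow the standard MCP JSON-RPC calling convention. The sketch below shows the kind of tools/call request an MCP client sends to the stdio server for codegraph.enhanced_search; the query and limit arguments here are illustrative assumptions, not the tool's authoritative schema.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "codegraph.enhanced_search",
    "arguments": {
      "query": "where is request authentication handled?",
      "limit": 5
    }
  }
}

In practice your MCP client (Claude Desktop, Claude Code, etc.) constructs this request for you after the usual initialize handshake; you never write it by hand.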

โšก Performance Achievements

Existing Performance (Proven)

Parsing: 170K lines in 0.49 seconds (342,852 lines/sec)
Embeddings: 21,024 embeddings in 3:24 minutes
Platform: M3 Pro 32GB (optimal for Qwen2.5-Coder-14B)

Revolutionary Performance (Validated)

TypeScript Extraction: 2,836 nodes from 2,871 lines (BREAKTHROUGH!)
Enhanced Search: 18s first run, cached for millisecond responses
Impact Analysis: 2.7s with structured risk assessment
Pattern Detection: Instant team intelligence analysis
Semantic Analysis: 90% confidence with 128K context window
Memory Usage: ~24GB VRAM (fits 32GB MacBook Pro perfectly)

Complete Local Stack Performance

Qwen2.5-Coder-14B-128K: SOTA code analysis (294-540 context tokens used)
nomic-embed-code: Code-specialized embeddings (3584 dimensions)
FAISS Indexing: High-performance vector search
Intelligent Caching: Semantic similarity matching for speed
Zero External Dependencies: 100% local processing

๐Ÿ“Š Performance Benchmarking (M4 Max 128GB)

Production Codebase Results (1,505 files, 2.5M lines)

๐ŸŽ‰ INDEXING COMPLETE!

๐Ÿ“Š Performance Summary
โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚ ๐Ÿ“„ Files:   1,505 indexed                       โ”‚
โ”‚ ๐Ÿ“ Lines: 2,477,824 processed                   โ”‚
โ”‚ ๐Ÿ”ง Functions:  30,669 extracted                 โ”‚
โ”‚ ๐Ÿ—๏ธ  Classes:      880 extracted                 โ”‚
โ”‚ ๐Ÿ’พ Embeddings: 538,972 generated                โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

Embedding Provider Performance Comparison

| Provider | Time | Quality | Use Case |
|----------|------|---------|----------|
| ๐Ÿง  Ollama nomic-embed-code | ~15-18h | SOTA retrieval accuracy | Production, smaller codebases |
| โšก ONNX all-MiniLM-L6-v2 | 32m 22s | Good general embeddings | Large codebases, lunch-break indexing |
| ๐Ÿ“š LEANN | ~4h | Next best thing I could find on GitHub | No incremental updates |

CodeGraph Advantages

  • โœ… Incremental Updates: Only reprocess changed files (LEANN can't do this)
  • โœ… Provider Choice: Speed vs. quality optimization based on needs
  • โœ… Memory Optimization: Automatic 128GB M4 Max scaling
  • โœ… Production Ready: Index 2.5M lines while having lunch
  • โœ… Revolutionary MCP: Any LLM becomes codebase expert
# Daily development: Speed-optimized for quick iterations
export CODEGRAPH_EMBEDDING_PROVIDER=onnx
./target/release/codegraph index . --recursive

# Production deployment: Code-specialized for maximum quality
export CODEGRAPH_EMBEDDING_PROVIDER=ollama
./target/release/codegraph index . --recursive

# Best of both: Switch providers based on task urgency

๐ŸŽฏ Success Indicators

โœ… Working Correctly When You See:

  • Build completes without FAISS or model errors
  • TypeScript indexing generates 100+ nodes (not 0)
  • MCP server shows "Qwen2.5-Coder availability: true"
  • Enhanced search returns comprehensive analysis in 3-20 seconds
  • Cache hit rates improve with repeated queries
  • Claude Desktop shows CodeGraph as connected MCP server

๐Ÿšจ Needs Attention When You See:

  • Build errors about missing FAISS libraries โ†’ Check installation steps
  • "0 nodes generated" โ†’ Language extraction issue (should be fixed!)
  • "Model not found" errors โ†’ Install required Ollama models
  • Response times >30 seconds โ†’ Memory pressure or model loading
  • Generic AI responses โ†’ Qwen not being used or context not loaded

๐Ÿ“ˆ Expected Results

First-Time Setup

  • Model download: 5-30 minutes (8.4GB + 274MB)
  • Initial build: 2-5 minutes with all features
  • First indexing: 1-10 seconds depending on codebase size
  • First analysis: 10-20 seconds (then cached for speed)

Daily Usage

  • Subsequent indexing: Sub-second for small changes
  • Cached responses: Milliseconds for repeated queries
  • New analysis: 3-10 seconds for comprehensive insights
  • Team intelligence: Instant pattern detection and recommendations

โœจ Features

Core Features

  • Universal Language Intelligence

    • 11 programming languages with revolutionary semantic analysis
    • Tier 1 Advanced Analysis: Rust, Python, JavaScript, TypeScript, Swift, C#, Ruby, PHP
    • Tier 2 Basic Analysis: Go, Java, C++
    • Framework-specific intelligence (SwiftUI, Rails, Laravel, .NET, etc.)
    • Incremental indexing with file watching
    • Parallel processing with configurable workers
    • Smart caching for improved performance
  • MCP Server Management

    • STDIO transport for direct communication
    • HTTP streaming with SSE support
    • Dual transport mode for maximum flexibility
    • Background daemon mode with PID management
  • Code Search

    • Semantic search using embeddings
    • Exact match and fuzzy search
    • Regex and AST-based queries
    • Configurable similarity thresholds
  • Architecture Analysis

    • Component relationship mapping
    • Dependency analysis
    • Code pattern detection
    • Architecture visualization support

๐Ÿ—๏ธ Architecture

CodeGraph System Architecture
โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚                   CLI Interface                     โ”‚
โ”‚                  (codegraph CLI)                    โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                           โ”‚
                           โ–ผ
โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚                   Core Engine                       โ”‚
โ”‚  โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”  โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”  โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”  โ”‚
โ”‚  โ”‚   Parser    โ”‚  โ”‚  Graph Store โ”‚  โ”‚   Vector   โ”‚  โ”‚ 
โ”‚  โ”‚(Tree-sitter)โ”‚  โ”‚  (RocksDB)   โ”‚  โ”‚   Search   โ”‚  โ”‚
โ”‚  โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜  โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜  โ”‚  (FAISS)   โ”‚  โ”‚
โ”‚                                     โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜  โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                           โ”‚
                           โ–ผ
โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚                  MCP Server Layer                   โ”‚
โ”‚  โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”  โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”  โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”  โ”‚
โ”‚  โ”‚    STDIO    โ”‚  โ”‚     HTTP     โ”‚  โ”‚    Dual    โ”‚  โ”‚
โ”‚  โ”‚  Transport  โ”‚  โ”‚  Transport   โ”‚  โ”‚    Mode    โ”‚  โ”‚
โ”‚  โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜  โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜  โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜  โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

๐Ÿง  Embeddings with ONNX Runtime (macOS)

  • Default provider: CPU EP. Works immediately with Homebrew onnxruntime.
  • Optional CoreML EP: Set CODEGRAPH_ONNX_EP=coreml to prefer CoreML when using an ONNX Runtime build that includes CoreML.
  • Fallback: If CoreML EP init fails, CodeGraph logs a warning and falls back to CPU.

How to use ONNX embeddings

# CPU-only (default)
export CODEGRAPH_EMBEDDING_PROVIDER=onnx
export CODEGRAPH_ONNX_EP=cpu
export CODEGRAPH_LOCAL_MODEL=/path/to/onnx-file

# CoreML (requires CoreML-enabled ORT build)
export CODEGRAPH_EMBEDDING_PROVIDER=onnx
export CODEGRAPH_ONNX_EP=coreml
export CODEGRAPH_LOCAL_MODEL=/path/to/onnx-file


# Install codegraph
cargo install --path crates/codegraph-mcp --features "embeddings,codegraph-vector/onnx,faiss"

Notes

  • ONNX Runtime on Apple platforms accelerates via CoreML, not Metal. If you need GPU acceleration on Apple Silicon, use CoreML where supported.
  • Some models/operators may still run on CPU if CoreML doesn't support them.

Enabling CoreML feature at build time

  • The CoreML registration path is gated by the Cargo feature onnx-coreml in codegraph-vector.
  • Build with: cargo build -p codegraph-vector --features "onnx,onnx-coreml"
  • In a full workspace build, enable it via your consuming crate's features or by adding: --features codegraph-vector/onnx,codegraph-vector/onnx-coreml.
  • You still need an ONNX Runtime library that was compiled with CoreML support; the feature only enables the registration call in our code.
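
Putting those pieces together, a full install with CoreML registration enabled might look like the following sketch (assuming your ONNX Runtime library was built with CoreML support; the feature spellings follow the commands above):

# Sketch: install the MCP crate with ONNX + CoreML registration enabled
cargo install --path crates/codegraph-mcp \
  --features "embeddings,codegraph-vector/onnx,codegraph-vector/onnx-coreml,faiss"

# Then prefer the CoreML execution provider at runtime
export CODEGRAPH_ONNX_EP=coreml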

๐Ÿ“ฆ Prerequisites

System Requirements

  • Operating System: Linux, macOS, or Windows
  • Rust: 1.75 or higher
  • Memory: Minimum 4GB RAM (8GB recommended for large codebases)
  • Disk Space: 1GB for installation + space for indexed data

Required Dependencies

# macOS
brew install cmake clang

# Ubuntu/Debian
sudo apt-get update
sudo apt-get install cmake clang libssl-dev pkg-config

# Fedora/RHEL
sudo dnf install cmake clang openssl-devel

Optional Dependencies

  • FAISS (for vector search acceleration)
    # macOS (required for FAISS feature)
    brew install faiss
    
    # Ubuntu/Debian
    sudo apt-get install libfaiss-dev
    
    # Fedora/RHEL
    sudo dnf install faiss-devel
    
  • Local Embeddings (Hugging Face + Candle, or ONNX Runtime with the CoreML/CPU execution provider; CPU/Metal/CUDA backends)
    • Enables on-device embedding generation (no external API calls)
    • Downloads models from HuggingFace Hub on first run and caches them locally
    • Internet access required for the initial model download (or pre-populate cache)
    • Default runs on CPU; advanced GPU backends (CUDA/Metal) require appropriate hardware and drivers
  • CUDA (for GPU-accelerated embeddings)
  • Git (for repository integration)

๐Ÿš€ Performance Benchmarks - pure raw speed!

Run repeatable, end-to-end benchmarks that measure indexing speed (with local embeddings + FAISS), vector search latency, and graph traversal throughput.

For reference, indexing this repository with the example configuration yields the following:

2025-09-19T14:27:46.632335Z  INFO codegraph_parser::parser: Parsing completed: 361/361 files, 119401 lines in 0.08s (4485.7 files/s, 1483642 lines/s)
[00:00:51] [########################################] 14096/14096 Embeddings complete

Apple MacBook Pro M4 Max, 128 GB (2025), ONNX provider

Build with performance features

Pick one of the local embedding backends and enable FAISS:

# Option A: ONNX Runtime (CoreML on macOS, CPU otherwise)
cargo install --path crates/codegraph-mcp --features "embeddings,codegraph-vector/onnx,faiss"

# Option B: Local HF + Candle (CPU/Metal/CUDA)
cargo install --path crates/codegraph-mcp --features "embeddings-local,faiss"

Configure local embedding backend

ONNX (CoreML/CPU):

pip install "huggingface_hub[cli]"   # provides the hf CLI
hf auth login
hf download Qdrant/all-MiniLM-L6-v2
# Note the download path printed above
# Best to add these exports to your shell profile
export CODEGRAPH_EMBEDDING_PROVIDER=onnx
# macOS: use CoreML
export CODEGRAPH_ONNX_EP=coreml   # or cpu
export CODEGRAPH_LOCAL_MODEL=/path/to/model/dir   # the model directory, not the .onnx file itself

Local HF + Candle (CPU/Metal/CUDA):

export CODEGRAPH_EMBEDDING_PROVIDER=local
# device: cpu | metal | cuda:<id>
export CODEGRAPH_LOCAL_MODEL=Qdrant/all-MiniLM-L6-v2

Run the benchmark

# Cold run (cleans .codegraph), warmup queries + timed trials
codegraph perf . \
  --langs rust,ts,go \
  --warmup 3 --trials 20 \
  --batch-size 512 --device metal \
  --clean --format json

What it measures

  • Indexing: total time to parse -> embed -> build FAISS (global + shards)
  • Embedding throughput: embeddings per second
  • Vector search: latency (avg/p50/p95) across repeated queries
  • Graph traversal: BFS depth=2 micro-benchmark

Sample output (numbers will vary by machine and codebase)

{
  "env": {
    "embedding_provider": "local",
    "device": "metal",
    "features": { "faiss": true, "embeddings": true }
  },
  "dataset": {
    "path": "/repo/large-project",
    "languages": ["rust","ts","go"],
    "files": 18234,
    "lines": 2583190
  },
  "indexing": {
    "total_seconds": 186.4,
    "embeddings": 53421,
    "throughput_embeddings_per_sec": 286.6
  },
  "vector_search": {
    "queries": 100,
    "latency_ms": { "avg": 18.7, "p50": 12.3, "p95": 32.9 }
  },
  "graph": {
    "bfs_depth": 2,
    "visited_nodes": 1000,
    "elapsed_ms": 41.8
  }
}

Tips for reproducibility

  • Use --clean for cold start numbers, and run a second time for warm cache numbers.
  • Close background processes that may compete for CPU/GPU.
  • Pin versions: rustc --version, FAISS build, and the embedding model.
  • Record the host: CPU/GPU, RAM, storage, OS version.
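
A small helper along these lines can capture those host and toolchain details next to each benchmark run (macOS commands shown as an assumption; substitute the Linux equivalents as needed):

# Record toolchain and host details alongside the benchmark output
{
  echo "rustc:  $(rustc --version)"
  echo "faiss:  $(brew list --versions faiss 2>/dev/null || echo 'not installed via brew')"
  echo "model:  ${CODEGRAPH_LOCAL_MODEL:-$CODEGRAPH_EMBEDDING_MODEL}"
  echo "os:     $(sw_vers -productVersion 2>/dev/null || uname -sr)"
  echo "cpu:    $(sysctl -n machdep.cpu.brand_string 2>/dev/null || uname -m)"
  echo "ram_gb: $(( $(sysctl -n hw.memsize 2>/dev/null || echo 0) / 1073741824 ))"
} > bench-environment.txt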

๐Ÿš€ Complete Installation Guide

Prerequisites

  • Hardware: 32GB RAM recommended (24GB minimum)
  • OS: macOS 11.0+ (or Linux with FAISS support)
  • Rust: 1.75+ with Cargo
  • Ollama: For local model serving

Step 1: Install System Dependencies

# macOS: Install FAISS for vector search
brew install faiss

# Verify FAISS installation
ls /opt/homebrew/opt/faiss/lib/

# Install Ollama for local models
curl -fsSL https://ollama.com/install.sh | sh
ollama serve &

Step 2: Install SOTA Models

# Install Qwen2.5-Coder-14B-128K (SOTA code analysis)
ollama pull hf.co/unsloth/Qwen2.5-Coder-14B-Instruct-128K-GGUF:Q4_K_M

# Install nomic-embed-code (SOTA code embeddings)
ollama pull hf.co/nomic-ai/nomic-embed-code-GGUF:Q4_K_M

# Verify models installed
ollama list | grep -E "qwen|nomic"

Step 3: Build CodeGraph with Complete Features

# Build with all revolutionary features
LIBRARY_PATH="/opt/homebrew/opt/faiss/lib:$LIBRARY_PATH" \
LD_LIBRARY_PATH="/opt/homebrew/opt/faiss/lib:$LD_LIBRARY_PATH" \
MACOSX_DEPLOYMENT_TARGET=11.0 \
cargo build --release -p codegraph-mcp \
  --features "qwen-integration,faiss,embeddings,embeddings-ollama,codegraph-vector/onnx"

# Verify build
./target/release/codegraph --version

Step 4: Environment Configuration

SOTA accuracy for small codebases:

# Configure for complete local stack
export CODEGRAPH_MODEL="hf.co/unsloth/Qwen2.5-Coder-14B-Instruct-128K-GGUF:Q4_K_M"
export CODEGRAPH_EMBEDDING_PROVIDER=ollama
export CODEGRAPH_EMBEDDING_MODEL=nomic-embed-code
export RUST_LOG=off

Blazing speed for large codebases:

# Configure for complete local stack
export CODEGRAPH_MODEL="hf.co/unsloth/Qwen2.5-Coder-14B-Instruct-128K-GGUF:Q4_K_M"
export CODEGRAPH_EMBEDDING_PROVIDER=onnx
export CODEGRAPH_EMBEDDING_MODEL=path/to/your/embedding_model_onnx_folder
export RUST_LOG=off

๐Ÿš€ Revolutionary Quick Start

Step 1: Initialize Your Project

# Navigate to your codebase
cd /path/to/your/project

# Initialize CodeGraph (creates .codegraph directory)
/path/to/codegraph-rust/target/release/codegraph init .

# Expected output:
# โœ“ Created .codegraph/config.toml
# โœ“ Created .codegraph/db/
# โœ“ Created .codegraph/vectors/
# โœ“ Created .codegraph/cache/

Step 2: Index Your Codebase (Optimized for Your System)

# Automatic optimization for 128GB M4 Max (recommended)
LIBRARY_PATH="/opt/homebrew/opt/faiss/lib:$LIBRARY_PATH" \
LD_LIBRARY_PATH="/opt/homebrew/opt/faiss/lib:$LD_LIBRARY_PATH" \
CODEGRAPH_EMBEDDING_PROVIDER=ollama \
CODEGRAPH_EMBEDDING_MODEL="hf.co/nomic-ai/nomic-embed-code-GGUF:Q4_K_M" \
./target/release/codegraph index . --recursive --languages typescript,javascript,rust,python

# Expected beautiful output:
# ๐Ÿš€ High-memory system detected (128GB) - performance optimized!
# Workers: 4 โ†’ 16 (optimized)
# Batch size: 100 โ†’ 20480 (optimized)
# ๐Ÿ’พ Memory capacity: ~20480 embeddings per batch
# ๐Ÿ“„ Parsing Files | Languages: typescript,javascript,rust,python
# ๐Ÿ’พ ๐Ÿš€ Ultra-High Performance (20K batch) | 95% success rate

# Custom high-performance indexing with large batches
./target/release/codegraph index . --recursive --batch-size 10240 --languages typescript,javascript

# Maximum performance for 128GB+ systems
./target/release/codegraph index . --recursive --batch-size 20480 --workers 16 --languages typescript,rust,python,go

Performance Expectations (128GB M4 Max)

โœ… Workers: Auto-optimized to 16 (4x parallelism)
โœ… Batch Size: Auto-optimized to 20,480 embeddings
โœ… Processing Speed: 150,000+ lines/second
โœ… Memory Utilization: Optimized for available capacity
โœ… Progress Visualization: Dual bars with success rates
โœ… Beautiful Output: Clean professional experience

Step 3: Start Revolutionary MCP Server

# Start MCP server for Claude Desktop/GPT-4 integration
CODEGRAPH_MODEL="hf.co/unsloth/Qwen2.5-Coder-14B-Instruct-128K-GGUF:Q4_K_M" \
RUST_LOG=error \
./target/release/codegraph start stdio

# Expected output:
# โœ… Qwen2.5-Coder-14B-128K available for CodeGraph intelligence
# โœ… Intelligent response cache initialized
# MCP server ready for connections

Step 4: Configure Claude Desktop

Add to your Claude Desktop configuration:

{
  "mcpServers": {
    "codegraph": {
      "command": "/path/to/codegraph-rust/target/release/codegraph",
      "args": ["start", "stdio"],
      "cwd": "/path/to/your/project",
      "env": {
        "RUST_LOG": "error",
        "CODEGRAPH_MODEL": "hf.co/unsloth/Qwen2.5-Coder-14B-Instruct-128K-GGUF:Q4_K_M",
        "CODEGRAPH_EMBEDDING_PROVIDER": "ollama"
      }
    }
  }
}

Step 5: Experience Revolutionary AI

Restart Claude Desktop and test:

"Analyze the coding patterns and architecture in this codebase"
โ†’ Claude gets team intelligence from your semantic analysis

"What would happen if I modify the authentication system?"
โ†’ Claude predicts impact before you make changes

"Find all GraphQL-related code and explain the patterns"
โ†’ Claude uses code-specialized search with perfect relevance

๐Ÿš€ High-Memory System Optimization

128GB M4 Max (Your System) - Ultra-High Performance

# Automatic optimization (recommended)
./target/release/codegraph index . --recursive --languages typescript,javascript,rust,python

# Expected optimization:
# ๐Ÿš€ High-memory system detected (128GB) - performance optimized!
# Workers: 4 โ†’ 16 (optimized)
# Batch size: 100 โ†’ 20480 (optimized)

# Custom ultra-high performance
./target/release/codegraph index . --batch-size 20480 --workers 16 --recursive

# Maximum performance testing
./target/release/codegraph index . --batch-size 40960 --workers 16 --recursive

Memory-Based Auto-Optimization

128GB+ Systems (M4 Max):
  Workers: 16 (maximum parallelism)
  Batch Size: 20,480 embeddings
  Memory Utilization: Ultra-high performance

64-95GB Systems:
  Workers: 12 (high parallelism)
  Batch Size: 10,240 embeddings
  Memory Utilization: High performance

32-63GB Systems:
  Workers: 8 (medium parallelism)
  Batch Size: 2,048 embeddings
  Memory Utilization: Balanced performance

16-31GB Systems:
  Workers: 6 (conservative)
  Batch Size: 512 embeddings
  Memory Utilization: Memory-conscious
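
These defaults are applied automatically, but the same settings can be pinned explicitly with the --workers and --batch-size flags when you want identical behaviour across machines. For example, to force the 32-63GB profile on any host:

# Force the 32-63GB profile regardless of detected memory
./target/release/codegraph index . --recursive --workers 8 --batch-size 2048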

Quality of Life Features

  • Dual Progress Bars: Files processed + success rates
  • Memory Detection: Automatic system optimization
  • Beautiful Output: Unicode progress bars and colored status
  • Performance Metrics: Real-time speed, ETA, and success rates
  • Intelligent Defaults: Respects user choices while optimizing

๐Ÿ“Š Embedding Provider Options

export CODEGRAPH_EMBEDDING_PROVIDER=ollama
export CODEGRAPH_EMBEDDING_MODEL="hf.co/nomic-ai/nomic-embed-code-GGUF:Q4_K_M"

# Benefits:
# - Code-specialized understanding (768-dim vectors)
# - Superior semantic search relevance
# - Local processing, zero external dependencies
# - Perfect for your 128GB M4 Max with large batches

ONNX (Alternative - Speed Optimized)

export CODEGRAPH_EMBEDDING_PROVIDER=onnx
export CODEGRAPH_LOCAL_MODEL=sentence-transformers/all-MiniLM-L6-v2

# Benefits:
# - Faster embedding generation
# - Lower memory usage
# - Good general-purpose embeddings
# - Better for smaller memory systems

Enabling Local Embeddings (Optional)

If you want to use a local embedding model (Hugging Face) instead of remote providers:

  1. Build with the local embeddings feature for crates that use vector search (the API and/or the CLI server). The ONNX build is recommended for better performance; see the beginning of the README for installation instructions.
# Build API with local embeddings enabled
cargo build -p codegraph-api --features codegraph-vector/local-embeddings

# (Optional) If your CLI server crate depends on vector features, enable similarly:
cargo build -p core-rag-mcp-server --features codegraph-vector/local-embeddings
  2. Set environment variables to switch the provider at runtime:
export CODEGRAPH_EMBEDDING_PROVIDER=local
# Optional: choose a specific HF model (must provide onnx model)
export CODEGRAPH_LOCAL_MODEL=path/to/Qdrant/all-MiniLM-L6-v2
  3. Run as usual (the first run will download model files from Hugging Face and cache them locally):
cargo run -p codegraph-api --features codegraph-vector/local-embeddings

Model cache locations:

  • Default Hugging Face cache: ~/.cache/huggingface (or $HF_HOME) via hf-hub
  • You can pre-populate this cache to run offline after the first download
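
To pre-populate the cache for offline use, one approach is to fetch the model once with the Hugging Face CLI (shown earlier) and keep HF_HOME pointed at the same location on the offline machine; the exact directory layout inside the cache is managed by hf-hub.

# One-time download into the Hugging Face cache, then run offline afterwards
export HF_HOME="$HOME/.cache/huggingface"   # default location; override if needed
hf download Qdrant/all-MiniLM-L6-v2
export CODEGRAPH_EMBEDDING_PROVIDER=local
export CODEGRAPH_LOCAL_MODEL=Qdrant/all-MiniLM-L6-v2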

Method 2: Install Pre-built Binary

# Download the latest release
curl -L https://github.com/jakedismo/codegraph-cli-mcp/releases/latest/download/codegraph-$(uname -s)-$(uname -m).tar.gz | tar xz

# Move to PATH
sudo mv codegraph /usr/local/bin/

# Verify installation
codegraph --version

Method 3: Using Cargo

# Install directly from crates.io (when published)
cargo install codegraph-mcp

# Verify installation
codegraph --version

๐ŸŽฏ Quick Start

1. Initialize a New Project

# Initialize CodeGraph in current directory
codegraph init

# Initialize with project name
codegraph init --name my-project

2. Index Your Codebase

# Index current directory
codegraph index .

# Index with specific languages (expanded support)
codegraph index . --languages rust,python,typescript,swift,csharp,ruby,php

# Or with more options on macOS
RUST_LOG=info,codegraph_vector=debug codegraph index . --workers 10 --batch-size 256 --max-seq-len 512 --force

# Index with file watching
codegraph index . --watch

3. Start MCP Server

# Start with STDIO transport (default)
codegraph start stdio

# Start with HTTP transport
codegraph start http --port 3000

# Start with dual transport
codegraph start dual --port 3000

(Optional) Start with Local Embeddings

# Build with the feature (see installation step above), then:
export CODEGRAPH_EMBEDDING_PROVIDER=local
export CODEGRAPH_LOCAL_MODEL=Qdrant/all-MiniLM-L6-v2
cargo run -p codegraph-api --features codegraph-vector/local-embeddings

4. Search Your Code

# Semantic search
codegraph search "authentication handler"

# Exact match search
codegraph search "fn authenticate" --search-type exact

# AST-based search
codegraph search "function with async keyword" --search-type ast

๐Ÿ“– CLI Commands

Global Options

codegraph [OPTIONS] <COMMAND>

Options:
  -v, --verbose         Enable verbose logging
  --config <PATH>       Configuration file path
  -h, --help           Print help
  -V, --version        Print version

Command Reference

init - Initialize CodeGraph Project

codegraph init [OPTIONS] [PATH]

Arguments:
  [PATH]               Project directory (default: current directory)

Options:
  --name <NAME>        Project name
  --non-interactive    Skip interactive setup

start - Start MCP Server

codegraph start <TRANSPORT> [OPTIONS]

Transports:
  stdio                STDIO transport (default)
  http                 HTTP streaming transport
  dual                 Both STDIO and HTTP

Options:
  --config <PATH>      Server configuration file
  --daemon             Run in background
  --pid-file <PATH>    PID file location

HTTP Options:
  -h, --host <HOST>    Host to bind (default: 127.0.0.1)
  -p, --port <PORT>    Port to bind (default: 3000)
  --tls                Enable TLS/HTTPS
  --cert <PATH>        TLS certificate file
  --key <PATH>         TLS key file
  --cors               Enable CORS

stop - Stop MCP Server

codegraph stop [OPTIONS]

Options:
  --pid-file <PATH>    PID file location
  -f, --force          Force stop without graceful shutdown

status - Check Server Status

codegraph status [OPTIONS]

Options:
  --pid-file <PATH>    PID file location
  -d, --detailed       Show detailed status information

index - Index Project

codegraph index <PATH> [OPTIONS]

Arguments:
  <PATH>               Path to project directory

Options:
  -l, --languages <LANGS>     Languages to index (comma-separated)
  --exclude <PATTERNS>        Exclude patterns (gitignore format)
  --include <PATTERNS>        Include only these patterns
  -r, --recursive             Recursively index subdirectories
  --force                     Force reindex
  --watch                     Watch for changes
  --workers <N>               Number of parallel workers (default: 4)

search - Search Indexed Code

codegraph search <QUERY> [OPTIONS]

Arguments:
  <QUERY>              Search query

Options:
  -t, --search-type <TYPE>    Search type (semantic|exact|fuzzy|regex|ast)
  -l, --limit <N>             Maximum results (default: 10)
  --threshold <FLOAT>         Similarity threshold 0.0-1.0 (default: 0.7)
  -f, --format <FORMAT>       Output format (human|json|yaml|table)

config - Manage Configuration

codegraph config <ACTION> [OPTIONS]

Actions:
  show                 Show current configuration
  set <KEY> <VALUE>    Set configuration value
  get <KEY>            Get configuration value
  reset                Reset to defaults
  validate             Validate configuration

Options:
  --json               Output as JSON (for 'show')
  -y, --yes            Skip confirmation (for 'reset')

stats - Show Statistics

codegraph stats [OPTIONS]

Options:
  --index              Show index statistics
  --server             Show server statistics
  --performance        Show performance metrics
  -f, --format <FMT>   Output format (table|json|yaml|human)

clean - Clean Resources

codegraph clean [OPTIONS]

Options:
  --index              Clean index database
  --vectors            Clean vector embeddings
  --cache              Clean cache files
  --all                Clean all resources
  -y, --yes            Skip confirmation prompt

โš™๏ธ Configuration

Configuration File Structure

Create a .codegraph/config.toml file:

# General Configuration
[general]
project_name = "my-project"
version = "1.0.0"
log_level = "info"

# Indexing Configuration
[indexing]
languages = ["rust", "python", "typescript", "javascript", "go", "swift", "csharp", "ruby", "php"]
exclude_patterns = ["**/node_modules/**", "**/target/**", "**/.git/**"]
include_patterns = ["src/**", "lib/**"]
recursive = true
workers = 10
watch_enabled = false
incremental = true

# Embedding Configuration
[embedding]
model = "local"  # Options: openai, local, custom
dimension = 1536
batch_size = 512
cache_enabled = true
cache_size_mb = 500

# Vector Search Configuration
[vector]
index_type = "flat"  # Options: flat, ivf, hnsw
nprobe = 10
similarity_metric = "cosine"  # Options: cosine, euclidean, inner_product

# Database Configuration
[database]
path = "~/.codegraph/db"
cache_size_mb = 128
compression = true
write_buffer_size_mb = 64

# Server Configuration
[server]
default_transport = "stdio"
http_host = "127.0.0.1"
http_port = 3005
enable_tls = false
cors_enabled = true
max_connections = 100

# Performance Configuration
[performance]
max_file_size_kb = 1024
parallel_threads = 8
memory_limit_mb = 2048
optimization_level = "balanced"  # Options: speed, balanced, memory

Environment Variables

# Override configuration with environment variables
export CODEGRAPH_LOG_LEVEL=debug
export CODEGRAPH_DB_PATH=/custom/path/db
export CODEGRAPH_EMBEDDING_MODEL=local
export CODEGRAPH_HTTP_PORT=8080

Embedding Model Configuration

OpenAI Embeddings

[embedding.openai]
api_key = "${OPENAI_API_KEY}"  # Use environment variable
model = "text-embedding-3-large"
dimension = 3072

Local Embeddings

[embedding.local]
model_path = "~/.codegraph/models/codestral.gguf"
device = "cpu"  # Options: cpu, cuda, metal
context_length = 8192

๐Ÿ“š User Workflows

Workflow 1: Complete Project Setup and Analysis

# Step 1: Initialize project
codegraph init --name my-awesome-project

# Step 2: Configure settings
codegraph config set embedding.model local
codegraph config set performance.optimization_level speed

# Step 3: Index the codebase (universal language support)
codegraph index . --languages rust,python,swift,csharp,ruby,php --recursive

# Step 4: Start MCP server
codegraph start http --port 3000 --daemon

# Step 5: Search and analyze
codegraph search "database connection" --limit 20
codegraph stats --index --performance

Workflow 2: Continuous Development with Watch Mode

# Start indexing with watch mode
codegraph index . --watch --workers 8 &

# Start MCP server in dual mode
codegraph start dual --daemon

# Monitor changes
codegraph status --detailed

# Search while developing
codegraph search "TODO" --search-type exact

Workflow 3: Integration with AI Tools

# Start MCP server for Claude Desktop or VS Code
codegraph start stdio

# Configure for AI assistant integration
cat > ~/.codegraph/mcp-config.json << EOF
{
  "name": "codegraph-server",
  "version": "1.0.0",
  "tools": [
    {
      "name": "analyze_architecture",
      "description": "Analyze codebase architecture"
    },
    {
      "name": "find_patterns",
      "description": "Find code patterns and anti-patterns"
    }
  ]
}
EOF

Workflow 4: Large Codebase Optimization

# Optimize for large codebases
codegraph config set performance.memory_limit_mb 8192
codegraph config set vector.index_type ivf
codegraph config set database.compression true

# Index with optimizations
codegraph index /path/to/large/project \
  --workers 16 \
  --exclude "**/test/**,**/vendor/**"

# Use batch operations
codegraph search "class.*Controller" --search-type regex --limit 100

๐Ÿ”Œ Integration Guide

Integrating with Claude Desktop

  1. Add to Claude Desktop configuration:
{
  "mcpServers": {
    "codegraph": {
      "command": "codegraph",
      "args": ["start", "stdio"],
      "env": {
        "CODEGRAPH_CONFIG": "~/.codegraph/config.toml"
      }
    }
  }
}
  2. Restart Claude Desktop to load the MCP server

Integrating with VS Code

  1. Install the MCP extension for VS Code
  2. Add to VS Code settings:
{
  "mcp.servers": {
    "codegraph": {
      "command": "codegraph",
      "args": ["start", "stdio"],
      "rootPath": "${workspaceFolder}"
    }
  }
}

API Integration

import requests
import json

# Connect to HTTP MCP server
base_url = "http://localhost:3000"

# Index a project
response = requests.post(f"{base_url}/index", json={
    "path": "/path/to/project",
    "languages": ["python", "javascript"]
})

# Search code
response = requests.post(f"{base_url}/search", json={
    "query": "async function",
    "limit": 10
})

results = response.json()

Using with CI/CD

# GitHub Actions example
name: CodeGraph Analysis

on: [push, pull_request]

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      
      - name: Install CodeGraph
        run: |
          cargo install codegraph-mcp
      
      - name: Index Codebase
        run: |
          codegraph init --non-interactive
          codegraph index . --languages rust,python
      
      - name: Run Analysis
        run: |
          codegraph stats --index --format json > analysis.json
      
      - name: Upload Results
        uses: actions/upload-artifact@v2
        with:
          name: codegraph-analysis
          path: analysis.json

๐Ÿ”ง Troubleshooting

Common Issues and Solutions

Issue: Server fails to start

Solution:

# Check if port is already in use
lsof -i :3000

# Kill existing process
codegraph stop --force

# Start with different port
codegraph start http --port 3001

Issue: Indexing is slow

Solution:

# Increase workers
codegraph index . --workers 16

# Exclude unnecessary files
codegraph index . --exclude "**/node_modules/**,**/dist/**"

# Use incremental indexing
codegraph config set indexing.incremental true

Issue: Out of memory during indexing

Solution:

# Reduce batch size
codegraph config set embedding.batch_size 50

# Limit memory usage
codegraph config set performance.memory_limit_mb 1024

# Use streaming mode
codegraph index . --streaming

Issue: Vector search returns poor results

Solution:

# Adjust similarity threshold
codegraph search "query" --threshold 0.5

# Re-index with better embeddings
codegraph config set embedding.model openai
codegraph index . --force

# Use different search type
codegraph search "query" --search-type fuzzy

Issue: Hugging Face model fails to download

Solution:

# Ensure you have internet access and the model name is correct
export CODEGRAPH_LOCAL_MODEL=Qdrant/all-MiniLM-L6-v2

# If the model is private, set a HF token (if required by your environment)
export HF_TOKEN=your_hf_access_token

# Clear/inspect cache (default): ~/.cache/huggingface
ls -lah ~/.cache/huggingface

# Note: models must include safetensors weights; PyTorch .bin-only models are not supported by the local loader here

Issue: Local embeddings are slow

Solution:

# Reduce batch size via config or environment (CPU defaults prioritize stability), e.g.:
codegraph config set embedding.batch_size 128

# Consider a smaller model (e.g., all-MiniLM-L6-v2) or enabling GPU backends:
export CODEGRAPH_LOCAL_MODEL=Qdrant/all-MiniLM-L6-v2

# For Apple Silicon (Metal) or CUDA, additional wiring can be enabled in config.
# The current default uses CPU; contact maintainers to enable device selectors in your environment.

Issue: FAISS linking error during cargo install

Error: ld: library 'faiss_c' not found

Solution:

# On macOS: Install FAISS via Homebrew
brew install faiss

# Set library paths and retry installation
export LIBRARY_PATH="/opt/homebrew/opt/faiss/lib:$LIBRARY_PATH"
export LD_LIBRARY_PATH="/opt/homebrew/opt/faiss/lib:$LD_LIBRARY_PATH"

# Retry the cargo install command
cargo install --path crates/codegraph-mcp --features "embeddings,codegraph-vector/onnx,faiss"

Alternative Solution:

# On Ubuntu/Debian
sudo apt-get update
sudo apt-get install libfaiss-dev

# On Fedora/RHEL
sudo dnf install faiss-devel

# Then retry cargo install
cargo install --path crates/codegraph-mcp --features "embeddings,codegraph-vector/onnx,faiss"

Debug Mode

Enable debug logging for troubleshooting:

# Set debug log level
export RUST_LOG=debug
codegraph --verbose index .

# Check logs
tail -f ~/.codegraph/logs/codegraph.log

Health Checks

# Check system health
codegraph status --detailed

# Validate configuration
codegraph config validate

# Test database connection
codegraph test db

# Verify embeddings
codegraph test embeddings

๐Ÿค Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

# Clone repository
git clone https://github.com/jakedismo/codegraph-cli-mcp.git
cd codegraph-cli-mcp

# Install development dependencies
cargo install cargo-watch cargo-nextest

# Run tests
cargo nextest run

# Run with watch mode
cargo watch -x check -x test

๐Ÿ“„ License

This project is dual-licensed under MIT and Apache 2.0 licenses. See LICENSE-MIT and LICENSE-APACHE for details.

๐Ÿ™ Acknowledgments


Made with โค๏ธ by the CodeGraph Team

Server Config

{
  "mcpServers": {
    "codegraph": {
      "command": "codegraph",
      "args": [
        "start",
        "stdio"
      ],
      "env": {
        "RUST_LOG": "error",
        "CODEGRAPH_MODEL": "hf.co/unsloth/Qwen2.5-Coder-14B-Instruct-128K-GGUF:Q4_K_M"
      }
    }
  }
}