CodeRAG

Lightning-fast hybrid code search for AI assistants


Zero dependencies · <50ms search · Hybrid TF-IDF + Vector · MCP ready

Quick Start · Features · MCP Setup · API


Why CodeRAG?

Traditional code search tools are either slow (full-text grep), inaccurate (keyword matching), or complex (require external services).

CodeRAG is different:

❌ Old way: Docker + ChromaDB + Ollama + 30 second startup
✅ CodeRAG: npx @sylphx/coderag-mcp (instant)
| Feature | grep/ripgrep | Cloud RAG | CodeRAG |
|---|---|---|---|
| Semantic understanding | ❌ | ✅ | ✅ |
| Zero external deps | ✅ | ❌ | ✅ |
| Offline support | ✅ | ❌ | ✅ |
| Startup time | Instant | 10-30s | <1s |
| Search latency | ~100ms | ~500ms | <50ms |

✨ Features

  • 🔍 Hybrid Search - TF-IDF + optional vector embeddings
  • 🧠 StarCoder2 Tokenizer - Code-aware tokenization (4.7MB, trained on code)
  • 📊 Smoothed IDF - No term gets ignored, stable ranking
  • <50ms Latency - Instant results even on large codebases
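
The "smoothed IDF" point means rare and ubiquitous terms alike keep a nonzero weight, so ranking stays stable. The exact formula CodeRAG uses is not documented here; a minimal sketch using the common add-one smoothing:

```typescript
// Smoothed inverse document frequency: add-one smoothing keeps the weight
// positive even for a term that appears in every document, so no term's
// contribution collapses to zero (unlike plain log(N / df)).
function smoothedIdf(totalDocs: number, docsWithTerm: number): number {
  return Math.log((totalDocs + 1) / (docsWithTerm + 1)) + 1;
}

// A term in all 100 documents still gets weight 1 instead of 0,
// while rarer terms are weighted higher.
console.log(smoothedIdf(100, 100)); // 1
console.log(smoothedIdf(100, 1) > smoothedIdf(100, 50)); // true
```
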

Indexing

  • 🚀 1000-2000 files/sec - Fast initial indexing
  • 💾 SQLite Persistence - Instant startup (<100ms) with cached index
  • Incremental Updates - Smart diff detection, no full rebuilds
  • 👁️ File Watching - Real-time index updates on file changes
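
"Smart diff detection" typically means comparing content fingerprints from the last run against the current scan and re-indexing only the difference. This is a hypothetical sketch of that idea, not CodeRAG's actual implementation:

```typescript
import { createHash } from "node:crypto";

// Map of file path -> sha256 of its content, as persisted from the last run.
type HashMap = Map<string, string>;

function hashContent(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

// Compare the previous hash map against freshly read contents: files whose
// hash differs (or are new) need re-indexing; files that vanished need removal.
function diffIndex(previous: HashMap, current: Map<string, string>) {
  const changed: string[] = [];
  const removed: string[] = [];
  for (const [path, content] of current) {
    if (previous.get(path) !== hashContent(content)) changed.push(path);
  }
  for (const path of previous.keys()) {
    if (!current.has(path)) removed.push(path);
  }
  return { changed, removed };
}

const prev: HashMap = new Map([["a.ts", hashContent("old")]]);
const now = new Map([["a.ts", "new"], ["b.ts", "added"]]);
console.log(diffIndex(prev, now)); // changed: ["a.ts", "b.ts"], removed: []
```
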

Integration

  • 📦 MCP Server - Works with Claude Desktop, Cursor, VS Code, Windsurf
  • 🧠 Vector Search - Optional OpenAI embeddings for semantic search
  • 🌳 AST Chunking - Smart code splitting using Synth parsers
  • 💻 Low Memory Mode - SQL-based search for resource-constrained environments
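
The point of AST chunking is that each indexed chunk is a coherent code unit rather than an arbitrary fixed-size window. As a rough illustration only (CodeRAG uses Synth parsers; a regex cannot handle nesting, arrow functions, or comments), here is a naive splitter at top-level declaration boundaries:

```typescript
// Naive illustration: split TypeScript source wherever a new top-level
// function/class/interface declaration begins, so each chunk holds one unit.
function chunkByDeclaration(source: string): string[] {
  const lines = source.split("\n");
  const chunks: string[] = [];
  let current: string[] = [];
  for (const line of lines) {
    const startsDeclaration =
      /^(export\s+)?(async\s+)?(function|class|interface)\b/.test(line);
    if (startsDeclaration && current.length > 0) {
      chunks.push(current.join("\n"));
      current = [];
    }
    current.push(line);
  }
  if (current.length > 0) chunks.push(current.join("\n"));
  return chunks;
}

const src = `function a() {\n  return 1\n}\nexport class B {}\n`;
console.log(chunkByDeclaration(src).length); // 2
```
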

🚀 Quick Start

Option 1: As an MCP Server

npx @sylphx/coderag-mcp --root=/path/to/project

Or add to your MCP config:

{
  "mcpServers": {
    "coderag": {
      "command": "npx",
      "args": ["-y", "@sylphx/coderag-mcp", "--root=/path/to/project"]
    }
  }
}

See MCP Server Setup for Claude Desktop, Cursor, VS Code, etc.

Option 2: As a Library

npm install @sylphx/coderag
# or
bun add @sylphx/coderag

import { CodebaseIndexer, PersistentStorage } from '@sylphx/coderag'

// Create indexer with persistent storage
const storage = new PersistentStorage({ codebaseRoot: './my-project' })
const indexer = new CodebaseIndexer({
  codebaseRoot: './my-project',
  storage,
})

// Index codebase (instant on subsequent runs)
await indexer.index({ watch: true })

// Search
const results = await indexer.search('authentication logic', { limit: 10 })
console.log(results)
// [{ path: 'src/auth/login.ts', score: 0.87, matchedTerms: ['authentication', 'logic'], snippet: '...' }]

📦 Packages

| Package | Description | Install |
|---|---|---|
| @sylphx/coderag | Core search library | npm i @sylphx/coderag |
| @sylphx/coderag-mcp | MCP server for AI assistants | npx @sylphx/coderag-mcp |

🔌 MCP Server Setup

Claude Desktop

Add to claude_desktop_config.json:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "coderag": {
      "command": "npx",
      "args": ["-y", "@sylphx/coderag-mcp", "--root=/path/to/project"]
    }
  }
}

Cursor

Add to ~/.cursor/mcp.json (macOS) or %USERPROFILE%\.cursor\mcp.json (Windows):

{
  "mcpServers": {
    "coderag": {
      "command": "npx",
      "args": ["-y", "@sylphx/coderag-mcp", "--root=/path/to/project"]
    }
  }
}

VS Code

Add to VS Code settings (JSON) or .vscode/mcp.json:

{
  "mcp": {
    "servers": {
      "coderag": {
        "command": "npx",
        "args": ["-y", "@sylphx/coderag-mcp", "--root=${workspaceFolder}"]
      }
    }
  }
}

Windsurf

Add to ~/.codeium/windsurf/mcp_config.json:

{
  "mcpServers": {
    "coderag": {
      "command": "npx",
      "args": ["-y", "@sylphx/coderag-mcp", "--root=/path/to/project"]
    }
  }
}

Claude Code

claude mcp add coderag -- npx -y @sylphx/coderag-mcp --root=/path/to/project

Search Tool

Search project source files with hybrid TF-IDF + vector ranking.

Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| query | string | Yes | - | Search query |
| limit | number | No | 10 | Max results |
| include_content | boolean | No | true | Include code snippets |
| file_extensions | string[] | No | - | Filter by extension (e.g., [".ts", ".tsx"]) |
| path_filter | string | No | - | Filter by path pattern |
| exclude_paths | string[] | No | - | Exclude paths (e.g., ["node_modules", "dist"]) |

Example

{
  "query": "user authentication login",
  "limit": 5,
  "file_extensions": [".ts", ".tsx"],
  "exclude_paths": ["node_modules", "dist", "test"]
}

Response Format

# 🔍 Codebase Search Results

**Query:** "user authentication login"
**Results:** 3 / 500 files

## 1. `src/auth/login.ts`

- **Score:** 0.87
- **Language:** TypeScript
- **Matched Terms:** authentication, login, user

**Snippet:**
```typescript
15: export async function authenticate(credentials) {
16:   const user = await findUser(credentials.email)
17:   return validatePassword(user, credentials.password)
```

📚 API Reference

CodebaseIndexer

Main class for indexing and searching.

import { CodebaseIndexer, PersistentStorage } from '@sylphx/coderag'

const storage = new PersistentStorage({ codebaseRoot: './project' })
const indexer = new CodebaseIndexer({
  codebaseRoot: './project',
  storage,
  maxFileSize: 1024 * 1024, // 1MB default
})

// Index with file watching
await indexer.index({ watch: true })

// Search with options
const results = await indexer.search('query', {
  limit: 10,
  includeContent: true,
  fileExtensions: ['.ts', '.js'],
  excludePaths: ['node_modules'],
})

// Stop watching
await indexer.stopWatch()

PersistentStorage

SQLite-backed storage for instant startup.

import { PersistentStorage } from '@sylphx/coderag'

const storage = new PersistentStorage({
  codebaseRoot: './project',  // Creates .coderag/ folder
  dbPath: './custom.db',      // Optional custom path
})

Low-Level TF-IDF Functions

import { buildSearchIndex, searchDocuments, initializeTokenizer } from '@sylphx/coderag'

// Initialize StarCoder2 tokenizer (4.7MB, one-time download)
await initializeTokenizer()

// Build index
const documents = [
  { uri: 'file://auth.ts', content: 'export function authenticate...' },
  { uri: 'file://user.ts', content: 'export class User...' },
]
const index = await buildSearchIndex(documents)

// Search
const results = await searchDocuments('authenticate user', index, { limit: 5 })

Vector Search (Optional)

For semantic search with embeddings:

import { hybridSearch, createEmbeddingProvider } from '@sylphx/coderag'

// Requires OPENAI_API_KEY environment variable
const results = await hybridSearch('authentication flow', indexer, {
  vectorWeight: 0.7,  // 70% vector, 30% TF-IDF
  limit: 10,
})
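
The vectorWeight option suggests a weighted linear fusion of the two score sources. CodeRAG's exact fusion formula is not shown here, so this sketch assumes simple min-max normalization followed by a weighted sum:

```typescript
// Rescale a path -> score map to [0, 1] so the two score sources are comparable.
function normalize(scores: Map<string, number>): Map<string, number> {
  const values = [...scores.values()];
  const min = Math.min(...values);
  const range = Math.max(...values) - min || 1;
  return new Map(
    [...scores].map(([path, s]) => [path, (s - min) / range] as [string, number]),
  );
}

// Hypothetical fusion: vectorWeight * vectorScore + (1 - vectorWeight) * tfidfScore,
// over the union of paths returned by either ranker, sorted best-first.
function fuse(
  tfidf: Map<string, number>,
  vector: Map<string, number>,
  vectorWeight = 0.7,
): [string, number][] {
  const t = normalize(tfidf);
  const v = normalize(vector);
  const paths = new Set([...t.keys(), ...v.keys()]);
  return [...paths]
    .map((p): [string, number] => [
      p,
      vectorWeight * (v.get(p) ?? 0) + (1 - vectorWeight) * (t.get(p) ?? 0),
    ])
    .sort((a, b) => b[1] - a[1]);
}

const fused = fuse(
  new Map([["a.ts", 2], ["b.ts", 1]]),
  new Map([["b.ts", 0.9], ["c.ts", 0.2]]),
);
console.log(fused[0][0]); // "b.ts" — top of both normalized vector and fused score
```

With vectorWeight at 0.7, a file that ranks highly in the embedding space can outrank the best TF-IDF match, which is the intended behavior for semantic queries.
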

⚙️ Configuration

MCP Server Options

| Option | Default | Description |
|---|---|---|
| --root=<path> | Current directory | Codebase root path |
| --max-size=<bytes> | 1048576 (1MB) | Max file size to index |
| --no-auto-index | false | Disable auto-indexing on startup |

Environment Variables

| Variable | Description |
|---|---|
| OPENAI_API_KEY | Enable vector search with OpenAI embeddings |
| OPENAI_BASE_URL | Custom OpenAI-compatible endpoint |
| EMBEDDING_MODEL | Embedding model (default: text-embedding-3-small) |
| EMBEDDING_DIMENSIONS | Custom embedding dimensions |

📊 Performance

| Metric | Value |
|---|---|
| Initial indexing | ~1000-2000 files/sec |
| Startup with cache | <100ms |
| Search latency | <50ms |
| Memory per 1000 files | ~1-2 MB |
| Tokenizer size | 4.7MB (StarCoder2) |

Benchmarks

Tested on MacBook Pro M1, 16GB RAM:

| Codebase | Files | Index Time | Search Time |
|---|---|---|---|
| Small | 100 | 0.5s | <10ms |
| Medium | 1,000 | 2s | <30ms |
| Large | 10,000 | 15s | <50ms |

🏗️ Architecture

coderag/
├── packages/
│   ├── core/                     # @sylphx/coderag
│   │   ├── src/
│   │   │   ├── indexer.ts           # Main indexer with file watching
│   │   │   ├── tfidf.ts             # TF-IDF with StarCoder2 tokenizer
│   │   │   ├── code-tokenizer.ts    # StarCoder2 tokenization
│   │   │   ├── hybrid-search.ts     # Vector + TF-IDF fusion
│   │   │   ├── incremental-tfidf.ts # Smart incremental updates
│   │   │   ├── storage-persistent.ts # SQLite storage
│   │   │   ├── vector-storage.ts    # LanceDB vector storage
│   │   │   ├── embeddings.ts        # OpenAI embeddings
│   │   │   └── ast-chunking.ts      # Synth AST chunking
│   │   └── package.json
│   │
│   └── mcp-server/               # @sylphx/coderag-mcp
│       ├── src/
│       │   └── index.ts             # MCP server
│       └── package.json

How It Works

  1. Indexing: Scans codebase, tokenizes with StarCoder2, builds TF-IDF index
  2. Storage: Persists to SQLite (.coderag/ folder) for instant startup
  3. Watching: Detects file changes, performs incremental updates
  4. Search: Hybrid TF-IDF + optional vector search with score fusion

🔧 Development

# Clone
git clone https://github.com/SylphxAI/coderag.git
cd coderag

# Install
bun install

# Build
bun run build

# Test
bun run test

# Lint & Format
bun run lint
bun run format

🤝 Contributing

Contributions are welcome! Please:

  1. Open an issue to discuss changes
  2. Fork and create a feature branch
  3. Run bun run lint and bun run test
  4. Submit a pull request

📄 License

MIT © Sylphx

