Exocortex: an MCP server that gives your AI persistent memory across ALL your projects.

  • Semantic search recalls past solutions by meaning
  • Knowledge graph connects related insights
  • 100% local, no API keys

Went deep into cognitive science for this:

  • 🔥 Frustration Indexing (Somatic Marker Hypothesis) → Painful debugging sessions get PRIORITY recall. Those 3-hour bugs? Never forgotten.
  • 😴 Sleep/Dream Mechanism → Background consolidation links & deduplicates memories. Like how human sleep organizes memory.

Tech stack:

  • MCP (Cursor/Claude native)
  • KùzuDB (graph + vector in one embedded DB)
  • fastembed (local embeddings)

Exocortex 🧠

"Extend your mind." - Your External Brain

Japanese version available here (日本語版はこちら)


Exocortex is a local MCP (Model Context Protocol) server that acts as a developer's "second brain."

It persists development insights, technical decisions, and troubleshooting records, allowing AI assistants (like Cursor) to retrieve contextually relevant memories when needed.

Why Exocortex?

🌐 Cross-Project Knowledge Sharing

Unlike tools that store data per-repository (e.g., .serena/ in each project), Exocortex uses a single, centralized knowledge store.

Traditional approach (per-repository):
project-A/.serena/    ← isolated knowledge
project-B/.serena/    ← isolated knowledge
project-C/.serena/    ← isolated knowledge

Exocortex approach (centralized):
~/.exocortex/data/    ← shared knowledge across ALL projects
    ├── Insights from project-A
    ├── Insights from project-B
    └── Insights from project-C
    Cross-project learning!

Benefits:

  • 🔄 Knowledge Transfer: Lessons learned in one project are immediately available in others
  • 🏷️ Tag-based Discovery: Find related memories across projects via shared tags
  • 📈 Cumulative Learning: Your external brain grows smarter over time, not per project
  • 🔍 Pattern Recognition: Discover common problems and solutions across your entire development history
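For example, an insight captured while working in project-A is immediately available when you hit a similar problem in project-B later, because both calls go to the same central store. The sketch below uses the exo_store_memory parameters shown later in this README; the query parameter name for exo_recall_memories is an assumption for illustration.

# In project-A: store a hard-won insight
exo_store_memory(
    content="Connection pool exhaustion fixed by lowering the per-worker pool size",
    context_name="project-a",
    tags=["database", "performance"]
)

# Weeks later, in project-B: the same insight surfaces via semantic search
# (parameter name `query` is assumed here)
exo_recall_memories(query="database connections keep timing out under load")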

Features

  • 🔒 Fully Local: All data and AI processing stays on your machine. Privacy guaranteed.
  • 🔍 Semantic Search: Find memories by meaning, not just keywords.
  • 🕸️ Knowledge Graph: Maintains relationships between projects, tags, and memories with explicit links.
  • 🔗 Memory Links: Connect related memories to build a traversable knowledge network.
  • Lightweight & Fast: Uses embedded KùzuDB and lightweight fastembed models.
  • 🧠 Memory Dynamics: Smart recall based on recency and frequency—frequently accessed memories surface higher.
  • 🔥 Frustration Indexing: Prioritize "painful memories"—debugging nightmares get boosted in search results.
  • 🖥️ Web Dashboard: Beautiful cyberpunk-style UI for browsing memories, monitoring health, and visualizing the knowledge graph.

📚 Usage Guide

See the full usage guide

  • Tool reference with use cases
  • Practical workflows
  • Prompting tips
  • Tips & Tricks

Installation

# Clone the repository
git clone https://github.com/fuwasegu/exocortex.git
cd exocortex

# Install dependencies with uv
uv sync

Usage

Starting the Server

uv run exocortex

Cursor Configuration

Add the following to your ~/.cursor/mcp.json:

Option 1: uvx

Auto-updates when the uvx cache expires; no manual git pull needed.

{
  "mcpServers": {
    "exocortex": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/fuwasegu/exocortex", "exocortex"]
    }
  }
}

Option 2: Local Installation

For development or customization.

{
  "mcpServers": {
    "exocortex": {
      "command": "uv",
      "args": ["--directory", "/path/to/exocortex", "run", "exocortex"]
    }
  }
}

Note: Your data is stored in ~/.exocortex/ and is preserved regardless of which option you choose.

Option 3: Proxy Mode (Multiple Cursor Windows)

Use this method if you want to use Exocortex from multiple Cursor windows simultaneously.

KùzuDB doesn't support concurrent writes from multiple processes. With the stdio approach, each Cursor instance spawns its own server process, so lock conflicts occur. Proxy mode automatically starts a single SSE server in the background, and each Cursor instance connects to it through a proxy.

{
  "mcpServers": {
    "exocortex": {
      "command": "uvx",
      "args": [
        "--from", "git+https://github.com/fuwasegu/exocortex",
        "exocortex",
        "--mode", "proxy",
        "--ensure-server"
      ]
    }
  }
}

How it works:

  1. First Cursor starts Exocortex → SSE server automatically starts in background
  2. Subsequent Cursors connect to the existing SSE server
  3. All Cursors share the same server → No lock conflicts!

Note: No manual server startup required. The --ensure-server option automatically starts the server if it's not running.
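For the curious, here is a minimal conceptual sketch (in Python) of what --ensure-server does, assuming the default host and port from the Environment Variables table below; the actual implementation in Exocortex may differ.

# Conceptual sketch only, not the actual Exocortex implementation
import socket
import subprocess

def ensure_sse_server(host: str = "127.0.0.1", port: int = 8765) -> None:
    """Start the shared SSE server only if nothing is listening yet."""
    try:
        # A successful connection means a server is already running; reuse it.
        with socket.create_connection((host, port), timeout=1):
            return
    except OSError:
        # Nothing is listening: spawn one detached SSE server shared by all Cursor windows.
        subprocess.Popen(
            ["uv", "run", "exocortex", "--transport", "sse", "--port", str(port)],
            start_new_session=True,
        )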

Option 4: Manual Server Management (Advanced)

If you prefer to manage the server manually:

Step 1: Start the server

# Start the server in a terminal (can also run in background)
uv run --directory /path/to/exocortex exocortex --transport sse --port 8765

Step 2: Configure Cursor

{
  "mcpServers": {
    "exocortex": {
      "url": "http://127.0.0.1:8765/mcp/sse"
    }
  }
}

Bonus: With this setup, you can also access the web dashboard at http://127.0.0.1:8765/

Tip: To auto-start the server on system boot, use launchd on macOS or systemd on Linux.

MCP Tools

Basic Tools

| Tool | Description |
| --- | --- |
| exo_ping | Health check to verify server is running |
| exo_store_memory | Store a new memory |
| exo_recall_memories | Recall relevant memories via semantic search |
| exo_list_memories | List stored memories with pagination |
| exo_get_memory | Get a specific memory by ID |
| exo_delete_memory | Delete a memory |
| exo_get_stats | Get statistics about stored memories |

Advanced Tools

| Tool | Description |
| --- | --- |
| exo_link_memories | Create a link between two memories |
| exo_unlink_memories | Remove a link between memories |
| exo_update_memory | Update content, tags, or type of a memory |
| exo_explore_related | Discover related memories via graph traversal |
| exo_get_memory_links | Get all outgoing links from a memory |
| exo_trace_lineage | 🕰️ Trace the evolution/lineage of a memory (temporal reasoning) |
| exo_curiosity_scan | 🤔 Scan for contradictions, outdated info, and knowledge gaps |
| exo_analyze_knowledge | Analyze knowledge base health and get improvement suggestions |
| exo_sleep | Trigger background consolidation (deduplication, orphan rescue, auto-linking) |
| exo_consolidate | Extract abstract patterns from memory clusters |
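To give a feel for how the advanced tools compose, here is a short example in the same pseudo-call style used throughout this README. The exo_link_memories arguments match the response examples below; the exo_explore_related parameter name is an assumption.

# Record that a new fix builds on an earlier insight
exo_link_memories(
    source_id="abc123",
    target_id="def456",
    relation_type="extends",
    reason="The new caching fix builds on the earlier profiling insight"
)

# Later, walk the graph outward from that memory (parameter name assumed)
exo_explore_related(memory_id="abc123")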

🤖 Knowledge Autonomy

Exocortex automatically improves your knowledge graph! When you store a memory, the system:

  1. Suggests Links: Finds similar existing memories and suggests connections
  2. Detects Duplicates: Warns if the new memory is too similar to an existing one
  3. Identifies Patterns: Recognizes when a success might resolve a past failure
// Example exo_store_memory response with suggestions
{
  "success": true,
  "memory_id": "...",
  "suggested_links": [
    {
      "target_id": "existing-memory-id",
      "similarity": 0.78,
      "suggested_relation": "extends",
      "reason": "High semantic similarity; may be an application of this insight"
    }
  ],
  "insights": [
    {
      "type": "potential_duplicate",
      "message": "This memory is very similar (94%) to an existing one.",
      "suggested_action": "Use exo_update_memory instead"
    }
  ]
}

🧠 Automatic Memory Consolidation

Just as human sleep consolidates memories, Exocortex prompts the AI to organize its knowledge after storing a memory.

When exo_store_memory succeeds, the response includes next_actions that guide the AI to:

  1. Link high-similarity memories (similarity ≥ 0.7)
  2. Handle duplicates and contradictions
  3. Run periodic health checks (every 10 memories)
// Example response with next_actions
{
  "success": true,
  "memory_id": "abc123",
  "summary": "...",
  "consolidation_required": true,
  "consolidation_message": "🧠 Memory stored. 2 consolidation action(s) required.",
  "next_actions": [
    {
      "action": "link_memories",
      "priority": "high",
      "description": "Link to 2 related memories",
      "details": [
        {
          "call": "exo_link_memories",
          "args": {
            "source_id": "abc123",
            "target_id": "def456",
            "relation_type": "extends",
            "reason": "High semantic similarity"
          }
        }
      ]
    },
    {
      "action": "analyze_health",
      "priority": "low",
      "description": "Run knowledge base health check",
      "details": { "call": "exo_analyze_knowledge" }
    }
  ]
}

Expected Flow:

User: "Remember this insight"
AI: exo_store_memory() → receives next_actions
AI: exo_link_memories() for each high-priority action
AI: "Stored and linked to 2 related memories."

⚠️ Important Limitation: Execution of next_actions is at the AI agent's discretion. While the server strongly instructs consolidation via SERVER_INSTRUCTIONS and consolidation_required: true, execution is NOT 100% guaranteed. This is an inherent limitation of the MCP protocol—servers can only suggest, not force actions. In practice, most modern AI assistants follow these instructions, but they may be skipped during complex conversations or when competing with other tasks.

Relation Types (used by exo_link_memories)

| Type | Description |
| --- | --- |
| related | Generally related memories |
| supersedes | This memory updates/replaces the target |
| contradicts | This memory contradicts the target |
| extends | This memory extends/elaborates the target |
| depends_on | This memory depends on the target |
| evolved_from | This memory evolved from the target (temporal reasoning) |
| rejected_because | This memory was rejected due to the target |
| caused_by | This memory was caused by the target |

Temporal Reasoning with exo_trace_lineage

Trace the lineage of decisions and knowledge over time. Understand WHY something became the way it is.

| Parameter | Description | Example |
| --- | --- | --- |
| memory_id | Starting memory ID | "abc123" |
| direction | "backward" (find ancestors) or "forward" (find descendants) | "backward" |
| relation_types | Relations to follow | ["evolved_from", "caused_by"] |
| max_depth | Maximum traversal depth | 10 (default) |

Example: Understanding Why a Decision Was Made

Current Architecture Decision
    ▼ trace_lineage(direction="backward")
    ├─ [depth 1] Previous Design (evolved_from)
    │      "Switched from monolith to microservices"
    └─ [depth 2] Original Problem (caused_by)
           "Scaling issues with single database"

Usage:

AI: exo_trace_lineage(memory_id="current-decision", direction="backward")
Result: Shows the evolution chain of how the current decision came to be

Use Cases:

  • 🔍 Architecture archaeology: "Why did we choose this approach?"
  • 🐛 Root cause analysis: "What led to this bug?"
  • 📚 Knowledge evolution: "How has our understanding changed?"

Curiosity Engine with exo_curiosity_scan

The Curiosity Engine actively questions your knowledge base like a curious human would. It scans for inconsistencies, finds unlinked memories, and generates questions to improve knowledge quality.

What it detects:

| Category | Description | Example |
| --- | --- | --- |
| 🔴 Contradictions | Memories that conflict with each other | Success vs Failure on same topic |
| 📅 Outdated Info | Old knowledge that may need review | Memories superseded but not linked |
| 🔗 Suggested Links | Unlinked memories that should be connected | Memories sharing tags, context, or high similarity |
| Questions | Human-like questions about your knowledge | "Is this still valid?" |

Suggested Links Detection Strategies:

| Strategy | Confidence | Description |
| --- | --- | --- |
| Tag Sharing | High (0.7+) | Memories sharing 2+ tags are likely related |
| Context Sharing | Medium (0.6) | Same project + same type (insight/decision) |
| Semantic Similarity | High (0.7+) | High vector similarity (>70%) but not linked |
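As a rough illustration, the tag-sharing strategy can be sketched as follows, assuming memories are plain dicts with id and tags fields; the real scanner works against the KùzuDB graph and also applies the context and semantic strategies.

# Sketch: propose 'related' links for unlinked memory pairs sharing 2+ tags
from itertools import combinations

def suggest_tag_links(memories, existing_links, min_shared=2):
    suggestions = []
    linked = {frozenset(pair) for pair in existing_links}
    for a, b in combinations(memories, 2):
        shared = set(a["tags"]) & set(b["tags"])
        if len(shared) >= min_shared and frozenset((a["id"], b["id"])) not in linked:
            suggestions.append({
                "source_id": a["id"],
                "target_id": b["id"],
                "suggested_relation": "related",
                "reason": f"Share {len(shared)} tags: {', '.join(sorted(shared))}",
                "confidence": 0.7,  # 'High (0.7+)' per the table above; exact scoring is assumed
            })
    return suggestions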

Example Output:

{
  "contradictions": [
    {
      "memory_a_summary": "Caching approach works perfectly",
      "memory_b_summary": "Caching approach failed badly",
      "reason": "success vs failure on same topic",
      "confidence": 0.85
    }
  ],
  "suggested_links": [
    {
      "source_summary": "Database optimization technique",
      "target_summary": "Query performance improvement",
      "reason": "Share 3 tags: database, performance, optimization",
      "link_type": "tag_shared",
      "confidence": 0.8,
      "suggested_relation": "related"
    }
  ],
  "outdated_knowledge": [],
  "questions": [
    "🤔 These memories seem to contradict. Are both still valid?",
    "🔗 Found unlinked related memories. Link them to strengthen the graph?"
  ],
  "next_actions": [
    {
      "action": "create_link",
      "priority": "medium",
      "details": {
        "call": "exo_link_memories",
        "args": {
          "source_id": "...",
          "target_id": "...",
          "relation_type": "related"
        }
      }
    }
  ]
}

Usage:

AI: exo_curiosity_scan(context_filter="my-project")
Result: Report of issues, suggested links, and questions
AI: Executes next_actions to create links
Result: Knowledge graph becomes richer and more connected!

Use Cases:

  • 🔍 Knowledge audit: "Are there any contradictions in my knowledge?"
  • 🔗 Graph enrichment: "Find unlinked memories that should be connected"
  • 🧹 Quality maintenance: "What needs to be cleaned up?"
  • 💡 Discovery: "What questions should I be asking about my knowledge?"

Optional: Enhanced Sentiment Analysis

For higher accuracy contradiction detection in exo_curiosity_scan, you can enable BERT-based sentiment analysis:

# Local installation
pip install exocortex[sentiment]
# or
uv sync --extra sentiment
// mcp.json with sentiment support
{
  "mcpServers": {
    "exocortex": {
      "command": "uvx",
      "args": [
        "--from", "exocortex[sentiment] @ git+https://github.com/fuwasegu/exocortex",
        "exocortex", "--mode", "proxy", "--ensure-server"
      ]
    }
  }
}

Note: Adds ~2.5GB of dependencies (PyTorch + Transformers). The default keyword-based detection works well for most cases and supports both English and Japanese.

Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| EXOCORTEX_DATA_DIR | ~/.exocortex | Database storage directory |
| EXOCORTEX_LOG_LEVEL | INFO | Logging level (DEBUG/INFO/WARNING/ERROR) |
| EXOCORTEX_EMBEDDING_MODEL | sentence-transformers/all-MiniLM-L6-v2 | Embedding model to use |
| EXOCORTEX_TRANSPORT | stdio | Transport mode (stdio/sse/streamable-http) |
| EXOCORTEX_HOST | 127.0.0.1 | Server bind address (for HTTP modes) |
| EXOCORTEX_PORT | 8765 | Server port number (for HTTP modes) |

Architecture

Stdio Mode (Default)

┌─────────────────┐     stdio      ┌─────────────────────────────┐
│  AI Assistant   │ ◄──────────► │       Exocortex MCP         │
│   (Cursor)      │    MCP        │                             │
└─────────────────┘               │  ┌─────────┐  ┌──────────┐  │
                                  │  │ Tools   │  │ Embedding│  │
                                  │  │ Handler │  │  Engine  │  │
                                  │  └────┬────┘  └────┬─────┘  │
                                  │       │            │        │
                                  │  ┌────▼────────────▼─────┐  │
                                  │  │       KùzuDB          │  │
                                  │  │  (Graph + Vector)     │  │
                                  │  └────────────────────────┘  │
                                  └─────────────────────────────┘

HTTP/SSE Mode (Multiple Instances)

┌─────────────────┐                
│  Cursor #1      │──────┐         
└─────────────────┘      │         
                         │  HTTP   ┌─────────────────────────────┐
┌─────────────────┐      ├────────►│       Exocortex MCP         │
│  Cursor #2      │──────┤   SSE   │      (Standalone)           │
└─────────────────┘      │         │                             │
                         │         │  ┌─────────┐  ┌──────────┐  │
┌─────────────────┐      │         │  │ Tools   │  │ Embedding│  │
│  Cursor #3      │──────┘         │  │ Handler │  │  Engine  │  │
└─────────────────┘                │  └────┬────┘  └────┬─────┘  │
                                   │       │            │        │
                                   │  ┌────▼────────────▼─────┐  │
                                   │  │       KùzuDB          │  │
                                   │  │  (Graph + Vector)     │  │
                                   │  └────────────────────────┘  │
                                   └─────────────────────────────┘

Knowledge Graph Structure

Memory ─── ORIGINATED_IN ──► Context (project)
Memory ─── TAGGED_WITH ────► Tag
Memory ─── RELATED_TO ─────► Memory (with relation type)
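Since everything lives in one embedded KùzuDB database, the graph can also be inspected directly with the kuzu Python client. Below is a read-only sketch, assuming the database lives under ~/.exocortex/data and that Memory nodes expose a summary property (both are assumptions based on this README); stop the MCP server first, since KùzuDB does not handle concurrent access from multiple processes.

import os
import kuzu

# Inspect a few RELATED_TO edges in the shared Exocortex graph (run with the server stopped)
db = kuzu.Database(os.path.expanduser("~/.exocortex/data"))
conn = kuzu.Connection(db)
result = conn.execute(
    "MATCH (m:Memory)-[r:RELATED_TO]->(n:Memory) "
    "RETURN m.summary, n.summary LIMIT 10"
)
while result.has_next():
    source_summary, target_summary = result.get_next()
    print(f"{source_summary}  ->  {target_summary}")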

Memory Dynamics

Exocortex implements a Memory Dynamics system inspired by human cognition. Memories have "lifespan" and "strength" that affect search results:

Hybrid Scoring Formula:

Score = (S_vec × w_vec) + (S_recency × w_recency) + (S_freq × w_freq) + (S_frustration × w_frustration)

| Component | Description | Default Weight |
| --- | --- | --- |
| S_vec | Vector similarity (semantic relevance) | 0.50 |
| S_recency | Recency score (exponential decay: e^(-λ×Δt)) | 0.20 |
| S_freq | Frequency score (log scale: log(1 + count)) | 0.15 |
| S_frustration | Frustration score (painful memory boost) | 0.15 |
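As a concrete reference, the formula can be sketched in Python as follows; the weights come from the table above, while the decay constant and frequency normalization are assumed example values, not Exocortex's actual internals.

import math
import time

# Default weights from the table above
WEIGHTS = {"vec": 0.50, "recency": 0.20, "freq": 0.15, "frustration": 0.15}
DECAY_LAMBDA = 0.01  # decay per hour; assumed value for illustration

def hybrid_score(similarity, last_accessed_at, access_count, frustration, now=None):
    """Combine semantic similarity with recency, frequency, and frustration boosts."""
    now = now if now is not None else time.time()
    delta_hours = max(0.0, (now - last_accessed_at) / 3600.0)
    s_recency = math.exp(-DECAY_LAMBDA * delta_hours)              # e^(-λ×Δt)
    s_freq = min(1.0, math.log1p(access_count) / math.log1p(100))  # log(1 + count), normalization assumed
    return (
        WEIGHTS["vec"] * similarity
        + WEIGHTS["recency"] * s_recency
        + WEIGHTS["freq"] * s_freq
        + WEIGHTS["frustration"] * frustration
    )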

How it works:

  • Every time a memory is recalled, its last_accessed_at and access_count are updated
  • Frequently accessed memories gain higher S_freq scores
  • Recently accessed memories gain higher S_recency scores
  • Painful memories (debugging nightmares) get higher S_frustration scores for priority
  • Old, unused memories naturally decay but remain searchable

This creates an intelligent recall system where:

  • 📈 Important memories (frequently used) stay prominent
  • ⏰ Recent context is prioritized
  • 🔥 Painful memories are never forgotten—to avoid repeating mistakes
  • 🗃️ Old memories gracefully fade but don't disappear

Frustration Indexing (Somatic Marker Hypothesis)

Based on the neuroscience insight that "painful memories are prioritized in decision-making", Exocortex automatically boosts the importance of debugging struggles and hard-won solutions.

Usage:

# Explicitly mark as a painful memory
exo_store_memory(
    content="Spent 3 hours debugging KùzuDB lock issues. Root cause was...",
    context_name="exocortex",
    tags=["bug", "kuzu"],
    is_painful=True,          # ← Important!
    time_cost_hours=3.0       # ← Record time spent
)

Auto-detection: Even without is_painful, frustration level is auto-detected from content:

  • 😓 Low (0.2-0.4): "tricky", "weird", "workaround"
  • 🔥 Medium (0.4-0.6): "finally", "bug", "hours"
  • 🔥🔥 High (0.6-0.8): "stuck", "frustrated"
  • 🔥🔥🔥 Extreme (0.8-1.0): "nightmare", "impossible", "hell"
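A minimal sketch of this keyword-based detection is shown below; the real detector also handles Japanese keywords and may weigh signals differently, so treat the exact scores as assumed values within the ranges above.

# Sketch: map trigger words from the tiers above to a frustration score
FRUSTRATION_TIERS = [
    (0.9, ["nightmare", "impossible", "hell"]),   # Extreme (0.8-1.0)
    (0.7, ["stuck", "frustrated"]),               # High (0.6-0.8)
    (0.5, ["finally", "bug", "hours"]),           # Medium (0.4-0.6)
    (0.3, ["tricky", "weird", "workaround"]),     # Low (0.2-0.4)
]

def detect_frustration(content: str) -> float:
    text = content.lower()
    for score, keywords in FRUSTRATION_TIERS:
        if any(word in text for word in keywords):
            return score
    return 0.0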

Search results:

{
  "memories": [
    {
      "id": "...",
      "summary": "KùzuDB lock issue resolution",
      "frustration_score": 0.85,
      "pain_indicator": "🔥🔥🔥",   // ← Visual emphasis
      "time_cost_hours": 3.0
    }
  ]
}

Sleep/Dream Mechanism

Like human sleep consolidates memories, Exocortex has a background consolidation process that organizes your knowledge graph:

┌─────────────────────────────────────────────────────────────┐
│                    exo_sleep() called                        │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│              Dream Worker (Detached Process)                 │
│  ┌──────────────────────────────────────────────────────┐   │
│  │ 1. Deduplication                                      │   │
│  │    - Find memories with similarity >= 95%             │   │
│  │    - Link newer → older with 'related' relation       │   │
│  ├──────────────────────────────────────────────────────┤   │
│  │ 2. Orphan Rescue                                      │   │
│  │    - Find memories with no tags and no links          │   │
│  │    - Link to most similar memory with 'related'       │   │
│  ├──────────────────────────────────────────────────────┤   │
│  │ 3. Auto-linking (High Confidence Only)                │   │
│  │    - Tag sharing: 3+ shared tags → 'related'          │   │
│  │    - Semantic: 80%+ similarity → 'related'            │   │
│  ├──────────────────────────────────────────────────────┤   │
│  │ 4. Pattern Mining (Phase 2)                           │   │
│  │    - Extract common patterns from memory clusters     │   │
│  └──────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
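A rough sketch of the deduplication step (1) follows, assuming memories are dicts with id, content, and created_at fields and using fastembed with the default embedding model from the Environment Variables section; the real Dream Worker reads and writes KùzuDB directly.

from itertools import combinations

import numpy as np
from fastembed import TextEmbedding

def find_duplicates(memories, threshold=0.95):
    """Return (newer_id, older_id) pairs whose contents are near-identical."""
    model = TextEmbedding("sentence-transformers/all-MiniLM-L6-v2")
    vectors = list(model.embed([m["content"] for m in memories]))
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(memories), 2):
        va, vb = vectors[i], vectors[j]
        similarity = float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))
        if similarity >= threshold:
            newer, older = sorted((a, b), key=lambda m: m["created_at"], reverse=True)
            pairs.append((newer["id"], older["id"]))  # link newer -> older with 'related'
    return pairs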

Usage:

AI: "I've completed the task. Let me consolidate the knowledge base."
AI: exo_sleep() → Worker spawns in background
AI: "Consolidation process started. Your knowledge graph will be optimized."

Key Features:

  • 🔄 Non-blocking: Returns immediately, consolidation runs in background
  • 🔐 Safe: Uses file locking to avoid conflicts with active sessions
  • 📊 Logs: Enable logging with enable_logging=True to track progress

⚠️ Warning for Proxy Mode: When using proxy mode (--mode proxy), exo_sleep is NOT recommended. In proxy mode, the SSE server maintains a constant connection to KùzuDB. The Dream Worker spawned in the background cannot access the database and will timeout or cause conflicts.

Workarounds:

  • Don't use exo_sleep in proxy mode
  • Use it in stdio mode before ending a session
  • Manually stop the SSE server before running

Pattern Abstraction (Concept Formation)

Exocortex can extract abstract patterns from concrete memories, creating a hierarchical knowledge structure:

┌─────────────────────────────────────────────────────────────────┐
│                         Pattern Layer                            │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │ "Always use connection pooling for database connections"   │  │
│  │ Confidence: 0.85 | Instances: 5                           │  │
│  └───────────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────────┘
                    ▲ INSTANCE_OF    ▲ INSTANCE_OF
       ┌────────────┴────────────────┴────────────┐
┌──────┴──────┐  ┌───────────┐  ┌───────────┐  ┌──┴────────┐
│ Memory #1   │  │ Memory #2 │  │ Memory #3 │  │ Memory #4 │
│ PostgreSQL  │  │ MySQL     │  │ Redis     │  │ MongoDB   │
│ pooling fix │  │ pool size │  │ conn reuse│  │ pool leak │
└─────────────┘  └───────────┘  └───────────┘  └───────────┘
                     Memory Layer (Concrete)

Usage:

AI: exo_consolidate(tag_filter="database") → Extracts patterns from database-related memories
Result: "Created 2 patterns from 8 memories"

Benefits:

  • 🎯 Generalization: Discover rules that apply across specific cases
  • 🔍 Meta-learning: Find what works (and what doesn't) across projects
  • 📈 Confidence Building: Patterns get stronger as more instances are linked

Web Dashboard

Exocortex includes a beautiful web dashboard for visualizing and managing your knowledge base.

Accessing the Dashboard

No terminal commands needed! When using proxy mode (--mode proxy --ensure-server) with Cursor, the SSE server is automatically running in the background.

Just open this in your browser:

http://127.0.0.1:8765/
Cursor starts
Proxy mode → SSE server auto-starts (port 8765)
├─ MCP: http://127.0.0.1:8765/mcp/sse ← Used by Cursor
└─ Dashboard: http://127.0.0.1:8765/ ← Just open in browser!

Starting the Server Manually

If you want to view the dashboard without using Cursor:

# Start SSE server (includes dashboard)
uv run exocortex --transport sse --port 8765

URLs:

  • Dashboard: http://127.0.0.1:8765/
  • MCP SSE: http://127.0.0.1:8765/mcp/sse

Dashboard Features

| Tab | Description |
| --- | --- |
| Overview | Statistics, contexts, tags, and knowledge base health score |
| Memories | Browse, filter, and search memories with pagination |
| Dream Log | Real-time streaming log of background consolidation processes |
| Graph | Visual knowledge graph showing memory connections |

Screenshots

Overview Tab

  • Total memories count by type (Insights, Successes, Failures, Decisions, Notes)
  • Context and tag clouds for quick navigation
  • Health score with improvement suggestions

Memories Tab

  • Filter by type (Insight/Success/Failure/Decision/Note)
  • Filter by context (project)
  • Click any memory to see full details and links

Graph Tab

  • Interactive node visualization
  • Color-coded by memory type:
    • 🔵 Cyan: Insights
    • 🟠 Orange: Decisions
    • 🟢 Green: Successes
    • 🔴 Red: Failures
  • Lines show RELATED_TO connections between memories

Standalone Dashboard Mode

You can also run the dashboard separately on a different port:

uv run exocortex --mode dashboard --dashboard-port 8766

Note: In standalone mode, the dashboard connects to the same database but doesn't include the MCP server.


Development

# Install dependencies
uv sync

# Run tests
uv run pytest

# Run with debug logging
EXOCORTEX_LOG_LEVEL=DEBUG uv run exocortex

License

MIT License
