
🧠 NeuroVerse

📦 Install from npm | 🐙 GitHub Repository

Your AI agent only speaks English. NeuroVerse fixes that.
Your AI agent forgets everything. NeuroVerse remembers.
Your AI agent might do something dangerous. NeuroVerse stops that.
Your AI agent is locked to one model. NeuroVerse routes to the best one.

Multilingual intelligence + memory + safety + voice layer for autonomous AI agents.


🚀 What's New in v4.1

  • OpenRouter Reasoning: Integrated the stepfun/step-3.5-flash:free model for high-performance analytical tasks. Use the new neuroverse_reason tool for deep thinking.
  • Reasoning Tokens: Real-time tracking of reasoning tokens for every request.
  • Voice Layer (v2.0): Built-in support for Whisper STT and Coqui TTS.

🚀 What is NeuroVerse?

Every time you start a new chat with Cursor, VS Code Copilot, or any MCP-compatible AI agent, it starts from zero — no memory, no safety, no understanding of your language. NeuroVerse is an MCP server that gives your agents:

| Feature | Description |
| --- | --- |
| 🌐 Multilingual Intelligence | Understands mixed Indian languages — Tamil, Hindi, Telugu, Kannada, Malayalam, Bengali + English. Code-switching? No problem. |
| 🎙️ Voice Layer | STT via Whisper and TTS via Coqui. Transcribe user audio and synthesize agent responses. |
| 🧠 Intent Extraction | LLM-first structured intent extraction with deterministic rule-based fallback. Never misses a command. |
| 💾 Tiered Memory | Short-term (session), Episodic (recent), Semantic (long-term facts) — all with importance scoring. |
| 🛡️ 3-Layer Safety (Kavach) | Keyword blocklist → Intent risk classifier → LLM judge. Blocks DROP DATABASE before it's too late. |
| 🤖 Multi-Model Router (Marga) | OpenAI · Anthropic · Sarvam AI · Ollama · OpenRouter — routes each task to the best model automatically. |
| 🔗 Agent-to-Agent (Setu) | REST+JSON agent registry with automatic fallback. Agents calling agents calling agents. |
| Async Everything | FastAPI + asyncpg + httpx. Sub-millisecond safety checks. Zero blocking. |

⚡ NeuroVerse is a modular intelligence layer — not a monolith. Plug in what you need. Ignore what you don't.



🚀 Quick Start

1. Install

Option A: npm (recommended) — use anywhere

npm install neuroverse

Option B: From source (Python)

git clone https://github.com/joshua400/neuroverse.git
cd neuroverse
python -m pip install -e ".[dev]"

💡 Tip: If you installed via npm, the path is node_modules/neuroverse/dist/index.js. If from source, use the absolute path to your cloned directory.

2. Add NeuroVerse to your MCP config

NeuroVerse is a standard MCP server (stdio). Add it to your host's config:

Cursor / VS Code Copilot / Claude Desktop (npm)

{
  "mcpServers": {
    "neuroverse": {
      "command": "npx",
      "args": ["neuroverse"]
    }
  }
}

From source (Python)

{
  "mcpServers": {
    "neuroverse": {
      "command": "python",
      "args": ["mcp/server.py"],
      "cwd": "/path/to/neuroverse"
    }
  }
}

3. Tell your agent to use NeuroVerse

Add this to your agent's rules file (.md, .cursorrules, system prompt, etc.):

## NeuroVerse Integration
- Use `neuroverse_process` to handle any user request — it auto-detects language, extracts intent, checks safety, and executes.
- Use `neuroverse_reason` for complex tasks requiring analytical reasoning (powered by OpenRouter).
- Use `neuroverse_store` / `neuroverse_recall` for persistent context across sessions.
- Use `neuroverse_execute` for any potentially dangerous action — it will block destructive operations.

That's it. Two commands your agent needs to know:

| Command | When | What happens |
| --- | --- | --- |
| neuroverse_process(text, user_id) | Any user request | Detects language, extracts intent, safety-checks, executes |
| neuroverse_store(user_id, intent, ...) | End of work | Saves context for next session |

Next session, your agent picks up exactly where it left off — like it never forgot.

Requirements

  • npm edition: Node.js 18+ (zero database deps — uses JSON files)
  • Python edition: Python 3.10+ + PostgreSQL (for persistent memory)

🤔 Why NeuroVerse?

| Without NeuroVerse | With NeuroVerse |
| --- | --- |
| Agent only understands English | Agent understands Tamil, Hindi, Telugu, Kannada + English code-switching |
| "anna file ah csv convert pannu" → ❌ error | "anna file ah csv convert pannu" → ✅ converts file to CSV |
| Every session starts from zero | Agent remembers what it did — across sessions, across agents |
| DROP DATABASE → 💀 your data is gone | DROP DATABASE → 🛡️ blocked in < 1ms, zero tokens |
| Locked to one LLM provider | Routes to the best model for each task automatically |
| Two agents = chaos | Agent A hands off to Agent B seamlessly |

Token Efficiency

NeuroVerse's safety layer runs at zero token cost — pure regex and rule matching, no LLM calls wasted:

| Safety Approach | Cost per Check | Latency |
| --- | --- | --- |
| LLM-based safety | 500–2,000 tokens | 1–5 seconds |
| Embedding-based | 100–500 tokens | 200–500 ms |
| NeuroVerse Kavach | 0 tokens | < 1 ms |

Over 100 tool calls per session, that's 50,000–200,000 tokens saved compared to LLM-based safety.
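The savings figure is simple arithmetic over the table above; a quick sketch (the call count and token ranges are the ones quoted, not measurements):

```python
# Token-savings arithmetic for 100 tool calls per session.
CALLS_PER_SESSION = 100
LLM_TOKENS_PER_CHECK = (500, 2_000)   # range quoted for LLM-based safety
KAVACH_TOKENS_PER_CHECK = 0           # pure regex/rule matching, no LLM call

savings_low = CALLS_PER_SESSION * (LLM_TOKENS_PER_CHECK[0] - KAVACH_TOKENS_PER_CHECK)
savings_high = CALLS_PER_SESSION * (LLM_TOKENS_PER_CHECK[1] - KAVACH_TOKENS_PER_CHECK)
print(f"{savings_low:,}-{savings_high:,} tokens saved per session")
```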


⚙️ How It Works

User Input (any language)
   ┌────┴────┐
   │  Vani   │ ← Language detection + keyword normalisation
   │ (भाषा)  │   Tamil/Hindi/Telugu → normalised internal format
   └────┬────┘
   ┌────┴────┐
   │  Bodhi  │ ← LLM intent extraction + rule-based fallback
   │ (बोधि)  │   Returns structured JSON with confidence
   └────┬────┘
   ┌────┴────┐
   │ Kavach  │ ← 3-layer safety: blocklist → risk → LLM judge
   │ (कवच)   │   Blocks dangerous actions at zero token cost
   └────┬────┘
   ┌────┴────┐
   │  Marga  │ ← Routes to best model (OpenAI/Anthropic/Sarvam/Ollama)
   │ (मार्ग)  │   Based on task type: multilingual/reasoning/local
   └────┬────┘
   ┌────┴────┐
   │ Smriti  │ ← Stores/recalls from tiered memory
   │ (स्मृति) │   Short-term + Episodic + Semantic (PostgreSQL)
   └────┬────┘
   Tool Execution + Response

🌐 Multilingual Intelligence — Vani

The Problem: Every MCP server speaks only English. 70% of India code-switches daily.

"anna indha file ah csv convert pannu"
"anna this file ah csv convert do"     ← keyword normalisation (not full translation)
Intent: convert_format { output_format: "csv" }

Hybrid Pipeline (Rule + LLM)

Input → Language Detect (langdetect) → Code-Switch Split → Keyword Normalise → Output

Key insight: Don't fully translate. Only normalise domain-critical keywords. The rest stays untouched — preserving context, tone, and nuance.
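The keyword-only approach fits in a few lines. This is an illustrative toy, not Vani's actual implementation; the mapping reuses entries from the language table and adds `indha → this` for the running example:

```python
import re

# Illustrative keyword map (a tiny subset; not the full tables NeuroVerse ships).
KEYWORD_MAP = {
    "pannu": "do", "maathru": "change", "anuppu": "send", "indha": "this",  # Tamil
    "karo": "do", "banao": "create", "bhejo": "send",                       # Hindi
    "cheyyi": "do", "pampu": "send", "chupinchu": "show",                   # Telugu
}

def normalise(text: str) -> str:
    """Replace only domain-critical keywords; every other token stays untouched."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        return KEYWORD_MAP.get(word.lower(), word)
    return re.sub(r"\S+", swap, text)

print(normalise("anna indha file ah csv convert pannu"))
# → "anna this file ah csv convert do"
```

Because only mapped tokens change, context, tone, and nuance survive exactly as the paragraph above describes.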

Supported Languages

| Language | Keywords Mapped | Example |
| --- | --- | --- |
| 🇮🇳 Tamil | pannu → do, maathru → change, anuppu → send | "file ah csv convert pannu" |
| 🇮🇳 Hindi | karo → do, banao → create, bhejo → send | "report banao sales ka" |
| 🇮🇳 Telugu | cheyyi → do, pampu → send, chupinchu → show | "data chupinchu" |
| 🇮🇳 Kannada | Support coming in v2 | |
| 🇬🇧 English | Pass-through | "convert json to csv" |

Code-Switch Detection

{
  "languages": ["ta", "en"],
  "confidence": 0.92,
  "is_code_switched": true,
  "original_text": "anna indha file ah csv convert pannu",
  "normalized_text": "anna this file ah csv convert do"
}
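A verdict like the one above can be produced by a per-token heuristic. The real detector uses langdetect; the sketch below uses a toy romanised-keyword set, so treat every name in it as illustrative:

```python
# Toy code-switch detector: flag Tamil if any known romanised hint appears,
# English if any other plain-ASCII token appears.
TAMIL_HINTS = {"pannu", "indha", "ah", "anna"}  # illustrative, not exhaustive

def detect_code_switch(text: str) -> dict:
    tokens = text.lower().split()
    has_ta = any(t in TAMIL_HINTS for t in tokens)
    has_en = any(t.isascii() and t not in TAMIL_HINTS for t in tokens)
    langs = (["ta"] if has_ta else []) + (["en"] if has_en else [])
    return {
        "languages": langs,
        "is_code_switched": len(langs) > 1,
        "original_text": text,
    }

verdict = detect_code_switch("anna indha file ah csv convert pannu")
```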

🧠 Intent Extraction — Bodhi

LLM-first. Rule-based fallback. Never fails.

LLM succeeds (confidence ≥ 0.5)?
   ├─ Yes → use LLM result
   └─ No  → rule-based parser (deterministic)

LLM Strategy

# Prompt to LLM:
"Extract structured intent from the following input.
 Return ONLY valid JSON: {intent, parameters, confidence}"

Rule-Based Fallback (7 patterns)

| Pattern | Intent | Trigger Keywords |
| --- | --- | --- |
| Format conversion | convert_format | convert, csv, json, excel, pdf |
| Summarisation | summarize | summarise, summary, brief, tldr |
| Report generation | generate_report | report, generate report |
| Deletion | delete_data | delete, remove, drop, clean |
| Data query | query_data | query, search, find, fetch, get |
| Communication | send_message | send, share, email, notify |
| Explanation | explain | explain, describe, what is, how to |

Output

{
  "intent": "convert_format",
  "parameters": { "input_format": "json", "output_format": "csv" },
  "confidence": 0.87,
  "source": "rule"
}

The key difference: the code decides — not the LLM. If the LLM fails, hallucinates, or returns garbage, the rule engine takes over. Deterministic. Reliable.
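The confidence gate can be sketched as below. The `llm_extract` callable and the 0.8 rule confidence are illustrative assumptions, not Bodhi's real API; the rule table is a subset of the seven patterns above:

```python
# Sketch of "the code decides": LLM-first, deterministic rules as the fallback.
RULES = [
    ("convert_format", ("convert", "csv", "json", "excel", "pdf")),
    ("summarize", ("summarise", "summary", "brief", "tldr")),
    ("delete_data", ("delete", "remove", "drop", "clean")),
]

def rule_based_intent(text: str) -> dict:
    lowered = text.lower()
    for intent, keywords in RULES:
        if any(k in lowered for k in keywords):
            return {"intent": intent, "confidence": 0.8, "source": "rule"}
    return {"intent": "unknown", "confidence": 0.0, "source": "rule"}

def extract_intent(text: str, llm_extract=None) -> dict:
    """Use the LLM result only when it exists and clears confidence >= 0.5."""
    if llm_extract is not None:
        try:
            result = llm_extract(text)
            if result.get("confidence", 0.0) >= 0.5:
                return {**result, "source": "llm"}
        except Exception:
            pass  # LLM crashed or returned garbage: fall through to rules
    return rule_based_intent(text)
```

Whatever the LLM does, the function always returns structured JSON, which is the reliability property the paragraph above is claiming.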


💾 Tiered Memory — Smriti

The Problem: Raw logs are useless. Storing everything wastes resources. No relevance scoring.

NeuroVerse's approach: Score → Filter → Compress → Store.

Three Tiers

| Tier | Storage | Lifetime | Use |
| --- | --- | --- | --- |
| Short-term | In-process dict | Current session | Active context, capped at 50 per user |
| Episodic | PostgreSQL | Recent actions | What the agent did recently |
| Semantic | PostgreSQL | Long-term facts | Persistent knowledge about users, projects, entities |

Importance Scoring

if importance_score >= 0.4:
    persist_to_database()   # worth remembering
else:
    skip()                  # noise

Only important memories survive. No bloat. No irrelevant recall.
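The threshold plus the 50-record short-term cap from the tier table can be combined into a small sketch. This is an illustrative toy, not Smriti's implementation; the in-memory list stands in for PostgreSQL:

```python
from collections import defaultdict, deque

SHORT_TERM_CAP = 50          # per-user cap from the tier table
IMPORTANCE_THRESHOLD = 0.4   # persistence cut-off described above

class ToyMemory:
    """Illustrative tier split: everything enters session memory,
    only high-importance records are persisted."""
    def __init__(self):
        self.short_term = defaultdict(lambda: deque(maxlen=SHORT_TERM_CAP))
        self.persistent = []  # stands in for the episodic/semantic tiers

    def store(self, user_id: str, record: dict, importance: float) -> bool:
        self.short_term[user_id].append(record)  # always available in-session
        if importance >= IMPORTANCE_THRESHOLD:
            self.persistent.append({**record, "importance": importance})
            return True   # worth remembering
        return False      # noise: never hits the database
```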

Context Compression

❌ Bad:  "The user asked about sales data three times in the last hour and seemed frustrated..."
✅ Good: { "intent": "sales_query", "frequency": 3, "sentiment": "frustrated" }

Structured JSON payloads, NOT raw text dumps. Compressed. Indexable. Queryable.
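A toy version of that compression, collapsing repeated raw events into one structured record (the field names follow the example above but are otherwise illustrative):

```python
# Collapse a run of raw events into the compact payload shown above.
def compress(events: list[dict]) -> dict:
    intents = [e["intent"] for e in events]
    return {
        "intent": max(set(intents), key=intents.count),  # most frequent intent
        "frequency": len(events),
        "sentiment": events[-1].get("sentiment", "neutral"),  # latest signal
    }
```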

Memory Schema (PostgreSQL)

CREATE TABLE memory_records (
    id          TEXT PRIMARY KEY,
    user_id     TEXT NOT NULL,
    tier        TEXT NOT NULL,          -- short_term | episodic | semantic
    intent      TEXT NOT NULL,
    language    TEXT DEFAULT 'en',
    data        JSONB DEFAULT '{}',     -- compressed structured payload
    importance  REAL DEFAULT 0.5,
    created_at  TIMESTAMPTZ DEFAULT NOW(),
    updated_at  TIMESTAMPTZ DEFAULT NOW()
);
-- Indexed: user_id, intent, tier

🛡️ Safety Layer — Kavach

"The shield that never sleeps."

Three Layers — Defense in Depth

Agent calls tool  →  MCP Server receives request
                ┌───────────┴───────────┐
                │   Layer 1: Blocklist  │  ← regex + keywords, < 0.1ms
                └───────────┬───────────┘
                            │ pass
                ┌───────────┴───────────┐
                │  Layer 2: Risk Score  │  ← intent → risk classification
                └───────────┬───────────┘
                            │ pass
                ┌───────────┴───────────┐
                │  Layer 3: LLM Judge   │  ← optional model-based check
                └───────────┬───────────┘
                            │ pass
                     Execute handler

Layer 1 — Rule-Based Blocklist (Zero Cost)

Runs inside the MCP server. Pure regex. No network. No tokens.

Blocked keywords:

delete_all_data, drop_database, drop_table, system_shutdown,
format_disk, rm -rf, truncate, shutdown, reboot, erase_all, destroy

Blocked patterns (regex):

DROP (DATABASE|TABLE|SCHEMA)
DELETE FROM *
TRUNCATE TABLE
FORMAT [drive]:
rm (-rf|--force)
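A Layer-1 check is just precompiled patterns and substring tests, which is why it costs nothing. A minimal sketch using a subset of the rules listed above (the function name and verdict shape are illustrative, not Kavach's actual API):

```python
import re

# Illustrative subset of the Layer-1 rules above.
BLOCKED_KEYWORDS = ("drop_database", "delete_all_data", "rm -rf", "truncate")
BLOCKED_PATTERNS = [
    re.compile(r"DROP\s+(DATABASE|TABLE|SCHEMA)", re.IGNORECASE),
    re.compile(r"DELETE\s+FROM", re.IGNORECASE),
    re.compile(r"rm\s+(-rf|--force)"),
]

def layer1_check(command: str) -> dict:
    lowered = command.lower()
    for keyword in BLOCKED_KEYWORDS:
        if keyword in lowered:
            return {"allowed": False, "blocked_by": "keyword", "reason": keyword}
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(command):
            return {"allowed": False, "blocked_by": "regex", "reason": pattern.pattern}
    return {"allowed": True}  # pass to Layer 2
```

No network, no tokens: the whole check is string matching inside the server process.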

Layer 2 — Intent Risk Classification

| Risk Level | Intents | Action |
| --- | --- | --- |
| 🟢 LOW | convert_format, summarize, generate_report, query_data, explain | ✅ Allow |
| 🟡 MEDIUM | send_message, unknown | ⚠️ Block in strict mode |
| 🔴 HIGH | delete_data | ❌ Block always |
| ⛔ CRITICAL | drop_database, system_shutdown | ❌ Block always |

Layer 3 — LLM Safety Judge (Optional)

If Layers 1–2 pass, optionally ask an LLM: "Is this safe?"

// LLM returns:
{ "safe": false, "reason": "This action would delete all user data." }

Safety Verdict

{
  "allowed": false,
  "risk_level": "critical",
  "reason": "Blocked keyword detected: 'drop_database'",
  "blocked_by": "rule"
}

Token Cost: Zero

Most AI safety:  Agent → "rm -rf /" → Safety LLM → 2,000 tokens burned
NeuroVerse:      Agent → "rm -rf /" → regex match → BLOCKED (0 tokens, < 1ms)

Strict Mode

# .env
SAFETY_STRICT_MODE=true    # Also blocks MEDIUM risk (unknown/send)
SAFETY_STRICT_MODE=false   # Only blocks HIGH and CRITICAL

🤖 Multi-Model Router — Marga

The Problem: Vendor lock-in. One model for everything. Overpaying.

NeuroVerse's approach: Route each task to the best model. Automatically.

Routing Logic

def route_task(task):
    if task.type == "multilingual":
        return sarvam_model        # Best for Indian languages
    elif task.type == "reasoning":
        return claude_or_openai    # Best for complex analysis
    elif task.type == "local":
        return ollama              # Free, on-device, private
    else:
        return best_available      # Fallback chain

Supported Providers

| Provider | Default Model | Best For | Cost |
| --- | --- | --- | --- |
| 🇮🇳 Sarvam AI | sarvam-2b-v0.5 | Indian languages, multilingual | Low |
| 🧩 OpenRouter | stepfun/step-3.5-flash:free | High-performance reasoning | Free |
| 🧠 Anthropic | claude-sonnet-4-20250514 | Reasoning, analysis | Medium |
| 🤖 OpenAI | gpt-4o | General tasks, code | Medium |
| 🦙 Ollama | llama3 | Local, private, offline | Free |

Benefits

| | Without Marga | With Marga |
| --- | --- | --- |
| Cost | Pay GPT-4 for everything | Use Ollama for simple tasks |
| Speed | Same latency for all tasks | Local models for fast tasks |
| Privacy | Everything goes to cloud | Sensitive data stays local |
| Vendor lock-in | Stuck with one provider | Switch anytime |

Fallback Chain

If your preferred provider is down or unconfigured:

OpenRouter → Anthropic → OpenAI → Sarvam → Ollama (local, always available)
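The chain is an ordered walk over providers; a minimal sketch, assuming a hypothetical `available` set of configured and reachable providers (Marga's real API may differ):

```python
# Fallback chain from the line above, ending at local Ollama.
FALLBACK_CHAIN = ["openrouter", "anthropic", "openai", "sarvam", "ollama"]

def pick_provider(available: set) -> str:
    """Return the first provider in the chain that is configured and reachable."""
    for provider in FALLBACK_CHAIN:
        if provider in available:
            return provider
    raise RuntimeError("no provider available; Ollama should always be local")

print(pick_provider({"openai", "ollama"}))  # → "openai"
```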

🔗 Agent-to-Agent — Setu

Agents calling agents calling agents.

Agent Registry

register_agent({
    "agent_name": "report_agent",
    "endpoint": "http://localhost:8001/generate",
    "capabilities": ["generate_report", "sales_analysis"]
})

Routing

{
  "target_agent": "report_agent",
  "task": "generate_sales_report",
  "payload": { "quarter": "Q1", "year": 2026 }
}

Fallback

If the target agent is unreachable:

{
  "success": false,
  "error": "Agent unreachable: ConnectError",
  "fallback": true
}

The caller can fall back to local execution. No hard failures.
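The "no hard failures" behaviour means a network error is converted into a structured verdict instead of an exception. A stdlib sketch of that contract (NeuroVerse itself uses httpx; the function name and verdict shape here are illustrative):

```python
import json
import urllib.error
import urllib.request

def route_to_agent(endpoint: str, payload: dict) -> dict:
    """POST a task to a registered agent; on failure, return the structured
    fallback verdict shown above instead of raising."""
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            return {"success": True, "result": json.load(response)}
    except (urllib.error.URLError, OSError) as exc:
        return {
            "success": False,
            "error": f"Agent unreachable: {type(exc).__name__}",
            "fallback": True,  # caller may fall back to local execution
        }
```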


🧩 MCP Tools

NeuroVerse exposes nine tools via the Model Context Protocol (eight in the Python edition; neuroverse_reason is npm-only):

| # | Tool (npm) | Tool (Python) | Description |
| --- | --- | --- | --- |
| 1 | neuroverse_process | india_mcp_process_multilingual_input | Full pipeline: detect → normalise → intent → safety → execute |
| 2 | neuroverse_store | india_mcp_store_memory | Store a memory record in the tiered system |
| 3 | neuroverse_recall | india_mcp_recall_memory | Retrieve memories by user, intent, or tier |
| 4 | neuroverse_execute | india_mcp_safe_execute | End-to-end safe execution (convenience) |
| 5 | neuroverse_route | india_mcp_route_agent | Route a task to a registered downstream agent |
| 6 | neuroverse_model | india_mcp_model_route | Query the multi-model router (optionally invoke) |
| 7 | neuroverse_transcribe | india_mcp_transcribe_audio | Transcribe audio to text via Whisper STT |
| 8 | neuroverse_synthesize | india_mcp_synthesize_speech | Synthesize speech from text via Coqui TTS |
| 9 | neuroverse_reason | N/A | High-performance reasoning via OpenRouter |

Real-World Example

── Session 1 (Agent Alpha, 2pm) ───────────────────────────
india_mcp_process_multilingual_input({
    text: "anna indha sales data ah csv convert pannu",
    user_id: "alpha",
    execute: true
})
→ Language: Tamil+English (code-switched)
→ Intent: convert_format { output_format: "csv" }
→ Safety: ✅ allowed (LOW risk)
→ Execution: ✅ success

india_mcp_store_memory({
    user_id: "alpha",
    intent: "convert_format",
    tier: "episodic",
    data: { "file": "sales_q1.json", "output": "csv" },
    importance_score: 0.8
})

── Session 2 (Agent Beta, next day) ───────────────────────
india_mcp_recall_memory({
    user_id: "alpha",
    intent: "convert_format",
    limit: 5
})
→ "Agent Alpha converted sales_q1.json to CSV yesterday"
→ Beta picks up exactly where Alpha left off

🌐 REST API

NeuroVerse also ships with a FastAPI REST layer — for non-MCP clients:

python app/main.py
# → http://localhost:8000/docs (Swagger UI)

| Endpoint | Method | Description |
| --- | --- | --- |
| /health | GET | Health check |
| /api/process | POST | Full multilingual pipeline |
| /api/memory/store | POST | Store memory |
| /api/memory/recall | POST | Recall memories |

⚙️ Configuration

All settings via environment variables (.env):

# Database (PostgreSQL required for persistent memory)
DATABASE_URL=postgresql+asyncpg://user:password@localhost:5432/neuroverse

# AI Model API Keys (configure the ones you have)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
SARVAM_API_KEY=...

# Ollama (local, free)
OLLAMA_BASE_URL=http://localhost:11434

# Safety
SAFETY_STRICT_MODE=true     # Block MEDIUM risk actions too

# MCP Transport
MCP_TRANSPORT=stdio          # or streamable_http
MCP_PORT=8000

🧪 Testing

python -m pytest tests/ -v
tests/test_intent.py     — 10 passed  (rule-based + async + mock LLM + fallback)
tests/test_language.py   — 10 passed  (keyword normalisation + detection + code-switch)
tests/test_pipeline.py   —  8 passed  (full e2e: English, Tamil, Hindi, dangerous, edges)
tests/test_safety.py     — 12 passed  (blocklist, regex, risk classification, pipeline)

============================= 40 passed in 0.87s ==============================

What's Tested

| Category | Tests | Coverage |
| --- | --- | --- |
| Language Detection | 10 | Tamil, Hindi, English, empty input, code-switch flag |
| Intent Extraction | 10 | All 7 rule patterns, LLM mock, LLM failure, empty |
| Safety Engine | 12 | Keywords, regex, risk levels, full pipeline, strict mode |
| Full Pipeline | 8 | E2E English, Tamil, Hindi, dangerous commands, edge cases |

🏗️ Architecture

npm Edition (Node.js / TypeScript)

npm/
├── src/
│   ├── core/
│   │   ├── language.ts       # Vani  — Language detection (zero deps)
│   │   ├── intent.ts         # Bodhi — Intent extraction (LLM + fallback)
│   │   ├── memory.ts         # Smriti — Tiered memory (JSON files)
│   │   ├── safety.ts         # Kavach — 3-layer safety engine
│   │   └── router.ts         # Marga — Multi-model AI router
│   ├── services/
│   │   ├── executor.ts       # Tool registry + retry engine
│   │   └── agent-router.ts   # Setu — Agent-to-Agent routing
│   ├── types.ts              # TypeScript interfaces & enums
│   ├── constants.ts          # Shared constants
│   └── index.ts              # MCP Server — 9 tools (McpServer + Zod)
├── package.json              # npm publish config
├── tsconfig.json
└── LICENSE                   # Apache-2.0

Python Edition

app/
├── core/
│   ├── language.py           # Vani  — Language detection (langdetect)
│   ├── intent.py             # Bodhi — Intent extraction (LLM + fallback)
│   ├── memory.py             # Smriti — Tiered memory (PostgreSQL)
│   ├── safety.py             # Kavach — 3-layer safety engine
│   └── router.py             # Marga — Multi-model AI router
├── models/schemas.py         # 12 Pydantic v2 models
├── services/
│   ├── executor.py           # Tool registry + retry engine
│   └── agent_router.py       # Setu — Agent-to-Agent routing
├── config.py                 # Settings from environment
└── main.py                   # FastAPI REST entry point
mcp/server.py                 # MCP Server (FastMCP) — 8 tools
tests/                        # 40 tests (pytest)

Dependencies — Minimal

npm (3 packages):

| Package | Purpose |
| --- | --- |
| @modelcontextprotocol/sdk | MCP protocol |
| zod | Schema validation |
| axios | HTTP requests |

Python (7 packages):

| Package | Purpose |
| --- | --- |
| mcp[cli] | Model Context Protocol SDK |
| fastapi + uvicorn | REST API layer |
| pydantic | Input validation (v2) |
| langdetect | Statistical language identification |
| asyncpg + sqlalchemy[asyncio] | PostgreSQL async driver |
| httpx | Async HTTP for model APIs |

🚀 Roadmap

| Phase | Status | What |
| --- | --- | --- |
| v1.0 | ✅ Done | Multilingual parsing + intent extraction + 5 tools |
| v1.0 | ✅ Done | Tiered memory system (PostgreSQL) |
| v1.0 | ✅ Done | 3-layer safety engine (Kavach) |
| v1.0 | ✅ Done | Multi-model router (Marga) + Agent routing (Setu) |
| v2.0 | ✅ Done | Voice layer (Whisper/Coqui) + Extended Multilingual |
| v3.0 | ✅ Done | Redis caching + Embedding-based semantic retrieval |
| v4.0 | ✅ Done | Reinforcement learning (RLHF) + Arachne contextual indexing |
| v4.1 | ✅ Done | OpenRouter Reasoning Layer Integration |
| v5.0 | 🔮 Future | Agent marketplace & external system plugins |

🔐 Security

| Measure | Implementation |
| --- | --- |
| API key management | Environment variables only — never in code |
| Input sanitisation | Pydantic v2 with field constraints on all inputs |
| Rate limiting | Planned for v2.0 |
| Path traversal | N/A — no file system access by tools |
| SQL injection | Parameterised queries via SQLAlchemy |
| Encrypted storage | Delegated to PostgreSQL TLS |

🤝 Contributing

Contributions are welcome! Here's how to get started:

  1. Fork the repo
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'feat: add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Development Setup

# npm edition
git clone https://github.com/joshua400/neuroverse.git
cd neuroverse/npm
npm install
npm run build

# Python edition
cd neuroverse
python -m pip install -e ".[dev]"
python -m pytest tests/ -v    # All 40 should pass

📜 License

Apache-2.0


"I built NeuroVerse because it broke my heart watching agents forget everything every session — and not understand a word of Tamil."

Joshua Ragiland M
✉️ joshuaragiland@gmail.com
🌐 Portfolio Website

Built with 🧠 by Joshua — for the agents of tomorrow.

Server Config

{
  "mcpServers": {
    "neuroverse": {
      "command": "npx",
      "args": [
        "-y",
        "neuroverse@latest"
      ],
      "env": {
        "OPENAI_API_KEY": "<YOUR_OPENAI_API_KEY>",
        "REDIS_URL": "redis://localhost:6379",
        "GROQ_API_KEY": "<YOUR_GROQ_API_KEY>"
      }
    }
  }
}