
MCP of MCPs

Created by eliavamar


MCP of MCPs is a meta-server that merges all your MCP servers into a single smart endpoint. It gives AI agents instant tool discovery, selective schema loading, and massively cheaper execution, so you stop wasting tokens and time.

With persistent tool metadata, semantic search, and direct code execution between tools, it turns chaotic multi-server setups into a fast, efficient workflow with far fewer hallucinations. It also automatically analyzes tool output schemas when needed and preserves them across sessions for consistent behavior.

In short:
🚀 Faster automation
🧠 Cleaner reasoning
💰 Drastically fewer tokens
📦 Persistent + analyzed schema metadata

Tool 1: semantic_search_tools

Semantic Discovery Tool - Search for tools by describing the task you want to accomplish. Instead of browsing every tool name (or when tool names don't clearly indicate what they do), just describe your intent in plain English (e.g., "send notifications", "query database", "process images") and get back only the most relevant tools instantly. This is a fast, lightweight way to investigate what tools are available across all connected servers without loading any full tool definitions.

// Search by task/intent, not by tool names:
// Input: { query: "send notifications to a channel", limit: 5 }
// Returns only relevant matches (ranked by similarity):
// [
//   {
//     serverName: "slack",
//     toolName: "post_message",
//     description: "Post a message to a Slack channel",
//     similarityScore: 0.94,
//     fullPath: "slack/post_message"
//   },
// ....
//   // Only 5 most relevant tools returned - fast and lightweight!
// ]

Perfect for quick investigation:

  • Describe what you need to do, not what tool you need
  • Get instant results without loading full schemas
  • Discover capabilities across all servers in milliseconds
  • No token overhead - just lightweight tool names and descriptions
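
The ranking idea can be sketched in plain JavaScript. This is a conceptual illustration only: the toy 3-dimensional vectors and the tool list stand in for the real embeddings the server computes.

```javascript
// Conceptual sketch of the ranking behind semantic_search_tools:
// score each tool against the query, sort, and return only the top-k.

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical pre-computed tool embeddings (toy 3-d vectors).
const tools = [
  { fullPath: "slack/post_message", vec: [0.9, 0.1, 0.0] },
  { fullPath: "database/execute_query", vec: [0.0, 0.9, 0.1] },
  { fullPath: "gdrive/download_file", vec: [0.1, 0.2, 0.9] },
];

function searchTools(queryVec, limit) {
  return tools
    .map((t) => ({ fullPath: t.fullPath, similarityScore: cosine(queryVec, t.vec) }))
    .sort((a, b) => b.similarityScore - a.similarityScore)
    .slice(0, limit);
}

// A query like "send notifications to a channel" would embed close
// to the slack tool's vector:
const results = searchTools([0.85, 0.15, 0.05], 2);
```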

Tool 2: get_mcps_servers_overview

Discovery Tool - This tool returns only tool names without full schemas, giving agents a lightweight overview in seconds instead of loading hundreds of detailed definitions upfront. By showing just what's available without overwhelming details, it prevents confusion and hallucinations while eliminating loading delays.

// Returns:
// google_drive/download_file
// google_drive/upload_file
// slack/send_message
// database/execute_query
// ...

Tool 3: get_tools_overview

Introspection Tool - Load only the tools you actually need instead of all 30+ tools, saving thousands of tokens through selective loading. This on-demand approach provides faster responses and focused context that reduces confusion and improves accuracy.

// Input: ["google_drive/download_file", "slack/send_message"]
// Returns: Full tool definitions with:
// - Parameter schemas
// - Required vs optional fields
// - Example usage code
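
A minimal sketch of selective loading, assuming full definitions live in a registry keyed by fullPath (the names and shapes below are illustrative, not the server's actual internals):

```javascript
// Full definitions stay server-side; only the requested entries
// are handed to the model's context.

const registry = new Map([
  ["google_drive/download_file", { params: { documentId: "string" }, required: ["documentId"] }],
  ["slack/send_message", { params: { channel: "string", text: "string" }, required: ["channel", "text"] }],
  ["database/execute_query", { params: { sql: "string" }, required: ["sql"] }],
]);

function getToolsOverview(paths) {
  return paths.map((p) => ({ fullPath: p, definition: registry.get(p) ?? null }));
}

// Only two of the three registered tools enter the context:
const loaded = getToolsOverview(["google_drive/download_file", "slack/send_message"]);
```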

Tool 4: run_functions_code

Execution Tool - Data flows directly between tools without round-trips through the model, so a 2MB file transfer uses ~100 tokens instead of 50,000+. The model only sees clean final results instead of noisy intermediate data, executing complex workflows in one shot without processing delays.

// Write code that:
// - Calls multiple tools in sequence or parallel
// - Processes and transforms data
// - Implements complex logic and error handling
// - Returns only final results to the model
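
A sketch of why this is cheap, with the gdrive and salesforce tool proxies stubbed out: the large payload never leaves the execution sandbox, and only a small summary object would be serialized back to the model.

```javascript
// Stub tool proxies standing in for the real MCP-backed ones.
const gdrive = {
  async getDocument({ documentId }) {
    return { content: "x".repeat(2_000_000) }; // stand-in for a ~2MB transcript
  },
};
const salesforce = {
  async updateRecord({ objectType, data }) {
    return { ok: true, objectType, bytesStored: data.Notes.length };
  },
};

async function workflow() {
  // The 2MB string stays a native object in memory between the calls.
  const transcript = (await gdrive.getDocument({ documentId: "abc123" })).content;
  const result = await salesforce.updateRecord({
    objectType: "SalesMeeting",
    data: { Notes: transcript },
  });
  // Only this small object crosses back to the model:
  return { ok: result.ok, bytesStored: result.bytesStored };
}
```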

How The Full Flow Solves All Problems

When you need to accomplish a task, follow four steps:

  1. Start with get_mcps_servers_overview to get a lightweight list of all available tool names across servers. This gives you a quick scan of what's available without loading any schemas.
  2. If you can't find the tools you need, or the tool names aren't clear, use semantic_search_tools and describe your intent in plain English (e.g., "send notifications to a channel"). Semantic matching instantly returns only the most relevant tools, ranked by similarity.
  3. Once you've identified the specific tools you need, use get_tools_overview to load only those tool definitions with their full schemas and parameters, saving thousands of tokens and giving the model focused context.
  4. Finally, use run_functions_code to execute your workflow. Data flows directly between tools in memory, intermediate results stay native objects rather than being serialized into tokens, and only the final result returns to the model.

This pattern dramatically cuts token usage, speeds up execution by avoiding unnecessary model round-trips, and reduces hallucinations by exposing only relevant information at each step.
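
The full flow can be sketched end to end with a stubbed callTool helper. The helper and its canned responses are hypothetical; a real agent would issue these as MCP tool calls against the meta-server.

```javascript
// Canned responses stand in for the meta-server's real answers.
async function callTool(name, args) {
  const canned = {
    get_mcps_servers_overview: ["slack/post_message", "gdrive/download_file"],
    semantic_search_tools: [{ fullPath: "slack/post_message", similarityScore: 0.94 }],
    get_tools_overview: [{ fullPath: "slack/post_message", params: { channel: "string", text: "string" } }],
    run_functions_code: { ok: true },
  };
  return canned[name];
}

async function agentFlow() {
  // 1. Cheap scan of tool names, no schemas.
  const names = await callTool("get_mcps_servers_overview", {});
  // 2. Intent search when names alone aren't enough.
  const hits = await callTool("semantic_search_tools",
    { query: "send notifications to a channel", limit: 5 });
  // 3. Load full schemas only for the matched tool.
  const schemas = await callTool("get_tools_overview", [hits[0].fullPath]);
  // 4. Execute the workflow in one shot.
  return callTool("run_functions_code",
    { code: "await slack.post_message({ channel: '#general', text: 'hi' })" });
}
```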

Real-World Example

Traditional Approach:

TOOL CALL: gdrive.getDocument(documentId: "abc123")
  → returns full transcript text (loads into context: 50,000 tokens)
  
TOOL CALL: salesforce.updateRecord(...)
  → model writes entire transcript again (doubles the tokens: +50,000 tokens)

Total: 100,000+ tokens processed, slow response time

With MCP of MCPs:

const transcript = (await gdrive.getDocument({ documentId: 'abc123' })).content;
await salesforce.updateRecord({
  objectType: 'SalesMeeting',
  data: { Notes: transcript }
});

The code executes in one operation. Data flows directly between tools. Only the final result returns to the model.

Total: ~2,000 tokens processed (a ~98% reduction) ⚡

Key Benefits

Faster Response Time - No need to load all tools upfront
Reduced Hallucinations - Model sees only relevant information
Progressive Disclosure - Load tools on-demand as needed
Code Composition - Orchestrate complex workflows with familiar JavaScript
Persistent Tool Metadata - Automatically preserves tool output schemas across sessions

Setup

Prerequisites

  • Node.js
  • npm or yarn
  • Configured MCP servers you want to aggregate

Add to Cline

Add this to your Cline MCP settings file:

Option 1: Using inline configuration

{
  "mcpServers": {
    "mcp-of-mcps": {
      "autoApprove": [],
      "disabled": false,
      "timeout": 60,
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "@dbestai/mcp-of-mcps",
        "--config",
        "[{\"name\":\"weather\",\"command\":\"npx\",\"args\":[\"-y\",\"@h1deya/mcp-server-weather\"]},{\"name\":\"time\",\"command\":\"uvx\",\"args\":[\"mcp-server-time\"]}]"
      ]
    }
  }
}

Option 2: Using a config file

First, create a config.json file that specifies which MCP servers to connect to:

[
  {
    "name": "weather",
    "command": "npx",
    "args": ["-y", "@h1deya/mcp-server-weather"]
  },
  {
    "name": "time",
    "command": "uvx",
    "args": ["mcp-server-time"]
  }
]

Then reference this file in your Cline settings:

{
  "mcpServers": {
    "mcp-of-mcps": {
      "autoApprove": [],
      "disabled": false,
      "timeout": 60,
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "@dbestai/mcp-of-mcps",
        "--config-file",
        "/absolute/path/to/your/config.json"
      ]
    }
  }
}

Configuration Options:

  • autoApprove: Array of tool names that don't require approval (e.g., ["get_mcps_servers_overview"])
  • disabled: Set to false to enable the server
  • timeout: Timeout in seconds for tool execution (default: 60)
  • type: Connection type, always "stdio" for MCP servers
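
For example, a config that auto-approves the two read-only discovery tools and raises the timeout for long-running workflows (the tool names come from this README; the timeout value is just an illustration):

```json
{
  "mcpServers": {
    "mcp-of-mcps": {
      "autoApprove": ["get_mcps_servers_overview", "semantic_search_tools"],
      "disabled": false,
      "timeout": 120,
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@dbestai/mcp-of-mcps", "--config-file", "/absolute/path/to/your/config.json"]
    }
  }
}
```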

Learn More

This implementation follows the patterns described in Anthropic's article on code execution with MCP:
📖 Code execution with MCP: Building more efficient agents

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

ISC
