
🦊 MCPBench: A Benchmark for Evaluating MCP Servers


MCPBench is an evaluation framework for MCP Servers. It supports three task types: Web Search, Database Query, and GAIA, and is compatible with both local and remote MCP Servers. The framework evaluates different MCP Servers (such as Brave Search, DuckDuckGo, etc.) in terms of task completion accuracy, latency, and token consumption under the same LLM and Agent configuration. See the evaluation report for details.

MCPBench Overview

The implementation is based on LangProBe: a Language Programs Benchmark.
Big thanks to Qingxu Fu for the initial implementation!


📋 Table of Contents

  • 🔥 News
  • 🛠️ Installation
  • 🚀 Quick Start
  • Datasets and Experimental Results
  • 🚰 Cite

🔥 News

  • Apr. 29, 2025 🌟 Update the code for evaluating the MCP Server Package within GAIA.
  • Apr. 14, 2025 🌟 We are proud to announce that MCPBench is now open-sourced.

🛠️ Installation

The framework requires Python >= 3.11, Node.js, and jq.

conda create -n mcpbench python=3.11 -y
conda activate mcpbench
pip install -r requirements.txt
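
Node.js and jq are not installed by pip. One way to pull them into the same environment is via conda-forge (a sketch; system package managers such as apt or brew work just as well):

# Non-Python requirements (conda-forge builds assumed)
conda install -c conda-forge nodejs jq -y

# Quick check that both are on the PATH
node --version
jq --version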

🚀 Quick Start

Launch MCP Server

Launch stdio MCP as SSE

If the MCP Server only supports stdio (i.e., it does not support SSE), write the config like this:

{
    "mcp_pool": [
        {
            "name": "FireCrawl",
            "description": "A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.",
            "tools": [
                {
                    "tool_name": "firecrawl_search",
                    "tool_description": "Search the web and optionally extract content from search results.",
                    "inputs": [
                        {
                            "name": "query",
                            "type": "string",
                            "required": true,
                            "description": "your search query"
                        }
                    ]
                }
            ],
            "run_config": [
                {
                    "command": "npx -y firecrawl-mcp",
                    "args": "FIRECRAWL_API_KEY=xxx",
                    "port": 8005
                }
            ]
        }
    ]
}

Save this config file in the configs folder and launch it using:

sh launch_mcps_as_sse.sh YOUR_CONFIG_FILE

For example, if the config file is mcp_config_websearch.json, then run:

sh launch_mcps_as_sse.sh mcp_config_websearch.json
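
Once the script is running, you can confirm the wrapped server is reachable by opening its SSE stream with curl (a sketch, assuming the port 8005 from the config above and the usual /sse path); the connection should stay open and stream events rather than return immediately:

# -N disables buffering so events are printed as they arrive
curl -N http://localhost:8005/sse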

Launch SSE MCP

If your MCP Server already supports SSE, you can use it directly; the URL will be http://localhost:8001/sse.

For an SSE-supported MCP Server, write the config like this:

{
    "mcp_pool": [
        {
            "name": "browser_use",
            "description": "AI-driven browser automation server implementing the Model Context Protocol (MCP) for natural language browser control and web research.",
            "tools": [
                {
                    "tool_name": "browser_use",
                    "tool_description": "Executes a browser automation task based on natural language instructions and waits for it to complete.",
                    "inputs": [
                        {
                            "name": "query",
                            "type": "string",
                            "required": true,
                            "description": "Your query"
                        }
                    ]
                }
            ],
            "url": "http://0.0.0.0:8001/sse"
        }
    ]
}

The url can also be generated from the MCP marketplace on ModelScope.

Launch Evaluation

To evaluate the MCP Server's performance on Web Search tasks:

sh evaluation_websearch.sh YOUR_CONFIG_FILE

To evaluate the MCP Server's performance on Database Query tasks:

sh evaluation_db.sh YOUR_CONFIG_FILE

To evaluate the MCP Server's performance on GAIA tasks:

sh evaluation_gaia.sh YOUR_CONFIG_FILE
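
Each evaluation script takes a config file the same way the launch script does. For example, reusing mcp_config_websearch.json from above:

sh evaluation_websearch.sh mcp_config_websearch.json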

Datasets and Experimental Results

Our framework provides two datasets for evaluation. For the WebSearch task, the dataset is located at MCPBench/langProBe/WebSearch/data/websearch_600.jsonl, containing 200 QA pairs each from Frames, news, and technology domains. Our framework for automatically constructing evaluation datasets will be open-sourced later.

For the Database Query task, the dataset is located at MCPBench/langProBe/DB/data/car_bi.jsonl. You can add your own dataset in the following format:

{
  "unique_id": "",
  "Prompt": "",
  "Answer": ""
}
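
Because the file is JSONL, each record sits on a single line. A quick sanity check with jq (already required by the framework) can flag records missing any of the three fields; the dataset path below is a hypothetical example:

# Print any record missing unique_id, Prompt, or Answer; no output means the file is well-formed
jq -c 'select((has("unique_id") and has("Prompt") and has("Answer")) | not)' langProBe/DB/data/my_dataset.jsonl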

We have evaluated mainstream MCP Servers on both tasks. For detailed experimental results, please refer to the Documentation.

🚰 Cite

If you find this work useful, please consider citing our project:

@misc{mcpbench,
  title={MCPBench: A Benchmark for Evaluating MCP Servers},
  author={Zhiling Luo and Xiaorong Shi and Xuanrui Lin and Jinyang Gao},
  howpublished={\url{https://github.com/modelscope/MCPBench}},
  year={2025}
}

Alternatively, you may reference our report.

@article{mcpbench_report,
  title={Evaluation Report on MCP Servers},
  author={Zhiling Luo and Xiaorong Shi and Xuanrui Lin and Jinyang Gao},
  year={2025},
  journal={arXiv preprint arXiv:2504.11094},
  url={https://arxiv.org/abs/2504.11094},
  primaryClass={cs.AI}
}