
Scout Monitoring MCP

This repository contains code to locally run an MCP server that can access Scout Monitoring data via Scout's API. We provide a Docker image that can be pulled and run by your AI Assistant to access Scout Monitoring data.

This puts Scout Monitoring's performance and error data directly in the hands of your AI Assistant. For Rails, Django, FastAPI, Laravel and more. Use it to get traces and errors with line-of-code information that the AI can use to target fixes right in your editor and codebase. N+1 queries, slow endpoints, slow queries, memory bloat, throughput issues - all your favorite performance problems surfaced and explained right where you are working.

If this makes your life a tiny bit better, why not :star: it?!

Prerequisites

You will need to have or create a Scout Monitoring account and obtain an API key.

  1. Sign up
  2. Install the Scout Agent in your application and send Scout data!
    • Ruby
    • Python
    • PHP
    • If you are trying this out locally, make sure monitor: true and errors_enabled: true are set in your config for the best experience (a sketch follows this list)
  3. Visit settings to get or create an API key
    • This is not your "Agent Key"; it's the "API Key" that can be created on the Settings page
    • This is a read-only key that can only access data in your account
  4. Install Docker. Instructions below assume you can start a Docker container
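
For the Ruby agent, for example, those settings live in config/scout_apm.yml. A minimal sketch with placeholder values (see Scout's install docs for your framework):

# config/scout_apm.yml -- sketch only; values are placeholders
common: &defaults
  key: YOUR_SCOUT_AGENT_KEY
  name: YourAppName
  monitor: true
  errors_enabled: true
development:
  <<: *defaults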

The MCP server will not currently start without an API key set, either in the environment or by a command-line argument on startup.
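
For example, when launching the Docker image by hand, the key can be supplied through the environment (substitute your actual key):

docker run --rm -i -e SCOUT_API_KEY=your_scout_api_key_here scoutapp/scout-mcp-local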

Installation

We recommend using the provided Docker image to run the MCP server. It is intended to be started by your AI Assistant and configured with your Scout API key. Most local clients let you specify, in their configuration, a command that starts the MCP server. A few examples are provided below.

The Docker image is available on Docker Hub.

Of course, you can always clone this repo and run the MCP server directly; uv or other environment management tools are recommended.
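
A rough sketch of running from a clone (assuming uv is installed; the dev task comes from the Local Development section below and starts the server under the MCP Inspector):

# from a clone of this repository
uv sync
SCOUT_API_KEY=your_scout_api_key_here uv run task dev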

Setup Wizard

The simplest way to configure and start using the Scout MCP is with our interactive setup wizard:

Run via npx:

npx @scout_apm/wizard

Build and run from source:

cd ./wizard
npm install
npm run build
node dist/wizard.js

The wizard will guide you through:

  • Selecting your AI coding platform (Cursor, Claude Code, Claude Desktop)
  • Entering your Scout API key
  • Automatically configuring the MCP server settings

Supported Platforms

The wizard currently supports setup for:

  • Cursor - Automatically configures MCP settings
  • Claude Code (CLI) - Provides the correct command to run
  • Claude Desktop - Updates the configuration file for Windows/Mac

Configure a local Client (e.g. Claude/Cursor/VS Code Copilot)

If you would like to configure the MCP manually, this usually just means adding a command that runs the MCP server, with your API key in its environment, to your AI Assistant's config. Here is the shape of the JSON (the top-level key varies):

{
  "mcpServers": {
    "scout-apm": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "--env", "SCOUT_API_KEY", "scoutapp/scout-mcp-local"],
      "env": { "SCOUT_API_KEY": "your_scout_api_key_here"}
    }
  }
}
Claude Code
claude mcp add scoutmcp -e SCOUT_API_KEY=your_scout_api_key_here -- docker run --rm -i -e SCOUT_API_KEY scoutapp/scout-mcp-local
Cursor

Install MCP Server

MAKE SURE to update the SCOUT_API_KEY value to your actual API key under Arguments in Cursor Settings > MCP
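
If you prefer editing files directly, Cursor also reads an mcp.json (project-level .cursor/mcp.json or global ~/.cursor/mcp.json; paths assumed from current Cursor behavior) with the same shape as above:

{
  "mcpServers": {
    "scout-apm": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "--env", "SCOUT_API_KEY", "scoutapp/scout-mcp-local"],
      "env": { "SCOUT_API_KEY": "your_scout_api_key_here" }
    }
  }
}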

VS Code Copilot
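
A sketch for VS Code Copilot, assuming current VS Code MCP support that reads a workspace .vscode/mcp.json with a top-level "servers" key (check the VS Code docs if this schema has changed):

{
  "servers": {
    "scout-apm": {
      "type": "stdio",
      "command": "docker",
      "args": ["run", "--rm", "-i", "--env", "SCOUT_API_KEY", "scoutapp/scout-mcp-local"],
      "env": { "SCOUT_API_KEY": "your_scout_api_key_here" }
    }
  }
}
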
Claude Desktop

Add the following to your Claude config file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "scout-apm": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "--env", "SCOUT_API_KEY", "scoutapp/scout-mcp-local"],
      "env": { "SCOUT_API_KEY": "your_scout_api_key_here"}
    }
  }
}

Token Usage

We are currently more interested in expanding the available information than in strictly controlling response size from our MCP tools. If your AI Assistant has a configurable token limit (e.g. MAX_MCP_OUTPUT_TOKENS for Claude Code), we recommend setting it generously high, e.g. 50,000 tokens.
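
For Claude Code, for example:

export MAX_MCP_OUTPUT_TOKENS=50000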

Usage

Scout's MCP is intended to put error and performance data directly in the... hands? of your AI Assistant. Use it to get traces and errors with line-of-code information that the AI can use to target fixes right in your editor.

Most assistants will show you the raw tool calls as well as their own analysis of the results. Desktop assistants can readily create custom JS applications to explore whatever data you desire. Assistants integrated into code editors can use trace data and error backtraces to make fixes right in your codebase.

Combine Scout's MCP with your AI Assistant's other tools to:

  • Create rich GitHub/GitLab issues based on errors and performance data
  • Make JIRA fun - have your AI Assistant create tickets with all the details
  • Generate PRs that fix specific errors and performance problems

Tools

The Scout MCP provides the following tools for accessing Scout APM data:

  • list_apps - List available Scout APM applications, with optional filtering by last active date
  • get_app_metrics - Get individual metric data (response_time, throughput, etc.) for a specific application
  • get_app_endpoints - Get all endpoints for an application with aggregated performance metrics
  • get_endpoint_metrics - Get timeseries metrics for a specific endpoint in an application
  • get_app_endpoint_traces - Get recent traces for an app filtered to a specific endpoint
  • get_app_trace - Get an individual trace with all spans and detailed execution information
  • get_app_error_groups - Get recent error groups for an app, optionally filtered by endpoint
  • get_app_insights - Get performance insights including N+1 queries, memory bloat, and slow queries

Resources

The Scout MCP provides configuration templates as resources that your AI assistant can read and apply:

  • scoutapm://config-resources/{framework} - Setup instructions for a supported framework or library (rails, django, flask, fastapi)
  • scoutapm://config-resources/list - List all available configuration templates
  • scoutapm://metrics - List of all available metrics for Scout APM
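
Assistants fetch these with ordinary MCP resources/read requests; a sketch using one of the URIs above:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "scoutapm://config-resources/rails" }
}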

Useful Prompts

Setup & Configuration

  • "Help me set up Scout monitoring for my Rails application"
  • "Create a Scout APM config file for my Django project with key ABC123"

Performance & Monitoring

  • "Summarize the available tools in the Scout Monitoring MCP."
  • "Find the slowest endpoints for app my-app-name in the last 7 days. Generate a table with the results including the average response time, throughput, and P95 response time."
  • "Show me the highest-frequency errors for app Foo in the last 24 hours. Get the latest error detail, examine the backtrace and suggest a fix."
  • "Get any recent n+1 insights for app Bar. Pull the specific trace by id and help me optimize it based on the backtrace data."

Local Development

We use uv and taskipy to manage environments and run tasks for this project.

Run with Inspector

uv run task dev

Connect within the Inspector, add your API key, and set the transport to STDIO.

Build the Docker image

docker build -t scout-mcp-local .
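
To point a client at this locally built image instead of the Docker Hub one, swap the image name in the run command (sketch):

docker run --rm -i -e SCOUT_API_KEY=your_scout_api_key_here scout-mcp-local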

Server Config

{
  "mcpServers": {
    "scout-apm": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "--env",
        "SCOUT_API_KEY",
        "scoutapp/scout-mcp-local"
      ],
      "env": {
        "SCOUT_API_KEY": "your_scout_api_key_here"
      }
    }
  }
}