@openapi-mcp/server

The openapi-mcp-server is a powerful bridge between OpenAPI specifications and AI assistants using the Model Context Protocol (MCP). It automatically converts any OpenAPI/Swagger API specification into MCP tools that can be used by AI assistants like Claude Desktop. This enables AI assistants to seamlessly interact with your APIs, making them capable of performing real-world actions through your services without requiring custom integrations.

Features

⚠️ Note: This server requires every operation in your OpenAPI/Swagger specification to have an operationId. If any operation is missing an operationId, the server will fail to start or process the specification. Always ensure that all operations are explicitly assigned a unique and descriptive operationId.
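
For instance, every operation needs its own operationId, as in this minimal, invented snippet (the path and name are purely illustrative):

paths:
  /items:
    get:
      operationId: listItems   # required; the server cannot process operations without one
      summary: List items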

  • 🔌 OpenAPI Integration
    • Automatically converts OpenAPI/Swagger specifications into MCP tools
  • 📚 Multiple OpenAPI Versions
    • Support for OpenAPI v3.0.0 and v3.1.0
  • 🔐 Authentication Support (see the example after this list):
    • HTTP authentication schemes:
      • Basic authentication
      • Bearer token authentication (static tokens, e.g., Personal Access Tokens)
      • Other HTTP schemes as defined by RFC 7235
    • API keys:
      • Header-based API keys
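
As a rough guide, the supported options correspond to OpenAPI securitySchemes along these lines (the scheme names and the X-API-Key header name are placeholders for illustration):

components:
  securitySchemes:
    basicAuth:
      type: http
      scheme: basic            # HTTP Basic authentication
    bearerAuth:
      type: http
      scheme: bearer           # static tokens such as Personal Access Tokens
    apiKeyHeader:
      type: apiKey
      in: header               # header-based API keys
      name: X-API-Key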

Limitations

⚠️ Version Support:

  • OpenAPI v2.0 (Swagger) is not currently supported

⚠️ Authentication Limitations (illustrated after this list):

  • OAuth 2.0 authentication is not supported
  • OpenID Connect Discovery is not supported
  • Query parameter-based API keys are not supported
  • Cookie-based authentication is not supported
  • Dynamic JWT authentication (login-generated tokens) is not supported
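
For reference, security schemes of the following shapes fall under these unsupported categories (names and URLs are illustrative only):

components:
  securitySchemes:
    oauth:
      type: oauth2             # OAuth 2.0 (not supported)
      flows:
        clientCredentials:
          tokenUrl: https://example.com/oauth/token
          scopes: {}
    oidc:
      type: openIdConnect      # OpenID Connect Discovery (not supported)
      openIdConnectUrl: https://example.com/.well-known/openid-configuration
    apiKeyQuery:
      type: apiKey
      in: query                # query parameter API keys (not supported)
      name: api_key
    cookieAuth:
      type: apiKey
      in: cookie               # cookie-based authentication (not supported)
      name: session_id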

Installation

# Clone the repository
git clone https://github.com/sotayamashita/openapi-mcp-server.git
cd openapi-mcp-server

# Install dependencies
bun install

Usage

You can run the server by providing an OpenAPI specification URL or file path:

# Using a local file
bun run src/index.ts ./path/to/openapi.yml

# Using a URL
bun run src/index.ts --api https://example.com/api-spec.json

Configuration

Environment Variables

  • BASE_URL

    • Description: Base URL used for API requests (for example, https://api.example.com/v1/, as in the configuration examples below)

  • HEADERS

    • Required: No
    • Default: {"Content-Type": "application/json", "User-Agent": "openapi-mcp-server"}
    • Description: Custom headers that override the default headers

Claude Desktop Integration

To use this MCP server with Claude Desktop:

  1. Open your Claude Desktop configuration file:

    # macOS
    code ~/Library/Application\ Support/Claude/claude_desktop_config.json
    
  2. Add the following configuration:

    {
      "mcpServer": {
        "openapi-mcp-server": {
          "command": "bun",
          "args": [
            "/path/to/openapi-mcp-server/src/index.ts",
            "/path/to/openapi-mcp-server/demo/openapi.yml"
          ],
          "env": {
            "BASE_URL": "https://api.example.com/v1/",
            "HEADERS": "{\"Authorization\": \"Bearer ****\"}"
          }
        }
      }
    }
    

For more detailed instructions, see the MCP quickstart guide.

Cursor Integration

To use this MCP server with Cursor as a global MCP server:

  1. Open Cursor
  2. Open Cursor Settings > MCP
  3. Click "+ Add new global MCP Server"
  4. Add the following configuration:
    {
      "mcpServer": {
        "openapi-mcp-server": {
          "command": "bun",
          "args": [
            "/path/to/openapi-mcp-server/src/index.ts",
            "/path/to/openapi-mcp-server/demo/openapi.yml"
          ],
          "env": {
            "BASE_URL": "https://api.example.com/v1/",
            "HEADERS": "{\"Authorization\": \"Bearer ****\"}"
          }
        }
      }
    }
    

For more detailed instructions, see Cursor's Model Context Protocol documentation.

Best Practices

OpenAPI/Swagger Specifications

Use Descriptive operationId Fields

The operationId field in your OpenAPI/Swagger specification plays a crucial role in how tools are presented to AI assistants. When converting your API to MCP tools:

  • Tool Naming: The operationId is used directly as the MCP tool name
  • Clarity: Descriptive operationId values make it easier for AI assistants to understand and use your API
  • Consistency: Use a consistent naming pattern (e.g., getUser, createUser, updateUserPassword)

Example of a well-defined operation:

paths:
  /users/{userId}:
    get:
      operationId: getUserById
      summary: Retrieve user information
      description: Returns detailed information about a specific user

Include Detailed Operation Descriptions

The description field for each operation is equally important:

  • Tool Selection: AI assistants use this description to determine which tool is appropriate for a given task
  • Understanding: Comprehensive descriptions help the AI understand exactly what the operation does
  • Context: Include information about parameters, expected responses, and potential errors

Example of a well-described operation:

paths:
  /users:
    post:
      operationId: createUser
      summary: Create a new user account
      description: |
        Creates a new user in the system. Requires a unique email address and a 
        password that meets security requirements (min 8 chars, including uppercase, 
        lowercase, number). Returns the created user object with an assigned user ID.

Without thorough descriptions, AI assistants may struggle to identify the right operations for user requests or may use them incorrectly. The quality of your API descriptions directly impacts how effectively AI can leverage your tools.

Development

Development Commands

# Run tests
bun vitest run

# Run tests with watch mode
bun vitest

# Run tests with coverage
bun vitest run --coverage

# Format code
bun prettier . --write

Manual Release Process (Using a Release Branch)

This section outlines the manual steps for creating a new release using a dedicated release branch. This method helps isolate the release process from the main branch until publication.

Prerequisites:

  • All changes intended for the release have been merged into the main branch.
  • You are logged into your npm account (npm login).
  • Changesets CLI is accessible (this guide uses bunx).

Steps:

  1. Ensure main is up-to-date:

    git checkout main
    git pull origin main
    
  2. Create a release branch: Name it according to the version you intend to release (e.g., v0.1.0).

    git checkout -b release/vX.Y.Z main
    

    Replace vX.Y.Z with the target version.

  3. Bump versions and update Changelog: This command consumes the changeset files (in .changeset/), updates the version in package.json, and updates CHANGELOG.md.

    bunx @changesets/cli version
    

    Review the changes applied to package.json and CHANGELOG.md to ensure they are correct.

  4. Commit the versioning changes:

    git add .
    git commit -m "chore: update versions and changelogs for vX.Y.Z"
    

    Replace vX.Y.Z with the target version.

  5. Build the project: Ensure the distribution files are generated with the latest changes.

    bun run build
    
  6. Publish to npm: Publish the new version to the npm registry.

    npm publish
    

    Ensure your package.json includes "publishConfig": { "access": "public" } for scoped public packages. (You might also use bun publish, but confirm its behavior with scoped public packages if you choose this.)

  7. Tag the release in Git: Create a Git tag that matches the version published to npm.

    # Replace X.Y.Z with the actual version number, e.g., 0.1.0
    git tag @openapi-mcp/server@X.Y.Z
    
  8. Merge the release branch back into main: This brings the version bump and CHANGELOG updates into your main branch.

    git checkout main
    git merge --no-ff release/vX.Y.Z
    

    (Using --no-ff creates a merge commit, which can help in tracking releases in the Git history).

  9. Push main and the new tag to the remote repository:

    git push origin main --tags
    
  10. (Optional) Clean up: Delete the release branch locally and remotely if it's no longer needed.

    git branch -d release/vX.Y.Z
    git push origin --delete release/vX.Y.Z
    

For more detailed information on using Changesets, refer to the official Changesets documentation.
