MCP to LangChain/LangGraph Adapter

Created by SDCalvo, 8 months ago

An adapter that turns MCP server tools into LangChain-usable tools.

This project provides an adapter that lets you use MCP (Model Context Protocol) server tools in LangChain and LangGraph applications. With this adapter, you can seamlessly integrate an MCP server's tools into your AI application pipelines.

Table of Contents

  • Introduction
  • Installation
  • Getting Started
  • API Reference
  • Examples
  • Troubleshooting
  • Contributing

Introduction

The MCP to LangChain/LangGraph Adapter bridges the gap between MCP servers, which provide various tools through a standardized interface, and LangChain/LangGraph, popular frameworks for building applications with large language models. This adapter enables you to:

  • Connect to an MCP server
  • Discover available tools
  • Convert MCP tools to LangChain-compatible tools
  • Use these tools in LangChain agents, chains, and LangGraph agents
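
The conversion step can be pictured as wrapping each remote MCP tool in an object that exposes LangChain's tool surface: a name, a description, and something callable. The following stdlib-only sketch illustrates the idea; it is not the adapter's actual implementation (a real wrapper forwards the call to the MCP server over the session instead of invoking a local function):

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class WrappedTool:
    """Illustrative stand-in for a LangChain tool: name, description, callable."""
    name: str
    description: str
    func: Callable[..., Any]

    def run(self, tool_input: Dict[str, Any]) -> Any:
        # The real adapter would forward this call to the MCP server here
        return self.func(**tool_input)

# Pretend this lambda is a remote MCP "add" tool
add_tool = WrappedTool("add", "Add two numbers", lambda a, b: a + b)
print(add_tool.run({"a": 5, "b": 7}))  # → 12
```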

Installation

To use this adapter, you need to have the necessary packages installed:

# If using pipenv (recommended)
pipenv install mcp langchain langchain-openai langgraph python-dotenv

# If using pip
pip install mcp langchain langchain-openai langgraph python-dotenv

Setting Up API Keys

For examples using OpenAI models, you'll need an OpenAI API key. The recommended way to set this up is using a .env file:

  1. Create a .env file in your project root (based on .env.example):
OPENAI_API_KEY=your_actual_api_key_here
  2. Load the environment variables in your code:
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

Alternatively, you can set the API key directly in your environment or code:

import os
os.environ["OPENAI_API_KEY"] = "your_api_key_here"
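
Whichever approach you choose, it helps to fail fast when the key is missing rather than hitting an opaque error deep inside a model call. A small helper along these lines (illustrative; `require_env` is not part of the adapter) makes the failure obvious:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, raising a clear error if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; check your .env file")
    return value

# Call once at startup, before constructing any models
# api_key = require_env("OPENAI_API_KEY")
```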

Getting Started

Setting Up the MCP Server

Before using the adapter, you need to have an MCP server running. The adapter is designed to work with an MCP server script that you provide.

  1. Create a basic MCP server script (e.g., simple_server.py):
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("SimpleServer")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers and return the result."""
    return a + b

@mcp.tool()
def get_weather(city: str) -> str:
    """
    Get the current weather for a city.

    Args:
        city: The name of the city to get weather for
    """
    # In a real application, you'd call a weather API here
    return f"Weather in {city}: Sunny +11°C"

if __name__ == "__main__":
    mcp.run(transport="stdio")

This example server exposes two tools:

  • add: Takes two integers and returns their sum
  • get_weather: Takes a city name and returns a simulated weather report
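
Because the decorated functions are plain Python, you can sanity-check their logic directly before starting the server at all. The snippet below duplicates the two tool bodies from the example server so it can run standalone:

```python
def add(a: int, b: int) -> int:
    """Same logic as the server's add tool."""
    return a + b

def get_weather(city: str) -> str:
    """Same logic as the server's get_weather tool."""
    return f"Weather in {city}: Sunny +11°C"

assert add(5, 7) == 12
assert get_weather("London") == "Weather in London: Sunny +11°C"
print("tool logic OK")
```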

Connecting to the MCP Server

The adapter will automatically manage the connection to the MCP server:

from mcp_langchain_adapter import MCPAdapter

# Create an adapter instance, pointing to your MCP server script
adapter = MCPAdapter("simple_server.py")

# Initialize the connection and get the list of available tools
tools = adapter.get_tools()

# Print the available tools
print(f"Found {len(tools)} tools:")
for tool in tools:
    print(f"- {tool.name}: {tool.description}")

Using MCP Tools with LangChain

Once you have the tools, you can use them in LangChain applications:

from langchain.agents import AgentExecutor, create_react_agent
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# Initialize the language model
llm = ChatOpenAI(model="gpt-3.5-turbo")

# Create a prompt template for the agent
template = """Answer the following questions as best you can using the provided tools.

Available tools:
{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought: """

prompt_template = PromptTemplate.from_template(template)

# Create a LangChain agent with the MCP tools
agent = create_react_agent(llm, tools, prompt_template)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run the agent
result = agent_executor.invoke({"input": "What is 5 + 7?"})
print(result["output"])
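
Under the hood, a ReAct agent loop works by parsing `Action:` and `Action Input:` lines out of the model's text, exactly the format the prompt template above instructs the model to follow. LangChain's real output parser is more robust; this simplified sketch only illustrates the mechanism:

```python
import re
from typing import Optional, Tuple

def parse_react_step(text: str) -> Optional[Tuple[str, str]]:
    """Extract the first Action / Action Input pair from ReAct-style output."""
    action = re.search(r"Action:\s*(.+)", text)
    action_input = re.search(r"Action Input:\s*(.+)", text)
    if not (action and action_input):
        return None  # no tool call found, e.g. a Final Answer turn
    return action.group(1).strip(), action_input.group(1).strip()

step = parse_react_step('Thought: add them\nAction: add\nAction Input: {"a": 5, "b": 7}')
print(step)  # → ('add', '{"a": 5, "b": 7}')
```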

Using MCP Tools with LangGraph

LangGraph provides a more modern, flexible approach to building agents. Here's how to use the adapter's MCP tools with LangGraph:

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver

# Initialize the language model
llm = ChatOpenAI(model="gpt-3.5-turbo")

# Create a memory saver for conversation history
memory = MemorySaver()

# Create a LangGraph react agent with the MCP tools
agent = create_react_agent(
    llm,
    tools,
    prompt="You are a helpful AI assistant that can use tools to solve problems.",
    checkpointer=memory
)

# Create the configuration with thread ID for memory
config = {"configurable": {"thread_id": "example-thread"}}

# Run the agent with a question
result = agent.invoke(
    {"messages": [HumanMessage(content="What is 5 + 7?")]},
    config
)

# Get the final answer
final_answer = result["messages"][-1].content
print(final_answer)

# Continue the conversation with a follow-up question; the checkpointer
# restores the earlier messages for this thread_id automatically
result = agent.invoke(
    {"messages": [HumanMessage(content="What's the weather in London?")]},
    config
)
print(result["messages"][-1].content)
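
The `MemorySaver` checkpointer keeps per-conversation state keyed by `thread_id`, which is why the follow-up question works without resending the history. Conceptually it behaves like a per-thread message log; the toy class below illustrates that idea only (the real checkpointer stores full graph state, not just messages):

```python
from collections import defaultdict
from typing import Dict, List

class ThreadMemory:
    """Toy illustration of checkpointer-style per-thread message history."""
    def __init__(self) -> None:
        self._threads: Dict[str, List[str]] = defaultdict(list)

    def append(self, thread_id: str, message: str) -> None:
        # Each thread accumulates its own history independently
        self._threads[thread_id].append(message)

    def history(self, thread_id: str) -> List[str]:
        return list(self._threads[thread_id])

mem = ThreadMemory()
mem.append("example-thread", "What is 5 + 7?")
mem.append("example-thread", "12")
print(mem.history("example-thread"))  # → ['What is 5 + 7?', '12']
```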

API Reference

MCPAdapter

The MCPAdapter class manages the connection to the MCP server and converts MCP tools to LangChain tools.

Constructor

MCPAdapter(server_script_path: str, env: Optional[Dict[str, str]] = None)
  • server_script_path: Path to the MCP server script to run
  • env: Optional environment variables for the server process

Methods

  • initialize(): Initialize the connection to the MCP server synchronously
  • get_tools() -> List[BaseTool]: Get all available tools as LangChain tools
  • get_tool_names() -> List[str]: Get the names of all available tools
  • get_tool_by_name(name: str) -> Optional[BaseTool]: Get a specific tool by name
  • close() -> None: Clean up resources (async method)
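
To make that surface concrete, here is a stand-in class with the same method names and return shapes. It is illustrative only: the real `MCPAdapter` spawns and talks to a live MCP server, and `initialize()`/`close()` are omitted here because the stand-in has no connection to manage:

```python
from typing import Dict, List, Optional

class FakeTool:
    """Minimal stand-in for a LangChain BaseTool (name + description)."""
    def __init__(self, name: str, description: str) -> None:
        self.name = name
        self.description = description

class FakeAdapter:
    """Stand-in mirroring the MCPAdapter method surface for illustration."""
    def __init__(self, server_script_path: str,
                 env: Optional[Dict[str, str]] = None) -> None:
        self.server_script_path = server_script_path
        # The real adapter discovers these from the running server
        self._tools = [FakeTool("add", "Add two numbers"),
                       FakeTool("get_weather", "Get weather for a city")]

    def get_tools(self) -> List[FakeTool]:
        return list(self._tools)

    def get_tool_names(self) -> List[str]:
        return [t.name for t in self._tools]

    def get_tool_by_name(self, name: str) -> Optional[FakeTool]:
        return next((t for t in self._tools if t.name == name), None)

adapter = FakeAdapter("simple_server.py")
print(adapter.get_tool_names())  # → ['add', 'get_weather']
```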

MCPToolWrapper

The MCPToolWrapper class extends LangChain's BaseTool to wrap MCP tools:

MCPToolWrapper(
    name: str,
    description: str,
    server_script_path: str,
    env: Optional[Dict[str, str]] = None,
    args_schema: Optional[Type[BaseModel]] = None
)
  • name: Name of the tool
  • description: Description of the tool
  • server_script_path: Path to the MCP server script
  • env: Optional environment variables for the server process
  • args_schema: Optional Pydantic model for tool arguments

Utility Functions

  • get_langchain_tools(server_script_path: str, env: Optional[Dict[str, str]] = None) -> List[BaseTool]: Convenience function to get LangChain tools from an MCP server

Examples

Basic Usage

Here's a complete example of how to use the adapter:

from mcp_langchain_adapter import MCPAdapter

# Create an adapter instance
adapter = MCPAdapter("simple_server.py")

# Get all tools
tools = adapter.get_tools()

# Print information about the tools
print(f"Found {len(tools)} tools:")
for tool in tools:
    print(f"- {tool.name}: {tool.description}")

# Use a specific tool
add_tool = adapter.get_tool_by_name("add")
if add_tool:
    result = add_tool.run({"a": 5, "b": 7})
    print(f"Result of add(5, 7): {result}")

# Use another tool
weather_tool = adapter.get_tool_by_name("get_weather")
if weather_tool:
    result = weather_tool.run({"city": "London"})
    print(f"Result of get_weather('London'): {result}")

Integration with LangChain Agents

For a full example of integrating with LangChain agents, see the example_agent_integration.py file.

Key features:

  • Connects to an MCP server
  • Retrieves available tools
  • Creates a LangChain agent with the tools
  • Executes the agent with different types of queries

Integration with LangGraph Agents

For a full example of integrating with LangGraph agents, see the example_langgraph_integration.py file.

Key features:

  • Connects to an MCP server
  • Retrieves available tools
  • Creates a LangGraph react agent with the tools
  • Manages conversation history with checkpointing
  • Executes the agent with different types of queries
  • Shows how to stream the agent's thinking process

Troubleshooting

Common Issues

  1. MCP Server Connection Issues

    • Make sure the path to your MCP server script is correct
    • Check that the server script has proper permissions to run
    • Ensure the server script is properly implementing the MCP protocol
  2. Tool Execution Errors

    • Check the tool input format is correct
    • Ensure the tool is properly defined in the MCP server
    • Look for error messages in the tool response
  3. LangChain/LangGraph Integration Issues

    • Verify that the tools are properly converted to LangChain format
    • Check that the agent is configured correctly
    • Ensure you're passing the right format of input to the agent
    • For LangGraph issues, check the thread IDs and memory configuration
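
For tool execution errors in particular, checking the input against the tool's argument schema before calling it catches most format problems early. MCP tools publish a JSON Schema for their arguments; the helper below validates against a minimal subset of that format and is illustrative only, not part of the adapter:

```python
from typing import Any, Dict, List

# Minimal mapping from JSON Schema type names to Python types
TYPE_MAP = {"integer": int, "number": (int, float), "string": str, "boolean": bool}

def check_tool_input(schema: Dict[str, Any], payload: Dict[str, Any]) -> List[str]:
    """Return a list of problems with payload, per a minimal JSON-Schema-like spec."""
    problems = []
    props = schema.get("properties", {})
    for field in schema.get("required", []):
        if field not in payload:
            problems.append(f"missing required field: {field}")
    for field, value in payload.items():
        expected = props.get(field, {}).get("type")
        if expected and not isinstance(value, TYPE_MAP.get(expected, object)):
            problems.append(f"{field}: expected {expected}, got {type(value).__name__}")
    return problems

add_schema = {
    "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
    "required": ["a", "b"],
}
print(check_tool_input(add_schema, {"a": 5, "b": 7}))        # → []
print(check_tool_input(add_schema, {"a": 5, "b": "seven"}))  # → ['b: expected integer, got str']
```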

Debugging

To debug connection issues, you can add logging to your MCP server script:

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("mcp_server.log"),
        logging.StreamHandler()
    ]
)

# Rest of your MCP server code...

Contributing

Contributions to improve the adapter are welcome! Here are some ways you can contribute:

  • Report bugs and issues
  • Add new features or improve existing ones
  • Improve documentation
  • Write tests
  • Share examples of integration with different LangChain/LangGraph components

Please follow these steps to contribute:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request