
Documentation Retrieval MCP Server (DOCRET)

Created by Sreedeep-SS
A Model Context Protocol (MCP) server

This project implements a Model Context Protocol (MCP) server that enables AI assistants to access up-to-date documentation for various Python libraries, including LangChain, LlamaIndex, and OpenAI. By leveraging this server, AI assistants can dynamically fetch and provide relevant information from official documentation sources. The goal is to ensure that AI applications always have access to the latest official documentation.

What is an MCP Server?

The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.

Features

  • Dynamic Documentation Retrieval: Fetches the latest documentation content for specified Python libraries.
  • Asynchronous Web Searches: Utilizes the SERPER API to perform efficient web searches within targeted documentation sites.
  • HTML Parsing: Employs BeautifulSoup to extract readable text from HTML content.
  • Extensible Design: Easily add support for additional libraries by updating the configuration.
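The HTML-parsing feature can be illustrated with a stdlib-only sketch. The project itself uses BeautifulSoup; the `html.parser`-based `TextExtractor` below is just an approximation of what a `get_text`-style extraction does (skipping script and style content):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> blocks."""

    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Only keep text that is outside skipped tags and non-empty.
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def extract_text(html: str) -> str:
    """Return the readable text of an HTML document as one string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

For example, `extract_text("<p>Hello <b>world</b></p>")` yields `"Hello world"`.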

Prerequisites

  • Python 3.8 or higher
  • uv for Python package management (pip also works)
  • A Serper API key (used to run Google searches against documentation sites)
  • Claude Desktop or Claude Code (for testing)

Installation

1. Clone the Repository

git clone https://github.com/Sreedeep-SS/docret-mcp-server.git
cd docret-mcp-server

2. Create and Activate a Virtual Environment

  • On macOS/Linux:

    python3 -m venv env
    source env/bin/activate
    
  • On Windows:

    python -m venv env
    .\env\Scripts\activate
    

3. Install Dependencies

With the virtual environment activated, install the required dependencies:

pip install -r requirements.txt

or if you are using uv:

uv sync

Set Up Environment Variables

Before running the application, configure the required environment variables. This project uses the SERPER API for searching documentation and requires an API key.

  1. Create a .env file in the root directory of the project.

  2. Add the following environment variable:

    SERPER_API_KEY=your_serper_api_key_here
    

Replace your_serper_api_key_here with your actual API key.
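At startup, the server can read this variable from the environment (python-dotenv's `load_dotenv` is the usual way to pull a `.env` file into `os.environ`). A small fail-fast helper, whose name is illustrative rather than taken from `main.py`:

```python
import os


def get_serper_key() -> str:
    """Read the Serper API key from the environment, failing fast if unset."""
    key = os.getenv("SERPER_API_KEY")
    if not key:
        raise RuntimeError("SERPER_API_KEY is not set; add it to your .env file")
    return key
```

Failing fast here gives a clear error message instead of a confusing failure deep inside the first search request.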

Running the MCP Server

Once the dependencies are installed and environment variables are set up, you can start the MCP server.

python main.py

This will launch the server and make it ready to handle requests.

Usage

The MCP server provides an API to fetch documentation content from supported libraries. It works by querying the SERPER API for relevant documentation links and scraping the page content.

Searching Documentation

To search for documentation on a specific topic within a library, use the get_docs function. This function takes two parameters:

  • query: The topic to search for (e.g., "Chroma DB")
  • library: The name of the library (e.g., "langchain")

Example usage:

from main import get_docs

result = await get_docs("memory management", "openai")
print(result)

This will return the extracted text from the relevant OpenAI documentation pages.
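Because `get_docs` is a coroutine, the `await` in the snippet above only works inside another `async` function. From a plain synchronous script, wrap the call with `asyncio.run`. A self-contained sketch, using a stand-in coroutine in place of the real `get_docs` from `main.py`:

```python
import asyncio


async def get_docs(query: str, library: str) -> str:
    # Stand-in for the real coroutine in main.py, which searches and
    # scrapes the library's documentation site.
    return f"docs for {query!r} in {library}"


def main():
    # asyncio.run creates an event loop, runs the coroutine, and closes it.
    result = asyncio.run(get_docs("memory management", "openai"))
    print(result)


if __name__ == "__main__":
    main()
```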

Integrating with AI Assistants

You can integrate this MCP server with AI assistants like Claude or custom-built AI models. To configure the assistant to interact with the server, use the following configuration:

{
  "servers": [
    {
      "name": "Documentation Retrieval Server",
      "command": "python /path/to/main.py"
    }
  ]
}

Ensure that the correct path to main.py is specified.
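If you are registering the server with Claude Desktop specifically, note that its `claude_desktop_config.json` uses a slightly different shape: servers live under an `mcpServers` object keyed by name, with the command and its arguments split apart. The equivalent entry would look like:

```json
{
  "mcpServers": {
    "documentation-retrieval": {
      "command": "python",
      "args": ["/path/to/main.py"]
    }
  }
}
```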

Extending the MCP Server

The server currently supports the following libraries:

  • LangChain
  • LlamaIndex
  • OpenAI

To add support for additional libraries, update the docs_urls dictionary in main.py with the library name and its documentation URL:

docs_urls = {
    "langchain": "python.langchain.com/docs",
    "llama-index": "docs.llamaindex.ai/en/stable",
    "openai": "platform.openai.com/docs",
    "new-library": "new-library-docs-url.com",
}
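One plausible way the library name feeds into the search (the exact query format in `main.py` may differ) is to restrict the Serper query to the configured documentation site with a `site:` operator:

```python
docs_urls = {
    "langchain": "python.langchain.com/docs",
    "llama-index": "docs.llamaindex.ai/en/stable",
    "openai": "platform.openai.com/docs",
}


def build_search_query(query: str, library: str) -> str:
    """Build a site-restricted query string for the Serper API."""
    if library not in docs_urls:
        raise ValueError(f"Unsupported library: {library}")
    return f"site:{docs_urls[library]} {query}"
```

With this scheme, adding a new library really is just one new dictionary entry.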

📌 Roadmap

This project is genuinely exciting for me, and I'm looking forward to building more on it while keeping up with new ideas worth implementing.

This is what I have on my mind:

  1. Add support for more libraries (e.g., Hugging Face, PyTorch)

    • Expand the docs_urls dictionary with additional libraries.
    • Modify the get_docs function to handle different formats of documentation pages.
    • Use regex-based or AI-powered parsing to better extract meaningful content.
    • Provide an API endpoint to dynamically add new libraries.
  2. Implement caching to reduce redundant API calls

    • Use Redis or an in-memory caching mechanism like functools.lru_cache.

    • Implement time-based cache invalidation.

    • Cache results per library and per search term.

  3. Optimize web scraping with AI-powered summarization

    • Use GPT-4, BART, or T5 for summarizing scraped documentation.
    • Claude 3 Haiku, Gemini 1.5 Pro, GPT-4-mini, Open-mistral-nemo, Hugging Face models, and many more could also be used; the choice is open to debate.
    • Let users choose between raw documentation text and a summarized version.
  4. Introduce a REST API for external integrations

    • Use FastAPI to expose API endpoints. (Just because)

    • Build a simple frontend dashboard for API interaction. (Why not?)

  5. Add unit tests for better reliability

    • Use pytest and unittest for API and scraping reliability tests. (Last thing we want is this thing turning into a nuclear bomb)
    • Implement CI/CD workflows to automatically run tests on every push. (The bread and butter of course)
  6. More MCP tools that can be useful during development

    • Database Integrations
    • Google Docs/Sheets/Drive Integration
    • File System Operations
    • Git Integration
    • Integrations with communication platforms, to help turn ideas into products
    • Docker and Kubernetes management
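The caching idea in item 2 can be sketched with nothing beyond the stdlib: a tiny time-based cache keyed by the positional arguments, so results are stored per library and per search term. The names are illustrative, and the real `get_docs` being async would need an async-aware variant of this decorator:

```python
import time
from functools import wraps


def ttl_cache(ttl_seconds: float):
    """Memoize a function's results, expiring entries after ttl_seconds."""

    def decorator(fn):
        store = {}  # args tuple -> (timestamp, value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]  # still fresh: return the cached value
            value = fn(*args)
            store[args] = (now, value)
            return value

        return wrapper

    return decorator


@ttl_cache(ttl_seconds=300)
def search_docs(query: str, library: str) -> str:
    # Stand-in for the real Serper call; only runs on a cache miss.
    return f"results for {query!r} in {library}"
```

Unlike `functools.lru_cache`, entries here expire on their own, so stale documentation is refetched after the TTL elapses.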

References

For more details on MCP servers and their implementation, refer to the official Model Context Protocol documentation.

License

This project is licensed under the MIT License. See the LICENSE file for more details.
