# mcp_llm_inferencer
## Introduction
mcp_llm_inferencer is an open-source library that leverages Large Language Models (LLMs) such as Claude and OpenAI's GPT to convert prompt-mapped inputs into concrete components for MCP servers. These components include tools, resource templates, and prompt handlers, making the library useful to developers working in MCP server environments.
## Features
- LLM Call Engine: Efficiently calls LLMs with built-in retry and fallback logic to ensure reliable responses (the general pattern is sketched after this list).
- Interchangeable Claude & OpenAI Support: Seamlessly switch between Claude and OpenAI APIs based on your preference or availability.
- Streaming Support for Claude Desktop: Stream responses directly from Claude Desktop, providing real-time feedback.
- Tool and Resource Response Validation: Ensures that the generated tools and resources meet predefined criteria before deployment.
- Structured Output Bundling: Organizes output into structured bundles per component, simplifying integration and use.
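The retry-and-fallback behavior is internal to the library, but the general pattern is worth illustrating. The sketch below is a minimal, standalone version of that pattern; the helper names (`call_claude`, `call_openai`) are hypothetical placeholders, not part of the mcp_llm_inferencer API:

```python
import time

def call_with_fallback(prompt, primary, fallback, max_retries=3):
    """Try the primary LLM a few times with exponential backoff,
    then fall back to the secondary provider."""
    for attempt in range(max_retries):
        try:
            return primary(prompt)
        except Exception:
            # Back off before retrying: 1s, 2s, 4s, ...
            time.sleep(2 ** attempt)
    # All retries against the primary failed; use the fallback provider.
    return fallback(prompt)

# Hypothetical usage (call_claude and call_openai are placeholders):
# components = call_with_fallback(prompt, call_claude, call_openai)
```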
## Installation Instructions
### Prerequisites
- Python 3.6 or higher
- An API key from Claude or OpenAI
### Installing mcp_llm_inferencer
1. Clone the repository:

   ```bash
   git clone https://github.com/your-repo/mcp_llm_inferencer.git
   cd mcp_llm_inferencer
   ```

2. Install the package using pip:

   ```bash
   pip install .
   ```

3. Set up your API keys as environment variables:

   - For Claude:

     ```bash
     export CLAUDE_API_KEY='your-claude-api-key'
     ```

   - For OpenAI:

     ```bash
     export OPENAI_API_KEY='your-openai-api-key'
     ```
## Usage Examples
### Basic Example
Here is a simple example demonstrating how to use mcp_llm_inferencer with the OpenAI API:
```python
from mcp_llm_inferencer import MCPInferencer

# Initialize the inferencer with the OpenAI API
inferencer = MCPInferencer(api_type='openai')

# Define your prompt
prompt = "Generate a tool to extract emails from text."

# Generate components
components = inferencer.generate_components(prompt)
print(components)
```
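The exact shape of the returned bundle is not fixed here; assuming the structured output bundling described under Features, the result might look roughly like the following (all keys and values are illustrative, not a guaranteed schema):

```python
# Hypothetical component bundle -- field names are illustrative only.
{
    "tools": [
        {
            "name": "extract_emails",
            "description": "Extract email addresses from free-form text.",
            "input_schema": {"text": "string"},
        }
    ],
    "resource_templates": [],
    "prompt_handlers": [],
}
```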
### Advanced Example with Claude
This example shows how to use the library with the Claude API and handle streaming responses:

```python
from mcp_llm_inferencer import MCPInferencer

# Initialize the inferencer with the Claude API and streaming enabled
inferencer = MCPInferencer(api_type='claude', stream=True)

# Define your prompt
prompt = "Create a resource template for an S3 bucket."

# Generate components with streaming support
for component in inferencer.generate_components(prompt):
    print(component)
```
## API Documentation
### Class: `MCPInferencer`
#### Initialization
```python
MCPInferencer(api_type, api_key=None, stream=False)
```
- api_type (str): The type of LLM API to use ('claude' or 'openai').
- api_key (str, optional): The API key for the specified LLM. If not provided, it will attempt to read from environment variables.
- stream (bool, optional): Enable streaming support for Claude Desktop.
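For example, to pass a key explicitly instead of relying on environment variables (the key value below is a placeholder):

```python
from mcp_llm_inferencer import MCPInferencer

# An explicit key is used directly; otherwise OPENAI_API_KEY is read
# from the environment.
inferencer = MCPInferencer(api_type='openai', api_key='your-openai-api-key')
```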
#### Methods
- `generate_components(prompt)`
  - Generates MCP server components based on the given prompt.
  - prompt (str): The input prompt to send to the LLM.
  - Returns: a dictionary of generated components, or a stream of dictionaries when the inferencer was created with `stream=True`.
Example method usage:

```python
components = inferencer.generate_components("Generate a tool for sentiment analysis.")
```
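Because the return type depends on how the inferencer was constructed, callers should match their handling to the `stream` flag. A minimal sketch of both modes, using only the calls documented above:

```python
from mcp_llm_inferencer import MCPInferencer

# Non-streaming: generate_components returns a single bundle (dict).
inferencer = MCPInferencer(api_type='openai')
components = inferencer.generate_components("Generate a tool for sentiment analysis.")
print(components)

# Streaming (Claude): generate_components yields bundles incrementally.
streaming = MCPInferencer(api_type='claude', stream=True)
for component in streaming.generate_components("Generate a tool for sentiment analysis."):
    print(component)
```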
## License
mcp_llm_inferencer is released under the MIT License. See the LICENSE file for more details.
Feel free to contribute to mcp_llm_inferencer by submitting issues or pull requests on our GitHub repository.
## ⚠️ Development Status
This library is currently in early development, and some tests may be failing. Contributions to fix these issues are welcome! Please submit a pull request if you have a solution.