
Agentic MCP Client

A standalone agent runner that executes tasks using MCP (Model Context Protocol) tools via the Anthropic Claude, Amazon Bedrock, and OpenAI APIs. It enables AI agents to run autonomously in cloud environments and interact with various systems securely.

Current Features

  • Includes a basic agent dashboard
  • Run standalone agents with tasks defined in JSON configuration files
  • Support for both Anthropic Claude and OpenAI models
  • Session logging for tracking agent progress

Run the Web Dashboard

cd dashboard
npm i
npm run dev

Dashboard URL: http://localhost:3000
API Documentation: http://localhost:3000/api-docs

https://github.com/user-attachments/assets/c98be6d2-0096-40f2-bd78-d3fb256fec83

Installation

  1. Clone the repository

  2. Set up dependencies:

uv sync

  3. Create an agent_worker_task.json file

Here is an example configuration file:

{
    "task": "Find all image files in the current directory and tell me their sizes",
    "model": "claude-3-7-sonnet-20250219",
    "system_prompt": "You are a helpful assistant that completes tasks using available tools.",
    "verbose": true,
    "max_iterations": 10
}

  4. Run the agent:

uv run agentic_mcp_client/agent_worker/run.py
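
The agent worker reads this task file at startup. As a rough illustration of how the fields above might be consumed, here is a minimal, hypothetical loader (the field names come from the example; the function and its defaults are assumptions, not the project's actual code):

import json
from pathlib import Path

def load_task_config(path: str = "agent_worker_task.json") -> dict:
    """Hypothetical loader for the task file shown above."""
    config = json.loads(Path(path).read_text())

    # "task" is the only field the example strictly requires.
    if "task" not in config:
        raise ValueError("agent_worker_task.json must define a 'task'")

    # Fall back to sensible defaults for the optional fields.
    config.setdefault("model", "claude-3-7-sonnet-20250219")
    config.setdefault("system_prompt", "You are a helpful assistant that completes tasks using available tools.")
    config.setdefault("verbose", False)
    config.setdefault("max_iterations", 10)
    return config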

Configuration

The project requires a config.json file in the root directory to define the inference server settings and available MCP tools. Here's an example configuration:

{
   "inference_server": {
      "base_url": "https://api.anthropic.com/v1/",
      "api_key": "YOUR_API_KEY_HERE",
      "use_bedrock": true,
      "aws_region": "us-east-1",
      "aws_access_key_id": "YOUR_AWS_ACCESS_KEY",
      "aws_secret_access_key": "YOUR_AWS_SECRET_KEY"
   },
   "mcp_servers": {
    "mcp-remote-macos-use": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "-e",
        "MACOS_USERNAME=your_username",
        "-e",
        "MACOS_PASSWORD=your_password",
        "-e",
        "MACOS_HOST=your_host_ip",
        "--rm",
        "buryhuang/mcp-remote-macos-use:latest"
      ]
    },
    "mcp-my-apple-remembers": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "-e",
        "MACOS_USERNAME=your_username",
        "-e",
        "MACOS_PASSWORD=your_password",
        "-e",
        "MACOS_HOST=your_host_ip",
        "--rm",
        "buryhuang/mcp-my-apple-remembers:latest"
      ]
    }
  }
}

Configuration Sections

Inference Server

The inference_server section configures the connection to your language model provider:

  • base_url: The API endpoint for your chosen LLM provider
  • api_key: Your authentication key for the LLM service
  • use_bedrock: Set to true to use Amazon Bedrock for model inference
  • aws_region, aws_access_key_id, aws_secret_access_key: AWS credentials and region (required only when use_bedrock is true)
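
To make the relationship between these fields concrete, the sketch below shows one way a client could be built from the inference_server block. The helper name and the specific SDK calls are assumptions for illustration, not necessarily how the project wires this up:

import json

import boto3                     # AWS SDK, used when use_bedrock is true
from anthropic import Anthropic  # Anthropic SDK, used for direct API access

def build_inference_client(config_path: str = "config.json"):
    """Hypothetical helper: pick a backend based on the inference_server block."""
    server = json.load(open(config_path))["inference_server"]

    if server.get("use_bedrock"):
        # Bedrock path: base_url/api_key are ignored; AWS credentials are used instead.
        return boto3.client(
            "bedrock-runtime",
            region_name=server["aws_region"],
            aws_access_key_id=server["aws_access_key_id"],
            aws_secret_access_key=server["aws_secret_access_key"],
        )

    # Direct API path: base_url and api_key are used.
    return Anthropic(base_url=server["base_url"], api_key=server["api_key"])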

MCP Servers

The mcp_servers section defines available MCP tools. Each tool has:

  • A unique identifier (e.g., "mcp-remote-macos-use")
  • command: The command to execute (typically Docker for containerized tools)
  • args: The arguments passed to that command (here, the docker run flags, environment variables, and image for each container)

This example shows MCP tools for remotely controlling a macOS system through Docker containers.
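
For reference, here is a minimal sketch of how one entry from mcp_servers could be launched and queried using the official MCP Python SDK (the mcp package). The agent worker's actual client code may differ:

import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_server_tools(server_name: str, config_path: str = "config.json"):
    """Spawn one configured MCP server over stdio and list the tools it exposes."""
    entry = json.load(open(config_path))["mcp_servers"][server_name]
    params = StdioServerParameters(command=entry["command"], args=entry["args"])

    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            return [tool.name for tool in tools.tools]

if __name__ == "__main__":
    print(asyncio.run(list_server_tools("mcp-remote-macos-use")))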

How MCP Works

The Model Context Protocol provides a standardized way for applications to:

  • Share contextual information with language models
  • Expose tools and capabilities to AI systems
  • Build composable integrations and workflows

The protocol uses JSON-RPC 2.0 messages to establish communication between hosts (LLM applications), clients (connectors within applications), and servers (services providing context and capabilities).
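
As a concrete illustration, a single tool invocation travels as a JSON-RPC 2.0 request/response pair shaped roughly like the following (shown here as Python dictionaries; the id, tool name, and arguments are made up for the example):

tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "remote_macos_get_screen",  # hypothetical tool name
        "arguments": {},
    },
}

tool_call_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "…tool output…"}],
        "isError": False,
    },
}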

Our agent worker implements this workflow:

  1. Initialize MCP clients for all available tools
  2. Send the initial task message to the selected model
  3. Process model responses (either tool calls or text)
  4. If a tool call is made, execute the tool and send the result back to the model
  5. Repeat until the task is completed or the maximum number of iterations is reached
  6. Shut down all MCP clients

The sequence diagram below illustrates this loop; a rough code sketch follows it.

sequenceDiagram
    participant User
    participant AgentWorker
    participant LLM as Language Model
    participant MCP as MCP Tools

    User->>AgentWorker: Task + Configuration
    AgentWorker->>MCP: Initialize Tools
    AgentWorker->>LLM: Send Task
    loop Until completion
        LLM->>AgentWorker: Request Tool Use
        AgentWorker->>MCP: Execute Tool
        MCP->>AgentWorker: Tool Result
        AgentWorker->>LLM: Send Tool Result
        LLM->>AgentWorker: Response
    end
    AgentWorker->>User: Final Result
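
A stripped-down version of this loop, calling the Anthropic Messages API directly, might look roughly like the sketch below. The call_mcp_tool helper and the surrounding wiring are assumptions; the actual agent worker adds session logging, multi-provider support (OpenAI, Bedrock), and error handling on top of this pattern:

import anthropic

def run_agent_loop(task, tools, call_mcp_tool, model, system_prompt, max_iterations=10):
    """Minimal sketch of the iterate-until-done loop described above.

    tools:         tool schemas gathered from the MCP servers
    call_mcp_tool: assumed helper that routes a tool call to the right MCP server
    """
    client = anthropic.Anthropic()
    messages = [{"role": "user", "content": task}]

    for _ in range(max_iterations):
        response = client.messages.create(
            model=model,
            max_tokens=4096,
            system=system_prompt,
            messages=messages,
            tools=tools,
        )
        messages.append({"role": "assistant", "content": response.content})

        if response.stop_reason != "tool_use":
            # No more tool calls: the text blocks contain the final answer.
            return "".join(b.text for b in response.content if b.type == "text")

        # Execute each requested tool and feed the results back to the model.
        results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": call_mcp_tool(block.name, block.input),
            }
            for block in response.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})

    return "Stopped: maximum iterations reached."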

Contribution Guidelines

Contributions to Agentic MCP Client are welcome! To contribute, please follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your fork.
  5. Create a pull request to the main repository.

Acknowledgments

This project was inspired by and builds upon the work of excellent open-source projects in the MCP ecosystem:

  • MCP-Bridge - A middleware that provides an OpenAI-compatible endpoint for calling MCP tools, which helped inform our approach to tool integration and standardization.

We are grateful to the contributors of these projects for their pioneering work in the MCP space, which has helped make autonomous agent development more accessible and powerful.

License

Agentic MCP Client is licensed under the Apache 2.0 License. See the LICENSE file for more information.
