
MCP Agent Kit

Created by dominiquekossi, a month ago
A complete and intuitive SDK for building MCP servers, MCP agents, and LLM integrations (OpenAI, Claude, Gemini) with minimal effort. It abstracts the complexity of the MCP protocol, provides an intelligent agent with automatic model routing, and includes a universal client for external APIs, all through a single, simple, and powerful interface. Perfect for chatbots, enterprise automation, internal system integrations, and rapid development of MCP-based ecosystems.

mcp-agent-kit

The easiest way to create MCP servers, AI agents, and chatbots with any LLM


mcp-agent-kit is a TypeScript package that simplifies the creation of:

  • 🔌 MCP Servers (Model Context Protocol)
  • 🤖 AI Agents with multiple LLM providers
  • 🧠 Intelligent Routers for multi-LLM orchestration
  • 💬 Chatbots with conversation memory
  • 🌐 API Helpers with retry and timeout

Features

  • Zero Config: Works out of the box with smart defaults
  • Multi-Provider: OpenAI, Anthropic, Gemini, Ollama support
  • Type-Safe: Full TypeScript support with autocomplete
  • Production Ready: Built-in retry, timeout, and error handling
  • Developer Friendly: One-line setup for complex features
  • Extensible: Easy to add custom providers and middleware

Installation

npm install mcp-agent-kit

Quick Start

Create an AI Agent (1 line!)

import { createAgent } from "mcp-agent-kit";

const agent = createAgent({ provider: "openai" });
const response = await agent.chat("Hello!");
console.log(response.content);

Create an MCP Server (1 function!)

import { createMCPServer } from "mcp-agent-kit";

const server = createMCPServer({
  name: "my-server",
  tools: [
    {
      name: "get_weather",
      description: "Get weather for a location",
      inputSchema: {
        type: "object",
        properties: {
          location: { type: "string" },
        },
      },
      handler: async ({ location }) => {
        return `Weather in ${location}: Sunny, 72°F`;
      },
    },
  ],
});

await server.start();

Create a Chatbot with Memory

import { createChatbot, createAgent } from "mcp-agent-kit";

const bot = createChatbot({
  agent: createAgent({ provider: "openai" }),
  system: "You are a helpful assistant",
  maxHistory: 10,
});

await bot.chat("Hi, my name is John");
await bot.chat("What is my name?"); // Remembers context!

Documentation

Table of Contents


AI Agents

Create intelligent agents that work with multiple LLM providers.

Basic Usage

import { createAgent } from "mcp-agent-kit";

const agent = createAgent({
  provider: "openai",
  model: "gpt-4-turbo-preview",
  temperature: 0.7,
  maxTokens: 2000,
});

const response = await agent.chat("Explain TypeScript");
console.log(response.content);

Supported Providers

Provider    Models                API Key Required
OpenAI      GPT-4, GPT-3.5        ✅ Yes
Anthropic   Claude 3.5, Claude 3  ✅ Yes
Gemini      Gemini 2.0+           ✅ Yes
Ollama      Local models          ❌ No

With Tools (Function Calling)

const agent = createAgent({
  provider: "openai",
  tools: [
    {
      name: "calculate",
      description: "Perform calculations",
      parameters: {
        type: "object",
        properties: {
          operation: { type: "string", enum: ["add", "subtract"] },
          a: { type: "number" },
          b: { type: "number" },
        },
        required: ["operation", "a", "b"],
      },
      handler: async ({ operation, a, b }) => {
        return operation === "add" ? a + b : a - b;
      },
    },
  ],
});

const response = await agent.chat("What is 15 + 27?");
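When the model requests a tool call, the kit matches it to the registered handler by name and invokes it with the parsed arguments. A simplified sketch of that dispatch step (illustrative only; `dispatchTool` and the `Tool` shape here are hypothetical names, not the package's internals):

```typescript
// Hypothetical, simplified model of tool-call dispatch.
interface Tool {
  name: string;
  handler: (args: Record<string, unknown>) => Promise<unknown>;
}

// Find the tool the model asked for and run its handler.
async function dispatchTool(
  tools: Tool[],
  name: string,
  args: Record<string, unknown>
): Promise<unknown> {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler(args);
}
```

The tool's result is then sent back to the model, which uses it to produce the final answer.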

With System Prompt

const agent = createAgent({
  provider: "anthropic",
  system: "You are an expert Python developer. Always provide code examples.",
});

MCP Servers

Create Model Context Protocol servers to expose tools and resources.

Basic MCP Server

import { createMCPServer } from "mcp-agent-kit";

const server = createMCPServer({
  name: "my-mcp-server",
  port: 7777,
  logLevel: "info",
});

await server.start(); // Starts on stdio by default

With Tools

const server = createMCPServer({
  name: "weather-server",
  tools: [
    {
      name: "get_weather",
      description: "Get current weather",
      inputSchema: {
        type: "object",
        properties: {
          location: { type: "string" },
          units: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
      handler: async ({ location, units = "celsius" }) => {
        // Your weather API logic here
        return { location, temp: 22, units, condition: "Sunny" };
      },
    },
  ],
});

With Resources

const server = createMCPServer({
  name: "data-server",
  resources: [
    {
      uri: "config://app-settings",
      name: "Application Settings",
      description: "Current app configuration",
      mimeType: "application/json",
      handler: async () => {
        return JSON.stringify({ version: "1.0.0", env: "production" });
      },
    },
  ],
});

WebSocket Transport

const server = createMCPServer({
  name: "ws-server",
  port: 8080,
});

await server.start("websocket"); // Use WebSocket instead of stdio

LLM Router

Route requests to different LLMs based on intelligent rules.

Basic Router

import { createLLMRouter } from "mcp-agent-kit";

const router = createLLMRouter({
  rules: [
    {
      when: (input) => input.length < 200,
      use: { provider: "openai", model: "gpt-4-turbo-preview" },
    },
    {
      when: (input) => input.includes("code"),
      use: { provider: "anthropic", model: "claude-3-5-sonnet-20241022" },
    },
    {
      default: true,
      use: { provider: "openai", model: "gpt-4-turbo-preview" },
    },
  ],
});

const response = await router.route("Write a function to sort an array");
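Rule evaluation is first-match: the router walks the rules in order, uses the first one whose `when` predicate returns true, and falls back to the rule marked `default`. A minimal self-contained sketch of that selection logic (an assumption about the behavior, not the package's internals):

```typescript
// Hypothetical, simplified model of first-match rule selection.
interface Target {
  provider: string;
  model: string;
}

interface Rule {
  when?: (input: string) => boolean;
  default?: boolean;
  use: Target;
}

function selectTarget(rules: Rule[], input: string): Target {
  // First rule whose predicate matches wins.
  for (const rule of rules) {
    if (rule.when && rule.when(input)) return rule.use;
  }
  // Otherwise fall back to the rule marked default.
  const fallback = rules.find((r) => r.default);
  if (!fallback) throw new Error("No matching rule and no default rule");
  return fallback.use;
}
```

Because matching is ordered, put the most specific predicates first: a broad rule such as `input.length < 200` placed first will shadow later rules for most inputs.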

With Fallback and Retry

const router = createLLMRouter({
  rules: [...],
  fallback: {
    provider: 'openai',
    model: 'gpt-4-turbo-preview'
  },
  retryAttempts: 3,
  logLevel: 'debug'
});

Router Statistics

const stats = router.getStats();
console.log(stats);
// { totalRules: 3, totalAgents: 2, hasFallback: true }

const agents = router.listAgents();
console.log(agents);
// ['openai:gpt-4-turbo-preview', 'anthropic:claude-3-5-sonnet-20241022']

Chatbots

Create conversational AI with automatic memory management.

Basic Chatbot

import { createChatbot, createAgent } from "mcp-agent-kit";

const bot = createChatbot({
  agent: createAgent({ provider: "openai" }),
  system: "You are a helpful assistant",
  maxHistory: 10,
});

await bot.chat("Hi, I am learning TypeScript");
await bot.chat("Can you help me with interfaces?");
await bot.chat("Thanks!");

With Router

const bot = createChatbot({
  router: createLLMRouter({ rules: [...] }),
  maxHistory: 20
});

Memory Management

// Get conversation history
const history = bot.getHistory();

// Get statistics
const stats = bot.getStats();
console.log(stats);
// {
//   messageCount: 6,
//   userMessages: 3,
//   assistantMessages: 3,
//   oldestMessage: Date,
//   newestMessage: Date
// }

// Reset conversation
bot.reset();

// Update system prompt
bot.setSystemPrompt("You are now a Python expert");
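Under the hood, `maxHistory`-style trimming amounts to keeping only the most recent messages so the context window stays bounded. A minimal sketch of that idea (illustrative only; `Message` and `trimHistory` are hypothetical names, not the package's API):

```typescript
// Hypothetical sketch of bounded conversation history.
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Keep only the newest `max` messages, dropping the oldest first.
function trimHistory(history: Message[], max: number): Message[] {
  return history.length <= max ? history : history.slice(history.length - max);
}
```

With `maxHistory: 10`, only the ten most recent messages would be sent to the model on each turn; older context is forgotten.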

API Requests

Simplified HTTP requests with automatic retry and timeout.

Basic Request

import { api } from "mcp-agent-kit";

const response = await api.get("https://api.example.com/data");
console.log(response.data);

POST Request

const response = await api.post(
  "https://api.example.com/users",
  { name: "John", email: "john@example.com" },
  {
    name: "create-user",
    headers: { "Content-Type": "application/json" },
  }
);

With Retry and Timeout

const response = await api.request({
  name: "important-request",
  url: "https://api.example.com/data",
  method: "GET",
  timeout: 10000, // 10 seconds
  retries: 5, // 5 attempts
  query: { page: 1, limit: 10 },
});
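Conceptually, retry-with-timeout wraps each attempt in a deadline and retries on any failure until the attempt budget is exhausted. A generic, self-contained sketch of that pattern (an assumption about the approach; `withRetry` is a hypothetical helper, not the package's implementation):

```typescript
// Hypothetical sketch: retry an async operation up to `retries` times,
// treating any attempt that exceeds `timeoutMs` as a failure.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries: number,
  timeoutMs: number
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      // Race the attempt against a timeout rejection.
      return await Promise.race([
        fn(),
        new Promise<never>((_, reject) =>
          setTimeout(() => reject(new Error("timeout")), timeoutMs)
        ),
      ]);
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}
```

A production version would typically add backoff between attempts and abort the in-flight request on timeout rather than merely abandoning it.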

All HTTP Methods

await api.get(url, config);
await api.post(url, body, config);
await api.put(url, body, config);
await api.patch(url, body, config);
await api.delete(url, config);

Configuration

Environment Variables

All configuration is optional. Set these environment variables or pass them in code:

# MCP Server
MCP_SERVER_NAME=my-server
MCP_PORT=7777

# Logging
LOG_LEVEL=info  # debug | info | warn | error

# LLM API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
OLLAMA_HOST=http://localhost:11434

Using .env File

# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
LOG_LEVEL=debug

The package automatically loads .env files using dotenv.


Examples

Check out the /examples directory for complete working examples:

  • basic-agent.ts - Simple agent usage
  • mcp-server.ts - MCP server with tools and resources
  • mcp-server-websocket.ts - MCP server with WebSocket
  • llm-router.ts - Intelligent routing between LLMs
  • chatbot-basic.ts - Chatbot with conversation memory
  • chatbot-with-router.ts - Chatbot using router
  • api-requests.ts - HTTP requests with retry

Running Examples

# Install dependencies
npm install

# Run an example
npx ts-node examples/basic-agent.ts

Advanced Usage

Custom Provider

// Coming soon: Plugin system for custom providers

Middleware

// Coming soon: Middleware support for request/response processing

Streaming Responses

// Coming soon: Streaming support for real-time responses

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

MIT © Dominique Kossi


Acknowledgments

  • Built with TypeScript
  • Uses MCP SDK
  • Powered by OpenAI, Anthropic, Google, and Ollama


Made by developers, for developers

Server Config

{
  "mcpServers": {
    "mcp-agent-kit": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-agent-kit"
      ],
      "env": {
        "MCP_SERVER_NAME": "my-server",
        "MCP_PORT": "7777",
        "LOG_LEVEL": "info",
        "OPENAI_API_KEY": "sk-...",
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "GEMINI_API_KEY": "...",
        "OLLAMA_HOST": "http://localhost:11434"
      }
    }
  }
}