# mcp-agent-kit
The easiest way to create MCP servers, AI agents, and chatbots with any LLM
mcp-agent-kit is a TypeScript package that simplifies the creation of:
- 🔌 MCP Servers (Model Context Protocol)
- 🤖 AI Agents with multiple LLM providers
- 🧠 Intelligent Routers for multi-LLM orchestration
- 💬 Chatbots with conversation memory
- 🌐 API Helpers with retry and timeout
## Features
- Zero Config: Works out of the box with smart defaults
- Multi-Provider: OpenAI, Anthropic, Gemini, Ollama support
- Type-Safe: Full TypeScript support with autocomplete
- Production Ready: Built-in retry, timeout, and error handling
- Developer Friendly: One-line setup for complex features
- Extensible: Easy to add custom providers and middleware
## Installation

```bash
npm install mcp-agent-kit
```
## Quick Start

### Create an AI Agent (1 line!)

```typescript
import { createAgent } from "mcp-agent-kit";

const agent = createAgent({ provider: "openai" });
const response = await agent.chat("Hello!");
console.log(response.content);
```
### Create an MCP Server (1 function!)

```typescript
import { createMCPServer } from "mcp-agent-kit";

const server = createMCPServer({
  name: "my-server",
  tools: [
    {
      name: "get_weather",
      description: "Get weather for a location",
      inputSchema: {
        type: "object",
        properties: {
          location: { type: "string" },
        },
      },
      handler: async ({ location }) => {
        return `Weather in ${location}: Sunny, 72°F`;
      },
    },
  ],
});

await server.start();
```
### Create a Chatbot with Memory

```typescript
import { createChatbot, createAgent } from "mcp-agent-kit";

const bot = createChatbot({
  agent: createAgent({ provider: "openai" }),
  system: "You are a helpful assistant",
  maxHistory: 10,
});

await bot.chat("Hi, my name is John");
await bot.chat("What is my name?"); // Remembers context!
```
## Documentation

### Table of Contents

- [AI Agents](#ai-agents)
- [MCP Servers](#mcp-servers)
- [LLM Router](#llm-router)
- [Chatbots](#chatbots)
- [API Requests](#api-requests)
- [Configuration](#configuration)
- [Examples](#examples)
- [Advanced Usage](#advanced-usage)
## AI Agents

Create intelligent agents that work with multiple LLM providers.

### Basic Usage

```typescript
import { createAgent } from "mcp-agent-kit";

const agent = createAgent({
  provider: "openai",
  model: "gpt-4-turbo-preview",
  temperature: 0.7,
  maxTokens: 2000,
});

const response = await agent.chat("Explain TypeScript");
console.log(response.content);
```
### Supported Providers
| Provider | Models | API Key Required |
|---|---|---|
| OpenAI | GPT-4, GPT-3.5 | ✅ Yes |
| Anthropic | Claude 3.5, Claude 3 | ✅ Yes |
| Gemini | Gemini 2.0+ | ✅ Yes |
| Ollama | Local models | ❌ No |
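Ollama runs models locally, so it is the quickest way to try the kit without an API key. A minimal sketch; the `llama3` model tag is an assumption, substitute any model you have pulled:

```typescript
import { createAgent } from "mcp-agent-kit";

// No API key required: Ollama serves models locally
// (default host http://localhost:11434, see OLLAMA_HOST below).
const local = createAgent({ provider: "ollama", model: "llama3" }); // "llama3" is a placeholder tag

const reply = await local.chat("Summarize the Model Context Protocol in one sentence.");
console.log(reply.content);
```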
### With Tools (Function Calling)

```typescript
const agent = createAgent({
  provider: "openai",
  tools: [
    {
      name: "calculate",
      description: "Perform calculations",
      parameters: {
        type: "object",
        properties: {
          operation: { type: "string", enum: ["add", "subtract"] },
          a: { type: "number" },
          b: { type: "number" },
        },
        required: ["operation", "a", "b"],
      },
      handler: async ({ operation, a, b }) => {
        return operation === "add" ? a + b : a - b;
      },
    },
  ],
});

const response = await agent.chat("What is 15 + 27?");
```
### With System Prompt

```typescript
const agent = createAgent({
  provider: "anthropic",
  system: "You are an expert Python developer. Always provide code examples.",
});
```
## MCP Servers

Create Model Context Protocol servers to expose tools and resources.

### Basic MCP Server

```typescript
import { createMCPServer } from "mcp-agent-kit";

const server = createMCPServer({
  name: "my-mcp-server",
  port: 7777,
  logLevel: "info",
});

await server.start(); // Starts on stdio by default
```
### With Tools

```typescript
const server = createMCPServer({
  name: "weather-server",
  tools: [
    {
      name: "get_weather",
      description: "Get current weather",
      inputSchema: {
        type: "object",
        properties: {
          location: { type: "string" },
          units: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
      handler: async ({ location, units = "celsius" }) => {
        // Your weather API logic here
        return { location, temp: 22, units, condition: "Sunny" };
      },
    },
  ],
});
```
### With Resources

```typescript
const server = createMCPServer({
  name: "data-server",
  resources: [
    {
      uri: "config://app-settings",
      name: "Application Settings",
      description: "Current app configuration",
      mimeType: "application/json",
      handler: async () => {
        return JSON.stringify({ version: "1.0.0", env: "production" });
      },
    },
  ],
});
```
### WebSocket Transport

```typescript
const server = createMCPServer({
  name: "ws-server",
  port: 8080,
});

await server.start("websocket"); // Use WebSocket instead of stdio
```
## LLM Router

Route requests to different LLMs based on configurable rules.
### Basic Router

```typescript
import { createLLMRouter } from "mcp-agent-kit";

const router = createLLMRouter({
  rules: [
    {
      when: (input) => input.length < 200,
      use: { provider: "openai", model: "gpt-4-turbo-preview" },
    },
    {
      when: (input) => input.includes("code"),
      use: { provider: "anthropic", model: "claude-3-5-sonnet-20241022" },
    },
    {
      default: true,
      use: { provider: "openai", model: "gpt-4-turbo-preview" },
    },
  ],
});

const response = await router.route("Write a function to sort an array");
```
### With Fallback and Retry

```typescript
const router = createLLMRouter({
  rules: [...],
  fallback: {
    provider: "openai",
    model: "gpt-4-turbo-preview",
  },
  retryAttempts: 3,
  logLevel: "debug",
});
```
### Router Statistics

```typescript
const stats = router.getStats();
console.log(stats);
// { totalRules: 3, totalAgents: 2, hasFallback: true }

const agents = router.listAgents();
console.log(agents);
// ['openai:gpt-4-turbo-preview', 'anthropic:claude-3-5-sonnet-20241022']
```
## Chatbots

Create conversational AI with automatic memory management.

### Basic Chatbot

```typescript
import { createChatbot, createAgent } from "mcp-agent-kit";

const bot = createChatbot({
  agent: createAgent({ provider: "openai" }),
  system: "You are a helpful assistant",
  maxHistory: 10,
});

await bot.chat("Hi, I am learning TypeScript");
await bot.chat("Can you help me with interfaces?");
await bot.chat("Thanks!");
```
### With Router

```typescript
const bot = createChatbot({
  router: createLLMRouter({ rules: [...] }),
  maxHistory: 20,
});
```
### Memory Management

```typescript
// Get conversation history
const history = bot.getHistory();

// Get statistics
const stats = bot.getStats();
console.log(stats);
// {
//   messageCount: 6,
//   userMessages: 3,
//   assistantMessages: 3,
//   oldestMessage: Date,
//   newestMessage: Date
// }

// Reset conversation
bot.reset();

// Update system prompt
bot.setSystemPrompt("You are now a Python expert");
```
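Because `getHistory()` exposes the full conversation, persisting a transcript takes only a few lines. A sketch continuing from the `bot` above, assuming history entries are plain serializable message objects:

```typescript
import { writeFileSync } from "node:fs";

// Assumption: getHistory() returns serializable objects (e.g. { role, content }).
const transcript = bot.getHistory();

// Save the conversation so it can be reviewed (or re-seeded) later.
writeFileSync("transcript.json", JSON.stringify(transcript, null, 2));
```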
## API Requests

Simplified HTTP requests with automatic retry and timeout.

### Basic Request

```typescript
import { api } from "mcp-agent-kit";

const response = await api.get("https://api.example.com/data");
console.log(response.data);
```
### POST Request

```typescript
const response = await api.post(
  "https://api.example.com/users",
  { name: "John", email: "john@example.com" },
  {
    name: "create-user",
    headers: { "Content-Type": "application/json" },
  }
);
```
### With Retry and Timeout

```typescript
const response = await api.request({
  name: "important-request",
  url: "https://api.example.com/data",
  method: "GET",
  timeout: 10000, // 10 seconds
  retries: 5, // 5 attempts
  query: { page: 1, limit: 10 },
});
```
### All HTTP Methods

```typescript
await api.get(url, config);
await api.post(url, body, config);
await api.put(url, body, config);
await api.patch(url, body, config);
await api.delete(url, config);
```
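If every retry fails, the error still surfaces to your code. A minimal sketch, assuming the helper throws once its retry budget is exhausted and that `api.get` accepts the same `timeout`/`retries` options as `api.request`:

```typescript
import { api } from "mcp-agent-kit";

try {
  const response = await api.get("https://api.example.com/data", {
    name: "data-fetch",
    timeout: 5000, // fail each attempt after 5 seconds
    retries: 3,    // three attempts before giving up
  });
  console.log(response.data);
} catch (error) {
  // Assumption: the final error is rethrown after all retries fail.
  console.error("Request failed after all retries:", error);
}
```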
## Configuration

### Environment Variables

All configuration is optional. Set these environment variables or pass them in code:

```bash
# MCP Server
MCP_SERVER_NAME=my-server
MCP_PORT=7777

# Logging
LOG_LEVEL=info # debug | info | warn | error

# LLM API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
OLLAMA_HOST=http://localhost:11434
```
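To configure in code instead, pass the equivalent options to the factory functions. A hedged sketch: `name`, `port`, and `logLevel` appear in the examples above, but the `apiKey` option name is an assumption, so check the package's TypeScript types for the exact field:

```typescript
import { createAgent, createMCPServer } from "mcp-agent-kit";

const agent = createAgent({
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY, // assumed option name, not confirmed by the docs above
});

const server = createMCPServer({
  name: "my-server", // overrides MCP_SERVER_NAME
  port: 7777,        // overrides MCP_PORT
  logLevel: "info",  // overrides LOG_LEVEL
});
```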
### Using a .env File

```bash
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
LOG_LEVEL=debug
```

The package automatically loads .env files using dotenv.
## Examples

Check out the /examples directory for complete working examples:

- `basic-agent.ts` - Simple agent usage
- `mcp-server.ts` - MCP server with tools and resources
- `mcp-server-websocket.ts` - MCP server with WebSocket
- `llm-router.ts` - Intelligent routing between LLMs
- `chatbot-basic.ts` - Chatbot with conversation memory
- `chatbot-with-router.ts` - Chatbot using router
- `api-requests.ts` - HTTP requests with retry
### Running Examples

```bash
# Install dependencies
npm install

# Run an example
npx ts-node examples/basic-agent.ts
```
## Advanced Usage

### Custom Provider

Coming soon: a plugin system for custom providers.

### Middleware

Coming soon: middleware support for request/response processing.

### Streaming Responses

Coming soon: streaming support for real-time responses.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
MIT © Dominique Kossi
## Acknowledgments
- Built with TypeScript
- Uses MCP SDK
- Powered by OpenAI, Anthropic, Google, and Ollama
## Support
- Email: houessoudominique@gmail.com
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Made by developers, for developers
## Server Config

```json
{
  "mcpServers": {
    "mcp-agent-kit": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-agent-kit"
      ],
      "env": {
        "MCP_SERVER_NAME": "my-server",
        "MCP_PORT": "7777",
        "LOG_LEVEL": "info",
        "OPENAI_API_KEY": "sk-...",
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "GEMINI_API_KEY": "...",
        "OLLAMA_HOST": "http://localhost:11434"
      }
    }
  }
}
```
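This is the standard `mcpServers` entry format used by MCP clients such as Claude Desktop (`claude_desktop_config.json`): the client launches the server with `npx -y mcp-agent-kit` and passes the `env` values to the process. Trim the `env` block to the providers you actually use.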