
Jentic SDK & MCP Plugin [Beta]

Jentic MCP empowers AI agent builders to discover and integrate external APIs and workflows rapidly—without writing or maintaining API-specific code.

This repository contains the core Jentic SDK and the Jentic MCP Plugin.

  • Jentic SDK: A comprehensive Python library for discovering and executing APIs and workflows, particularly for LLM tool use.
  • Jentic MCP Plugin: A plugin enabling agents (like Windsurf, Claude Desktop & Cursor) to discover and use Jentic capabilities via MCP.

See the respective README files for more details.

The Jentic SDK is backed by the data in the Open Agentic Knowledge (OAK) repository.

Getting Started

Get Your Jentic UUID

To use the Jentic SDK or MCP Plugin, you must first obtain a Jentic UUID. The easiest way is with the Jentic CLI. You can optionally include an email address for higher rate limits and early access to new features.

pip install jentic
jentic register --email '<your_email>'

This will print your UUID and an export command to set it in your environment:

export JENTIC_UUID=<your-jentic-uuid>

Alternatively, you can use curl to register and obtain your UUID:

curl -X POST https://api.jentic.com/api/v1/auth/register \
     -H "Content-Type: application/json" \
     -d '{"email": "<your_email>"}'

Jentic MCP Server

The quickest way to get started is to integrate the Jentic MCP plugin with your preferred MCP client (like Windsurf, Claude Desktop or Cursor).

The recommended method is to run the server directly from the GitHub repository using uvx. You will need to install uv first:

brew install uv   # or: pip install uv

Next, add the following configuration to your MCP client.

The location of the configuration file depends on the client you are using and your OS. Some common examples:

  • Windsurf: ~/.codeium/windsurf/mcp_config.json
  • Claude Desktop: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Claude Code: ~/.claude.json
  • Cursor: ~/.cursor/mcp.json

For other clients, check your client's documentation for how to add MCP servers.

{
    "mcpServers": {
        "jentic": {
            "command": "uvx",
            "args": [
                "--from",
                "git+https://github.com/jentic/jentic-tools.git@main#subdirectory=mcp",
                "mcp"
            ],
            "env": {
                "JENTIC_UUID": "<your-jentic-uuid>"
            }
        }
    }
}

Note: After saving the configuration file, you may need to restart the client application (Windsurf, Claude Desktop) for the changes to take effect.

MCP Tool Use

Once the MCP server is running, you can easily use the MCP tools in your LLM agent to discover and execute APIs and workflows.

  1. search_apis: Search for APIs in the Jentic directory that match specific functionality needs
  2. load_execution_info: Retrieve detailed specifications for APIs and operations from the Jentic directory. This will include auth information you may need to provide in your mcpServers.jentic.env configuration.
  3. execute: Execute a specific API or workflow operation.
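
For illustration, here is a minimal sketch of calling these tools programmatically with the official MCP Python SDK over stdio (pip install mcp). The search_apis argument name used below is only an assumption for illustration; check the tool's actual input schema via list_tools:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the Jentic MCP server the same way your MCP client config does
    server = StdioServerParameters(
        command="uvx",
        args=[
            "--from",
            "git+https://github.com/jentic/jentic-tools.git@main#subdirectory=mcp",
            "mcp",
        ],
        env={"JENTIC_UUID": "<your-jentic-uuid>"},
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # expect search_apis, load_execution_info, execute
            result = await session.call_tool(
                "search_apis",
                {"query": "send a message to a Discord channel"},  # argument name is an assumption
            )
            print(result)

asyncio.run(main())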

Environment Variables

When you are using an API that requires authentication, the load_execution_info tool will describe the required environment variables. In Windsurf you can set environment variables from the command line, while in some clients, like Claude Desktop, you'll need to add them to your MCP config:

{
    "mcpServers": {
        "jentic": {
            "command": "uvx",
            "args": [
                "--from",
                "git+https://github.com/jentic/jentic-tools.git@main#subdirectory=mcp",
                "mcp"
            ],
            "env": {
                "JENTIC_UUID": "<your-jentic-uuid>",
                "DISCORD_BOTTOKEN": "YOUR BOT TOKEN"
            }
        }
    }
}

Jentic SDK Use

pip install jentic

Jentic for Building and Executing LLM Tools

To provide tools to your LLM that you have selected at runtime, ask your coding agent to use the load_execution_info tool to retrieve the necessary information and save it to jentic.json at the root of your project.

A typical agent loop with tool use looks like this:

import anthropic  # assumes the Anthropic Python SDK as the LLM client (reads ANTHROPIC_API_KEY)
from jentic import Jentic

class MyAgent:
    def __init__(self):
        self.jentic = Jentic()
        # Generate tool definitions compatible with your LLM (e.g., "anthropic", "openai")
        self.jentic_tools = self.jentic.generate_llm_tool_definitions("anthropic")
        self.client = anthropic.Anthropic()

    async def process_message(self, user_message):
        # Start the conversation history (or append to an existing one)
        messages = [{"role": "user", "content": user_message}]

        response = self.client.messages.create(
            model='claude-3-5-sonnet-latest',
            max_tokens=1024,
            messages=messages,
            tools=self.jentic_tools,  # Pass the generated tools
        )

        while response.stop_reason == "tool_use":
            tool_use = next(block for block in response.content if block.type == "tool_use")
            tool_name = tool_use.name
            tool_input = tool_use.input

            # Execute the tool using the Jentic SDK
            tool_result = await self.jentic.run_llm_tool(
                tool_name,
                tool_input
            )

            # Return the tool result to the model and continue the conversation
            messages.append({"role": "assistant", "content": response.content})
            messages.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": tool_use.id,
                    "content": str(tool_result),
                }],
            })
            response = self.client.messages.create(
                model='claude-3-5-sonnet-latest',
                max_tokens=1024,
                messages=messages,
                tools=self.jentic_tools,
            )

        return response.content
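
A minimal way to drive this loop from a script (the prompt text is just an example):

import asyncio

agent = MyAgent()
reply = asyncio.run(agent.process_message("Post a status update to my Discord channel"))
print(reply)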
