
🚀 GitHub AI Agent

Created by cnoe-io · 6 months ago
GitHub AI Agent powered by the official GitHub MCP Server. Built with LangGraph and LangChain MCP Adapters. The agent is exposed over multiple agent transport protocols (AGNTCY Slim, Google A2A, MCP Server).

[Badges: Python · Poetry · License · Conventional Commits · Ruff Linter · Super Linter · Unit Tests · A2A Docker Build and Push]

🧪 Evaluation Badges

[Badges: Claude Evals · Gemini Evals · OpenAI Evals · Llama Evals]

  • 🤖 GitHub Agent is an LLM-powered agent built using the LangGraph ReAct Agent workflow and MCP tools.
  • 🌐 Protocol Support: Compatible with ACP and A2A protocols for integration with external user clients.
  • 🛡️ Secure by Design: Enforces GitHub API token-based authentication and supports external authentication for strong access control.
  • 🔌 Integrated Communication: Uses langchain-mcp-adapters to connect with the GitHub MCP server within the LangGraph ReAct Agent workflow.
  • 🏭 First-Party MCP Server: The MCP server is generated by our first-party openapi-mcp-codegen utility, ensuring version/API compatibility and software supply chain integrity.

🏗️ Architecture

```mermaid
flowchart TD
  subgraph Client Layer
    A[User Client ACP/A2A]
  end

  subgraph Agent Transport Layer
    B[AGNTCY ACP<br/>or<br/>Google A2A]
  end

  subgraph Agent Graph Layer
    C[LangGraph ReAct Agent]
  end

  subgraph Tools/MCP Layer
    D[LangGraph MCP Adapter]
    E[GitHub MCP Server]
    F[GitHub API Server]
  end

  A --> B --> C
  C --> D
  D -.-> C
  D --> E --> F --> E
```
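The layered flow above can be sketched as plain function composition to show how a request travels from client to GitHub and back. Every name below is an illustrative stand-in, not one of the project's actual modules:

```python
# Illustrative sketch of the request flow through the layers in the
# diagram. All function names are hypothetical stand-ins.

def github_api_server(request: str) -> str:
    # Innermost layer: the real GitHub REST API would be called here.
    return f"api-response({request})"

def github_mcp_server(tool_call: str) -> str:
    # The MCP server translates a tool call into a GitHub API request.
    return github_api_server(f"mcp:{tool_call}")

def mcp_adapter(action: str) -> str:
    # langchain-mcp-adapters exposes MCP tools to the agent graph.
    return github_mcp_server(action)

def react_agent(user_message: str) -> str:
    # The LangGraph ReAct agent decides which tool to invoke,
    # then answers based on the tool result (the dotted edge back).
    tool_result = mcp_adapter(f"tool-for({user_message})")
    return f"answer based on {tool_result}"

def transport_layer(client_request: str) -> str:
    # AGNTCY ACP or Google A2A carries the client request to the agent.
    return react_agent(client_request)

print(transport_layer("list my repositories"))
```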

✨ Features

  • 🤖 LangGraph + LangChain MCP Adapter for agent orchestration
  • 🧠 Azure OpenAI GPT-4 as the LLM backend
  • 🔗 Connects to GitHub via a dedicated GitHub MCP server
  • 🔄 Multi-protocol support: Compatible with both ACP and A2A protocols for flexible integration and multi-agent orchestration
  • 📊 Comprehensive GitHub API Support:
    • Repository Management
    • Issue Management
    • Pull Request Management
    • Branch Management
    • Commit Operations
    • Project Management
    • Team Collaboration

🚀 Getting Started

1️⃣ Configure Environment

Setting Up Azure OpenAI

  1. Go to Azure Portal
  2. Create or select your Azure OpenAI resource
  3. Navigate to "Keys and Endpoint" section
  4. Copy the endpoint URL and one of the keys
  5. In Azure OpenAI Studio:
    • Create a deployment for GPT-4
    • Note down the deployment name
    • You'll need these values for your .env file:
      • AZURE_OPENAI_API_KEY
      • AZURE_OPENAI_ENDPOINT
      • AZURE_OPENAI_DEPLOYMENT
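Before starting the agent, it helps to confirm these three values are actually set. A minimal stdlib check (variable names taken from the list above):

```python
import os

REQUIRED_AZURE_VARS = [
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_DEPLOYMENT",
]

def missing_azure_vars(env=os.environ) -> list:
    # Return the names of any required Azure OpenAI variables
    # that are unset or empty.
    return [name for name in REQUIRED_AZURE_VARS if not env.get(name, "").strip()]

if __name__ == "__main__":
    missing = missing_azure_vars()
    if missing:
        print("Missing Azure OpenAI settings:", ", ".join(missing))
    else:
        print("Azure OpenAI settings look complete.")
```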

Setting Up GitHub Token

  1. Go to GitHub.com → Settings → Developer Settings → Personal Access Tokens → Tokens (classic)
  2. Click "Generate new token (classic)"
  3. Give your token a descriptive name
  4. Set an expiration date (recommended: 90 days)
  5. Select the required permissions:
    • repo (Full control of private repositories)
    • workflow (Update GitHub Action workflows)
    • admin:org (Full control of orgs and teams)
    • admin:public_key (Full control of public keys)
    • admin:repo_hook (Full control of repository hooks)
    • admin:org_hook (Full control of organization hooks)
    • gist (Create gists)
    • notifications (Access notifications)
    • user (Update ALL user data)
    • delete_repo (Delete repositories)
    • write:packages (Upload packages to GitHub Package Registry)
    • delete:packages (Delete packages from GitHub Package Registry)
    • admin:gpg_key (Full control of GPG keys)
    • admin:ssh_signing_key (Full control of SSH signing keys)
  6. Click "Generate token"
  7. Copy the token immediately (you won't be able to see it again)
  8. Save this as your GITHUB_PERSONAL_ACCESS_TOKEN
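GitHub reports a classic token's granted scopes in the `X-OAuth-Scopes` response header (visible with `curl -sI -H "Authorization: token <TOKEN>" https://api.github.com/user`). A small sketch that checks such a header value against a required-scope list; the scope sets below are illustrative subsets of the permissions listed above:

```python
def missing_scopes(scopes_header: str, required: set) -> set:
    # Parse the comma-separated X-OAuth-Scopes header value and
    # report any required scopes the token does not grant.
    granted = {s.strip() for s in scopes_header.split(",") if s.strip()}
    return required - granted

# Hypothetical header value in the format GitHub returns:
header = "repo, workflow, admin:org"
print(missing_scopes(header, {"repo", "workflow", "gist"}))  # → {'gist'}
```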

Port Configuration

  • Default ports:
    • A2A Agent: 8000
    • MCP Server: 9000
  • If port 8000 is already in use:
    1. Choose an available port (e.g., 8001 or 8002)
    2. You'll need to update this in both:
      • Your .env file as A2A_AGENT_PORT
      • Your Docker run command port mapping
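A quick stdlib check for whether the default ports are already taken (host and port values are the defaults listed above):

```python
import socket

def port_in_use(port: int, host: str = "localhost") -> bool:
    # Try to connect; a successful connection means something
    # is already listening on the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        return sock.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for name, port in [("A2A Agent", 8000), ("MCP Server", 9000)]:
        state = "in use" if port_in_use(port) else "free"
        print(f"{name} port {port}: {state}")
```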

Now, create a .env file in the root directory with the following configuration:

```
############################
# Agent Configuration
############################
LLM_PROVIDER=azure-openai
AGENT_NAME=github

## A2A Agent Configuration
A2A_AGENT_HOST=localhost
A2A_AGENT_PORT=8000  # Change this if port 8000 is already in use

## MCP Server Configuration
MCP_HOST=localhost
MCP_PORT=9000

############################
# Azure OpenAI Configuration
############################
# Get these values from your Azure OpenAI resource in Azure Portal
AZURE_OPENAI_API_KEY=<your-azure-key>  # Found in Azure Portal under "Keys and Endpoint"
AZURE_OPENAI_API_VERSION=2025-04-01-preview
AZURE_OPENAI_DEPLOYMENT=gpt-4.1  # Your deployment name in Azure OpenAI
AZURE_OPENAI_ENDPOINT=<your-azure-endpoint>  # Found in Azure Portal under "Keys and Endpoint"

############################
# Google Gemini (if needed)
############################

GOOGLE_API_KEY=<your-google-api-key>

############################
# GitHub Configuration
############################
# Your GitHub Classic Token with the permissions listed above
GITHUB_PERSONAL_ACCESS_TOKEN=<your-github-token>
```
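Note that some consumers of `.env` files (notably `docker run --env-file`) do not strip inline `#` comments, so a line like `A2A_AGENT_PORT=8000  # comment` can carry the comment text into the value. A minimal parser sketch that strips inline comments and trailing whitespace; this behavior is an assumption, so match it to whatever actually loads your `.env`:

```python
def parse_env_line(line: str):
    # Parse one KEY=VALUE line, stripping inline '#' comments and
    # surrounding whitespace; return None for blanks and comments.
    # (Assumes values never legitimately contain '#'.)
    stripped = line.strip()
    if not stripped or stripped.startswith("#") or "=" not in stripped:
        return None
    key, _, value = stripped.partition("=")
    value = value.split("#", 1)[0]      # drop inline comment
    return key.strip(), value.strip()   # drop trailing spaces

def parse_env(text: str) -> dict:
    entries = (parse_env_line(line) for line in text.splitlines())
    return dict(e for e in entries if e)

print(parse_env("A2A_AGENT_PORT=8000  # change if in use"))  # → {'A2A_AGENT_PORT': '8000'}
```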

2️⃣ Start the Agent (A2A Mode)

  1. Pull the A2A image:

```bash
docker pull ghcr.io/cnoe-io/agent-github:a2a-latest
```

  2. Run the agent in a Docker container using your .env file:

```bash
docker run -it --rm \
  --env-file .env \
  -p 8000:8000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/cnoe-io/agent-github:a2a-latest
```

3️⃣ Run the Client

First, set up a virtual environment and install required tools:

Set Up Virtual Environment

For Linux/macOS:

```bash
# Create virtual environment
python -m venv .venv

# Activate virtual environment
source .venv/bin/activate
```

For Windows:

```bash
# Create virtual environment
python -m venv .venv

# Activate virtual environment
.venv\Scripts\activate
```

Install uv

```bash
# Install uv package manager
pip install uv
```

Now you can use the agent-chat-cli to interact with the agent:

```bash
uvx https://github.com/cnoe-io/agent-chat-cli.git a2a
```

Local Development

Alternative: Running Locally

You can also run the agent locally without Docker:

First clone the repo:

```bash
git clone https://github.com/cnoe-io/agent-github.git
cd agent-github
make run-a2a
```

In a new terminal, start the A2A client:

```bash
make run-a2a-client
```

Alternative: ACP Mode (Legacy)

If you need to use ACP mode instead:

  1. Run the agent:

```bash
make run-acp
```

  2. In a new terminal, start the ACP client:

```bash
make run-acp-client
```

Quick Demos

▶️ Test with GitHub API

🏃 Quick Start: Set Up GitHub Access

  1. Create a GitHub Account: If you don't have one, sign up at GitHub.

  2. Generate a Personal Access Token:

    • Go to Settings → Developer settings → Personal access tokens
    • Generate a new token with appropriate permissions
    • Save the token securely
  3. Configure Your Token:

    • Add your token to the .env file as GITHUB_PERSONAL_ACCESS_TOKEN

Example Interactions

  1. Repository Management

```
User: list my repositories
Agent: Here are your GitHub repositories:

1. agent-github (Public)
   - Language: Python
   - Description: LLM-powered GitHub agent
   - Last updated: 2 days ago

2. agent-chat-cli (Public)
   - Language: Python
   - Description: CLI client for agent interaction
   - Last updated: 1 week ago

Let me know if you need more details about any repository!
```

  2. Issue Management

```
User: create an issue in agent-github
Agent: Issue created successfully!

Title: Add new feature
Repository: agent-github
Issue #: 42
URL: https://github.com/yourusername/agent-github/issues/42

Would you like me to add any labels or assign someone to this issue?
```

🔍 Troubleshooting

Common Issues

  1. Docker Issues

    • Ensure Docker daemon is running
    • Check if you have sufficient permissions
    • Verify the required images are available
  2. Port Conflicts

    • If port 8000 is in use, modify the port in .env
    • Check for other services using required ports
  3. Environment Variables

    • Verify all required variables in .env
    • Check API keys and tokens are valid
    • No trailing spaces in values
  4. Client Connection Issues

    • Server must be running before client
    • Port numbers should match
    • API keys must match between server and client

Contributing

Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
