
NFT Log Analyzer

Created by mashish
NFT Log Analyzer automatically triages your log files using a local AI model. It scans for errors, deduplicates repeated events, analyzes root causes using Ollama (deepseek-r1:14b), and files structured GitHub Issues, all running on your own machine with zero data sent to external servers.
Overview

🔍 NFT Log Analyzer

AI-powered log analysis that automatically files GitHub Issues — 100% local via Ollama, zero data leaves your machine.



What It Does

Point it at any log file and it will:

  1. Scan 500MB+ files in seconds using ripgrep
  2. Parse error patterns, deduplicate repeated events
  3. Analyze using a local LLM (Ollama + deepseek-r1:14b) via CrewAI agents
  4. Compose structured GitHub Issues with root cause and suggested fixes
  5. File Issues automatically to your repo — skipping duplicates

All processing happens locally on your machine. Raw log content never leaves your system.


Architecture

Claude Desktop / Cursor / LangChain
         ↓  MCP (stdio or HTTP+SSE)
   MCP Log Analyzer Server
   ripgrep pre-filter (2-4s on 500MB)
   mmap streaming parser + deduplicator
   CrewAI agents → Ollama (local LLM)
   GitHub Issues API

Requirements

| Requirement | Version | Notes |
|---|---|---|
| Python | 3.11+ | 3.14 not supported |
| Ollama | Latest | brew install ollama |
| deepseek-r1:14b | — | ~9GB download |
| ripgrep | Latest | brew install ripgrep |
| RAM | 16GB min | 32GB recommended |
| macOS | Ventura 13+ | Apple Silicon recommended |
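A setup script can guard the interpreter version up front. The accepted range below (3.11 and 3.12) is an assumption that reconciles "3.11+" in the table with the troubleshooting note that crewai fails on 3.13/3.14:

```python
import sys

def python_supported(version=sys.version_info) -> bool:
    # Assumed safe range: 3.11 <= version < 3.13, per the requirements
    # table ("3.11+") and the troubleshooting table (crewai breaks on 3.13/3.14).
    return (3, 11) <= tuple(version[:2]) < (3, 13)

ok = python_supported()  # True on a supported interpreter
```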

Quick Start

1. Install system dependencies

brew install ollama ripgrep
brew services start ollama
ollama pull deepseek-r1:14b   # ~9GB — start this first

2. Clone and set up Python environment

git clone https://github.com/YOUR_ORG/mcp-log-analyzer
cd mcp-log-analyzer

/opt/homebrew/bin/python3.11 -m venv .venv
source .venv/bin/activate

pip install --upgrade pip
pip install mcp "crewai>=0.80.0" crewai-tools langchain-ollama \
    litellm fastapi uvicorn httpx httpx-sse \
    structlog loguru pydantic python-dotenv \
    tenacity rich typer

3. Configure environment

cp .env.example .env
nano .env   # fill in your values

Your .env should contain:

GITHUB_PAT=ghp_your_token_here
GITHUB_REPO_OWNER=your-username
GITHUB_REPO_NAME=your-repo
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=deepseek-r1:14b
CREWAI_TELEMETRY_OPT_OUT=true
OTEL_SDK_DISABLED=true
OLLAMA_KEEP_ALIVE=-1

4. Create a GitHub PAT

Go to: github.com → Settings → Developer settings → Personal access tokens → Tokens (classic)

Enable scope: repo (full)

5. Register with Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "mcp-log-analyzer": {
      "command": "/path/to/mcp-log-analyzer/.venv/bin/python",
      "args": ["/path/to/mcp-log-analyzer/mcp_server/server.py"],
      "env": {
        "GITHUB_PAT": "ghp_your_token",
        "GITHUB_REPO_OWNER": "your-username",
        "GITHUB_REPO_NAME": "your-repo",
        "OLLAMA_BASE_URL": "http://localhost:11434",
        "OLLAMA_MODEL": "deepseek-r1:14b"
      }
    }
  }
}

Restart Claude Desktop. You should see the 🔨 tools icon appear.


Usage

Via Claude Desktop (natural language)

analyze the log file at /var/log/app.log and file GitHub issues for any errors
use analyze_log_file with path="/var/log/app.log" dry_run=true
check status of job abc12345

Via Python CLI

source .venv/bin/activate

python3 -c "
from dotenv import load_dotenv
load_dotenv()
from mcp_server.tools.analyze_tool import analyze_log_file
import asyncio, json

result = asyncio.run(analyze_log_file({
    'path': '/var/log/app.log',
    'severity': 'ERROR',
    'dry_run': False
}))
print(result[0].text)
"

MCP Tools Reference

ping

Health check — verifies the server and Ollama are running.

{}

Returns: "mcp-log-analyzer online — Ollama: deepseek-r1:14b"


analyze_log_file

Starts an async log analysis and returns a job ID immediately; the pipeline runs in the background.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| path | string | yes | — | Absolute path to log file |
| severity | string | no | ERROR | Minimum severity: WARN, ERROR, CRITICAL |
| dry_run | boolean | no | false | Preview issues without filing to GitHub |
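For example, a dry-run call passes arguments like:

```json
{
  "path": "/var/log/app.log",
  "severity": "ERROR",
  "dry_run": true
}
```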

Returns:

{
  "job_id": "abc12345",
  "status": "started",
  "message": "Analysis started. Check progress with get_job_status('abc12345')."
}

get_job_status

Check the status of a running analysis job.

| Parameter | Type | Required | Description |
|---|---|---|---|
| job_id | string | yes | Job ID returned by analyze_log_file |

Returns (running):

{
  "status": "running",
  "job_id": "abc12345",
  "lines_filtered": 487,
  "chunks": 1
}

Returns (done):

{
  "status": "done",
  "job_id": "abc12345",
  "lines_filtered": 487,
  "unique_events": 4,
  "chunks": 1,
  "issues_filed": 2,
  "github_issues": [
    {
      "title": "[CRITICAL][minting-service] DB connection pool exhausted (x117)",
      "url": "https://github.com/your-org/your-repo/issues/42",
      "number": 42
    }
  ]
}
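A client driving these two tools might poll until completion. The wait_for_job helper and the "error" terminal status are hypothetical, and get_status stands in for however your client actually invokes get_job_status over MCP:

```python
import time

def wait_for_job(get_status, job_id, poll_interval=1.0, timeout=600):
    # Poll get_status(job_id) -> dict until a terminal status is reached.
    # "error" is an assumed terminal state; the docs only show running/done.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status["status"] in ("done", "error"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")

# Simulated status source standing in for the MCP tool:
responses = iter([
    {"status": "running", "job_id": "abc12345", "lines_filtered": 487, "chunks": 1},
    {"status": "done", "job_id": "abc12345", "issues_filed": 2},
])
final = wait_for_job(lambda jid: next(responses), "abc12345", poll_interval=0.0)
```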

Compatible MCP Clients

| Client | Transport | Config |
|---|---|---|
| Claude Desktop | stdio | claude_desktop_config.json |
| Claude Code CLI | stdio | .mcp.json in project root |
| Cursor | stdio or HTTP+SSE | .cursor/mcp.json |
| LangChain | HTTP+SSE | url: http://localhost:8000/sse |
| n8n | HTTP+SSE | HTTP Request node → SSE |

HTTP+SSE Transport (for Cursor, LangChain, n8n)

python mcp_server/server.py --transport sse --port 8000

Customizing with Skills

Skills are plain English .md files that teach the agents your stack's error patterns. Three built-in skills ship with the project:

| Skill | Purpose |
|---|---|
| skills/nft-app-errors.skill.md | NFT/blockchain error classification |
| skills/infrastructure-errors.skill.md | Infrastructure error classification |
| skills/bug-composition.skill.md | GitHub Issue format rules |

Writing your own skill

Create skills/my-stack-errors.skill.md:

# My Stack Error Classification

## CRITICAL — file bug immediately
- "FATAL: database connection refused" = service down
- "out of memory" = process crash imminent

## HIGH — file bug, non-urgent  
- "connection timeout" on external API = degraded performance

## IGNORE — known false positives
- "reconnecting..." during deploys = expected

Then load it in agents/crew.py:

_load_skill("my-stack-errors.skill.md")
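The loader itself can be as simple as reading the markdown into the agent prompt. The function below is a hypothetical stand-in for _load_skill (its real signature isn't shown here), demonstrated against a throwaway file:

```python
import tempfile
from pathlib import Path

def load_skill(skills_dir: Path, name: str) -> str:
    # Hypothetical sketch: read a plain-markdown skill file so its text
    # can be appended to the analysis agent's prompt.
    text = (skills_dir / name).read_text(encoding="utf-8")
    if not text.lstrip().startswith("#"):
        raise ValueError(f"{name} should start with a markdown title")
    return text

# Demo against a temporary skill file:
with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "my-stack-errors.skill.md"
    path.write_text("# My Stack Error Classification\n## CRITICAL\n- out of memory\n")
    skill_text = load_skill(Path(d), "my-stack-errors.skill.md")
```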

Pipeline Internals

500MB log file
    ↓  ripgrep (2-4 seconds)
    ↓  Filters: ERROR|FATAL|CRITICAL|WARN|Exception|Traceback
~5MB of error lines
    ↓  mmap streaming parser
    ↓  LogEvent objects with timestamp, level, component, message
    ↓  Deduplicator (fingerprints strip req_id, numbers, hex)
4-20 unique error patterns
    ↓  Chunker (10 events per chunk, CRITICAL first)
1-3 chunks
    ↓  Single CrewAI agent → Ollama (local)
    ↓  Structured bug reports in markdown
    ↓  Title extractor + label classifier
    ↓  Duplicate check via GitHub search API
GitHub Issues filed
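The deduplicator's fingerprinting step might look roughly like this. The only documented rule is that fingerprints strip req_id, numbers, and hex, so the exact patterns below are assumptions:

```python
import re

def fingerprint(message: str) -> str:
    # Normalize away the parts that vary between repeats of the same error,
    # so identical failures with different identifiers collapse to one pattern.
    msg = re.sub(r"req_id=\S+", "req_id=<ID>", message)   # request IDs
    msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)          # hex addresses
    msg = re.sub(r"\d+", "<N>", msg)                       # any remaining numbers
    return msg

a = fingerprint("DB timeout req_id=abc123 after 5000ms at 0xdeadbeef")
b = fingerprint("DB timeout req_id=xyz789 after 3200ms at 0xfeedface")
# a and b collapse to the same pattern, so only one event reaches the LLM
```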

Performance

Tested on Apple Silicon (M2, 32GB):

| File size | Filter time | Analysis time | Total |
|---|---|---|---|
| 10MB | <1s | 3-5 min | ~5 min |
| 100MB | 1-2s | 3-5 min | ~7 min |
| 500MB | 3-5s | 5-10 min | ~15 min |

Analysis time depends on the number of unique error patterns found, not on file size.


Troubleshooting

| Symptom | Fix |
|---|---|
| ollama ps shows empty | Run ollama run deepseek-r1:14b then /bye to warm the model |
| MCP server disconnected in Claude Desktop | Check ~/Library/Logs/Claude/mcp-server-*.log for Python errors |
| Issues filed: 0 | Verify GITHUB_PAT in claude_desktop_config.json is a real token, not a placeholder |
| Timeout after 600s | Add OLLAMA_KEEP_ALIVE=-1 to .env and restart Ollama |
| crewai install fails | Requires Python 3.11; not compatible with 3.13/3.14 |
| Permission denied on /usr/local/bin | Use /opt/homebrew/bin/ instead on Apple Silicon |

Roadmap

v1 (current)

  • Local filesystem log ingestion
  • ripgrep + mmap pipeline
  • Single-agent CrewAI analysis
  • GitHub Issues filing with dedup
  • Claude Desktop + stdio MCP transport

v2 (planned)

  • Datadog MCP integration
  • Splunk MCP integration
  • HTTP+SSE transport (Cursor, LangChain, n8n)
  • Scheduled analysis triggers
  • Parallel chunk processing
  • Web dashboard for job history

Contributing

Contributions welcome — especially new skill files for different stacks.

  1. Fork the repo
  2. Create skills/your-stack-errors.skill.md
  3. Test it against a real log file
  4. Open a PR with example output

License

MIT — see LICENSE
