ThumbGate

Created by IgorGanapolsky

ThumbGate is an agent governance MCP server that prevents costly AI mistakes before they execute. It provides Pre-Action Gates, shared lessons across agents, and team-level safeguards for AI coding workflows. Features include Thompson Sampling for intelligent decision-making, feedback loops, and integrations with Claude, Cursor, and OpenCode.

ThumbGate

Stop AI agents before they make costly mistakes.

ThumbGate checks risky commands, file edits, deploys, API calls, and other agent actions before they run. Thumbs-up/down feedback becomes remembered lessons, repeated failures become Pre-Action Gates, and the next bad action gets blocked instead of becoming another cleanup bill.

CI · npm · License: MIT · Start Sprint · Open ThumbGate GPT

Workflow Hardening Sprint · Open ThumbGate GPT · ChatGPT Actions setup · Install Claude Desktop Extension · Claude Plugin Guide · Install Codex Plugin · ThumbGate Bench · Perplexity Command Center · Live Dashboard · Pro Page

Popular buyer questions: Stop repeated AI agent mistakes · Cursor guardrails · Codex CLI guardrails · Gemini CLI memory + enforcement

Running Claude Desktop? Download Claude bundle · Install + submission guide · Review packet zip

Running Codex? Download the standalone Codex plugin bundle · Codex install guide

First-dollar activation path

If someone is not already bought into ThumbGate, do not lead with architecture. Lead with one repeated mistake.

  1. Show the pain: open the ThumbGate GPT and paste the bad answer, risky command, deploy, PR action, or agent plan before it runs again.
  2. Capture the lesson: type thumbs down: or thumbs up: with one concrete sentence. Native ChatGPT rating buttons are not the ThumbGate capture path; typed feedback is.
  3. Enforce the repeat: run npx thumbgate init where the agent executes so the lesson can become a Pre-Action Gate instead of another reminder.
  4. Upgrade only after proof: Solo Pro is for the dashboard, DPO export, proof-ready evidence, and higher capture limits after one real blocked repeat. Team starts with the Workflow Hardening Sprint around one repeated failure, one owner, and one proof review.

The buying question is simple: what repeated AI mistake would be worth blocking before the next tool call?

ThumbGate GPT: start here

Use ThumbGate in ChatGPT now: Open the live ThumbGate GPT, paste the action your AI agent wants to run, and ask whether to allow, block, or checkpoint it before the mistake becomes expensive.

Try this first prompt:

Check this agent action before it runs: git push --force --tags

Users do not have to keep chatting inside the ThumbGate GPT to use ThumbGate. The GPT is the fast demo, the guided setup path, and the thumbs-up/down memory surface for ChatGPT users. Think of the GPT as advice and checkpointing; the hard enforcement layer still runs where the work happens: your local coding agent, CI workflow, or MCP-compatible runtime after npx thumbgate init.

Developers can import the prepared GPT Actions OpenAPI spec with the ChatGPT Actions setup guide. Regular ChatGPT users should just open the GPT and type what happened.

Official directory pending review? Claude Code users can install today with /plugin marketplace add IgorGanapolsky/ThumbGate then /plugin install thumbgate@thumbgate-marketplace.

Using Perplexity Max? ThumbGate ships a Perplexity Command Center that runs AI-search visibility checks, Search API lead discovery, Agent API strategy briefs, and official Perplexity MCP config generation. It is scheduled in GitHub Actions and uploads artifacts without committing runtime .thumbgate state.

Need proof that gates improve safety without killing capability? Run ThumbGate Bench:

npm run thumbgate:bench

It scores deterministic GitHub, npm, database, Railway, shell, and filesystem scenarios with unsafeActionRate, capabilityRate, positivePromotionRate, and replayStability so teams can inspect the Reliability Gateway before a Workflow Hardening Sprint.


What problem does this solve?

AI agents repeat expensive mistakes. You fix the same problem in session after session — force-push to main, broken migrations, unauthorized file edits, risky deploys — because the agent has no durable memory of your feedback and no gate before execution.

ThumbGate sells three concrete outcomes:

  • Prevent expensive AI mistakes — catch bad commands, destructive database actions, unsafe publishes, and risky API calls before they run.
  • Make AI stop repeating mistakes — fix it once, turn the lesson into a rule, and block the repeat before the next tool call lands.
  • Turn AI into a reliable operator — move from a smart assistant that apologizes after damage to a production-ready operator with checkpoints, proof, and enforcement.
┌─────────────────────────────────────────────────────────────┐
│                    THE PROBLEM                              │
│                                                             │
│  Session 1: Agent breaks something. You fix it.             │
│  Session 2: Agent breaks it again. You fix it again.        │
│  Session 3: Same thing. Again.                              │
│                                                             │
│                    THE SOLUTION                             │
│                                                             │
│  Session 1: Agent breaks something. You 👎 it.              │
│  Session 2: ⛔ Gate blocks the mistake before it happens.   │
│  Session 3+: Never see it again.                            │
└─────────────────────────────────────────────────────────────┘

ThumbGate is the Reliability Gateway for AI coding agents — turning your feedback into enforced rules, not suggestions.


How It Works in 3 Steps

  STEP 1              STEP 2                 STEP 3
  ────────            ────────               ────────

  You react           ThumbGate learns       The gate holds

  👎 on a bad    ──►  Feedback becomes  ──►  Next time the
  agent action        a saved lesson         agent tries the
                      and a block rule       same thing:
  👍 on a good   ──►  Good pattern gets      ⛔ BLOCKED
  agent action        reinforced                 (or ✅ allowed)

That's it. No manual rule-writing. No config files to maintain. Your reactions teach the agent what your team actually wants.


Before / After

WITHOUT THUMBGATE              │  WITH THUMBGATE
───────────────────────────────┼───────────────────────────────
Session 1:                     │  Session 1:
  Agent force-pushes to main.  │    Agent force-pushes to main.
  You correct it manually.     │    You 👎 it.
Session 2:                     │  Session 2:
  Agent force-pushes again.    │    ⛔ Gate blocks force-push.
  It learned nothing.          │    Agent uses safe push instead.
Session 3:                     │  Session 3+:
  Same mistake. Again.         │    Permanently fixed.
  And again.                   │

The Feedback Loop

┌──────────┐    ┌──────────┐    ┌──────────┐    ┌──────────┐    ┌──────────┐
│ Capture  │───►│  Learn   │───►│ Remember │───►│   Rule   │───►│   Gate   │
│          │    │          │    │          │    │          │    │          │
│ 👍 / 👎  │    │ Feedback │    │ Stored   │    │ Auto-    │    │ Blocks   │
│          │    │ becomes  │    │ lessons  │    │ generated│    │ bad      │
│          │    │ a lesson │    │ & search │    │ from     │    │ actions  │
│          │    │          │    │          │    │ feedback │    │ live     │
└──────────┘    └──────────┘    └──────────┘    └──────────┘    └──────────┘

Get Started

Best first paid motion for teams: the Workflow Hardening Sprint — qualify one repeated failure before committing to a full rollout. Start intake →

Best first technical motion: install the CLI first and let npx thumbgate init wire hooks for the agent you already use.

Paid path for individual operators: ThumbGate Pro is the self-serve side lane for a personal dashboard and export-ready evidence.

Plain product line: GPT preview = advice and checkpointing. Free local CLI (3 daily feedback captures, 5 daily lesson searches) = basic enforcement on one machine. Pro ($19/mo or $149/yr) = personal enforcement proof, dashboard, and exports. Team = shared hosted lesson DB, org dashboard, and shared enforcement so one correction protects every seat.


Quick Start

npx thumbgate init    # detects your agent and wires everything up
npx thumbgate doctor  # health check
npx thumbgate lessons # see what's been learned
npx thumbgate explore # terminal explorer for lessons, gates, and stats
npx thumbgate dashboard # open local dashboard

Or wire MCP directly: claude mcp add thumbgate -- npx --yes --package thumbgate thumbgate serve

Works with Claude Code, Cursor, Codex, Gemini CLI, Amp, OpenCode, and any MCP-compatible agent.


Install for Your Agent

Claude Code

npx thumbgate init --agent claude-code

Wires hooks automatically. Works immediately.

Cursor

npx thumbgate init --agent cursor

Installs as a Cursor extension with 4 skills: capture feedback, manage rules, search lessons, recall context.

Codex

npx thumbgate init --agent codex

Bridges to Codex CLI with 6 skills including adversarial review and second-pass analysis.

Gemini CLI

npx thumbgate init --agent gemini

Amp

npx thumbgate init --agent amp

Any MCP-Compatible Agent

npx thumbgate serve

Starts the MCP server on stdio. Connect from any MCP-compatible client.

Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "thumbgate": {
      "command": "npx",
      "args": ["--yes", "--package", "thumbgate", "thumbgate", "serve"]
    }
  }
}

Or download the packaged extension bundle and install directly.


Use Cases

  • Stop force-push to main — A gate blocks git push --force on protected branches before it runs
  • Prevent repeated migration failures — Each mistake becomes a searchable lesson that fires before the next attempt
  • Block unauthorized file edits — Control which files agents can touch with path-based rules
  • Memory across sessions — The agent remembers your feedback from yesterday without any manual rule-writing
  • Shared team safety — One developer's thumbs-down protects the whole team from the same mistake
  • Auto-improving without feedback — Self-improvement mode evaluates outcomes and generates rules automatically
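The force-push use case above boils down to a small allow/block decision before the command runs. The sketch below is an illustrative stand-in for that decision, not ThumbGate's actual gate engine (real gates are generated from feedback and evaluated in the PreToolUse hook):

```shell
# Illustrative sketch only: a simplified stand-in for the decision a
# Pre-Action Gate makes before a command executes. Not ThumbGate code.
check_action() {
  case "$1" in
    *"push --force"*|*"push -f"*) echo "BLOCK" ;;  # force-push gate fires
    *)                            echo "ALLOW" ;;  # everything else passes
  esac
}

check_action "git push --force origin main"   # prints BLOCK
check_action "git push origin feature/x"      # prints ALLOW
```

In the real product this check runs inside the agent's pre-action hook, so a blocked command never reaches the shell at all.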

Feedback Sessions

Give the agent more context when a thumbs-down isn't enough:

👎 thumbs down
  └─► open_feedback_session
        └─► "you lied about deployment"    (append_feedback_context)
        └─► "tests were actually failing"  (append_feedback_context)
        └─► finalize_feedback_session
              └─► lesson inferred from full conversation

ThumbGate uses up to 8 prior conversation entries to turn vague, history-aware negative signals into specific, actionable lessons. A 60-second follow-up window stays open for additional context via open_feedback_session → append_feedback_context → finalize_feedback_session.

Free and self-hosted users can invoke search_lessons directly through MCP, and via the CLI with npx thumbgate lessons.
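For MCP clients, invoking a tool like search_lessons is a standard JSON-RPC tools/call request over stdio. A minimal sketch follows; the argument name (query) is an assumption for illustration, not the documented parameter schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_lessons",
    "arguments": { "query": "force push main" }
  }
}
```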


Built-in Gates

┌─────────────────────────────────────────────────────────┐
│                   ENFORCEMENT LAYER                     │
│                                                         │
│  ⛔ force-push          → blocks git push --force       │
│  ⛔ protected-branch    → blocks direct push to main    │
│  ⛔ unresolved-threads  → blocks push with open reviews │
│  ⛔ package-lock-reset  → blocks destructive lock edits │
│  ⛔ env-file-edit       → blocks .env secret exposure   │
│                                                         │
│  + custom gates in config/gates/custom.json             │
└─────────────────────────────────────────────────────────┘

Pricing

┌──────────────────┬──────────────────────────────┬────────────────────────┐
│  FREE            │  TEAM  $99/seat/mo (min 3)   │  PRO  $19/mo · $149/yr │
├──────────────────┼──────────────────────────────┼────────────────────────┤
│ Local CLI        │ Workflow Hardening Sprint    │ Personal dashboard     │
│ Enforced gates   │ Shared hosted lesson DB      │ Export feedback data   │
│ 3 captures/day   │ Org-wide dashboard           │ Review-ready exports   │
│ 5 searches/day   │ Approval + audit proof       │                        │
│ Unlimited recall │ Isolated execution guidance  │                        │
└──────────────────┴──────────────────────────────┴────────────────────────┘

Start Workflow Hardening Sprint · Live Dashboard · See Pro

Where to start:

  • Teams: Begin with the Workflow Hardening Sprint — prove one costly repeat failure can be blocked before committing to a full rollout
  • Solo operators: ThumbGate Pro adds personal enforcement proof, a gate debugger, and export-ready evidence
  • Individuals & open source: Free CLI tier, self-hosted, with local Pre-Action Gates after install

Tech Stack

┌──────────────────────┬───────────────────────┬──────────────────────┐
│   STORAGE            │   INTELLIGENCE        │   ENFORCEMENT        │
│                      │                       │                      │
│ SQLite + FTS5        │ MemAlign dual recall  │ PreToolUse hook      │
│ LanceDB vectors      │ Thompson Sampling     │ engine               │
│ JSONL logs           │ (adaptive lesson      │ Gates config         │
│ File-based context   │  selection)           │ Hook wiring          │
│                      │                       │                      │
├──────────────────────┼───────────────────────┼──────────────────────┤
│   INTERFACES         │   BILLING             │   EXECUTION          │
│                      │                       │                      │
│ MCP stdio            │ Stripe                │ Railway              │
│ HTTP API             │                       │ Cloudflare Workers   │
│ CLI                  │                       │ Docker Sandboxes     │
│ Node.js >=18         │                       │                      │
└──────────────────────┴───────────────────────┴──────────────────────┘

FAQ

Is ThumbGate a model fine-tuning tool? No. ThumbGate does not update any model's weights. It captures your feedback, stores lessons, injects context at runtime, and blocks bad actions before they execute.

How is this different from CLAUDE.md or .cursorrules? Those are suggestions the agent can ignore. ThumbGate gates are enforced — they physically block the action before it runs. They also auto-generate from feedback instead of requiring manual writing.

Does it work with my agent? Yes. It's MCP-compatible and works with Claude Code, Claude Desktop, Cursor, Codex, Gemini CLI, Amp, OpenCode, and any agent that supports MCP or pre-action hooks.

What's self-improvement mode? ThumbGate can watch for failure signals (test failures, reverted edits, error patterns) and auto-generate prevention rules — no thumbs-down required. Your agent gets smarter every session.

Is it free? Free tier: 3 daily feedback captures, 5 daily lesson searches, unlimited recall, enforced gates. History-aware distillation turns vague feedback into specific lessons. Pro is $19/mo or $149/yr for a personal dashboard and exports. Team rollout starts at $99/seat/mo (3-seat minimum) with shared hosted lesson DB, org dashboard, approval + audit proof, and isolated execution guidance.


Enterprise Story

ThumbGate is the control plane for AI coding agents:

  • Feedback becomes enforcement — repeated failures stop at the gate instead of reappearing in review.
  • Workflow Sentinel scores blast radius before execution, so risky PR, release, and publish flows are visible early.
  • High-risk local actions route into Docker Sandboxes; hosted team automations use a signed isolated sandbox lane.
  • Team rollout stays tied to Verification Evidence instead of trust-me operator claims.

Release Confidence

  • Every PR must carry a Changeset entry — each shipped version has a customer-readable explanation before publish.
  • Version-sync checks keep package.json, CHANGELOG.md, plugin manifests, and installer metadata aligned.
  • Final close-out requires verifying the exact main merge commit, with proof anchored in Verification Evidence.

See Release Confidence for the full trust chain.


Docs

Pro overlay: thumbgate-pro — separate repo/package inheriting from this base.


License

MIT. See LICENSE.
