I built a proentropic memory layer for AI coding agents — every mistake makes the system stronger

Source: DEV Community
Every AI coding agent has the same problem: it makes a mistake, you correct it, and next session it makes the exact same mistake again. I built ThumbGate to fix this. It's an open-source MCP server that turns thumbs-up/down feedback into pre-action gates — hard enforcement that blocks the tool call before it executes.

## How it works

1. Your agent makes a mistake → you give 👎 with context
2. ThumbGate auto-generates a prevention rule
3. Next time the agent tries the same thing → the PreToolUse hook fires → BLOCKED

The key insight: every mistake makes the system stronger. More errors = more rules = a more reliable agent. It's proentropic — built to get stronger from chaos.

## What's included (free)

- Recall — injects past failures at session start
- Pre-action gates — hard blocks, not prompt suggestions
- Thompson Sampling — adapts which gates fire
- Domain skill packs (Stripe, Railway, DB migrations)
- Hallucination detection — decomposes claims into verifiable sub-claims
- PII scanning — blocks sensitive data
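The feedback-to-gate loop above can be sketched as a minimal pre-action check. All names here (`PreventionRule`, `record_thumbs_down`, `pre_tool_use`) are hypothetical, chosen for illustration; ThumbGate's actual hook API may look different:

```python
import re
from dataclasses import dataclass, field

@dataclass
class PreventionRule:
    """A rule auto-generated from a thumbs-down on a past tool call."""
    tool: str          # tool name the rule applies to
    arg_pattern: str   # regex matched against the call's arguments
    reason: str        # the feedback context that created the rule

@dataclass
class Gate:
    rules: list = field(default_factory=list)

    def record_thumbs_down(self, tool: str, arg_pattern: str, reason: str) -> None:
        # Feedback becomes a hard rule that persists across sessions.
        self.rules.append(PreventionRule(tool, arg_pattern, reason))

    def pre_tool_use(self, tool: str, args: str):
        # Fires before execution; returns (allowed, reason).
        for rule in self.rules:
            if rule.tool == tool and re.search(rule.arg_pattern, args):
                return False, rule.reason  # hard block, not a suggestion
        return True, ""

gate = Gate()
gate.record_thumbs_down("shell", r"rm -rf /", "destructive command")
print(gate.pre_tool_use("shell", "rm -rf /tmp/build"))  # blocked
print(gate.pre_tool_use("shell", "ls -la"))             # allowed
```

The point of putting the check before execution is that a blocked call never runs at all, unlike a prompt-level instruction the model can ignore.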
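For the Thompson Sampling piece, one plausible shape (an illustrative sketch under my own assumptions, not ThumbGate's actual implementation) is to give each gate a Beta posterior over "this block was correct," sample from it on each candidate tool call, and fire the gate when the sampled usefulness clears a threshold:

```python
import random

class AdaptiveGate:
    """Tracks a Beta(alpha, beta) posterior over the gate being a true positive."""

    def __init__(self):
        self.alpha = 1.0  # 1 + times the block was confirmed useful
        self.beta = 1.0   # 1 + times the block was a false positive

    def should_fire(self, threshold: float = 0.5) -> bool:
        # Thompson step: draw a plausible usefulness from the posterior
        # and fire only if it beats the threshold.
        return random.betavariate(self.alpha, self.beta) > threshold

    def feedback(self, was_useful: bool) -> None:
        # Each outcome sharpens the posterior for future decisions.
        if was_useful:
            self.alpha += 1
        else:
            self.beta += 1

gate = AdaptiveGate()
for _ in range(20):
    gate.feedback(True)  # the gate keeps catching real mistakes
```

After 20 confirmations the posterior is Beta(21, 1), so samples land near 1 and the gate fires almost every time; a gate that mostly produces false positives drifts the other way and stops firing, which is how "adapts which gates fire" can fall out of the statistics rather than hand-tuned rules.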