EdgeCLI - AI-Powered Incident Triage for Your Terminal

Inspiration

It's 3 AM. Your app is down. Logs are flooding in and you're drowning in stack traces trying to find one needle in a haystack of noise.

Every developer knows this pain. We built EdgeCLI to fix it: an AI assistant that sits beside you in the terminal, instantly triaging errors, predicting root causes, and suggesting fixes. Then we asked: what if it could just tell you out loud? Now it can.


What It Does

EdgeCLI is a language-agnostic CLI tool that pipes into any app's output and gives you AI-powered incident triage in real time. Works with Python, Java, Go, Rust, JavaScript, Ruby, PHP, C#, Elixir — anything that writes to stdout/stderr.

# Drop it into any workflow
npm run dev 2>&1 | edgecli watch --stdin --voice
python manage.py runserver 2>&1 | edgecli watch --stdin --voice
kubectl logs -f pod-name 2>&1 | edgecli watch --stdin --voice

Key features:

  • Two-Stage Triage — Fast light triage first (severity + hypothesis + confidence score). If confidence < 65%, auto-escalates to deep analysis with root cause detection, affected file identification, and patch generation in unified diff format.
  • Voice Alerts via ElevenLabs — 30+ AI voices, 74 languages, <75ms latency with Flash V2.5. Hear "Critical database connection failure" without glancing at your screen.
  • Automated Patch Suggestions — Production-ready fixes in unified diff format, not just error descriptions.
  • Transparent Metrics — Every analysis shows latency, token usage, and confidence score. No black boxes.

How We Built It

Stack: TypeScript + Google Gemini API + ElevenLabs API

Google Gemini powers both triage stages with structured JSON responses and context-aware analysis. We use gemini-2.5-flash by default for the best speed/quality balance.
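To keep the model's output machine-parseable, the light-triage prompt pins down an explicit JSON shape. A minimal sketch of what such a prompt builder could look like (the field names and wording are illustrative, not EdgeCLI's actual prompt):

```typescript
// Illustrative shape of a light-triage result.
interface LightTriage {
  severity: "low" | "medium" | "high" | "critical";
  hypothesis: string;
  confidence: number; // 0..1
}

// Builds a prompt that demands a bare JSON object matching the schema above.
// Temperature is set to 0 on the API call; the prompt pins the output shape.
function buildLightTriagePrompt(errorLog: string): string {
  return [
    "You are an incident-triage assistant.",
    "Respond with ONLY a JSON object matching this schema:",
    '{ "severity": "low|medium|high|critical", "hypothesis": string, "confidence": number }',
    "Error log:",
    errorLog,
  ].join("\n");
}
```

The deep-analysis stage reuses the same pattern with a larger schema (root cause, affected files, unified-diff patch).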

ElevenLabs handles real-time voice synthesis. Flash V2.5 gets us under 75ms latency — fast enough to feel instant.
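A sketch of how a Flash V2.5 request could be assembled against the public ElevenLabs REST API (only building the request here, not sending it; the voice ID is a placeholder):

```typescript
// Assembles a streaming text-to-speech request for the ElevenLabs REST API.
// Endpoint and field names follow the public API; voiceId/apiKey are caller-supplied.
function buildTtsRequest(text: string, voiceId: string, apiKey: string) {
  return {
    url: `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}/stream`,
    init: {
      method: "POST",
      headers: { "xi-api-key": apiKey, "Content-Type": "application/json" },
      // eleven_flash_v2_5 is the low-latency model referenced above.
      body: JSON.stringify({ text, model_id: "eleven_flash_v2_5" }),
    },
  };
}
```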

CLI layer: Commander.js + Chokidar + Ora + Chalk. We cared about the terminal experience as much as the AI logic.

The core pipeline:

// Light triage runs on every detected error; deep analysis only when confidence is low.
logProcessor.on('error', async (errorLog) => {
  const triage = await geminiClient.lightTriage(errorLog);
  if (triage.confidence < 0.65) {
    // Escalate: root cause, affected files, and a unified-diff patch.
    const deep = await geminiClient.deepAnalysis(errorLog);
    displayPatch(deep.patch);
  }
});

Challenges

Cross-platform audio was the hardest problem. Windows doesn't support stdin audio streaming like Unix does, so we built a dual-strategy player — direct stream on macOS/Linux, temp file + PowerShell MediaPlayer on Windows.
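The platform switch itself is simple; a sketch of the decision logic (the player invocations are illustrative, not EdgeCLI's exact commands):

```typescript
// Chooses a playback strategy per platform.
type PlaybackPlan =
  | { kind: "stream"; command: string }     // Unix: pipe audio straight to the player's stdin
  | { kind: "temp-file"; command: string }; // Windows: write a temp file, then play it

function planPlayback(platform: string, tempPath: string): PlaybackPlan {
  if (platform === "win32") {
    // Windows media players don't reliably read audio from stdin, so go via a
    // temp file and a PowerShell-hosted player (illustrative invocation).
    return {
      kind: "temp-file",
      command: `powershell -c "(New-Object Media.SoundPlayer '${tempPath}').PlaySync()"`,
    };
  }
  // On macOS/Linux, players such as mpv or ffplay accept audio on stdin via "-".
  return { kind: "stream", command: "ffplay -nodisp -autoexit -i -" };
}
```

In practice this would be called with `process.platform` and the audio bytes either piped to the child process or written to `tempPath` first.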

AI response reliability required strict JSON schema enforcement in prompts, fallback parsing, and confidence-based re-analysis triggers to handle LLM inconsistency.
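The fallback-parsing idea can be sketched as a tolerant extractor: try strict JSON first, then strip markdown fences, then grab the first brace-delimited span (a minimal sketch, not EdgeCLI's exact parser):

```typescript
// Tries progressively looser strategies to recover a JSON object from LLM output.
// Returns null when nothing parses, which is the trigger for re-analysis.
function parseModelJson(raw: string): unknown | null {
  const candidates = [
    raw,
    // Strip a leading ```json fence and a trailing ``` fence, if present.
    raw.replace(/^```(?:json)?\s*/i, "").replace(/\s*```\s*$/, ""),
  ];
  // Last resort: the first {...} span anywhere in the response.
  const braceMatch = raw.match(/\{[\s\S]*\}/);
  if (braceMatch) candidates.push(braceMatch[0]);

  for (const candidate of candidates) {
    try {
      return JSON.parse(candidate);
    } catch {
      // Try the next, looser candidate.
    }
  }
  return null;
}
```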

Real-time performance meant we couldn't analyze every line. We added regex pre-filtering, intelligent buffering, and the two-stage escalation model to keep costs and latency low.
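The pre-filter is just a cheap regex gate in front of the model; a sketch of the idea (these patterns are illustrative, not EdgeCLI's actual list):

```typescript
// Cheap first pass: only lines matching one of these ever reach the model.
const ERROR_PATTERNS: RegExp[] = [
  /error|exception|fatal|panic/i, // matches "TypeError", "panic:", "FATAL", ...
  /traceback/i,                   // Python tracebacks
  /^\s+at\s+\S+/,                 // JS/Java-style stack frames
  /segmentation fault/i,
];

function looksLikeError(line: string): boolean {
  return ERROR_PATTERNS.some((re) => re.test(line));
}
```

Lines that pass the gate are then buffered so multi-line stack traces reach the model as one unit instead of one API call per line.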

npm naming collision: edgecli was blocked for being too similar to edge-cli. Published as @ceasermikes002/edgecli instead.


What We Learned

  • Prompt engineering is the hard part. Temperature=0, explicit JSON schemas, and chained prompts (light -> deep) were the difference between reliable and unreliable outputs.
  • Voice UX needs restraint. Not every error deserves a voice alert. Severity thresholds and smart filtering make voice useful rather than annoying.
  • CLI tools live or die by polish. Spinners, gradient colors, and clear errors aren't cosmetic; they're what makes developers trust a tool.
  • Language-agnostic means massive reach. Analyzing text output instead of source code means EdgeCLI works with any stack, any language, any team.
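The "voice needs restraint" lesson reduces to a severity threshold in front of the TTS call. A minimal sketch, assuming a rank-based gate (names are illustrative):

```typescript
// Ranks severities so they can be compared against a configurable threshold.
const SEVERITY_RANK = { low: 0, medium: 1, high: 2, critical: 3 } as const;
type Severity = keyof typeof SEVERITY_RANK;

// Only severities at or above the threshold trigger a voice alert.
function shouldSpeak(severity: Severity, threshold: Severity = "high"): boolean {
  return SEVERITY_RANK[severity] >= SEVERITY_RANK[threshold];
}
```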

What We're Proud Of

  • Published to npm — installable globally right now
  • Works with any programming language or runtime
  • 30+ professional AI voices, 74 language support
  • Full cross-platform support (Windows, macOS, Linux)
  • Privacy-first — no persistent log storage, sensitive data masking
  • MIT licensed and open source

What's Next

Slack/Discord alerts for team notifications, CI/CD pipeline integration, multi-file watching, historical error analytics, and VS Code extension support.


Try It

npm install -g @ceasermikes002/edgecli
edgecli init
npm run dev 2>&1 | edgecli watch --stdin --voice

npm · 📖 Docs · 💻 GitHub

Built for HackLondon 2026
