
EdgeCLI

AI-powered CLI tool for intelligent log analysis and incident triage using Google Gemini API.

Built for HackLondon 2026 :)

npm Package: @ceasermikes/edgecli
Documentation: https://ceasermikes002.github.io/edgecli/

Features

  • 🔍 Real-time log watching (files or stdin)
  • 🤖 AI-powered triage with confidence scoring
  • 🔬 Deep analysis with root cause detection
  • 💊 Automated patch suggestions (diff format)
  • 🎙️ Voice alerts with ElevenLabs AI (74 languages)
  • 📊 Transparent metrics (latency, tokens)
  • 🎭 Mock simulation mode for testing
  • ✨ Beautiful gradient UI with brand colors
  • 🌍 Language-agnostic - Works with ANY programming language!

Installation

For Production (Published Package)

npm install -g @ceasermikes/edgecli

That's it! No cloning, no setup - just install and go.

For Development (Local)

# Clone or navigate to the project
cd edgecli

# Install dependencies
npm install

# Build the project
npm run build

# Link globally for local development
npm link

# Now you can use edgecli command
edgecli --help

Setup

  1. Get your Gemini API key from Google AI Studio

  2. Run the interactive setup:

edgecli init

This will:

  • Prompt you to enter your Gemini API key (securely)
  • Let you choose which Gemini model to use
  • Optionally configure ElevenLabs voice alerts
  • Save your configuration locally

Voice Alerts (Optional)

EdgeCLI supports AI-powered voice alerts using ElevenLabs. During setup, you can:

  • Enable voice notifications for critical incidents
  • Choose from 30+ professional voices (male/female, various accents)
  • Select severity threshold (info/warning/error/critical)
  • Pick from multiple voice models (multilingual, turbo, flash)

Get your ElevenLabs API key from ElevenLabs Settings

Available Models

  • gemini-2.5-flash ⭐ (Recommended) - Latest flash model, fast and efficient
  • gemini-2.5-pro - Most capable 2.5 model for complex analysis
  • gemini-2.0-flash - Stable 2.0 flash model
  • gemini-3-flash - Next-gen flash model
  • gemini-3-pro - Next-gen pro model with maximum capability

Alternative: Environment Variable

You can also set API keys via environment variables; these take precedence over values saved by edgecli init:

# Linux/macOS
export GEMINI_API_KEY="your-api-key-here"
export ELEVENLABS_API_KEY="your-elevenlabs-key-here"

# Windows PowerShell
$env:GEMINI_API_KEY="your-api-key-here"
$env:ELEVENLABS_API_KEY="your-elevenlabs-key-here"
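An environment variable can also be scoped to a single invocation — standard POSIX shell behavior, independent of EdgeCLI — so the key never lingers in your session:

```shell
# the key is set only for this one command, not exported to your shell
GEMINI_API_KEY="your-api-key-here" edgecli watch app.log
```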

Usage

Watch log file

edgecli watch app.log

Watch with voice alerts

edgecli watch app.log --voice

Watch live output (pipe)

npm run dev 2>&1 | edgecli watch --stdin

Watch with voice disabled

edgecli watch app.log --no-voice

Generate patch for a file

edgecli suggest --file src/auth.js

Simulate errors (demo mode)

edgecli simulate

View session stats

edgecli stats

Configure voice alerts

# Interactive configuration
edgecli voice

# Enable voice alerts
edgecli voice --enable

# Disable voice alerts
edgecli voice --disable

# Test voice output
edgecli voice --test

Language Support

EdgeCLI is language-agnostic - it works with ANY programming language! If your application writes to stdout/stderr, EdgeCLI can monitor it.

Supported Languages & Frameworks

JavaScript/TypeScript:

# Node.js / Express
npm run dev 2>&1 | edgecli watch --stdin --voice

# NestJS
npm run start:dev 2>&1 | edgecli watch --stdin --voice

# Next.js
npm run dev 2>&1 | edgecli watch --stdin --voice

Python:

# Django
python manage.py runserver 2>&1 | edgecli watch --stdin --voice

# Flask
flask run 2>&1 | edgecli watch --stdin --voice

# FastAPI
uvicorn main:app --reload 2>&1 | edgecli watch --stdin --voice

Java:

# Spring Boot
./mvnw spring-boot:run 2>&1 | edgecli watch --stdin --voice

# Gradle
./gradlew bootRun 2>&1 | edgecli watch --stdin --voice

Go:

go run main.go 2>&1 | edgecli watch --stdin --voice

Ruby:

# Rails
rails server 2>&1 | edgecli watch --stdin --voice

PHP:

# Laravel
php artisan serve 2>&1 | edgecli watch --stdin --voice

Rust:

cargo run 2>&1 | edgecli watch --stdin --voice

C# / .NET:

dotnet run 2>&1 | edgecli watch --stdin --voice

Elixir:

# Phoenix
mix phx.server 2>&1 | edgecli watch --stdin --voice

Docker:

docker logs -f container_name 2>&1 | edgecli watch --stdin --voice

Kubernetes:

kubectl logs -f pod-name 2>&1 | edgecli watch --stdin --voice

System Logs:

tail -f /var/log/syslog 2>&1 | edgecli watch --stdin --voice
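Any of the pipelines above can optionally be pre-filtered with plain grep so only error-like lines reach EdgeCLI — pure standard shell, nothing EdgeCLI-specific:

```shell
# forward only lines that look like problems; --line-buffered keeps the stream real-time
npm run dev 2>&1 | grep -iE --line-buffered 'error|warn|exception|fatal|panic' | edgecli watch --stdin
```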

Why It Works with Any Language

EdgeCLI analyzes text output, not code:

  • ✅ Reads stdout/stderr from any application
  • ✅ AI understands error patterns across all languages
  • ✅ Recognizes stack traces, exceptions, and error messages universally
  • ✅ Automatically detects language and framework from logs
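This is also why every pipe example above includes 2>&1: most runtimes print errors to stderr, which a plain | does not capture. Merging stderr into stdout first ensures error lines actually reach EdgeCLI:

```shell
# without 2>&1, "boom" goes straight to the terminal, not through the pipe
( echo "ok"; echo "boom" >&2 ) | cat

# with 2>&1, both lines travel through the pipe
( echo "ok"; echo "boom" >&2 ) 2>&1 | cat
```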

How It Works

  1. Light Triage: Quick classification (severity, hypothesis, confidence)
  2. Auto-escalation: If confidence < 65%, chains to deep analysis
  3. Deep Analysis: Root cause detection + patch generation
  4. Voice Alerts: Optional AI voice notifications for critical incidents
  5. Privacy-first: Logs summarized locally, sensitive data masked
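The masking in step 5 happens inside EdgeCLI, but if you want a belt-and-braces redaction pass of your own before anything leaves the machine, a sed pre-filter works (illustrative pattern only — adjust it to your log format):

```shell
# redact anything that looks like api_key=... before piping the line onward
tail -f app.log | sed -E 's/(api_key=)[[:alnum:]_-]+/\1[MASKED]/g' | edgecli watch --stdin
```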

Voice Features

EdgeCLI integrates ElevenLabs for professional voice alerts:

  • 30+ Voices: Choose from male/female voices with various accents (American, British, Australian, Irish, Italian-English)
  • 4 Models: Multilingual V2 (emotionally rich), Turbo V2.5 (low latency), Flash V2.5 (fastest), Flash V2
  • Smart Filtering: Only speak alerts above your chosen severity threshold
  • Streaming: Low-latency audio streaming for instant notifications
  • 74 Languages: Multilingual support for global teams

Perfect for:

  • On-call engineers monitoring multiple terminals
  • Hands-free incident response
  • Accessibility and screen-free monitoring
  • High-pressure situations requiring immediate attention

Demo Scenario

# Terminal 1: Run your app
npm run dev 2>&1 | tee app.log

# Terminal 2: Watch with EdgeCLI
edgecli watch app.log

# See AI triage in real-time!

Documentation

Comprehensive HTML documentation is available at: https://ceasermikes002.github.io/edgecli/

Or view locally by opening docs/index.html in your browser for:

  • Complete command reference
  • Voice alerts guide
  • Configuration options
  • Language support examples
  • Troubleshooting tips
  • API reference
  • Examples and use cases

Development

Install dependencies

npm install

Build

npm run build

Run tests

npm test

Link for local development

npm link

Requirements

  • Node.js and npm (for installation via npm)
  • A Google Gemini API key (from Google AI Studio)
  • Optional: an ElevenLabs API key for voice alerts

Built for HackLondon 2026 :)
