Open Source & Free

Your logs, decoded by AI. Entirely on your machine.

Aggregate logs from files, Docker, Kubernetes, GCP, AWS, Azure, and syslog — then ask AI to find patterns, debug errors, and explain anomalies. Privacy-first, local-first.

Available on
log-talon — logs
10:23:01 INFO  Server started on port 3000
10:23:02 DEBUG Database connection pool initialized
10:23:03 INFO  Health check endpoint registered
10:23:05 WARN  Cache miss rate above threshold (23%)
10:23:06 ERROR Failed to connect to redis://cache:6379
10:23:07 INFO  Retrying connection in 5s...
10:23:08 DEBUG Request GET /api/users - 142ms
10:23:09 INFO  WebSocket connection established
10:23:10 WARN  Memory usage at 78% of limit
10:23:11 ERROR Timeout: upstream service /auth (>3000ms)
10:23:12 INFO  Auto-scaling triggered: 2 → 4 pods
10:23:14 DEBUG Processing batch job #4827
AI Assistant
Why is the auth service timing out?
The /auth endpoint exceeded the 3s timeout. Redis connection failures at 10:23:06 suggest the cache layer is down, causing auth to fall back to DB queries.

Works with the tools you already use

Docker
Kubernetes
GCP
AWS
Azure
Ollama
OpenAI
Anthropic
OpenRouter
Features

Everything you need to debug faster

Log Talon combines powerful log aggregation with AI-powered analysis, all running locally on your machine.

Aggregation

Every log source, one view

Files, Docker, K8s, GCP Cloud Logging, AWS CloudWatch, Azure Monitor, syslog — all normalized and searchable in a single timeline.

Files · Docker · Kubernetes · GCP · AWS · Azure · Syslog
Privacy

Your machine only

Logs stay on disk. Ollama runs AI locally — zero data leaves your network.

network: none
telemetry: disabled
storage: local
AI Chat

Ask in plain English

“Why did latency spike at 2pm?” — get answers with specific timestamps and root-cause analysis.

$ which pods restarted in the last hour?
3 pods restarted: web-2 (OOMKilled), api-5 (CrashLoopBackOff), worker-1 (manual)
Streaming

Real-time tail

Logs stream in live. Errors light up red, warnings amber — you see problems the instant they happen.

14:23:01 INFO req completed 23ms
14:23:02 WARN slow query >500ms
14:23:03 ERR  conn refused db-primary
streaming...
Filtering

Regex & structured queries

Full regex, JSON path queries, severity filters. Save presets and share them with your team.

level=error source=k8s | grep "OOM"
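The filtering idea above can be sketched in a few lines of Python. The query syntax here is plain regex plus a severity check for illustration only; it is not Log Talon's actual filter grammar:

```python
import re

def filter_logs(lines, level=None, pattern=None):
    """Keep lines matching an optional severity level and an optional regex."""
    regex = re.compile(pattern) if pattern else None
    out = []
    for line in lines:
        # Severity match: look for the level token surrounded by whitespace.
        if level and f" {level.upper()} " not in f" {line} ":
            continue
        # Regex match anywhere in the raw line.
        if regex and not regex.search(line):
            continue
        out.append(line)
    return out

logs = [
    "14:23:01 INFO req completed 23ms",
    "14:23:02 ERROR OOMKilled worker-pod-2",
    "14:23:03 ERROR conn refused db-primary",
]
print(filter_logs(logs, level="error", pattern="OOM"))
# ['14:23:02 ERROR OOMKilled worker-pod-2']
```

Saved presets in the app play the same role as reusing the `level` and `pattern` arguments here.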
Analytics

Built-in dashboards

Error rates, log volume over time, p95 latency — auto-generated from your log data without config.

New

Chat from Telegram & Discord

Ask questions about your logs from anywhere. Connect a Telegram or Discord bot and get AI-powered log analysis directly in your chat — no need to open the desktop app.

Telegram Bot · Discord Bot · Remote Access · Conversation History
You: why is auth-service down?
Bot: auth-service crashed at 14:02 due to OOMKilled. Pod restarted 3x in the last hour. Recommend increasing memory limits.

AI Chat Interface

Ask questions about your logs in natural language. Log Talon's AI understands context across all your sources and provides actionable insights with references to specific log lines.

  • Natural language queries
  • Cross-source context awareness
  • Root cause analysis
log-talon — AI Chat
You: What caused the spike in 500 errors between 2pm and 3pm?
AI:

The 500 error spike correlates with 3 events:

  1. Redis connection pool exhaustion at 14:02:31
  2. Cascading timeout failures in /api/auth
  3. Memory limit reached on pod web-3 at 14:15:00

Recommend: increase Redis pool size and set circuit breaker on auth service.

Multi-Source Aggregation

Connect all your log sources in one place — from local files to GCP, AWS, and Azure cloud logs. Log Talon normalizes timestamps, parses structured formats, and gives you a unified view across your entire infrastructure.

  • Unified timeline view
  • Automatic format detection
  • Live connection status
log-talon — Sources
Source         Type     Status  Lines/min
api-gateway    Docker   Live    2.4k
auth-service   K8s Pod  Live    890
gcp-prod-logs  GCP      Live    1.8k
cloudwatch-us  AWS      Live    956
/var/log/sys   File     Live    156
192.168.1.10   Syslog   Warn    3.1k

Real-Time Monitoring

Watch logs stream in as they happen with sub-second latency. Automatic syntax highlighting, error detection, and smart grouping help you spot issues instantly.

  • Sub-second latency
  • Auto error highlighting
  • Smart log grouping
log-talon — Real-Time
14:23:01.234 INFO  Request completed in 23ms
14:23:01.456 WARN  Slow query detected (>500ms)
14:23:01.789 ERROR Connection refused: db-primary
14:23:02.012 INFO  Failover to db-replica-1
14:23:02.345 INFO  Service recovered, latency normal
Streaming from 4 sources...

Telegram & Discord Bot

Query your logs from anywhere without opening the desktop app. Connect a Telegram or Discord bot and get AI-powered analysis right in your chat. Perfect for on-call, mobile triage, or team collaboration — ask questions and get answers with full conversation history.

  • Telegram & Discord support
  • Remote log analysis from any device
  • Persistent conversation history
log-talon — Telegram Bot
@dev_alex: any errors in the last 30 minutes?
LogTalon:

Found 12 errors across 3 sources:

  • 8x connection timeout in api-gateway
  • 3x OOMKilled in worker-pod-2
  • 1x disk full warning on /var/log
@dev_alex: what caused the api-gateway timeouts?
LogTalon: Redis connection pool hit max at 14:32:15. Upstream latency jumped from 12ms to 3.2s.
How It Works

Up and running in 3 steps

No cloud accounts, no complex setup. Download, connect your sources, and start asking questions.

01

Download

Install Log Talon on macOS, Windows, or Linux. It's a lightweight desktop app — no cloud signup required.

02

Add Sources

Point Log Talon at your log files, Docker containers, Kubernetes pods, or syslog streams. Setup takes seconds.

03

Ask Questions

Use AI chat to interrogate your logs. Ask about errors, find patterns, or get root-cause analysis — all in natural language.

Integrations

Connects to your stack

Log Talon works with the log sources and AI providers you already use.

Log Sources

Ingest logs from anywhere in your infrastructure

Log Files
Any .log, .txt, or structured file
Docker
Container logs via Docker API
Kubernetes
Pod logs with namespace filtering
Syslog
UDP/TCP syslog receiver
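For illustration, here is a minimal sketch of what a client sending to a UDP syslog receiver does. The PRI prefix follows the standard syslog header (facility × 8 + severity); the port number and the idea that Log Talon listens on it are assumptions for the demo, not documented defaults:

```python
import socket

def send_syslog(message, host="127.0.0.1", port=5514, facility=1, severity=6):
    """Send a minimal syslog datagram: <PRI>message over UDP."""
    pri = facility * 8 + severity  # e.g. user-level (1) + informational (6) = 14
    payload = f"<{pri}>{message}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload

# Demo: a throwaway local UDP socket stands in for the real receiver.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))            # let the OS pick a free port
_, port = recv.getsockname()
send_syslog("app started", port=port)
print(recv.recvfrom(1024)[0].decode())  # <14>app started
recv.close()
```

A real deployment would point existing syslog daemons (rsyslog, syslog-ng) at the receiver's address instead of hand-crafting datagrams like this.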
Google Cloud
Cloud Logging via GCP APIs
AWS
CloudWatch Logs integration
Azure
Azure Monitor & Log Analytics

AI Providers

Choose your AI backend — local or cloud

Ollama
100% local, no API key needed
OpenAI
GPT-4o and GPT-4o-mini
Anthropic
Claude 4 Sonnet & Opus
OpenRouter
Access 100+ models

Communication Channels

Query your logs remotely via chat bots

Telegram
Bot via @BotFather, chat-based queries
Discord
Guild & DM support, mention-based triggers

Set up in seconds — just add your bot token and start chatting. Tokens are stored securely in your system keyring.

Supported Log Formats

JSON
Logfmt
Apache/Nginx
Syslog RFC 5424
CSV/TSV
Custom regex
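To show what one of these formats looks like on the wire, here is a hedged sketch of a logfmt parser: space-separated key=value pairs, with double quotes around values containing spaces. It illustrates the format only and is not Log Talon's internal parser:

```python
import re

# One key=value pair: the value is either a quoted string or a bare token.
LOGFMT_PAIR = re.compile(r'(\w+)=("([^"]*)"|\S+)')

def parse_logfmt(line):
    """Parse a logfmt line into a dict of string values."""
    out = {}
    for key, raw, quoted in LOGFMT_PAIR.findall(line):
        out[key] = quoted if raw.startswith('"') else raw
    return out

line = 'level=error msg="connection refused" source=db-primary ms=142'
print(parse_logfmt(line))
# {'level': 'error', 'msg': 'connection refused', 'source': 'db-primary', 'ms': '142'}
```

JSON and CSV/TSV parse with the standard library; logfmt is simple enough that a single regex like this covers the common case.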
Open Source

Free & Open Source

MIT Licensed

Log Talon is completely free to use. No hidden fees, no usage limits, no telemetry. Built in the open with Rust, React, and Tauri.

FAQ

Frequently asked questions

Everything you need to know about Log Talon.

Is Log Talon really free?
Yes. Log Talon is 100% free and open source under the MIT license. There are no paid tiers, usage limits, or hidden fees. You can use it for personal or commercial purposes.
Does it work offline?
Absolutely. Log Talon is a desktop app that runs entirely on your machine. When paired with Ollama for local AI, you get full functionality without any internet connection.
What platforms are supported?
Log Talon is available for macOS (Intel & Apple Silicon), Windows (x64), and Linux (AppImage, .deb). Built with Tauri for a native experience on every platform.
How does it compare to Datadog or Splunk?
Log Talon is designed for developers who want to analyze logs locally without sending data to the cloud. Unlike Datadog or Splunk, there's no per-GB pricing, no data retention limits, and no vendor lock-in. It's best for local development, debugging, and small-to-medium infrastructure.
Do I need API keys for AI features?
Not if you use Ollama — it runs AI models locally with no API key required. For cloud providers (OpenAI, Anthropic, OpenRouter), you'll need your own API key. Your keys are stored locally and never sent to our servers.
How do the Telegram and Discord bots work?
Log Talon includes a built-in bot service that connects to Telegram and Discord. Just add your bot token (from @BotFather for Telegram or the Discord Developer Portal), and you can ask questions about your logs directly from your chat app. The bot uses the same AI pipeline as the desktop app, with full conversation history and context. Tokens are stored securely in your system keyring.
Is there enterprise support?
Log Talon is a community-driven project. For enterprise needs, you can self-host, contribute to the project, or reach out on GitHub Discussions for community support.

Ready to make sense of your logs?

Download Log Talon and start debugging smarter — with AI-powered analysis, right on your machine.

Download Log Talon

Available for macOS

The app is not code signed. After installing, run xattr -cr /Applications/LogTalon.app in Terminal to remove the quarantine attribute so macOS Gatekeeper will let it launch.