Your AI agent doesn't know what's broken in production. This fixes that.
Last9 MCP Server connects Claude, Cursor, Windsurf, and any other MCP-capable AI assistant directly to your production observability data — logs, metrics, traces, exceptions, database queries, alerts, and deployments. The agent stops guessing and starts reading the actual signal.
No binary to install. No tokens to manage. One URL, OAuth in your browser, done.
Find your org slug in your Last9 URL: app.last9.io/<org_slug>/...
Claude Code:

```shell
claude mcp add --transport http last9 https://app.last9.io/api/v4/organizations/<org_slug>/mcp
```

Type `/mcp`, select `last9`, authenticate. That's it.
Cursor:

Settings > MCP > Add New MCP Server:

```json
{
  "mcpServers": {
    "last9": {
      "type": "http",
      "url": "https://app.last9.io/api/v4/organizations/<org_slug>/mcp"
    }
  }
}
```

Click Connect, complete OAuth.
VS Code:

Requires v1.99+. Open Command Palette → MCP: Add Server, paste the URL, authenticate.

Or directly in settings.json:

```json
{
  "mcp": {
    "servers": {
      "last9": {
        "type": "http",
        "url": "https://app.last9.io/api/v4/organizations/<org_slug>/mcp"
      }
    }
  }
}
```

Windsurf:

Settings > Cascade > Open MCP Marketplace > gear icon (mcp_config.json):
```json
{
  "mcpServers": {
    "last9": {
      "serverUrl": "https://app.last9.io/api/v4/organizations/<org_slug>/mcp"
    }
  }
}
```

Claude Web/Desktop:

Settings > Connectors > Add custom connector. Name it last9, paste the URL, authenticate.
Requires admin access to your Claude organization.
Use this when your MCP client doesn't support HTTP transport, or when you need the server running locally.
Homebrew:

```shell
brew install last9/tap/last9-mcp
```

NPM:

```shell
npm install -g @last9/mcp-server@latest
# or directly:
npx -y @last9/mcp-server@latest
```

Binary releases (Windows / manual):
Download from GitHub Releases:
| Platform | Archive |
|---|---|
| Windows (x64) | last9-mcp-server_Windows_x86_64.zip |
| Windows (ARM64) | last9-mcp-server_Windows_arm64.zip |
| Linux (x64) | last9-mcp-server_Linux_x86_64.tar.gz |
| Linux (ARM64) | last9-mcp-server_Linux_arm64.tar.gz |
| macOS (x64) | last9-mcp-server_Darwin_x86_64.tar.gz |
| macOS (ARM64) | last9-mcp-server_Darwin_arm64.tar.gz |
Only admins can create tokens.
- Go to API Access
- Click Generate Token with Write permissions
- Copy it
Homebrew:

```json
{
  "mcpServers": {
    "last9": {
      "command": "/opt/homebrew/bin/last9-mcp",
      "env": {
        "LAST9_REFRESH_TOKEN": "<your_refresh_token>"
      }
    }
  }
}
```

NPM:

```json
{
  "mcpServers": {
    "last9": {
      "command": "npx",
      "args": ["-y", "@last9/mcp-server@latest"],
      "env": {
        "LAST9_REFRESH_TOKEN": "<your_refresh_token>"
      }
    }
  }
}
```

Where to paste this:
| Client | Location |
|---|---|
| Claude Web/Desktop | Settings > Developer > Edit Config (claude_desktop_config.json) |
| Cursor | Settings > Cursor Settings > MCP > Add New Global MCP Server |
| Windsurf | Settings > Cascade > MCP Marketplace > gear icon (mcp_config.json) |
| VS Code | Wrap in { "mcp": { "servers": { ... } } } in settings.json — details |
VS Code STDIO config
```json
{
  "mcp": {
    "servers": {
      "last9": {
        "type": "stdio",
        "command": "/opt/homebrew/bin/last9-mcp",
        "env": {
          "LAST9_REFRESH_TOKEN": "<your_refresh_token>"
        }
      }
    }
  }
}
```

For NPM: use `"command": "npx"` and add `"args": ["-y", "@last9/mcp-server@latest"]`.
Windows
After downloading from GitHub Releases, extract and point to the full path:
```json
{
  "mcpServers": {
    "last9": {
      "command": "C:\\Users\\<user>\\AppData\\Local\\Programs\\last9-mcp-server.exe",
      "env": {
        "LAST9_REFRESH_TOKEN": "<your_refresh_token>"
      }
    }
  }
}
```

The NPM route is easier on Windows — no path management.
| Variable | Default | Description |
|---|---|---|
| `LAST9_REFRESH_TOKEN` | (required) | Refresh token from API Access |
| `LAST9_DATASOURCE` | org default | Datasource/cluster name — useful when you have multiple Levitate clusters |
| `LAST9_API_HOST` | `app.last9.io` | Override the API host |
| `LAST9_MAX_GET_LOGS_ENTRIES` | `5000` | Max entries for chunked `get_logs` requests |
| `LAST9_DEBUG_CHUNKING` | `false` | Set `true` to log chunk-planning details for `get_logs`, `get_service_logs`, `get_traces` |
| `LAST9_DISABLE_TELEMETRY` | `true` | Set `false` to enable internal OTel tracing |
| `OTEL_SDK_DISABLED` | — | Standard OTel env var. Overrides `LAST9_DISABLE_TELEMETRY` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | — | OTLP collector endpoint (only when telemetry is enabled) |
| `OTEL_EXPORTER_OTLP_HEADERS` | — | OTLP auth headers (only when telemetry is enabled) |
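In STDIO mode these variables go into the `env` block of your MCP config. A sketch with illustrative values — the datasource name and entry limit below are placeholders, not defaults:

```json
{
  "mcpServers": {
    "last9": {
      "command": "/opt/homebrew/bin/last9-mcp",
      "env": {
        "LAST9_REFRESH_TOKEN": "<your_refresh_token>",
        "LAST9_DATASOURCE": "my-levitate-cluster",
        "LAST9_MAX_GET_LOGS_ENTRIES": "2000"
      }
    }
  }
}
```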
- `get_service_summary` — Throughput, error rate, p95 response time across all services
- `get_service_environments` — Available environments for your services. Run this first — other APM tools need `env` from here
- `get_service_performance_details` — Full breakdown: throughput, error rate, p50/p90/p95/avg/max, apdex, availability
- `get_service_operations_summary` — Operations grouped by HTTP endpoints, DB calls, messaging, HTTP clients
- `get_service_dependency_graph` — Dependency map with throughput, latency, and error rates for upstream/downstream/infra
- `get_exceptions` — Server-side exceptions with service and span filters
Four tools that go directly at your database performance, derived from OpenTelemetry trace spans. No extra instrumentation needed if you're already using OTel.
- `get_databases` — Discover all databases across your infrastructure: DB type, host, throughput (queries/min), p95 latency, error rate, number of dependent services
- `get_database_slow_queries` — The actual slowest query executions, ordered by duration, with trace IDs for drilling into full traces
- `get_database_queries` — Query patterns and aggregates: how often a query runs, average/p95 duration, error rate
- `get_database_server_metrics` — Server-side metrics from the DB host itself (CPU, connections, buffer hit rates — depends on your DB system)
Supports PostgreSQL, MySQL, MongoDB, Redis, Aerospike, and anything else OTel traces with a `db_system` attribute.
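As an illustration, the agent might call `get_database_slow_queries` with arguments like the following — the service name and duration threshold are placeholder values, not defaults:

```json
{
  "name": "get_database_slow_queries",
  "arguments": {
    "db_system": "postgresql",
    "service_name": "checkout-api",
    "min_duration_ms": 250,
    "lookback_minutes": 60,
    "limit": 20
  }
}
```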
- `prometheus_range_query` — PromQL range queries over any metric
- `prometheus_instant_query` — Instant queries; use rollup functions like `avg_over_time`, `sum_over_time`
- `prometheus_label_values` — Label values for a given series
- `prometheus_labels` — All labels available for a series
Point these at a different datasource/cluster than the default by setting `LAST9_DATASOURCE`.
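For example, a `prometheus_range_query` call computing p95 latency might look like this — the metric name and label are hypothetical, so substitute whatever series exist in your cluster:

```json
{
  "name": "prometheus_range_query",
  "arguments": {
    "query": "histogram_quantile(0.95, sum by (le) (rate(http_server_duration_bucket{service=\"checkout-api\"}[5m])))",
    "lookback_minutes": 60
  }
}
```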
- `get_logs` — Full JSON pipeline log queries (aggregations, filters, field extraction)
- `get_service_logs` — Raw log lines for a service, filterable by severity and body content
- `get_log_attributes` — Available attributes in the log schema for a time window
- `get_drop_rules` — Log drop rules from Last9 Control Plane
- `add_drop_rule` — Create a new drop rule to cut log volume at the source
- `get_traces` — JSON pipeline trace queries for broad searches and aggregations
- `get_service_traces` — Traces by exact trace ID or service name. Use this when you have a trace ID — it's faster
- `get_trace_attributes` — Available attributes in the trace schema
- `get_change_events` — Deployments, config changes, rollbacks. Correlate incidents with what changed
- `get_alert_config` — Alert rule configurations — searchable by name, severity, type, tags
- `get_alerts` — Currently firing alerts within a time window
- `get_notification_channels` — Configured notification channels (Slack, PagerDuty, email, etc.)
- `did_you_mean` — When the agent isn't sure about an entity name, this returns the closest matches from your catalog (services, environments, hosts, databases, K8s deployments/namespaces, jobs). Up to 3 suggestions with similarity scores. The server calls this automatically before most tools when a name lookup returns empty.
Deep links on every response. Every tool returns a deep_link field — a direct URL into the Last9 dashboard for that exact query and time range. The agent can hand you the link; you click it; you're there.
Live attribute caching. At startup, the server fetches the actual log and trace attribute names from your data and embeds them into tool descriptions. This means the AI assistant knows what fields exist in your schema, not just a generic list. The cache refreshes every 2 hours.
Chunked large results. `get_logs` and `get_traces` handle large result sets through chunking rather than truncating. The default limit is 5000 entries for logs; configurable via `LAST9_MAX_GET_LOGS_ENTRIES`.
HTTP mode, curl testing, building from source
```shell
export LAST9_REFRESH_TOKEN="your_refresh_token"
export LAST9_HTTP=true
export LAST9_PORT=8080
./last9-mcp-server
```

Server starts at http://localhost:8080/mcp.
MCP Streamable HTTP requires an initialize handshake first. Don't set Mcp-Session-Id on the first request.
```shell
# Step 1: Initialize
SESSION_ID=$(curl -si -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2024-11-05",
      "capabilities": {},
      "clientInfo": {"name": "curl-test", "version": "1.0"}
    }
  }' | grep -i "^Mcp-Session-Id:" | awk '{print $2}' | tr -d '\r')

echo "Session: $SESSION_ID"

# Step 2: Send initialized notification
curl -s -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Mcp-Session-Id: $SESSION_ID" \
  -d '{"jsonrpc": "2.0", "method": "notifications/initialized", "params": {}}'

# Step 3: List tools
curl -s -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Mcp-Session-Id: $SESSION_ID" \
  -d '{"jsonrpc": "2.0", "id": 2, "method": "tools/list", "params": {}}'

# Step 4: Call a tool
curl -s -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Mcp-Session-Id: $SESSION_ID" \
  -d '{
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
      "name": "get_service_logs",
      "arguments": {
        "service": "your-service-name",
        "lookback_minutes": 30,
        "limit": 10
      }
    }
  }'
```

Building from source:

```shell
git clone https://github.com/last9/last9-mcp-server.git
cd last9-mcp-server
go build -o last9-mcp-server
LAST9_HTTP=true ./last9-mcp-server
```

`LAST9_HTTP=true` is for local development. For actual usage, the hosted HTTP endpoint is easier.
All parameters, time input standards, and details
- Absolute times (`start_time_iso`/`end_time_iso`, or `time_iso`) take precedence over `lookback_minutes`.
- For relative windows: use `lookback_minutes`.
- For absolute windows: use RFC3339/ISO8601 — `2026-02-09T15:04:05Z`.
- Legacy `YYYY-MM-DD HH:MM:SS` is accepted for compatibility only.
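Concretely, an absolute one-hour window in the accepted RFC3339 form looks like this (the timestamps are arbitrary examples); passing `"lookback_minutes": 60` instead would give the equivalent relative window ending now:

```json
{
  "start_time_iso": "2026-02-09T14:04:05Z",
  "end_time_iso": "2026-02-09T15:04:05Z"
}
```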
`get_exceptions`:
- `limit` (integer, optional): Max exceptions. Default: 20.
- `lookback_minutes` (integer, optional): Default: 60.
- `start_time_iso` / `end_time_iso` (string, optional): Absolute time range.
- `service_name` (string, optional): Filter by service.
- `span_name` (string, optional): Filter by span name.
- `deployment_environment` (string, optional): Filter by environment.
`get_service_summary`:
- `start_time_iso` / `end_time_iso` (string, optional)
- `env` (string, optional): Defaults to `prod`.
`get_service_environments`:
- `start_time_iso` / `end_time_iso` (string, optional)
All other APM tools require an `env` value. Use `""` if this returns empty.
`get_service_performance_details`:
- `service_name` (string, required)
- `lookback_minutes` (integer, optional): Default: 60.
- `start_time_iso` / `end_time_iso` (string, optional)
- `env` (string, optional): Defaults to `prod`.
`get_service_operations_summary`:
- `service_name` (string, required)
- `lookback_minutes` (integer, optional): Default: 60.
- `start_time_iso` / `end_time_iso` (string, optional)
- `env` (string, optional): Defaults to `prod`.
`get_service_dependency_graph`:
- `service_name` (string, optional)
- `lookback_minutes` (integer, optional): Default: 60.
- `start_time_iso` / `end_time_iso` (string, optional)
- `env` (string, optional): Defaults to `prod`.
`get_databases`:
- `env` (string, optional): Filter by environment. Default: all.
- `lookback_minutes` (integer, optional): Default: 60.
- `start_time_iso` / `end_time_iso` (string, optional)
`get_database_slow_queries`:
- `db_system` (string, optional): e.g. `postgresql`, `mysql`, `mongodb`, `redis`.
- `host` (string, optional): Database host (`net_peer_name`).
- `service_name` (string, optional): Calling service name.
- `env` (string, optional)
- `min_duration_ms` (float, optional): Minimum query duration in ms.
- `lookback_minutes` (integer, optional): Default: 60.
- `start_time_iso` / `end_time_iso` (string, optional)
- `limit` (integer, optional): Default: 20.
`get_database_queries`:
- `db_system` (string, optional)
- `host` (string, optional)
- `service_name` (string, optional)
- `env` (string, optional)
- `lookback_minutes` (integer, optional): Default: 60.
- `start_time_iso` / `end_time_iso` (string, optional)
- `limit` (integer, optional): Default: 20.
`get_database_server_metrics`:
- `db_system` (string, required): e.g. `postgresql`, `mysql`, `mongodb`, `redis`, `aerospike`.
- `host` (string, optional)
- `lookback_minutes` (integer, optional): Default: 60.
- `start_time_iso` / `end_time_iso` (string, optional)
`prometheus_range_query`:
- `query` (string, required): The PromQL query.
- `start_time_iso` / `end_time_iso` (string, optional): Defaults to last 60 min.
- `lookback_minutes` (float, optional): Default: 60.
`prometheus_instant_query`:
- `query` (string, required)
- `time_iso` (string, optional): Defaults to now.
- `lookback_minutes` (float, optional)
`prometheus_label_values`:
- `match_query` (string, optional): PromQL filter.
- `label` (string, required): Label name.
- `start_time_iso` / `end_time_iso` (string, optional)
`prometheus_labels`:
- `match_query` (string, optional): PromQL filter.
- `start_time_iso` / `end_time_iso` (string, optional)
`get_logs`:
- `logjson_query` (array, required): JSON pipeline query.
- `lookback_minutes` (integer, optional): Default: 5.
- `start_time_iso` / `end_time_iso` (string, optional)
- `limit` (integer, optional): Server default: 5000.
- `index` (string, optional): `physical_index:<name>` or `rehydration_index:<block_name>`.
`get_service_logs`:
- `service` (string, required)
- `lookback_minutes` (integer, optional): Default: 60.
- `limit` (integer, optional): Default: 20.
- `env` (string, optional)
- `severity_filters` (array, optional): e.g. `["error", "warn"]`. OR logic.
- `body_filters` (array, optional): e.g. `["timeout", "failed"]`. OR logic.
- `start_time_iso` / `end_time_iso` (string, optional)
- `index` (string, optional)
Multiple filter types combine with AND. Each array uses OR internally.
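So a hypothetical call like the one below matches lines whose severity is error OR warn, AND whose body contains timeout OR failed (the service name is a placeholder):

```json
{
  "name": "get_service_logs",
  "arguments": {
    "service": "checkout-api",
    "lookback_minutes": 60,
    "severity_filters": ["error", "warn"],
    "body_filters": ["timeout", "failed"]
  }
}
```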
`get_log_attributes`:
- `lookback_minutes` (integer, optional): Default: 15.
- `start_time_iso` / `end_time_iso` (string, optional)
- `region` (string, optional)
- `index` (string, optional)
`get_drop_rules`:
No parameters.
`add_drop_rule`:
- `name` (string, required)
- `filters` (array, required): Each filter: `key`, `value`, `operator` (`equals`/`not_equals`), `conjunction` (`and`).
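A sketch of an `add_drop_rule` call that drops debug logs from a noisy service. The rule name, attribute keys, and values are illustrative — check `get_log_attributes` for the field names in your actual schema:

```json
{
  "name": "add_drop_rule",
  "arguments": {
    "name": "drop-debug-checkout",
    "filters": [
      { "key": "service", "value": "checkout-api", "operator": "equals", "conjunction": "and" },
      { "key": "severity", "value": "debug", "operator": "equals", "conjunction": "and" }
    ]
  }
}
```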
`get_traces`:
Use for broad searches and aggregations. For exact trace ID lookup, use `get_service_traces`.
- `tracejson_query` (array, required)
- `start_time_iso` / `end_time_iso` (string, optional)
- `lookback_minutes` (integer, optional): Default: 60.
- `limit` (integer, optional): Default: 5000.
`get_service_traces`:
Exactly one of `trace_id` or `service_name` is required.
- `trace_id` (string, optional): Default lookback: 72 hours.
- `service_name` (string, optional): Default lookback: 60 min.
- `lookback_minutes` (integer, optional)
- `start_time_iso` / `end_time_iso` (string, optional)
- `limit` (integer, optional): Default: 10.
- `env` (string, optional)
`get_trace_attributes`:
- `lookback_minutes` (integer, optional): Default: 15.
- `start_time_iso` / `end_time_iso` (string, optional)
- `region` (string, optional)
`get_change_events`:
- `start_time_iso` / `end_time_iso` (string, optional)
- `lookback_minutes` (integer, optional): Default: 60.
- `service` (string, optional)
- `environment` (string, optional)
- `event_name` (string, optional): Call without this first to get `available_event_names`.
`get_alert_config`:
- `search_term` (string, optional): Free-text search across name, group, data source, tags.
- `rule_name` (string, optional)
- `severity` (string, optional)
- `rule_type` (string, optional): `static` or `anomaly`.
- `alert_group_name` / `alert_group_type` / `data_source_name` (string, optional)
- `tags` (array, optional): All must match (AND logic).
`get_alerts`:
- `time_iso` (string, optional): Evaluation time in RFC3339.
- `window` (integer, optional): Lookback in seconds. Default: 900. Range: 60–86400.
- `lookback_minutes` (integer, optional): Range: 1–1440.
`get_notification_channels`:
No parameters. Returns all configured notification channels (Slack, PagerDuty, email, webhooks, etc.).
query(string, required): The name to search for — partial, misspelled, or abbreviated.type(string, optional): Restrict to entity type:service,environment,host,database,k8s_deployment,k8s_namespace,job.
Returns up to 3 closest matches with similarity scores. Use this before any tool call where the entity name is uncertain. If a previous call returned empty results, try this before retrying.
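For instance, a misspelled service name could be resolved with a call like this (the query value is illustrative):

```json
{
  "name": "did_you_mean",
  "arguments": {
    "query": "chekout-api",
    "type": "service"
  }
}
```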
See TESTING.md for integration test setup and instructions.

