This post isn’t a dunk. It’s a practical map of where Kibana reporting shines, where it predictably breaks, and what teams do when they hit the wall.
Kibana’s PDF/PNG reporting is built around rendering what you see on screen into an export. Under the hood, reports are generated on the Kibana server as background jobs coordinated through Elasticsearch documents.
And the rendering itself is based on headless Chromium (Kibana manages Chromium binaries and drives the browser for screenshotting / PDF exports).
This architecture implies two important truths:
Reporting inherits UI fragility. If the UI struggles to render a view reliably, reporting will struggle too.
Reporting is “what’s on the screen,” not “what’s true.” It’s a presentation capture mechanism, not a semantic reporting engine.
That’s fine—until your use case is not a screenshot.
Most reporting needs are comparative:
“Did error rate change since last week?”
“Is this spike new or just seasonality?”
“Only notify if the KPI moved materially.”
“Send the PDF only if something significant changed.”
Kibana reporting doesn’t have a native concept of diffing between runs, conditional delivery, or “what changed since last export.” It creates a PDF/PNG of the current dashboard state.
Elastic’s own wording around reporting reinforces this “what you see” model: PDF reports are tied directly to what is seen on screen.
Why this matters: In real teams, attention is the scarce resource. Static scheduled PDFs quickly become noise—people stop reading them because they don’t answer the question “why am I being pinged?”
If your dashboards are small and tidy, Kibana reporting can be smooth. But real dashboards aren’t always small and tidy—especially in mid-market orgs where dashboards become living shared artifacts.
Elastic’s own troubleshooting guidance acknowledges that large pixel counts (big dashboards, lots of panels) can demand more memory/CPU and suggests splitting dashboards into smaller artifacts.
In practice, teams run into:
PDFs with unusable pagination or layout
Panels stretched, clipped, or missing
“For printing” exports that time out or format awkwardly
These aren’t hypothetical. Community reports describe large dashboards producing a single giant unprintable page or poorly paginated PDFs with cut-off / stretched visualizations.
And for truly huge canvases or dashboards, people end up increasing memory and timeouts dramatically and still failing—because you’re essentially asking a headless browser to deterministically render a complex app view into a document.
When reporting fails, Kibana surfaces errors like “Max attempts reached.” Elastic documents two common causes:
The export spans a large amount of data and Kibana hits xpack.reporting.queue.timeout
Reverse-proxy / server settings are not configured correctly
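If you're tuning around these failures, the relevant knobs live in kibana.yml. A hedged example — the setting names are real Elastic reporting settings, but the values here are illustrative and defaults vary by Kibana version:

```yaml
# Illustrative values only — check the defaults for your Kibana version
xpack.reporting.queue.timeout: 300000    # ms a report job may run before it is failed
xpack.reporting.capture.maxAttempts: 3   # retries before "Max attempts reached"
```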
This reveals a hidden cost: your team becomes the operator of a rendering farm.
Instead of “schedule report,” your backlog becomes:
tuning queue timeouts
tuning capture timeouts
resizing dashboards
splitting dashboards
debugging reverse proxy edge cases
chasing nondeterministic Chromium issues
That’s not reporting. That’s maintaining an internal PDF renderer.
Kibana alerting is solid for what it’s built to do: create rules against Elasticsearch data and send actions through connectors. Elastic positions it as a consistent interface across use cases, with integrations and scripting available.
But alerting and reporting live in different mental models:
Alerts are about signals: something crossed a threshold, a rule matched, an anomaly score tripped.
Reports are about communication: what changed, what it means, and what to do.
You can send an alert to Slack. You can attach a PDF. But Kibana doesn’t give you a first-class, built-in way to reliably produce human-ready, contextual, change-aware narratives (because its primitives are rules and screenshots).
So teams either:
spam alerts (and burn attention), or
schedule reports (and hope people read them), or
manually add context (and burn engineering time)
For compliance, the question is rarely “what does the dashboard look like right now?”
It’s:
“What was true on that date?”
“Can you prove it wasn’t tampered with?”
“Can you show consistent evidence collection over time?”
Kibana reporting can generate PDFs, but it’s not designed as a compliance evidence pipeline. If you’re in a regulated environment, you’ll feel the gap quickly: lack of run-to-run comparison, lack of explicit evidence controls, and the ease with which dashboards change after the fact.
(If you’re already collecting screenshots into a GRC folder manually, you know exactly what this costs.)
These are the patterns that appear again and again once Kibana reporting doesn’t fit.
The most common workaround is splitting big dashboards into smaller ones—this is even recommended in Elastic's troubleshooting guidance.
Cost: redesign work and fractured storytelling. People lose the “single pane” view that made the dashboard valuable.
Another is raising limits: Elastic explicitly points at timeout settings like xpack.reporting.queue.timeout when exports fail.
Cost: ongoing ops toil. Reporting becomes another service to babysit.
Some teams rebuild reports in Canvas because it gives more control over page-like layouts (and community responses often point users there).
Cost: now you’re maintaining two artifacts: operational dashboards and report layouts.
Other teams write their own headless-browser capture scripts. This works—because Kibana itself uses the same headless-browser approach.
Cost: brittle scripts, auth headaches, constant UI changes breaking automation.
There’s a whole ecosystem of third-party Kibana reporting tools that exist for a reason: teams want scheduled delivery, fewer license constraints, and more control.
Cost: additional platform, integration, and security review—plus you still often end up with static screenshots.
Kibana reporting is a good fit when:
You export small-to-medium dashboards
You’re okay with static PDFs/PNGs
You don’t need cross-tool reporting
“Send every Monday” is acceptable even when nothing changed
It stops being a good fit when:
Stakeholders ask “what changed?” more than “what is it?”
Reports are frequently failing on large dashboards (timeouts/layout)
You need conditional delivery (only notify on meaningful change)
You need compliance-ready evidence artifacts
Your reality is multi-tool (Kibana + Grafana + SaaS + internal UIs)
This isn’t a moral failing of Kibana. It’s just not what Kibana reporting was designed to be.
If you’re designing for real-world reporting needs, these primitives matter:
Conditional reporting
“Only send if KPI moved by X”
“Only send if visual changed”
Run-to-run diff
detect change, summarize deltas, highlight what matters
Narrative context
explain “why this matters,” not just present charts
Multi-source support
authenticated web UIs + APIs, not just one stack
Operational reliability
reporting should not require you to become a Chromium SRE
Kibana reporting gives you the screenshot. Many teams need the communication system.
What Kibana lacks isn’t another export format. It’s a layer that understands change over time.
Teams that move past screenshot-based reporting introduce a thin reporting layer that:
captures dashboards or data at regular intervals
compares current state to previous runs
generates reports only when something meaningfully changes
adds minimal narrative context for humans
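As a sketch, the core of such a layer is only a few lines. Everything here is illustrative: fetching the KPI and the send_report() hook are placeholders you would supply yourself, and the 20% threshold is just an example.

```python
# Sketch of a change-aware reporting layer: persist the last run's KPI,
# compare the current value against it, and only deliver on a real move.
import json
from pathlib import Path

STATE = Path("last_run.json")  # placeholder location for run-to-run state

def load_previous() -> dict:
    return json.loads(STATE.read_text()) if STATE.exists() else {}

def meaningful_change(current: float, previous: float, threshold: float = 0.2) -> bool:
    """True when the KPI moved more than `threshold` relative to the last run."""
    if previous == 0:
        return current != 0
    return abs(current - previous) / abs(previous) > threshold

def run(current_value: float, send_report) -> bool:
    """One reporting cycle: diff against the previous run, deliver conditionally."""
    prev = load_previous().get("kpi")
    changed = prev is None or meaningful_change(current_value, prev)
    if changed:
        delta = None if prev is None else current_value - prev
        send_report(f"KPI is {current_value} (change since last run: {delta})")
    STATE.write_text(json.dumps({"kpi": current_value}))  # save state for next run
    return changed
```

The first run always delivers (there is no baseline yet); after that, quiet periods produce no output at all, which is exactly what makes the eventual reports trusted.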
Crucially, this layer does not replace Kibana.
Kibana remains the system of exploration. The reporting layer becomes the system of communication.
Once teams adopt this pattern, reporting stops being noisy—and starts being trusted.
Capture the Kibana Discover hits count (an integer) for the last hour
Capture the same, but for the last 24 hours
Use the "calculate" action to obtain the hourly mean hit count over the past 24h
Use the "conditional block" action to compare the last hour reading to the 24h mean
If the difference is less than 20%, don't send the alert
This example is included by default in the job templates of every new Anaphora installation.
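The arithmetic behind that conditional block is straightforward. Here is a hedged sketch in plain Python — the function names are ours for illustration, not Anaphora actions:

```python
# Sketch of the 20% rule above: compare the last hour's hit count to the
# hourly mean over the past 24 hours, and only alert on a big enough move.
def hourly_mean(total_hits_24h: float, hours: int = 24) -> float:
    return total_hits_24h / hours

def should_alert(last_hour_hits: float, total_hits_24h: float,
                 threshold: float = 0.2) -> bool:
    """True when the last hour deviates from the 24h hourly mean by >= threshold."""
    mean = hourly_mean(total_hits_24h)
    if mean == 0:
        return last_hour_hits > 0
    return abs(last_hour_hits - mean) / mean >= threshold
```

So with 2,400 hits over 24 hours (a mean of 100/hour), a last-hour reading of 130 would trigger the alert, while 110 would be suppressed.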
Capture a string/numeric value from a website
Capture numeric/string value from another website
Compare the two
Skip sending the report if condition is met
This is useful for brute-force attack detection, or any other alert on a high log-event count over time.
Capture the Kibana Discover query result “count” for the last hour
Check whether this value is > 1 million
If so, send the report (notify the system admin)
Save time by anticipating some common tasks
Pre-bake some company-branded PDF templates
Reuse the boilerplate of a login process to a particular website
You can create unlimited templates. You can also promote a job to become a template.


You can now make your PDF prettier by adding a page background (PNG, SVG, solid colors, gradients, etc). Trick: use lower opacity to watermark the background!
A space is a virtual container of Jobs and Delivery Interfaces. Users and roles can be associated with any number of spaces via permissions, at one of three access levels: read-only, read-write, or admin.

Users with admin access to two or more spaces can copy resources between them.

Navigate to Settings > System Settings to find all available authentication and authorization connector configurations.

We support LDAP for the major open-source server implementations such as OpenLDAP and LemonLDAP, as well as Microsoft Active Directory Domain Services (AD DS), IBM Security Directory Server, and others.
SAML 2.0 SSO is available for connecting to your enterprise's centralized authentication; it works with Keycloak, Azure ADFS, Okta, OneLogin, and many others.

OpenID Connect support has finally arrived! Keycloak is our reference implementation, but OIDC is a real passe-partout for the wider internet.

You can also download debug files containing the browser's internal state, making it easier for us to support you when issues are hard to diagnose.

No danger of getting spammed, as a maximum notification frequency can be set.
Alternatively, you can use the REST API to monitor the green, yellow, red state of each job.
Symptoms:
- White/black screen in output
- Missing dashboard elements
- Incomplete rendering
Solutions:
- Increase wait time
- Check viewport size
- Verify CSS selectors
Check Browser Logs
// Example log pattern
[ERROR] Failed to load resource: net::ERR_CONNECTION_TIMED_OUT
Verify Network Access
Review Resource Usage
chromium_flags:
- --disable-gpu
- --no-sandbox
- --disable-dev-shm-usage
- --window-size=1920,1080
Enable debug mode for detailed logs:
docker run -e DEBUG=true beshultd/anaphora
Debug output includes:
If issues persist:
Need more help? Join our community forum for expert assistance.
Anaphora provides powerful tools to monitor your reporting jobs. This guide covers everything you need to know about job monitoring and management.
The Jobs dashboard is your command center for monitoring all reporting activities:
Jobs in Anaphora can be in several states:
PENDING: Waiting to start
RUNNING: Currently executing
COMPLETED: Successfully finished
FAILED: Encountered an error
PAUSED: Temporarily suspended
Configure notifications for:
# Example API call to check job status (requires the `requests` package)
import requests

response = requests.get(
    'https://your-anaphora/api/v1/jobs/status',
    headers={'Authorization': 'Bearer YOUR_TOKEN'}
)
print(response.json())
Create personalized views:
Regular maintenance ensures smooth operation:
Need help? Check our troubleshooting guide or contact support.
Before we begin, make sure you have:
Getting started with Anaphora is simple:
docker run --name=anaphora -p 3000:3000 --rm -d beshultd/anaphora
Then visit http://localhost:3000 and log in with the default credentials: admin / admin. Go to the composer and start creating your first report!
Here you can add rows and columns to your report.

Pages can have custom backgrounds, like a logo or a large image.
Next, let's go to the deliver screen, where you can specify where to send your report.
By default, a newly created job is assigned a dummy delivery interface. This is very useful for testing, and for creating jobs right away without first configuring the details of a real delivery interface.
Anaphora supports many delivery protocols: SMTP, S3, Webhook (e.g. for Mattermost chat), Slack, Mailgun, and more to come.

This time, let's select the SMTP interface. Here's what to set up:
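As a rough sketch, an SMTP delivery interface needs the usual mail-relay details. The field names below are generic placeholders, not necessarily Anaphora's exact labels, and the values are examples:

```yaml
host: smtp.example.com
port: 587                      # STARTTLS; use 465 for implicit TLS
username: reports@example.com
password: <app-password>
from: reports@example.com
to: team@example.com
```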

You can test your configuration straight away using the included test feature.

If your email was received, you can go back to the job configuration and assign the new delivery interface to the job.

In this guide, we covered:
With these basics, you're ready to start creating and sharing professional reports from your Kibana dashboards.
S3 delivery interface
Ready-made template jobs
Better PDF composer
Max notification interval (no spam)

Kibana Connector waits for Dashboard to be fully loaded
New action: wait an arbitrary amount of time before continuing
Templates: Logo upload, color theme and default font
When testing, capture will output the correct error message on wrong credentials
Fixed: cron jobs not triggering correctly