
AFDocs: Test your docs against the Agent-Friendly Documentation Spec

Measure how well AI agents can read, navigate, and use your documentation site.

Agent-Friendly Docs Scorecard
==============================

  Overall Score: 72 / 100 (C)

  Category Scores:
    Content Discoverability       72 / 100 (C)
    Markdown Availability         60 / 100 (D)
    Page Size & Truncation Risk   45 / 100 (F)
    Content Structure             82 / 100 (B)
    URL Stability                 93 / 100 (A)
    Observability                 71 / 100 (C)
    Authentication                100 / 100 (A+)
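The letter grades above follow an ordinary percentage-to-letter mapping. As an illustrative sketch (the thresholds below are assumptions; the spec's actual cutoffs and any category weighting may differ):

```python
def letter_grade(score: int) -> str:
    """Map a 0-100 score to a letter grade (assumed thresholds)."""
    if score >= 97:
        return "A+"
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

categories = {
    "Content Discoverability": 72,
    "Markdown Availability": 60,
    "Page Size & Truncation Risk": 45,
    "Content Structure": 82,
    "URL Stability": 93,
    "Observability": 71,
    "Authentication": 100,
}

for name, score in categories.items():
    print(f"{name:30s} {score:3d} ({letter_grade(score)})")
```

With these assumed cutoffs, the mapping reproduces the grades shown in the scorecard above.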

Your docs have a new audience

Millions of developers use AI coding agents such as Claude Code, Cursor, GitHub Copilot, Windsurf, Codex, and Gemini CLI, and those agents read your documentation in real time. When an agent can't read your docs, it falls back on training data or other sources, and developers get bad answers. You won't get bug reports about it: the developer blames the agent or your product and moves on. Or the agent recommends a different product it can understand and use better, and developers never discover your product at all.

Many documentation sites have problems agents can't work around: client-side rendering that delivers empty shells, pages so bloated with CSS and JavaScript that content gets truncated, no discovery path to clean markdown versions. These are invisible to human readers but dealbreakers for agents.

The good news: most fixes are configuration changes, not content rewrites. Adding an llms.txt, enabling server-side rendering, or serving .md URLs can move a site from an F to a B in a single sprint. Read the full business case →
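One of the cheapest fixes above, serving .md companions, amounts to a URL mapping. A minimal sketch of that mapping (the index.md treatment of trailing slashes is an assumption, not something the spec is quoted as requiring):

```python
from urllib.parse import urlsplit, urlunsplit

def md_variant(url: str) -> str:
    """Return the companion markdown URL for a docs page,
    following an append-.md convention."""
    parts = urlsplit(url)
    path = parts.path or "/"
    if path.endswith("/"):
        path += "index.md"   # assumed convention for directory-style URLs
    elif not path.endswith(".md"):
        path += ".md"
    return urlunsplit((parts.scheme, parts.netloc, path,
                       parts.query, parts.fragment))

print(md_variant("https://docs.example.com/guide/install"))
# https://docs.example.com/guide/install.md
```

On the server side the same mapping can usually be expressed as a single rewrite rule, which is why this fix is configuration rather than content work.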

Get your score

```bash
npx afdocs check https://docs.example.com --format scorecard
```

The scorecard shows category breakdowns, system-level diagnostics, and per-check results with fix suggestions.