22 Checks, 7 Categories
From content discoverability to markdown availability, AFDocs tests everything that affects how agents interact with your docs.
Measure how well AI agents can read, navigate, and use your documentation site.
Agent-Friendly Docs Scorecard
==============================
Overall Score: 72 / 100 (C)
Category Scores:
Content Discoverability 72 / 100 (C)
Markdown Availability 60 / 100 (D)
Page Size & Truncation Risk 45 / 100 (F)
Content Structure 82 / 100 (B)
URL Stability 93 / 100 (A)
Observability 71 / 100 (C)
Authentication 100 / 100 (A+)

Claude Code, Cursor, GitHub Copilot, Windsurf, Codex, Gemini CLI: millions of developers use AI coding agents that read your documentation in real time. When an agent can't read your docs, it falls back on training data or other sources, and developers get bad answers. You won't get bug reports about it. The developer blames the agent, or your product, and moves on. Or the agent recommends a different product it can understand and use better, and developers never discover your product at all.
Many documentation sites have problems agents can't work around: client-side rendering that delivers empty shells, pages so bloated with CSS and JavaScript that content gets truncated, no discovery path to clean markdown versions. These are invisible to human readers but dealbreakers for agents.
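As an illustrative heuristic (not part of AFDocs), the "empty shell" symptom can be detected from the raw HTML itself: a page that ships script tags but almost no visible text is likely rendered client-side. A minimal Python sketch, using only the standard library:

```python
from html.parser import HTMLParser


class _TextAndScripts(HTMLParser):
    """Tally visible text characters and script tags in raw HTML."""

    def __init__(self):
        super().__init__()
        self.text_chars = 0
        self.script_tags = 0
        self._in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.script_tags += 1
            self._in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        # Count only text outside <script> blocks as "visible".
        if not self._in_script:
            self.text_chars += len(data.strip())


def looks_like_empty_shell(html: str, min_text_chars: int = 200) -> bool:
    """Heuristic: scripts present but almost no visible text suggests
    a client-side-rendered shell that an agent fetching raw HTML
    would see as empty. The 200-char threshold is an assumption."""
    parser = _TextAndScripts()
    parser.feed(html)
    return parser.script_tags > 0 and parser.text_chars < min_text_chars
```

A page like `<div id="root"></div><script src="/app.js"></script>` trips this check, while a server-rendered page with real body text passes.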
The good news: most fixes are configuration changes, not content rewrites. Adding an llms.txt file, enabling server-side rendering, or serving .md URLs can move a site from an F to a B in a single sprint.
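For reference, an llms.txt file is a plain markdown index served at the site root: an H1 title, a one-line blockquote summary, and H2 sections listing links to agent-readable pages. A minimal sketch (the URLs and descriptions here are placeholders, not a real site):

```markdown
# Example Docs

> Documentation for Example, a hypothetical API product.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): Install and make a first request
- [API Reference](https://docs.example.com/api.md): Endpoints, parameters, and errors

## Optional

- [Changelog](https://docs.example.com/changelog.md): Release history
```

Linking to .md versions of each page gives agents a clean-text path that bypasses the rendered HTML entirely.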
npx afdocs check https://docs.example.com --format scorecard

The scorecard shows category breakdowns, system-level diagnostics, and per-check results with fix suggestions.