For AI agents: a documentation index is available at /llms.txt — markdown versions of all pages are available by appending .md to any URL path.
About AFDocs

AFDocs is an open-source tool that tests documentation sites against the Agent-Friendly Documentation Spec. The spec defines what makes documentation accessible to AI coding agents, based on observed behavior across real agent platforms. AFDocs automates those observations into 22 checks that produce a score and actionable fix suggestions.
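To make the check-and-suggest model concrete, here is a minimal sketch of what one such check could look like. The check ID, result shape, and logic are illustrative assumptions, not AFDocs internals; the real tool's 22 checks and output format are defined by the spec and may differ.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    check_id: str      # hypothetical identifier; real AFDocs check IDs may differ
    passed: bool
    suggestion: str    # actionable fix, empty when the check passes

def check_markdown_variant(status: int, content_type: str) -> CheckResult:
    """Illustrative check: did the .md variant of a page return
    HTTP 200 with a markdown content type?"""
    ok = status == 200 and "markdown" in content_type
    return CheckResult(
        check_id="markdown-variant",
        passed=ok,
        suggestion="" if ok else "Serve a text/markdown version at <path>.md",
    )

# A page whose .md variant returns an HTML error page fails the check
# and carries a fix suggestion.
result = check_markdown_variant(404, "text/html")
print(result.passed, "-", result.suggestion)
```

A scorer could then aggregate the `passed` flags across all checks and surface the non-empty `suggestion` strings as the actionable fix list.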

The Agent-Friendly Documentation Spec

The Agent-Friendly Documentation Spec is the foundation for everything AFDocs checks. It documents:

  • How agents actually discover, fetch, and consume documentation
  • What fails in practice (truncation, empty SPA shells, auth gates, broken redirects)
  • What works (llms.txt, markdown availability, content negotiation, proper status codes)
  • Specific agent behaviors observed across Claude Code, Cursor, GitHub Copilot, OpenAI Codex, Gemini CLI, and others
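One failure mode from the list above, the empty SPA shell, can be detected with a simple heuristic: strip scripts, styles, and tags, and see whether any visible text remains. This is a sketch of the idea, not how AFDocs implements it; the threshold and regex approach are assumptions for illustration.

```python
import re

def looks_like_spa_shell(html: str, min_text_chars: int = 200) -> bool:
    """Heuristic: a page whose visible text (scripts/styles stripped) is
    nearly empty was probably rendered client-side, so an agent fetching
    the raw HTML gets no content. The threshold is an illustrative guess."""
    # Drop script/style blocks, then all remaining tags, then collapse whitespace.
    stripped = re.sub(r"<(script|style)\b.*?</\1>", "", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", stripped)
    text = re.sub(r"\s+", " ", text).strip()
    return len(text) < min_text_chars

# A typical client-rendered shell: an empty root div plus a bundle script.
shell = '<html><body><div id="root"></div><script src="/app.js"></script></body></html>'
print(looks_like_spa_shell(shell))
```

A server-rendered page with real body text would fall below the same heuristic's bar and pass, which is the distinction the spec's "what works" items (markdown availability, content negotiation) are designed to guarantee.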

The spec is maintained at github.com/agent-ecosystem/agent-docs-spec and is open for contributions.

AFDocs implements spec v0.3.0 (2026-03-31).

Status

AFDocs is in early development (0.x). Check IDs, CLI flags, and output formats may change between minor versions. The tool is usable today, but don't build automation against specific output details until 1.0.

Contributing

AFDocs is developed at github.com/agent-ecosystem/afdocs. Issues, bug reports, and pull requests are welcome.

If you've tested AFDocs against your docs site and found a check that doesn't accurately reflect agent behavior, or a failure mode that isn't covered, that's especially valuable feedback. The checks are based on observed behavior, and more observations make them better.

License

MIT