For AI agents: a documentation index is available at /llms.txt — markdown versions of all pages are available by appending .md to any URL path.

Quick Start

Requirements

Node.js 22 or later.

Run your first check

No install needed. Point AFDocs at your documentation site:

```bash
npx afdocs check https://docs.example.com --format scorecard
```

This discovers pages from your site (via llms.txt, sitemap, or both), samples up to 50 pages, runs all 22 checks, and produces a scorecard with your overall score, per-category breakdowns, and fix suggestions:

```text
Agent-Friendly Docs Scorecard
==============================

  Overall Score: 72 / 100 (C)

  Category Scores:
    Content Discoverability           72 / 100 (C)
    Markdown Availability             60 / 100 (C)
    Page Size and Truncation Risk     45 / 100 (D)
    ...

  Interaction Diagnostics:
    [!] Markdown support is undiscoverable
        Your site serves markdown at .md URLs, but agents have no way to
        discover this. ...

  Check Results:
    Content Discoverability
      PASS  llms-txt-exists        llms.txt found at /llms.txt
      WARN  llms-txt-size          llms.txt is 65,000 characters
      FAIL  llms-txt-directive     No directive detected on any tested page
            Fix: Add a blockquote near the top of each page ...
```
To understand what the score means and how it's calculated, see What Is the Agent Score?.

See what's wrong and how to fix it

Get your score

The scorecard shown above is the best starting point. It gives you the overall score, per-category breakdowns, interaction diagnostics, and fix suggestions for every failing check, all in one view:

```bash
npx afdocs check https://docs.example.com --format scorecard
```

Dig into per-page details

When you're ready to fix specific issues, switch to the text format with --verbose and --fixes. This tells you exactly which pages have problems and what to do about them:

```bash
npx afdocs check https://docs.example.com --verbose --fixes
```

The scorecard tells you what's wrong. The verbose text output tells you where.

Get machine-readable output

For scripting and automation, use JSON output. Add --score to include scoring data and fix suggestions:

```bash
npx afdocs check https://docs.example.com --format json --score
```
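Once you have the JSON report, you can pull fields out with a tool like jq for scripting. A minimal sketch, with one important caveat: the field names below (`score.overall`) are an assumption for illustration, not a documented schema, so inspect your real output first:

```shell
# Stand-in for the report that `afdocs --format json --score` would emit.
# The "score.overall" shape is hypothetical — check your actual output.
report='{"score": {"overall": 72, "grade": "C"}}'

# Extract a single number for use in scripts (thresholds, badges, etc.)
overall=$(echo "$report" | jq -r '.score.overall')
echo "Overall score: $overall"
```

The same pattern works for any field: capture the command's stdout into a variable, then filter it with jq.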

Run specific checks

If you're working on a particular issue, you don't need to run all 22 checks every time. Pass a comma-separated list of check IDs:

```bash
npx afdocs check https://docs.example.com --checks llms-txt-exists,llms-txt-valid,llms-txt-size
```

Some checks depend on others. For example, llms-txt-valid requires llms-txt-exists to pass first; if you run llms-txt-valid alone, it will be skipped. When running a subset, include its dependencies too.

See the Checks Reference for the full list of check IDs and dependencies.

Check a specific page

Skip page discovery entirely and check just one URL with --sampling none:

```bash
npx afdocs check https://docs.example.com/api/auth --sampling none
```

You can combine this with --checks to run a single check against a single page:

```bash
npx afdocs check https://docs.example.com/api/auth --sampling none --checks rendering-strategy
```

Get consistent results between runs

By default, AFDocs randomly samples pages, so results can vary between runs. For reproducible results (useful when verifying a fix), use deterministic sampling:

```bash
npx afdocs check https://docs.example.com --sampling deterministic
```

This sorts discovered URLs alphabetically and picks an even spread, producing the same sample every time as long as your site's URL set is stable.
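The "sort, then pick an even spread" idea can be sketched in a few lines of shell. This is an assumption about how such sampling behaves, not AFDocs's actual implementation; the `sample_even` helper is hypothetical:

```shell
# Hypothetical sketch of deterministic sampling: sort the input URLs,
# then pick n indices spaced evenly across the sorted list.
sample_even() {
  n=$1
  sort | awk -v n="$n" '
    { lines[NR] = $0 }
    END {
      # Fewer URLs than the sample size: keep them all.
      if (NR <= n) { for (i = 1; i <= NR; i++) print lines[i]; exit }
      # Otherwise spread n picks evenly from first to last.
      for (i = 0; i < n; i++) print lines[int(i * (NR - 1) / (n - 1)) + 1]
    }'
}

# The same input always yields the same sample:
printf '%s\n' /guide /api /intro /faq /cli | sample_even 3
# → /api /faq /intro
```

Because the selection depends only on the sorted URL list, the sample changes only when pages are added or removed — which is exactly the reproducibility property described above.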

Tune request behavior

AFDocs is designed to be a good citizen. It enforces delays between requests and caps concurrent connections. If you need to adjust these:

```bash
# Slower, gentler requests (for rate-limited servers)
npx afdocs check https://docs.example.com --request-delay 500 --max-concurrency 1

# Faster runs (for your own infrastructure)
npx afdocs check https://docs.example.com --request-delay 50 --max-concurrency 10

# Sample fewer pages for a quicker check
npx afdocs check https://docs.example.com --max-links 10
```

For the full list of flags, see the CLI Reference.

Exit codes

AFDocs exits with 0 if all checks pass or warn, and 1 if any check fails. This makes it usable in CI pipelines and shell scripts. See CI Integration for full setup.
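A minimal CI gate can be built on that exit code alone. A sketch, assuming nothing beyond the 0/1 convention above; the `check_docs` wrapper is a hypothetical helper, and what you do on failure (log, notify, annotate) is up to your pipeline:

```shell
# Wrap a command and branch on its exit code (0 = pass/warn, 1 = failures).
check_docs() {
  if "$@"; then
    echo "docs checks passed"
  else
    status=$?
    echo "docs checks failed (exit $status)" >&2
    return "$status"
  fi
}

# In a CI step you would call:
#   check_docs npx afdocs check https://docs.example.com
```

In most CI systems you don't even need the wrapper — a non-zero exit from `npx afdocs check` fails the step on its own.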

Installing

npx downloads and runs AFDocs on demand, which is fine for getting started and one-off checks. If you run it regularly, you can install it for faster startup:

```bash
# Global install — puts `afdocs` on your PATH
npm install -g afdocs
afdocs check https://docs.example.com

# Project dev dependency — for CI and test suites
npm install -D afdocs
```

For CI integration with vitest helpers, see CI Integration.

Next steps