For AI agents: a documentation index is available at /llms.txt — markdown versions of all pages are available by appending index.md to any URL path.

Research and Standards
for the Agent Ecosystem

The agent ecosystem is forming right now. We’re building the research and standards infrastructure to help it mature into reliable, long-lived tooling for software development. Open-source specs and tools, 4 live sites, and a daily automated research pipeline are already in production.

What We Do

Building Infrastructure for the Agent Ecosystem

We research how agents interact with documentation, tools, and infrastructure, then build specifications and tooling to make it all work better.

Research

Systematic study of how coding agents consume documentation, use tools, and interact with the broader ecosystem. Published findings and reports that help the industry make better decisions.

Standards

Specifications like the Agent-Friendly Documentation Spec that codify best practices and give developers concrete guidance for making their tools and docs work with agents.

Tooling

Open-source tools like afdocs that let anyone audit and improve their documentation’s agent-friendliness. Practical infrastructure, not just papers.

Our Work

Published Research
and Standards

Concrete outputs from the research program, freely available to the community.

Agent-Friendly Documentation Spec

A 22-check specification defining what makes documentation accessible to coding agents. Covers llms.txt, markdown availability, page size, content structure, URL stability, discoverability, and more. Built from empirical observation of agent behavior across hundreds of documentation sites.
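Two of those checks, llms.txt discoverability and markdown availability, come down to deriving and probing well-known URLs. A minimal sketch in Python (illustrative only: the helper names are hypothetical, and the real afdocs checks also fetch and validate the responses; the index.md convention is the one this site itself documents):

```python
from urllib.parse import urlsplit, urlunsplit

def llms_txt_url(page_url: str) -> str:
    """Derive the site-root llms.txt URL for any documentation page."""
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/llms.txt", "", ""))

def markdown_variant_url(page_url: str) -> str:
    """Derive a page's markdown variant by appending index.md to its path."""
    parts = urlsplit(page_url)
    path = parts.path if parts.path.endswith("/") else parts.path + "/"
    return urlunsplit((parts.scheme, parts.netloc, path + "index.md", "", ""))
```

An auditor would fetch both derived URLs and treat a 200 response as a pass for the corresponding check.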

Agent Skill Report

Qualitative analysis of 673+ public Agent Skills, including findings on spec compliance issues across the ecosystem. The first systematic evaluation of agent tool quality.

Automated Research Infrastructure

A four-stage daily pipeline: news-gather scans RSS, arXiv, and GitHub releases; research-sourcing evaluates items and tracks themes using vector search; shift-sourcing drafts and fact-checks commentary articles; and a dashboard synthesizes it all. 13 articles published on aeshift.com to date, with new content generated daily.
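The staged structure can be sketched as a sequential orchestrator that threads accumulated state through each stage (a simplified illustration: the stage names come from the description above, but the function signatures, data shapes, and stub bodies are assumptions, not the production code):

```python
from typing import Callable

# Each stage reads the accumulated pipeline state and returns an update.
Stage = Callable[[dict], dict]

def news_gather(state: dict) -> dict:
    # Scan RSS feeds, arXiv, and GitHub releases (stubbed here).
    return {"items": ["example-item"]}

def research_sourcing(state: dict) -> dict:
    # Evaluate items and track recurring themes (vector search in production).
    return {"themes": [f"theme:{item}" for item in state["items"]]}

def shift_sourcing(state: dict) -> dict:
    # Draft and fact-check commentary articles from tracked themes.
    return {"drafts": [f"draft:{theme}" for theme in state["themes"]]}

def dashboard(state: dict) -> dict:
    # Synthesize everything into a daily summary.
    return {"summary": f"{len(state['drafts'])} draft(s) ready"}

def run_pipeline(stages: list[Stage]) -> dict:
    state: dict = {}
    for stage in stages:
        state.update(stage(state))
    return state

result = run_pipeline([news_gather, research_sourcing, shift_sourcing, dashboard])
```

Running the stages in order makes each one's output available to every later stage, which is what lets the dashboard synthesize the whole day's run.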

Explore Research

The Problem

Why This Work Matters

Ecosystem health directly affects product adoption. When infrastructure doesn’t work with agents, everyone loses.

The agent ecosystem is fragmented

Standards are being drafted. Best practices don’t exist yet. Most companies building in this space focus on model capabilities and leave the surrounding ecosystem to chance.

Documentation Failures

When docs don’t work with agents, developers blame the agent. We study these failure modes systematically.

Tool Quality Gaps

If tool integrations are unreliable, developers stop using them. We evaluate and report on tool quality.

Neutral ground for pre-competitive research

  • Standards and best practices benefit everyone, but no single company wants to fund them alone or be seen as controlling them

  • Sponsors help shape practical standards without the appearance of self-dealing, and associate their brand with credible independent work

  • Published findings reflect what the data shows, not what sponsors prefer. That independence is what makes the research useful

Become a Sponsor

By the Numbers

A Research Program
Already in Production

This isn’t a proposal. It’s an operational research program
with published outputs and automated infrastructure.

22-Check Documentation Spec

A comprehensive specification defining what makes documentation accessible to coding agents, covering structure, discoverability, and content quality.

673+ Agent Skills Audited

The first systematic evaluation of public Agent Skills, revealing patterns in quality, compliance, and developer experience across the ecosystem.

6 Open Source Projects

Specifications, validation tools, community research, and an enterprise variant, all publicly available under the agent-ecosystem GitHub organization. Backed by internal research infrastructure powering the daily pipeline.

Four-Stage Daily Pipeline

Automated news gathering, research evaluation, article drafting, and dashboard synthesis running daily on self-hosted infrastructure with MongoDB Atlas and vector search.

4 Live Sites

agentdocsspec.com hosts the specification, aeshift.com publishes daily ecosystem commentary, agentskillreport.com presents skill audit findings, and agentskillimplementation.com hosts cross-platform loading behavior research.

Multiple Distribution Channels

Tools available via npm, Homebrew, Go install, and pre-commit hooks. Enterprise variant with AWS Bedrock integration for organizations with existing cloud infrastructure.

Sponsorship Tiers

Sponsor the Research

Every company building agent tooling, developer platforms, or AI-powered developer tools benefits from a healthier agent ecosystem.

Sustaining Sponsor

Support ongoing research and standards work. Perfect for companies that benefit from a healthier agent ecosystem.

$2,000 per month
  • Logo and attribution on all published reports and the program website
  • Early access to research findings (2 weeks before public release)
  • Quarterly briefings on research themes and emerging patterns
  • Input on research direction and priorities
  • Named acknowledgment in articles and presentations
  • One custom analysis per quarter
Get in Touch

Founding Sponsor


Shape the research program from its earliest days. For companies that want to lead in agent ecosystem standards.

$5,000 per month
  • Everything in Sustaining, plus:
  • Co-branded report option for one publication per year
  • Direct access to raw research data and pipeline outputs
  • Invitation to shape the research roadmap in annual planning
Get in Touch

Help Build the Agent Ecosystem’s Infrastructure

This research is independent by design. Sponsors support the program; they don’t direct conclusions. That independence is what makes the research credible and useful to the industry.

Become a Sponsor