Is your feature request related to a problem?
Yes. DDEV already uses AI-assisted development and review (agents and copilots for code, docs, and review commentary; humans merge). Contributors use different tools (Claude Code, Codex, Cursor, Copilot, etc.) with no shared provenance for what produced a change. Black-box models also drift day to day, so identical prompts can yield different results, which makes changes hard to debug or compare. The gap is not "turn on AI" but standards, verification, and telemetry, so that AI-assisted work is consistent, observable, and measurable for every PR, without duplicating what already works (`make staticrequired`, rich CI, `AGENTS.md`).
Describe your solution
Adopt a harness-first incremental approach on top of existing practice:
- Optional agent metadata (a machine-readable fingerprint) and a repository impact map produced before or alongside generation; validate or score them against the final diff over time.
- Harness score as a rollup of existing lint/test CI results (not a parallel QA stack).
- Later: unified telemetry, shadow runs, and canary rollouts where maintainers want them.
Extend `AGENTS.md` and the PR template rather than replacing them; align with the org-level `AGENTS.md` where useful.
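As a sketch of what the optional metadata fingerprint might look like in a PR body or sidecar file, under the assumption that the schema below is entirely illustrative (no field name here is an agreed standard):

```yaml
# Illustrative agent-metadata fingerprint; every field name is an
# assumption for discussion, not a proposed final schema.
agent:
  tool: claude-code          # e.g. claude-code, codex, cursor, copilot
  model: <model-identifier>  # exact model/version, for drift comparison
  prompt_hash: <sha256>      # hash of the driving prompt
scope:
  impact_map:                # paths the agent expected to touch,
    - cmd/                   # scored later against the final diff
    - docs/
human_review: required       # humans still merge
```

Keeping this optional and machine-readable lets tooling score impact maps against diffs over time without burdening contributors who do not use agents.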
Artifacts (no login required):
Describe alternatives
- Status quo — keep current CI and docs only; accept uneven provenance and harder measurement.
- Lighter v1 — metadata + PR template only; defer impact-map tool and shadow pipeline.
- Heavier — full shadow/canary early; more infra upfront.
Additional context
Deck repo: https://github.com/jonesrussell/ddev-ai-deck
Reported via structured body for maintainers; happy to open a `docs/` RFC PR if preferred.