This directory contains Claude Code agents, skills, and configuration for the Seqera Platform documentation repository.
What's in .claude/:
- Agents - Editorial review assistants that check documentation quality
- Skills - Task-specific workflows for documentation automation
- Configuration - Settings for CLI and GitHub Actions integration
Available to:
- Claude Code CLI users working on this project
- Claude Desktop app (synced projects)
- GitHub Actions workflows via Claude API
Skills are AI-powered workflows that automate specific documentation tasks.
openapi-overlay-generator: Generates OpenAPI overlay files for Seqera Platform API documentation updates.
Use when:
- Analyzing Speakeasy comparison overlays
- Generating operations, parameters, or schemas overlay files
- Documenting new API endpoints or Platform version updates
- Validating overlay files against documentation standards
Documentation: See skills/openapi-overlay-generator/SKILL.md
Invocation: /openapi-overlay-generator
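Overlay files follow the OpenAPI Overlay Specification (`overlay`/`info`/`actions` structure). A minimal sketch of the kind of file the skill produces; the endpoint path, summary text, and title here are hypothetical placeholders, not actual Platform API values:

```yaml
# Illustrative only: target path and values are placeholders.
overlay: 1.0.0
info:
  title: Operations overlay for the Seqera Platform API
  version: 1.0.0
actions:
  # JSONPath target selecting the operation to update
  - target: $.paths['/workflow/launch'].post
    update:
      summary: Launch a workflow in the given workspace.
```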
review: Runs comprehensive editorial reviews on documentation files or directories.
Use when:
- Pre-commit review of changed files
- Directory-wide quality checks
- Targeted review (voice-tone only, terminology only)
Invocation:
/review <file-or-directory> [--profile=<profile>]

Profiles:
- `quick` - Voice-tone and terminology only
- `comprehensive` - All agents
- `new-content` - Includes structure checks
Agents are specialized editorial reviewers that check documentation for specific quality criteria. They run automatically in GitHub Actions on PRs or manually via /review.
Ensures documentation uses second person, active voice, and present tense.
Configuration: .claude/agents/voice-tone.md
Checks:
- Second person ("you") vs third person ("the user")
- Active vs passive voice
- Present vs future tense
- Hedging language ("may", "might", "could")
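Some of these checks can be crudely pre-screened with a plain grep before invoking the agent. A minimal sketch; the pattern list is illustrative, not the agent's actual rule set, and real voice-tone review needs the context-aware agent:

```shell
# Illustrative pre-check: surface obvious third-person, future-tense,
# and hedging patterns in text piped on stdin. Not the agent's rules.
voice_precheck() {
  grep -nE '\b(the user|will be|may|might|could)\b' || true
}

printf 'The user will be notified.\n' | voice_precheck
# → 1:The user will be notified.
```

A clean sentence like "You are notified." produces no output, so the function is safe to chain in a pre-commit hook.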
Enforces consistent product names, feature names, and formatting.
Configuration: .claude/agents/terminology.md
Checks:
- Product names (Seqera Platform, Studios, Nextflow)
- Feature terminology (drop-down, compute environment)
- UI formatting (bold for buttons, backticks for code)
- RNA-Seq capitalization
Special rules:
- Tower: Acceptable in legacy contexts
- TowerForge: Always acceptable
- drop-down: Always hyphenated
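Two of the rules above lend themselves to a cheap textual pre-check; a sketch, with illustrative patterns only (the terminology agent applies the full rule set with context):

```shell
# Illustrative: flag unhyphenated "dropdown" and miscapitalized
# "RNA-seq" in text piped on stdin. Case-sensitive by design, so the
# correct "RNA-Seq" passes.
terminology_precheck() {
  grep -nE 'dropdown|RNA-seq' || true
}

printf 'Open the dropdown menu.\n' | terminology_precheck
# → 1:Open the dropdown menu.
```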
Improves readability by flagging complex sentences and jargon.
Configuration: .claude/agents/clarity.md
Status: Currently disabled in workflows
Checks:
- Sentence length (>30 words)
- Undefined jargon
- Complex constructions
- Missing prerequisites
Ensures consistent punctuation across documentation.
Status: Not yet implemented as separate agent
Checks:
- Oxford commas
- List punctuation
- Quotation marks
- Dash usage
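Until a punctuation agent exists, a rough grep can surface candidate missing Oxford commas for manual review. This is a heuristic sketch that will miss cases and produce false positives, not a planned implementation:

```shell
# Heuristic: match "word, word and word" (a list likely missing its
# Oxford comma) in text piped on stdin.
oxford_precheck() {
  grep -nE '[A-Za-z]+, [A-Za-z]+ and [A-Za-z]+' || true
}

printf 'Supports files, directories and globs.\n' | oxford_precheck
# → 1:Supports files, directories and globs.
```

The correctly punctuated "files, directories, and globs" does not match.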
File: .github/workflows/docs-review.yml
Triggers (GitHub Actions, on-demand only):
- PR comment: Comment `/editorial-review` on any PR
- Manual workflow dispatch: Via GitHub Actions UI (see below)
Editorial review can also be run locally via Claude Code CLI using the /editorial-review command; this runs outside the .github/workflows/docs-review.yml workflow.
NOT triggered by:
- PR creation, updates, or commits (to conserve tokens)
How it works:
- User comments `/editorial-review` on a PR
- Workflow validates bash scripts (fails fast if errors)
- Classifies PR type ("rename" or "content")
- Smart-gate checks (automatic waste prevention):
  - Blocks if reviewed <60 min ago
  - Blocks if <10 lines changed
  - Blocks if >5 formatting issues (run markdownlint first)
- If gates pass: Invokes the `/editorial-review` skill
- Skill orchestrates agents (voice-tone, terminology)
- Posts up to 60 inline suggestions
- Saves full report as artifact (30-day retention)
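The smart-gate thresholds can be sketched as a small shell function. This is illustrative only; the authoritative logic lives in .github/workflows/docs-review.yml:

```shell
# Sketch of the smart-gate checks, mirroring the documented thresholds:
# <60 min since last review, <10 lines changed, >5 formatting issues.
smart_gate() {
  mins_since_review=$1
  lines_changed=$2
  formatting_issues=$3
  if [ "$mins_since_review" -lt 60 ]; then
    echo "blocked: reviewed ${mins_since_review} min ago"; return 1
  elif [ "$lines_changed" -lt 10 ]; then
    echo "blocked: only ${lines_changed} lines changed"; return 1
  elif [ "$formatting_issues" -gt 5 ]; then
    echo "blocked: ${formatting_issues} formatting issues, run markdownlint first"; return 1
  fi
  echo "pass"
}

smart_gate 120 50 0   # → pass
```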
Key architecture: Workflow invokes the /editorial-review skill rather than calling agents directly. This ensures local and CI behavior is identical.
Manual workflow dispatch:
- Go to Actions → Documentation Review → Run workflow
- Enter the PR number and select a review type (`all`, `voice-tone`, `terminology`)
- Smart-gate still applies: manual trigger doesn't bypass automation
Outputs:
- Inline suggestions on specific lines (click to apply)
- Comment with download link if >60 suggestions found
- Summary report with PR type and agent status
.github/scripts/post-inline-suggestions.sh
- Converts agent findings to GitHub inline suggestions
- Posts via GitHub Review API
.github/scripts/classify-pr-type.sh
- Analyzes git diff to determine PR type
- Outputs "rename" or "content" for workflow decisions
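A hypothetical sketch of that heuristic: if every entry in `git diff --name-status` output is a rename (status `R`), classify the PR as "rename", otherwise as "content". The real script may use different rules:

```shell
# Illustrative classifier: reads `git diff --name-status` lines on
# stdin; any non-rename status line means real content changed.
classify_pr_type() {
  if grep -qvE '^R'; then
    echo "content"
  else
    echo "rename"
  fi
}

# Example: a pure-rename diff
printf 'R100\told.md\tnew.md\n' | classify_pr_type   # → rename
```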
| Agent | Status | Used in CI |
|---|---|---|
| voice-tone | ✅ Active | Yes |
| terminology | ✅ Active | Yes |
| punctuation | 📋 Planned | No |
| clarity | ⏸️ Disabled | No |
| docs-fix | 📝 Local only | No |
Agents output structured suggestions:
```
FILE: path/to/file.md
LINE: 42
ISSUE: Brief description
ORIGINAL: |
  exact original text
SUGGESTION: |
  corrected text
---
```
This format is parsed by post-inline-suggestions.sh and converted to GitHub's inline suggestion syntax.
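The conversion step can be sketched as a small filter that extracts the SUGGESTION block and wraps it in GitHub's suggestion fence. Illustrative only; the real script also attaches the FILE and LINE metadata through the Review API:

```shell
# Sketch: print everything between "SUGGESTION: |" and the "---"
# terminator inside a ```suggestion fence, discarding other fields.
to_suggestion() {
  awk '
    /^SUGGESTION:/ { in_s = 1; print "```suggestion"; next }
    /^---$/        { if (in_s) print "```"; in_s = 0; next }
    in_s           { print }
  '
}
```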
When working on API documentation:
- Claude Code automatically detects and offers relevant skills
- Skills provide specialized knowledge about documentation standards
- Skills include scripts ensuring consistency across API docs
Local development (before PR):
- Make doc changes locally
- Run `/editorial-review <file>` in Claude Code
- Review findings and apply fixes
- Commit and push
PR review (on-demand):
- Open PR with documentation changes
- Comment `/editorial-review` to trigger the review
- Review inline suggestions on affected lines
- Apply fixes individually or batch-apply multiple
- Comment `/editorial-review` again to verify fixes
```shell
# Test specific agent
/review --profile=quick platform-enterprise_docs/quickstart.md

# Review entire directory
/review platform-cloud/docs/

# Test skill
/openapi-overlay-generator
```

To modify an agent:
- Edit the agent definition in `.claude/agents/<agent-name>.md`
- Test locally with `/review`
- Create a PR (agents will review their own changes)
- Merge after approval
Current limit: 60 inline suggestions per PR
To change: Edit .github/workflows/docs-review.yml lines 268-284
To add a new agent:
- Create `.claude/agents/<new-agent>.md`
- Add it to `.github/workflows/docs-review.yml`
- Update documentation in `.claude/README.md` and `CLAUDE.md`
- Test on sample content
- Skills and agents are version-controlled with the repository
- Updates to skills should be reviewed like any other code change
- Test changes locally before committing
- Monitor the Actions tab in GitHub for workflow issues
- Artifacts auto-delete after 30 days
To minimize token usage and environmental impact:
- Check the PR timeline before re-running `/editorial-review`
- Use static analysis first: Run `markdownlint` or `vale` locally before LLM review
- Skip minor changes: Don't review single typo fixes or whitespace changes
- Batch changes: Fix multiple issues, then run one review
Consider running fast, local checks before using LLM agents:
```shell
# Markdown formatting (instant, zero cost)
npx markdownlint-cli2 "**/*.md"

# Simple pattern checks (instant, zero cost)
grep -r "Tower" --include="*.md" platform-enterprise_docs/

# Vale style checks if configured (instant, zero cost)
vale platform-enterprise_docs/
```

Benefits: Reduces LLM usage by 50-60% by catching simple issues locally first.
- API keys stored in GitHub Secrets (never in code or logs)
- Reviews only read files (no write access to production)
- Manual triggers prevent automated abuse
- All output is publicly visible for transparency
```
.claude/
├── README.md                      # This file
├── agents/
│   ├── voice-tone.md              # Agent definitions
│   ├── terminology.md
│   └── clarity.md
└── skills/
    └── openapi-overlay-generator/
        └── SKILL.md
```
- User Guide: See `CLAUDE.md` in the repository root
- Workflows: See `.github/workflows/docs-review.yml`
- Scripts: See `.github/scripts/`