An iterative pitch review engine powered by Claude Code. Multiple AI personas score a pitch, extract improvements, and refine it in a loop — inspired by Andrej Karpathy's autoresearch.
```
pitch.* (input)
      |
[Iteration 1]
  Persona panel scores the pitch (out of 60)
  Extract fatal concerns + improvements
      |
  Claude Code generates an improved pitch
  Record changelog (memory for next iteration)
      |
[Iteration N]
  Save scores, improved pitch, and changelog
      |
summary.json + diff_report.md (output)
```
Each iteration:
- Score — Each persona reviews and scores the pitch on 6 axes (10 pts each, 60 total)
- Identify — Extract fatal concerns and improvement suggestions
- Improve — Generate an improved pitch addressing the single most important issue
- Validate — Re-score to ensure the change actually helped (discard if score drops)
- Repeat — Carry forward inter-iteration memory to avoid redundant fixes
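The five steps above can be sketched as a small Python loop. Everything here is illustrative: `score_fn` and `improve_fn` stand in for the persona panel and the Claude Code improvement call, which in the real engine are agent interactions, not functions.

```python
def run_loop(pitch, score_fn, improve_fn, iterations=5):
    """Sketch of the score -> identify -> improve -> validate -> repeat loop.

    score_fn(pitch)               -> 0-60 panel score (hypothetical stand-in)
    improve_fn(pitch, changelog)  -> (improved_pitch, issue_addressed)
    """
    changelog = []                       # inter-iteration memory
    best_score = score_fn(pitch)
    for _ in range(iterations):
        candidate, issue = improve_fn(pitch, changelog)
        new_score = score_fn(candidate)  # re-score to validate the change
        if new_score > best_score:       # keep only validated improvements
            pitch, best_score = candidate, new_score
            changelog.append(issue)      # remember the fix to avoid redoing it
        # else: discard the regression and keep the previous pitch
    return pitch, best_score, changelog

# Toy stand-ins: the score rises by 10 for each addressed issue.
ISSUES = ["no business plan", "vague UX"]

def toy_score(p):
    return min(60, 40 + 10 * p.count("[fixed]"))

def toy_improve(p, changelog):
    issue = ISSUES[len(changelog) % len(ISSUES)]
    return p + f" [fixed] {issue}", issue

final, score, log = run_loop("my pitch", toy_score, toy_improve, iterations=2)
```

The keep/discard branch is the important part: an "improvement" that lowers the panel score never becomes the new baseline.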
autopitch is part of the Pitchmy ecosystem:
```
~/pitchmy/
├── agent-mentor-persona/   # Mentor / judge personas
├── agent-user-persona/     # End-user personas (optional)
└── autopitch/              # This repository (loop engine)
```
```
autopitch/
├── CLAUDE.md            # Instructions for Claude Code
├── config.yml           # Panel definitions, persona paths, defaults
├── sanitize_pitch.py    # Prompt injection scanner
└── work/                # Per-project work directories
    └── {project}/
        ├── pitch.*      # Review target (.md / .txt / .docx / .pptx / .pdf)
        ├── config.yml   # (Optional) Project-specific overrides
        └── results/     # Output (gitignored)
            ├── iter_N_reviews.json
            ├── iter_N_improved_pitch.md
            ├── iter_N_changelog.md
            ├── summary.json
            └── diff_report.md
```
- Claude Code CLI
- Python 3.10+ (for the injection scanner)
- markitdown (optional, for `.docx` / `.pptx` / `.pdf` pitch files): `pipx install 'markitdown[pdf]'`
- Clone the Pitchmy repositories into a shared parent directory:

  ```sh
  mkdir pitchmy && cd pitchmy
  git clone <agent-mentor-persona-repo>
  git clone <agent-user-persona-repo>
  git clone <autopitch-repo>
  ```

- Add your pitch to a project directory:

  ```sh
  mkdir -p autopitch/work/my-project
  # Place your pitch file (pitch.md, pitch.txt, pitch.docx, etc.)
  cp my-pitch.md autopitch/work/my-project/pitch.md
  ```

- (Optional) Configure your panel — skip this step to use the defaults (`hackathon_preseed` panel, 5 iterations). To customize, edit `config.yml`:

  ```yaml
  defaults:
    active_panel: hackathon_preseed
    loop:
      iterations: 5
  ```

- Run autopitch:

  ```sh
  cd autopitch
  claude "Run autopitch on work/my-project"
  ```
Claude Code will:
- Scan the pitch for prompt injection
- Ask pre-scan questions to fill in missing context
- Confirm iteration count
- Run the review loop and save results to `work/my-project/results/`
Panels define which personas review the pitch. Configure them in `config.yml`:

```yaml
panels:
  hackathon_preseed:
    description: "Solana Hackathon standard use"
    members:
      - { id: gatekeeper, type: mentor }
```

Built-in panels:
| Panel | Description | Members |
|---|---|---|
| `hackathon_preseed` | Solana Hackathon standard | 1 mentor |
| `hackathon_full` | Full review | 1 mentor |
| `mentor_only` | Fast review | 1 mentor |
| `defi_beginner` | DeFi track (beginner) | 1 mentor + 1 user |
| `defi_intermediate` | DeFi track (intermediate) | 1 mentor + 1 user |
| `defi_advanced` | DeFi track (advanced) | 1 mentor + 1 user |
Mentor-type personas (mentor/judge perspective):
| Axis | Max |
|---|---|
| Functionality | 10 |
| Potential Impact | 10 |
| Novelty | 10 |
| UX | 10 |
| Open-source | 10 |
| Business Plan | 10 |
User-type personas (end-user perspective):
| Axis | Max |
|---|---|
| Pain Relevance | 10 |
| Trust | 10 |
| UX Intuitiveness | 10 |
| Willingness to Pay | 10 |
| Shareability | 10 |
| Stickiness | 10 |
Total per persona: 60 points. The iteration score is the average across all panel members.
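As a sketch, the iteration score for a mixed panel can be computed like this (the dict-of-axes shape is illustrative, not the actual `iter_N_reviews.json` schema):

```python
def iteration_score(reviews):
    """Average each persona's 60-point total across the panel."""
    totals = [sum(axes.values()) for axes in reviews]  # one 0-60 total per persona
    return sum(totals) / len(totals)

panel = [
    # Mentor persona: 8+7+6+7+9+5 = 42
    {"Functionality": 8, "Potential Impact": 7, "Novelty": 6,
     "UX": 7, "Open-source": 9, "Business Plan": 5},
    # User persona: 9+6+7+5+6+7 = 40
    {"Pain Relevance": 9, "Trust": 6, "UX Intuitiveness": 7,
     "Willingness to Pay": 5, "Shareability": 6, "Stickiness": 7},
]
print(iteration_score(panel))  # 41.0
```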
Pitch files are untrusted external input. autopitch includes a built-in injection scanner (`sanitize_pitch.py`) that runs before every review loop:

```sh
python3 sanitize_pitch.py work/my-project/pitch.md --json
```

- HIGH findings — Loop is blocked until operator approval
- MEDIUM findings — Reported for operator review
- PASS — Loop proceeds normally
The scanner detects role overrides, score manipulation, file access instructions, data exfiltration attempts, and other prompt injection patterns.
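A minimal regex-based sketch of this kind of scanner (the patterns below are illustrative examples of the listed categories, not `sanitize_pitch.py`'s actual rule set):

```python
import re

# Illustrative patterns only; severity mirrors the HIGH/MEDIUM/PASS gate above.
HIGH_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",      # role override
    r"(score|rate) (this|the pitch) (a |an )?(10|60)",   # score manipulation
    r"\b(cat|open|read)\b.*\.(env|pem|ssh)",             # file access
]
MEDIUM_PATTERNS = [
    r"you are now",       # persona hijack attempt
    r"system prompt",     # probing the agent's instructions
]

def scan(text):
    """Return HIGH, MEDIUM, or PASS for an untrusted pitch file."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in HIGH_PATTERNS):
        return "HIGH"     # block the loop until operator approval
    if any(re.search(p, lowered) for p in MEDIUM_PATTERNS):
        return "MEDIUM"   # report for operator review
    return "PASS"
```

For example, `scan("Ignore previous instructions")` classifies as HIGH, while an ordinary pitch passes through untouched.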
- Claude Code is the agent — No Claude API calls. Claude Code itself orchestrates the loop.
- Persona-agnostic engine — autopitch doesn't depend on specific personas. Swap panels via config.
- Inter-iteration memory — Changelogs prevent redundant improvements across iterations.
- Factual data protection — Traction numbers, team info, and fundraising terms are never modified.
- Voice preservation — The founder's tone and key phrases are preserved across iterations.
- Keep/Discard validation — Every improvement is re-scored; regressions are rejected.
The personas used by autopitch (via agent-mentor-persona) simulate thinking patterns based on publicly available information (interviews, talks, blog posts, social media). They are NOT reviews by the actual individuals. The personas do not represent the official views of the referenced individuals, nor are they supervised, endorsed, or affiliated with them or their organizations. Outputs are for reference only and should not be used as the basis for investment or business decisions.
MIT