A community-driven skills marketplace for AI-assisted research. Browse, install, and share modular AI workflows for ML research.
Browse Skills | Platform Design | Contributing
Reading time: PI: 5 min | Researcher: 15 min | Student: 10 min
npm install -g @anthropic-ai/claude-code  # if you don't have Claude Code yet

In a Claude Code session, run:
/plugin marketplace add rpatrik96/research-agora
/plugin install academic@research-agora
/plugin install development@research-agora
/plugin install formatting@research-agora
/plugin install office@research-agora
/plugin install editorial@research-agora
/plugin install research-agents@research-agora
New here? Run /onboard in Claude Code, or take the 2-minute quiz in your browser — no installation needed.
Run citation verification on any project with a .bib file:
cd /path/to/your/project && claude
/paper-references

Every entry marked mismatch or not found is a potential hallucinated or corrupted reference. Cost: ~$0.10–0.30.
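Conceptually, a mismatch is a disagreement between your .bib entry and the canonical record after normalization. A minimal sketch of that idea (this is illustrative only, not the skill's actual implementation; the real lookup is handled by bibtex-updater):

```python
import re

def normalize(title: str) -> str:
    """Lowercase, strip LaTeX braces and punctuation, collapse whitespace."""
    title = re.sub(r"[{}]", "", title.lower())
    title = re.sub(r"[^a-z0-9 ]+", " ", title)
    return " ".join(title.split())

def is_mismatch(bib_title: str, canonical_title: str) -> bool:
    """Flag an entry when its normalized title differs from the canonical record."""
    return normalize(bib_title) != normalize(canonical_title)

# Case differences and LaTeX braces are tolerated; wording changes are flagged.
print(is_mismatch("{Attention} Is All You Need", "Attention is all you need"))  # False
print(is_mismatch("Attention Is All You Want", "Attention is all you need"))   # True
```

Entries that fail even this loose comparison are the ones worth checking by hand first.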
No .bib file? No CLI? Take the onboarding quiz — it runs in your browser and recommends where to start.
PI: Evaluate and deploy for your group
61 public AI workflows for the full paper lifecycle. Skills encode your group's standards in a shared CLAUDE.md — every student and postdoc runs the same verified checks.
- Cost: $20/mo Pro + ~$5–80/mo API tokens depending on usage. Team plan (see Anthropic pricing) includes a GDPR DPA.
- Privacy: No patient data or unpublished results on Pro. Team plan required for institutional compliance. Full guide →
- Rollout: (1) Pilot one high-pain task, (2) create a shared CLAUDE.md, (3) set verification standards, (4) review monthly.
- Skills are plain Markdown — they transfer across providers. No lock-in.
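A shared CLAUDE.md for step (2) might look like the sketch below. The section names and rules are illustrative, not a prescribed schema; the venue and skill names are drawn from this repository's own defaults:

```markdown
# CLAUDE.md — shared group standards (illustrative sketch)

## Writing
- Target venues: NeurIPS, ICML, ICLR
- Run /paper-references before circulating any draft

## Verification
- Every citation must resolve; entries marked mismatch or not found get manual review
- Experimental claims are checked with /paper-verify-experiments

## Code
- Use /commit for conventional commits
- Run /code-simplify before opening a pull request
```

Adapt the sections to your group; the point is that everyone runs the same checks from the same file.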
Start with: Quickstart → Verification guide → CLAUDE.md template
Researcher: Get productive today
| I want to... | Run this |
|---|---|
| Verify citations | /paper-references |
| Critical review of my draft | /paper-review path/to/paper.pdf |
| Find related work | /literature-synthesizer |
| Debug LaTeX | /latex-debugger |
| Clean up code | /code-simplify |
Not sure which skill? Run /choose-skill — describe your task and get matched recommendations.
Start with: Quickstart → Examples by domain → CLAUDE.md template
Student: Learn by doing
AI tools amplify expertise — they don't replace it. Verify everything. Build understanding before optimizing speed.
Week 1: Run /paper-references on your bibliography. Check 3 entries manually.
Week 2: Set up CLAUDE.md for your project with /onboard.
Week 3: Try /paper-review on a section draft. Do you agree with the critique?
Week 4: Connect GitHub MCP, try /pr-automation.
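For the Week 1 manual check, picking entries to verify by hand can start from a simple listing of your bibliography. A rough stdlib-only sketch (the entry keys below are placeholders; a real parser such as bibtexparser is more robust than this regex):

```python
import re
import random

def list_entries(bib_text: str):
    """Extract (entry type, citation key) pairs from a BibTeX string."""
    return re.findall(r"@(\w+)\s*\{\s*([^,\s]+)", bib_text)

bib = """
@article{doe2021, title={An Example Article}}
@inproceedings{roe2022, title={Another Example}}
@misc{poe2023, title={A Third Example}}
"""
entries = list_entries(bib)

# Sample 3 entries to verify against the original papers by hand.
for etype, key in random.sample(entries, k=min(3, len(entries))):
    print(etype, key)
```

Run it on your own .bib file and look up each sampled entry at the source.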
Rule of thumb: If you couldn't do the task without AI, the AI shouldn't do it for you yet.
Start with: Concepts → Examples → Verification guide
| Doc | What it covers |
|---|---|
| Quickstart | Install → first task → 5-minute win |
| Concepts | Evolution stack, key terms, delegate vs. protect |
| Verification | TDR recipes, hierarchy, limits |
| Privacy & GDPR | Compliance checklist, paid plans, medical data |
| Examples | Domain-specific prompts by use case |
| CLAUDE.md Template | Commented template — customizing it IS the tutorial |
Paper writing, research, and dissemination skills:
| Skill | Description |
|---|---|
| paper-introduction | Write introduction sections for ML papers |
| paper-abstract | Write or improve paper abstracts |
| literature-synthesizer | Write related work and discover relevant literature |
| paper-experiments | Document experimental setups with GitHub integration |
| paper-discussion | Write discussion and limitations sections |
| paper-review | Generate critical reviews simulating skeptical reviewers |
| paper-references | Fact-check citations using bibtex-updater |
| paper-verify-experiments | Verify claims against source code |
| paper-poster | Create academic conference posters |
| paper-slides | Create presentation slides from papers |
| experiment-tracker | Sync experiment results to paper drafts |
| benchmark-scout | Find benchmarks and generate experiment plans |
| openreview-submission | Prepare OpenReview metadata: plain-text abstract, keywords, TL;DR, lay summary |
Code quality and automation skills:
| Skill | Description |
|---|---|
| commit | Create conventional commits with co-authorship |
| code-simplify | Remove dead code, eliminate duplication |
| pr-automation | Create GitHub pull requests from changes |
| python-cicd | Set up CI/CD with GitHub Actions |
| htcondor | Generate HTCondor submission files for cluster jobs |
| latex-sync-setup | Initialize latex-code-sync in a project |
| latex-sync-annotate | Link functions to paper equations via decorators |
| latex-sync-verify | Verify paper equations match code implementations |
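The decorator linkage behind latex-sync-annotate can be pictured roughly as follows. This is a hypothetical sketch: the `paper_eq` name and the registry are illustrative, not the plugin's real API.

```python
from functools import wraps

EQUATION_REGISTRY = {}  # maps LaTeX equation labels to implementing functions

def paper_eq(label: str):
    """Hypothetical decorator: record which paper equation a function implements."""
    def decorate(fn):
        EQUATION_REGISTRY[label] = fn.__qualname__
        @wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@paper_eq("eq:mse")
def mse(y_pred, y_true):
    """Mean squared error, nominally matching \\eqref{eq:mse} in the paper."""
    return sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_true)

print(EQUATION_REGISTRY)            # {'eq:mse': 'mse'}
print(mse([1.0, 2.0], [1.0, 0.0]))  # 2.0
```

A verification pass can then walk the registry and compare each function against the labeled equation in the LaTeX source, which is the idea behind latex-sync-verify.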
Document and code formatting skills:
| Skill | Description |
|---|---|
| latex-consistency | Enforce consistent LaTeX formatting |
| tikz-figures | Create TikZ/PGF diagrams for ML papers |
Microsoft Office document creation:
| Skill | Description |
|---|---|
| pptx-create | Create PowerPoint presentations |
| docx-create | Create Word documents |
| xlsx-create | Create Excel spreadsheets |
Editorial intelligence and writing analysis skills:
| Skill | Description |
|---|---|
| writing-verify | Quantitative writing quality scoring (A–F grade) |
| writing-diagnosis | Diagnose writing issues across genres |
| argument-autopsy | Dissect argument structure |
| register-translator | Translate between registers |
| editorial-brain | Comprehensive editorial intelligence |
Specialized research analysis agents:
| Agent | Description |
|---|---|
| devils-advocate | Challenge arguments and identify biases |
| claim-auditor | Deep-verify all paper claims |
| perspective-synthesizer | Synthesize multiple viewpoints |
| audience-checker | Evaluate audience alignment |
| clarity-optimizer | Analyze readability and reduce jargon |
| statistical-validator | Verify statistical rigor |
| figure-storyteller | Generate publication-quality figures |
| reviewer-response-generator | Generate structured rebuttals |
| latex-debugger | Parse logs and diagnose compilation errors |
| artifact-packager | Prepare code/data for public release |
| state-generator | Generate research-state.json for parallel analysis pipelines |
| content-archaeologist | Map blog posts into book structure |
| voice-drift-detector | Detect voice inconsistency across documents |
| redundancy-radar | Find semantic overlap across documents |
| reader-simulation | Simulate first-time reader experience |
| proof-auditor | Decompose and verify proofs step-by-step (T1–T6 hierarchy) |
| bounds-analyst | Analyze convergence rates and complexity bounds |
| notation-consistency-checker | Build symbol table, detect notation inconsistencies |
| theorem-dependency-mapper | Build theorem/lemma dependency DAG with criticality scores |
| proof-strategy-advisor | Suggest proof approaches for theorems and conjectures |
| counterexample-searcher | Stress-test theorems by dropping assumptions |
| intuition-formalizer | Translate informal intuitions into formal theorem statements |
| theory-connector | Find cross-domain theoretical connections and analogies |
Some skills use presentation templates. After cloning, install them to your local config:
mkdir -p ~/.claude/skills/templates
cp -r plugins/office/templates/slides ~/.claude/skills/templates/
cp -r plugins/academic/templates/posters ~/.claude/skills/templates/

To add new templates:
cd templates
python analyze_template.py /path/to/your/template.pptx --output slides --name "template-name"

For the paper-references skill:

pip install bibtex-updater

- Primary audience: ML researchers
- Target venues: NeurIPS, ICML, ICLR, AAAI
- LaTeX packages: cleveref, booktabs, amsmath
- Figures: matplotlib/seaborn, colorblind-safe palettes, PDF export
See CONTRIBUTING.md for guidelines on adding new skills and agents.
MIT License - see LICENSE for details.
