A Claude AI skill for systematic auditing of academic manuscript references before submission.
| Layer | Scope |
|---|---|
| L1 | Authenticity — detect fabricated/hallucinated references, verify DOIs |
| L2 | Bibliographic accuracy — authors, year, volume, pages, journal |
| L3 | Text–list consistency — every in-text citation ↔ reference list entry |
| L4 | Citation appropriateness — does each citation support its claim? |
| L5 | Formatting & versions — style uniformity, R/Python/software version match |
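The L3 layer above (text–list consistency) can be sketched as a set comparison between in-text citations and reference-list keys. This is an illustrative sketch, not the skill's actual code: `find_orphans` and the simple author-year regex are hypothetical, and a real audit would also handle "et al.", multi-author citations, and numeric styles.

```python
import re

def find_orphans(manuscript_text, reference_keys):
    """Compare in-text author-year citations against reference-list keys.

    Returns (cited_but_missing, listed_but_uncited). The pattern only
    covers simple '(Author, 2020)' citations.
    """
    pattern = re.compile(r"\(([A-Z][A-Za-z\-]+),\s*(\d{4})\)")
    cited = {(m.group(1), m.group(2)) for m in pattern.finditer(manuscript_text)}
    listed = set(reference_keys)
    return sorted(cited - listed), sorted(listed - cited)

text = "Prior work (Smith, 2020) builds on (Jones, 2018)."
refs = {("Smith", "2020"), ("Brown", "2019")}
missing, uncited = find_orphans(text, refs)
# missing -> [('Jones', '2018')]; uncited -> [('Brown', '2019')]
```

Either non-empty result is a finding: a citation with no reference entry, or an entry nothing cites.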
AI-assisted academic writing frequently introduces hallucinated references — papers that don't exist, or DOIs that resolve to unrelated articles. This skill provides a structured, repeatable workflow to catch these and other citation errors before submission.
```
citation-audit-skill/
├── SKILL.md                      # Main skill instructions (bilingual)
├── LICENSE.txt                   # Apache 2.0
└── scripts/
    ├── crossref_batch_check.py   # CrossRef API batch query
    └── extract_docx.py           # Extract text from .docx manuscripts
```
- Install as a Claude skill (copy to `.claude/skills/` or `.agent/skills/`)
- Open a manuscript file (`.docx`, `.tex`, or `.bib`)
- Ask Claude: "Audit the references in this manuscript" or "检查这篇稿件的参考文献" ("Audit this manuscript's references")
- Dual verification: CrossRef API + web search, never trusts a single source
- Multi-environment: R, Python, Julia, MATLAB version checking
- Data source coverage: MODIS, WorldClim, GBIF, GenBank, and more
- Bilingual: Full Chinese–English support throughout
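The version-checking feature above can be illustrated by harvesting software version strings from the manuscript and flagging internal inconsistencies (e.g. Methods says R 4.3.1 while a caption says R 4.2.0). The patterns and function name are illustrative assumptions; real manuscripts phrase versions in many more ways.

```python
import re

VERSION_PATTERNS = {
    # Illustrative patterns for two of the supported environments.
    "R": re.compile(r"R (?:version )?(\d+\.\d+(?:\.\d+)?)"),
    "Python": re.compile(r"Python (\d+\.\d+(?:\.\d+)?)"),
}

def reported_versions(text):
    """Collect software versions mentioned in the text, per environment.

    More than one distinct version for the same environment is a
    consistency finding worth reporting.
    """
    return {name: sorted(set(pat.findall(text)))
            for name, pat in VERSION_PATTERNS.items()}

sample = ("Analyses used R version 4.3.1; figures were drawn in "
          "R 4.2.0 and Python 3.11.")
print(reported_versions(sample))
# {'R': ['4.2.0', '4.3.1'], 'Python': ['3.11']}
```

Here the two distinct R versions would be surfaced to the author as an L5 finding.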
Apache 2.0