# Content Reviewer

An LLM-powered tool for reviewing written content. It comes with sensible defaults for technical writing (e.g., blog posts, docs, guides), and you can customize the review criteria to match your own standards.
## Features

- Sensible defaults out of the box for technical writing
- Customizable review criteria via instruction files
- Structured output with severity levels (error, warning, suggestion)
- Multiple LLM providers (OpenAI, Anthropic, Google)
- Fact-checking via web search to verify claims in content
## Quick Start (CLI)

```bash
export OPENAI_API_KEY="sk-..."
npx @content-reviewer/cli article.md
```

See CLI Documentation for details.
## Library Usage

```bash
npm install @content-reviewer/core
```

```typescript
import { ContentReviewer, createReviewConfig } from '@content-reviewer/core';

const config = createReviewConfig({
  language: 'en',
  llm: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY },
});

const reviewer = new ContentReviewer(config);
const result = await reviewer.review({
  rawContent: '# My Article\n\nContent here...',
  source: 'article.md',
});
```

See Core Documentation for API reference.
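The exact shape of `result` is documented in the Core API reference. As an illustration only, assuming the review yields findings with `severity` and `message` fields (hypothetical names, not confirmed by the library), you could tally them like this:

```typescript
// Hypothetical finding shape -- field names are assumptions; see the Core docs.
type Severity = 'error' | 'warning' | 'suggestion';
interface Finding {
  severity: Severity;
  message: string;
  line?: number;
}

// Count findings per severity level.
function countBySeverity(findings: Finding[]): Record<Severity, number> {
  const counts: Record<Severity, number> = { error: 0, warning: 0, suggestion: 0 };
  for (const f of findings) counts[f.severity]++;
  return counts;
}

// Stubbed findings for demonstration:
const sample: Finding[] = [
  { severity: 'error', message: 'Code block missing language', line: 12 },
  { severity: 'warning', message: 'Avoid "latest version"' },
];
console.log(countBySeverity(sample)); // { error: 1, warning: 1, suggestion: 0 }
```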
## Custom Instructions

You can provide your own review criteria via an instruction file. This replaces the default instructions, so include everything you want checked.

```bash
content-review article.md --instruction my-standards.md
```

Example `my-standards.md`:

```markdown
## error

- ...
- Product name must be "MyProduct" (not "myproduct")
- Code blocks must specify language

## warning

- ...
- Avoid "latest version" - use exact version numbers

## ignore (do NOT report)

- Passive voice
- Paragraph length
- Minor wording suggestions
```

## Packages

| Package | Description |
|---|---|
| @content-reviewer/cli | Command-line interface |
| @content-reviewer/core | Core library for programmatic use |
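The instruction-file format shown earlier groups rules under severity headings (`## error`, `## warning`, `## ignore`). As a sketch of those semantics only, not the tool's actual parser, such a file could be read like this:

```typescript
// Sketch: collect instruction-file bullets under their severity heading.
// Heading names follow the example above; this is not the real implementation.
type Section = 'error' | 'warning' | 'ignore';

function parseInstructions(markdown: string): Record<Section, string[]> {
  const rules: Record<Section, string[]> = { error: [], warning: [], ignore: [] };
  let current: Section | null = null;
  for (const line of markdown.split('\n')) {
    const heading = line.match(/^##\s+(error|warning|ignore)/);
    if (heading) {
      current = heading[1] as Section;
      continue;
    }
    const bullet = line.match(/^-\s+(.*)/);
    if (bullet && current) rules[current].push(bullet[1]);
  }
  return rules;
}

const parsed = parseInstructions(
  '## error\n- Code blocks must specify language\n## ignore (do NOT report)\n- Passive voice'
);
console.log(parsed.error); // [ 'Code blocks must specify language' ]
```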
## GitHub Action

Automate reviews in Pull Requests with Content Reviewer Action.
## Requirements

- Node.js >= 20.0.0
- API key for OpenAI, Anthropic, or Google
## Development

```bash
pnpm install
pnpm build
pnpm test
```

## License

MIT