# chaptergen

CLI tool that generates YouTube chapters from video transcripts using LLMs. Works with local models via Ollama by default, with optional cloud provider support.
## Installation

```shell
pip install -e .
```

With OpenAI support:

```shell
pip install -e ".[openai]"
```

With dev/test dependencies:

```shell
pip install -e ".[all]"
```

## Quick Start

- Install and start Ollama: https://ollama.com/
- Pull a model:

  ```shell
  ollama pull llama3.1
  ```

- Generate chapters:

  ```shell
  chaptergen generate --input transcript.srt --format youtube
  ```

To use OpenAI instead:

```shell
export OPENAI_API_KEY=sk-...
chaptergen generate --input transcript.srt --provider openai --model gpt-4o-mini --format youtube
```

## Supported Input Formats

| Format | Extensions | Notes |
|---|---|---|
| Plain text | `.txt`, `.md` | Timestamps optional (`MM:SS` or `HH:MM:SS` prefix) |
| SubRip | `.srt` | Standard subtitle format |
| WebVTT | `.vtt` | Web subtitle format |
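For plain-text input, each line can carry an optional timestamp prefix that anchors chapter boundaries. A minimal hypothetical `transcript.txt` (illustrative content, not from a real transcript):

```text
00:00 Welcome everyone, today we're building a small CLI tool.
02:15 First, let's set up the project skeleton.
05:30 Now we can write the core logic.
```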
## Usage

```shell
chaptergen generate --input <file> [options]
```

Options:

| Flag | Default | Description |
|---|---|---|
| `--input, -i` | (required) | Path to transcript file |
| `--provider, -p` | `ollama` | LLM provider (`ollama`, `openai`) |
| `--model, -m` | `llama3.1` / `gpt-4o-mini` | Model name |
| `--format, -f` | `chapters` | Output format (`chapters`, `youtube`, `json`) |
| `--output, -o` | stdout | Write to file |
| `--api-key` | — | API key (prefer `--api-key-env`) |
| `--api-key-env` | — | Env var name holding API key |
| `--base-url` | — | Override provider URL |
| `--temperature` | `0.0` | Sampling temperature |
| `--max-chapters` | — | Suggest max chapter count to LLM |
| `--min-gap` | `30` | Minimum seconds between chapters |
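As a rough sketch of what `--min-gap` implies (an illustration only, not chaptergen's actual implementation), a post-filter enforcing a minimum spacing between kept chapters might look like:

```python
def filter_min_gap(chapters, min_gap=30):
    """Keep a chapter only if it starts at least `min_gap` seconds
    after the previously kept chapter; the first chapter is always kept."""
    kept = []
    for ch in chapters:
        if not kept or ch["start_seconds"] - kept[-1]["start_seconds"] >= min_gap:
            kept.append(ch)
    return kept

candidates = [
    {"start_seconds": 0, "title": "Introduction"},
    {"start_seconds": 20, "title": "Too close to the intro"},
    {"start_seconds": 135, "title": "Setting Up the Project"},
]
print([c["title"] for c in filter_min_gap(candidates)])
# → ['Introduction', 'Setting Up the Project']
```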
Verify that your provider is reachable and the model is available:

```shell
chaptergen check --provider ollama --model llama3.1
```

## Output Formats

`chapters` (default):

```
00:00 Introduction
02:15 Setting Up the Project
05:30 Writing Core Logic
```

`youtube`, ready to paste into a YouTube video description:

```
Chapters:
00:00 Introduction
02:15 Setting Up the Project
05:30 Writing Core Logic
```
`json`:

```json
{
  "provider": "ollama",
  "model": "llama3.1",
  "chapters": [
    {"timestamp": "00:00", "start_seconds": 0, "title": "Introduction"},
    {"timestamp": "02:15", "start_seconds": 135, "title": "Setting Up the Project"}
  ]
}
```

## Environment Variables

| Variable | Description |
|---|---|
| `CHAPTERGEN_PROVIDER` | Default provider |
| `CHAPTERGEN_MODEL` | Default model |
| `OLLAMA_HOST` | Ollama server URL (default: `http://localhost:11434`) |
| `OPENAI_API_KEY` | OpenAI API key |
| `OPENAI_BASE_URL` | OpenAI-compatible base URL |
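The `json` output format is convenient for scripting. As an illustration (assuming the schema shown in the example above), converting it back into YouTube-style lines:

```python
import json

# Sample chaptergen JSON output, matching the documented schema.
raw = """
{
  "provider": "ollama",
  "model": "llama3.1",
  "chapters": [
    {"timestamp": "00:00", "start_seconds": 0, "title": "Introduction"},
    {"timestamp": "02:15", "start_seconds": 135, "title": "Setting Up the Project"}
  ]
}
"""

data = json.loads(raw)
# One "MM:SS Title" line per chapter, as YouTube descriptions expect.
lines = [f'{ch["timestamp"]} {ch["title"]}' for ch in data["chapters"]]
print("\n".join(lines))
# → 00:00 Introduction
# → 02:15 Setting Up the Project
```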
## Recommended Models

Ollama:

- `llama3.1` — Good balance of speed and quality
- `llama3.1:70b` — Higher quality, needs more VRAM
- `mistral` — Fast, decent results

OpenAI:

- `gpt-5-mini` — Cost-effective, good quality
- `gpt-5.3` — Best quality
## Development

```shell
pip install -e ".[all]"
pytest
```

## Troubleshooting

- **"Cannot connect to Ollama"** — Make sure Ollama is running (`ollama serve`) and accessible at the expected URL.
- **"Model not found"** — Pull the model first: `ollama pull llama3.1`
- **"No valid chapters found"** — The LLM failed to return structured output. Try a different model or re-run (local models can be inconsistent). Adding `--temperature 0` helps with determinism.
- **Chapters look wrong** — Adjust `--min-gap` to control spacing, or use `--max-chapters` to limit count.
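When diagnosing "No valid chapters found", it can help to check whether each line of the model's output matches the expected `MM:SS Title` shape. A small standalone sketch (not chaptergen's actual parser):

```python
import re

# Matches "MM:SS Title" or "HH:MM:SS Title" chapter lines.
CHAPTER_RE = re.compile(r"^(?:(\d{1,2}):)?(\d{1,2}):(\d{2})\s+(.+)$")

def parse_chapter_line(line):
    """Return (start_seconds, title), or None if the line is malformed."""
    m = CHAPTER_RE.match(line.strip())
    if not m:
        return None
    hours = int(m.group(1) or 0)
    minutes, seconds = int(m.group(2)), int(m.group(3))
    return hours * 3600 + minutes * 60 + seconds, m.group(4)

print(parse_chapter_line("02:15 Setting Up the Project"))  # → (135, 'Setting Up the Project')
print(parse_chapter_line("not a chapter line"))            # → None
```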