AI-powered spec-driven development - Build apps through conversation with your local LLM.
Paird CLI is a command-line tool that helps you build software through a structured, AI-assisted workflow. It takes your idea and generates:
- Requirements - What to build (user stories, acceptance criteria)
- Design - How to build it (architecture, data models)
- Tasks - Step-by-step implementation plan
- Code - Actual working code files
- Tests - Verification tests
The magic: Use two terminals - one generates code, one lets you chat and iterate in real-time.
```shell
# Terminal 1: Generate a project
paird-cli "create a todo app with dark theme" --autopilot --keep-alive

# Terminal 2: Watch and iterate
paird-cli --watch

# In the watch terminal, refine your app:
> /generate add a delete button to each todo
> /spec add user authentication with Google login
```

Install from source:

```shell
# Clone and install
git clone https://github.com/danaia/pairdCLI.git
cd pairdCLI
pip install -e .
```
```shell
# Configure your LLM (LM Studio, Ollama, etc.)
paird-cli config init
paird-cli config set llm.endpoint http://localhost:1234/v1
paird-cli config set llm.model qwen3-30b-instruct
```

Basic usage - describe what you want to build:

```shell
paird-cli "build a REST API with Express"
```

Autopilot, without generating tests:

```shell
paird-cli "create a landing page" --autopilot --no-tests
```

Terminal 1 - Work:

```shell
paird-cli "create a modern login page" --autopilot --keep-alive
```

Terminal 2 - Watch & Iterate:

```shell
paird-cli --watch

# Then use commands like:
> /generate make background dark
> /spec add forgot password flow
```

| Feature | Description |
|---|---|
| Dual Terminal Mode | One terminal generates, one lets you iterate |
| `/spec` Command | Generate detailed specs with iterative refinement |
| `/generate` Command | Quick code changes in real-time |
| Auto npm install | Dependencies installed automatically |
| Resume Support | Save progress, resume anytime |
| Local-first | Works with LM Studio, Ollama, any OpenAI-compatible API |
| Command | Description |
|---|---|
| `/generate <prompt>` | Quick code generation |
| `/spec <prompt>` | Detailed spec with refinement loop |
| `/read <file>` | Add file to context |
| `/status` | Show pipeline status |
| `/help` | All commands |
```shell
paird-cli config init                         # Create config
paird-cli config set llm.endpoint <url>       # Set LLM endpoint
paird-cli config set llm.model <model-name>   # Set model
paird-cli config show                         # View config
```

Recommended: Qwen3 30B+ models for best results (60K context window).
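The `llm.endpoint` setting can point at any server that speaks the OpenAI-compatible chat-completions protocol. As a minimal sketch of what that means (the endpoint URL and model name below are just the illustrative values from the config commands above, not requirements), this is the request shape such a server expects:

```python
import json
import urllib.request

# Example values from the config above; substitute your own.
ENDPOINT = "http://localhost:1234/v1"  # LM Studio's default port
MODEL = "qwen3-30b-instruct"

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}

# Build (but do not send) the request, to show the wire format an
# OpenAI-compatible server accepts.
req = urllib.request.Request(
    f"{ENDPOINT}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url)
# To actually call a running server: urllib.request.urlopen(req)
```

A successful response is JSON whose `choices[0]["message"]["content"]` field contains the model's reply.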
```
your-project/
├── .paird/specs/project-name/
│   ├── requirements.md    # Generated requirements
│   ├── design.md          # Technical design
│   ├── tasks.md           # Implementation tasks
│   └── session.json       # Resume state
├── src/                   # Generated code
├── tests/                 # Generated tests
└── package.json           # Auto-generated
```
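Everything needed to resume a run lives under `.paird/specs/`. As a small illustration of the layout above (this recreates it in a scratch directory purely for demonstration; nothing here invokes Paird CLI itself), the generated spec files can be inspected with plain `pathlib`:

```python
import tempfile
from pathlib import Path

# Recreate the generated layout from the tree above, for illustration only.
root = Path(tempfile.mkdtemp())
spec_dir = root / ".paird" / "specs" / "project-name"
spec_dir.mkdir(parents=True)
for name in ("requirements.md", "design.md", "tasks.md"):
    (spec_dir / name).touch()
(spec_dir / "session.json").write_text("{}")

# List the generated markdown specs, as you might in a real project.
specs = sorted(p.name for p in spec_dir.glob("*.md"))
print(specs)  # ['design.md', 'requirements.md', 'tasks.md']
```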
- Quick Start Guide - Full tutorial with examples
- CLI Reference - All commands and options
- Python 3.9+
- LM Studio, Ollama, or OpenAI-compatible endpoint
- macOS or Linux
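Since the tool targets Python 3.9+, it can help to check the interpreter before running `pip install -e .`; this is a generic version guard, not part of Paird CLI:

```python
import sys

# Paird CLI requires Python 3.9 or newer.
ok = sys.version_info >= (3, 9)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}:",
      "OK" if ok else "upgrade needed")
```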
Contributions welcome! Please open an issue or PR.
MIT License - see LICENSE for details.
Built with ❤️ using Rich, Prompt Toolkit, and local LLMs.