# Jean Claude

**Universal AI Developer Workflows** - Transform any project into an AI-driven development environment.

Jean Claude (`jc`) is a CLI tool that enables programmatic AI agent orchestration for any codebase. Execute prompts, run multi-phase workflows, and monitor agent activity through a unified interface.
## Features

- **One-Command Setup**: `jc init` installs the skill, updates CLAUDE.md, and configures everything
- **Jean Claude CLI Skill**: 358-line comprehensive guide auto-installed to `.claude/skills/`
- **Two-Agent Workflow**: Opus plans features, Sonnet implements them
- **Coordinator Pattern**: Intelligent agent-to-human communication with smart escalation
- **Beads Integration**: Execute workflows directly from Beads tasks with `jc work`
- **Real-Time Streaming**: Watch agent output as it works with `--stream` mode
- **SDK-Based Execution**: Claude Agent SDK with Bedrock authentication
- **Workflow Composition**: Multi-phase SDLC workflows (plan → implement → verify → complete)
- **Git Worktree Isolation**: Safe parallel development without conflicts
- **Real-Time Telemetry**: SQLite event store with live monitoring UI
- **Push Notifications**: ntfy.sh integration for critical decision escalations
- **Multi-Project Support**: Project names in notifications when running multiple instances
## Installation

```bash
# Install with uv
uv pip install jean-claude

# Or install globally as a uv tool
uv tool install jean-claude

# Verify installation
jc --version
```

## Quick Start

```bash
# Initialize Jean Claude in your project (one-time setup)
cd my-project/
jc init

# This creates:
# - .jc-project.yaml (configuration)
# - .claude/skills/jean-claude-cli/ (comprehensive CLI documentation skill)
# - .claude/commands/ (slash commands like /prime)
# - CLAUDE.md section (project-specific quick reference)
# - specs/, agents/, .jc/ directories
```
```bash
# Run an ad hoc prompt
jc prompt "Analyze the codebase structure"

# Run a two-agent workflow (Opus plans, Sonnet implements)
jc workflow "Add user authentication with JWT"

# Or work on a Beads task directly
jc work jean_claude-2sz.4

# Run a simple chore workflow
jc run chore "Add error handling to login"
```

## Core Commands

```bash
jc init                              # Initialize ADW in current project
jc prompt "your prompt here"         # Execute ad hoc prompt
jc prompt "your prompt" --stream     # Stream output in real-time
jc run chore "task description"      # Run chore workflow
jc run feature "feature description" # Run feature workflow
```

## Two-Agent Workflow

The flagship workflow pattern: Opus plans, Sonnet implements.

```bash
# Full workflow (plan + implement)
jc workflow "Build user authentication"

# Use custom models
jc workflow "Complex feature" -i opus -c opus

# Run initializer and coder separately (modular approach)
jc initialize "Task description" -w my-workflow
jc resume my-workflow
```

## Coordinator Pattern

Jean Claude implements a coordinator pattern in which the main Claude Code instance intelligently triages agent questions, escalating to you only when truly needed.
**How it works:**

- Sub-agents can call the `ask_user` or `notify_user` tools when they need help
- Messages appear in the workflow's INBOX directory
- The coordinator (main Claude Code instance) sees questions and decides:
  - **Answer directly** (90% of cases) - technical questions, test failures, code conventions
  - **Escalate to human** (10% of cases) - business decisions, security, architecture choices
- You only get ntfy push notifications for critical decisions
**Setting up ntfy notifications:**

```bash
# 1. Install the ntfy app on your phone (iOS/Android/Web)

# 2. Choose a unique topic name
echo "jean-claude-$(openssl rand -hex 6)"

# 3. Subscribe to the topic in the ntfy app

# 4. Configure Jean Claude
export JEAN_CLAUDE_NTFY_TOPIC="your-topic-name"
```

See NTFY_SETUP.md and docs/coordinator-pattern.md for details.
**When you get escalated:**

1. SSH to your machine
2. Write a response to `agents/{workflow-id}/OUTBOX/response.json`
3. The agent receives your decision and continues
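As a sketch only - the actual response schema is documented in docs/coordinator-pattern.md, and the `decision` and `rationale` field names below are illustrative assumptions - writing a response from the command line might look like:

```python
import json
from pathlib import Path

# Hypothetical payload: "decision" and "rationale" are assumed field names,
# not the documented schema (see docs/coordinator-pattern.md).
workflow_id = "feature-auth-001"  # placeholder workflow id
outbox = Path("agents") / workflow_id / "OUTBOX"
outbox.mkdir(parents=True, exist_ok=True)

response = {
    "decision": "approve",
    "rationale": "Use JWT with a 15-minute access-token TTL.",
}
(outbox / "response.json").write_text(json.dumps(response, indent=2))
```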
**Slash commands:**

- `/mailbox` - View and respond to agent messages (coordinator use)
## Beads Integration

Work directly from Beads tasks:

```bash
# Execute a workflow from a Beads task
jc work jean_claude-2sz.4

# Plan only, don't implement
jc work task-123 --dry-run

# Pause for approval after planning
jc work task-123 --show-plan

# Use Opus for both agents
jc work task-123 --model opus
```

## Monitoring

```bash
jc status    # Check workflow status
jc logs      # View workflow event logs
jc dashboard # Start the web monitoring dashboard
jc version   # Display version information
```

## Project Management

```bash
jc init            # Initialize a new project (skill + CLAUDE.md + config)
jc migrate         # Update an existing project to the latest version
jc prime           # Gather project context
jc upgrade         # Upgrade jc to the latest version
jc upgrade --check # Check for updates only
```

**What `jc init` creates:**

- `.claude/skills/jean-claude-cli/` - Comprehensive CLI documentation skill
- `CLAUDE.md` section - Project-specific quick reference
- `.claude/commands/` - Slash commands (/prime, etc.)
- Configuration and directories
**What `jc migrate` does:**
- Installs missing skill
- Updates CLAUDE.md if needed
- Adds new slash commands
- Creates missing directories
- Safe for existing projects (doesn't overwrite)
## Cleanup

Keep your project clean by removing temporary files created by agents:

```bash
jc cleanup           # Remove files older than 7 days
jc cleanup --age 14  # Remove files older than 14 days
jc cleanup --all     # Remove all temporary files
jc cleanup --agents  # Also clean agent work directories
jc cleanup --dry-run # Preview what will be deleted
```

Temporary files are organized in `.jc/` subdirectories:

- `.jc/temp/` - Verification scripts (check_*.py, demo_*.py)
- `.jc/reports/` - Status reports (*_COMPLETE.md, *_VERIFICATION.md)
- `.jc/verification/` - Test outputs and logs
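The age-based sweep can be pictured as a simple mtime check. This is an illustrative sketch, not jc's actual implementation:

```python
import time
from pathlib import Path

def sweep(root: str, max_age_days: float = 7, dry_run: bool = False) -> list[Path]:
    """Remove files under root whose mtime is older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            removed.append(path)
            if not dry_run:  # dry-run only previews, mirroring --dry-run
                path.unlink()
    return removed
```

With `dry_run=True` the function reports candidates without deleting anything, the same preview behavior as `jc cleanup --dry-run`.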
## Real-Time Streaming

Real-time streaming displays output as the agent works:

```bash
# Stream output in real-time
jc prompt "Analyze the codebase" --stream

# Show tool uses and thinking process
jc prompt "Refactor authentication" --stream --show-thinking

# Different models with streaming
jc prompt "Quick question" --stream -m haiku
jc prompt "Complex analysis" --stream -m opus
```

**Benefits of streaming:**

- See progress in real time as the agent works
- Better user experience for long-running prompts
- Optional visibility into tool uses and the thinking process
- Graceful interrupt handling (Ctrl+C)
## Configuration

Jean Claude uses `.jc-project.yaml` for project-specific configuration:

```yaml
directories:
  specs: specs/
  agents: agents/
  trees: trees/
  source: src/
  tests: tests/
tooling:
  test_command: uv run pytest
  linter_command: uv run ruff check .
workflows:
  default_model: sonnet
  auto_commit: true
```

## Project Structure

```
project/
├── agents/                  # Agent working directories
│   └── {workflow_id}/
│       ├── state.json       # Workflow state with features
│       └── events.jsonl     # Event log (JSONL format)
├── trees/                   # Git worktrees (isolated execution)
├── specs/                   # Workflow specifications
│   └── beads-{task_id}.md   # Auto-generated from Beads tasks
├── .jc/                     # Internal state and temporary files
│   ├── events.db            # SQLite event store
│   ├── temp/                # Temporary verification scripts
│   ├── reports/             # Status and completion reports
│   └── verification/        # Test outputs and logs
└── .jc-project.yaml         # Project configuration
```
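Because `events.jsonl` stores one JSON object per line, a workflow's event log can be read with a few lines of Python. The `type` field in the usage note below is an assumed event attribute, shown only for illustration:

```python
import json
from pathlib import Path

def read_events(workflow_id: str, root: str = "agents"):
    """Yield events from agents/{workflow_id}/events.jsonl, one JSON object per line."""
    log = Path(root) / workflow_id / "events.jsonl"
    with log.open() as fh:
        for line in fh:
            line = line.strip()
            if line:  # tolerate blank lines
                yield json.loads(line)
```

For example, `[e["type"] for e in read_events("my-workflow")]` would list the event types in order, assuming each event carries a `type` field.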
## Authentication

```bash
# Anthropic API
export ANTHROPIC_API_KEY="your-api-key"

# Or AWS Bedrock
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_PROFILE="your-profile"
export AWS_REGION="us-west-2"
```

## Development

```bash
# Clone and install
git clone https://github.com/joshuaoliphant/jean-claude
cd jean-claude
uv sync

# Run the CLI in development
uv run jc --help
```

## Testing

Jean Claude has a focused test suite that tests OUR code, not external dependencies. We mock external tools like Beads rather than testing their implementation.
```bash
# Run all tests (parallel execution)
uv run pytest

# Run with verbose output
uv run pytest -v

# Run a specific test file
uv run pytest tests/test_work_command.py

# Run a specific test class
uv run pytest tests/test_work_command.py::TestWorkCommand

# Run tests matching a pattern
uv run pytest -k "beads"

# Run with coverage
uv run pytest --cov=jean_claude --cov-report=html
```

**Test layout:**

```
tests/
├── conftest.py                # Root fixtures (cli_runner, mock_beads_task, etc.)
├── core/
│   ├── conftest.py            # Core module fixtures (sample_beads_task, subprocess mocks)
│   ├── test_beads*.py         # Beads model and integration tests
│   └── ...
├── orchestration/
│   ├── conftest.py            # Orchestration fixtures (workflow_state, execution_result)
│   ├── test_auto_continue.py  # Auto-continue workflow tests
│   └── test_two_agent.py      # Two-agent pattern tests
├── templates/
│   ├── conftest.py            # Template path fixtures
│   └── test_beads_spec.py     # Template tests
├── test_work_command.py       # CLI work command tests
├── test_dashboard.py          # Dashboard tests
└── ...
```
| Fixture | Location | Purpose |
|---|---|---|
| `cli_runner` | conftest.py | Click CLI testing |
| `mock_beads_task` | conftest.py | Standard BeadsTask for testing |
| `mock_beads_task_factory` | conftest.py | Factory for custom BeadsTask |
| `work_command_mocks` | conftest.py | All mocks for the work command |
| `sample_beads_task` | core/conftest.py | Fully-populated BeadsTask |
| `mock_subprocess_success` | core/conftest.py | Subprocess mock for success |
| `mock_project_root` | orchestration/conftest.py | Temp project with agents/ dir |
| `sample_workflow_state` | orchestration/conftest.py | Pre-configured workflow state |
**Test categories:**

- Unit tests: individual functions and classes
- Integration tests: CLI commands and workflows
- Model tests: Pydantic models and validation
- Template tests: Jinja template rendering

**Guidelines:**

- Only test OUR code - mock external tools (Beads, etc.), don't test them
- Write tests first (TDD approach)
- Use existing fixtures from conftest.py files
- Follow the mock patching rule: patch where used, not where defined
- Run the full test suite before submitting a PR
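The "patch where used, not where defined" rule follows from how from-imports bind names: the importing module gets its own reference, so patching the defining module doesn't touch it. A self-contained illustration - the `beads` and `cli` modules here are toy stand-ins built at runtime, not the real package layout:

```python
import sys
import types
from unittest import mock

# Toy "beads" module that defines get_task.
beads = types.ModuleType("beads")
beads.get_task = lambda task_id: f"real:{task_id}"
sys.modules["beads"] = beads

# Toy "cli" module that binds its own reference via a from-import.
cli = types.ModuleType("cli")
sys.modules["cli"] = cli
exec(
    "from beads import get_task\n"
    "def work(task_id):\n"
    "    return get_task(task_id)\n",
    cli.__dict__,
)

# Patch where *defined*: cli.work still calls its own binding - the patch misses.
with mock.patch("beads.get_task", return_value="mocked"):
    missed = cli.work("t1")  # "real:t1"

# Patch where *used*: cli.work now resolves get_task to the mock.
with mock.patch("cli.get_task", return_value="mocked"):
    hit = cli.work("t1")  # "mocked"
```

In real tests the same principle means patching `jean_claude.<using_module>.get_task` rather than the module that defines it.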
## License

MIT