
Infinite Engine

A Long-Running Agent Orchestrator That Drives AI Coding CLIs Beyond the Limits of a Single Context Window


Quick Start · How It Works · Running Modes · Configuration · Changelog


⚡ 30-Second Overview

┌──────────────────────────────────────────────────────────────────┐
│                        tmux session                              │
│  ┌──────────────────────┐  ┌───────────────────────────────────┐ │
│  │   Engine Control      │  │   CLI Execution (Native TUI)     │ │
│  │                       │  │                                   │ │
│  │  Session 1 ✅         │  │  🧠 AI is thinking...            │ │
│  │  Session 2 ✅         │  │  🔧 Calling write_file(...)      │ │
│  │  Session 3 🔄         │  │  📝 Generating src/App.vue       │ │
│  │  Progress: 7/10       │  │  ✅ Tests passing                │ │
│  │                       │  │                                   │ │
│  └──────────────────────┘  └───────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────────┘

Infinite Engine orchestrates AI Coding CLIs (Claude Code, Gemini CLI, Codex CLI) to autonomously develop complete projects across unlimited sessions — each session picks up exactly where the last one left off.

💡 Based on the Dual-Agent Pattern from Anthropic Research: an outer orchestrator + inner AI Coding CLI working together for long-running autonomous coding.


🎯 What is Infinite Engine?

| Feature | Description |
| --- | --- |
| 🔄 Infinite Context Window | Automatically splits work into sessions with cross-session memory and progress transfer |
| 🖥️ Native tmux TUI | Watch the AI think, call tools, and generate code in real time |
| 🧠 Smart Orchestration | Goal contracts, regression detection, phase awareness, failure memory injection |
| 🔌 Multi-CLI Support | Pluggable drivers for Claude Code / Gemini CLI / Codex CLI |
| 📦 External Project Dir | Engine and project fully separated: `cd project && ./launch.sh` |
| 🤖 Skill-Guided Creation | Answer 7 questions in a conversation to auto-generate spec + config + launch script |

🧠 How It Works

```mermaid
graph TB
    subgraph Engine["Infinite Engine - Outer Orchestrator"]
        ORC[Orchestrator] --> TS[Target Selector]
        ORC --> PM[Prompt Builder]
        ORC --> SH[Session History]
        ORC --> FS[Feature Status]
        ORC --> RD[Regression Detector]
    end

    subgraph CLI["AI Coding CLI - Inner Agent"]
        CC[Claude Code]
        GC[Gemini CLI]
        CX[Codex CLI]
    end

    ORC -- "Session N prompt + context" --> CLI
    CLI -- "progress.txt + feature_list.json" --> ORC

    style Engine fill:#1a1a2e,stroke:#16213e,color:#fff
    style CLI fill:#0f3460,stroke:#533483,color:#fff
```

Session 1 (Initializer Agent): Reads the spec, generates a feature list, scaffolds the project, initializes git.

Session 2+ (Coding Agent): Resumes from where the previous session stopped. Each session focuses on one target feature, commits progress, and updates status. The engine injects context: session history summary, known issues, phase strategy, and priority overrides.


🚀 Quick Start

Option 1: Skill-Guided Creation (Recommended)

Use the Infinite Engine Skill in Claude Code:

Use the Infinite Engine skill to create a blog system with 10 features.

The Skill guides you through project type, tech stack, and scale — then auto-generates:

  • app_spec.txt — Full project specification
  • config.yaml — Engine configuration
  • launch.sh — One-click launch script
```bash
cd /path/to/your-project && ./launch.sh
```

Option 2: Manual Setup

```bash
# 1. Install dependencies
cd infinite-engine
pip install -r requirements.txt

# 2. Create project directory
mkdir -p /path/to/my-project && cd /path/to/my-project

# 3. Write app_spec.txt (project specification)
cp /path/to/infinite-engine/specs/example_web_app.txt app_spec.txt
# Edit app_spec.txt with your requirements

# 4. Create config.yaml
cat > config.yaml << 'EOF'
project:
  name: "my-project"
  spec_file: "app_spec.txt"
  feature_count: 30

driver:
  name: claude
  command: claude
  timeout: 600
  extra_args: ["--dangerously-skip-permissions"]

engine:
  max_iterations: null
  delay_between_sessions: 3
  tmux_mode: "always"

prompts:
  initializer: "prompts/initializer.md"
  coding: "prompts/coding.md"

progress:
  feature_list: "feature_list.json"
  progress_file: "progress.txt"
EOF

# 5. Create launch.sh
cat > launch.sh << 'EOF'
#!/bin/bash
ENGINE_DIR="/path/to/infinite-engine"
PROJECT_DIR="$(cd "$(dirname "$0")" && pwd)"
exec python3 "$ENGINE_DIR/infinity.py" --config "$PROJECT_DIR/config.yaml" "$@"
EOF
chmod +x launch.sh

# 6. Launch
./launch.sh
```

🖥️ Running Modes

tmux Background Mode (Recommended)

```bash
# Set tmux_mode: "always" in config.yaml
./launch.sh

# Watch the AI working in real time
tmux attach -t ie-my-project

# Detach without stopping the engine
# Press Ctrl+B then D

# Stop the engine
tmux kill-session -t ie-my-project
```

tmux Auto Mode

```bash
# Set tmux_mode: "auto" in config.yaml
# Inside tmux → auto split panes side by side
# Outside tmux → runs in foreground mode
./launch.sh
```

Foreground Pipe Mode

```bash
# Set tmux_mode: "never" in config.yaml
./launch.sh
```
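The three modes can be summed up in one decision function. A minimal sketch, assuming `auto` detects an enclosing tmux session via the `$TMUX` environment variable (which tmux sets inside sessions); the engine's actual detection logic may differ.

```python
import os


def resolve_run_mode(tmux_mode: str, env: dict = os.environ) -> str:
    """Map the config's tmux_mode value to a concrete run mode.

    "always" -> run inside tmux; "never" -> foreground pipe mode;
    "auto"   -> tmux only when already inside a tmux session.
    """
    if tmux_mode == "always":
        return "tmux"
    if tmux_mode == "never":
        return "foreground"
    if tmux_mode == "auto":
        # tmux exports $TMUX inside a session; absent means we are outside
        return "tmux" if env.get("TMUX") else "foreground"
    raise ValueError(f"unknown tmux_mode: {tmux_mode!r}")
```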

⌨️ CLI Arguments

```bash
# Use the default config.yaml
python infinity.py

# Custom config file
python infinity.py --config /path/to/project/config.yaml

# Override running mode
python infinity.py --config project/config.yaml --tmux always

# Override CLI driver
python infinity.py --driver gemini-internal

# Limit iterations (useful for testing)
python infinity.py --max-iterations 3
```
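The flag surface above could be parsed with a few lines of `argparse`. This is an illustrative sketch of the CLI shown, not the engine's actual parser; the defaults are assumptions.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Sketch of the CLI flags demonstrated above."""
    parser = argparse.ArgumentParser(prog="infinity.py")
    parser.add_argument("--config", default="config.yaml",
                        help="path to the project's config.yaml")
    parser.add_argument("--tmux", choices=["always", "auto", "never"],
                        help="override engine.tmux_mode")
    parser.add_argument("--driver",
                        help="override driver.name (e.g. claude, gemini-internal)")
    parser.add_argument("--max-iterations", type=int,
                        help="stop after N sessions (useful for testing)")
    return parser
```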

🔌 Supported CLI Tools

| Driver | Package | Command | Mode |
| --- | --- | --- | --- |
| Claude Code | @anthropic-ai/claude-code | claude | Native (official) |
| Claude Code Internal | @tencent/claude-code-internal | claude-internal | Internal fork |
| Gemini CLI | @google/gemini-cli | gemini | Native (official) |
| Gemini CLI Internal | @tencent/gemini-cli-internal | gemini-internal | Internal fork |
| Codex CLI | @openai/codex | codex | Native (official) |
| Codex CLI Internal | @tencent/codex-internal | codex-internal | Internal fork |

Recommended: Use the native (official) drivers for open-source usage. Internal drivers are for Tencent's internal infrastructure.


⚙️ Configuration

```yaml
# Project
project:
  name: "my-project"           # Project name
  spec_file: "app_spec.txt"    # Project specification file
  # output_dir: "."            # Defaults to config file directory
  feature_count: 30            # Number of features

# CLI Driver
driver:
  name: claude                 # Driver name
  command: claude              # Executable name
  timeout: 600                 # Session timeout (seconds)
  extra_args:                  # Additional CLI arguments
    - "--dangerously-skip-permissions"

# Engine Settings
engine:
  max_iterations: null         # null = unlimited
  delay_between_sessions: 3    # Delay between sessions (seconds)
  tmux_mode: "always"          # always / auto / never

# Prompt Templates (resolved relative to engine directory)
prompts:
  initializer: "prompts/initializer.md"
  coding: "prompts/coding.md"

# Progress Tracking (resolved relative to project directory)
progress:
  feature_list: "feature_list.json"
  progress_file: "progress.txt"
```

Path Resolution Rules

| Field | Resolution |
| --- | --- |
| output_dir | Omitted → config file directory; relative → relative to config dir; absolute → as-is |
| spec_file | Relative to the config file directory |
| prompts.* | Looks in the config dir first, falls back to the engine dir |
| progress.* | Relative to the project directory |

📐 Architecture

```
infinite-engine/
├── infinity.py              # Entry point + CLI argument parsing
├── config.yaml              # Default configuration template
├── requirements.txt         # Python dependencies
│
├── engine/                  # Core orchestration logic
│   ├── orchestrator.py      # Main loop — session scheduling + lifecycle
│   ├── prompts.py           # Prompt template loading + dynamic injection
│   ├── session.py           # Session execution abstraction
│   ├── session_history.py   # Structured history — cross-session memory
│   ├── target_selector.py   # Target feature selection contract
│   ├── feature_status.py    # Feature state machine
│   ├── error_analysis.py    # Structured error snapshot analysis
│   ├── progress.py          # Progress tracking
│   ├── tmux_manager.py      # tmux lifecycle + pane interaction
│   ├── schema.py            # Configuration validation
│   ├── defaults.py          # Default value management
│   ├── validators.py        # Runtime validators
│   ├── lockfile.py          # Process lock
│   ├── logger.py            # Session logger
│   └── ui.py                # Unified CLI output layer (Rich)
│
├── drivers/                 # CLI driver adapters
│   ├── __init__.py          # Driver registry + auto-discovery
│   ├── base.py              # BaseDriver + DriverResult
│   ├── claude.py            # Claude Code driver (official)
│   ├── claude_internal.py   # Claude Code Internal driver
│   ├── gemini.py            # Gemini CLI driver (official)
│   ├── gemini_internal.py   # Gemini CLI Internal driver
│   ├── codex.py             # Codex CLI driver (official)
│   └── codex_internal.py    # Codex CLI Internal driver
│
├── prompts/                 # Prompt templates
│   ├── initializer.md       # Session 1 — project initialization
│   └── coding.md            # Session 2+ — incremental coding
│
├── specs/                   # Example specs
│   ├── example_web_app.txt
│   └── hello_world.txt
│
├── tests/                   # Test suite (387 tests)
│   └── ...                  # 17 test modules
│
└── docs/                    # Documentation
    ├── user-guide.md
    ├── template-variables.md
    └── custom-templates.md
```

🔧 Adding a New CLI Driver

```python
# drivers/my_tool.py
from drivers.base import BaseDriver

class MyToolDriver(BaseDriver):
    name = "my-tool"
    default_command = "my-tool"
    install_package = "@scope/my-tool"

    def build_command(self, prompt: str, cwd: str) -> list[str]:
        return [self.command, "--prompt", prompt, *self.extra_args]

    def build_command_tmux(self, prompt: str, cwd: str) -> list[str]:
        """Command for tmux mode (optional, shows the native TUI)."""
        return [self.command, "--prompt", prompt, "--verbose", *self.extra_args]
```

Register it in drivers/__init__.py and you're good to go.
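A registry for this kind of lookup is typically a name-to-class mapping. The sketch below is a hypothetical shape; the real `drivers/__init__.py` (which also does auto-discovery) may differ.

```python
# Hypothetical driver registry: maps driver.name from config.yaml to a class.
DRIVERS: dict[str, type] = {}


def register(driver_cls: type) -> type:
    """Index a driver class by its `name` attribute; usable as a decorator."""
    DRIVERS[driver_cls.name] = driver_cls
    return driver_cls


def get_driver(name: str) -> type:
    """Look up a registered driver, with a helpful error for typos."""
    try:
        return DRIVERS[name]
    except KeyError:
        raise ValueError(f"unknown driver {name!r}; known: {sorted(DRIVERS)}")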


📝 Prompt Template Variables

| Variable | Description |
| --- | --- |
| `{spec_file}` | Specification file name |
| `{feature_count}` | Target feature count |
| `{project_dir}` | Project directory path |
| `{progress_file}` | Progress log file name |
| `{feature_list_file}` | Feature list file name |
| `{project_name}` | Project name |
| `{session_history_summary}` | Historical session summary (auto-injected) |
| `{known_issues_block}` | Known issues / failure memory (dynamically injected) |
| `{phase_hint_block}` | Phase strategy hint (dynamically injected) |
| `{priority_override}` | Urgent priority override (conditionally injected) |

See docs/template-variables.md for details.
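Substituting these variables amounts to string formatting, with the dynamically injected blocks collapsing to empty strings when there is nothing to inject. A minimal sketch (the engine's `prompts.py` may implement this differently):

```python
def render_prompt(template: str, variables: dict[str, str]) -> str:
    """Fill template variables; dynamic blocks default to "" when absent."""
    dynamic_defaults = {
        "known_issues_block": "",
        "phase_hint_block": "",
        "priority_override": "",
    }
    return template.format(**{**dynamic_defaults, **variables})
```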


📊 Version History

See CHANGELOG.md for the full history.

```mermaid
graph LR
    V1["v1 - Core Framework"] --> V2["v2 - Self-Evolution"]
    V2 --> V3["v3 - Bootstrap + Infra"]
    V3 --> V4["v4 - Standardization + tmux"]
    V4 --> V5["v5 - External Projects + Skill"]
    V5 --> V2_0["v2.0 Current"]

    style V2_0 fill:#2da44e,stroke:#1a7f37,color:#fff
```


🤝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.


📄 License

This project is licensed under the MIT License — see the LICENSE file for details.


Made with ❤️ by ArnoFrost
