┌─────────────────────────────────────────────────────────────────┐
│                                                                 │
│      RevertIQ: Build a Production-Grade Mean-Reversion API      │
│       From Comprehensive Specs to Working Implementation        │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
A statistically rigorous API that analyzes historical market data to identify when and where mean-reversion trading strategies work best.
Request:

```json
{
  "ticker": "AAPL",
  "horizon": {"start": "2023-01-01", "end": "2024-12-31"},
  "signal": {"detrend": "ema", "zscore": {...}},
  "params": {"entry_grid": [-1.0, -1.5], ...}
}
```

Response:

```json
{
  "windows_ranked": [
    {
      "dow": "Tue",
      "window": "10:45-11:30",
      "oos_sharpe": 1.32,
      "oos_ret_per_trade_bp": 3.4,
      "fdr_adj_p": 0.03,
      "half_life_min": 27
    }
  ],
  "diagnostics": {
    "stationarity": {"adf_p": 0.01, "hurst": 0.38},
    "ou_half_life_min": 27.6
  },
  "provenance": {"data_hash": "sha256:...", "version": "1.0.0"}
}
```

Build this without starter code. You get:
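The `ou_half_life_min` diagnostic in the response can be estimated by fitting an AR(1) model to the price series. A minimal sketch, assuming evenly spaced bars; the function name and `bar_minutes` parameter are illustrative, not part of the spec:

```python
import numpy as np

def ou_half_life_minutes(prices, bar_minutes=1.0):
    """Estimate the mean-reversion half-life from an AR(1) fit:
    x[t] - x[t-1] = a + b * x[t-1] + noise, so phi = 1 + b and
    half-life = -ln(2) / ln(phi).
    """
    x = np.asarray(prices, dtype=float)
    dx = np.diff(x)        # one-bar changes
    lag = x[:-1]           # lagged price level
    # Least-squares fit of dx on the lagged level; the slope b should be
    # negative for a mean-reverting series.
    b, a = np.polyfit(lag, dx, 1)
    if b >= 0:
        return float("inf")  # no evidence of mean reversion
    return -np.log(2.0) / np.log(1.0 + b) * bar_minutes
```

For an Ornstein-Uhlenbeck-like series with AR(1) coefficient 0.9, the true half-life is ln(2)/(-ln(0.9)) ≈ 6.6 bars, which the estimator recovers approximately on a long enough sample.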
- ✅ Complete specifications
- ✅ Mathematical formulas
- ✅ API contracts
- ✅ Architecture blueprints
- ✅ Test scenarios
You provide:
- 🔨 Implementation
- 🧪 Tests
- 🚀 Deployment
revertiq/
├── README.md ← Start here!
├── QUICKSTART.md ← 15-minute setup
├── CONTRIBUTING.md ← How to share your work
├── PROJECT_OVERVIEW.md ← This file
├── .gitignore
│
├── docs/
│ ├── README.md ← Documentation index
│ ├── 00-implementation-guide.md ← Step-by-step checklist
│ ├── 01-product-requirements.md ← Math & statistics
│ ├── 02-api-specification.md ← REST API contract
│ ├── 03-system-architecture.md ← System design
│ ├── 04-ux-design.md ← User experience
│ ├── 05-wireframe-flows.md ← UI/CLI flows
│ ├── 06-starter-templates.md ← Boilerplate code
│ ├── 07-validation-testing.md ← Test scenarios
│ └── 08-faq.md ← Common questions
│
└── [Your implementation goes here]
┌─────────────┐
│ Client │ CLI, Web UI, API consumers
└──────┬──────┘
│
┌──────▼──────┐
│ API Layer │ FastAPI / Axum / Express
└──────┬──────┘
│
┌──────▼──────┐
│ Core Logic │ Z-scores, Walk-forward, FDR
└──────┬──────┘
│
┌──────▼──────┐
│ Data Layer │ Polygon API → Parquet → PostgreSQL
└─────────────┘
Languages: Python (recommended), Rust, Julia, Go
Storage: PostgreSQL + Parquet files
Cache: Redis
Queue: Redis/RabbitMQ/SQS
Data: Polygon.io (free tier works)
Week 1: Foundation
├── Day 1-2: Read docs, setup environment
├── Day 3-4: Polygon API + z-score calculation
└── Day 5-7: Statistical tests (ADF, Hurst)
Week 2: Core Analytics
├── Day 8-10: Walk-forward validation
├── Day 11-12: FDR correction
└── Day 13-14: Cost modeling
Week 3: API & Infrastructure
├── Day 15-17: REST endpoints
├── Day 18-19: Async jobs + caching
└── Day 20-21: Auth + rate limiting
Week 4: Polish
├── Day 22-24: CLI tool
├── Day 25-26: Tests
└── Day 27-30: Deployment + docs
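Week 2's walk-forward validation boils down to rolling train/test index windows: fit on the train slice, evaluate on the out-of-sample slice that immediately follows, then roll forward. A minimal sketch (names and signature are illustrative):

```python
def walk_forward_splits(n_obs, train_size, test_size, step=None):
    """Yield rolling (train_indices, test_indices) pairs for walk-forward
    validation. Each test window starts where its train window ends, so
    the model is always evaluated on data it has never seen.
    """
    step = step if step is not None else test_size
    start = 0
    while start + train_size + test_size <= n_obs:
        split = start + train_size
        yield range(start, split), range(split, split + test_size)
        start += step
```

For example, `walk_forward_splits(100, 60, 20)` yields two folds: train 0-59 / test 60-79, then train 20-79 / test 80-99.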
Your implementation is complete when:
- POST /v1/analyze returns ranked windows
- Walk-forward prevents overfitting
- FDR correction controls false discoveries
- Cost modeling integrated
- Async job support
- ADF, KPSS, Hurst tests
- Bootstrap confidence intervals
- OU half-life estimation
- Deterministic outputs
- API matches spec exactly
- Provenance tracking (data_hash + version)
- Result caching
- Error handling
- Tests (>80% coverage)
- CLI with pretty output
- Web dashboard with heatmaps
- Webhooks for async notifications
- Docker deployment
- Live monitoring
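The FDR correction in the checklist is typically the Benjamini-Hochberg step-up procedure (in practice you might call `statsmodels.stats.multitest.multipletests` with `method="fdr_bh"`); a hand-rolled sketch for clarity:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean array
    marking which hypotheses are rejected while controlling the false
    discovery rate at `alpha`.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k / m) * alpha, then reject
    # hypotheses 1..k (in sorted order).
    thresholds = alpha * np.arange(1, m + 1) / m
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject
```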
- Mean Reversion: Prices return to their average after deviating
- Z-Score: (price - mean) / std_dev, the normalized deviation
- Walk-Forward: Train on past, test on future, roll forward
- FDR: Control false discoveries when testing many hypotheses
- Hurst < 0.5: Indicates mean-reverting behavior
- Provenance: Every response includes data_hash + version
- Idempotency: Same request → same result (safe retries)
- Vectorization: Use numpy/pandas, not Python loops
- Caching: Store results and intermediate calculations
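The z-score and vectorization concepts combine naturally in pandas. A sketch of an EMA-detrended rolling z-score, matching the `"detrend": "ema"` option in the request; the `span` and `window` defaults are illustrative assumptions, not from the spec:

```python
import pandas as pd

def ema_zscore(prices: pd.Series, span: int = 20, window: int = 60) -> pd.Series:
    """Rolling z-score of EMA-detrended prices, fully vectorized:
    no Python loops, just pandas' ewm/rolling machinery.
    """
    # Residual of price versus its exponential moving average.
    resid = prices - prices.ewm(span=span, adjust=False).mean()
    # Normalize the residual by its own rolling mean and std.
    mean = resid.rolling(window).mean()
    std = resid.rolling(window).std()
    return (resid - mean) / std
```

Note that a constant price series yields an all-NaN signal (zero rolling std), which downstream code should treat as "no signal" rather than an error.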
By completing this, you'll gain expertise in:
- Quantitative Finance: Mean reversion, z-scores, OU processes
- Statistics: Hypothesis testing, multiple testing correction
- Time Series: Stationarity tests, autocorrelation
- API Design: REST, async jobs, versioning
- Data Engineering: Parquet, caching, provenance
- System Architecture: Queues, workers, deployment
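For the provenance and caching skills above, one common scheme hashes a canonical JSON encoding of the request to get the `data_hash` used for idempotent result caching. A sketch; the exact hashing scheme is an assumption, not specified in the docs:

```python
import hashlib
import json

def data_hash(payload: dict) -> str:
    """Hash analysis inputs deterministically: canonical JSON (sorted
    keys, fixed separators) -> sha256. The same request always produces
    the same hash, which supports idempotent retries, result caching,
    and the provenance field in responses.
    """
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because keys are sorted, two requests that differ only in JSON key order map to the same hash and can share a cached result.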
- Read QUICKSTART.md
- Follow docs/00-implementation-guide.md
- Use templates from docs/06-starter-templates.md
- Validate with docs/07-validation-testing.md
- Read all docs in order (00 → 08)
- Design your architecture
- Implement from scratch
- Compare with reference specs
- Skim 01-product-requirements.md
- Copy starter template from 06
- Build minimal viable API
- Iterate and expand
- Docs: /docs folder (10 comprehensive guides)
- Community: Tag #revertiq-vibe-coding
- Questions: See docs/08-faq.md
- Help: Open a discussion
This is vibe coding: you're given the vision (production-quality specs) and the vibe (statistical rigor + clean APIs), then you code it your way.
No hand-holding. No starter code. Just specs and your skills.
This mirrors real-world engineering: requirements → architecture → implementation.
- Statistical rigor — not just backtesting, but proper hypothesis testing
- Reproducibility — Deterministic outputs with full provenance
- Performance — Efficient vectorized operations on large datasets
- API design — Clean, well-documented REST API
- Production-ready — Caching, rate limiting, async jobs
📝 License: Docs are reference material. Your code is yours.
🤝 Sharing: Encouraged! Tag your repos, share findings, help others.
Ready to vibe code?
Start with README.md → QUICKSTART.md → docs/00-implementation-guide.md
Let's build something amazing! 🚀