Inspiration

The quantitative finance industry has a talent concentration problem. Alpha-generating insight doesn't cluster exclusively at elite institutions in a handful of cities; it's distributed across the globe. A brilliant trader in Mumbai or Chengdu with an intellectual edge has no pathway to institutional capital, not because their ideas lack merit, but because they lack access. We call this "Dark Talent": intellectual capital that exists but goes untapped due to institutional gatekeeping.

OpenQuant was born from the belief that alpha can come from anyone, and that the right infrastructure will unlock it.

We were also frustrated by how reductive conventional strategy evaluation is. Reducing a trading strategy solely to its Sharpe ratio discards key underlying information.


What It Does

OpenQuant is an open-submission platform and reasoning framework for evaluating quantitative trading strategies. Anyone, anywhere can submit a trading pitch: their thesis, data, methodology, and backtest results. The system then:

  1. Validates the pitch through Eva, a multi-agent pipeline that checks for data fabrication, coding errors (look-ahead bias, survivorship bias, overfitting), and data quality issues. Eva triages each submission into one of three outcomes: rejected, sent back for clarification, or advanced to scoring.

  2. Scores the strategy using a two-stage mathematical framework: admissibility gates that enforce benchmark dominance (positive alpha, positive excess return, Sharpe > 0, profit factor > 1, max drawdown > -50%), followed by a nonlinear composite scoring function across 12 metrics mapped through calibrated sigmoid transforms. The full framework is outlined in our 7-Page Thesis.

  3. Recommends capital allocation scaled to the score and time horizon, up to £20,000, and, via integration with xStocks on Solana, can execute the strategy autonomously on-chain.
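The gate-then-score flow above can be sketched as follows. The five gate thresholds come from the list in step 2; the `BacktestMetrics` container and the linear score-to-capital mapping are illustrative assumptions (the actual mapping also weighs time horizon):

```python
from dataclasses import dataclass

@dataclass
class BacktestMetrics:
    alpha: float          # annualised alpha vs. benchmark
    excess_return: float  # return over benchmark
    sharpe: float
    profit_factor: float
    max_drawdown: float   # negative fraction, e.g. -0.30 for a 30% drawdown

def passes_admissibility(m: BacktestMetrics) -> bool:
    """Hard gates enforcing benchmark dominance; all must hold before scoring."""
    return (
        m.alpha > 0
        and m.excess_return > 0
        and m.sharpe > 0
        and m.profit_factor > 1
        and m.max_drawdown > -0.50
    )

MAX_ALLOCATION_GBP = 20_000

def recommend_allocation(score: float) -> float:
    """Illustrative linear map from a composite score in [0, 1] to capital."""
    return round(MAX_ALLOCATION_GBP * max(0.0, min(1.0, score)), 2)
```

A strategy that fails any single gate never reaches the composite scorer, which is what separates feasibility from optimisation.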


How We Built It

The OpenQuant team divided along natural lines of expertise. Alex drove the research direction and overall architecture. Cameron built the mathematical framework: the admissibility gates, sigmoid scoring functional, z-score population normalisation, and allocation mapping. Kamai developed the computational metric system that translates raw backtest data into the feature vectors the scorer consumes. Edward engineered the agentic ML system, including Eva's concurrent LLM-backed agents (built on Google's Gemini at temperature zero for deterministic outputs) running via a thread pool.

The full stack spans: a structured pitch intake pipeline with deterministic data quality scoring, two concurrent validation agents (fabrication detector and coding errors auditor), a 12-component weighted scoring engine with population-relative normalisation, and a Solana/xStocks integration for tokenized agentic trade execution.
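A minimal sketch of the concurrent validation step, using trivial rule stubs in place of the real Gemini-backed agents (the stub logic and field names are assumptions for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for Eva's two LLM-backed validation agents.
def fabrication_detector(pitch: dict) -> dict:
    flags = []
    if pitch.get("min_volume", 1) < 0:
        flags.append({"issue": "negative volume", "severity": "critical"})
    return {"agent": "fabrication", "flags": flags}

def coding_errors_auditor(pitch: dict) -> dict:
    flags = []
    if pitch.get("uses_future_data"):
        flags.append({"issue": "look-ahead bias", "severity": "high"})
    return {"agent": "coding_errors", "flags": flags}

def validate(pitch: dict) -> list:
    """Run both agents concurrently via a thread pool and merge their flags."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(lambda agent: agent(pitch),
                                [fabrication_detector, coding_errors_auditor]))
    return [flag for result in results for flag in result["flags"]]
```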


Challenges We Ran Into

Designing the scoring functional without degenerate solutions was the core theoretical challenge. Any weighted scoring system risks a strategy gaming one heavily-weighted metric while being structurally broken elsewhere. The sigmoid transform with calibrated thresholds, combined with hard admissibility gates, was our solution, but calibrating the 12 sets of parameters to reflect genuine institutional intuition required significant iteration.
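As a sketch of how calibrated sigmoids resist gaming: each raw metric is squashed through its own sigmoid before weighting, so pushing one metric far past its threshold yields diminishing marginal reward. The thresholds, steepnesses, and weights below are invented for illustration, not the calibrated production values, and only two of the 12 metrics are shown:

```python
import math

def sigmoid_score(x: float, threshold: float, steepness: float) -> float:
    """Map a raw metric into (0, 1); discrimination concentrates near `threshold`."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - threshold)))

# Illustrative calibration (all values are assumptions).
CALIBRATION = {
    "sharpe":        {"threshold": 1.0, "steepness": 2.0, "weight": 0.5},
    "profit_factor": {"threshold": 1.5, "steepness": 3.0, "weight": 0.5},
}

def composite_score(metrics: dict) -> float:
    """Weighted sum of sigmoid-transformed metrics; bounded in (0, 1)."""
    return sum(
        cfg["weight"] * sigmoid_score(metrics[name], cfg["threshold"], cfg["steepness"])
        for name, cfg in CALIBRATION.items()
    )
```

Because each transform saturates, inflating one metric from excellent to absurd buys almost nothing, while hard gates stop structurally broken strategies from ever being scored.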

Fabrication detection is hard to automate. Rule-based checks catch obvious cases (constant prices, negative volume), but sophisticated manipulation, such as unnaturally smooth return series or post-hoc timestamp editing, requires contextual reasoning. Prompt-engineering the fabrication agent to reliably produce structured, severity-coded JSON verdicts took considerable work.
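The deterministic tier of those checks might look like the sketch below; the 0.9 lag-1 autocorrelation cutoff for "unnaturally smooth" returns is an assumed placeholder for what, in Eva, the LLM agent judges contextually:

```python
def rule_based_fabrication_flags(prices: list, volumes: list, returns: list) -> list:
    """Deterministic checks that catch the obvious fabrication cases."""
    flags = []
    if len(set(prices)) == 1:
        flags.append({"issue": "constant price series", "severity": "critical"})
    if any(v < 0 for v in volumes):
        flags.append({"issue": "negative volume", "severity": "critical"})
    # Unnaturally smooth returns: lag-1 autocorrelation far above market norms.
    n = len(returns)
    if n > 2:
        mean = sum(returns) / n
        var = sum((r - mean) ** 2 for r in returns)
        if var > 0:
            autocorr = sum((returns[i] - mean) * (returns[i - 1] - mean)
                           for i in range(1, n)) / var
            if autocorr > 0.9:
                flags.append({"issue": "unnaturally smooth return series",
                              "severity": "high"})
    return flags
```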

Population normalisation has edge cases. The z-score re-scoring triggers a full pool re-score when any new strategy is added, which is computationally straightforward but creates ranking instability that users need to be prepared for.
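The re-score and its instability can be illustrated with a minimal sketch: adding one strategy shifts the pool mean and standard deviation, so every existing z-score moves.

```python
import statistics

def rescore_pool(raw_scores: dict) -> dict:
    """Re-normalise every strategy's score against the full pool.

    Each addition to the pool changes the mean and stdev, so all
    previously computed z-scores shift, hence the ranking instability.
    """
    values = list(raw_scores.values())
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return {name: 0.0 for name in raw_scores}
    return {name: (score - mean) / stdev for name, score in raw_scores.items()}
```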

Bridging TradFi evaluation logic with on-chain execution via xStocks required careful interface design: the Eva validation output needed to translate cleanly into actionable execution parameters on Solana.


Accomplishments We're Proud Of

We're proud of producing a framework that is genuinely principled rather than heuristic. The separation of feasibility from optimisation (admissibility gates before scoring) mirrors how sophisticated capital allocators actually think, and we formalised it rigorously enough to publish as a thesis. The sigmoid scoring functional has mathematically motivated properties (bounded output, diminishing marginal reward, discrimination concentrated at economically meaningful thresholds) rather than being an arbitrary weighted average.

We're also proud of Eva's triage design. Outright rejection is rare; the clarification loop treats submitters as collaborators rather than suspects, guiding them toward compliance rather than simply gatekeeping.

As far as we know, the end-to-end pipeline from free-text pitch submission to a validated, scored, capital-allocated, and on-chain-executable strategy is genuinely novel.


What We Learned

Building this forced us to confront how many implicit assumptions live inside seemingly simple metrics. The Sharpe ratio is not just "return divided by volatility" in practice; its interpretation depends entirely on the benchmark period, market regime, and comparison set. Designing a system that makes these assumptions explicit and consistent taught us a lot about the gap between academic finance and operational evaluation.

We also learned that multi-agent systems need careful output schema design. When two agents run concurrently and their outputs must merge into a unified flag set, even small inconsistencies in output format cause integration failures. Structured JSON with strict severity enumerations (low/medium/high/critical) was essential.
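A sketch of the schema discipline that made the merge reliable, assuming flags arrive as simple dicts (the real agents emit fuller JSON verdicts):

```python
from enum import Enum

class Severity(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

def merge_flags(fabrication: list, coding: list) -> list:
    """Merge both agents' flags into one set, rejecting malformed entries."""
    merged = []
    for flag in fabrication + coding:
        severity = Severity(flag["severity"])  # ValueError if outside the enum
        merged.append({"issue": flag["issue"], "severity": severity.value})
    # Most severe first, using the enum's definition order.
    return sorted(merged,
                  key=lambda f: list(Severity).index(Severity(f["severity"])),
                  reverse=True)
```

Rejecting anything outside the strict enumeration at the merge boundary turned silent format drift between the two agents into loud, fixable errors.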

On the product side: the three-outcome triage (blocked / needs clarification / advanced) was not our first design. We started with binary accept/reject and found it unworkable. The clarification loop emerged from recognising that most methodological problems are fixable, and that a platform aiming to unlock dark talent shouldn't punish mistakes the same way it punishes fraud.


What's Next for OpenQuant?

OpenQuant's roadmap has four stages.

First, we will pick one niche and excel at it: US equity strategies.

Second, we will source talent through Kaggle, universities, Discord, Twitter, and LinkedIn, all communities where exceptional quantitative capability exists outside institutional barriers.

Third, we will start small: fund Eva-validated trades with modest capital allocations and build a track record of return on capital.

Fourth, we will scale: raise a dedicated fund to deploy larger allocations into top-scoring strategies, and offer Eva as an agentic product via paid.ai.

On the technical roadmap, the most exciting direction is enabling AI agent bots on Solana to call Eva directly, so that autonomous trading agents can have their strategies validated and funded programmatically, without human intermediation. This would make OpenQuant not just a platform for human quants, but the infrastructure for the emerging ecosystem of agentic finance.

Pitch Deck

[Demo (may not function due to resource constraints)] http://207.180.193.218/app/
