Inspiration
We were reading a peer-reviewed paper from researchers at IMDEA Networks and the Oxford Internet Institute, published in August 2025. They analyzed one year of Polymarket data and found $40 million in arbitrage profit sitting there, exploitable, on a single platform. Not theoretical. Actually extracted by a small number of sophisticated bots.

That number stopped us cold. Because the opportunity isn't just on one platform - it's in the gaps between platforms. Polymarket, Kalshi, and Manifold are all pricing the same questions about the same world, completely independently, with no shared infrastructure connecting them. No indexes. No cross-platform tooling. No retail access to any of it.

Every other asset class has infrastructure. Stocks have ETFs. Bonds have indexes. Crypto has DEX aggregators. Prediction markets - one of the most information-dense financial instruments alive - have nothing. We decided to build it.
What it does
3 platforms. 7,000+ markets. 4 AI agents. One place.
- You open the app and type a thesis - plain English, whatever you actually believe about the world. "AI regulation tightens before the end of 2025." Hit enter.
- Gemini parses your input, fans out across Polymarket, Kalshi, and Manifold simultaneously, embeds thousands of market questions as vectors, and returns every market that prices your thesis - ranked by relevance and confidence.
- One click turns those results into a basket. Your index fund, built in seconds.
- Your basket tracks live prices across all three platforms. If position weights drift more than 5% from your target allocation, an agent detects it and generates specific buy/sell instructions to correct it. NAV drift monitoring, the kind of thing a quant fund has a whole team doing manually.
- An arbitrage scanner runs continuously in the background, comparing semantically identical questions across platforms. When it finds a spread - the same underlying bet priced at 38% on Kalshi and 61% on Polymarket - an agent validates it, scores it for execution risk, and surfaces it as a real-time alert.
When the YES prices of a set of mutually exclusive, exhaustive conditions don't sum to 1, the guaranteed profit per unit is:
$$\left|1 - \sum_{i} p_i\right|$$
where $p_i$ is the price of each YES token across the dependent conditions - go long (buy every YES leg) when the sum is below 1, short when it's above.
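A quick numeric check of the formula above, using hypothetical prices for three mutually exclusive conditions:

```javascript
// Guaranteed per-unit arbitrage profit when a set of mutually
// exclusive, exhaustive YES prices doesn't sum to 1.
// Prices below are made up for illustration.
function arbProfit(yesPrices) {
  const total = yesPrices.reduce((sum, p) => sum + p, 0);
  return Math.abs(1 - total);
}

// Three dependent conditions whose YES prices sum to 0.92:
// buying every YES leg costs 0.92 and pays out exactly 1,
// locking in 0.08 per unit regardless of the outcome.
const prices = [0.38, 0.31, 0.23];
console.log(arbProfit(prices).toFixed(2)); // 0.08
```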
- Positions are minted as SPL tokens on Solana. Near-zero fees, instant settlement, non-custodial.
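The 5% drift check described above reduces to comparing live weights against targets and emitting corrective trades. A minimal sketch - names and data shapes here are illustrative, not the actual Slicefund code:

```javascript
// Sketch of NAV drift monitoring: compare each position's live
// weight to its target and emit buy/sell instructions when the
// drift exceeds the threshold. Shapes are hypothetical.
const DRIFT_THRESHOLD = 0.05;

function rebalanceInstructions(positions) {
  // positions: [{ market, targetWeight, value }]
  const nav = positions.reduce((sum, p) => sum + p.value, 0);
  return positions
    .map((p) => {
      const liveWeight = p.value / nav;
      const drift = liveWeight - p.targetWeight;
      // Negative drift = underweight = buy; positive = sell.
      return { market: p.market, drift, amount: -drift * nav };
    })
    .filter((p) => Math.abs(p.drift) > DRIFT_THRESHOLD)
    .map((p) => ({
      market: p.market,
      side: p.amount > 0 ? "buy" : "sell",
      amount: Math.abs(p.amount),
    }));
}
```

For a two-market basket targeted 50/50 that has drifted to 60/40, this returns a sell of roughly 10% of NAV in the overweight market and a matching buy in the underweight one.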
Before Slicefund: open three browser tabs, manually search each platform, copy prices into a spreadsheet, try to figure out if two differently-worded questions are actually the same bet, miss the arb window while you're doing all of that. After Slicefund: type one sentence, get a portfolio, get alerted when the spread is live, execute.
How we built it
The stack is React + Vite on the frontend, Express.js on the backend, and three AI layers in between.
- The AI intelligence layer is the core of the product. We built four persistent Backboard agents (ThesisResearcher, ArbitrageScanner, IndexRebalancer, and AlertDispatcher) each with a stateful thread that persists across sessions. These aren't chatbots that forget everything after each call. They remember what they've seen, recognize patterns over time, and coordinate with each other. We implemented automatic thread and assistant recreation on 404 errors so agents recover gracefully when sessions expire.
- The semantic matching layer is what makes cross-platform arb detection actually work. We use Gemini 2.5 Flash-Lite with function calling to map natural language theses to structured market metadata across all three platforms. This is the exact LLM-plus-embeddings methodology independently validated by the Oxford/IMDEA paper, implemented in production.
- The backend is a 2,400+ line Express server handling the full analysis pipeline - thesis mapping, arb scanning, basket rebalancing, market data aggregation from three APIs, and agent orchestration. File-based caching with 1-hour TTL for trending markets, in-memory relationship cache for market pairs.
- Auth0 handles authentication. Solana handles settlement.
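The embedding side of the matching layer boils down to ranking markets by cosine similarity against the thesis vector. A minimal sketch, with tiny made-up 3-dimensional vectors standing in for real Gemini embedding output:

```javascript
// Cosine similarity between a thesis embedding and market embeddings.
// Vectors here are made-up stand-ins for real embedding output.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank candidate markets by similarity to the thesis, highest first.
function rankMarkets(thesisVec, markets) {
  return markets
    .map((m) => ({ ...m, score: cosine(thesisVec, m.vec) }))
    .sort((x, y) => y.score - x.score);
}

const thesis = [0.9, 0.1, 0.2]; // "AI regulation tightens before end of 2025"
const markets = [
  { question: "Will the EU amend its AI act in 2025?", vec: [0.8, 0.2, 0.1] },
  { question: "Will BTC close above $100k?", vec: [0.1, 0.9, 0.4] },
];
console.log(rankMarkets(thesis, markets)[0].question);
// "Will the EU amend its AI act in 2025?"
```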
Challenges we ran into
- Credits. Mid-build our Backboard API credits ran dry - four agents designed, system prompts written, assistant IDs saved, and we couldn't fire a single call. We built and validated the entire agent architecture blind, writing error recovery logic and thread recreation systems without being able to confirm anything actually worked.
- Prompt engineering. Getting the thesis mapper to return genuinely relevant markets - not just keyword matches - took serious iteration. The difference between a market that mentions your keywords and one that actually prices your thesis is subtle. The arb scorer had the same problem - early versions flagged every spread above 4% as urgent regardless of liquidity or execution risk.
- Making it one product. Four features worked independently for a long time. Stitching thesis search, basket management, arb scanning, and rebalancing into a single coherent user flow - where the output of one step feeds naturally into the next - took real product thinking on top of the engineering.
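One way to express the fix for the arb scorer - a hypothetical scoring function (not our production logic) that discounts the raw spread by the depth of the thinner book instead of flagging everything above a flat 4%:

```javascript
// Hypothetical arb score: raw spread discounted by how shallow the
// thinner side of the trade is. Thresholds are illustrative.
function scoreArb({ spread, liquidityA, liquidityB }) {
  // Executable size is capped by the thinner book.
  const depth = Math.min(liquidityA, liquidityB);
  // Saturating liquidity factor in [0, 1): $10k depth scores 0.5.
  const liquidityFactor = depth / (depth + 10_000);
  return spread * liquidityFactor;
}

// A fat spread on an illiquid pair can score below a modest
// spread on a deep one.
const thin = scoreArb({ spread: 0.23, liquidityA: 500, liquidityB: 800 });
const deep = scoreArb({ spread: 0.06, liquidityA: 90_000, liquidityB: 120_000 });
console.log(thin < deep); // true
```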
Accomplishments that we're proud of
- The arb scanner fires on real live opportunities. Not mocked data, but real price discrepancies, real markets, real platforms, detected and scored in real time
- Four persistent AI agents with stateful memory coordinating across the full pipeline, with graceful failure recovery and cross-session context
- Seamless end-to-end flow - type a thesis, get a basket, see arb alerts, trigger a rebalance, execute on-chain
- Independently converged on the exact methodology from a peer-reviewed Oxford/IMDEA paper - LLM semantic dependency detection plus embedding-based market matching - mid-build, without knowing it
What we learned
- Prediction market inefficiency is structural, not accidental - platforms have no incentive to talk to each other, so the gaps persist
- The hardest part isn't the AI. It's the data. Every platform has a different schema, different probability representations, different resolution criteria - normalization is unglamorous and harder than it looks
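The probability-representation problem can be sketched as a single normalization chokepoint. The per-platform conventions below are illustrative assumptions, not a spec of any platform's API:

```javascript
// Hypothetical normalizer: different platform price representations
// mapped to one canonical 0-1 probability. The conventions below are
// assumptions for illustration, not documented API behavior.
function normalizeProbability(platform, raw) {
  switch (platform) {
    case "polymarket": // assumed: already a 0-1 decimal, e.g. 0.61
      return raw;
    case "kalshi": // assumed: priced in cents, e.g. 38 means $0.38
      return raw / 100;
    case "manifold": // assumed: a percentage, e.g. 61 means 61%
      return raw / 100;
    default:
      throw new Error(`unknown platform: ${platform}`);
  }
}

console.log(normalizeProbability("kalshi", 38)); // 0.38
console.log(normalizeProbability("polymarket", 0.61)); // 0.61
```

Every downstream feature - matching, arb detection, basket NAV - reads only the normalized value, so platform quirks stay quarantined in this one function.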
What's next for Slicefund
- Social layer. Public baskets, leaderboard of top-performing theses, community prediction library
- More power for existing users. Custom weighting, automated rebalancing, stop-loss/take-profit on basket positions, historical performance tracking
- API access for institutional players and third-party developers to build on top of the infrastructure

The platform we have now is the foundation. A few more months and it's a serious financial product.
Built With
- auth0
- backboard
- express.js
- gemini
- node.js
- phantom
- react
- solana
- supabase
- vite
