Live (InsForge): https://55e7ng49.insforge.site/
Structured idea diligence before commit: market, economics, regulation, competition, distribution, community, and customer acquisition—materialized as typed, comparable artifacts, not a single free-form completion.
ap13 is not a monolithic “ask the model everything” interface. It is a decomposed inference stack whose value is coverage × contract × skepticism:
- Coverage: Seven orthogonal research tracks, each with a dedicated question set derived from the specific idea (not a static questionnaire).
- Contract: Every stage emits machine-checkable structure (JSON schemas, per-track trackData, citation-indexed summaries) so the client can render charts, radars, and source-backed text without ad-hoc parsing.
- Skepticism: Prompts encode epistemic constraints: conflict reporting, sponsor-bias awareness, stage-consistent projections, calibrated signal scores, and a synthesis pass that stress-tests optimism rather than summarizing it.
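The per-track contract can be sketched as TypeScript types. All names and fields below (ExecutorResult, Signal, the trackData shape) are illustrative assumptions, not the actual schema defined in web/src/lib/prompts.ts:

```typescript
// Hypothetical shape of a single executor completion. The real contract lives
// in web/src/lib/prompts.ts; this sketch only shows why the output is
// machine-checkable rather than free-form text.
interface Citation {
  index: number; // referenced from the summary via <cite index="N"> hooks
  url: string;
  title: string;
}

interface Signal {
  name: string;
  score: number; // calibrated, e.g. 0-100
  rationale: string;
}

interface ExecutorResult {
  track: string;                              // e.g. "market", "regulation"
  summary: string;                            // narrative with <cite index="N"> hooks
  signals: [Signal, Signal, Signal, Signal];  // exactly four calibrated signals
  sources: Citation[];
  trackData: Record<string, unknown>;         // track-specific payload (time series, radar, rows)
}

// A minimal well-formed instance the client could render without ad-hoc parsing.
const sample: ExecutorResult = {
  track: "market",
  summary: 'TAM grows steadily<cite index="0">, but segment definitions conflict<cite index="1">.',
  signals: [
    { name: "demand", score: 62, rationale: "Consistent search-volume growth." },
    { name: "timing", score: 55, rationale: "Stage-consistent adoption curve." },
    { name: "moat", score: 40, rationale: "Low switching costs reported." },
    { name: "unit economics", score: 48, rationale: "Margins unproven at this stage." },
  ],
  sources: [
    { index: 0, url: "https://example.com/report", title: "Market report" },
    { index: 1, url: "https://example.com/critique", title: "Segment critique" },
  ],
  trackData: { series: [{ year: 2024, tamUsd: 1.2e9 }] },
};
```

Because the schema is typed, a failed validation on one track can be rejected in isolation instead of poisoning the whole run.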
The differentiator is diligence-shaped output under orchestration, not conversational fluency.
The system implements a multi-stage, role-bounded workflow—not one prompt pretending to be planner, researcher, and UI formatter at once.
- Planner (specialist A): Classifies the thesis and emits falsifiable, web-search-groundable questions per track, explicitly biased toward contradiction, segment-definition risk, and bear cases.
- Executors (specialists B₁…B₇, fan-out): For each track, an isolated completion receives one scoped question plus shared context. Executors run in parallel (Promise.allSettled in the client orchestrator; independent API routes server-side). Each completion is constrained to a strict executor schema: a narrative summary with <cite index="N"> hooks, four calibrated signals, a source list, and track-specific structured payloads (e.g. market time series with stage-realism rules, competition radar dimensions, regulatory rows).
- Synthesiser (specialist C, fan-in): Consumes the full planner object plus all executor payloads and produces a second-order verdict: GO / CAUTION / NO-GO with bias correction, damped aggregates when upstream charts disagree with stated stage, and concrete validation steps.
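The fan-out/fan-in step can be sketched as follows. runExecutor and the track names here are assumptions for illustration; the actual executors are API routes under web/src/app/api/validate/*:

```typescript
// Sketch of the orchestrator's fan-out. Promise.allSettled localizes failure:
// one rejected track does not collapse the entire run.
type TrackName =
  | "market" | "economics" | "regulation" | "competition"
  | "distribution" | "community" | "acquisition";

// Hypothetical per-track call; stands in for one isolated executor completion.
async function runExecutor(track: TrackName, question: string): Promise<string> {
  if (!question) throw new Error(`empty question for ${track}`);
  return `${track}: answer to "${question}"`;
}

async function fanOut(questions: Partial<Record<TrackName, string>>) {
  const tracks = Object.keys(questions) as TrackName[];
  const settled = await Promise.allSettled(
    tracks.map((t) => runExecutor(t, questions[t]!))
  );
  // Fan-in: successes go to the synthesiser; failures are reported per track.
  const ok = settled.flatMap((r) => (r.status === "fulfilled" ? [r.value] : []));
  const failed = tracks.filter((_, i) => settled[i].status === "rejected");
  return { ok, failed };
}
```

The synthesiser then consumes `ok` and can flag `failed` tracks explicitly instead of pretending they were answered.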
This is agentic in the engineering sense: bounded responsibility per call, explicit handoff artifacts, orchestrated fan-out and fan-in, and downstream critique of upstream narratives. Failure modes are localized (per-track rejection without collapsing the entire run).
Language-model calls are centralized through @insforge/sdk (createClient, server mode): a single gateway for chat completions and optional web-backed context on executor paths when the gateway exposes it. Transient failures trigger retriable classification (rate limits, 5xx, network) with exponential backoff; the executor path falls back from web-augmented to non-web completion if the augmented route fails—preserving availability without silently equating “no search” with “no answer.”
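The retry and fallback policy can be sketched like this. The classifier, backoff constants, and function names are assumptions for illustration, not the @insforge/sdk API:

```typescript
// Classify transient failures worth retrying (rate limits, 5xx, network).
// The patterns here are illustrative, not the gateway's actual error surface.
function isRetriable(err: unknown): boolean {
  const msg = err instanceof Error ? err.message : String(err);
  return /429|rate limit|5\d\d|ECONNRESET|ETIMEDOUT/i.test(msg);
}

// Exponential backoff: 250ms, 500ms, 1000ms, ... up to maxAttempts.
async function withBackoff<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts || !isRetriable(err)) throw err;
      await new Promise((r) => setTimeout(r, 250 * 2 ** attempt));
    }
  }
}

// Executor path: try the web-augmented completion, fall back to a plain one,
// and flag which path answered so "no search" is never silently "no answer".
async function executorCompletion(
  webAugmented: () => Promise<string>,
  plain: () => Promise<string>,
): Promise<{ text: string; webBacked: boolean }> {
  try {
    return { text: await withBackoff(webAugmented), webBacked: true };
  } catch {
    return { text: await withBackoff(plain), webBacked: false };
  }
}
```

Carrying the webBacked flag downstream lets the synthesiser discount unsourced answers rather than treating them as grounded.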
- App: Next.js (App Router) + React + TypeScript, under web/.
- API: Route handlers under web/src/app/api/validate/* (planner, executor, synthesiser).
- Prompts & contracts: web/src/lib/prompts.ts (server-only system prompts and schema rules).
- InsForge client wrapper: web/src/lib/validate-insforge-ai.ts.
Local development: from web/, install dependencies and run npm run dev (configure INSFORGE_ANON_KEY and related env in web/.env.local).
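A typical setup sequence (exact environment variables beyond INSFORGE_ANON_KEY depend on your InsForge project):

```shell
cd web
npm install
# .env.local must define INSFORGE_ANON_KEY and any related InsForge variables
printf 'INSFORGE_ANON_KEY=%s\n' "<your-anon-key>" >> .env.local
npm run dev
```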
© 2026 ap13. All rights reserved.