Inspiration

The gap between retail traders and institutional investors is widening, not because of access to data, but because of access to process. Hedge funds employ armies of analysts to read 10-Ks and build DCF models, plus rigorous "Red Teams" that tear apart investment theses before capital is deployed.

We wanted to democratize this rigorous workflow. However, standard LLMs suffer from a "Yes Man" bias—they agree with the user's premise. We were inspired by the concept of Adversarial Thinking and Chain-of-Thought reasoning. What if we built an agent that doesn't just answer a question, but actively tries to disprove its own initial conclusion to ensure higher conviction?

What it does

VeridionAlpha is a "Marathon Agent" that compresses a 72-hour institutional research cycle into a real-time interactive experience.

  1. Autonomous Ingestion: It simulates parsing SEC filings (10-Ks), earnings transcripts, and real-time news to build a knowledge base.
  2. Financial Modeling: It constructs financial projections, calculating key metrics like Free Cash Flow (FCF) margins and revenue growth rates.
  3. Self-Correction Engine: Uniquely, VeridionAlpha employs a "Devil's Advocate" loop. It explicitly tags thoughts with <critique> to challenge its assumptions (e.g., "I assumed strong pricing power, but competitor analysis suggests a race to the bottom").
  4. Thesis Generation: It synthesizes these conflicting data points into a high-conviction Investment Memo with a clear rating, price target, and confidence score.
  5. Visual Dashboard: It renders interactive charts and allows users to export the full research report as a PDF, Markdown, or JSON.
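The metric math behind step 2 can be sketched in a few lines. This is an illustrative simplification, not the app's actual modeling code; the `YearFinancials` shape and sample figures are invented for the example.

```typescript
// Hypothetical shape for one fiscal year of parsed financials.
interface YearFinancials {
  revenue: number;        // total revenue for the period
  freeCashFlow: number;   // operating cash flow minus capex
}

// FCF margin: the fraction of revenue that converts to free cash flow.
function fcfMargin(y: YearFinancials): number {
  return y.freeCashFlow / y.revenue;
}

// Year-over-year revenue growth between two consecutive periods.
function revenueGrowth(prev: YearFinancials, curr: YearFinancials): number {
  return (curr.revenue - prev.revenue) / prev.revenue;
}

const fy23: YearFinancials = { revenue: 100, freeCashFlow: 20 };
const fy24: YearFinancials = { revenue: 125, freeCashFlow: 30 };

console.log(fcfMargin(fy24));           // 0.24
console.log(revenueGrowth(fy23, fy24)); // 0.25
```

The agent emits these metrics as JSON, which the dashboard then charts directly.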

How we built it

We built VeridionAlpha using React and TypeScript for a robust, type-safe frontend, styled with TailwindCSS to evoke a professional "Bloomberg Terminal" aesthetic.

  • AI Core: We utilized the Google Gemini API (gemini-3-flash-preview and gemini-2.5), leveraging their massive context windows to handle complex financial reasoning.
  • Prompt Engineering: We implemented a system instruction set that forces the model to use specific XML-like tags (<thought>, <critique>, <action>) in its stream. This allows us to parse the "internal monologue" and display it in a Matrix-style terminal log while the report generates in the background.
  • Data Visualization: We integrated Recharts to turn the JSON financial data returned by the LLM into dynamic Area and Bar charts.
  • Export Engine: We used html2canvas and jsPDF to take high-fidelity snapshots of the React DOM and compile them into a paginated, print-ready PDF report.
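The tag-parsing idea can be sketched as a small extractor over the model's output. This is a minimal illustration, assuming the tag set named above (`<thought>`, `<critique>`, `<action>`); the real implementation parses the stream incrementally rather than a complete string.

```typescript
// One entry in the agent's visible "internal monologue".
type TraceEntry = { kind: "thought" | "critique" | "action"; text: string };

// Extract all tagged segments in order. The backreference \1 ensures each
// opening tag is matched with its own closing tag.
function parseTrace(raw: string): TraceEntry[] {
  const entries: TraceEntry[] = [];
  const re = /<(thought|critique|action)>([\s\S]*?)<\/\1>/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(raw)) !== null) {
    entries.push({ kind: m[1] as TraceEntry["kind"], text: m[2].trim() });
  }
  return entries;
}

const sample =
  "<thought>Margins look durable.</thought>" +
  "<critique>Competitor pricing suggests a race to the bottom.</critique>" +
  "<action>Lower the FCF margin assumption.</action>";

console.log(parseTrace(sample).map((e) => e.kind)); // ["thought", "critique", "action"]
```

Each `critique` entry is what surfaces in the terminal log as a self-correction line.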

Challenges we ran into

  • The "Yes Man" Problem: Early versions of the agent were too agreeable. We had to aggressively tune the system prompts to reward skepticism and "Contra-Evidence" seeking.
  • PDF Layout: Generating a clean PDF from a dynamic web view is notoriously difficult. Handling page breaks, dark mode backgrounds, and chart rendering required a custom implementation using canvas scaling and strict CSS overrides during capture.
  • Visualizing "Thinking": We wanted the user to feel the work being done. Simply showing a loading spinner wasn't enough. We built a log parser that extracts the AI's "thoughts" in real-time, creating a transparent view into the agent's logic before the final report appears.

Accomplishments that we're proud of

  • The "Critique" Loop: Seeing the logs show the AI correcting itself in real-time (e.g., "Self-Correction: Adjusting growth estimates down due to macro headwinds") feels like magic and builds genuine user trust.
  • Aesthetics: The UI feels like a premium, specialized tool rather than just another chatbot wrapper.
  • Robust Exports: The ability to download a formatted, multi-page PDF makes the output immediately shareable and professional.

What we learned

  • Process over Prediction: Prompting an AI to follow a strict process (Ingest -> Model -> Critique -> Synthesize) yields significantly better results than asking for a prediction directly.
  • User Psychology: Showing the "work" (the logs and skeleton loaders) makes users more patient and increases their confidence in the final output. They trust the answer because they saw the derivation.

What's next for VeridionAlpha

  • Live Brokerage Integration: Connecting to APIs like Alpaca or Robinhood to execute the trade structure generated by the agent.
  • Multi-Agent Swarm: Splitting the single agent into three distinct personas: a Fundamental Analyst, a Technical Trader, and a Risk Manager, who debate each other in a chat interface before reaching a consensus.
  • Portfolio Awareness: Allowing the agent to analyze how a new ticker fits into an existing portfolio's risk profile.

Built With

react, typescript, tailwindcss, google-gemini-api, recharts, html2canvas, jspdf