Problem Statement
When multiple agents analyse large volumes of news and social data, it becomes unclear which information they used, how strongly it influenced their reasoning, and how final insights were formed.
Solution
Trace by the Graph, Observe by the Claim, Explain by the Weights. (If the description below feels abstract, skim the Example section first, then come back to the How.)
How the System Works
We developed a multi-agent system that performs sentiment analysis on the stocks in a portfolio.
1) Ingest Data and Create Claims
- The system reads large batches of headlines and social posts. Each agent converts raw text into short claims and lists the exact IDs of the data points it used.
- Every data point receives a weight. The heuristics for this POC: the news agent gives higher weight to better-quality publishers and more recent articles; the social agent gives higher weight to posts with more engagement.
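The weighting heuristics above can be sketched as follows. The publisher tiers and the recency-decay form here are illustrative assumptions, not the actual POC configuration:

```python
# A minimal sketch of the two weighting heuristics. Publisher tiers and
# the decay formula are assumed for illustration, not the real config.
from datetime import date

PUBLISHER_QUALITY = {"Reuters": 1.0, "Bloomberg": 1.0, "Forbes": 0.8}  # assumed tiers

def news_weight(publisher: str, published: date, today: date) -> float:
    """News agent: better publishers and fresher articles get higher weight."""
    quality = PUBLISHER_QUALITY.get(publisher, 0.5)          # unknown outlets get a floor
    recency = 1.0 / (1.0 + (today - published).days)         # simple hyperbolic decay
    return quality * recency

def social_weight(engagement: int, max_engagement: int) -> float:
    """Social agent: weight is engagement normalised against the batch maximum."""
    return engagement / max_engagement if max_engagement else 0.0
```

A fresh article from a mid-tier outlet can outrank a day-old one from a top-tier outlet under this scheme, which matches the "more recent gets higher weight" behaviour in the example below.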
2) Build the Claim Graph
Claims are grouped by asset or topic. Downstream agents receive only these claims, not the raw data. They combine or refine them into higher level insights and cite which earlier claims influenced them.
3) Trace, Observe, Explain
Every step cites its inputs, creating a complete reasoning path. You can trace any final insight back to the claims that formed it and then to the underlying data. The system also shows how strongly each data source contributed.
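The trace step can be sketched as a recursive walk over the claim graph. The graph shape (claim ID mapped to cited IDs with weights) and the F1 edge weights below are illustrative assumptions; the N1/N2 weights mirror the example later in this post:

```python
# Sketch: the graph maps each claim ID to {cited ID: weight}; any ID
# absent from the graph is treated as a raw data point (headline/post).
def trace(graph: dict[str, dict[str, float]],
          claim_id: str, weight: float = 1.0) -> dict[str, float]:
    """Walk citations back to raw data, multiplying edge weights along each path."""
    if claim_id not in graph:                     # raw headline or post
        return {claim_id: weight}
    contributions: dict[str, float] = {}
    for cited_id, edge_weight in graph[claim_id].items():
        for data_id, w in trace(graph, cited_id, weight * edge_weight).items():
            contributions[data_id] = contributions.get(data_id, 0.0) + w
    return contributions

graph = {
    "N1": {"H1": 0.6, "H2": 0.4},
    "N2": {"H3": 1.0},
    "F1": {"N1": 0.5, "N2": 0.5},   # F1's edge weights assumed for illustration
}
# trace(graph, "F1") attributes F1 to raw sources: H1 gets 0.5 * 0.6 = 0.30
```

This is how "how strongly each data source contributed" falls out of the citations: contributions are just products of edge weights, summed over all paths to the same source.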
4) Chat Based Exploration
A chatbot lets you ask why the system reached a conclusion or to show the evidence. It uses the same claim graph to answer with fully traceable reasoning.
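A minimal sketch of how such a chatbot can assemble a grounded answer from the same graph (the function name, graph shape, and output format here are illustrative, not the actual implementation):

```python
# Hypothetical evidence listing for a "why" question: walk the claim
# graph and list every claim and raw data ID behind a conclusion.
def answer_why(graph: dict[str, dict[str, float]],
               texts: dict[str, str], claim_id: str, depth: int = 0) -> str:
    """Return the claim's text followed by its indented evidence chain."""
    indent = "  " * depth
    lines = [indent + texts.get(claim_id, claim_id)]
    for cited_id in graph.get(claim_id, {}):
        if cited_id in graph:                    # cited item is another claim
            lines.append(answer_why(graph, texts, cited_id, depth + 1))
        else:                                    # cited item is raw data
            lines.append(indent + "  evidence: " + cited_id)
    return "\n".join(lines)
```

Because the answer is generated by traversing citations rather than free-form generation, every line in the reply is backed by a concrete claim or data-point ID.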
Example
Raw Data
**Each news headline and tweet is assigned an ID.**
H1: “Nvidia data center demand continues to surge.” {publisher: Forbes, date: 16/11/2025}
H2: “AI demand boosts data center sales.” {publisher: Reuters, date: 15/11/2025}
H3: “Analysts expect Nvidia margins to improve next quarter.” {publisher: Bloomberg, date: 14/11/2025}
S1: Tweet: “NVDA still strong, going kind of crazy; hyperscaler orders not slowing.” {engagement: 50k}
S2: Tweet: “I am going all in on Nvidia because I believe in it!!!” {engagement: 90k}
Step 1: News Agent Creates Claims
Claim N1: “Nvidia's stock looks strong because data center demand remains strong.” Cites {H1, H2} with weights {H1: 0.60, H2: 0.40}; H1 gets the higher weight because it is the more recent article.
Claim N2: “Margin expectations for Nvidia remain positive.” Cites {H3} with {H3:1.0}.
Step 2: Social Agent Creates Claims
- Claim S1: “Social sentiment supports strong Nvidia demand.” Cites {S1, S2} with a higher weight on S2 because of its higher engagement.
Step 3: Aggregation Agent
Receives N1, N2, S1 and forms a higher level insight.
- Final Claim F1: “Overall outlook for Nvidia is positive based on strong demand and improving margins.” Cites {N1, N2, S1}
Step 4: Explainability Layer
You can trace F1 to N1 to H1 and H2, F1 to N2 to H3, and F1 to S1 to tweets S1 and S2. The UI renders this as a graph, with bolder edges for higher weights, so you can see at a glance how much each data source contributed to a claim.
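Edge weights compose along these paths. F1's own edge weights are not specified in the example, so assuming it weights N1, N2, and S1 equally, headline H1's share of F1 works out as:

```python
# Hypothetical contribution of headline H1 to final claim F1.
# F1's edge weights are not given above, so equal thirds are assumed.
f1_edges = {"N1": 1 / 3, "N2": 1 / 3, "S1": 1 / 3}
n1_edges = {"H1": 0.60, "H2": 0.40}   # weights stated in Step 1

h1_contribution = f1_edges["N1"] * n1_edges["H1"]
print(round(h1_contribution, 2))  # -> 0.2
```

In the UI this would appear as a moderately bold edge from H1 into the chain behind F1.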
Step 5: Chatbot Interaction
Question: “Why is the Nvidia outlook positive?” Answer: “Because claims N1 (from H1 and H2), N2 (from H3), and S1 (from tweets S1 and S2) all indicate rising demand and improving margins.”
Features
- Alternative to LangSmith
- Interactive graph visualization of claims and their relationships
- Real-time filtering by asset, sentiment, and source
- AI-powered chat interface for querying claims
- Citation tracking and highlighting
- Support for multiple data sources (news, social media)