Inspiration
With approximately 750,000 pending cases, the current jury system is overwhelmed. Each case can take days and requires a complete jury, making the process both time-intensive and costly. Travel and accommodation expenses, often around $200 per person per day, create additional financial strain and accessibility barriers. Proceedings typically span 2–3 days, and errors such as missing evidence or overlooked perspectives can lead to case recalls. As a result, many cases face delayed resolutions, highlighting the urgent need for a more efficient and scalable approach.
What it does
We simulate a structured, multi-perspective deliberation process using autonomous AI agents. Diverse personas are preprocessed based on geographic, cultural, or ideological backgrounds and grounded in provided case files, testimonies, and relevant documents. Each agent embodies one of these personas and actively researches, formulates arguments, and defends a specific stance using available tools and sources. The agents engage in structured debate, challenging one another’s claims, exposing weaknesses, and refining their positions through adversarial analysis.
As the debate unfolds, an unbiased model evaluates the strength, coherence, and evidential support of each argument, shifting alignment as perspectives evolve. This process surfaces blind spots, strengthens viable ideas, and filters out weaker reasoning. An arbiter agent then synthesizes the discussion, distilling the most well-developed solutions, outlining key trade-offs, and summarizing major pros and cons. The final output is a clear, structured evaluation provided to a human decision-maker, who retains full authority while benefiting from a thoroughly stress-tested, multi-dimensional analysis.
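The deliberation loop described above can be sketched roughly as follows. This is a minimal illustration, not our production code: the persona names, the `score_argument` evaluator, and the stubbed `argue` method are all hypothetical stand-ins for real LLM calls.

```python
from dataclasses import dataclass

@dataclass
class PersonaAgent:
    """One juror persona with a stance it researches and defends."""
    name: str
    stance: str
    argument: str = ""

    def argue(self, case_file: str) -> str:
        # In the real system this prompts an LLM with the persona's
        # background, the case file, and prior rebuttals; stubbed here.
        self.argument = f"{self.name} ({self.stance}): analysis of {case_file}"
        return self.argument

def score_argument(argument: str) -> float:
    # Placeholder for the unbiased evaluator model, which would rate
    # strength, coherence, and evidential support. Stubbed as a toy score.
    return len(argument) / 100.0

def deliberate(agents: list[PersonaAgent], case_file: str, rounds: int = 2) -> dict:
    # Each round, every agent restates and refines its position.
    for _ in range(rounds):
        for agent in agents:
            agent.argue(case_file)
    # The evaluator scores each final argument.
    scores = {a.name: score_argument(a.argument) for a in agents}
    # The arbiter synthesizes, surfacing the best-supported position
    # for the human decision-maker.
    best = max(agents, key=lambda a: scores[a.name])
    return {"scores": scores, "synthesis": f"Strongest position: {best.stance}"}
```

In the full system, `argue` would also issue rebuttals against the other agents' claims, and the arbiter's synthesis would include trade-offs and pros/cons rather than a single winner.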
Accomplishments that we're proud of
We built a working end-to-end system that judges could use to streamline jury deliberation, from persona generation through structured debate to a final synthesized report.
What we learned
We gained significant insight into how web agents can autonomously gather, synthesize, and leverage real-time information to strengthen structured reasoning processes. By integrating tools like search APIs and document retrieval systems, we saw how agents move beyond static knowledge and actively research evidence tailored to their specific stance. This highlighted the practical power of web agents in grounding arguments in up-to-date, relevant information rather than relying solely on pretrained knowledge.
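The stance-grounded research described above can be illustrated with a small sketch. The `search` function here is a hypothetical stand-in for a real search API or document-retrieval tool; it queries an in-memory corpus so the example is self-contained and runnable.

```python
# Toy corpus standing in for case files and testimonies retrieved
# by a document-retrieval system.
CORPUS = {
    "testimony_a.txt": "Witness A states the defendant was present at the scene.",
    "testimony_b.txt": "Witness B disputes the timeline of events that evening.",
}

def search(query: str) -> list[str]:
    # Naive keyword match standing in for a search API call; a real
    # agent would hit a web search or vector-retrieval endpoint here.
    words = query.lower().split()
    return [doc for doc, text in CORPUS.items()
            if any(word in text.lower() for word in words)]

def research_stance(stance: str, query: str) -> dict:
    # The agent gathers evidence tailored to its specific stance,
    # grounding its argument in retrieved sources rather than
    # pretrained knowledge alone.
    return {"stance": stance, "evidence": search(query)}
```

The key design point is that each agent issues its own queries, so the evidence it cites is specific to the position it is defending.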
We also learned how multi-agent systems can simulate dynamic, adversarial deliberation. When agents independently research, argue, and rebut one another, the resulting analysis becomes more nuanced and stress-tested. The inclusion of a reactive jury model further demonstrated how we can use these refined arguments to come up with a more useful decision compared to our initial ideas.
What's next for ConsensusCore
We would like to add a factual filtering mechanism that supplies unbiased information to each agent in response to the queries it raises during research. Each agent could also reference internal documents to ground its arguments.