Inspiration
We were inspired to create EquiCourt by the significant access-to-justice gap for everyday people. Minor disputes over issues like landlord-tenant disagreements or parking tickets currently clog small claims courts, dragging cases out for months. The evidence for these cases is often messy—a mix of photos, voice notes, and screenshots that most online tools can't handle. Furthermore, self-represented litigants face a bewildering legal maze of overlapping municipal, provincial, and federal laws. This led us to ask: “What if an AI judge could parse any evidence, understand the legal hierarchy, and deliver a fair ruling in minutes?”
What it does
EquiCourt is a multimodal, multi-LLM micro-litigation platform designed to resolve petty disputes swiftly and fairly. The system ingests various forms of evidence, including text from user input or PDFs via Docling, and records live conversations using the browser's Web Speech API. A Cohere Command A3 model summarizes each turn of the conversation to build a clear narrative. This context, along with the initial evidence, is fed to a Gemini 2.0 Flash model, which consults the Canadian Constitution and relevant statutes via retrieval-augmented generation (RAG) to reach an informed decision. Ultimately, EquiCourt delivers clear, legally grounded remedies, such as paying damages or dismissing a claim, in a convenient online setting.
How we built it
We built EquiCourt with a multi-layered architecture. For multimodal intake, we use Docling to extract context from text and PDFs, and the browser's Web Speech API to capture and record live conversations between disputants. The reasoning core is a dual-LLM system: a Cohere Command A3 model summarizes the dialogue, and a Gemini 2.0 Flash model makes the final decision based on all evidence and the legal precedents retrieved through a RAG system connected to Canadian law. We employed efficient context management and compression to optimize speed and stay within token limits. The platform itself is built with Vite.
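The dual-LLM flow above can be sketched roughly as follows. This is a minimal, illustrative Python sketch, not our actual code: the `summarize` and `decide` callables stand in for the Cohere and Gemini API calls respectively, and all names here (`CasePipeline`, `add_turn`, `rule`) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CasePipeline:
    """Illustrative dual-LLM orchestration: one model compresses each
    conversation turn, a second model rules on the accumulated narrative
    plus retrieved law. Model calls are injected as plain callables so
    the orchestration stays model-agnostic."""
    summarize: Callable[[str], str]    # stand-in for the summarizer LLM
    decide: Callable[[str, str], str]  # stand-in for the judge LLM (narrative, law)
    narrative: List[str] = field(default_factory=list)

    def add_turn(self, speaker: str, utterance: str) -> None:
        # Summarize each turn as it arrives so the running context stays compact.
        self.narrative.append(self.summarize(f"{speaker}: {utterance}"))

    def rule(self, retrieved_law: str) -> str:
        # Hand the full narrative plus RAG-retrieved statutes to the judge model.
        return self.decide("\n".join(self.narrative), retrieved_law)
```

Keeping the model calls injectable also made the pipeline easy to test with stub functions before wiring in real API keys.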
Challenges we ran into
Our biggest technical challenge was staying within the models' context-window limits when dealing with extensive evidence. We addressed this by chunking large files and using on-the-fly RAG to feed in only the most relevant information.
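The chunk-then-retrieve approach looks roughly like this. This is a simplified sketch with illustrative sizes; the word-overlap scoring is a deliberate stand-in for the embedding-based relevance ranking a real RAG system would use.

```python
from typing import List

def chunk(text: str, size: int = 400, overlap: int = 50) -> List[str]:
    """Split a large document into overlapping character chunks
    (sizes are illustrative, not the values we tuned)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def top_chunks(query: str, chunks: List[str], budget_chars: int) -> List[str]:
    """Pick the most relevant chunks that fit a context budget.
    Naive word-overlap scoring; swap in embeddings for production."""
    q = set(query.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    picked, used = [], 0
    for c in ranked:
        if used + len(c) > budget_chars:
            break
        picked.append(c)
        used += len(c)
    return picked
```

Only the chunks returned by `top_chunks` are placed in the prompt, which is how the system stays under the token limit no matter how much evidence a party uploads.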
Accomplishments that we're proud of
We are particularly proud of demonstrating EquiCourt's capabilities by delivering our first complete, multimodal verdict in just 180 seconds during a live demo. Beyond the proof-of-concept, we have also achieved real-world traction by securing the equicourt.com domain and launching an MVP that is currently being used in a pilot program by two pro-bono mediators for actual cases.
What we learned
This project provided several key insights. We learned that using an ensemble of specialized models, like Cohere for summarization and Gemini for reasoning, yields more reliable and nuanced results for high-stakes tasks than a single model. We also discovered that fine-grained chunking of legal statutes dramatically improves the precision of our RAG system. On the user-facing side, we confirmed that transparency through clear citations and confidence bars significantly increases user acceptance of the outcomes, even when they lose. Lastly, analyzing the tone in voice evidence proved invaluable, often revealing elements of coercion or bad faith that text alone would miss.
What's next for EquiCourt
Looking ahead, we have an ambitious roadmap for EquiCourt. Our immediate priority is to launch a real-time mediation mode that can propose settlements and de-escalate disputes before a formal ruling is even necessary. We also plan to introduce multilingual support to serve broader jurisdictions, develop an e-Filing API to push certified judgments directly to official court systems, and create GovCloud or on-premise editions for data-sovereign regions. To continuously improve the platform, we will build an adaptive fine-tuning loop using appeal outcomes and are planning a major accessibility upgrade, including text-to-speech verdicts and a screen-reader-first UI.
Built With
- lovable