Inspiration
Healthcare is one of the most critical and demanding professions in the world. Clinicians undergo years of rigorous training to accurately assess, diagnose, and treat patients, often under intense time pressure and with limited bandwidth.
Even with extensive experience, the growing volume and complexity of patient data make it easy to miss subtle but important details buried in clinical notes. We wanted to build a tool that helps surface those details and makes clinical reasoning easier to follow.
What it does
Asclepion aims to reduce cognitive load by transforming unstructured clinical notes into an interactive, visual reasoning graph. Instead of replacing clinicians, it supplements their workflow by making symptoms, findings, and diagnoses easier to understand at a glance.
Furthermore, it can give a rough cost breakdown based on the extracted information, giving patients and providers a clearer picture of the treatment plan. Using this breakdown, Asclepion can highlight how relevant specific procedures or charges are to a patient's diagnosis.
Using these cost breakdowns, Asclepion applies a proprietary model we built to rate each procedure or billing item on a scale from 1.0 to 10.0 according to how relevant it was to treating the patient's final diagnoses. This helps expedite insurance claim review by making it faster to determine what is and is not covered.
How we built it
The front end was built using React, Sigma.js, and Vite. The back end uses Express.js and Ollama to host large language models locally. The locally hosted LLMs are Google's medically fine-tuned MedGemma (4B parameters) and Mistral NeMo (12B parameters), allowing us to maintain performance while keeping patient data on-device. In addition, we developed a PyTorch model to rate the relevance of individual billing items to a patient's diagnosis. This model was trained on a small, truncated dataset and exposed via an internal API served with Uvicorn, enabling real-time scoring and integration into the clinical reasoning workflow.
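As a rough sketch of the local-hosting piece, a note-extraction call to Ollama might look like the following. The endpoint and request fields follow Ollama's documented `/api/generate` API; the model name, prompt wording, and entity keys are illustrative assumptions, not our exact implementation.

```python
import json
import urllib.request

# Ollama's default local endpoint; no patient data leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(note: str, model: str = "mistral-nemo") -> dict:
    """Assemble a non-streaming Ollama request that asks for JSON output."""
    return {
        "model": model,
        "prompt": (
            "Extract symptoms, findings, and diagnoses from the clinical "
            "note below as a JSON object with keys 'symptoms', 'findings', "
            "and 'diagnoses'.\n\n" + note
        ),
        "format": "json",  # constrain the model to emit valid JSON
        "stream": False,   # return one complete response body
    }

def extract_entities(note: str) -> dict:
    """Send the note to the locally hosted model and parse its JSON reply."""
    payload = json.dumps(build_request(note)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # Ollama wraps the model's text in a "response" field.
    return json.loads(body["response"])
```

Setting `"format": "json"` at the request level reduces, but does not eliminate, malformed output, which is why validation on the back end still matters.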
Challenges we ran into
Our first challenge was overcoming Gemini's rate limiting. The free tier led to frequent timeouts, which pushed us to commit fully to local model hosting. This aligned with our security goals, since handling patient data locally is inherently more secure than relying on third-party APIs.
Another challenge was ensuring well-formed output. Because LLMs are non-deterministic, they occasionally produce malformed data. To address this, we enforced strict backend schemas and validation checks before returning results to the front end.
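A minimal sketch of that kind of validation pass is below. The field names (`nodes`, `edges`, `id`, `label`) are illustrative placeholders, not our exact schema; the point is that malformed model output is rejected before it can reach the graph renderer.

```python
def validate_graph(data: dict) -> dict:
    """Raise ValueError on malformed LLM output; return the graph unchanged."""
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    nodes = data.get("nodes")
    edges = data.get("edges")
    if not isinstance(nodes, list) or not isinstance(edges, list):
        raise ValueError("'nodes' and 'edges' must be lists")
    ids = set()
    for n in nodes:
        # Every node needs an id and a human-readable label for the graph view.
        if not isinstance(n, dict) or "id" not in n or "label" not in n:
            raise ValueError(f"malformed node: {n!r}")
        ids.add(n["id"])
    for e in edges:
        # Edges may only reference nodes that actually exist.
        if not isinstance(e, dict) or e.get("source") not in ids \
                or e.get("target") not in ids:
            raise ValueError(f"edge references unknown node: {e!r}")
    return data
```

Failing fast here means the front end only ever receives graphs it can render, and retry logic can live in one place on the back end.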
Accomplishments that we're proud of
The workflow is something we are extremely proud of. Our entire vision for this application was to reduce the time medical professionals spend working through documentation. The insurance page extends that further: it helps claims agents better understand a diagnosis and a doctor's line of thinking through clear visualizations, while also surfacing whether a given billing item is covered.
What we learned
We learned the importance of model accuracy, performance, and determinism. Since we needed structured outputs, enforcing schemas and validation turned out to be just as important as the model selection itself.
What's next for Asclepion
We want to improve model accuracy by allocating more power and resources to our backend, providing the infrastructure for higher-parameter LLMs to run. We also want to create a dedicated model for billing, because the billing prediction currently has lower-than-optimal accuracy; separating "Flow" from "Description" would allow more accuracy at lower resource cost. To keep data protected, another step would be a login/signup page, so that doctors and patients can communicate by sending notes to one another. We could also add a feature for uploading patient history documents to provide more accurate results. Finally, we could use an existing Level of Service calculator to produce more accurate billing data without requiring an LLM.
Built with
Front End: React, TypeScript, Sigma.js, Vite
Back End: Express.js, Ollama
ML / Data: Python, PyTorch, Pandas, scikit-learn