Inspiration
Primary care visits are short, intake is noisy, and clinicians don’t have time to hunt for the one guideline paragraph that matters. We set out to build a physician-facing “intake → summary → evidence” assistant that’s fast, explainable, and demo-able without real PHI.
What it does
Medigator turns a brief structured intake into a clinician-ready note and just-in-time evidence:
- Converts patient answers into a clean HPI/ROS summary (no diagnoses or advice; input-faithful).
- Computes simple boolean flags (e.g., ischemic_features) and suggests likely ICD/CPT/E/M candidates using a small rules table.
- Surfaces 2–3 high-yield evidence cards via lightweight RAG (guidelines, summaries) aligned to the patient’s presentation.
- Generates a one-page clinician view with copy-to-clipboard sections for rapid charting.
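The flag computation and code suggestion described above can be sketched as a minimal rules table. The trigger logic, flag names (beyond `ischemic_features`, which appears in the write-up), and symptom→ICD mappings below are illustrative placeholders, not the project's actual tables:

```python
# Minimal sketch of rule-driven flags and ICD candidates.
# Trigger conditions and mappings are illustrative assumptions.

def compute_flags(answers: dict) -> dict:
    """Derive simple boolean flags from structured intake answers."""
    return {
        # "ischemic_features" is from the write-up; this trigger logic is assumed.
        "ischemic_features": answers.get("chest_pain") == "pressure"
        and answers.get("exertional") is True,
        "red_flag_dyspnea": answers.get("shortness_of_breath") is True,
    }

# Illustrative symptom -> ICD-candidate table (stand-in for the CSV-backed rules).
ICD_CANDIDATES = {
    "chest_pain": ["R07.9"],            # Chest pain, unspecified
    "shortness_of_breath": ["R06.02"],  # Shortness of breath
}

def suggest_codes(answers: dict) -> list[str]:
    """Collect ICD candidates for every symptom the patient reported."""
    codes = []
    for symptom, icds in ICD_CANDIDATES.items():
        if answers.get(symptom):
            codes.extend(icds)
    return codes

answers = {"chest_pain": "pressure", "exertional": True, "shortness_of_breath": False}
flags = compute_flags(answers)   # ischemic_features -> True
codes = suggest_codes(answers)   # ["R07.9"]
```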
Where will users use it?
- Outpatient clinics and urgent care (pre-visit or rooming).
- Telemedicine intake prior to video visits.
- Triage desks to standardize handoff notes.
- Training settings for junior clinicians to see “what to ask next” and supporting evidence.
How to use it
- Scheduler sends a secure intake link to the patient (tokenized URL).
- Patient answers 5–6 targeted questions (chief complaint specific).
- Medigator produces an HPI/ROS summary, sets flags, ranks evidence, and suggests codes.
- Clinician opens the encounter in the dashboard, reviews/edits, copies sections into the EHR, and proceeds with the visit.
- Optional: export a PDF encounter summary for audit/teaching.
How we built it
- Front end: React + Vite + Tailwind. Two faces: a minimal patient intake form and a clinician dashboard (encounter list → detail panel with HPI/ROS, flags, evidence, suggested codes).
- API: FastAPI with typed Pydantic models for request/response contracts.
LLM layer:
- GPT-4o-mini for deterministic JSON output (response_format JSON schema, temperature 0.1).
- Hardened system prompt with strict schema and “no diagnosis/advice” guardrails.
- Fallback template generator if the LLM call fails or times out.
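The guardrail pattern above, schema-checked JSON with a template fallback, can be sketched as follows. Here `call_llm` is a stand-in for the GPT-4o-mini request (`response_format` JSON schema, temperature 0.1), and the key names and template text are illustrative:

```python
import json

# Keys the response schema requires (illustrative).
REQUIRED_KEYS = {"hpi", "ros", "flags"}

def fallback_summary(intake: dict) -> dict:
    """Template generator used when the LLM call fails, times out,
    or returns JSON that does not match the schema."""
    return {
        "hpi": f"Patient reports {intake.get('chief_complaint', 'symptoms')}.",
        "ros": {"cv": [], "resp": [], "constitutional": []},
        "flags": {},
    }

def summarize(intake: dict, call_llm) -> dict:
    """call_llm stands in for the actual GPT-4o-mini request."""
    try:
        raw = call_llm(intake)   # may raise on timeout/error
        data = json.loads(raw)   # may raise on malformed JSON
        if not REQUIRED_KEYS <= data.keys():
            raise ValueError("schema mismatch")
        return data
    except Exception:
        return fallback_summary(intake)

# A failing or malformed LLM reply falls through to the template:
result = summarize({"chief_complaint": "chest pain"}, lambda _: "not json")
```

Passing the model call in as a callable keeps the fallback path trivially testable without hitting the API.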
RAG:
- Sentence-Transformers (all-MiniLM-L6-v2) embeddings, FAISS (IndexFlatIP) store.
- Hybrid retrieval: embeddings + optional BM25, re-ranked; top 2–3 evidence cards.
- Metadata tagging (year/section/tags) for cleaner cards.
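A toy version of the hybrid ranking is sketched below, with bag-of-words cosine standing in for MiniLM embeddings + FAISS inner product, and a crude term-frequency score standing in for BM25. The corpus, blend weights, and card fields are illustrative:

```python
import math
from collections import Counter

# Tiny illustrative corpus with metadata, as in the real evidence cards.
CARDS = [
    {"title": "Chest pain triage", "year": 2023,
     "text": "exertional chest pain ischemic features troponin ecg"},
    {"title": "GERD overview", "year": 2021,
     "text": "heartburn reflux burning after meals antacid"},
    {"title": "Dyspnea workup", "year": 2022,
     "text": "shortness of breath wheeze spirometry"},
]

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[dict]:
    q = Counter(query.lower().split())
    scored = []
    for card in CARDS:
        d = Counter(card["text"].split())
        dense = cosine(q, d)  # stand-in for MiniLM + FAISS IndexFlatIP
        sparse = sum(d[t] for t in q) / len(card["text"].split())  # BM25 stand-in
        scored.append((0.7 * dense + 0.3 * sparse, card))  # blended re-rank
    scored.sort(key=lambda s: s[0], reverse=True)
    return [card for _, card in scored[:k]]

top = retrieve("exertional chest pain with ischemic features")
```

The key design point survives the simplification: queries are built from symptoms/flags rather than raw free text, so even a small corpus returns on-topic cards.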
Rules engine:
- CSV-backed mappings for symptom→ICD candidates, flags→CPT triggers, and simple E/M buckets.
- SQLite for fast local lookups; seeded from data/rules/*.csv.
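The CSV→SQLite seeding and lookup can be sketched as below; the table name, column names, and rows are assumptions, since the actual rule files are not shown:

```python
import csv
import io
import sqlite3

# Illustrative stand-in for one of the data/rules/*.csv files.
RULES_CSV = """symptom,icd_code,label
chest_pain,R07.9,Chest pain unspecified
dyspnea,R06.02,Shortness of breath
"""

conn = sqlite3.connect(":memory:")  # local demo DB; file-backed in practice
conn.execute("CREATE TABLE symptom_icd (symptom TEXT, icd_code TEXT, label TEXT)")
rows = list(csv.DictReader(io.StringIO(RULES_CSV)))
conn.executemany(
    "INSERT INTO symptom_icd VALUES (:symptom, :icd_code, :label)", rows
)

def icd_candidates(symptom: str) -> list[tuple[str, str]]:
    """Fast local lookup: symptom -> (icd_code, label) candidates."""
    cur = conn.execute(
        "SELECT icd_code, label FROM symptom_icd WHERE symptom = ?", (symptom,)
    )
    return cur.fetchall()

candidates = icd_candidates("chest_pain")  # [("R07.9", "Chest pain unspecified")]
```

Keeping the rules in CSV makes them reviewable by clinicians; SQLite just gives indexed lookups at request time.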
Security/ops:
- PHI-redaction middleware for HIPAA-mode demo (masks names/phones if present).
- Frontend deployed; backend runs locally behind a secure tunnel (Cloudflare/Ngrok) so keys/RAG index stay off the client.
- Mock FHIR bundle for realistic encounters; no real PHI used.
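The redaction middleware's core can be sketched as regex masking applied before any LLM/RAG call. The patterns and placeholder tokens below are illustrative (and name detection, which generally needs more than a regex, is omitted here):

```python
import re

# Illustrative PHI patterns; the real middleware may cover more identifiers.
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    """Mask obvious identifiers before text leaves the backend."""
    text = PHONE.sub("[PHONE]", text)
    text = EMAIL.sub("[EMAIL]", text)
    return text

safe = redact("Call John at 555-123-4567 or john@example.com")
# -> "Call John at [PHONE] or [EMAIL]"
```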
Challenges we ran into
- LLM determinism: Forcing strict JSON under tight latency. Solved with response schema, caching, and a regex JSON extractor fallback.
- RAG quality: Avoiding generic snippets. Solved with symptom/flag-driven queries, small curated corpus, and re-ranking.
- Scope creep: Insurance pricing is complex; we removed cost ranges to keep the demo reliable and focused.
- CORS/tunnel: Keeping the deployed frontend talking to a local API securely during the live demo.
- Time budget: Balancing a polished UI with guardrailed backend logic in <36 hours.
Authentication
- Tokenized intake links for patients (opaque, short-lived, single-use).
- Clinician dashboard protected by a simple passcode/magic-link for demo.
- No real patient accounts; HIPAA-mode masks any accidental PHI before LLM/RAG calls.
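The opaque, short-lived, single-use tokens can be sketched as below; the TTL value and in-memory store are demo assumptions (a real deployment would persist tokens server-side):

```python
import secrets
import time

# token -> expiry timestamp; in-memory store is a demo assumption.
TOKENS: dict[str, float] = {}
TTL_SECONDS = 15 * 60  # illustrative lifetime

def issue_token() -> str:
    """Mint an opaque, unguessable token for one intake link."""
    token = secrets.token_urlsafe(32)
    TOKENS[token] = time.time() + TTL_SECONDS
    return token

def consume_token(token: str) -> bool:
    """Valid only once (pop removes it) and only before expiry."""
    expires = TOKENS.pop(token, None)
    return expires is not None and time.time() < expires

t = issue_token()
first_use = consume_token(t)    # True
second_use = consume_token(t)   # False: already consumed
```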
Dashboard Details
- Encounter list: sortable by time/chief complaint; quick status badges.
- Summary panel: HPI (≤5 sentences), ROS (CV/Resp/Constitutional), PMH/Meds.
- Flags: computed booleans with human-readable rationales.
- Evidence cards: title, snippet, year, and source link; copy link.
- Codes: top ICD/CPT/E/M suggestions with labels and “why” tooltips.
- Actions: copy sections, export PDF, mark reviewed.
Accomplishments that we’re proud of
- End-to-end flow from intake link → clinician dashboard in under 36 hours.
- Hardened LLM outputs with JSON schema validation and safe fallbacks.
- RAG that’s fast enough for clinic use (sub-second on small corpora) and actually shows relevant lines, not noise.
- A UI clinicians can parse at a glance: one page, no chat clutter.
What we learned
- Small, curated corpora + precise prompts beat massive, noisy retrieval for clinical UX.
- Strict schemas and fallbacks matter more than fancy prompts when live-demoing healthcare tools.
- E/M and coding logic is best started as transparent rules before any ML.
- “Demo-safe” privacy patterns (no PHI, tunnels, masking) reduce friction with stakeholders.
What’s next for Medigator
- SMART on FHIR read to prefill meds/allergies/problems from sandbox EHRs.
- Human-in-the-loop edits that retrain/adjust prompts and ranking over time.
- Specialty packs (chest pain, headache, abdominal pain, diabetes follow-up).
- Prospective usability testing with clinicians; measure time saved per note.
- Deployment hardening: audit logs, SSO, on-prem/virtual private deployment.
Built with
- Frontend: React, Vite, TypeScript, Tailwind
- Backend: FastAPI, Pydantic, Uvicorn
- LLM: OpenAI GPT-4o-mini (JSON mode), schema-validated prompts
- RAG: Sentence-Transformers, FAISS, optional BM25
- Data: SQLite, CSV rule tables, synthetic FHIR bundles
- Dev/Ops: Python, Node, Docker (optional), Cloudflare Tunnel/Ngrok, GitHub Actions (lint/test)