Inspiration

Medical malpractice and communication gaps in hospitals, especially during handoffs and at the bedside, lead to avoidable harm. We were inspired by two main problems: safety (adverse events from drug–drug interactions, allergies, and procedures that don't match a patient's conditions) and interprofessional communication (incomplete or inaccurate handoffs that cause redundancy, wasted time, and risk). We wanted to build an AI layer that sits alongside clinicians, checks what is being said and done against standards, and helps close those gaps before they become harm.

What it does

MedXP is an agentic AI assistant for doctors and nurses that turns voice notes and session context into structured, actionable support. Clinicians record notes while working with a patient; the system transcribes that input and enriches it with the right SOPs, hospital policies, and treatment guidelines (we're starting with non–small cell lung cancer). It then flags risks in near real time (e.g., anticoagulation with hemoptysis, neutropenic fever, hypoxia, drug–allergy or drug–drug interaction issues) and surfaces the relevant protocols and next steps. In short: it helps prevent wrong or unsafe procedures, improves handoff quality, and supports continuity of care by giving every provider the right context and checks at the point of care.

How we built it

We used Python for the agentic backend (FastAPI, Pydantic), Node.js for middleware/API orchestration, and React + TypeScript for the frontend. The core "brain" is a context enrichment agent that: (1) accepts JSON payloads with patient data, provider info, and transcript/voice-note text; (2) runs keyword- and rule-based retrieval over simulated knowledge bases (SOPs, policies, NSCLC guidelines) to find what's relevant; (3) summarizes patient context and risk factors (with optional LLM integration for richer summaries); and (4) generates clinical warnings (contraindications, critical values, drug interactions, missing documentation). Data flows from the frontend through the middleware to the Python agent; the agent returns enriched context (relevant SOPs, policies, guidelines, and warnings) for display and for downstream analysis or alerting.
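Steps (2) and (4) above can be sketched in miniature. This is a simplified illustration, not the actual MedXP code: the knowledge-base entries, field names, and drug list are all hypothetical stand-ins for the real simulated SOPs and rule set.

```python
from dataclasses import dataclass, field

# Hypothetical mini knowledge base: each SOP pairs a title with trigger keywords.
SOP_KB = [
    {"title": "Anticoagulation Hold Protocol", "keywords": {"anticoagulation", "bleeding", "hemoptysis"}},
    {"title": "Neutropenic Fever Pathway", "keywords": {"neutropenia", "fever", "anc"}},
    {"title": "Hypoxia Escalation SOP", "keywords": {"hypoxia", "spo2", "oxygen"}},
]

@dataclass
class EnrichedContext:
    relevant_sops: list = field(default_factory=list)
    warnings: list = field(default_factory=list)

def enrich(payload: dict) -> EnrichedContext:
    """Keyword-based retrieval plus one rule-based warning over a patient payload."""
    tokens = set(payload.get("transcript", "").lower().replace(",", " ").split())
    ctx = EnrichedContext()
    # Retrieval: keep any SOP whose trigger keywords overlap the transcript.
    for doc in SOP_KB:
        hits = doc["keywords"] & tokens
        if hits:
            ctx.relevant_sops.append({"title": doc["title"], "matched": sorted(hits)})
    # Warning rule: anticoagulant on the med list plus bleeding signs in the note.
    meds = {m.lower() for m in payload.get("patient", {}).get("medications", [])}
    if meds & {"warfarin", "apixaban", "heparin"} and tokens & {"hemoptysis", "bleeding"}:
        ctx.warnings.append({
            "severity": "critical",
            "message": "Anticoagulation with active bleeding signs",
            "action": "hold anticoagulation and notify attending",
        })
    return ctx

result = enrich({
    "patient": {"medications": ["Apixaban", "Cisplatin"]},
    "transcript": "Patient reports hemoptysis overnight, SpO2 stable",
})
```

In the real pipeline the equivalent logic runs behind the FastAPI agent, with weighted matching over the full SOP/policy/guideline bases rather than a flat keyword set.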

Challenges we ran into

Designing retrieval that's both fast and clinically relevant without heavy ML infrastructure was a challenge: we used weighted keyword matching and simple triggers (e.g., labs, vitals) so the prototype stays interpretable and quick. Mapping free-text transcripts and voice notes into structured patient context (diagnoses, meds, symptoms) required clear schemas and fallbacks for when the LLM isn't available. Keeping alerts actionable without causing alert fatigue meant defining severity levels and tying each warning to a specific action (e.g., "hold anticoagulation," "blood cultures within 60 minutes"). We also had to simulate realistic knowledge bases (SOPs, policies, guidelines) so the system could demonstrate value before plugging into real hospital systems.
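The severity-plus-action pattern above could be sketched like this; the rule names, tiers, and action strings here are illustrative examples, not our production rule table.

```python
# Hypothetical warning rules: every warning carries a severity tier and exactly
# one concrete action, so alerts stay actionable and fatigue stays low.
WARNING_RULES = {
    "anticoagulation_bleeding": {"severity": "critical", "action": "hold anticoagulation"},
    "neutropenic_fever": {"severity": "critical", "action": "blood cultures within 60 minutes"},
    "missing_code_status": {"severity": "moderate", "action": "confirm code status with patient"},
}

def triage(triggered: list[str]) -> list[dict]:
    """Return the triggered warnings ordered most-severe first."""
    order = {"critical": 0, "moderate": 1, "low": 2}
    rules = [dict(WARNING_RULES[name], name=name) for name in triggered if name in WARNING_RULES]
    return sorted(rules, key=lambda r: order[r["severity"]])
```

Sorting by severity means the UI can always show the most urgent action first and collapse lower tiers, which is one simple lever against alert fatigue.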

Accomplishments that we're proud of

We're proud of building an end-to-end agentic pipeline, from input (patient + transcript) through retrieval, summarization, and warning generation, that runs without requiring an LLM API key (graceful mock/rule-based fallbacks). We built NSCLC-focused knowledge bases (10 SOPs, 10 policies, 11 guidelines) and rule-based warning logic that catches high-impact cases (anticoagulation + bleeding, neutropenic fever, hypoxia, allergies, code-status gaps). The enrichment API is documented, testable with sample inputs/outputs, and designed to slot into the larger MedXP stack (recording → transcription → enrichment → analysis → notifications). We're also proud of articulating the "why" clearly: safety and interprofessional communication as the two pillars that drive everything we built.
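To give a feel for the documented enrichment API, here is a hypothetical request/response pair; the field names and the "SOP-07" identifier are illustrative, not the actual MedXP schema.

```python
import json

# Hypothetical request payload sent from the Node.js middleware to the agent.
sample_request = {
    "provider": {"name": "R. Chen", "role": "nurse"},
    "patient": {
        "diagnosis": "non-small cell lung cancer",
        "allergies": ["penicillin"],
        "medications": ["apixaban"],
        "vitals": {"spo2": 89, "temp_c": 38.6},
    },
    "transcript": "Starting cycle two, patient febrile and short of breath.",
}

# Hypothetical response: enriched context plus warnings, each warning linked
# back to the SOP/policy/guideline that justifies it (for explainability).
sample_response = {
    "relevant_guidelines": ["NSCLC chemotherapy cycle 2 checklist"],
    "warnings": [
        {"severity": "critical", "message": "Possible neutropenic fever",
         "action": "blood cultures within 60 minutes", "source": "SOP-07"},
    ],
}

# Payloads round-trip through JSON between the frontend, middleware, and agent.
decoded = json.loads(json.dumps(sample_request))
```

Attaching a `source` reference to every warning is what makes each alert auditable against a specific document rather than a black-box judgment.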

What we learned

We learned that safety and communication are two sides of the same coin: better handoffs and continuity reduce redundant or conflicting care, which in turn reduces adverse events. We saw how structured context (vitals, labs, meds, allergies) and unstructured text (transcripts) can be combined in a lightweight agent that doesn't need heavy ML to be useful. We also learned that clinician trust depends on explainability: linking every warning and recommendation to a specific SOP, policy, or guideline, and making the "why" visible in both the UI and the API response.

What's next for MedXP

Next steps include: connecting to real LLM APIs (e.g., MiniMax) for richer summarization and explanation; adding an analysis agent that compares documented actions against the enriched context and flags compliance gaps; integrating with live EHR and policy systems instead of simulated JSON; extending beyond NSCLC to other high-risk specialties; and building out the React frontend so nurses and doctors can record voice notes, see enriched context and warnings in real time, and act on recommendations. Long term, we want MedXP to be the agentic AI co-pilot that helps medical professionals provide the best care possible: solving communication issues, preventing unsafe procedures, and making every handoff and every decision a bit safer.

Built With

Python, FastAPI, Pydantic, Node.js, React, TypeScript
