MedXP is a proof-of-concept clinical handoff platform designed to reduce medical malpractice and shift-change issues in hospital nursing settings. The system features AI-powered audio recording, transcription, and agentic analysis against medical standards of care.
The platform offers two frontend options:
- Frontend (React): A modern web application with in-browser audio recording, real-time waveform visualization, and a comprehensive shift board for managing patient handoffs.
- Frontend2 (Streamlit): A Python-based web interface providing audio upload, transcription, validation, and patient timeline visualization. This frontend connects to both the backend (for context enrichment) and the middleware (for transcript validation and malpractice analysis).
Both frontends help healthcare professionals create structured SBAR handoff documentation efficiently while monitoring interactions against patient data and medical standards.
MedXP streamlines clinical handoffs by recording audio notes and automatically transcribing them.
- Audio Recording: Record clinical handoffs directly in the browser
- AI Transcription: Automatic transcription using the OpenAI Whisper API
- SBAR Format: Structured handoff documentation
- Patient Management: Select and manage patient handoffs
- Shift Board: Kanban and timeline views for shift management
- Notifications Portal: Alerts for nurse managers when care gaps are detected
- Modern UI: Built with React, Tailwind CSS, and Shadcn/ui
- Audio Upload: Upload audio files in multiple formats (WAV, MP3, M4A, WebM, OGG)
- Transcription: Speech-to-text conversion using OpenAI Whisper
- Transcript Export: Save transcripts as text files with timestamps
- Clinical Validation: Validate against medical SOPs, policies, and guidelines
- Patient Timeline: Visual timeline with warnings and events
- Risk Assessment: Malpractice risk and compliance scoring
- SBAR Display: Structured handoff card visualization
Launch:

```bash
# 1. Copy .env.example to .env and add MINIMAX_API_KEY or OPENAI_API_KEY
# 2. Start all services (creates .venv and runs npm install if needed)
python start_all.py
```

- Backend: http://localhost:8000
- Middleware: http://localhost:5001
- Frontend: http://localhost:8080

Press Ctrl+C to stop all services. Use `python stop_all.py` to kill services by port if needed.

Preflight: `.env` must exist (the launcher errors and exits if it is missing). `.venv` and `frontend/node_modules` are created/installed automatically.
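As a quick sanity check after launch, you can poll each service's `/health` endpoint (the README lists `GET /health` as a health check on all services). This is an illustrative stdlib-only sketch, not part of the project; the frontend root is probed directly since its health route is not documented here.

```python
import urllib.request
import urllib.error

# Ports from the service table; the frontend URL is probed at its root.
SERVICES = {
    "backend": "http://localhost:8000/health",
    "middleware": "http://localhost:5001/health",
    "frontend": "http://localhost:8080",
}

def check(url: str, timeout: float = 2.0) -> bool:
    """Return True if the URL answers with an HTTP 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(f"{name:>10}: {'up' if check(url) else 'DOWN'} ({url})")
```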
- Navigate to the frontend directory:

  ```bash
  cd frontend
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Start the development server:

  ```bash
  npm run dev
  ```

The app will be available at http://localhost:8080/
For detailed frontend documentation, see frontend/README.md
- Navigate to the backend-api directory:

  ```bash
  cd backend-api
  ```

- Run the setup script:

  ```bash
  ./setup.sh
  ```

  Or manually:

  ```bash
  python3 -m venv venv
  source venv/bin/activate
  pip install -r requirements.txt
  ```

- Configure your OpenAI API credentials:

  ```bash
  cp .env.example .env
  # Edit .env and add your OPENAI_API_KEY
  ```

- Start the backend server:

  ```bash
  python main.py
  ```

The API will be available at http://localhost:8000/
For detailed backend documentation, see backend-api/README.md
You need to run both the frontend and backend servers:

Terminal 1 - Frontend:

```bash
cd frontend
npm run dev
```

Terminal 2 - Backend:

```bash
cd backend-api
source venv/bin/activate
python main.py
```

This section describes the complete MedXP system with three interconnected services:
- backend - FastAPI server providing context enrichment and medical knowledge retrieval
- middleware - Flask API handling transcript validation and malpractice analysis
- frontend2 - Streamlit web interface for recording and viewing handoffs
```
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│  frontend2  │─────▶│ middleware  │─────▶│   backend   │
│ (Streamlit) │      │   (Flask)   │      │  (FastAPI)  │
└─────────────┘      └─────────────┘      └─────────────┘
       │                    │                    │
       │ Submit audio/      │ Enrich context     │ Retrieve
       │ transcript         │ + risk analysis    │ SOPs/guidelines
       │                    │                    │
       └────────────────────┴────────────────────┘
         Timeline with        Warnings +
         warnings             risk assessment
```
| Service | Port | Description |
|---|---|---|
| frontend2 | 8501 | Streamlit web UI |
| middleware | 5001 | Transcript validation API |
| backend | 8000 | Context enrichment API |
Option A - Use start_all.py (recommended): See Quick Start above. Uses a single project `.venv` at the repo root.

Option B - Manual (multiple terminals):

Terminal 1 - Backend (Context Enrichment):

```bash
# Use project venv at repo root
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r requirements.txt -r backend/requirements.txt -r middleware/requirements.txt
cd backend
python main.py
```

The backend will be available at http://localhost:8000

Terminal 2 - Middleware (Validation & Risk Analysis):

```bash
source .venv/bin/activate  # (from project root)
cd middleware
PYTHONPATH=.. python app.py
```

The middleware will be available at http://localhost:5001

Terminal 3 - Frontend2 (Streamlit UI):

```bash
cd frontend2
# Create a virtual environment if one does not exist
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
# Start the Streamlit app
streamlit run app.py
```

The frontend will be available at http://localhost:8501
- Open your browser to http://localhost:8501

- New Handoff Flow:
  - Select a patient from the dropdown
  - Upload an audio file (WAV, MP3, M4A, WebM, OGG)
  - Click "Transcribe Audio" to convert speech to text
  - Review and edit the transcript if needed
  - Click "Save Transcript" to save as a text file
  - Click "Validate & Analyze" to run validation

- Validation Process:
  - The middleware receives the transcript
  - Forwards it to the backend for context enrichment (SOPs, policies, guidelines)
  - The backend generates clinical warnings
  - The malpractice agent analyzes for risk/compliance issues
  - Results are returned to the frontend

- View Results:
  - Clinical warnings are displayed with severity levels
  - The SBAR handoff card is shown
  - The patient timeline is updated with warnings
  - The risk assessment shows a compliance score
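The "Validate & Analyze" step amounts to a single POST to the middleware. A minimal stdlib sketch of building that request is shown below; the `PatientID`/`Transcript` field names follow the enrichment-flow description later in this README, but treat the exact JSON schema as an assumption (frontend2 itself uses httpx).

```python
import json
import urllib.request

MIDDLEWARE_URL = "http://localhost:5001"

def build_validation_request(patient_id: str, transcript: str) -> urllib.request.Request:
    """Build the POST /api/v1/transcripts request sent on 'Validate & Analyze'."""
    payload = json.dumps({"PatientID": patient_id, "Transcript": transcript}).encode()
    return urllib.request.Request(
        f"{MIDDLEWARE_URL}/api/v1/transcripts",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the middleware running (patient ID here is hypothetical):
# resp = urllib.request.urlopen(build_validation_request("P001", "Pt stable overnight..."))
# result = json.load(resp)  # warnings, SBAR card, risk assessment
```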
- Record Handoff: Upload audio files for transcription
- Patient Timeline: Visual timeline of events and warnings
- Patient Overview: View patient information and vitals
| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/transcribe` | Transcribe audio (backend-api) |
| POST | `/api/v1/transcripts` | Validate transcript (middleware) |
| GET | `/health` | Health check (all services) |
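Uploading an audio file to `/api/transcribe` requires a multipart/form-data body. Here is a dependency-free sketch of building one; the form field name `"file"` is an assumption about the FastAPI handler, not confirmed by this README.

```python
import uuid

def multipart_body(filename: str, audio_bytes: bytes, field: str = "file"):
    """Build a multipart/form-data body and its Content-Type header value."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: audio/wav\r\n\r\n"
    ).encode() + audio_bytes + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

# With backend-api running:
# import urllib.request
# body, ctype = multipart_body("handoff.wav", open("handoff.wav", "rb").read())
# req = urllib.request.Request("http://localhost:8000/api/transcribe", data=body,
#                              headers={"Content-Type": ctype}, method="POST")
```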
- Docker and Docker Compose
- Python 3.11+
- Node.js 18+
- Clone the repository

  ```bash
  git clone <repository-url>
  cd MedXP
  ```

- Set up environment variables

  Copy the example environment file and fill in the required values:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and configure the following settings:

  ```bash
  # Database
  POSTGRES_PASSWORD=your_secure_password

  # Flask
  FLASK_SECRET_KEY=your_flask_secret_key

  # MinIO
  MINIO_ROOT_USER=minioadmin
  MINIO_ROOT_PASSWORD=your_minio_password

  # AI/LLM
  LLM_API_KEY=your_llm_api_key
  ```

- Start the development environment

  ```bash
  cd docker
  ./setup.sh
  ```

  Or manually with Docker Compose:

  ```bash
  docker-compose up -d
  ```

- Access the application

  - Frontend: http://localhost:3000 (Docker) or http://localhost:8080 (Dev)
  - API: http://localhost:5001 (Flask middleware) or http://localhost:8000 (FastAPI backend)
  - MinIO Console: http://localhost:9001
```
MedXP/
├── .venv/                 # Project Python venv (created by start_all.py)
├── start_all.py           # One-command launcher (preflight + venv)
├── stop_all.py            # Stop services by port
├── clean_deps.py          # Remove .venv, frontend/node_modules, package-lock.json
├── docs/
│   └── docs-for-ai/       # Project documentation for AI context
├── frontend/              # React + TypeScript + Vite frontend
│   └── src/
│       ├── components/    # UI components
│       ├── hooks/         # Custom React hooks (audio recording)
│       ├── pages/         # Page components
│       └── lib/           # Utilities and API clients
├── frontend2/             # Streamlit Python frontend
│   ├── app.py             # Main Streamlit application
│   └── requirements.txt   # Python dependencies
├── backend-api/           # FastAPI backend with audio transcription
│   ├── main.py            # FastAPI server with transcription endpoint
│   └── requirements.txt   # Python dependencies
├── middleware/            # Flask API layer for transcript validation
│   ├── app.py             # Flask application
│   └── requirements.txt   # Python dependencies
├── backend/               # Agentic AI backend services
│   ├── agents/            # AI agents (context enrichment, malpractice)
│   └── services/          # AI services
├── audio/                 # Audio recordings storage
├── docker/                # Docker configuration
├── Data/                  # Sample data files
│   ├── scenarios.json     # Transcript scenarios
│   └── patients_ehr.json  # Patient EHR data
└── LICENSE
```
- React 18 with TypeScript
- Vite for development
- Tailwind CSS
- Radix UI / Shadcn/ui
- React Router
- React Query
- Custom audio recording hook
- Streamlit for rapid Python UI development
- httpx for HTTP client requests
- Python 3.11+
- FastAPI
- OpenAI Whisper API for transcription
- Uvicorn
- Python 3.11+
- Flask (Python)
- httpx for backend communication
- Malpractice agent integration
- Python-based AI agents
- Context enrichment with medical knowledge
- Medical standards database (SOPs, policies, guidelines)
Once the backend-api is running, visit:
- Interactive API docs: http://localhost:8000/docs
- Alternative docs: http://localhost:8000/redoc
The middleware API handles transcript validation and forwards to the backend for enrichment.
| Method | Endpoint | Service | Description |
|---|---|---|---|
| POST | `/api/transcribe` | backend-api | Transcribe audio using OpenAI Whisper |
| POST | `/api/v1/transcripts` | middleware | Validate transcript with backend enrichment |
| POST | `/api/v1/enrich` | backend | Context enrichment with medical knowledge |
| GET | `/health` | all | Health check endpoint |
When submitting a transcript via `/api/v1/transcripts`:

1. Middleware receives `PatientID` and `Transcript`
2. Generates an enrichment request using the `scn_1.json` template
3. Forwards it to the backend at `/api/v1/enrich`
4. Backend retrieves relevant SOPs, policies, and guidelines
5. Backend generates clinical warnings
6. The malpractice agent analyzes for risk/compliance
7. The combined response is returned to the frontend
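Schematically, the combined response carries warnings with severity levels, an SBAR card, and a risk assessment. The dict below is purely illustrative (the actual field names come from the `scn_1.json` template, whose schema is not shown here), with a small helper for filtering warnings by severity as the UI does when displaying them:

```python
# Illustrative response shape -- NOT the project's actual schema.
EXAMPLE_RESPONSE = {
    "warnings": [
        {"message": "Allergy not reconciled", "severity": "high"},
        {"message": "Vitals gap over 4 hours", "severity": "medium"},
    ],
    "sbar": {"situation": "...", "background": "...",
             "assessment": "...", "recommendation": "..."},
    "risk": {"compliance_score": 0.82},
}

def warnings_at_or_above(response: dict, severity: str) -> list[dict]:
    """Return warnings whose severity meets or exceeds the given level."""
    order = {"low": 0, "medium": 1, "high": 2}
    cutoff = order[severity]
    return [w for w in response.get("warnings", []) if order[w["severity"]] >= cutoff]
```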
```bash
BACKEND_URL=http://127.0.0.1:8000
MIDDLEWARE_URL=http://127.0.0.1:5001
TRANSCRIPT_API_URL=http://127.0.0.1:8000
OPENAI_API_KEY=your_openai_api_key_here
```

The middleware loads configuration from the project-root `.env`. Set `MINIMAX_API_KEY` or `OPENAI_API_KEY`.
Sample data files are located in the `Data/` directory:

- `scenarios.json`: Sample nurse-patient session transcripts
- `patients_ehr.json`: Sample patient electronic health records
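For local experiments, both files can be loaded with the stdlib `json` module. The file names come from the `Data/` listing above; their internal JSON structure is not documented here, so the loader makes no assumptions about it:

```python
import json
from pathlib import Path

def load_samples(data_dir: str = "Data") -> tuple:
    """Load the sample transcript scenarios and patient EHR records."""
    root = Path(data_dir)
    scenarios = json.loads((root / "scenarios.json").read_text())
    patients = json.loads((root / "patients_ehr.json").read_text())
    return scenarios, patients
```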
All configuration is managed through environment variables. Copy .env.example to .env and customize as needed for your environment.
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
Copyright (c) 2026. All rights reserved. No use without permission.
See the LICENSE file for details.
For issues and questions, please open an issue on GitHub.