# Luma AI

Luma AI is a dementia care companion platform built for caregivers and patients. It combines AI agents, serverless workflows, and real-time monitoring to support memory assistance, medication reminders, wandering alerts, and MRI-based cognitive insights. The system is designed to run on DigitalOcean (App Platform, Functions, Spaces, Managed Postgres, Managed Valkey/Redis, OpenSearch).
## Disclaimer
This is a research prototype, not a diagnostic medical tool. Do not use for real medical decision-making. All interfaces must display: "This is a research prototype, not a diagnostic medical tool."
- Features
- Architecture Overview
- Project Structure
- Prerequisites
- Getting Started
- Configuration & Environment
- Components in Detail
- Deployment
- Healthcare Disclaimer

## Features
| Feature | Description |
|---|---|
| Memory companion | Store and search memories (text + images) with semantic search via Gemini embeddings and OpenSearch. Caretaker-defined "usual spots" are merged into search results. |
| Caregiver intelligence | Chat and weekly risk summaries: MRI analysis, wandering/memory patterns, and actionable next steps. Powered by configurable AI agents (e.g. Gradient/DO Agents). |
| MRI analysis | Upload brain MRI scans; images are stored in DO Spaces, analyzed by a GPU inference service (ONNX ViT), and results are persisted and summarized by the caregiver agent. |
| Medication reminders | Timezone-aware medication schedules in Postgres; DigitalOcean Functions + Twilio send SMS reminders and update last_sent_at. |
| Wandering detection | GPS events streamed to Valkey (Redis); a detector compares distance from a safe zone and triggers Twilio SMS alerts + logs to Postgres for risk scoring. |
| Dashboard | Caretaker dashboard with per-patient charts: wandering counts, memory search success/failure, medication distribution, and setup metrics (meds, usual spots, MRIs). |
## Architecture Overview

```
┌──────────────────────────────────────────────────────────────────────────────┐
│                             Caregiver / Patient                              │
│                           (Browser → Django App)                             │
└──────────────────────────────────────────────────────────────────────────────┘
                                       │
                                       ▼
┌──────────────────────────────────────────────────────────────────────────────┐
│                              luma_web (Django)                               │
│  • Auth (caretaker vs patient), sessions, patient CRUD                       │
│  • Proxies: /agents/mri, /agents/memory/store, /agents/memory/search,        │
│    /agents/chat → DO Agents (BASE_URL + ACCESS_TOKEN)                        │
│  • Uploads to DO Spaces (memories, MRI studies)                              │
│  • Dashboard data from Postgres (wander_alerts, memory_retrieval_logs,      │
│    medication_schedules, mri_studies, usual_spots)                           │
└──────────────────────────────────────────────────────────────────────────────┘
       │                  │                     │                   │
       ▼                  ▼                     ▼                   ▼
┌──────────────┐  ┌─────────────────┐  ┌──────────────┐  ┌─────────────────┐
│  DO Agents   │  │  DO Serverless  │  │  DO Spaces   │  │     Managed     │
│ (Caregiver,  │  │    Functions    │  │ (S3-compat)  │  │    Postgres     │
│ Memory, Chat)│  │   (see below)   │  │              │  │                 │
└──────────────┘  └─────────────────┘  └──────────────┘  └─────────────────┘
                           │  memory_store, memory_retrieve,
                           │  mri_analysis, medication_worker,
                           │  risk_analyser
                           ▼
                  ┌──────────────────┐  ┌─────────────────┐
                  │    OpenSearch    │  │  MRI Inference  │
                  │   (embeddings,   │  │  (FastAPI+ONNX  │
                  │ patient-memories)│  │   on GPU VM)    │
                  └──────────────────┘  └─────────────────┘

┌─────────────────┐
│    Wandering    │  ← Valkey (Redis) streams ← simulator / GPS
│    detector     │  → Twilio SMS + INSERT into wander_alerts (Postgres)
└─────────────────┘
```
- Django is the main web app: auth, dashboard, patient management, and proxy to AI agents.
- DigitalOcean Agents (Caregiver, Memory, Chat) are invoked by Django via `DO_*_AGENT_BASE_URL` and `DO_*_AGENT_ACCESS_TOKEN`. Agents can call serverless functions and tools.
- Serverless functions handle: memory store/retrieve (Gemini + OpenSearch + Spaces), the MRI pipeline (Spaces → inference URL → Postgres), medication SMS (Postgres + Twilio), and risk summaries (Postgres aggregations).
- MRI inference is a separate FastAPI service (e.g. on a GPU droplet) exposing `/predict`; the `mri_analysis` function calls it via `INFERENCE_URL`.
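As an illustration of the memory retrieve flow, a hybrid semantic + keyword OpenSearch query can be sketched as below. The index name `patient-memories` comes from this README; the field names (`embedding`, `patient_id`, `text`) and the default `k` are assumptions about the real mapping used by `functions/memory_retrieve`:

```python
from typing import Any, Dict, List


def build_memory_query(query_text: str,
                       query_embedding: List[float],
                       patient_id: int,
                       k: int = 5) -> Dict[str, Any]:
    """Hybrid k-NN + keyword query against the patient-memories index.

    Field names here are illustrative; check the real index mapping
    in functions/memory_retrieve before using this shape.
    """
    return {
        "size": k,
        "query": {
            "bool": {
                # Hard filter: never return another patient's memories.
                "filter": [{"term": {"patient_id": patient_id}}],
                "should": [
                    # Semantic leg: approximate k-NN over the Gemini embedding.
                    {"knn": {"embedding": {"vector": query_embedding, "k": k}}},
                    # Keyword leg: plain text match as a lexical fallback.
                    {"match": {"text": {"query": query_text}}},
                ],
                "minimum_should_match": 1,
            }
        },
    }
```

With `opensearch-py`, a dict like this would be passed as `client.search(index="patient-memories", body=query)`.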
## Project Structure

```
LumaAI/
├── luma_web/               # Django application (main web app)
│   ├── luma_web/           # Project settings, urls, wsgi
│   ├── luma_app/           # App: models, views, services, templates
│   ├── requirements.txt
│   ├── env-example         # Example environment variables
│   └── Dockerfile          # For App Platform / GHCR
│
├── functions/              # DigitalOcean Serverless Functions
│   ├── memory_store/       # Store memory (text/image → Gemini → OpenSearch)
│   ├── memory_retrieve/    # Semantic + keyword search, usual spots, logging
│   ├── mri_analysis/       # Fetch image from Spaces → inference → Postgres
│   ├── medication_worker/  # Cron: due meds → Twilio SMS, update last_sent_at
│   └── risk_analyser/      # Aggregates logs for caregiver agent (wander, memory, MRI)
│
├── mri_detection/          # Standalone MRI inference API (FastAPI + ONNX)
│   ├── app.py              # /predict endpoint (ViT model)
│   ├── alzheimer_onnx/     # ONNX model (or restore from alzheimer_onnx_backup.tar.gz)
│   └── commands.txt        # Setup and curl examples
│
├── wandering_notice/       # Wandering alert pipeline
│   ├── simulator.py        # Pushes GPS events to Valkey stream (patient:2:gps_stream)
│   └── detector.py         # Consumes stream, geodesic distance, Twilio SMS + Postgres
│
├── agents/                 # Agent prompts / instructions (for Gradient/DO Agents)
│   ├── Luma_AI_Caregiver_Agent.md
│   ├── Luma_AI_Patient_Intelligence_Agent.md
│   └── Luma_AI_Supervisor_Agent.md
│
├── frontend/               # Static HTML/CSS reference (dashboard, chat, memories, MRI, etc.)
├── KnowledgeBases/         # PDFs for agent context (e.g. NIA caregiver guides)
└── .github/workflows/      # CI (e.g. build and push luma_web to GHCR)
```

## Prerequisites
- Python 3.10+ (3.11 for luma_web Dockerfile)
- PostgreSQL (e.g. DigitalOcean Managed Database)
- Redis/Valkey (e.g. DO Managed Valkey) for wandering streams
- OpenSearch (for the memory embeddings index `patient-memories`)
- S3-compatible storage (e.g. DO Spaces) for images and MRI studies
- Twilio account (SMS for medication reminders and wandering alerts)
- Google Gemini API key (embeddings and vision in memory store/retrieve)
- DigitalOcean Agents (or compatible OpenAI-style API) for Caregiver, Memory, and Chat
- doctl and DigitalOcean Serverless plugin for deploying functions
## Getting Started

**Run luma_web (Django) locally**

```shell
git clone <repo-url>
cd LumaAI
cd luma_web
python3 -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements.txt
cp env-example .env
# Edit .env with your DATABASE_*, SPACES_*, and DO_*_AGENT_* variables
python manage.py migrate
python manage.py runserver
```

- App: http://127.0.0.1:8000
- Log in or sign up, then use the dashboard, patients, memories, MRI upload, and chat.
**MRI inference service (mri_detection)**

Used by the `mri_analysis` function. Run on a machine with a GPU for best performance:

```shell
cd mri_detection
python3 -m venv alz_env && source alz_env/bin/activate
pip install "numpy==1.26.4" optimum[onnxruntime] transformers fastapi uvicorn python-multipart Pillow torch torchvision accelerate
# If you have alzheimer_onnx_backup.tar.gz: tar -xzf alzheimer_onnx_backup.tar.gz
python app.py
```

- API: http://0.0.0.0:8000 (health check at `/`; predict via `POST /predict` with a multipart image file; see `commands.txt` for curl examples).
- Set `INFERENCE_URL` in the `mri_analysis` function environment to this host.
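When wiring `INFERENCE_URL` into `mri_analysis`, it helps to validate the inference response before persisting it. The `prediction` and `confidence` fields are the ones this README says `/predict` returns; the `[0, 1]` confidence range is an assumption:

```python
def parse_prediction(payload: dict) -> tuple[str, float]:
    """Validate the JSON returned by the mri_detection /predict endpoint.

    Expects the prediction/confidence fields described in this README;
    raises ValueError on malformed payloads so the caller can fail
    loudly instead of writing garbage to mri_studies.
    """
    try:
        label = str(payload["prediction"])
        confidence = float(payload["confidence"])
    except (KeyError, TypeError, ValueError) as exc:
        raise ValueError(f"unexpected /predict payload: {payload!r}") from exc
    # Assumed range; adjust if the model reports percentages instead.
    if not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence}")
    return label, confidence
```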
**Wandering pipeline (wandering_notice)**

- Simulator (sends fake GPS to Valkey):

  ```shell
  cd wandering_notice
  pip install redis python-dotenv
  # Set VALKEY_URL in .env
  python simulator.py
  ```

- Detector (consumes the stream, sends SMS, and writes to Postgres):

  ```shell
  pip install redis psycopg geopy python-dotenv twilio
  # Set VALKEY_URL, DATABASE_URL, TWILIO_*, CAREGIVER_PHONE in .env
  python detector.py
  ```
**Deploy serverless functions**

From the root of each function (e.g. `functions/memory_retrieve`):

```shell
doctl serverless connect
doctl serverless deploy . --env .env --remote-build
```

Deploy each of: `memory_store`, `memory_retrieve`, `mri_analysis`, `medication_worker`, `risk_analyser`. Ensure each `.env` (or the DO namespace env) has the required keys (e.g. `DATABASE_URL`, `OS_*`, `SPACES_*`, `GEMINI_API_KEY`, `TWILIO_*`, and `INFERENCE_URL` for `mri_analysis`).
## Configuration & Environment

**luma_web**

Copy `luma_web/env-example` to `luma_web/.env` and set:

| Variable | Purpose |
|---|---|
| `DJANGO_SECRET_KEY` | Django secret key |
| `DATABASE_PASSWORD`, `DATABASE_HOST` | Postgres (DB name/user/port in `settings.py`) |
| `SPACES_BUCKET`, `SPACES_REGION`, `SPACES_ACCESS_KEY`, `SPACES_SECRET_KEY`, `SPACES_BASE_URL`, `SPACES_ENDPOINT` | DO Spaces for uploads |
| `DO_CAREGIVER_AGENT_BASE_URL`, `DO_CAREGIVER_AGENT_ACCESS_TOKEN` | Caregiver agent (MRI + risk summary) |
| `DO_MEMORY_AGENT_BASE_URL`, `DO_MEMORY_AGENT_ACCESS_TOKEN` | Memory store/search agent |
| `DO_CHAT_AGENT_BASE_URL`, `DO_CHAT_AGENT_ACCESS_TOKEN` | Chat agent |
**Serverless functions**

Each function's environment typically needs:

- `memory_store` / `memory_retrieve`: `OS_HOST`, `OS_USER`, `OS_PASS`, `GEMINI_API_KEY`, `SPACES_*`, `DATABASE_URL` (retrieve also logs to `memory_retrieval_logs` and uses `usual_spots`).
- `mri_analysis`: `SPACES_*`, `DATABASE_URL`, `INFERENCE_URL` (e.g. `http://<mri-detection-host>:8000/predict`).
- `medication_worker`: `DATABASE_URL`, `TWILIO_SID`, `TWILIO_AUTH_TOKEN` (and the Twilio "from" number in code if different).
- `risk_analyser`: `DATABASE_URL`.
**wandering_notice**

- simulator: `VALKEY_URL`
- detector: `VALKEY_URL`, `DATABASE_URL`, `TWILIO_SID`, `TWILIO_AUTH_TOKEN`, `CAREGIVER_PHONE`
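For illustration, a `wandering_notice/.env` might look like the following; every value here is a placeholder, and the hostnames/ports depend on your own managed databases:

```
VALKEY_URL=rediss://default:<password>@<valkey-host>:<port>
DATABASE_URL=postgresql://<user>:<password>@<pg-host>:<port>/<db>?sslmode=require
TWILIO_SID=<your-twilio-account-sid>
TWILIO_AUTH_TOKEN=<your-twilio-auth-token>
CAREGIVER_PHONE=+15551234567
```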
## Components in Detail

**luma_web (Django)**

- Models: `Patient`, `MedicationSchedules`, `MemoryRetrievalLogs`, `MriStudies`, `UsualSpots`, `WanderAlerts` (most unmanaged, mapping to existing tables).
- Views: login/signup, dashboard, patient CRUD, memories page, MRI upload, chat. Agent proxies build prompts (with patient_id, image URLs, etc.) and call `invoke_agent(prefix, payload)` from `services.py`.
- Agent integration: `services.invoke_agent(prefix, payload)` uses `{prefix}_BASE_URL` and `{prefix}_ACCESS_TOKEN`, POSTs to an OpenAI-compatible `/api/v1/chat/completions` endpoint, and returns markdown and raw JSON.
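The agent call can be sketched as a request builder. The `/api/v1/chat/completions` path and the `{prefix}_BASE_URL`/`{prefix}_ACCESS_TOKEN` convention come from this README; the payload fields are assumptions based on the OpenAI chat-completions format, and the real `services.invoke_agent` may differ:

```python
import json
import os


def build_agent_request(prefix: str, prompt: str) -> tuple[str, dict, bytes]:
    """Return (url, headers, body) for an OpenAI-compatible agent call.

    Reads {prefix}_BASE_URL and {prefix}_ACCESS_TOKEN from the
    environment, mirroring the convention used by services.invoke_agent.
    """
    base_url = os.environ[f"{prefix}_BASE_URL"].rstrip("/")
    token = os.environ[f"{prefix}_ACCESS_TOKEN"]
    url = f"{base_url}/api/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode("utf-8")
    return url, headers, body
```

The tuple can then be fed to `urllib.request.Request(url, data=body, headers=headers)` or an equivalent HTTP client.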
**Serverless functions**

- `memory_store`: Input: `patient_id`, `text_note`, `image_url`. Fetches the image from Spaces if present, describes it with Gemini vision, builds a unified text, embeds it with Gemini, and indexes it in the OpenSearch `patient-memories` index.
- `memory_retrieve`: Input: `patient_id`, `query`. Embeds the query, runs OpenSearch k-NN + keyword search, merges caretaker "usual spots" from Postgres, logs to `memory_retrieval_logs`, and returns combined results with optional presigned image URLs.
- `mri_analysis`: Input: `patient_id`, `image_url`. Fetches the image from Spaces, normalizes it, calls `INFERENCE_URL` for a prediction, and writes a row to `mri_studies`.
- `medication_worker`: Runs on a schedule; selects rows from `medication_schedules` where the reminder time (in the patient's timezone) is due and not yet sent today, sends a Twilio SMS, and updates `last_sent_at`.
- `risk_analyser`: Input: `patient_id`, `days`. Returns recent memory retrievals, wander alerts, and the latest MRI for the caregiver agent.
**mri_detection**

- FastAPI app that loads an ONNX Vision Transformer from `./alzheimer_onnx`; exposes `POST /predict` (image file) and returns `prediction`, `confidence`, and disclaimer text.
**wandering_notice**

- simulator: appends GPS-like entries to the Valkey stream `patient:2:gps_stream`.
- detector: reads the stream and computes the geodesic distance from `SAFE_ZONE_CENTER`; if it exceeds `SAFE_RADIUS_METERS`, it sends a Twilio SMS and inserts a row into `wander_alerts`.
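`detector.py` uses geopy's geodesic distance; an equivalent pure-Python haversine check (a close approximation at city scale) can be sketched as follows. The safe-zone values in the usage note are placeholders, not the project's real configuration:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two (lat, lon) points."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def outside_safe_zone(lat: float, lon: float,
                      center: tuple[float, float],
                      radius_m: float) -> bool:
    """Mirror of the detector's check: alert when the GPS fix is
    farther than radius_m from the safe-zone center."""
    return haversine_m(lat, lon, center[0], center[1]) > radius_m
```

For example, with a placeholder center of `(40.7128, -74.0060)` and a 200 m radius, a fix one block away stays inside while a fix across town triggers an alert.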
## Deployment

- Django: the repo includes a `Dockerfile` under `luma_web/`. GitHub Actions (e.g. `.github/workflows/publish-ghcr.yml`) can build and push the image to GHCR; point DigitalOcean App Platform at that image and set the env vars.
- Functions: deploy with `doctl serverless deploy . --env .env --remote-build` from each function directory; configure triggers (e.g. cron for `medication_worker`) and wire agent tools to the function endpoints.
- MRI inference: deploy `mri_detection` on a VM or GPU instance; set `INFERENCE_URL` in the `mri_analysis` function.
- Wandering: run `detector.py` as a long-lived process (e.g. under systemd or in a container) with access to Valkey and Postgres.

## Healthcare Disclaimer
All interfaces and outputs in this project must clearly state:
"This is a research prototype, not a diagnostic medical tool."
Do not use this system for real medical decision-making. Ensure compliance with local regulations and clinical validation before any clinical or diagnostic use.

## License
See repository license file.