Inspiration
Your definition of pain changes over time, especially if you have a chronic or pre-existing condition. Multiple studies find that people who experience consistent pain tend to "get used to it" over time and stop seeing a doctor, even when the situation is objectively harmful. On top of this, doctors and nurses don't have the bandwidth to follow up on every patient, and some studies have found that up to 40-80% of the information they share is immediately forgotten. Our goal was to create an app that proactively checks in on users, builds a long-term profile of their health conditions, and acts as a supplement to their interactions with doctors and nurses.
What it does
Since we're aiming to engage the elderly, the check-ins operate over the phone. The main UI has a call button where the user can input their number and start a phone call. After the call ends, the agent creates a personalized check-up schedule based on the conversation and determines the best frequency at which to check in (e.g. daily or weekly). The agent remembers context from past calls and uses it to inform the next conversation, allowing it to build up a profile of the user.
Over time, the agent observes long-term trends in the patient's health and pain. Our algorithm tracks the patient's own pain ratings on a scale of 1 to 7, but we also track and display an internal estimate of "actual pain," which factors in external influences such as mood, physical condition, and habituation. For each call log, we record and display biomarkers so that users can review the conditions of each call.
How we built it
We focused on two main components: proactive check-ups and long-term analysis. For independent check-ups, we started with a basic conversational pipeline, using Twilio to initiate a phone call and iteratively calling the OpenAI API for follow-up questions. After each call ends, a scheduler reads the transcript and independently creates and executes a follow-up schedule. Execution happens through node-cron, which we use to trigger Twilio calls on an internally held schedule. To make it easy for users to input their phone number and view the long-term data, we built a web app using Node.js and Express. We also wrote custom backend pipelines for scheduling, symptom tracking, and long-term pain analysis.
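The scheduling step above can be sketched as mapping the frequency the scheduler picks from the transcript onto a node-cron expression. This is a minimal illustration, not our exact implementation; the frequency labels and default hour are assumptions:

```javascript
// Map a check-in frequency chosen by the scheduler onto a cron
// expression that node-cron can register (labels are illustrative).
function toCronExpression(frequency, hour = 10) {
  switch (frequency) {
    case "daily":
      return `0 ${hour} * * *`;   // every day at `hour`:00
    case "weekly":
      return `0 ${hour} * * 1`;   // every Monday at `hour`:00
    case "biweekly":
      return `0 ${hour} * * 1,4`; // Mondays and Thursdays
    default:
      throw new Error(`unknown frequency: ${frequency}`);
  }
}
```

The returned string would then be passed to `cron.schedule(...)` with a callback that asks Twilio to place the outbound call.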
After building the basic infrastructure, we implemented more intricate features. Highlights include real-time biomarker tracking (tone, breathing patterns, etc.) via Twilio Media Streams, an exponential model for scheduling, and a custom pain-correction algorithm that accounts for habituation ("getting used to the pain") to gauge realistic pain levels for users with chronic conditions.
Fundamentally, the problem we're trying to fix is that people normalize their own health over time, and doctor's appointments only provide local snapshots; our app provides the global picture. What was shocking pain a few months ago might now just be another Tuesday, and a patient at the doctor's office may be unable to contextualize how they feel at that one moment versus in general. We also track different ailments, separating out pain in different parts of the body to more clearly distinguish symptoms over time. Our novel algorithm generates expected pain values from a combination of signals: existing biomarkers, semantic analysis, the user's own pain ratings over time, pain in different body parts that may interfere with each other, and context clues from the user's speech.
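To make the habituation correction concrete, here is a toy sketch of the idea: the reported 1-7 rating is scaled back up toward an "actual pain" estimate based on how long the symptom has persisted. The constants and functional form here are invented for illustration, not our production model:

```javascript
// Estimate "actual pain" from a reported 1-7 rating, compensating for
// habituation: the longer a symptom persists, the more the patient
// under-reports it. Constants below are illustrative assumptions.
function actualPain(reported, daysWithSymptom, moodFactor = 1.0) {
  // Habituation saturates: after roughly a month, reports are assumed
  // to be deflated by up to ~30%.
  const habituation = 1 + 0.3 * (1 - Math.exp(-daysWithSymptom / 30));
  const adjusted = reported * habituation * moodFactor;
  return Math.min(7, adjusted); // clamp to the 1-7 scale
}
```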
Daily calls can be tiring, so we implemented a scheduler that exponentially backs off check-ins for fading symptoms (as measured by their estimated actual pain). It uses a symptom's severity, frequency, and time since its last check to decide whether it should be revisited sooner or later, so that patients aren't flooded with questions and don't have to keep reiterating their symptoms or being reminded of previous ailments. Throughout, we focused on making the app as easy to use as possible.
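The exponential back-off can be sketched like this: severe symptoms are checked the next day, while mild ones see their interval grow exponentially up to a cap. The rate constant and two-week cap are assumptions for the sketch, not the tuned values:

```javascript
// Decide how many days until a symptom's next check-in. High actual
// pain (near 7) means check tomorrow; low pain backs off exponentially
// toward a two-week cap. Constants are illustrative assumptions.
function nextCheckInDays(actualPain) {
  const maxIntervalDays = 14;
  const interval = Math.exp((7 - actualPain) / 2);
  return Math.min(maxIntervalDays, Math.max(1, Math.round(interval)));
}
```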
We also take a novel approach by treating voice as a compound health biomarker, extracting acoustic and temporal features such as breathing cadence, vocal stability, pauses, energy distribution, and spectral characteristics during natural conversation. Research shows vocal signals can reflect respiratory strain, neurological changes, stress, fatigue, and medication side effects, because speech production integrates the respiratory, cognitive, and motor systems. Our system continuously analyzes these signals in real time without storing raw audio, building a personalized baseline and detecting deviations over time. By combining conversational context with passive vocal biomarkers, MedSec transforms everyday speech into a longitudinal health signal that helps users recognize emerging patterns earlier.
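Two of the simplest features in this family can be computed directly on a window of mono PCM samples: RMS energy and the fraction of near-silent samples (a crude proxy for pausing and breathing). This is a toy sketch with an assumed silence threshold, not the full media-stream pipeline:

```javascript
// Compute two basic acoustic features over a window of mono PCM
// samples normalized to [-1, 1]. The silence threshold is an
// illustrative assumption.
function voiceFeatures(samples, silenceThreshold = 0.02) {
  let energy = 0;
  let silent = 0;
  for (const s of samples) {
    energy += s * s;
    if (Math.abs(s) < silenceThreshold) silent++;
  }
  return {
    rmsEnergy: Math.sqrt(energy / samples.length), // loudness proxy
    pauseRatio: silent / samples.length,           // pause/breathing proxy
  };
}
```

In the real system, features like these would be computed per audio chunk from the Twilio Media Streams websocket and only the aggregated numbers retained, so no raw audio is stored.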
Challenges we ran into
One of our main challenges was collecting enough quantifiable data to build a profile without sacrificing the user's experience (especially since they're already speaking to an AI). We found it difficult to standardize symptom tracking across multiple sessions while keeping the interaction conversational, because the LLM is biased toward semantics. We handled this by internally maintaining a patient health profile throughout the conversation, which informs the agent's follow-up questions and lets it gently prod for more data. Our goal was to ease users into providing more information without interrogating them with a checklist of symptoms.
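The in-conversation profile can be sketched as a simple accumulator: each turn merges newly extracted symptom ratings, and the next follow-up targets the tracked symptom with the least data so far. Field names and the selection rule here are hypothetical, chosen to illustrate the idea:

```javascript
// Merge symptom ratings extracted from one conversational turn into
// the running patient profile (field names are assumptions).
function updateProfile(profile, extracted) {
  for (const [symptom, rating] of Object.entries(extracted)) {
    const entry = profile[symptom] || { ratings: [] };
    entry.ratings.push(rating);
    profile[symptom] = entry;
  }
  return profile;
}

// Pick the tracked symptom with the fewest recorded ratings, so the
// agent's next question fills the biggest gap in the profile.
function nextFollowUp(profile, trackedSymptoms) {
  return trackedSymptoms.reduce((a, b) =>
    (profile[a]?.ratings.length || 0) <= (profile[b]?.ratings.length || 0)
      ? a
      : b);
}
```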
Accomplishments that we're proud of
- MedSec agentically provides home care and checks back in, offering more support when appointments with healthcare professionals are hard to get.
- Our UI landing page is intuitive and has all the information you need, and we're proud of its accessibility and seamlessness (especially for our target audience).
- The agent combines exponential decay, mood, and health context to estimate actual pain from reported pain.
- MedSec measures vocal biomarkers that capture other factors influencing reported pain and health, monitoring for broader decline.
- Visualizations of long-term trends for doctors and other stakeholders.
What we learned
Given that our project has a lot of moving parts, and a few of the pipelines are entirely internal, we learned a ton about system design: namely, designing intuitive pipelines that we could easily come back to and optimize. We also learned a lot about phone-call integration, which none of us had worked with before, and found it genuinely interesting.
Our team formed the day TreeHacks started, and we learned a lot about teamwork, collaboration, and making full use of different strengths and working styles to put together a product.
What's next for MedSec
- More guardrails
- Privacy protections and solid encryption
- A separate interface for doctors vs. users