PWSense

Inspiration

Prader-Willi Syndrome (PWS) affects roughly 1 in 15,000 individuals worldwide, and both patients and caregivers face significant daily challenges. The most notable of these is hyperphagia: an unrelenting, physiologically driven sensation of hunger.

One of our teammates, EmmiLee, has firsthand experience growing up supporting a loved one with Angelman Syndrome, a rare neurodevelopmental disorder with significant symptomatic and genetic overlap with PWS. Her experience illuminated an urgent, unmet need for tools that empower families not only to respond to crises but also to anticipate and manage them with confidence. Guided by her perspective as well as those of the IPWSO's speakers, we set out to build a tool that helps manage and mitigate the symptoms of PWS.

What It Does

PWSense pairs a vagus nerve stimulation (VNS) implant with a web application to continuously monitor heart rate and detect speech patterns in real time. This multimodal data is organized into five core tracking categories:

  • Heart Rate Tracking
  • Hunger Tracking
  • Emotional State Tracking
  • Outburst Entries Over Time
  • Additional Manually Added Symptom Info (e.g., severity, distress signals, and possible triggers)
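To make the five categories concrete, here is a minimal sketch of how one monitoring snapshot might be represented. The class and field names are our illustration, not PWSense's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SymptomEntry:
    """One snapshot across the five tracking categories (names are illustrative)."""
    heart_rate_bpm: int          # from the VNS implant's heart rate sensing
    hunger_score: float          # model-derived score from speech transcripts
    emotional_state: str         # label from the audio emotion classifier
    outburst: bool               # whether an outburst was detected or logged
    notes: dict = field(default_factory=dict)  # manual info: severity, triggers, etc.
```

The first four fields are populated passively by the pipeline; only `notes` corresponds to the optional manually added symptom info.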

By passively collecting and analyzing this data, PWSense gives caregivers and clinicians a clearer picture of flare-up patterns without burdening primary caregivers with manual data entry.

How We Built It

We began with a comprehensive review of PWS symptoms and current research on symptom management, during which we identified vagus nerve stimulation as a clinically supported foundation. This led us to introduce heart rate tracking and speech detection as core sensing modalities, from which we designed insights capturing hunger levels, emotional state, and outburst frequency.

We conducted extensive research to validate every claim, from how vagus nerve stimulation works to how it may lead to long-term symptom relief.

Once clinical validity was established, we moved on to implementation. The work was split into two parts: building a Flask backend that integrated the core features, and designing a Figma mockup of our core interactions and analytics flows for the frontend, which we later translated into React. We leveraged the Claude API to compute hunger scores from speech transcripts, Wispr Flow to transcribe audio to text, and a CNN model for emotion classification from audio (forked from https://github.com/yfliao/Emotion-Classification-Ravdess). These AI models formed the backbone of our hunger and emotional state tracking pipeline.
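The hunger-scoring step can be sketched as a prompt-and-parse pair. This is our hedged illustration, not the team's exact code: the prompt wording, 0-10 scale, and helper names are assumptions, and in the real pipeline the prompt would be sent to the Claude API with a transcript produced by Wispr Flow:

```python
import re

def build_hunger_prompt(transcript: str) -> str:
    """Prompt asking the model to rate expressed hunger on a 0-10 scale."""
    return ("Rate the level of hunger expressed in this transcript on a "
            "0-10 scale. Reply with a number only.\n\nTranscript:\n" + transcript)

def parse_hunger_score(reply: str) -> float:
    """Extract the first number from the model's reply and clamp it to 0-10."""
    match = re.search(r"\d+(?:\.\d+)?", reply)
    score = float(match.group()) if match else 0.0
    return max(0.0, min(10.0, score))
```

Clamping and defaulting on the parse side keeps the Flask endpoint robust even when the model replies with extra text or no number at all.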

Challenges We Ran Into

  • Deciding What Data To Use and Collect: Determining which symptoms to track and how to quantify them was difficult. PWS manifests differently across individuals. Furthermore, the scarcity of PWS-specific biosignal datasets made it difficult to determine what our device needed to measure.

  • Training the Sentiment Analysis Model: Getting the emotion-classification CNN to run reliably in our pipeline was a real technical challenge. The model would often completely misclassify the tone of uploaded audio and required significant fine-tuning and debugging.

  • Designing the Device for Highly Sensitive Individuals: PWS patients frequently experience heightened skin irritability, making noninvasive hardware design a clinical constraint. We researched implant form factors that were realistic, minimally invasive, and tolerable for this specific population.

What We're Proud Of

  • Implementing a Full AI Pipeline: Successfully integrating a CNN model, the Claude API, and Wispr Flow into the Flask endpoints to create a pipeline that interprets both raw biosignals and contextual symptom data made our demo stand out.

  • Multimodal Data Architecture: We designed a system that passively captures and synthesizes five symptom categories, minimizing manual caregiver input while improving PWS symptom tracking.

  • Validation from Experts: IPWSO guest speakers with clinical expertise affirmed both the feasibility and impact of PWSense, which helped us confirm that it would help address a genuine gap in PWS care.

  • Backed by Clinical Research: Every product decision was grounded in peer-reviewed research, from VNS efficacy to sensor feasibility, keeping PWSense clinically grounded.

What We Learned

PWSense taught us how to bridge the gap between clinical research and predictive technology. Designing a pipeline that transforms passive biosignals into actionable caregiver insights deepened our understanding of full-stack development and multimodal AI systems. We also learned how critical human-centered design is when building for sensitive populations, where hardware comfort, interface clarity, and caregiver trust matter as much as accuracy. An accurate device is as good as a rock if patients aren't willing to use it.

What's Next for PWSense

  • Mobile and Multilingual Expansion: We'd like to build out a full mobile app with multilingual support to make continuous symptom monitoring accessible to underserved PWS communities globally.

  • Global Awareness and Early Diagnosis: We'd like to partner with PWS clinicians and research institutions to validate our biosignal models with real patient data and use our platform to advocate for earlier diagnosis of PWS worldwide.

  • Predictive Analytics: We'd like to improve our analytics pipeline so that it shifts from reactively tracking flare-ups to proactively forecasting them.
