Inspiration

At five years old, Anish was diagnosed with astrocytoma, a type of brain cancer. Every day, oncologists would walk in, explain scans, treatment plans, and side effects in typical healthcare jargon, and then leave before Anish understood anything. Sure, what they said was accurate, but Anish left the doctor's office confused and feeling worse about his condition than before. And he had a right to feel that way: he did not even understand why his own body was doing this to him, so it is no surprise he felt scared and isolated. We built Medflix to solve this problem by making medical understanding accessible at a child's level.

What it does

Medflix takes in real doctor-patient notes and enriches them with data grounded in verified medical sources. We then transform this complex information into short, animated episodes personalized to each child. We generate these episodes using characters and visual styles inspired by the shows and media each child already loves, so the experience feels familiar and comforting rather than clinical. On top of that, children can speak with a live AI avatar trained on their medical context, allowing them to ask questions and receive simple, personalized explanations in real time.

How we built it

We built a React frontend with Tailwind to make the interface feel kid-friendly and welcoming to a young child, and a Node/Express backend responsible for communicating with all of our different components. To start, we pull real medication and clinical label data from OpenFDA and DailyMed, then feed everything into Perplexity's Sonar API, which fills in what our data misses with deep medical search. We also built a strict video pipeline that enforces frame-by-frame consistency during generation, which gave us dramatically more cohesive videos than traditional AI video pipelines typically produce. All of the healthcare data we fetch goes into HeyGen's Video Generation API to create personalized videos for the patients, and we integrated HeyGen's LiveAvatar API to power an interactive, voice-based AI avatar that knows each child's name, diagnosis, and medications, so the child can feel comfortable talking about their diagnosis. Finally, we added gamification (battle cards, quizzes, progression) and curated the UI specifically for children so the application feels friendly and welcoming.
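As a rough sketch of the data-grounding step: the OpenFDA endpoint and label field names below are the real public API, but the helper names (`buildOpenFdaUrl`, `fetchDrugLabel`) and the choice of which sections to keep are our own illustration, not the exact production code.

```javascript
// Sketch of pulling grounded medication data from OpenFDA.
// Endpoint and field names are OpenFDA's; helper names are illustrative.

// Build the OpenFDA drug-label query URL for a medication name.
function buildOpenFdaUrl(drugName) {
  const search = `openfda.brand_name:"${drugName}"`;
  return `https://api.fda.gov/drug/label.json?search=${encodeURIComponent(search)}&limit=1`;
}

// Fetch the label and keep only the sections an episode script needs.
// (Requires Node 18+ for the global fetch.)
async function fetchDrugLabel(drugName) {
  const res = await fetch(buildOpenFdaUrl(drugName));
  if (!res.ok) throw new Error(`OpenFDA request failed: ${res.status}`);
  const { results } = await res.json();
  const label = (results && results[0]) || {};
  return {
    indications: label.indications_and_usage?.[0] ?? null,
    warnings: label.warnings?.[0] ?? null,
  };
}
```

The trimmed label object is what then gets handed to the Sonar refinement step, keeping the downstream prompts anchored to real clinical text rather than free-form AI output.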

Challenges we ran into

Early on, we ran into a lot of AI hallucination. Our AI scripts mixed diagnoses, medications, and lifestyle advice into single episodes and confused even basic information; we solved this with strict guardrail prompting and topic-bounded scene generators. Another huge issue was video-generation latency: each episode took a painful one to five minutes to render. To work around this, we redesigned the UX for asynchronous background rendering and added status tracking for each individual episode. Real-time avatar interaction through HeyGen's LiveAvatar API brought its own problems, including WebRTC autoplay issues, audio track timing, session concurrency limits, and infinite echo loops. In addition, when adding Poke as our messaging assistant and reminder manager for parents, it sometimes took time to convince the stubborn guy to collaborate, though once we got through that barrier, Poke proved amazing. Finally, one of the hardest problems was actually non-technical: translating chemotherapy, antibiotics, and inhalers into language a young child could easily understand. We got there through careful prompting in our AI video generation, combined with heavy grounding in real healthcare data.

Accomplishments that we're proud of

A newly diagnosed child can now watch seven personalized episodes explaining their condition and medications in language they actually understand. The AI Health Buddy works end-to-end, so kids can ask questions in their own voice and receive contextual, grounded responses. We grounded every single episode in ground-truth medical data and never relied on raw AI output. We built full features for both the doctor and patient sides of our platform, not just a patient-side demo. And we made a professional-looking UI during a hackathon.

What we learned

We learned how to chain AI systems together to produce something genuinely novel for patients: video generation alone is generic and feels fake, but when designed and combined properly, AI systems complement each other instead of working against one another. We also learned a fair bit about LLM prompting; building a robust product forces you to get pretty creative. If you can figure out how to combine tools that at first glance seem impossible to combine, the result will surprise you in the best way possible.

Above all, you need to focus on your customer. Children require clarity, empathy, and emotional safety. As such, we had to prompt our models and shape our data intelligently to cater to their specific needs.

Most importantly, though, we were reminded that the best work is done in collaboration. By seeking feedback from each other, listening attentively, and actively trying to help one another, we built a product none of us could have built alone in such a short period of time. In fact, during brainstorming everybody participated so proactively that in the end we realized none of us would even have come up with this idea by ourselves. When everybody is engaged and excited, everything is within reach.

What's next for Medflix

In the future, we will continue hyper-personalizing care for every unique patient, ensuring every child can clearly understand their diagnosis, treatment, and recovery. But we don't want to help only children; we want to help everyone. As such, we are going to expand this model to adults as well, delivering personalized explainer videos that help patients better understand their care and recover more effectively. Our goal is simple: make medical understanding the default, not the exception.

Built With

  • dailymed-api
  • express.js
  • heygen-api (video generation)
  • heygen-liveavatar-api (real-time AI avatar)
  • html5
  • javascript (ES modules)
  • livekit (WebRTC)
  • localstorage
  • mediadevices
  • model-context-protocol (MCP)
  • node.js
  • openfda-api
  • perplexity-ai / sonar-api (context layer)
  • poke-sdk (MCP medication reminders)
  • react
  • tailwind-css
  • vite
  • webrtc