Inspiration

According to the US Census Bureau, 14 million elderly individuals live alone – sometimes because they have nowhere else to go, sometimes because they do not want to burden their families. At this stage in life, it is common to encounter various health-related issues. Many struggle to manage their health, including difficulties accessing timely, personalized medical advice. The digital divide exacerbates these problems, as many elderly people are not comfortable with existing digital health solutions. Managing day-to-day health is one concern; ensuring emergency assistance is another. 36 million falls are recorded among the elderly every year, and paired with the independent lifestyle that is common at this age, falls can be dangerous. Traditionally, the responsibility to initiate emergency communication falls on the individual; however, this is unreliable, because they may be injured or unable to relay full information.

If only there were a 24/7 companion that could offer not just comfort but real-time, actionable health guidance and emergency support. Imagine a solution that is digitally intuitive, seamlessly integrating into daily life without overwhelming the user, bridging the gap between technology and practical usability.

What it does

A hardware AI companion designed specifically for the elderly living alone, addressing their unique health management and emergency assistance needs.

  1. Emergency support: Our AI companion not only offers advanced fall detection but also actively listens for verbal cues indicating discomfort or severe health issues. We assess the severity of the situation based on the user's verbal expressions. For significant health concerns, the system either calls the designated caregivers or sends them a detailed text message, depending on the assessed urgency. Additionally, when a fall is detected, the system provides essential information about the user's appearance and exact location, so caregivers can identify and reach the user more quickly on arrival.

  2. Real-time, actionable health guidance: By integrating with electronic health records, it contextualizes its guidance with the user's medical history, ensuring that all suggestions are relevant, safe, and tailored to each individual's needs. This enables the elderly to make informed decisions about their daily health management. The companion also prompts the user with reminders, such as when to take medication, and learns their behavior over time.

  3. Digitally intuitive hardware companion: Understanding that ease of use is crucial for technology adoption among the elderly, our hardware companion is designed to be as user-friendly as an "Alexa for old people." It features simple voice commands, tactile buttons for those less comfortable with voice interaction, and a large, easy-to-read display for visual prompts and reminders. Its design is intuitive, eliminating the barriers to digital technology use and making it accessible to everyone, regardless of their technological proficiency.
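The severity-based escalation described in item 1 can be sketched roughly as follows. The thresholds, function names, and caregiver interface here are illustrative assumptions, not the project's actual implementation; in the real system, the call or text would go out through the Twilio API.

```python
# Hypothetical sketch of severity-based caregiver escalation.
# Thresholds and interfaces are assumptions for illustration only.

CALL_THRESHOLD = 0.8  # assumed: above this, place a voice call
SMS_THRESHOLD = 0.4   # assumed: above this, send a text message


def choose_escalation(severity: float) -> str:
    """Map an assessed severity score in [0, 1] to an escalation channel."""
    if severity >= CALL_THRESHOLD:
        return "call"  # urgent: ring the designated caregiver
    if severity >= SMS_THRESHOLD:
        return "sms"   # concerning: send a detailed text message
    return "none"      # log only; keep monitoring


def notify_caregiver(severity: float, caregiver_number: str, summary: str) -> str:
    """Route the alert to the caregiver based on assessed urgency."""
    action = choose_escalation(severity)
    if action == "call":
        # Real system: twilio_client.calls.create(to=caregiver_number, ...)
        pass
    elif action == "sms":
        # Real system: twilio_client.messages.create(to=caregiver_number,
        #                                            body=summary, ...)
        pass
    return action
```

The score itself would come from the language model's assessment of the user's verbal expressions, as described above.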

How we built it

Our project harnesses a multi-faceted approach to enhance the safety and well-being of elderly individuals, integrating advanced technologies and methodologies:

  - OpenAI's AI agents: facilitate dynamic conversational interactions, enabling tasks such as emergency communication and real-time news updates.
  - Twilio API: initiates emergency calls to loved ones, ensuring rapid communication during critical situations.
  - Google Maps API: pinpoints the exact location of elderly individuals in emergencies, enabling swift assistance.
  - GNews API: provides up-to-date news on various topics, promoting engagement and connectivity.
  - Whisper AI: transcribes the user's speech to text, enabling fluid voice interaction.
  - Fusion 360: used to craft immersive 3D models for the hardware design.
  - RAG and LangChain: retrieval-augmented generation over electronic health records gives the assistant efficient access to vital medical details.
  - OpenAI's multi-modal capabilities for fall detection: by integrating multi-modal data sources such as audio, video, and sensor data, the system can analyze and identify potential falls in real time. This enables robust fall detection based not only on visual cues but also on accompanying audio signals and sensor data.

Through the synergistic integration of these technologies, our solution provides a comprehensive AI virtual assistant that addresses the safety, well-being, and informational needs of elderly individuals, ensuring a holistic approach to their care and support.
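To illustrate the health-record retrieval step, here is a minimal, dependency-free sketch of RAG-style retrieval. The real pipeline uses LangChain with proper embeddings; the word-overlap scoring and the sample records below are invented stand-ins purely for clarity.

```python
# Minimal sketch of the retrieval step in a RAG pipeline over health records.
# Word overlap approximates embedding similarity for illustration only.
import re


def tokens(text: str) -> set[str]:
    """Lowercase alphanumeric tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def score(query: str, chunk: str) -> float:
    """Crude relevance score: fraction of query tokens present in the chunk."""
    q = tokens(query)
    return len(q & tokens(chunk)) / max(len(q), 1)


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant record chunks for the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]


records = [
    "Patient takes lisinopril 10mg daily for hypertension.",
    "Allergic to penicillin; reaction noted in 2019.",
    "History of hip replacement surgery, left side.",
]

context = retrieve("medication for hypertension", records, k=1)
# The retrieved context is then prepended to the LLM prompt so that
# guidance reflects the user's medical history.
```

In the actual system, the retrieved chunks become part of the prompt context, which is how suggestions stay grounded in the individual's records.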

Challenges we ran into

We were limited by the hardware available to us. However, we were determined to create a prototype as a proof of concept. We wanted to design an enclosure that would hold one of our phones, which would serve as the visual and audio input. Using Fusion 360 and the PRL, we 3D printed hardware that houses an iPhone 14 Pro Max, with cutouts for the camera and mics. We went through multiple iterations based on feedback and ended up with a modern design inspired by stars.

One of the most significant challenges we faced was ensuring the reliability of the fall detection and emergency communication features. Our initial approach for accurate results was to use LLaVA; however, given the time constraint, we used OpenAI's model instead, passing the video as individual frames. Audio streaming also took a significant amount of time, especially on iOS. We are very glad we got it working, as it greatly improves the user experience.
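The frames-based approach above can be sketched as follows: sample a few frames from the stream, encode them, and ask a multimodal chat model about a possible fall. The sampling rate, prompt wording, and model name are our assumptions, not the project's exact settings.

```python
# Hypothetical sketch of frame-based fall detection with a multimodal model.
# Sampling rate, prompt, and model name are illustrative assumptions.
import base64


def sample_frames(frames: list[bytes], every_n: int = 10) -> list[bytes]:
    """Keep every n-th frame so the request stays small."""
    return frames[::every_n]


def build_messages(jpeg_frames: list[bytes]) -> list[dict]:
    """Build an OpenAI-style multimodal message asking about a possible fall."""
    content = [{"type": "text",
                "text": "Has the person in these frames fallen? Answer yes or no."}]
    for frame in jpeg_frames:
        b64 = base64.b64encode(frame).decode()
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})
    return [{"role": "user", "content": content}]


# The actual call (requires an API key) would look roughly like:
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4o", messages=build_messages(sample_frames(frames)))
```

A "yes" from the model would then feed into the severity assessment and caregiver-notification flow described earlier.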

Accomplishments that we're proud of

One of our proudest accomplishments has been our direct engagement with elderly individuals, actively seeking their feedback throughout the development process. This ensured that our solution resonated with their unique needs and preferences, ultimately leading to a more tailored and effective product.

We also made strides in leveraging AI technology to handle diverse tasks, providing users with flexible and adaptive support. Our team experienced breakthrough moments, such as enabling AI-driven emergency calls and seamlessly integrating contextual medical records and location data of elderly individuals.

Additionally, overcoming obstacles in iOS development to successfully implement audio streaming was a notable achievement.

What we learned

We were not familiar with real-time audio and video streaming before this project. We also learned different RAG techniques.

What's next for Hoshi AI

We are really excited by our mission. We would love to create a self-hosted prototype that runs locally, ensuring full privacy and limiting calls to external APIs. After extensive testing, we would also be interested in building out Hoshi's mental health offering – for example, using previous discussions to prompt seniors to reminisce about past stories in a meaningful way.

We would also look to improve our range of detection to provide more comprehensive emergency services:

  • Seizures
  • Mobility reduction
  • Prolonged stillness
