About the Project

The idea for this project was born out of personal experience and empathy. I went through a shoulder injury myself, and other members of the team have faced similar challenges, so we wanted to build a tool that assists people with their physical rehabilitation exercises. Physiotherapists can't be with patients every day to ensure exercises are performed correctly. Our app aims to bridge this gap by helping patients perform exercises with proper form, improving their outcomes.

In addition to helping with exercise form, the game aspect of the project includes a feature that tests reaction speed, which helps improve hand-eye coordination.

What Inspired Me

After experiencing a physical injury, I realized how difficult it was to accurately perform rehabilitation exercises without guidance. This motivated me to build a solution that could track users' movements, give real-time feedback, and ensure exercises are done correctly, even without a therapist's presence.

What I Learned

This project gave us a deeper understanding of how AI can be applied in healthcare, especially physical rehabilitation. We learned how AI-driven tools can analyze body movements using pose estimation and give patients real-time feedback on whether they are performing exercises correctly. We also learned about the complexities of computer vision (CV) frameworks like Mediapipe for human pose estimation, and how combining them with OpenCV can create a powerful system for tracking and correcting posture.

Additionally, we gained insight into creating user-friendly interfaces using Streamlit and integrating APIs like Gemini for enhanced functionality. This project deepened our understanding of how AI and machine learning can make healthcare more accessible and effective for patients, especially in post-injury rehabilitation.

How We Built the Project

The project was built with Python, OpenCV, Streamlit, Mediapipe, and the Gemini API. OpenCV handles image modifications, and Mediapipe provides the human pose estimation that is central to the app's functionality. Streamlit powers the user interface, while the Gemini API supports additional features such as the game-based reaction-speed test.

Challenges Faced

One of the main challenges we faced was working out how to measure posture correctness from on-screen angles and distances, and how to turn those measurements into meaningful feedback on the user's form. Another challenge was keeping the app's interface intuitive and user-friendly, since the target users may not be very tech-savvy.
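The core of that angle check can be sketched in a few lines: compute the angle at a joint from three 2-D landmark coordinates, then compare it against a target range. The target and tolerance values below are illustrative placeholders, not the app's real thresholds.

```python
import math

def joint_angle(a, b, c):
    """Angle ABC in degrees, where b is the joint vertex (e.g. the elbow)."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0]))
    ang = abs(ang)
    # Fold reflex angles back into the 0-180 degree range.
    return 360 - ang if ang > 180 else ang

def form_feedback(angle, target=90.0, tolerance=15.0):
    """Map a measured joint angle to simple textual feedback (illustrative thresholds)."""
    if abs(angle - target) <= tolerance:
        return "good form"
    return "raise arm" if angle < target else "lower arm"

# Right angle: shoulder at (0, 1), elbow at (0, 0), wrist at (1, 0).
print(joint_angle((0, 1), (0, 0), (1, 0)))  # 90.0
print(form_feedback(90.0))                  # good form
```

Distances between landmarks can be checked the same way, measured against a reference pose rather than a fixed angle.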

From a technical perspective, working with OpenCV and understanding how it handles image modifications for pose estimation was difficult. Additionally, Mediapipe’s human pose estimation concepts took some time to master, especially when it came to interpreting the data it provided and transforming it into actionable feedback for the user.

Built With

Python, OpenCV, Mediapipe, Streamlit, Gemini API
