Inspiration

The inspiration for Air Mouse came from the desire to create a more intuitive and natural form of human-computer interaction. The team drew on the gesture-recognition technology already present in the Apple Vision Pro, Meta Quest, and other VR headsets.

What it does

Air Mouse is a human interface device that uses a computer’s camera to capture hand gestures and control the mouse cursor and other parts of the user interface. It uses a server-client architecture: the server is written in Python using TensorFlow and MediaPipe, and the client in React.js. The server captures the video feed from the camera and passes each frame to TensorFlow and MediaPipe, which in turn produce a skeleton mesh of the hand.
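The write-up doesn't include the server code, but the core idea can be sketched in a few lines: MediaPipe's hand model reports 21 landmarks in normalized frame coordinates (x, y in [0, 1]), which the server maps to absolute screen pixels and forwards to the client. The function name and JSON payload shape below are hypothetical illustrations, not the project's actual wire format; landmark 8 being the index fingertip does follow MediaPipe's hand landmark numbering.

```python
import json

# MediaPipe's hand model exposes 21 landmarks with normalized
# coordinates (x, y in [0, 1], origin at the frame's top-left).
# Landmark 8 is the index fingertip in that numbering.
INDEX_FINGER_TIP = 8

def landmarks_to_message(landmarks, screen_w, screen_h):
    """Hypothetical helper: map the index-fingertip landmark to
    absolute screen pixels and wrap it in a JSON payload that a
    server could push to the client."""
    x_norm, y_norm = landmarks[INDEX_FINGER_TIP]
    payload = {
        "cursor": {
            "x": round(x_norm * screen_w),
            "y": round(y_norm * screen_h),
        }
    }
    return json.dumps(payload)

# Example: fingertip at the center of the frame, 1920x1080 screen.
msg = landmarks_to_message([(0.0, 0.0)] * 8 + [(0.5, 0.5)], 1920, 1080)
```

Keeping the payload to plain normalized-to-pixel coordinates keeps the client trivial: it only has to apply the position, not re-derive it from the video feed.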

How we built it

Challenges we ran into

One of the main challenges the Air Mouse team faced was managing the complexity of translating hand gestures into precise cursor actions. To accomplish this, we developed a robust set of rules based on the Euclidean distance between landmarks, the area of a quadrilateral, and other spatial measures derived from the 2D coordinates of the hand's skeleton mesh. To make cursor movement friendlier and more predictable, we applied exponential smoothing, which significantly reduced noise and jitter while keeping the cursor responsive. This approach struck a balance between accuracy and fluidity, making Air Mouse not just innovative but also intuitive and user-friendly.

Accomplishments that we're proud of

The team is proud of successfully implementing a system that can accurately track hand movements and translate them into cursor actions. This not only enhances the user experience but also promotes a cleaner, touch-free computing environment.

What we learned

The team learned a lot about gesture recognition technology, TensorFlow, MediaPipe, and React.js. They also gained valuable experience in designing a server-client architecture and dealing with the challenges of translating complex hand gestures into precise cursor actions.

What's next for Air Mouse

The next step for Air Mouse is to refine the gesture recognition technology to make it even more accurate and responsive. The team also plans to add more features and expand the range of gestures that can be recognized. They are also considering the possibility of integrating Air Mouse with other systems and devices to create a more seamless and intuitive computing environment.
