Inspiration

We were inspired by our love for childhood games and by the question of how we could recreate that nostalgia in a digital environment, so the feeling can be shared by friends around the world. We also wanted to explore other sensory experiences when interacting with our devices. Building on these goals, we came up with the idea of an interactive playground that lets you have fun with your friends or by yourself through multi-sensory mediums, with an emphasis on hand gestures for this project.

What it does

handful is a digital playground built with accessibility and practicality in mind. With handful, you can play various mini-games using camera-based hand gestures. On the home page, you choose from a selection of movement-based games, and within each one you can play against friends who are active or against a bot. In game mode, the camera detects your hand, and your gestures drive every move.

How we built it

We started by mapping out our idea and user flow chart in FigJam, guided by Apple’s Human Interface Guidelines. With an emphasis on accessible, user-friendly design, we built many iterations before finalizing a comprehensive prototype in Figma, which was then translated into a native iOS app.

handful was built using SwiftUI, UIKit, AVKit, and Apple’s Vision framework. It was our first time using these technologies, and we couldn’t be happier with how it turned out. With the Vision and AVKit frameworks, we mapped out the 20 joints (including fingertips) available on the hand, along with the wrist joint. We extended the framework with classification for each finger and joint and its relation to other joints; it’s these relations that make our gesture recognition work.

handful uses handput, a simple gesture and hand-joint recognizer we built. It’s efficient enough to run pose detection on realtime video. handput lived on as a debug tool for us, letting us watch joint activity within handful, and handful gives handput a reason to exist: a gesture-controlled rock-paper-scissors game.
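
The joint-relation idea can be sketched in plain Swift. This is not handput’s actual code; the `Joint` and `Finger` types below are illustrative stand-ins for Vision’s recognized points, and the thresholds are assumptions. A finger counts as extended when its tip is farther from the wrist than its middle (PIP) joint.

```swift
// Illustrative stand-in for a Vision recognized point (normalized coordinates).
struct Joint {
    let x: Double
    let y: Double

    func distance(to other: Joint) -> Double {
        let dx = x - other.x, dy = y - other.y
        return (dx * dx + dy * dy).squareRoot()
    }
}

// One finger, reduced to the two joints the relation test needs.
struct Finger {
    let tip: Joint
    let pip: Joint   // middle joint

    // A finger counts as extended when its tip is farther from the
    // wrist than its PIP joint is (a simple joint-relation test).
    func isExtended(relativeTo wrist: Joint) -> Bool {
        tip.distance(to: wrist) > pip.distance(to: wrist)
    }
}

enum Gesture { case rock, paper, scissors, unknown }

// Classify rock/paper/scissors from which of the five fingers are extended
// (order assumed: thumb, index, middle, ring, little).
func classify(fingers: [Finger], wrist: Joint) -> Gesture {
    guard fingers.count == 5 else { return .unknown }
    let extended = fingers.map { $0.isExtended(relativeTo: wrist) }
    switch extended.filter({ $0 }).count {
    case 0: return .rock
    case 2 where extended[1] && extended[2]: return .scissors  // index + middle
    case 4, 5: return .paper                                   // thumb is ambiguous
    default: return .unknown
    }
}
```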

Challenges we ran into

Each group member ran into challenges with the many tools and languages used on the project. On the design side, we struggled to unify the app through our style guide and to embed GIFs and animations in Figma to make the prototype as realistic as possible.

On the development side, with no experience building apps involving vision and camera systems, we knew that a lot of our time would go into learning and developing proof-of-concept apps (hence handput). At first we planned to train our own ML models, but given the time crunch it wasn’t feasible to train and test all of our gestures against a model. We turned instead to the Vision framework, a whole other beast, together with realtime video from AVKit. There wasn’t a moment that wasn’t a challenge, but we got through it.
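
The Vision-plus-AVKit pipeline described above looks roughly like the following hedged sketch (Apple platforms only; this is not our exact code, and the confidence threshold is an assumption): every camera frame is handed to a hand-pose request, and low-confidence joints are discarded before gesture classification.

```swift
import AVFoundation
import Vision

final class HandPoseCamera: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let request = VNDetectHumanHandPoseRequest()

    func start() {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .front),
              let input = try? AVCaptureDeviceInput(device: device) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera"))
        session.addOutput(output)
        session.startRunning()
    }

    // Called for every camera frame; run the hand-pose request on it.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let handler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: .up)
        try? handler.perform([request])
        guard let observation = request.results?.first else { return }
        // Keep only confident joints; low-confidence points jitter badly.
        let joints = (try? observation.recognizedPoints(.all))?
            .filter { $0.value.confidence > 0.3 } ?? [:]
        _ = joints  // ...feed the joints into gesture classification...
    }
}
```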

Accomplishments that we're proud of

We are super proud of being able to implement our vision into a functional, interactive, and aesthetic playground in under 24 hours.

It was our first in-person hackathon and we all came in with different skill sets, so it was really cool that each team member was able to contribute their strengths to the project.

On top of that, we all pushed ourselves to learn new tools and concepts that elevated handful from a basic finger-recognition app to a playground for early gestural interaction. All of us care about interaction and accessibility, so trying something entirely new was extremely rewarding.

What we learned

  • Each member learned to use new tools and technologies, such as Jitter, the Vision framework, SwiftUI, and UIKit.
  • We learned more about the anatomy of human hands (DIPs, PIPs, and IPs!)
  • Most importantly, we were exposed to the different stages of development and learned how collaborative the transition from design to code is
  • How vision models work, and why it’s important to tune confidence and positional data for gesture recognition

What's next for handful

  • Develop more games and experiences that use hand and full-body motion
  • Improve design interfaces to be more accessible
  • Reduce touch-based interactions
  • Emphasize auditory and haptic feedback
  • Experiment with custom-trained ML models for pose detection
  • Expand on the backend to allow for playing with other users
  • Make it adaptable to other operating systems
  • Improve on the user flow between screens
  • Implement the features on our Figma prototype
  • Add biased random selection or develop an ML model for the computer player in RPS
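
One possible shape for that biased computer player, sketched here as a hypothetical (the `BiasedBot` type and its exploration rate are our own illustration, not shipped code): track the opponent’s move frequencies and usually throw the counter of their most common move.

```swift
enum Move: CaseIterable {
    case rock, paper, scissors

    // The move that beats this one.
    var counter: Move {
        switch self {
        case .rock: return .paper
        case .paper: return .scissors
        case .scissors: return .rock
        }
    }
}

struct BiasedBot {
    var counts: [Move: Int] = [:]

    // Record one of the opponent's throws.
    mutating func observe(_ move: Move) {
        counts[move, default: 0] += 1
    }

    // Usually counter the player's most frequent move; sometimes play at
    // random so the bot stays beatable and unpredictable.
    func nextMove(exploration: Double = 0.25) -> Move {
        if let favorite = counts.max(by: { $0.value < $1.value })?.key,
           Double.random(in: 0..<1) >= exploration {
            return favorite.counter
        }
        return Move.allCases.randomElement()!
    }
}
```

Setting `exploration` to 0 makes the bot purely exploitative, which is useful for testing; the 0.25 default is just a plausible starting point.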

Business Model

handful is a product that has a lot of potential for growth, such as including more features and expanding on the development to allow for interaction with other users. We want to create a fun product that can be enjoyed by people of different age groups, physical abilities, and interests, and serve as an inclusive platform that connects people together through multi-sensory experiences.

Built With

SwiftUI, UIKit, AVKit, Vision, Figma
