Inspiration

We wanted to build a challenging yet entertaining project that addresses an accessibility issue. We noticed that when we eat fast food, we want to interact with our devices, but our fingers are greasy, so we don't want to touch the keyboard. This is an issue experienced by many other fast-food goers...not to mention workers who get their hands dirty, such as mechanics and surgeons. So, we built EmotiCam.

What it does

EmotiCam is an image-recognition program that converts facial expressions, hand gestures, and hand movements into emojis and characters, displayed and typed in any text editor or chat box. Users can reenact hand-gesture and facial-expression emojis such as "thumbs up" or "smiley face," or type on a "virtual keyboard," and the corresponding emoji or character will be typed and sent in the text editor or social media platform of their choice. We chose Discord.

How we built it

We used OpenCV image recognition to track and label the movement and orientation of our hands and facial features. Then, we trained an ML model using TensorFlow and PyTorch to match the orientations and movements of our hands and facial features to emojis and characters.
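The final step of that pipeline, turning a recognized pose into an emoji, can be sketched with a simplified stand-in for the trained model: a nearest-template lookup over landmark coordinates. The templates, labels, and feature layout below are made up for illustration; the actual project used a TensorFlow/PyTorch model rather than this distance-based shortcut.

```python
import numpy as np

# Illustrative stand-in for the trained gesture model: classify a hand pose
# by finding the nearest stored template of landmark coordinates, then map
# the label to the emoji to be typed. Templates and labels are hypothetical.
TEMPLATES = {
    "thumbs_up": np.array([0.0, 1.0, 0.2, 0.9]),
    "fist":      np.array([0.1, 0.1, 0.2, 0.1]),
}
LABEL_TO_EMOJI = {"thumbs_up": "👍", "fist": "✊"}

def classify(landmarks: np.ndarray) -> str:
    """Return the emoji whose landmark template is closest to the input."""
    best = min(TEMPLATES, key=lambda k: np.linalg.norm(landmarks - TEMPLATES[k]))
    return LABEL_TO_EMOJI[best]

print(classify(np.array([0.05, 0.95, 0.25, 0.85])))  # 👍
```

A real model replaces the template lookup with a learned classifier, but the surrounding glue (landmark vector in, emoji character out) stays the same.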

Challenges we ran into

Learning how to use TensorFlow and OpenCV. Merging together features developed separately by different team members. Training models to recognize facial expressions and convert them into emojis.

Accomplishments that we're proud of

Getting hand-gesture, facial-expression, hand-orientation, and hand-movement conversion to emojis and characters (sort of) working. Achieving sometimes-accurate hand-gesture and facial-expression detection and translation. Successfully adapting and tweaking open-source code to meet our project's needs.

What we learned

Even though somebody on the internet has already written some chunks of code for you, integrating them into your own project isn't easy. You can normalize your images by tracking the distances between landmarks and reference points on your hands rather than absolute coordinates on the screen, allowing gestures to be recognized regardless of position or orientation. Machine learning is very complicated...there is a lot of math going on in the background that we luckily don't see.
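The normalization trick described above can be sketched in a few lines: express each landmark relative to a reference point (here, the wrist) and divide by the hand's overall size, so the same gesture yields the same features wherever the hand appears on screen. The landmark layout (row 0 as the wrist) is an assumption for illustration.

```python
import numpy as np

def normalize_landmarks(points: np.ndarray) -> np.ndarray:
    """points: (N, 2) array of absolute screen coordinates; row 0 is the wrist."""
    rel = points - points[0]                   # translate: wrist becomes the origin
    scale = np.linalg.norm(rel, axis=1).max()  # largest distance from the wrist
    return rel / scale if scale > 0 else rel

# The same gesture at two screen positions normalizes to identical features.
hand = np.array([[100.0, 200.0], [120.0, 180.0], [140.0, 160.0]])
shifted = hand + 50.0
print(np.allclose(normalize_landmarks(hand), normalize_landmarks(shifted)))  # True
```

Dividing by the hand's size also makes the features invariant to how close the hand is to the camera, not just where it sits in the frame.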

What's next for Emoticam

Make it less laggy. Increase accuracy of gesture and expression translation. Implement EmotiCam's functionality into its own social media app.
