Inspiration
What it does
Our project detects faces in a live video feed and overlays an emoji that best corresponds to each face's predicted emotion.
How we built it
We built this program with OpenCV and the Keras API. OpenCV locates faces within each frame, and a Keras model predicts the expression of each detected face; with that information we overlay the corresponding emoji onto the video frame. We also split the work across multiple threads: the main thread handles image manipulation, while a second thread is solely responsible for grabbing frames from the live feed.
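The two-thread layout described above can be sketched with Python's standard `threading` and `queue` modules. This is a minimal illustration, not our exact code: the frame grabber below produces placeholder strings where the real grabber would call `cv2.VideoCapture(0).read()`, and the processing step is stubbed where OpenCV face detection and the Keras `model.predict` call would go.

```python
import queue
import threading

def grab_frames(frame_queue, n_frames=5):
    """Grabber thread: in the real app this loop reads frames from the
    webcam via cv2.VideoCapture; here it emits placeholder frames."""
    for i in range(n_frames):
        frame_queue.put(f"frame-{i}")   # placeholder for a numpy image array
    frame_queue.put(None)               # sentinel: no more frames

def process_frames(frame_queue):
    """Main-thread work: detect faces, classify the expression,
    and overlay the matching emoji (stubbed out here)."""
    processed = []
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        # Real pipeline: faces = cascade.detectMultiScale(frame),
        # emotion = model.predict(face_crop), then draw the emoji.
        processed.append(frame)
    return processed

frames = queue.Queue(maxsize=8)   # bounded so the grabber can't run far ahead
grabber = threading.Thread(target=grab_frames, args=(frames,), daemon=True)
grabber.start()
result = process_frames(frames)
grabber.join()
print(len(result))  # → 5
```

Keeping the capture loop on its own thread means the main thread never blocks waiting on the camera, so detection and overlay work at a steady rate even when frame delivery stutters.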
Challenges we ran into
We ran into the issue of false positives during the face detection process.
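One common way to suppress such false positives (not necessarily the fix we shipped) is to tighten `detectMultiScale` parameters like `minNeighbors` and `minSize`, and to require a detection to persist across consecutive frames before trusting it. The sketch below shows the persistence idea with plain `(x, y, w, h)` tuples; the function names and thresholds are illustrative.

```python
def overlaps(a, b):
    """True if two (x, y, w, h) boxes intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def persistent_faces(prev_boxes, curr_boxes, min_size=40):
    """Keep current detections that are large enough AND overlap a box
    from the previous frame; one-off flickers are treated as noise."""
    return [
        box for box in curr_boxes
        if box[2] >= min_size and box[3] >= min_size
        and any(overlaps(box, p) for p in prev_boxes)
    ]

prev = [(100, 100, 60, 60)]
curr = [(105, 98, 58, 62),   # stable face: kept
        (300, 40, 20, 20),   # too small: dropped
        (400, 300, 80, 80)]  # appeared this frame only: dropped
print(persistent_faces(prev, curr))  # → [(105, 98, 58, 62)]
```

The trade-off is a one-frame delay before a newly appearing face gets its emoji, which is usually imperceptible in a live feed.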