Our inspiration was to build a device that assists people with disabilities, empowering them to enter social interactions with confidence. This program helps a user understand social cues and emotions during an online video call: it provides feedback, through notifications and haptic feedback, on how their statements could affect the other person's mood.

We built a visual emotion recognition system using a CNN based on the VGG architecture, along with an audio emotion recognition model built with PyTorch and SpeechBrain (the audio model is not yet integrated). Using the emotions these models detect, we notify users of social cues to aid them in day-to-day conversation.

Our biggest challenge was finding the right datasets. Poorly curated datasets did not generalize, which became a problem later on and forced us to find better data.
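
For illustration, a VGG-style emotion classifier in PyTorch might look like the sketch below. This is a minimal sketch, not our exact network: the layer widths, the 48x48 grayscale input (typical of face-emotion datasets such as FER2013), and the seven emotion classes are all assumptions.

```python
import torch
import torch.nn as nn

class VGGEmotionNet(nn.Module):
    """VGG-style CNN: stacked 3x3 conv blocks with max pooling,
    followed by fully connected layers for emotion classification."""

    def __init__(self, num_classes: int = 7):  # assumption: 7 emotion classes
        super().__init__()
        self.features = nn.Sequential(
            # Block 1: 48x48 -> 24x24
            nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            # Block 2: 24x24 -> 12x12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            # Block 3: 12x12 -> 6x6
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 512), nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify one 48x48 grayscale face crop (assumed input size).
model = VGGEmotionNet()
logits = model(torch.randn(1, 1, 48, 48))
probs = torch.softmax(logits, dim=1)  # per-emotion probabilities
```

On the audio side, SpeechBrain provides pretrained emotion-recognition interfaces. One hypothetical way to score a clip is shown below; the specific pretrained model is our assumption for illustration, not something the project settled on.

```python
# Assumption: using a public pretrained SpeechBrain model, not the project's own.
from speechbrain.pretrained.interfaces import foreign_class

classifier = foreign_class(
    source="speechbrain/emotion-recognition-wav2vec2-IEMOCAP",
    pymodule_file="custom_interface.py",
    classname="CustomEncoderWav2vec2Classifier",
)
# Returns class probabilities, the best score and index, and a text label.
out_prob, score, index, text_lab = classifier.classify_file("clip.wav")
print(text_lab)  # e.g. ["ang"], ["hap"], ["neu"], ["sad"]
```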
