What it does
The app is an interface for both blind people and people who struggle to pick up on social cues. It recognizes the emotions and body language of the people the user is speaking with, and gives valuable feedback on how the things the user says may affect listeners in ways they don't expect.
How we built it
We used a React app for the front end to capture rapid bursts of pictures and audio snippet recordings, which we send to the Flask backend for ML analysis. The server then responds with live audio snippets describing the detected emotions and body language.
Challenges we ran into
Our initial front-end dev had an emergency and couldn't participate, so we had to pick up the slack and dive headfirst into an area in which we had little experience - none of us had built a React app that sends such a high volume of I/O data.
Accomplishments that we're proud of
We were able to build out a frontend and a backend that interface with each other, sending high volumes of audiovisual data back and forth and analyzing it in real time - fast enough to give feedback on a conversation while it's still ongoing.
What's next for In Retrospect
We plan on developing for Meta Glasses so that more people can use the app in day-to-day life.