Inspiration

We wanted to create a tool that helps neurodivergent people better understand emotions in speech and feel more confident in conversations. Many people struggle with reading tone, and we wanted to use AI to make emotional cues more accessible.

What it does

MoodID helps neurodivergent users recognize emotions in speech. Upload an audio clip, and MoodID identifies the emotion it hears, such as happy, sad, or angry, making the tone of a conversation easier to read.

How we built it

We built MoodID using a machine learning model trained on emotional speech datasets. The model runs on a Flask backend, and we connected it to a simple, user-friendly web interface for easy audio uploads and results display.
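For flavor, here's a minimal sketch of what an upload-and-predict flow like ours can look like, assuming a scikit-learn classifier trained on MFCC features; the file name, label set, and feature settings below are illustrative stand-ins, not our exact code.

```python
import joblib
import librosa
from flask import Flask, request, jsonify

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical path to the trained classifier
EMOTIONS = ["angry", "happy", "neutral", "sad"]  # example label set

@app.route("/predict", methods=["POST"])
def predict():
    # The page posts the recording as multipart/form-data under the "audio" key.
    audio_file = request.files["audio"]
    y, sr = librosa.load(audio_file, sr=16000)  # decode to a mono waveform
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
    features = mfcc.mean(axis=1).reshape(1, -1)  # average MFCCs over time frames
    label = EMOTIONS[int(model.predict(features)[0])]
    return jsonify({"emotion": label})
```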

Challenges we ran into

Our biggest challenge was connecting the model to the front end. After some trial and error, we got the Flask backend talking to the front end and made sure the results displayed smoothly on the website.
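In setups like this, it can help to exercise the endpoint directly before wiring up the page. The sketch below (the local URL and sample file are assumptions) shows one way to confirm the backend's JSON response independently of the front end:

```python
import requests

# Post a local recording to the Flask endpoint sketched above and print
# the JSON the front end would receive.
with open("sample.wav", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:5000/predict",
        files={"audio": ("sample.wav", f, "audio/wav")},
    )

resp.raise_for_status()
print(resp.json())  # e.g. {"emotion": "happy"}
```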

Accomplishments that we're proud of

We’re proud of how well our model performs. We’re also proud of designing a front end that is simple, clear, and tailored to neurodivergent users, making the experience inclusive and approachable.

What we learned

We learned how to connect machine learning models to a front end, work with Flask to handle requests, and design with accessibility in mind. We also learned how much small UI choices can impact usability for different users.

What's next for MoodID

We plan to make MoodID available as a mobile app for quicker access, add an iMessage extension that analyzes voice messages, and explore a small portable device that can detect emotions in real time. We also want to improve the model with better algorithms, safely sourced training data, and support for multiple languages, and perhaps even detect emotions in singing.
