Click here to see a video demo of our app

Inspiration

Our inspiration came from personal experience with getting distracted while working and from wanting an intelligent music assistant. The idea also draws on Spotify's DJ feature, which offers a tailored listening experience.

What it does

Moodify is an AI-powered music assistant that uses computer vision to analyze your facial expressions and dynamically generate personalized music. It selects and layers tracks based on your emotional reactions in real time.
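The emotion-to-music step can be pictured as a simple lookup from a detected emotion to a set of stems to layer in. A minimal sketch in Python (the stem names and the `layers_for` helper are illustrative, not Moodify's actual code):

```python
# Hypothetical mapping from a detected emotion to the music stems to layer in.
EMOTION_LAYERS = {
    "happy":   ["drums", "bass", "bright_synth"],
    "sad":     ["pads", "piano"],
    "angry":   ["drums", "distorted_guitar"],
    "neutral": ["pads", "bass"],
}

def layers_for(emotion):
    """Return the stems to play for an emotion, falling back to neutral."""
    return EMOTION_LAYERS.get(emotion, EMOTION_LAYERS["neutral"])
```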

How we built it

We built Moodify using Unity and C# for the front-end experience, Python for the backend logic, and the FER (Facial Emotion Recognition) library to detect emotions via the camera.
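A rough sketch of the backend capture loop, assuming the `fer` Python package (which provides `FER.top_emotion`) and OpenCV; the smoothing helper and the wiring to the Unity front end are our own illustrative additions:

```python
from collections import Counter, deque

def smooth_emotion(history, window=15):
    """Return the most frequent emotion over the last `window` frames,
    so a single noisy frame doesn't jerk the music around."""
    recent = list(history)[-window:]
    if not recent:
        return None
    return Counter(recent).most_common(1)[0][0]

def capture_loop():
    # Requires: pip install fer opencv-python (hypothetical wiring)
    import cv2
    from fer import FER

    detector = FER()
    cap = cv2.VideoCapture(0)
    history = deque(maxlen=60)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = detector.top_emotion(frame)  # (emotion, score) or (None, None)
        if result and result[0]:
            history.append(result[0])
        stable = smooth_emotion(history)
        # send `stable` to the Unity front end (e.g. over a local socket)
```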

Challenges we ran into

Our biggest challenges were tuning the machine-learning pipeline so emotion detection ran reliably in real time, and creating smooth, seamless transitions between layers of music.
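One common way to get smooth transitions like the ones described above is an equal-power crossfade between the outgoing and incoming layers. A minimal sketch (the function name and timing scheme are ours, not the project's actual implementation):

```python
import math

def crossfade_gains(t, duration):
    """Equal-power crossfade: return (fade_out_gain, fade_in_gain) for
    elapsed time t of a `duration`-second transition. The cosine/sine
    curves keep the combined perceived loudness roughly constant."""
    x = min(max(t / duration, 0.0), 1.0)  # clamp progress to [0, 1]
    return math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)
```

Each audio frame, the old layer's volume is set to the first gain and the new layer's to the second, so the handoff never produces a dip or a jump in loudness.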

Accomplishments we're proud of

We are proud of achieving a polished, responsive music system that reacts smoothly to user emotions and creates seamless transitions between layers of music.

Built With

Unity, C#, Python, FER (Facial Emotion Recognition)
