Inspiration

Music is a big part of our lives, and something we experience in the moment. How we really feel about a song shows in more than what we skip; it shows on our faces. We wanted to sit at our desk and get music that keeps us focused, with a player that can tell when a track isn't working and switch to one that does.

What it does

Using Hume AI and the Spotify API, Humify reads a user's facial expressions and recommends an alternative song when it detects that the user dislikes the one currently playing.

How we built it

We built the frontend in React.js and the API in Python. The frontend sends a screenshot from the user's webcam to the API, which runs it through Hume to get the emotions detected in the frame. We then classify the overall emotion as positive or negative: if it's positive, we keep playing the song; if not, we recommend a new one.
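The keep-or-skip decision can be sketched like this. The emotion groupings and the simple score comparison below are illustrative assumptions, not Hume's actual output schema or our exact production logic:

```python
# Sketch of the "keep playing or skip" decision, assuming Hume returns a
# mapping of emotion names to scores in [0, 1]. The emotion groupings and
# the comparison rule are assumptions for illustration.

POSITIVE = {"Joy", "Calmness", "Concentration", "Contentment"}
NEGATIVE = {"Boredom", "Distress", "Disgust", "Annoyance"}

def should_skip(emotions: dict[str, float]) -> bool:
    """Return True if negative emotions outweigh positive ones."""
    pos = sum(score for name, score in emotions.items() if name in POSITIVE)
    neg = sum(score for name, score in emotions.items() if name in NEGATIVE)
    return neg > pos

# Example: mild boredom outweighs weak joy, so we recommend a new song
print(should_skip({"Joy": 0.1, "Boredom": 0.6}))  # True
```

A real version would aggregate scores over several frames before skipping, so a single odd expression doesn't change the song.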

Challenges we ran into

The first problem we encountered was deciding whether to send video or a still photo to the API; a minor choice, but a technical consideration that shaped the project. Next, we struggled with Spotify's OAuth2 flow and integrating it into our React app. The biggest hurdle, however, was getting files from React into our Python API, which took some time but ultimately worked.
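One common way to move a webcam frame from a browser to a Python API, shown here as a hedged sketch rather than our exact implementation, is to base64-encode the screenshot into a JSON body and decode it server-side before passing the bytes to Hume. The payload shape is an assumption for illustration:

```python
# Minimal sketch: decode a webcam frame sent as {"image": "<base64 data>"}.
# The payload shape is an illustrative assumption, not the exact format we used.
import base64
import json

def decode_screenshot(request_body: bytes) -> bytes:
    """Extract raw image bytes from a JSON request body."""
    payload = json.loads(request_body)
    return base64.b64decode(payload["image"])

# Round-trip example: encode on the "frontend", decode on the "backend"
frame = b"\x89PNG fake image bytes"
body = json.dumps({"image": base64.b64encode(frame).decode()}).encode()
assert decode_screenshot(body) == frame
```

The alternative is a multipart/form-data upload, which avoids the roughly 33% size overhead of base64 but requires a form parser on the Python side.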

Accomplishments that we're proud of

We are proud that we were able to put something together... together. We came in with different skill sets, ideas, and areas of expertise, which conflicted at first, but as development went on we matched bugs to the people who knew how to fix them and settled into a good workflow.

What we learned

We learned a lot about Hume AI and the tools it provides, and we hope to use them more!

What's next for Humify

We want to improve our recommendation algorithm and add Hume's vocal expression analysis to catch people humming or singing along. We would also love to label each song with emotions by averaging the emotions it elicits across listeners (that would require additional permissions from the Spotify API).
