Inspiration

For hundreds of years, people have used music to escape their sorrows, to celebrate their triumphs, to convey their innermost desires, and to express just about every emotion in between. In the last few decades, there has been a growing body of research into the relationship between human states of mind and the music people listen to. Nawaz Ahmad and Afsheen Rana postulate in their Impact of Music on Mood: Empirical Investigation (link here) "that people get inspired by listening to music. Music is a source that can get them into the other mood. People think that music has a strong impact on their mood and behavior." Music can not only change people's moods but also deepen their existing ones, affecting them both positively and negatively.

In a more quantitative study, Dr. Teresa Lesiuk (link here) writes in her journal article, The Effect of Preferred Music on Mood and Performance in a High-Cognitive Demand Occupation, that there is a "mild positive affect... shown in the psychological literature to improve cognitive skills of creative problem-solving and systematic thinking. Individual-preferred music listening offers the opportunity for improved positive affect." Studies like these have contributed to the rise of music therapy, which aims to treat patients through sounds and rhythms.

However, as attention to music grows, so does the diversity of musical taste and interpretation. With dozens of genres and hundreds of subcategories molding and shaping people's experiences, the same song can have vastly different meanings to different people, and the over-generalization occurring in research centers and labs across the world could prove ill-suited to treating the emotions of the individual.

Thus was born Polyphony. It began with a couple of friends discussing an interesting article about music therapy during a cappella auditions. We aim to reinvent the idea of personalized music taste and embrace the feelings that often drive us to a particular song, artist, or genre.

What it does

Polyphony works hand-in-hand with Spotify's existing recommendation algorithm but adds several new features that take the user experience to the next level. To learn a user's thought processes, it begins by asking users to share the mood they associate with some of their recently listened-to songs. Using this information, our algorithm, which combines multi-level Gaussian Mixture Models, Support Vector Machines, and Word2Vec adjective finding with Google Gemini via Vertex AI, predicts what emotions users may be feeling using only the information they provide during training and in sessions. Sessions are gamified experiences in which users develop playlists of songs related to their positively rated songs, while simultaneously reinforcing their preferences.
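The write-up doesn't include the model code, but the mood-prediction idea can be sketched with scikit-learn: fit one small Gaussian Mixture per mood label on Spotify-style audio features, then label a new song with the mood whose mixture assigns it the highest likelihood. The feature choices, mood labels, and data below are illustrative assumptions, not Polyphony's actual pipeline.

```python
# Hypothetical sketch: one Gaussian Mixture per mood, fit on made-up
# (valence, energy) audio features; classification is by max likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic training data: (valence, energy) samples per mood label.
training = {
    "happy": rng.normal(loc=[0.8, 0.7], scale=0.08, size=(50, 2)),
    "sad":   rng.normal(loc=[0.2, 0.3], scale=0.08, size=(50, 2)),
    "hype":  rng.normal(loc=[0.6, 0.95], scale=0.05, size=(50, 2)),
}

# Fit one small mixture per mood.
models = {
    mood: GaussianMixture(n_components=2, random_state=0).fit(feats)
    for mood, feats in training.items()
}

def predict_mood(features):
    """Return the mood whose mixture gives the song the highest log-likelihood."""
    x = np.asarray(features).reshape(1, -1)
    return max(models, key=lambda mood: models[mood].score(x))

print(predict_mood([0.75, 0.72]))  # a bright, energetic song -> "happy"
```

A real system would add the SVM stage and Word2Vec adjective matching on top of this, but the per-mood mixture captures the core idea of modeling each emotion as its own feature distribution.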

How we built it

This program was built on a React.js frontend and server-side Python, using Flask as the framework. We also used Chakra UI and Tailwind CSS to assist with some frontend development. Alongside this, we used a PostgreSQL database hosted on Google Cloud Platform. For our AI/ML requirements, we ran most of the natural language processing through NLTK, and the GMMs and SVMs on scikit-learn, all of which ran directly on the server side. Finally, we harnessed the power of Google's new Gemini LLM through the Vertex AI API, which was also hosted on Google Cloud Platform.

We built this app by first brainstorming the project's workflow, developing mock page designs, and deciding how we wanted to organize the work. Then we tackled the hardest problems first: two of us set about creating our ML algorithms through rigorous testing before settling on something we were satisfied with, while the other two began bringing the ambitious UI plans to fruition. Next, the ML team took on the cloud and database structures, setting up GCP and the PostgreSQL database and ensuring that the connection between the cloud and localhost was consistent. Finally, we worked together to ensure that the full stack was operating smoothly and efficiently.

Challenges we ran into

The main challenge we ran into was working with the Spotify API. Well known for being finicky and troublesome, it inexplicably blocked us from its servers and timed us out for hours on end on several occasions (the most recent being just a few minutes before our submission). This is a problem that cannot be solved without applying for a production key, as opposed to a development key, an undertaking that would take weeks and is strictly not permitted for students participating in hackathons or using the Spotify API for educational purposes. However, we hope to carry this hack into the future, and hopefully find uses for it that will allow us to apply for the production key and provide a better and more reliable experience.

Accomplishments that we're proud of

We are incredibly proud of how well the machine learning algorithm turned out. When testing, we were surprised by how good the suggestions were, and we frequently used the program to add new songs to our own playlists. In addition, the UI team did an incredible job of working under the deadline to produce unique designs and smooth transitions that make the program fun to use and easy to work with, in spite of the roadblocks caused by the Spotify API. Given the time constraints, we are overall very proud of how much we were able to accomplish.

What we learned

In this project, we learned how to use the Spotify API and the Google Cloud APIs, and how to connect the things we have learned across our Computer Science classes at Columbia into one application. We learned how to debug and troubleshoot efficiently, and how to get answers to our questions at 4 AM when our advisors (read: mentors) are fast asleep.

What's next for Polyphony

We hope to greatly expand the scope of Polyphony. We have several cool ideas on the horizon: making it faster and more efficient, and switching to a production account to remove the API call limits, for starters. We also want to use aggregated user data to match people based on similar music tastes, offering the chance to create friendships (and maybe more). Using this same data, we hope to offer a stats display of past user sessions and the music discovered during them. Finally, we explored the chat function of Google Gemini as an on-demand tool for users who want to speak to the AI in a prolonged chat (and receive awesome music recs along the way). While we ultimately decided against this for ethical reasons and limited time, it is definitely something we would like to explore in the future.
