Inspiration

Going into the weekend, all of us were interested in varying forms of music and aimed to create a custom ML model for each user. We were really curious to try out the Spotify API and had the idea of making an app that could play music based on how fast a user is moving, or on their heart rate, to create a fitness-oriented music streaming system. Once we started working on it, however, we realized we didn't quite know how to get it to work on the front end. As a result we pivoted and decided to make a product that could run mostly from the backend, implementing an ML feature that could track motion. This led us to the idea of combining our existing Spotify API work with the motion-sensing ML to create a dance tracker that could play through songs.

What it does

The app determines whether a user is dancing from their movements. Based on this information, it skips the current song once no dancing has been detected for a certain interval.

How we built it

We built this using several Python libraries, including Pygame, TensorFlow, OpenCV, and MediaPipe. The program is essentially a vision-based AI, so we first capture video frames with OpenCV and then use MediaPipe's body-landmarking models to layer landmarks over specific, predetermined parts of the body. We then used TensorFlow to create a sequential LSTM neural network that takes the x, y, and z positions of the landmarks as inputs and outputs one of two classifications. We also included functions for a real-time view of the tracking, enabling us to feed live changes as inputs into other functions. Finally, we created a function with Pygame that initializes audio playback within the program based on the outputs of the real-time view.
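The classifier at the center of the pipeline can be sketched in Keras as follows. This is a hedged reconstruction, not the team's actual architecture: the 30-frame window and the layer sizes are assumptions, while the input width comes from MediaPipe Pose's 33 landmarks with x, y, z per landmark.

```python
import tensorflow as tf

SEQ_LEN = 30                   # frames per clip fed to the LSTM (assumed)
N_LANDMARKS = 33               # MediaPipe Pose returns 33 body landmarks
N_FEATURES = N_LANDMARKS * 3   # x, y, z for each landmark

# Sequential LSTM network: landmark sequences in, two classes out
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # dancing / not dancing
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

At inference time, each webcam frame's landmark coordinates would be flattened into a 99-value vector, buffered into a 30-frame window, and passed through the model to get the dancing / not-dancing prediction.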

Challenges we ran into

Considering that this was our first hackathon, we were in over our heads with the coding challenge we gave ourselves. We attempted to learn Flutter to build an iOS app and to complete user authentication through Spotify's Web API, both of which we had no experience with. Once we settled on our current model, we integrated a Streamlit interface; however, when we tried to retrain our model, we consistently got the same output regardless of our training data. We shifted our focus to resolving that, which left our Streamlit interface incomplete.

Accomplishments that we're proud of

We are most proud of all the Spotify API features we were able to implement. We were also really excited to see the project working with computer vision.

What we learned

We learned a lot about APIs and computer vision, and got to explore a wide range of different front-end platforms.

What's next for AFK DJ

Fix the ML model, authenticate users so the app can play directly from Spotify, and add a smart point system to keep track of frequently skipped songs.

Built With

mediapipe, opencv, pygame, python, spotify-api, tensorflow