Inspiration

One of our members came up with the idea while walking around campus listening to music: he realized he was synchronizing his steps with the beat. What if, instead, the music changed tempo to match the user's motion? At first it seemed like a gimmick, but the applications quickly became evident. Exercise is the most obvious one, because what motivates you to run faster more than music at a slightly quicker tempo than your current pace? Or what if the dancer controlled the music instead of the music controlling the dancer? Then we stumbled upon research showing connections between music tempo and heart rate, and wondered whether the synchronization concept could also be applied as a form of emotional therapy. A simple idea, but a million interesting spin-offs.

What it does

The user first logs in with their Spotify account and chooses a playlist. Then the app will play the audio while dynamically adjusting the tempo to synchronize with the user’s movements in real time.

How we built it

We used Remix, a full-stack TypeScript framework, to build our UI and server, and we interfaced with several external APIs. We used MongoDB as our database, with Mongoose as the ODM. Spotify OAuth handles authentication, and the YouTube Music API provides MP3 versions of the songs. On the frontend, we capture motion data by continuously loading the phone's acceleration readings into a buffer, then applying a Fourier transform to find the dominant frequency. Finally, we scale the song's playback speed up or down to match its tempo to the dominant frequency of the user's movement.
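The tempo-detection step above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not our actual frontend code: it assumes a fixed sample rate, uses a naive DFT for clarity (a real implementation would use an FFT library), and the buffer holds acceleration magnitudes from the phone's motion sensor.

```typescript
// Sketch: find the dominant frequency in a buffer of acceleration
// magnitudes via a naive discrete Fourier transform.
function dominantFrequencyHz(samples: number[], sampleRateHz: number): number {
  const n = samples.length;
  let bestBin = 1;
  let bestMag = 0;
  // Skip bin 0 (the DC offset, i.e. gravity) and search up to Nyquist.
  for (let k = 1; k <= n / 2; k++) {
    let re = 0;
    let im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * k * t) / n;
      re += samples[t] * Math.cos(angle);
      im += samples[t] * Math.sin(angle);
    }
    const mag = re * re + im * im; // squared magnitude is enough for argmax
    if (mag > bestMag) {
      bestMag = mag;
      bestBin = k;
    }
  }
  // Convert the winning bin index back to a frequency in Hz.
  return (bestBin * sampleRateHz) / n;
}

// Example: a synthetic 2 Hz "step" signal sampled at 50 Hz.
const sampleRate = 50;
const buffer = Array.from({ length: 256 }, (_, t) =>
  Math.sin((2 * Math.PI * 2 * t) / sampleRate)
);
const stepsPerSecond = dominantFrequencyHz(buffer, sampleRate);
// Bin resolution is 50/256 ≈ 0.195 Hz, so the result lands near 2 Hz.
```

The dominant frequency here corresponds to the user's step cadence, which the app then compares against the track's tempo.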

Challenges we ran into

  • A funny challenge came with using the YouTube API for the first time. The project requires gathering links for every song in every playlist of a Spotify user. This would be fine for a few playlists with minimal songs, but the account we used for testing had many playlists totaling over 60 hours of playtime. We very quickly hit the free tier's rate limit and had to change our approach to gathering URLs.
  • The toughest decision we made for the project was pivoting from a local Android app to a React web app when we determined that our unfamiliarity with Android combined with the limited flexibility in audio and data processing made continuing untenable. We had to sacrifice one of our original goals of using heart rate data from a smart watch, but ended up with a better product overall.
  • When you “time stretch” audio, it generates artifacts like stuttering and pitch shifts. We researched Time-Scale Modification algorithms such as the phase vocoder, which help clean up the signal, but they proved very difficult to implement in a short time period. Instead, we focused on the synchronization aspect and simply adjusted the playback speed of the audio.
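The simple playback-speed approach from the last bullet boils down to a few lines. This is a hedged sketch, not our exact code: the clamp bounds and the steps-to-BPM conversion are illustrative assumptions, and in the browser the resulting rate would be applied to something like an audio element's playbackRate property.

```typescript
// Sketch: scale the song's tempo toward the user's step cadence.
// The clamp bounds (0.7–1.3) are illustrative, not our actual settings.
function playbackRate(
  stepsPerSecond: number, // dominant frequency from the motion buffer
  songBpm: number // the track's known tempo
): number {
  const targetBpm = stepsPerSecond * 60; // steps/s -> beats per minute
  const rate = targetBpm / songBpm;
  return Math.min(1.3, Math.max(0.7, rate)); // avoid extreme speeds
}

// A 120 BPM song with a user cadence of 2.2 steps/s plays at 1.1x.
const playRate = playbackRate(2.2, 120);
// Applied in the browser as, e.g.: audioEl.playbackRate = playRate;
```

Adjusting playbackRate this way shifts pitch along with tempo, which is exactly the artifact a phase vocoder would remove.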

Accomplishments that we're proud of

  • We are greatly proud of building out an idea that centers health and wellbeing. It was fulfilling and fun to test using motion to create an experience with sound. Likewise, it was great to explore the applications of music therapy.
  • Even though we felt like we were banging our heads against walls for the longest time, we never gave up; we adapted and produced something that was by no means perfect, but a great proof of concept.

What we learned

  • How to use various types of API calls
  • The mechanics and applications of Fourier transforms
  • How to distribute tasks among team members and work efficiently

What's next for Synchronicity

  • Implementing an audio buffer with a phase vocoder to reduce artifacts and pitch changes when time stretching
  • More user control over settings and specific modes for exercise and therapy
  • Reducing latency
  • Adding heart rate data support
  • Cleaning up the UI