Our application lets the user choose looped music snippets and combine them into a new composition. Every button in the application is selected via brainwaves: on each screen all of the buttons flash, and to make a selection you focus on the button you want to pick.

The first layer of our application streams in data from the brain via the Lab Streaming Layer (LSL) output of the OpenBCI software, which provides a continuous stream of samples to read for analysis. Working with our problem provider, we modified existing backend libraries built for reading this stream, and we connected the LSL stream to the front end of our project, which was made in Unity.
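A minimal sketch of this streaming layer, assuming the Python `pylsl` binding (the usual way to read LSL in Python). The sample source is abstracted behind a callable so the windowing logic runs without live EEG hardware; `resolve_byprop` and `StreamInlet.pull_sample` are the real pylsl calls shown in the comment.

```python
# With real hardware the sample source would come from pylsl:
#
#   from pylsl import StreamInlet, resolve_byprop
#   streams = resolve_byprop("type", "EEG")
#   inlet = StreamInlet(streams[0])
#   window = collect_window(lambda: inlet.pull_sample(timeout=1.0), 256)

from typing import Callable, List, Optional, Tuple

# A pulled sample is (channel values, timestamp); a timeout yields (None, None).
Sample = Tuple[Optional[List[float]], Optional[float]]

def collect_window(pull_sample: Callable[[], Sample],
                   n_samples: int) -> List[List[float]]:
    """Pull samples from the stream until a fixed-size analysis window is full."""
    window: List[List[float]] = []
    while len(window) < n_samples:
        values, _timestamp = pull_sample()
        if values is not None:  # skip timeouts
            window.append(values)
    return window
```

In the application itself, a loop like this would keep a sliding window of recent samples available for the classifier.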

The selection process also uses the backend libraries suggested to us by the problem provider. A classifier built into the application, based on Riemannian geometry and linear discriminant analysis (LDA), discerns which button you want to select by detecting the P300 evoked potential your brain produces while you are looking at that button as it flashes.
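In practice this kind of pipeline maps each epoch's covariance matrix into tangent space (e.g. with pyriemann's `TangentSpace`) and feeds the resulting vectors to an LDA classifier. To make the LDA stage concrete, here is a self-contained NumPy sketch of a two-class linear discriminant on synthetic epoch features; the class name and data are illustrative, not the project's actual code.

```python
import numpy as np

class SimpleLDA:
    """Minimal two-class linear discriminant: the final stage once each
    epoch has been turned into a feature vector (here, synthetic data)."""

    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        self.mu0, self.mu1 = X0.mean(axis=0), X1.mean(axis=0)
        # Pooled within-class covariance, with a small ridge for stability.
        cov = np.cov(np.vstack([X0 - self.mu0, X1 - self.mu1]).T)
        cov += 1e-6 * np.eye(cov.shape[0])
        # LDA decision direction: w = cov^-1 (mu1 - mu0).
        self.w = np.linalg.solve(cov, self.mu1 - self.mu0)
        self.b = -0.5 * self.w @ (self.mu0 + self.mu1)
        return self

    def predict(self, X):
        return (X @ self.w + self.b > 0).astype(int)

# Synthetic "epoch features": non-target flashes (class 0) vs
# P300 target flashes (class 1), well separated for illustration.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
               rng.normal(3.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
clf = SimpleLDA().fit(X, y)
```

The real application would replace the synthetic features with tangent-space vectors computed from the EEG epochs time-locked to each button's flashes.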

Once the classifier decides which button you want to press, it sends a notification to our front end. Depending on the choice, the application may show a new screen, play music, or refresh the page, among a few other actions.
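The front-end side of this step amounts to routing the classifier's decision to an action. A hypothetical sketch of that dispatch (the handler names `snippet_1`, `next_page`, and `refresh` are made up for illustration; the real handlers live in Unity):

```python
# The classifier's decision arrives as a button id; a handler table
# maps each id to the action the front end should take.

def make_dispatcher(handlers, default):
    """Return a function that routes a classifier decision to its handler."""
    def dispatch(button_id):
        return handlers.get(button_id, default)(button_id)
    return dispatch

log = []  # stands in for the front end's reaction
handlers = {
    "snippet_1": lambda b: log.append(("play", b)),    # play a music snippet
    "next_page": lambda b: log.append(("screen", b)),  # show a new screen
    "refresh":   lambda b: log.append(("refresh", b)), # refresh the page
}
dispatch = make_dispatcher(handlers, lambda b: log.append(("ignore", b)))
dispatch("snippet_1")
```

Keeping the mapping in one table makes it easy to add new screens or snippet buttons without touching the classifier code.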

As mentioned previously, correct usage of the application results in a music composition that the user can listen to.
