Inspiration

We were inspired by Track 2's (Neuromancer) support for multiple paradigms, such as SSVEP and motor imagery. To integrate more information and achieve more robust performance, we decided to incorporate SSVEP into our speller and use the frequency features it elicits.

What it does

Using our app, you can type using only your eyes in a VR environment! Our VR Speller evaluates both P300 and SSVEP signals, as well as color modulation, to predict the user's character choice.

How we built it

Using the starter code for the VR app, we added custom game parameters to configure the frequency at which each row/column flashes, as well as the color of the flash for rows vs. columns. To make the app playable in VR, we modified the interactive components (i.e., the Train, Test, and Continue buttons) to respond to button presses on the VR controllers. With PhysioLabXR, we added processing pipelines to extract frequency-domain features from the incoming data, using FFTs, power spectral densities, and wavelet transforms. On the processed data, we trained basic machine learning models such as linear discriminant analysis (LDA), a support vector machine (SVM), and a random forest classifier. Each model's individual performance was tested alongside a majority-voting ensemble (with soft voting).
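A minimal sketch of that classification stage, assuming scikit-learn and using synthetic arrays in place of the real EEG stream (all shapes, epoch counts, and hyperparameters here are illustrative placeholders, not our actual configuration):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for epoched EEG: (n_epochs, n_channels, n_samples)
X_time = rng.standard_normal((120, 8, 250))
y = rng.integers(0, 2, size=120)  # 1 = target flash, 0 = non-target

# Frequency-domain features: per-channel power spectrum via FFT
psd = np.abs(np.fft.rfft(X_time, axis=-1)) ** 2
X = psd.reshape(len(psd), -1)  # flatten channels x frequency bins

# Soft-voting ensemble over LDA, SVM, and random forest
clf = VotingClassifier(
    estimators=[
        ("lda", LinearDiscriminantAnalysis()),
        ("svm", SVC(probability=True)),  # probability=True enables soft voting
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",
)
clf.fit(X[:100], y[:100])
preds = clf.predict(X[100:])
```

Soft voting averages the three models' predicted class probabilities rather than counting hard votes, which lets a confident model outweigh two uncertain ones.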

Challenges we ran into

  • All of the software we used (i.e. Unicorn Hybrid Black Suite, Unity, and PhysioLabXR) had varying levels of incompatibility with macOS, and all of us arrived with macOS devices.
  • Debugging PhysioLabXR configuration errors was difficult without deep familiarity with the codebase.
  • Both the Unicorn Bluetooth connection and the Quest 2 cable connection felt unstable.

Accomplishments that we're proud of

  • We modified the board in Unity to adapt it to the SSVEP paradigm.
  • We proposed a method of integrating SSVEP into P300 tasks: extracting both time- and frequency-domain features.
  • We employed a majority-voting classifier to incorporate all of the data.

What we learned

  • Teamwork! Even though we struggled with setup, we managed to implement the features we wanted by working together.
  • Unity basics: interacting with the Quest 2 headset.
  • EEG streaming: processing EEG signals in real time (online training).
  • Research skills: drawing inspiration from the literature (combining P300 and SSVEP).

What's next for NoTV

  • Correct the "90 Epoch" issue: there seems to be a problem with sending the Flash marker to PhysioLabXR from our customized board. Instead of the expected 60 epochs, we received 90, potentially causing confusion in label generation.
  • Validate what adding SSVEP components to a P300 speller does. Is it better than using each paradigm on its own? If so, why?
  • Do colors affect the performance of a speller? What influence do they have on the user, and is there an optimal color palette for each individual?
  • Wavelet transform & wavelet scattering: when applying the wavelet transform, we ran into problems matching the data structures. In the future, we can combine wavelet transform and wavelet scattering techniques to extract frequency features that reflect the complex dynamics of brain activity across different frequency bands and scales.
  • Improve the layout of the board.
  • Employ deep neural networks such as EEGNet.
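As a rough starting point for the wavelet direction, here is a sketch of Morlet-wavelet band-power extraction done directly with NumPy on a synthetic SSVEP-like epoch (the sampling rate, frequencies, and cycle count are placeholders, not our recorded settings):

```python
import numpy as np

fs = 250  # assumed sampling rate in Hz (placeholder)

def morlet_power(signal, freqs, fs, n_cycles=5):
    """Mean power per frequency via convolution with complex Morlet wavelets."""
    powers = []
    t = np.arange(-1, 1, 1 / fs)
    for f in freqs:
        sigma = n_cycles / (2 * np.pi * f)  # Gaussian envelope width
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        conv = np.convolve(signal, wavelet, mode="same")
        powers.append(np.mean(np.abs(conv) ** 2))
    return np.array(powers)

# Synthetic epoch: a 10 Hz SSVEP-like oscillation plus white noise
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / fs)
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))

freqs = np.array([6.0, 10.0, 15.0, 20.0])  # candidate flicker frequencies
power = morlet_power(epoch, freqs, fs)
```

For a real SSVEP decoder, the frequency with maximum band power across the candidate flicker frequencies would indicate the attended stimulus; wavelet scattering would add a second averaging layer on top of these coefficients.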
