Many patients, especially those with neurological conditions such as autism spectrum disorder and dementia, have difficulty communicating their emotions. This creates a significant gap in emotional care between patients and caregivers, causing stress, frustration, and reduced quality of life.

We propose “EmoNeuro” — a novel EEG-based technology that translates emotional fluctuations into an adaptive music output. Using real-time EEG recorded with a 4-channel Muse 2 headset, we show that the ongoing emotional state can be decoded accurately with a Random Forest classifier, selected from 5 candidate machine learning models. Expressing these decoded emotions as continuously adapting music can increase the sense of presence and ‘personhood’, especially for people with impaired communicative capacities. In addition, the music can be personalized to individual preferences. With this approach, caregivers can more easily understand and tune in to the physiological state of their patients and stay emotionally connected.
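As a sketch of the model-selection step (the feature layout, the five candidate models, and the emotion classes below are illustrative assumptions, not EmoNeuro's actual pipeline), one could cross-validate several scikit-learn classifiers on EEG band-power features and keep the best performer:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per EEG epoch,
# 4 channels x 5 frequency bands = 20 band-power features.
n_epochs, n_features = 300, 20
X = rng.normal(size=(n_epochs, n_features))
y = rng.integers(0, 3, size=n_epochs)  # 3 illustrative emotion classes
X[y == 1] += 0.8                       # inject class structure so the
X[y == 2] -= 0.8                       # models have some signal to learn

# Five candidate classifiers, as in the model comparison described above.
candidates = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}

# 5-fold cross-validated accuracy for each model; pick the best one.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In a real deployment the synthetic features would be replaced by band powers computed from streaming Muse 2 epochs, and the winning model would be refit on all data before use.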


EmoNeuro can also help healthy individuals self-assess and regulate their emotions. Once aware of their current emotional state, they can improve it through a music-based neuro-feedback mechanism: for example, a ‘negative’ ongoing emotion can be counterbalanced by ‘positive’ feedback music played through the EmoNeuro graphical user interface. The current technique is based on 4-channel EEG recordings, but we also show the feasibility of a single-channel EEG set-up, which should become inexpensively available to everyone in the future. EmoNeuro can therefore assist many people in need by bridging the existing emotional gap.
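The feedback step above can be sketched as a simple mapping from decoded state to music category, with a per-user preference override for personalization (the state labels, categories, and function name are illustrative assumptions, not EmoNeuro's actual interface):

```python
# Minimal sketch of the neuro-feedback selection step.
# Labels and playlists are hypothetical placeholders.
FEEDBACK_PLAYLIST = {
    "negative": "uplifting",    # counter a negative state with positive music
    "neutral": "ambient",
    "positive": "reinforcing",  # sustain an already positive state
}

def select_feedback(decoded_emotion, preferences=None):
    """Return the music category to play for a decoded emotional state,
    optionally overridden by the listener's personal preferences."""
    preferences = preferences or {}
    return preferences.get(decoded_emotion, FEEDBACK_PLAYLIST[decoded_emotion])

print(select_feedback("negative"))                        # -> uplifting
print(select_feedback("negative", {"negative": "jazz"}))  # -> jazz
```

In the GUI, the selected category would be resolved to an actual track and streamed back to the listener, closing the feedback loop each time a new emotional state is decoded.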
