Inspiration

We all know or have met someone who is physically impaired and struggles with speech and motor movements. It takes roughly 100 different muscles working together to talk and communicate. Some people are impaired in these areas, making it hard to share their thoughts or even give simple yes or no answers.

Blink2Speech gives users a way to communicate simple commands by blinking their eyes, using EEG signals read by the Muse headset. Blinking is something a user can do independently of damage to other motor areas.

What it does

This program uses the Muse SDK, Flask, OSC, JavaScript, jQuery, Python, and HTML to build an interactive front-end webpage that lets the user move seamlessly through a grid of letters with their eyes and compose a string in response to a question.

How we built it

We first downloaded the SDK and researched how the Muse headset works and what a blink looks like on an EEG trace. After setting threshold levels, we streamed the data through an OSC server that talks to a Python Flask server. The front end uses AJAX to asynchronously update the page through a jQuery loop, letting both third-party viewers and the user watch the cursor move over the alphabet and build the output string.
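The blink-detection step above can be sketched in a few lines. This is a hypothetical simplification: the real pipeline receives Muse EEG packets over OSC (e.g. via a library such as python-osc) and the threshold and refractory values here are illustrative, not the ones we tuned.

```python
class BlinkDetector:
    """Flags a blink when an EEG sample crosses a fixed amplitude threshold.

    Sketch only: in the real app each incoming /muse/eeg OSC message would
    be fed to feed(), and detections forwarded to the Flask server.
    """

    def __init__(self, threshold=800.0, refractory=3):
        self.threshold = threshold    # amplitude that counts as a blink (assumed units)
        self.refractory = refractory  # samples to ignore after a detection
        self._cooldown = 0

    def feed(self, sample):
        """Return True if this sample registers as a new blink."""
        if self._cooldown > 0:        # still inside the refractory window
            self._cooldown -= 1
            return False
        if abs(sample) >= self.threshold:
            self._cooldown = self.refractory
            return True
        return False
```

The refractory window keeps one physical blink, which spans several samples, from being counted more than once.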

Challenges we ran into

We initially planned to differentiate quick blinks from holding the eyes closed to create a Morse-code-style input that would output a string of characters. However, we found that we could not distinguish the two accurately, so we switched to a grid system that relies solely on blinks to traverse the table of letters.
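The grid system can be sketched as a row/column scanner: a timer advances the highlight, a first blink locks a row, and a second blink selects a letter within it. This is an illustrative model, not our exact implementation; the real cursor is rendered in HTML and driven by the jQuery loop, and the grid width here is an assumption.

```python
import string

class GridScanner:
    """Scan a letter grid: tick() advances the highlight on a timer,
    blink() locks the current row or selects the highlighted letter."""

    def __init__(self, cols=6):
        letters = string.ascii_uppercase + " "   # include a space key
        self.grid = [letters[i:i + cols] for i in range(0, len(letters), cols)]
        self.row = 0
        self.col = 0
        self.row_mode = True   # scan rows first, then letters within the row
        self.output = []

    def tick(self):
        """Advance the cursor one step (called on a timer)."""
        if self.row_mode:
            self.row = (self.row + 1) % len(self.grid)
        else:
            self.col = (self.col + 1) % len(self.grid[self.row])

    def blink(self):
        """First blink locks the row; second blink selects a letter."""
        if self.row_mode:
            self.row_mode = False
            self.col = 0
        else:
            self.output.append(self.grid[self.row][self.col])
            self.row_mode = True

    def text(self):
        return "".join(self.output)
```

For example, with a 6-column grid, one tick plus a blink locks the row "GHIJKL", and one more tick plus a blink appends "H" to the output string.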

Accomplishments that we're proud of

We are proud of building an interactive web interface that talks to the Muse headset and produces an output string.

What we learned

We learned how to read EEG waves and how to calculate threshold levels. We also learned how to make an interactive front end.
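One simple way to pick a threshold, shown here as a hedged sketch rather than our exact calibration, is to record a short no-blink baseline and set the cutoff a few standard deviations above its mean, since blink artifacts are far larger than resting signal variation. The constant `k` is an assumed tuning parameter.

```python
import statistics

def blink_threshold(baseline, k=3.0):
    """Estimate a blink threshold from resting (no-blink) EEG samples.

    Illustrative calibration: mean + k standard deviations of the
    baseline; k is a hypothetical tuning constant.
    """
    mean = statistics.fmean(baseline)
    std = statistics.pstdev(baseline)
    return mean + k * std
```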

What's next for Blink2Speech

Make blink detection more precise for quicker input, and add a text-to-speech API such as Watson to speak the string after the user has 'blinked' it out!

Built With

flask, html, javascript, jquery, muse, osc, python
