Inspiration

Mind Pilot is rehabilitative software designed to empower people with reduced mobility in their arms and hands. People across the world struggle with reduced mobility due to partial paralysis, amputation, Parkinson's disease, arthritis, pain, or injury. Everyone deserves equal access to technology for their work, education, and recreation.

Tools

Mind Pilot consists of two components. The first uses the OpenCV computer vision library in Python for facial tracking: it estimates the position and direction of the user's nose and maps them to a target position for the cursor.
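The cursor-mapping step can be sketched as below. This is a simplified illustration, not the project's actual code: it assumes OpenCV face tracking has already produced the nose coordinates `(nose_x, nose_y)` in frame pixels, and the frame/screen resolutions and dead-zone size are hypothetical values chosen for the example.

```python
# Hypothetical sketch of the nose-to-cursor mapping. In the real system,
# OpenCV face tracking supplies the nose coordinates each frame; here we
# assume (nose_x, nose_y) are already given in frame-pixel space.

FRAME_W, FRAME_H = 640, 480      # assumed webcam resolution
SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution

def nose_to_cursor(nose_x, nose_y, dead_zone=0.05):
    """Map a nose position in the camera frame to a screen cursor position.

    The frame is mirrored horizontally so the cursor moves in the same
    direction as the user's head. A small dead zone around the frame
    center keeps the cursor steady when the head is roughly still.
    """
    # Normalize to [-0.5, 0.5] around the frame center (mirrored in x).
    dx = 0.5 - nose_x / FRAME_W
    dy = nose_y / FRAME_H - 0.5
    # Ignore tiny offsets so small head tremors do not jitter the cursor.
    if abs(dx) < dead_zone:
        dx = 0.0
    if abs(dy) < dead_zone:
        dy = 0.0
    # Map back to absolute screen coordinates, clamped to the screen.
    cx = min(max(int((0.5 + dx) * SCREEN_W), 0), SCREEN_W - 1)
    cy = min(max(int((0.5 + dy) * SCREEN_H), 0), SCREEN_H - 1)
    return cx, cy
```

A library such as PyAutoGUI could then move the OS cursor to the returned coordinates each frame.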

The second component uses reinforcement learning for motor imagery classification. A trained RL agent detects the EEG signals produced when the user raises their eyebrows and interprets them as a mouse click. We trained the agent in real time using Hecatron, rewarding it whenever it correctly predicted the click action upon receiving the eyebrow-raise EEG signals.
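The reward loop described above can be sketched with a toy tabular Q-learning agent. This is an assumption-laden simplification: the real system trains with Hecatron on live EEG, whereas here the "EEG signal" is reduced to a single binary feature (eyebrows raised or not) and the `ClickAgent` class, reward values, and training loop are all invented for illustration.

```python
import random

ACTIONS = ["no_click", "click"]

class ClickAgent:
    """Toy tabular Q-learning agent mapping an eyebrow-raise feature to a click."""

    def __init__(self, alpha=0.5, epsilon=0.1):
        # Q-value for every (state, action) pair; state 1 = eyebrows raised.
        self.q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration rate

    def act(self, state):
        # Epsilon-greedy: explore occasionally, otherwise pick the best action.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward):
        # One-step bandit-style update: nudge Q toward the observed reward.
        self.q[(state, action)] += self.alpha * (reward - self.q[(state, action)])

def train(agent, steps=2000):
    for _ in range(steps):
        state = random.randint(0, 1)          # simulated eyebrow feature
        action = agent.act(state)
        correct = "click" if state == 1 else "no_click"
        agent.learn(state, action, 1.0 if action == correct else -1.0)

random.seed(0)
agent = ClickAgent()
train(agent)
agent.epsilon = 0.0  # act greedily once trained
```

After training, the greedy agent clicks only on the eyebrow-raise state, mirroring the reward scheme described in the text.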

Conclusion

The introduction our team received to neurotechnology, BCI, and computer vision software was invaluable. We're proud of all of the new concepts we learned while creating Mind Pilot!

Next Steps for Mind Pilot

We imagine this software growing to provide even more functionality. Two paths for improvements we recognize are:

  • Improved pose estimation algorithms for more precise cursor movement
  • Multiple motor movement classification targets to enable the user to perform different mouse actions and macros
