What is the inspiration
Our project was inspired by a notorious moment in television history: in 1997, an episode of the Pokémon anime aired in Japan and left around 685 children in hospital beds. The episode contained rapid flashes of red and blue that triggered seizures in photosensitive viewers.
As engineering students interested in applying our skills to real-world problems, we saw this incident as an opportunity to learn, and decided to develop a program to help people with epilepsy, who we believe aren't given enough consideration in society.
What does it do
The program pre-screens videos to ensure they do not contain flashing imagery. Users can choose between YouTube videos and local files, and can rate how sensitive they are to flashing imagery on a scale of 1 to 3. The program then reports all sensitive time ranges in the video.
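The write-up doesn't show how the 1–3 sensitivity scale maps onto the flagged time ranges; a minimal sketch of one plausible mapping (the threshold values and the `flag_chunks` helper are illustrative assumptions, not the project's actual code) might look like:

```python
# Hypothetical mapping from the 1-3 sensitivity scale to a risk threshold:
# higher sensitivity -> lower threshold, so more chunks get flagged.
SENSITIVITY_THRESHOLDS = {1: 0.8, 2: 0.5, 3: 0.3}  # illustrative values

def flag_chunks(chunk_risks, sensitivity):
    """Return (start, end) second ranges whose risk meets the user's threshold.

    chunk_risks: list of per-second risk scores in [0, 1], one per chunk.
    """
    threshold = SENSITIVITY_THRESHOLDS[sensitivity]
    return [(i, i + 1) for i, risk in enumerate(chunk_risks) if risk >= threshold]
```

Under this sketch, a user who reports sensitivity 3 would be warned about many more time ranges than one who reports sensitivity 1.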
How we built it
Our program works by scanning the video for rapid shifts between red and blue at frequencies between 5 and 30 Hz, a combination of colors and frequencies known to trigger seizures in photosensitive viewers.
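One simple way to make red/blue shifts visible to a frequency scan is to collapse each frame into a single red-minus-blue value, so that alternation between the two colors becomes an oscillating 1-D signal. This is a sketch under that assumption, not the project's actual extraction step:

```python
import numpy as np

def red_blue_signal(frames):
    """Per-frame difference between mean red and mean blue intensity.

    frames: sequence of (height, width, 3) RGB frames.
    Rapid red/blue alternation shows up as a strong oscillation
    in this 1-D trace, which a frequency scan can then pick up.
    """
    frames = np.asarray(frames, dtype=np.float64)
    mean_red = frames[..., 0].mean(axis=(1, 2))   # average red per frame
    mean_blue = frames[..., 2].mean(axis=(1, 2))  # average blue per frame
    return mean_red - mean_blue
```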
The script breaks the video into one-second chunks and uses pixel sampling to reduce the resolution of the video frames, cutting memory usage.
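The chunking and pixel-sampling step could be sketched as follows; the stride-based subsampling here is one assumed interpretation of "pixel sampling techniques" (keeping every nth pixel in each dimension cuts memory roughly by n²), not necessarily the exact method the team used:

```python
import numpy as np

def chunk_and_downsample(frames, fps, stride=8):
    """Split a video into one-second chunks of subsampled frames.

    frames: array of shape (n_frames, height, width, 3).
    fps: frames per second, so each chunk spans exactly one second.
    stride: keep every stride-th pixel; large-area flashes survive
            this subsampling while memory use drops by ~stride**2.
    """
    frames = np.asarray(frames)
    small = frames[:, ::stride, ::stride, :]
    n_chunks = len(small) // fps
    return [small[i * fps:(i + 1) * fps] for i in range(n_chunks)]
```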
Following that, an algorithm rates the risk of each chunk, scanning every frequency between 5 and 30 Hz when analyzing it. Scanning the whole band, rather than comparing adjacent frames, helps us pinpoint genuine flashing instead of simple frame changes. Finally, the program prints the time ranges to be careful of.
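The write-up doesn't specify the rating algorithm; one standard way to score a chunk across the whole 5–30 Hz band is to measure how much of the chunk's spectral power falls in that band. This is a minimal sketch under that assumption, operating on a per-chunk 1-D color trace:

```python
import numpy as np

def chunk_risk(signal, fps, low_hz=5, high_hz=30):
    """Fraction of a chunk's spectral power inside the 5-30 Hz danger band.

    signal: 1-D trace (e.g. red-minus-blue) for one one-second chunk.
    Sustained flashing concentrates power in the band, while an isolated
    scene cut spreads its energy broadly, keeping the score low.
    """
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    total = spectrum.sum()
    return float(spectrum[band].sum() / total) if total > 0 else 0.0
```

A 10 Hz flicker scores near 1.0, while a static chunk scores 0.0, which is the distinction between flashing and ordinary frame changes that the text describes.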
Challenges we ran into
When we began work on the program, we planned to use OpenCV to analyze the frames. However, we soon ran into compatibility problems: certain versions of OpenCV refused to run under certain versions of our Python installations despite being listed as compatible.
One of the biggest challenges we faced was developing an algorithm that could reasonably accurately determine whether a chunk contained seizure-inducing content. Naive image comparisons between consecutive frames produced large numbers of false positives and were completely ineffective.
Accomplishments that we're proud of
While we could have limited the scanning functionality to videos in our local directory, we went a step further and implemented YouTube link compatibility to create a product that is actually convenient enough for general use. We're proud of having adapted our code to focus on the user's experience!
We're just as proud of having tried our best to create a product that we believe could genuinely help people who manage epilepsy in their daily lives!
What we learned
We learned how to apply Python modules and the importance of documentation.
We learned the importance of good design and planning: team members agreed on each function's inputs and outputs before actually writing those functions.
We also learned that sleep is for the weak and that it takes 24 hours of staying awake to hack like you mean it.
What's next for EpilepSafe
As for the future of EpilepSafe, we want to add compatibility for more websites, make scans more accurate by learning from previous user input and surveys, and massively cut down the processing time for a smoother user experience.
