Inspiration
We drew inspiration from NaviLens, a tool that helps blind people navigate FGC (Ferrocarrils de la Generalitat de Catalunya) stations.
What it does
It uses a phone camera and a fine-tuned YOLOv8 model to detect what is in front of the user, filters for the station items relevant to navigation, and announces them to the user via text-to-speech. A rough sketch of that loop follows below.
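Here is a minimal sketch of the detect, filter, and announce loop, not the exact implementation: the weights path, the class names, OpenCV as a stand-in for the phone camera, and pyttsx3 for text-to-speech are all assumptions.

```python
# Sketch of the detect -> filter -> announce loop.
# Assumptions: fine-tuned weights at "ferrovision.pt", a webcam standing in
# for the phone camera, and pyttsx3 for offline text-to-speech.
import cv2
import pyttsx3
from ultralytics import YOLO

# Station items we consider relevant for navigation (hypothetical names).
RELEVANT = {"ticket_machine", "validator", "turnstile", "elevator", "stairs"}

model = YOLO("ferrovision.pt")  # fine-tuned YOLOv8 weights (assumed path)
tts = pyttsx3.init()

cap = cv2.VideoCapture(0)
announced = set()  # avoid repeating the same announcement every frame

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    # Map detected class ids to names and keep only the relevant ones.
    found = {results.names[int(box.cls)] for box in results.boxes}
    for name in found & RELEVANT:
        if name not in announced:
            tts.say(f"{name.replace('_', ' ')} ahead")
            tts.runAndWait()
            announced.add(name)
    # Drop names that left the frame so they can be announced again later.
    announced &= found

cap.release()
```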
How we built it
We fine-tuned a YOLOv8n segmentation model on a dataset of FGC station objects that we created and annotated with Roboflow, so that the model could learn the few classes it wasn't already familiar with.
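The training step looks roughly like this, assuming the Roboflow dataset was exported in YOLOv8 format with a data.yaml; the path and hyperparameters below are illustrative, not our actual values.

```python
# Sketch of the fine-tuning step on the custom FGC dataset.
from ultralytics import YOLO

# Start from the pretrained YOLOv8n segmentation checkpoint.
model = YOLO("yolov8n-seg.pt")

# Fine-tune on the Roboflow export (assumed path and settings).
model.train(
    data="fgc-dataset/data.yaml",  # Roboflow YOLOv8-format export
    epochs=50,                     # illustrative value
    imgsz=640,
)

# Validate the fine-tuned weights before using them for real-time inference.
metrics = model.val()
```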
Challenges we ran into
There was no existing dataset of FGC station images, so we had to build one ourselves. To make detection run in real time, we also had to try out several pretrained models before finding one fast enough for our use case.
Accomplishments that we're proud of
Our improvised dataset and fine-tuned model have been integrated smoothly, and the system detects the station machines reliably.
What we learned
How to train YOLOv8 models, and how to run segmentation in real time.
What's next for FerroVision
An Android/iOS application that integrates the detection pipeline.