IVision is a project developed during UofTHacks VII, a 36-hour hackathon hosted at the University of Toronto from January 17–19, 2020.
Live video object detection for the visually impaired, with haptic feedback when the user nears an object.
- Using Darknet (a neural network framework), we ran YOLOv3 ("You Only Look Once", a CNN-based algorithm for real-time object detection) with pre-trained convolutional weights covering approximately 80 object classes.
- Training data was taken from OpenImages. Other libraries such as Keras and TensorFlow ran in the backend.
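The haptic feedback idea above can be sketched as a simple mapping from a detection's bounding-box size to a vibration strength: a larger box suggests a closer object. This is a minimal illustrative sketch, not the project's actual code; the function name, frame size, and thresholds are assumptions.

```python
def haptic_intensity(box_w, box_h, frame_w=416, frame_h=416, threshold=0.05):
    """Map a detected bounding box's relative size to a vibration strength in [0, 1].

    Hypothetical helper: a larger box is treated as a closer object.
    Boxes covering less than `threshold` of the frame produce no feedback;
    intensity saturates at 1.0 once the box covers half the frame.
    """
    coverage = (box_w * box_h) / float(frame_w * frame_h)
    if coverage < threshold:
        return 0.0
    return min(1.0, coverage / 0.5)

# Example: a small, distant object triggers no feedback,
# while a box filling a quarter of the frame vibrates at half strength.
print(haptic_intensity(40, 40))    # small box, far away
print(haptic_intensity(208, 208))  # box covers 25% of the frame
```

In the real pipeline these box dimensions would come from YOLOv3's detections on each video frame, and the returned intensity would drive the haptic motor.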