PROBLEM
Our sense of vision is a crucial aspect of our daily lives, enabling us to perform essential tasks like learning, walking, and reading. According to the Vision Atlas, at least 43 million people worldwide are living with blindness.
While there are some cost-effective or complimentary vision impairment devices available to assist individuals coping with blindness, the range of options remains quite constrained. Furthermore, the high-quality alternatives often come with a substantial price tag, making them unaffordable for many. For instance, a cutting-edge Braille display device like the HumanWare BrailleNote Touch Plus, which offers advanced features and connectivity, can cost upwards of $5,000. These elevated costs pose a significant barrier to accessibility for those who would greatly benefit from such devices.
Although such devices are effective for people living with blindness, this equipment can contribute to electronic waste through improper disposal and limited recycling options, raising further concerns about environmental pollution.
SOLUTION
To tackle these challenges, we formulated a single solution that addresses all three: open-source software that assists visually impaired individuals in real time with object identification and audio feedback. This cuts the cost of accessing high-quality aids, and its cross-platform design with a customizable user interface ensures accessibility for blind individuals regardless of the platform they use. It also diminishes reliance on dedicated vision-impairment hardware, which in turn reduces electronic waste.
PRODUCT
Introducing Audiocular, our cutting-edge open-source software meticulously designed to provide support for individuals with visual impairments by offering real-time audio descriptions of their surroundings. Our commitment to open-source principles allows developers to actively participate in our mission, fostering a sense of community and collaboration. Together, we aim to bridge the gap in accessibility while simultaneously addressing environmental concerns.
Audiocular transcends conventional limitations, paving the way for a more inclusive and sustainable future.
HOW IT WORKS
We trained our Python module on a labelled image dataset so that Audiocular can perform real-time object classification of its surroundings. The software is compatible with any camera capable of capturing live images. When an image is captured, the Python module processes it and swiftly identifies and classifies the object depicted.
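The capture-and-classify step can be sketched as follows. This is a minimal illustration under stated assumptions, not Audiocular's actual code: `classify_frame` is a stand-in for the trained model, and the label list and confidence threshold are invented for the example. In a real build, the frame would come from a live camera (e.g. via OpenCV) and the scores from model inference.

```python
# Sketch of one iteration of the real-time classification loop.
# `classify_frame` is a placeholder for the trained model.

LABELS = ["person", "chair", "door", "cup"]  # assumed example classes

def top_label(scores, labels, threshold=0.5):
    """Pick the most confident class, or None if nothing is confident enough."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return labels[best] if scores[best] >= threshold else None

def classify_frame(frame):
    """Placeholder: a trained model would return per-class probabilities here."""
    return [0.05, 0.10, 0.05, 0.80]

scores = classify_frame(frame=None)   # the frame would come from the camera
label = top_label(scores, LABELS)     # -> "cup"
```

Thresholding matters here: announcing every low-confidence guess would flood the user with noise, so uncertain frames are skipped.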
Once an object is recognised, the system transforms that information into an audio output that narrates the name of the identified object to the user. This pipeline gives Audiocular a seamless and informative experience for individuals with visual impairments, enhancing their understanding of and interaction with the world around them.
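The narration step can be sketched like this. Again this is illustrative rather than the project's code: it formats the spoken phrase and suppresses rapid repeats of the same object so the user is not told "cup" thirty times a second. The returned phrase would be handed to a text-to-speech engine (for instance a library such as pyttsx3, an assumption on our part, stubbed out here).

```python
import time

class Narrator:
    """Turn recognised labels into spoken phrases, skipping rapid repeats."""

    def __init__(self, cooldown=3.0, clock=time.monotonic):
        self.cooldown = cooldown      # seconds before the same label repeats
        self.clock = clock            # injectable clock, handy for testing
        self._last_label = None
        self._last_time = float("-inf")

    def announce(self, label):
        """Return the phrase to speak, or None if it should be suppressed."""
        now = self.clock()
        if label == self._last_label and now - self._last_time < self.cooldown:
            return None               # same object seen moments ago: stay quiet
        self._last_label, self._last_time = label, now
        return f"{label} ahead"

narrator = Narrator()
phrase = narrator.announce("cup")     # -> "cup ahead"; pass this to a TTS engine
```

The cooldown is a design choice: without it, a static scene would produce a continuous stream of identical announcements.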
WHAT'S NEXT
Over the course of the next four years in our development journey, we have exciting plans to introduce a range of additional features to Audiocular. These enhancements will include a distance detection capability, text recognition functionality, and an innovative feature for assisting the hearing impaired through sign language recognition. Furthermore, we're committed to making our software even more versatile by offering multi-language support, ensuring that it caters to a broader audience with diverse needs. Our mission is to continually evolve and expand Audiocular's capabilities to provide comprehensive assistance to individuals with varying sensory and language requirements.
Besides developing Audiocular for different platforms, we also want to integrate our software with VR technology so users can identify objects hands-free. We will also run trials with volunteers for beta testing, gathering user feedback for bug fixes and improvements before our official launch.
WHAT WE LEARNT
Technical skills: Training ML models with Python
Non-technical skills: Communication, Collaboration