Inspiration
When I started programming, my main goal was to do something for people: to build products that would solve real-world problems. But as I grew, fame and fortune became my two biggest aspirations. Then one day I was talking to my mom, telling her how well I was doing academically and professionally, and she caught me off guard when she asked, "So, what did you do for the people?" I couldn't utter a word. So the moment I had this idea, I knew I had to pursue it. Technology is a huge part of our society, and nowhere is that more visible than in social media. Facebook, Instagram, Twitter, Snapchat, and countless other platforms connect an enormous variety of people around the world, and a large part of that online media consists of photos and videos. It suddenly dawned on me that a large community of people is denied the simple pleasure of enjoying this media. What I wanted to do was solve at least one such problem, so that we can start a dialogue about expanding inclusivity for people who experience the world differently, who are currently barred from enjoying the full capabilities of online social media, and about how we must actively seek ways to address this.
What it does
Our Android app provides an interface for people with visual impairments to sense the world of color in the form of sound. Using computer vision, it captures a color from the camera feed and converts it to a particular frequency that the person can hear. With this range of frequencies, we allow them to visualize their surroundings through the power of sound.
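The core color-to-frequency mapping could be sketched roughly like this (a hypothetical illustration, not our exact code: the 220–880 Hz band and the hue-based mapping are assumptions for the example):

```java
// Hypothetical sketch: map an RGB color to an audible frequency via its hue.
public class ColorToTone {

    // Compute hue in degrees [0, 360) from RGB components in [0, 255].
    public static double hue(int r, int g, int b) {
        double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
        double max = Math.max(rf, Math.max(gf, bf));
        double min = Math.min(rf, Math.min(gf, bf));
        double delta = max - min;
        if (delta == 0) return 0; // gray: no hue
        double h;
        if (max == rf)      h = ((gf - bf) / delta) % 6;
        else if (max == gf) h = (bf - rf) / delta + 2;
        else                h = (rf - gf) / delta + 4;
        h *= 60;
        return h < 0 ? h + 360 : h;
    }

    // Map hue linearly onto an audible band: 220 Hz (red) up to 880 Hz.
    public static double hueToFrequency(double hue) {
        return 220.0 + (hue / 360.0) * (880.0 - 220.0);
    }
}
```

With a linear mapping like this, pure red (hue 0) produces 220 Hz, green (hue 120) produces 440 Hz, and so on, so each color region of the environment gets a recognizably distinct pitch.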
How we built it
First, we used a camera API to constantly monitor the environment around the user and let them choose a portion of the live preview in the app. We then use the Azure API to find the dominant color of that section and convert it to a particular frequency, which we play back with AudioTrack.
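Playing a frequency through AudioTrack requires a buffer of raw PCM samples. The tone-generation step could look something like the sketch below (an illustrative sine-wave generator under assumed parameters, not our exact implementation; on Android the resulting array would be passed to `AudioTrack.write`):

```java
// Hypothetical sketch: generate a 16-bit PCM sine-tone buffer.
// On Android, this buffer would be written to an AudioTrack configured
// with the same sample rate.
public class ToneGenerator {

    public static short[] sineTone(double frequencyHz, int sampleRate, double durationSec) {
        int numSamples = (int) (sampleRate * durationSec);
        short[] samples = new short[numSamples];
        for (int i = 0; i < numSamples; i++) {
            double t = i / (double) sampleRate; // time of sample i in seconds
            samples[i] = (short) (Math.sin(2 * Math.PI * frequencyHz * t) * Short.MAX_VALUE);
        }
        return samples;
    }
}
```

For example, `sineTone(440.0, 44100, 0.5)` yields half a second of a 440 Hz tone at the standard 44.1 kHz sample rate.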
Challenges we ran into
The Azure API's documentation for Android was very difficult to understand, as it lacked many important code snippets. The response time for converting a picture to a color was also very slow, which hindered our main goal of real-time environment analysis.
Accomplishments that we're proud of
We are able to extract colors from images accurately and to map each color to a distinct frequency. But the main highlight of the project is that we created our own language of color expressed through sound, one that has the potential to be used by many people and has applications beyond this app.
What we learned
We learned to use threads to run code in parallel, which allows different features of our application to run at the same time. We also learned about many forms of error handling that we didn't know existed in the Android development environment.
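The threading pattern above can be sketched with a plain `ExecutorService`: the slow network request (like the color-analysis call) runs on a background thread so the camera preview stays responsive. This is a simplified, hypothetical illustration; the `analyzeAsync` name and its placeholder body are assumptions for the example:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: run a slow request (e.g. the color-analysis call)
// on a background thread so the UI thread is never blocked.
public class BackgroundWork {

    private static final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Submit the work and return immediately; the Future resolves later.
    public static Future<String> analyzeAsync(String imageId) {
        return executor.submit(() -> {
            // Placeholder for the real blocking call to the vision service.
            return "dominant-color-for-" + imageId;
        });
    }

    // Convenience wrapper that waits for the result.
    public static String analyzeBlocking(String imageId) throws Exception {
        return analyzeAsync(imageId).get();
    }
}
```

The UI thread submits the job, carries on rendering the preview, and picks up the result from the `Future` once it completes.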
What's next for Cologram
We plan to add real-time depth analysis, which would allow users not only to be aware of the colors in their environment but also to perceive different objects around them. We would then test the application with different users and, based on their feedback, work toward standardizing the language of color and sound.
Built With
- android-studio
- azure