Inspiration

One of our developers was inspired by a conversation with a blind man who was her Uber passenger. They discussed computer science, and she learned that he had a background in programming and often had to write a script from scratch every time he wanted to access data. Her first idea was to ease the burden on consumers by making data easier to manage through voice commands; the idea then evolved into helping the visually impaired access social media.

What it does

"Megane" uses Amazon Alexa to listen to the user's voice commands and access their social media account. The user can then browse their feed and ask Alexa questions such as "What is the name of the person who posted? What is happening in this picture? What does the status say?" Using Clarifai's image recognition and Alexa's text-to-speech, Alexa voices back concepts detected in the image and reads the on-screen text aloud. Overall, the app gives voice to what the user cannot see.
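The "voice back concepts" step could look something like the sketch below: image recognition returns a list of concepts with confidence scores, and we turn the confident ones into a sentence for Alexa to speak. The concept names and the 0.85 confidence threshold here are illustrative assumptions, not values from the actual project.

```python
# Sketch: turn image-recognition output (concept, confidence) pairs into
# a spoken description. Threshold and sample concepts are assumptions.

def describe_concepts(concepts, threshold=0.85):
    """Build a spoken sentence from (name, confidence) pairs."""
    names = [name for name, score in concepts if score >= threshold]
    if not names:
        return "I'm not sure what is in this picture."
    if len(names) == 1:
        return f"This picture may show {names[0]}."
    listed = ", ".join(names[:-1]) + f", and {names[-1]}"
    return f"This picture may show {listed}."

print(describe_concepts([("a person", 0.98), ("a dog", 0.91), ("grass", 0.60)]))
# → This picture may show a person, and a dog.
```

Low-confidence concepts ("grass" at 0.60) are dropped so Alexa only reads back what the recognizer is reasonably sure about.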

How we built it

We used Python to power the Alexa skill and defined its voice commands (intents) with the Alexa Skills Kit. We then used the Facebook API to authenticate the Facebook login.
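In broad strokes, a Python backend for an Alexa skill receives intent requests and replies with a JSON envelope containing the text to speak. The sketch below shows that shape; the intent name "ReadStatusIntent" and the placeholder status text are our illustrative assumptions, not the project's actual intent schema.

```python
# Minimal sketch of an Alexa skill backend in Python. The JSON envelope
# ("version", "response.outputSpeech") follows the Alexa Skills Kit
# response format; the intent name and reply text are assumptions.

def build_speech_response(text, end_session=False):
    """Wrap spoken text in the envelope the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def handle_request(event):
    """Route an incoming Alexa request to a spoken reply."""
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        return build_speech_response("Welcome to Megane. What would you like to hear?")
    if request.get("type") == "IntentRequest":
        if request["intent"]["name"] == "ReadStatusIntent":
            # In the real skill, this text would come from the user's
            # authenticated Facebook feed.
            return build_speech_response("The status says: hello world.", end_session=True)
    return build_speech_response("Sorry, I didn't catch that.")
```

Keeping the response-building separate from the routing makes each intent handler a small function that just returns text for Alexa to speak.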

Challenges we ran into

Authenticating with Facebook, and designing for a non-graphical user interface.

Accomplishments that we're proud of

We are proud of having learned new skills, such as the Alexa Skills Kit and integrating APIs into our app, and of being able to program Alexa's voice commands the way we wanted despite our beginner-level programming knowledge.

What we learned

We learned how to integrate APIs into our program and how to build skills with the Alexa Skills Kit.

What's next for Megane

So far, we have programmed Alexa to respond to a certain command by authorizing the Facebook login. Next, we will develop access to the feed, then add visual recognition of photos and program Alexa to read aloud the concepts and text returned by the APIs. Once we have accomplished this, we will connect our app to other social media, such as Snapchat and Instagram.
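The planned feed step might look like the sketch below: the Facebook Graph API returns a feed as a JSON object with a "data" list of posts, which we would convert into sentences for Alexa to read. The sample payload is made up for illustration; a real call would fetch the feed with the user's access token.

```python
# Hedged sketch of the planned feed feature: convert a Graph API-style
# feed payload (a "data" list of posts) into sentences Alexa can speak.
# The sample payload below is invented for illustration.

def feed_to_speech(payload):
    """Turn a feed payload into a list of spoken sentences."""
    sentences = []
    for post in payload.get("data", []):
        author = post.get("from", {}).get("name", "Someone")
        message = post.get("message")
        if message:
            sentences.append(f"{author} posted: {message}")
        else:
            sentences.append(f"{author} shared a post with no text.")
    return sentences

sample = {"data": [{"from": {"name": "Ada"}, "message": "Hello!"},
                   {"from": {"name": "Alan"}}]}
print(feed_to_speech(sample))
# → ['Ada posted: Hello!', 'Alan shared a post with no text.']
```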
