I wanted to build something that would be of benefit and might actually help the people who use it. It's not perfect, but I hope to keep tweaking and refining it until it becomes a viable, fully usable app that makes people's lives a little easier.
LetsC is a voice-activated assistant that analyzes images from the camera and describes them aloud via text-to-speech for visually impaired users.
I built it using Azure's Cognitive Services Computer Vision API, the Web Speech API for both speech-to-text and text-to-speech, an Express server, and JavaScript.
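A minimal sketch of the Azure side of that pipeline, assuming the v3.2 `describe` REST endpoint; the endpoint URL and subscription key here are placeholders, since the real resource names aren't in this write-up:

```javascript
// Build the Computer Vision "describe" URL for a given Azure endpoint.
// (Placeholder endpoint; substitute your own Cognitive Services resource.)
function buildDescribeUrl(endpoint, maxCandidates = 1) {
  return `${endpoint}/vision/v3.2/describe?maxCandidates=${maxCandidates}`;
}

// POST a captured image (binary Blob/Buffer) to Azure and return the best caption.
async function describeImage(endpoint, key, imageData) {
  const res = await fetch(buildDescribeUrl(endpoint), {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": key,
      "Content-Type": "application/octet-stream",
    },
    body: imageData,
  });
  const json = await res.json();
  // Azure returns captions under description.captions, each with text + confidence.
  return json.description.captions[0]?.text ?? "No description available";
}
```

The returned caption text is what then gets handed to text-to-speech on the client.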
Hooking up to Azure was a bit of a challenge, as was working with jQuery and getting everything to tie together.
I'm proud of the fact that it is currently operational!
I learned to use the Computer Vision API, as well as the text-to-speech and speech-to-text APIs.
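A browser-side sketch of the two Web Speech API halves; the trigger word "describe" is a hypothetical stand-in, since the write-up doesn't say what phrase the app actually listens for:

```javascript
// Check whether a recognized transcript contains the trigger word.
// ("describe" is a hypothetical trigger, not necessarily what LetsC uses.)
function isTriggerPhrase(transcript) {
  return transcript.toLowerCase().includes("describe");
}

// Speech-to-text: listen for the trigger, then hand off (browser only).
function listenForCommand(onTrigger) {
  const SpeechRecognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new SpeechRecognition();
  recognition.onresult = (event) => {
    const transcript = event.results[0][0].transcript;
    if (isTriggerPhrase(transcript)) onTrigger();
  };
  recognition.start();
}

// Text-to-speech: read the caption returned by the vision API aloud.
function speak(text) {
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}
```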
Refining it until it is something that could be widely used to help people. Hopefully I can contribute more machine-learning pieces to it in the future.