The beginning
It all started with the pitch by Zeiss and our lofty ideas of how we could turn that into a working service for the Amazon Echo ecosystem. Our ideas quickly came crashing down when we started with AWS Lambda and were overwhelmed by the complexity of the system and its functions. After working for three hours we came to the conclusion that taking a picture directly with an Echo device as part of an Alexa skill was not technically possible, so we decided to concentrate on the segment of the workflow between the doctor and the cloud.
An Android app is in the works to upload images directly into the S3 bucket, where each image is categorised and attached to a patient registered in the database. Stored images can be retrieved by sending the patient's full name to the Lambda function, which returns a card containing information about the patient together with the stored image, viewable directly on the Echo Show.
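One way to categorise uploads and attach them to a patient is to encode both in the S3 object key itself, so the Lambda function can later list a patient's images by prefix. The following is a minimal sketch of such a key scheme; the `patients/` prefix, the category names, and the timestamp format are all illustrative assumptions, not the project's actual bucket layout.

```python
import re
from datetime import datetime, timezone

def build_s3_key(patient_name: str, category: str = "uncategorised") -> str:
    """Build an S3 object key that groups images by patient and category.

    Assumed layout: patients/<name-slug>/<category>/<timestamp>.jpg
    """
    # Normalise the patient name so it is safe to use inside an object key.
    slug = re.sub(r"[^a-z0-9]+", "-", patient_name.lower()).strip("-")
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"patients/{slug}/{category}/{timestamp}.jpg"
```

With keys shaped like this, the Android app only needs a plain `PutObject` upload, and the lookup side can resolve a spoken patient name to the prefix `patients/<name-slug>/` without a separate index.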
The App
We decided to focus on what is, in our opinion, the most important part: easy accessibility of data for doctor and client, and the interconnection between the two. We realised this with an Android app for the client to upload pictures and an Alexa skill on the doctor's side to examine them.
The skill
Because of the current lack of functionality, the Alexa skill only works on the doctor's side right now. The doctor can ask Alexa to show a picture of one of their patients by adding the patient's name to the question.
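Inside the Lambda function, the doctor's request can be answered by reading the patient name from an intent slot and returning an Alexa response with a Standard card, which the Echo Show renders with the image. The sketch below shows the raw response JSON; the slot name `PatientName` and the image URL scheme are assumptions for illustration and may differ from the skill's real interaction model.

```python
def build_patient_response(intent: dict, image_base_url: str) -> dict:
    """Build a raw Alexa JSON response showing a patient's stored image.

    `intent` is the intent object from the Alexa request envelope;
    `image_base_url` is assumed to point at the public S3 location.
    """
    # Read the spoken patient name from the (assumed) PatientName slot.
    name = intent["slots"]["PatientName"]["value"]
    image_url = f"{image_base_url}/{name.lower().replace(' ', '-')}.jpg"
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"Here is the latest image for {name}.",
            },
            # A Standard card carries an image that the Echo Show displays.
            "card": {
                "type": "Standard",
                "title": name,
                "text": "Latest uploaded image",
                "image": {
                    "smallImageUrl": image_url,
                    "largeImageUrl": image_url,
                },
            },
            "shouldEndSession": True,
        },
    }
```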
What's next?
Development could be continued by fixing some errors, showing more pictures than just one, and improving the layout. Furthermore, new features could be added, such as a feedback feature for the doctor's side, or a machine-learning component that detects and rejects closed eyes or otherwise unsuitable images.