Inspiration
People don't really know what they need in order to be prepared for an emergency, so an itemized list of what's missing from their kit could be incredibly helpful.
What it does
The user takes pictures of whatever rudimentary kit they have and texts them to a phone number; we text back a list of whatever else they need.
How I built it
We built the project primarily on IBM Watson Machine Learning. First, we used a Python script to pull images from Google Images to build a training set. Then, once the model was trained with IBM's Machine Learning service, we used a combination of IBM Cloud, Twilio, and Slack to take the model's output (a JSON array) and turn it into an SMS conversation with the user.
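A minimal sketch of the middle step: turning the model's JSON output into the SMS reply body. The class labels in `REQUIRED_ITEMS` and the exact JSON shape are assumptions modeled on Watson Visual Recognition-style responses, not the project's actual model; the Twilio and Slack plumbing is omitted.

```python
import json

# Kit items we'd want the classifier to find (hypothetical class labels;
# the real model's classes may differ).
REQUIRED_ITEMS = {"flashlight", "first-aid", "water", "batteries", "radio"}

def missing_items(watson_json: str, threshold: float = 0.5) -> list:
    """Return the required kit items NOT detected above the score threshold,
    given a Watson-style classification result as a JSON string."""
    result = json.loads(watson_json)
    detected = {
        c["class"]
        for image in result.get("images", [])
        for clf in image.get("classifiers", [])
        for c in clf.get("classes", [])
        if c.get("score", 0.0) >= threshold
    }
    return sorted(REQUIRED_ITEMS - detected)

def reply_text(missing: list) -> str:
    """Format the SMS body to send back via Twilio."""
    if not missing:
        return "Your kit looks complete!"
    return "Your kit is missing: " + ", ".join(missing)

# Example: a kit photo where only a flashlight and water were recognized.
sample = json.dumps({
    "images": [{"classifiers": [{"classes": [
        {"class": "flashlight", "score": 0.91},
        {"class": "water", "score": 0.78},
    ]}]}]
})
print(reply_text(missing_items(sample)))
# → Your kit is missing: batteries, first-aid, radio
```

In the real flow, this reply string would be handed to Twilio to deliver as the SMS response.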
Challenges I ran into
First, we had trouble with the image-scraping script: some members' computers wouldn't run it, which essentially relegated scraping to one member of the group. Then we had issues with IBM Watson. We couldn't decide which of its services to use, and once we did, some group members couldn't train the model due to errors or infinite loops. Once we got it working, our account credentials expired three quarters of the way through the hackathon and our model was lost, so we had to retrain it. We also ran into changes in the Twilio API that made some aspects unfamiliar.
Accomplishments that I'm proud of
We used machine learning successfully in a hackathon project for the first time.
What I learned
IBM offers a large variety of cloud services that could be incredibly useful in the future. Some of us learned how to work with the Twilio API, and some of us familiarized ourselves with Python.
What's next for EmKit
We want to give the model more time and data for more accurate results. We also want to expand the number of classes that the model can detect to give more specific results as to what is needed.
Built With
- ibm-watson
- machine-learning
- python
- twilio
- visual-recognition