We are submitting this project for the IoT Hack using a Qualcomm device.
Inspiration
The inspiration for this project came from the game "Draw it" on the App Store. One of our team members had already built a convolutional neural network (CNN), and we wanted to use it in some form in our project. One of the best uses of CNNs is image processing and classification, so we decided to design an app or game that has the user draw something and then tries to classify the drawing. We also wanted to add some value for the user on top of this, and we think language learning is a good application. Traditional language-learning apps introduce a phrase by showing the learner the target word, the word in their native language, and sometimes a picture of the object or idea. They then have the user recall the word through multiple-choice or fill-in-the-blank exercises that match the new word with one they already know. In our app, the learner is exposed to phrases in the target language, then recalls them by drawing a picture that matches the phrase.
What it does
We also wanted to use IoT in our project, so we needed sensor data to communicate between microcontrollers and the Raspberry Pi running our app. We decided to simulate a wireless mouse, using a joystick to position the cursor and a push button for clicking. When the app is launched, it shows four phrases with associated words. When the player is ready, they move on to the recall step, where a white square is available for them to draw on, along with an instruction to draw one of the words they were introduced to. The user can clear the drawing at any time and submit once they are ready. After the user submits, the CNN judges the drawing and tries to classify it as one of the four phrases.
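A joystick cursor like the one described above is typically driven by mapping the stick's deflection from center into a cursor velocity. The sketch below shows one common way to do this; the constants, function name, and dead-zone handling are illustrative assumptions (a 10-bit ADC centered near 512), not the project's actual code.

```python
# Illustrative sketch: map a raw joystick ADC reading to a cursor
# velocity. A centered stick reads roughly mid-scale on a 10-bit ADC;
# a small dead zone absorbs sensor noise around center.

ADC_CENTER = 512     # assumed 10-bit ADC midpoint
DEAD_ZONE = 20       # counts to ignore around center
MAX_SPEED = 15       # pixels per update at full deflection

def joystick_to_velocity(raw, center=ADC_CENTER,
                         dead_zone=DEAD_ZONE, max_speed=MAX_SPEED):
    """Map a raw ADC reading (0-1023) to a signed cursor velocity."""
    offset = raw - center
    if abs(offset) <= dead_zone:
        return 0
    # Scale the remaining range linearly toward max_speed.
    span = center - dead_zone
    scaled = (abs(offset) - dead_zone) / span * max_speed
    return int(scaled) if offset > 0 else -int(scaled)
```

The same function would be applied independently to the x and y axis readings on each update.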
How we built it
For the mouse, we used a joystick connected to an Arduino Nano, which sends the joystick and button data over serial to a NodeMCU microcontroller. That NodeMCU passes the data to a second NodeMCU over WiFi, which in turn sends it to the Raspberry Pi over serial. All the code for the Arduino Nano and the NodeMCUs was written in Arduino. For the app, we used Python, TensorFlow, and Keras to train a model on drawings we created ourselves, so it would have some data for judging drawings. The CNN extracts features from the drawings and learns which features matter most for determining which object a drawing depicts. All drawings are saved and can be fed back through the model to improve accuracy. The app reads in the joystick and button data and turns the joystick's positional data into a velocity for the mouse cursor.
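The writeup doesn't give the model architecture or input size, but a small four-class Keras CNN of the kind described might look like the following. Layer sizes and the 28×28 grayscale input are illustrative assumptions.

```python
# Sketch of a small CNN for classifying a drawing into one of four
# phrases, in the spirit of the TensorFlow/Keras model described above.
# Architecture details are assumptions, not the project's actual model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(28, 28, 1), num_classes=4):
    model = models.Sequential([
        layers.Input(shape=input_shape),          # grayscale drawing
        layers.Conv2D(16, 3, activation="relu"),  # extract local strokes
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),  # higher-level features
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # 4 phrases
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Saved drawings could then be fed back through `model.fit` as additional training data, which is how the app improves accuracy over time.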
Challenges we ran into
The joystick has two analog outputs, one for the x-axis and one for the y-axis, but the NodeMCU has only one analog input. We had to use the Arduino Nano to read the joystick because it has multiple ADCs. We were able to simulate a wireless mouse, but moving the cursor with the joystick was harder than expected; if we had had the materials, a touchpad would have been the better component for the mouse simulation. Another problem we ran into was the lack of training data. Part of the time our group dedicated to this project was spent creating training data for the model. When one of us uses the app, it classifies drawings into one of the four phrases with pretty good accuracy; however, when someone else uses it, the accuracy is lower than we would like.
Accomplishments that we're proud of
It was very difficult learning about the different protocols for sending the data from the joystick to the Raspberry Pi. Although our mouse simulation doesn't have the best performance, we are still proud of it. To us, that is less a limitation of our project than of the sensor we used: using a joystick to type on an on-screen keyboard on a TV is a problem that many game console developers are only now starting to solve. We are also proud of our app's ability to classify drawings. Although the accuracy isn't as high as we would like due to the lack of training images, it is very fulfilling when it works. We had some people test our app, and the drawing part is a lot of fun; people like to try to beat the algorithm by drawing shapes the model is not expecting.
What we learned
We learned about protocols for sending information on an IoT network, mainly serial and WiFi. We also tried a REST API over HTTP, but it wasn't fast enough for reading the joystick data. We also learned how to use multi-category classification instead of binary classification. Our app uses multithreading to run the wireless mouse simulation while also displaying the drawing board for the user, so we learned to use multiple threads rather than two separate programs. One thing we weren't expecting to learn about was the logistics of the project: almost a third of our time was spent coming up with the idea, delegating tasks, and coordinating with each other.
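The multithreading approach mentioned above can be sketched as a background thread that polls the joystick and queues events, while the main thread runs the drawing board and drains the queue. The serial read is stubbed out here as a callable, and all names are illustrative; in the real app the readings would come from the Pi's serial port.

```python
# Sketch of the two-thread structure: a poller thread feeds joystick
# readings into a queue; the main (UI) thread consumes them. Names and
# the event format (dx, dy, button) are assumptions for illustration.
import queue
import threading

def mouse_loop(read_joystick, events, stop):
    """Background thread: poll the joystick and queue each reading."""
    while not stop.is_set():
        reading = read_joystick()   # e.g. (dx, dy, button); None = no more data
        if reading is None:
            break
        events.put(reading)

def run(read_joystick, handle_event):
    """Start the poller, then drain events. The real app would drain
    inside the drawing-board UI loop instead of after join()."""
    events, stop = queue.Queue(), threading.Event()
    poller = threading.Thread(target=mouse_loop,
                              args=(read_joystick, events, stop),
                              daemon=True)
    poller.start()
    poller.join()                   # demo only: wait for the source to run dry
    handled = 0
    while not events.empty():
        handle_event(events.get())
        handled += 1
    return handled
```

The `stop` event gives the UI thread a clean way to shut the poller down when the app exits.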
What's next for Draw2Learn
If Draw2Learn were to be developed further, there are many things that could be improved. None of our group members has experience with UI design, so although our UI works, it is very bare. More phrases and lessons could be added, and the app could keep track of scores and of how often each word is drawn correctly. If more phrases were added, it would also be important to crowdsource the initial training data to ensure accuracy.
Built With
- arduino
- keras
- nodemcu
- python
- raspberry-pi
- tensorflow
