Inspiration
We were inspired by a programmer who suffered from Carpal Tunnel Syndrome from overuse of his keyboard and mouse. He decided to attach a laser pointer to his hat and use voice recognition to continue coding.
We wanted to expand on this idea while promoting healthy lifestyles for children and adults in sedentary jobs, making programming a more artistic and physical activity, and dispelling the bias that coding is "only for geeks or nerds".
Since fewer than half of households in developing countries have access to a desktop at home, while most already have a mobile device, letting people code via webcams and mobile cameras will expand access to promising hackers worldwide.
What it does
We add movement and dance to programming to appeal to a wider range of learners, especially children who are too curious to sit still! As a bonus, involving muscle memory helps students recall syntax and design patterns.
Pipeline
- Fetch pre-trained Computer Vision model from backend
- User gives webcam / mobile camera permission
- Sample frames from video stream
- PoseNet model predicts coordinates for each body part
- Gestures are represented as sequences of poses over a time interval
- Pre-trained model predicts the gesture (loop, conditional, etc.)
- Gesture is translated into pseudocode / an abstract syntax tree
- Compiled to languages like JavaScript
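The last two pipeline steps can be sketched as follows. This is a minimal illustration, not the actual implementation: the gesture labels, the abstract-syntax node shapes, and the emit rules are all made-up assumptions.

```python
# Sketch of gesture -> abstract syntax -> JavaScript.
# Gesture names and node kinds below are illustrative, not the real set.

# A recognised gesture maps to an abstract syntax node: (kind, arguments).
GESTURE_TO_NODE = {
    "spin": ("loop", {"times": 3}),
    "t_pose": ("conditional", {"test": "x > 0"}),
    "wave": ("print", {"value": '"hello"'}),
}

def to_pseudocode(node):
    """Render an abstract syntax node as language-neutral pseudocode."""
    kind, args = node
    if kind == "loop":
        return f"REPEAT {args['times']} TIMES"
    if kind == "conditional":
        return f"IF {args['test']} THEN"
    if kind == "print":
        return f"OUTPUT {args['value']}"
    raise ValueError(f"unknown node kind: {kind}")

def to_javascript(node):
    """Compile the same node down to JavaScript source."""
    kind, args = node
    if kind == "loop":
        return f"for (let i = 0; i < {args['times']}; i++) {{}}"
    if kind == "conditional":
        return f"if ({args['test']}) {{}}"
    if kind == "print":
        return f"console.log({args['value']});"
    raise ValueError(f"unknown node kind: {kind}")

node = GESTURE_TO_NODE["spin"]
print(to_pseudocode(node))   # REPEAT 3 TIMES
print(to_javascript(node))   # for (let i = 0; i < 3; i++) {}
```

Keeping an intermediate abstract-syntax step (rather than emitting JavaScript directly from gestures) is what lets the same gesture compile to multiple target languages later.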
How we built it
UX & Data flow
- Users interact through gestures and voice
- Machine learning learns new gestures and recognises previously trained gestures
- Gestures are translated to pseudocode
- Design: Figma
- Frontend: React & Redux, Material UI
- Backend: Python Flask REST API
- Machine Learning: TensorFlow.js & PoseNet on the frontend => TensorFlow & Keras RNN on the backend
- Deployment: Google Cloud
- APIs: CodeMirror, Google NLP
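The "recognise previously trained gestures" step above can be sketched as matching an incoming window of PoseNet keypoints against stored gesture templates. This is a toy nearest-template version for illustration only: the real system uses a Keras RNN, and the keypoint data and gesture names here are made up.

```python
import math

def frame_distance(a, b):
    """Mean Euclidean distance between matching keypoints of two frames."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def sequence_distance(seq_a, seq_b):
    """Mean frame distance across two equal-length pose windows."""
    return sum(frame_distance(f, g) for f, g in zip(seq_a, seq_b)) / len(seq_a)

def recognise(window, templates):
    """Return the label of the closest stored gesture template."""
    return min(templates, key=lambda label: sequence_distance(window, templates[label]))

# Two 2-frame templates with two (x, y) keypoints each (e.g. wrist, elbow).
# All coordinates are invented for the example.
templates = {
    "raise_arm": [[(0.5, 0.9), (0.5, 0.7)], [(0.5, 0.4), (0.5, 0.6)]],
    "wave":      [[(0.2, 0.5), (0.3, 0.5)], [(0.8, 0.5), (0.7, 0.5)]],
}

observed = [[(0.5, 0.85), (0.5, 0.7)], [(0.5, 0.45), (0.5, 0.6)]]
print(recognise(observed, templates))  # raise_arm
```

An RNN replaces the hand-written distance function with a learned one, which is what makes it practical to add new gestures from a handful of recorded examples.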
Challenges we ran into
- Deploying to the cloud, plus heavy data processing and cleaning while building an efficient ML pipeline
- Coordinating many asynchronous operations for the camera feed, audio playback, and text editing
- Collaborating remotely across two repositories (frontend & backend)
Accomplishments that we're proud of
- Built an effective end-to-end machine learning pipeline, from data collection to model inference
- Hacking through bugs and trying up to five solutions! #hustle
- Effective planning & prioritization to ship MVP
- Learning and peer programming across the stack
What we learned
Thomas
- Axios for AJAX requests
- Flask REST API
- TensorFlow.js & PoseNet
- Data cleaning
Mihir
- React (Redux & Routes)
- Material UI
- CSS Grid
- CodeMirror API
Bhavini
- Figma mockups & prototyping
- React & Javascript
- Material UI
Nick
- React & Flask
- How to construct a data pipeline
- Convolutional neural networks (CNNs)
What's next for Dance Dance Convolution
Features
- Peer programming (WebSockets)
- Input by voice commands (NLP)
- Gaze inference => no more need for a mouse!
- Compile to various languages
Userbase for pilot project
- Schools at all levels, from kindergarten to university
- Khan Academy
- Girls Who Code
- Africa Code Week
Tech assistance & funding
- 1517 / Thiel Fellowship
- Y Combinator
- TensorFlow partners
- Western University
Built With
- axios
- flask
- google-cloud
- javascript
- keras
- materialui
- python
- react
- tensorflow


