Inspiration

We wanted to use vision and human body-language recognition to drive Roboy's movements and responses. It is often said that body language tells us more about a person than their words, and we wanted to enhance Roboy's interactions in this way.

What it does

Uses a Microsoft Kinect to sense and classify human gestures, which then trigger the response another human would give. For example, if the human waves, Roboy waves back; if the human puts out their hand to shake hands, Roboy should reach out and shake it!
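The gesture-to-response pairing described above can be sketched as a simple lookup. A minimal sketch; the gesture labels and command names are illustrative placeholders, not the project's actual identifiers:

```python
# Hypothetical gesture-to-response table; the labels are placeholders,
# not the project's real gesture or command identifiers.
GESTURE_RESPONSES = {
    "wave": "wave_back",
    "handshake_offer": "extend_hand",
}

def respond_to(gesture):
    """Look up Roboy's response for a recognised gesture (None if unknown)."""
    return GESTURE_RESPONSES.get(gesture)
```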

How I built it

All systems (recording the surroundings and sending commands to Roboy) are based on ROS.
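On the recognition side, the Kinect's skeleton stream provides joint positions per frame, and a gesture such as a wave can be detected with a simple heuristic on the wrist trajectory. A minimal sketch, assuming a sequence of normalised wrist x-coordinates as input; this is an illustrative stand-in, not the project's actual recogniser:

```python
def detect_wave(wrist_x, min_swings=3, min_amplitude=0.1):
    """Detect a wave: the wrist x-coordinate must change direction at
    least `min_swings` times, each movement exceeding `min_amplitude`."""
    swings = 0
    direction = 0  # +1 moving right, -1 moving left, 0 unknown
    last = wrist_x[0]
    for x in wrist_x[1:]:
        delta = x - last
        if abs(delta) < min_amplitude:
            continue  # ignore jitter below the amplitude threshold
        new_dir = 1 if delta > 0 else -1
        if direction != 0 and new_dir != direction:
            swings += 1  # direction reversal counts as one swing
        direction = new_dir
        last = x
    return swings >= min_swings
```

Thresholding on amplitude filters out sensor jitter, while counting direction reversals distinguishes a wave from a single reach.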

Challenges I ran into

Calibrating the commands sent to Roboy so that the resulting movements were meaningful. Roboy's system of pulleys is quite complicated, so producing something as 'simple' as a handshake was a real challenge!
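Part of that calibration is relating cable travel to joint rotation. For a single idealised pulley the relation is Δl = r·Δθ; a toy sketch under that assumption (Roboy's real tendon routing is considerably more complex):

```python
import math

def tendon_displacement(pulley_radius_m, joint_angle_deg):
    """Cable length change (metres) needed to rotate a joint by
    `joint_angle_deg` on an idealised pulley of radius `pulley_radius_m`,
    using the single-pulley relation dl = r * dtheta (theta in radians)."""
    return pulley_radius_m * math.radians(joint_angle_deg)
```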

Accomplishments that I'm proud of

Getting the complete pipeline functional: from running the Kinect and recognising human gestures, to sending the resulting commands through a well-built control system that manages communication between the connected systems.

What I learned

Interactions between ROS and Roboy
The mechanics of a real robot based on human muscle systems

What's next for R2

The first thing would be to fine-tune the physical movements that Roboy can make. After that, it would be great to add some emotion to our motion, perhaps combining speech output with Roboy's gestures to create a more complete interaction. For example, if someone approaches smiling and attempts a handshake or a hug, Roboy would react with the appropriate gesture and also say how happy he is to see you, or ask how you are. The emotion recognition could be implemented using Affectiva.
