
Co-learning with robot personalities for KUKA iiwa7

About the project
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Contributing
  5. License
  6. Contact

About The Project

This project explores co-learning in human-robot teams by implementing distinct robot personalities that shape the robot's interaction style. By configuring the KUKA iiwa7 robot arm with a reinforcement learning (RL) algorithm, the setup enables the robot to dynamically adapt to human partners based on pre-set personality traits.

Key Features

  • Personality Parameters: Adjust the robot’s behaviour along different personality axes—e.g., leader/follower or patient/impatient—to observe how these traits impact collaborative efficiency and human perceptions.
  • Reinforcement Learning: Uses Q-learning to enable the robot to learn handover strategies based on human feedback, refining these strategies over repeated interactions.
  • Human-Robot Interaction Insights: Collects data on joint strategies, adaptation rates, and collaboration fluency to better understand the influence of robot personality in co-learning environments.
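As a rough illustration of the Q-learning mechanism mentioned above, here is a minimal tabular update step. This is a sketch only: the repository's actual state/action spaces, reward shaping, and hyperparameter values are defined in its own code, and the numbers below are hypothetical.

```python
# Minimal tabular Q-learning sketch (illustrative; not the repository's implementation).
ALPHA, GAMMA = 0.1, 0.9      # learning rate and discount factor (assumed values)
n_states, n_actions = 4, 3   # hypothetical handover phases and strategies
Q = [[0.0] * n_actions for _ in range(n_states)]

def update(state, action, reward, next_state):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])

# Positive human feedback (+1) on action 0 in state 0 raises its estimated value.
update(state=0, action=0, reward=1.0, next_state=1)
print(Q[0][0])  # 0.1
```

Repeated positive feedback on the same state-action pair moves its Q-value toward the reward, which is how the robot's handover strategy is refined over repeated interactions.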

This project is intended for researchers and developers in human-robot interaction and collaborative robotics. It aims to provide a practical toolkit for examining how robot personality traits can impact teamwork and learning dynamics.

Getting Started

This package is tested with Ubuntu 20.04 and ROS Noetic, using Python 3.8. Other configurations have not been tested, and compatibility may vary.

Prerequisites

  1. Install the combined robot hw package:
    sudo apt install ros-noetic-combined-robot-hw
  2. Install the RealSense SDK 2.0 by following Intel's installation instructions
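On Ubuntu 20.04 the SDK is typically installed via apt once Intel's package repository is registered. The package names below follow Intel's librealsense documentation but may change, so treat this as a sketch and defer to their current instructions:

```shell
# After registering Intel's librealsense apt repository (see their install guide):
sudo apt-get install librealsense2-dkms librealsense2-utils librealsense2-dev

# Verify the camera is detected:
realsense-viewer
```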

Installation

  1. Create a workspace folder and go into it, then create a src folder
    mkdir <WORKSPACE_NAME> && cd <WORKSPACE_NAME>
    mkdir src
  2. Install the kuka-fri repository
    git clone [email protected]:nickymol/kuka_fri.git
    cd kuka_fri
    # Apply SIMD patch:
    wget https://gist.githubusercontent.com/matthias-mayr/0f947982474c1865aab825bd084e7a92/raw/244f1193bd30051ae625c8f29ed241855a59ee38/0001-Config-Disables-SIMD-march-native-by-default.patch
    git am 0001-Config-Disables-SIMD-march-native-by-default.patch
    # Build
    ./waf configure
    ./waf
    sudo ./waf install
  3. Install the dependencies from the iiwa_ros package, except for the kuka-fri repository, which you have already installed. This means that all the cloned repos go into <WORKSPACE_NAME>. DO NOT run export CXXFLAGS="-march=native -faligned-new": the kuka-fri installation above disables the SIMD flag, and re-enabling -march=native causes segmentation faults later on.
  4. Go into the source folder and clone the iiwa_ros repo:
    cd src
    git clone https://github.com/epfl-lasa/iiwa_ros.git
  5. Clone the impedance controller and checkout a specific branch that removes the need for a specific end-effector:
    git clone [email protected]:nickymol/iiwa_impedance_control.git
    cd iiwa_impedance_control
    git checkout no_end_effector
    cd ..
  6. Clone the qb_hand repositories:
    git clone --recurse-submodules https://bitbucket.org/qbrobotics/qbdevice-ros.git
    git clone https://bitbucket.org/qbrobotics/qbhand-ros.git
  7. Lastly, clone this repository and build the workspace:
    git clone https://github.com/JesseDolfin/co_learning_robot_personalities.git
    cd ..
    catkin_make
  8. Install additional dependencies:
    pip install numpy==1.21 python-dateutil==2.8.2 mediapipe pyrealsense2 ultralytics gymnasium pygame pyserial

Post-installation modifications

Now that everything is installed, you need to modify the configuration of the iiwa_ros package to work with the iiwa7 at TU Delft. Browse to iiwa_ros/iiwa_driver/config/iiwa.yaml and set the port and robot_ip to:

port: 30207
robot_ip: 192.180.1.7

This should now allow a connection via the FRIOverlay app on the SmartPad.

(back to top)

Usage

After installing and sourcing the software, the simulation may be started using the following roslaunch command:

roslaunch co_learning_controllers bringup.launch allow_nodes:=false

This will start the simulation without any of the nodes. If you want to start this on the real robot, run the command with the argument simulation:=false:

roslaunch co_learning_controllers bringup.launch simulation:=false

It is possible to turn off nodes selectively; make sure allow_nodes:=true is set, or omit the option entirely, as it defaults to true. The full set of node control parameters is:

<arg name="allow_nodes" default="true"/>
<arg name="control_node" default="true"/>
<arg name="secondary_task" default="true"/>
<arg name="rviz" default="false"/>
<arg name="detection" default="true"/>
<arg name="qb_hand" default="true"/>

All the nodes are stand-alone; however, the control_node requires input from the secondary_task_message. A successful handover cannot be registered without the RealSense camera to monitor it. To test the experiment with a positive handover input, you can bypass this check by manually setting the handover_successful field of the secondary_task_message to -1 or 1 and publishing it on the /Task_status topic. If you pass the parameter fake:=true to the control launch file, it will run without the qb_hand launch file and will run the hand_controller node in fake mode (this allows the control node to run without the qb_hand present).
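A sketch of that bypass from the command line. The message type name below is an assumption (check the actual type advertised on the topic before publishing):

```shell
# Look up the real message type on the topic first:
rostopic info /Task_status

# Publish a faked positive handover result (message type here is hypothetical):
rostopic pub /Task_status co_learning_messages/secondary_task_message "handover_successful: 1"
```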

Running the full experiment

The experiment collects data automatically; the base directory is hardcoded in control_node.py. To collect the data in a folder of your choosing, change the line:

self.base_dir = os.path.expanduser('~/thesis/src/co_learning_robot_personalities/data_collection')

to:

self.base_dir = os.path.expanduser('Your/file/path')
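For reference, os.path.expanduser only rewrites a leading ~; an absolute path passes through unchanged, so either form works for the line above (the paths below are illustrative only):

```python
import os

# A leading '~' expands to the current user's home directory.
default_dir = os.path.expanduser('~/thesis/src/co_learning_robot_personalities/data_collection')

# An absolute path without '~' is returned unchanged.
custom_dir = os.path.expanduser('/tmp/co_learning_data')

print(default_dir.startswith(os.path.expanduser('~')))  # True
print(custom_dir)                                       # /tmp/co_learning_data
```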

Next, you need to follow the Setup multimachine ROS guide. Then, on machine A, source the workspace in a terminal and run the following:

roslaunch co_learning_controllers bringup.launch simulation:=false secondary_task:=false participant_number:=<x> personality_type:=<y>

where participant_number must be an integer and personality_type can be any of: baseline, leader, follower, impatient, or patient.
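For example, to run participant 1 with the leader personality (concrete values substituted into the command above):

```shell
roslaunch co_learning_controllers bringup.launch simulation:=false secondary_task:=false participant_number:=1 personality_type:=leader
```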

On machine B, connect the Ethernet cable from the switch to the machine, source the workspace, and run the following:

rosrun co_learning_secondary_task secondary_task.py 

After all of this, start the FRIOverlay app in automatic mode on the tablet.
The robot arm should now be moving. To start the experiment, simply start the simulation on machine B.

For more examples, please refer to the Documentation

(back to top)

Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would improve this, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

Jesse Dolfin - [email protected]

Project Link: https://github.com/JesseDolfin/co_learning_robot_personalities

(back to top)

