Authors: Markus Knauer, Alin Albu-Schäffer, Freek Stulp, and João Silvério
Maintainer: Markus Knauer ([email protected]), Research Scientist at the German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Munich, Germany, and doctoral candidate and teaching assistant at the Technical University of Munich (TUM), Germany.
This repository contains the code to reproduce the experiments from our RA-L paper.
If you are interested, you can find similar projects at https://markusknauer.github.io
RA-L paper | ArXiv paper | ELIB paper | YouTube
Ctrl+Click to open links in a new tab
The problem of generalization in learning from demonstration (LfD) has received considerable attention over the years, particularly within the context of movement primitives, where a number of approaches have emerged. Recently, two important approaches have gained recognition. While one leverages via-points to adapt skills locally by modulating demonstrated trajectories, another relies on so-called task-parameterized models that encode movements with respect to different coordinate systems, using a product of probabilities for generalization. While the former are well-suited to precise, local modulations, the latter aim at generalizing over large regions of the workspace and often involve multiple objects. Addressing the quality of generalization by leveraging both approaches simultaneously has received little attention. In this work, we propose an interactive imitation learning framework that simultaneously leverages local and global modulations of trajectory distributions. Building on the kernelized movement primitives (KMP) framework, we introduce novel mechanisms for skill modulation from direct human corrective feedback. Our approach particularly exploits the concept of via-points to incrementally and interactively 1) improve the model accuracy locally, 2) add new objects to the task during execution and 3) extend the skill into regions where demonstrations were not provided. We evaluate our method on a bearing ring-loading task using a torque-controlled, 7-DoF, DLR SARA robot.
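The local modulation with via-points described above can be illustrated with standard Gaussian conditioning on a trajectory distribution: at the via-point, the predicted mean is pulled toward the desired point and the local uncertainty shrinks. The sketch below is a minimal illustration of that idea, not the KMP implementation in this repository; the function name, noise level, and toy data are assumptions.

```python
import numpy as np

def condition_on_via_point(mu, sigma, y, obs_noise=1e-6):
    """Update a Gaussian trajectory point (mu, sigma) with a desired
    via-point y, treated as an observation with small noise.
    Illustrative helper; not the repository's KMP code."""
    R = obs_noise * np.eye(len(mu))
    K = sigma @ np.linalg.inv(sigma + R)        # Kalman-style gain
    mu_new = mu + K @ (y - mu)                  # pull mean toward the via-point
    sigma_new = (np.eye(len(mu)) - K) @ sigma   # shrink uncertainty locally
    return mu_new, sigma_new

# Toy example: a 2-D trajectory point with broad uncertainty
mu = np.array([0.0, 0.0])
sigma = 0.5 * np.eye(2)
via = np.array([0.2, -0.1])
mu_c, sigma_c = condition_on_via_point(mu, sigma, via)
```

Because the observation noise is much smaller than the prior variance, the conditioned mean lands essentially on the via-point while points far from it (in a full trajectory model, coupled through the kernel) remain unchanged.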
Keywords: Incremental Learning, Imitation Learning, Continual Learning, Robotics
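For the global, task-parameterized side, movements encoded in several object-attached frames are fused by a product of Gaussians: each frame contributes a prediction, and the more confident frame dominates the result. A minimal sketch of that fusion step (illustrative only, not this repository's TP-KMP code; all names and toy values are assumptions):

```python
import numpy as np

def gaussian_product(means, covs):
    """Fuse Gaussians N(mu_p, Sigma_p), one per reference frame,
    by their product, as in task-parameterized models (sketch)."""
    precisions = [np.linalg.inv(S) for S in covs]
    cov = np.linalg.inv(sum(precisions))
    mean = cov @ sum(P @ m for P, m in zip(precisions, means))
    return mean, cov

# Two frames disagree on the target; the more certain frame dominates.
m1, S1 = np.array([1.0, 0.0]), 0.01 * np.eye(2)  # confident frame
m2, S2 = np.array([0.0, 1.0]), 1.0 * np.eye(2)   # uncertain frame
mean, cov = gaussian_product([m1, m2], [S1, S2])
```

Here the fused mean lies almost on the confident frame's prediction, which is what lets task-parameterized models generalize across the workspace as objects move.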
Create and activate the conda environment:
```
conda env create -f requirements.yaml
conda activate tpkmp
```

If you don't have conda installed, follow the installation guide.
Run all four experiments:
```
python interactive_incremental_learning/main.py --experiment 0123 --plot
```

Or run individual experiments:
```
# Experiment 0: Generalization to new frame configurations
python interactive_incremental_learning/main.py --experiment 0 --plot

# Experiment 1: Adding via-points to refine the trajectory
python interactive_incremental_learning/main.py --experiment 1 --plot

# Experiment 2: Adding a new reference frame during execution
python interactive_incremental_learning/main.py --experiment 2 --plot

# Experiment 3: Computing variable stiffness from uncertainty
python interactive_incremental_learning/main.py --experiment 3 --plot
```

See experiments/README.md for expected outputs and detailed descriptions.
Run the test suite:

```
make pytest
```

Install in editable mode with test dependencies:

```
pip install -e ".[tests]"
```

Run all checks:

```
make commit-checks  # format + type check + lint
make pytest         # run tests with coverage
```

See CONTRIBUTING.md for more details.
If you use our ideas in a research project or publication, please cite as follows:
@ARTICLE{knauer2025,
  author={Knauer, Markus and Albu-Sch{\"a}ffer, Alin and Stulp, Freek and Silv{\'e}rio, Jo{\~a}o},
  journal={IEEE Robotics and Automation Letters (RA-L)},
  title={Interactive incremental learning of generalizable skills with local trajectory modulation},
  year={2025},
  volume={10},
  number={4},
  pages={3398-3405},
  keywords={Incremental Learning; Imitation Learning; Continual Learning},
  doi={10.1109/LRA.2025.3542209}
}
