snt-spacer/RoboRAN-Website

RoboRAN Website

A Unified Robotics Framework for Reinforcement Learning-Based Autonomous Navigation.


This is the landing page for all projects related to RoboRAN. Below are the links to the code for each project:

Main Code (TMLR)

Deployment To Real Robots Code

BibTex

@article{el-hariry2025roboran,
  title={Robo{RAN}: A Unified Robotics Framework for Reinforcement Learning-Based Autonomous Navigation},
  author={Matteo El-Hariry and Antoine Richard and Ricard Marsal and Luis Felipe Wolf Batista and Matthieu Geist and C{\'e}dric Pradalier and Miguel Olivares-Mendez},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=0wDbhLeMj9},
  note={}
}

Overview

RoboRAN is a multi-domain reinforcement learning benchmark designed for robotic navigation tasks in terrestrial, aquatic, and space environments. Built on IsaacLab, our framework enables:

✅ Fair comparisons across different robots and mobility systems
✅ Scalable training pipelines for reinforcement learning agents
✅ Sim-to-real transfer validation on physical robots

🎥 Real-world deployments

Turtlebot 2 Kingfisher Floating platform

Features

  • Diverse Navigation Tasks: GoToPosition, GoToPose, GoThroughPositions, TrackVelocities, and more.
  • Cross-Domain Evaluation: Supports thruster-based platforms, wheeled robots, and water-based propulsion.
  • Unified Task Definitions: Standardized observation space, reward structures, and evaluation metrics.
  • Efficient Simulation: GPU-accelerated rollouts via IsaacLab for rapid RL training.
  • Real-World Validation: Policies successfully deployed on a Floating Platform, Kingfisher, and Turtlebot2.

🚧 Installation

The code lives in this anonymous repository:

git clone https://anonymous.4open.science/r/RobRAN-Code-E08E/README.md
cd RobRAN-Code
./docker/container.py start
./docker/container.py enter

Reproducibility

🧠 Training pipelines for all tasks and robots

./isaaclab.sh -p scripts/reinforcement_learning/<isaac_lab_rl_framework>/train.py --task=Isaac-RANS-Single-v0 --headless env.robot_name=<robot_name> env.task_name=<task>
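To make the placeholders concrete, the command above can be instantiated for every robot/task pair listed in this README. The helper below is a sketch (it is not part of the released tooling); robot, task, and framework names are taken from the tables and notes in this document:

```python
# Hypothetical helper (not part of the RoboRAN release): build the
# train.py invocation for each robot/task pair listed in this README.
ROBOTS = ["Jetbot", "Leatherback", "Turtlebot2", "Kingfisher", "FloatingPlatform"]
TASKS = ["GoToPosition", "GoToPose", "GoThroughPositions", "TrackVelocities"]


def train_command(framework: str, robot: str, task: str) -> str:
    """Return the shell command for one training run."""
    return (
        f"./isaaclab.sh -p scripts/reinforcement_learning/{framework}/train.py "
        f"--task=Isaac-RANS-Single-v0 --headless "
        f"env.robot_name={robot} env.task_name={task}"
    )


for robot in ROBOTS:
    for task in TASKS:
        # skrl is one of the two frameworks the paper was tested with.
        print(train_command("skrl", robot, task))
```

This simply enumerates the 20 robot/task combinations; each printed line can be run as-is inside the container.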

Robots

| Land        | Water      | Space            |
| ----------- | ---------- | ---------------- |
| Jetbot      | Kingfisher | FloatingPlatform |
| Leatherback |            |                  |
| Turtlebot2  |            |                  |

Tasks

  • GoToPosition
  • GoToPose
  • GoThroughPositions
  • TrackVelocities

Note

The paper's experiments used skrl and rl_games as the <isaac_lab_rl_framework>.

PPO Hyperparameters

| Parameter       | Value   |
| --------------- | ------- |
| Rollouts        | 32      |
| Learning Epochs | 8       |
| Mini Batches    | 8       |
| Discount Factor | 0.99    |
| Lambda          | 0.95    |
| Learning Rate   | 5.0e-04 |
| KL Threshold    | 0.016   |
| Epochs          | 1000    |
| Network size    | 32x32   |
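As a worked example of how these PPO settings interact: each update collects Rollouts × num_envs transitions, splits them into Mini Batches, and passes over them Learning Epochs times. The num_envs value below is an assumption for illustration only; the README does not fix it:

```python
# Worked arithmetic from the PPO hyperparameter table.
rollouts = 32
mini_batches = 8
learning_epochs = 8
num_envs = 4096  # assumed value for illustration; not specified in the README

# Transitions gathered per policy update (one rollout step per env).
transitions_per_update = rollouts * num_envs
# Size of each SGD mini-batch.
mini_batch_size = transitions_per_update // mini_batches
# Gradient steps taken per policy update.
gradient_steps_per_update = learning_epochs * mini_batches

print(transitions_per_update, mini_batch_size, gradient_steps_per_update)
```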

🧪 Evaluation and visualization

Play trained models

./isaaclab.sh -p scripts/reinforcement_learning/<isaac_lab_rl_framework>/play.py --task=Isaac-RANS-Single-v0 --num_envs=32 env.robot_name=<robot_name> env.task_name=<task> --checkpoint=<path_to_pt_model>

Evaluation & Metrics

./isaaclab.sh -p scripts/reinforcement_learning/run_all_evals.py

Performance Comparison Across RL Frameworks

The table below summarizes the performance of policies trained with skrl and rl_games on shared navigation tasks.

| Task               | Robot (framework)           | Success | Final Dist Err | Time to Target | Ctrl Var | Heading Err | Goals Reached |
| ------------------ | --------------------------- | ------- | -------------- | -------------- | -------- | ----------- | ------------- |
| GoThroughPositions | FloatingPlatform (skrl)     | 1.000   | 2.346          | 65.180         | 0.318    | —           | 13.565        |
|                    | FloatingPlatform (rl_games) | 1.000   | 2.697          | 66.640         | 0.373    | —           | 14.025        |
|                    | Kingfisher (skrl)           | 1.000   | 2.414          | 93.290         | 0.430    | —           | 10.702        |
|                    | Kingfisher (rl_games)       | 1.000   | 3.525          | 67.050         | 0.092    | —           | 14.716        |
|                    | Turtlebot2 (skrl)           | 1.000   | 1.789          | 101.500        | 0.133    | —           | 11.006        |
|                    | Turtlebot2 (rl_games)       | 1.000   | 1.861          | 84.170         | 0.052    | —           | 10.835        |
| GoToPosition       | FloatingPlatform (skrl)     | 0.994   | 0.050          | 92.380         | 0.620    | —           | —             |
|                    | FloatingPlatform (rl_games) | 0.995   | 0.035          | 91.830         | 0.676    | —           | —             |
|                    | Kingfisher (skrl)           | 0.589   | 1.063          | 176.110        | 0.750    | —           | —             |
|                    | Kingfisher (rl_games)       | 0.998   | 0.023          | 90.040         | 0.112    | —           | —             |
|                    | Turtlebot2 (skrl)           | 0.986   | 0.069          | 92.600         | 0.433    | —           | —             |
|                    | Turtlebot2 (rl_games)       | 0.979   | 0.066          | 99.200         | 0.063    | —           | —             |
| GoToPose           | FloatingPlatform (skrl)     | 0.993   | 0.024          | 92.370         | 0.688    | 0.783       | —             |
|                    | FloatingPlatform (rl_games) | 0.979   | 0.035          | 88.710         | 0.754    | 0.801       | —             |
|                    | Turtlebot2 (skrl)           | 0.836   | 0.145          | 131.490        | 0.629    | 4.389       | —             |
|                    | Turtlebot2 (rl_games)       | 0.779   | 0.155          | 134.540        | 0.095    | 2.189       | —             |
| TrackVelocities    | FloatingPlatform (skrl)     | 0.930   | —              | —              | 0.447    | —           | 0.049         |
|                    | FloatingPlatform (rl_games) | 0.679   | —              | —              | 0.388    | —           | 0.044         |
|                    | Kingfisher (skrl)           | —       | —              | —              | 0.618    | —           | 0.241         |
|                    | Kingfisher (rl_games)       | 0.434   | —              | —              | 0.093    | —           | 0.272         |
|                    | Turtlebot2 (skrl)           | 0.768   | —              | —              | 0.152    | —           | 0.107         |
|                    | Turtlebot2 (rl_games)       | 0.783   | —              | —              | 0.025    | —           | 0.100         |

This table compares performance across tasks using PPO from two RL libraries: skrl and rl_games. While both show strong convergence, some variations emerge, particularly in heading control and velocity tracking. These differences likely stem from implementation details (e.g., optimizer behavior, action noise, or learning rate schedules). Despite these, both frameworks achieve high success rates and consistent trends, confirming that the benchmark stack is stable and the results are reproducible across PPO variants.
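As a quick sanity check on the claim above, the per-framework mean success rate can be recomputed directly from the Success column of the table (rows without a Success entry omitted). This is an illustrative aggregation, not a metric reported by the paper:

```python
# Success-rate entries copied from the table above, per framework.
# Rows whose Success cell is "—" are omitted.
success = {
    "skrl": [1.000, 1.000, 1.000, 0.994, 0.589, 0.986, 0.993, 0.836, 0.930, 0.768],
    "rl_games": [1.000, 1.000, 1.000, 0.995, 0.998, 0.979, 0.979, 0.779, 0.679, 0.434, 0.783],
}

# Unweighted mean success rate per framework.
means = {fw: sum(v) / len(v) for fw, v in success.items()}
for fw, m in means.items():
    print(f"{fw}: mean success over {len(success[fw])} runs = {m:.3f}")
```

Both frameworks land around 0.87–0.91 mean success over the listed runs, consistent with the "high success rates and consistent trends" reading, while the per-task spread (e.g. Kingfisher on TrackVelocities) accounts for most of the gap.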

📊 Pre-trained models and performance metrics

You can download all the trained models from this link.

Simulation

Real-world

Turtlebot 2 Kingfisher Floating platform
