General-purpose robot reward models are typically trained to predict absolute task progress from expert demonstrations, providing only local, frame-level supervision. While effective on expert data, this paradigm scales poorly to large-scale robotics datasets, where failed and suboptimal trajectories are abundant and assigning dense progress labels is ambiguous. We introduce Robometer, a scalable reward modeling framework that combines intra-trajectory progress supervision with inter-trajectory preference supervision. Robometer is trained with a dual objective: a frame-level progress loss that anchors reward magnitude on expert data, and a trajectory-comparison preference loss that imposes global ordering constraints across trajectories of the same task, enabling effective learning from both real and augmented failed trajectories. To support this formulation at scale, we curate RBM-1M, a reward-learning dataset comprising over one million trajectories spanning diverse robot embodiments and tasks, including substantial suboptimal and failure data. Across benchmarks and real-world evaluations, Robometer learns more generalizable reward functions than prior methods and improves robot learning performance across a diverse set of downstream applications.
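The exact loss functions live in the paper and training code; as a rough illustration only, here is a minimal sketch of the dual objective, assuming an MSE form for the frame-level progress loss and a Bradley-Terry form for the trajectory-comparison preference loss (both hypothetical choices, not the confirmed implementation):

```python
import math

def progress_loss(pred_progress, target_progress):
    """Frame-level progress loss on expert data: mean squared error between
    predicted and labeled per-frame progress (hypothetical MSE form)."""
    n = len(pred_progress)
    return sum((p - t) ** 2 for p, t in zip(pred_progress, target_progress)) / n

def preference_loss(score_preferred, score_rejected):
    """Trajectory-comparison preference loss: Bradley-Terry style negative
    log-likelihood that the preferred trajectory scores higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_preferred - score_rejected))))

def total_loss(pred, target, s_pref, s_rej, w_pref=1.0):
    # Dual objective: progress supervision anchors reward magnitude,
    # the preference term imposes a global ordering across trajectories.
    return progress_loss(pred, target) + w_pref * preference_loss(s_pref, s_rej)
```

The preference term is what lets failed or suboptimal trajectories contribute supervision: they only need to be rankable against better trajectories of the same task, not densely labeled with progress.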
robometer/
├── robometer/          # Main package
│   ├── data/           # Datasets and preprocessing
│   ├── configs/        # Hydra and experiment configs
│   ├── models/         # Model definitions
│   └── evals/          # Baseline evals (GVL, VLAC, Robodopamine, etc.)
├── eval_commands/      # Shell scripts for baseline evals
├── train.py            # Training entrypoint
└── pyproject.toml      # Dependencies (uv)
- Git, Python 3.10+
- NVIDIA drivers (GPU)
- uv (recommended)
git clone https://github.com/aliang8/robometer.git
cd robometer
# Create venv and install
uv sync
hf auth login
export ROBOMETER_PROCESSED_DATASETS_PATH=/path/to/save/processed_datasets
./scripts/download_processed_datasets.sh
./scripts/untar_processed_datasets.sh
For raw download and preprocessing, see Download raw datasets below.
Inference runs a pretrained RBM model on your own videos to get per-frame progress, per-frame success, and (for two trajectories) preference scores.
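The example clients below take an --fps flag that subsamples video frames before scoring. As an illustration of what such subsampling amounts to (a hypothetical helper, not the repo's actual implementation), selecting frame indices at a target fps might look like:

```python
def subsample_frame_indices(num_frames: int, video_fps: float, target_fps: float) -> list[int]:
    """Pick roughly one frame every (video_fps / target_fps) frames.

    E.g. a 30 fps video sampled at a target of 3 fps keeps every 10th frame.
    """
    step = max(1, round(video_fps / target_fps))
    return list(range(0, num_frames, step))
```

Lower target fps means fewer frames per query, which keeps inference cheap while still covering the whole trajectory.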
Pretrained models (Hugging Face):
- Robometer-4B: general-purpose, trained on RBM-1M
- Robometer-4B-LIBERO (LIBERO-10 / Spatial / Object / Goal): removed, since the standard Robometer model is already trained on LIBERO 10/Spatial/Object/Goal plus failures and simply performs better than the version trained exclusively on LIBERO
Start the eval server on your machine, then call it with a video and task:
uv run python robometer/evals/eval_server.py \
server_url=0.0.0.0 \
server_port=8000
Then run the client (no robometer dependency):
# SOAR
uv run python scripts/example_inference.py \
--eval-server-url http://localhost:8000 \
--video scripts/example_videos/soar_put_green_stick_in_brown_bowl.mp4 \
--task "Put green stick in brown bowl" \
--fps 3
# Berkeley RPT (Wrist)
uv run python scripts/example_inference.py \
--eval-server-url http://localhost:8000 \
--video scripts/example_videos/berkeley_rpt_stack_cup.mp4 \
--task "Pick up the yellow cup and stack it on the other cup" \
--fps 3
# Your own video
uv run python scripts/example_inference.py \
--eval-server-url http://localhost:8000 \
--video /path/to/video.mp4 \
--task "your task description"
To run the model locally (loads checkpoint from Hugging Face, no server):
uv run python scripts/example_inference_local.py \
--model-path robometer/Robometer-4B \
--video /path/to/video.mp4 \
--task "your task description"
Train on RBM-1M in-distribution and evaluate on RBM-1M-OOD
First, set the wandb_entity flag in robometer/configs/config.yaml to your WandB entity. To disable WandB logging, remove "wandb" from the log_to list in the config yaml file.
See the config file for more flags (e.g., batch size, learning rate).
uv run accelerate launch --config_file robometer/configs/distributed/fsdp.yaml --num_processes=N_GPUS_YOU_HAVE train.py \
data.train_datasets=[rbm-1m-id] \
data.eval_datasets=[rbm-1m-ood] \
data.max_frames=8 \
model.train_progress_head=true \
model.train_preference_head=true \
training.max_steps=15000 \
custom_eval.reward_alignment=[rbm-1m-ood] \
custom_eval.policy_ranking=[rbm-1m-ood] \
custom_eval.confusion_matrix=[rbm-1m-ood] \
logging.save_best.metric_names=[eval_p_rank/kendall_last_utd_so101_clean_top,eval_p_rank/kendall_last_usc_xarm,eval_p_rank/kendall_last_usc_franka,eval_p_rank/kendall_last_rfm_new_mit_franka_nowrist,eval_p_rank/kendall_last_usc_trossen] \
logging.save_best.greater_is_better=[True,True,True,True,True]
LIBERO: train on 10 / object / spatial / goal, test on 90.
As before, set your WandB entity via the wandb_entity flag in robometer/configs/config.yaml (or disable WandB logging by removing "wandb" from the log_to list).
uv run accelerate launch --config_file robometer/configs/distributed/fsdp.yaml train.py \
data.train_datasets=[libero_pi0] \
data.eval_datasets=[libero_pi0] \
data.max_frames=8 \
model.train_progress_head=true \
model.train_preference_head=true \
training.max_steps=5000 \
custom_eval.reward_alignment=[libero_pi0] \
custom_eval.policy_ranking=[libero_pi0]
See robometer/configs/experiment_configs.py for more config options.
Preprocess a new dataset, LoRA fine-tune from Robometer-4B on your own data, upload the model to the Hub, and run inference:
- Preprocessing: Add your dataset to the preprocess config and run the preprocessor; for raw videos (e.g. MINT-SJTU/RoboFAC-dataset), convert to RBM format first via dataset_upload, then preprocess.
- Fine-tuning: Set model.use_peft=true and training.resume_from_checkpoint=robometer/Robometer-4B, then train on your dataset.
- Upload & inference: Use robometer/utils/upload_to_hub.py to push checkpoints; run scripts/example_inference_local.py with your Hub model.
Full step-by-step: FINETUNE_ROBOMETER.md.
Evaluation runs benchmark evals (reward alignment, policy ranking, confusion matrix) on fixed datasets to measure model quality. Use this to reproduce paper results or compare checkpoints.
Run RBM with reward_model=rbm; override model_path and custom_eval.* as needed. See eval_commands/*.sh for ReWIND, Robo-Dopamine, VLAC, RoboReward.
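The policy-ranking eval is scored with Kendall's tau rank correlation (see the kendall_* metric names used for save_best in the training command above). For reference, a minimal tie-free version of the statistic; the actual implementation in robometer/evals/ may differ:

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall rank correlation between two equal-length score lists.

    +1 means identical ordering, -1 means fully reversed ordering.
    (Simplified sketch: ties are skipped rather than corrected for.)
    """
    concordant = discordant = 0
    for i, j in combinations(range(len(scores_a)), 2):
        s = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else 0.0
```

Applied to policy ranking: one list holds the reward model's scores for a set of policies, the other the policies' ground-truth quality ordering.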
Reward alignment
uv run python robometer/evals/run_baseline_eval.py \
reward_model=rbm \
model_path=robometer/Robometer-4B \
custom_eval.eval_types=[reward_alignment] \
custom_eval.reward_alignment=[rbm-1m-id,rbm-1m-ood] \
custom_eval.use_frame_steps=true \
custom_eval.subsample_n_frames=5 \
custom_eval.reward_alignment_max_trajectories=30 \
max_frames=8 \
model_config.batch_size=32
Policy ranking
uv run python robometer/evals/run_baseline_eval.py \
reward_model=rbm \
model_path=robometer/Robometer-4B \
custom_eval.eval_types=[policy_ranking] \
custom_eval.policy_ranking=[rbm-1m-ood] \
custom_eval.use_frame_steps=false \
custom_eval.num_examples_per_quality_pr=1000 \
max_frames=8 \
model_config.batch_size=32
Confusion matrix
uv run python robometer/evals/run_baseline_eval.py \
reward_model=rbm \
model_path=robometer/Robometer-4B \
custom_eval.eval_types=[confusion_matrix] \
custom_eval.confusion_matrix=[[aliangdw_usc_franka_policy_ranking_usc_franka_policy_ranking,jesbu1_utd_so101_clean_policy_ranking_top_utd_so101_clean_policy_ranking_top,aliangdw_usc_xarm_policy_ranking_usc_xarm_policy_ranking]] \
max_frames=8 \
model_config.batch_size=32
Details: robometer/evals/README.md.
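The confusion-matrix eval compares the model's success/failure predictions against ground-truth labels. As an illustration of the underlying counting (a hypothetical helper, not the repo's eval code), the 2x2 counts reduce to:

```python
def confusion_counts(pred_success, true_success):
    """2x2 confusion counts from boolean success predictions and labels."""
    tp = sum(p and t for p, t in zip(pred_success, true_success))           # predicted success, was success
    tn = sum((not p) and (not t) for p, t in zip(pred_success, true_success))  # predicted failure, was failure
    fp = sum(p and (not t) for p, t in zip(pred_success, true_success))     # predicted success, was failure
    fn = sum((not p) and t for p, t in zip(pred_success, true_success))     # predicted failure, was success
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn}
```

A good reward model concentrates mass on the diagonal (tp and tn); fp counts are the costly case for downstream use, since they reward failed trajectories.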
- RBM: use the reward alignment, policy ranking, or confusion matrix commands above; set model_path to your checkpoint.
- ReWIND, Robo-Dopamine, VLAC, RoboReward: see robometer/evals/README.md and eval_commands/reward_alignment.sh, eval_commands/policy_ranking.sh, eval_commands/confusion_matrix.sh. For Robo-Dopamine, use .venv-robodopamine/bin/python (vLLM) instead of uv run.
Supported: AgiBotWorld (streaming), LIBERO (HDF5), and custom configs.
# AgiBotWorld
uv run python dataset_upload/generate_hf_dataset.py --config_path=dataset_upload/configs/data_gen_configs/agibot_world.yaml
# LIBERO
uv run python dataset_upload/generate_hf_dataset.py --config_path=dataset_upload/configs/data_gen.yaml \
    --dataset.dataset_path=LIBERO/libero/datasets/libero_90 --dataset.dataset_name=libero_90
See the dataset_upload README and dataset_guides for adding datasets.
If you prefer not to use the processed datasets:
export ROBOMETER_DATASET_PATH=/path/to/your/robometer_dataset
./scripts/download_data.sh
# Preprocess
uv run python -m robometer.data.scripts.preprocess_datasets --config robometer/configs/preprocess.yaml
export ROBOMETER_PROCESSED_DATASETS_PATH=/path/to/save/processed_datasets
This project is licensed under the MIT License.
