- January 2026: Accepted at ICRA 2026! 🎉
- October 2025: Training and evaluation code published. Weights and dataset available in releases.
- Create a virtual environment with conda and Python 3.9:

```bash
conda create python=3.9 cmake=3.14.0 -n metricnet
conda activate metricnet
```

- Install the dependencies using pip or pdm:
```bash
conda install habitat-sim headless -c conda-forge -c aihabitat
conda env update -n metricnet --file environment.yaml
pip install git+ssh://[email protected]/debOliveira/diffusion_policy.git@db1434cc256b53deb0ad7228c129c0ce7c733822
pip install git+ssh://[email protected]/debOliveira/depth-anything-V2.git@7885bbc0647bc64d55ff5803561ea2c7dea1af72
```

> [!WARNING]
> We are looking into hosting solutions for our data. In the meantime, we have made the data generation scripts public.
- Download and process the datasets according to NoMaD's instructions.
- Generate the benchmark data using the provided script `python generate_benchmark_data.py`. For more information on the generation, please use the `--help` flag.
- Generate the training data using the provided script `python generate_training_data.py`. For more information on the generation, please use the `--help` flag.
- Download the DepthAnything-V2 ViT-S weights for training MetricNet, and the DepthAnything-V2 Metric ViT-B weights for MetricNav.
- If you want to use the pretrained model, download the weights from our latest release.
To train the model, adjust the configuration YAML `metricnet.yaml` as follows:

- `train` to `True`
- `depth_encoder_weights` to the DepthAnythingV2 checkpoint path
- `datasets/<DATASET>/data_folder`, `datasets/<DATASET>/train`, and `datasets/<DATASET>/test` to the folders generated during the [data processing step](#-download-data-and-weights)
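As a sketch, the corresponding entries in `metricnet.yaml` might look like the following. All paths here are placeholders, and the exact nesting is an assumption — check the `metricnet.yaml` shipped with the repository for the authoritative layout:

```yaml
train: True
depth_encoder_weights: /path/to/depth_anything_v2_vits.pth
datasets:
  <DATASET>:
    data_folder: /path/to/<DATASET>/data
    train: /path/to/<DATASET>/train
    test: /path/to/<DATASET>/test
```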
Then, run the following command:
```bash
python train.py -c <YOUR_CONFIG>.yaml
```

If you want to use wandb to log the training, set the `use_wandb` flag in the configuration YAML to `True`, and set `project` and `entity` to your desired project and entity (usually your username). Don't forget to log in first:
```bash
wandb login
```

To test the model, you need a trained model. Weights are available in the latest release. Adjust the configuration YAML `metricnet.yaml` as follows:

- `train` to `False`
- `depth_encoder_weights` to the DepthAnythingV2 checkpoint path
- `load_run` to the path of the desired weights
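For reference, a sketch of the evaluation settings in `metricnet.yaml` — the paths are placeholders, and the flat key layout is an assumption based on the field names above:

```yaml
train: False
depth_encoder_weights: /path/to/depth_anything_v2_vits.pth
load_run: /path/to/run/weights.pth
```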
Then, run the following command:
```bash
python train.py -c <YOUR_CONFIG>.yaml
```

To run the benchmark, you need a trained model and a local copy of Matterport3D. Weights are available in the latest release. Adjust the configuration YAML `benchmark.yaml` as follows:

- `benchmark_dir` to the path of the Matterport3D dataset
- `result_dir` to the desired output folder
- `model_cfg` to the path of your model configuration YAML
- `model_ckpt` to the path of the desired weights
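A sketch of the corresponding `benchmark.yaml` entries, with placeholder paths (the field names come from the list above; verify against the shipped `benchmark.yaml`):

```yaml
benchmark_dir: /path/to/matterport3d
result_dir: /path/to/results
model_cfg: /path/to/metricnet.yaml
model_ckpt: /path/to/weights.pth
```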
Then, run the following command:
```bash
python benchmark.py -c <YOUR_CONFIG>.yaml
```

We deployed to a TurtleBot 4 using ROS 2 Humble.
> [!WARNING]
> Deployment code is under development, and a final version will be uploaded soon.
- NoMaD: Goal Masking Diffusion Policies for Navigation and Exploration
- NaviDiffusor: Cost-Guided Diffusion Model for Visual Navigation
- Imperative Path Planner (iPlanner)
```bibtex
@misc{nayak2025metricnetrecoveringmetricscale,
      title={MetricNet: Recovering Metric Scale in Generative Navigation Policies},
      author={Abhijeet Nayak and Débora N. P. Oliveira and Samiran Gode and
              Cordelia Schmid and Wolfram Burgard},
      year={2025},
      eprint={2509.13965},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2509.13965}
}
```