
One2Any: One-Reference 6D Pose Estimation for Any Object

Mengya Liu1, Siyuan Li1, Ajad Chhatkuli2, Prune Truong3, Luc Van Gool1,2, Federico Tombari3,4

1ETH Zurich,   2INSAIT, Sofia University “St. Kliment Ohridski”,   3Google,   4TUM   

Paper

Teaser

Environment

git clone https://github.com/lmy1001/One2Any.git
cd One2Any
conda env create --file env.yaml
conda activate one2any_env

Dataset

For model training, you need both the oo3d_9d_dataset and the foundationpose_dataset.

For the oo3d_9d_dataset, please follow the instructions here for data download and preparation.

For the foundationpose_dataset, please follow the instructions here for data download and preparation.

Training

./train.sh

Test

Here is an example of evaluation on the LineMOD dataset. First, download the dataset from the BOP benchmark.

The pretrained model can be downloaded here; put it under ./pretrained_model/ and run

./test.sh
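The evaluation setup above can be sketched as a short shell sequence. Note that the checkpoint filename and the dataset directory (`datasets/lm/`) are assumptions for illustration, not paths taken from the repository; use whatever layout `test.sh` actually expects.

```shell
# Sketch of the evaluation setup (paths and filenames are assumptions).
mkdir -p pretrained_model            # directory where the checkpoint is expected
# Place the downloaded checkpoint here, e.g.:
#   mv ~/Downloads/one2any_checkpoint.pth pretrained_model/

# Download LineMOD from the BOP benchmark and unpack it, e.g. under ./datasets/lm/
mkdir -p datasets/lm

ls -d pretrained_model datasets/lm   # verify both directories exist
# ./test.sh                          # then run the evaluation script
```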

BibTeX

If you find this project useful in your research, please cite:

@inproceedings{liu2025one2any,
  title={One2Any: One-Reference 6D Pose Estimation for Any Object},
  author={Liu, Mengya and Li, Siyuan and Chhatkuli, Ajad and Truong, Prune and Van Gool, Luc and Tombari, Federico},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={6457--6467},
  year={2025}
}

Acknowledgements

This project is built upon OV9D and Oryon. We thank the authors for open-sourcing their great work!
