Follow the step-by-step guide below to run HFGaussian.
```
conda env create --file environment.yml
conda activate hfgaussian
```
Then, compile `diff-gaussian-rasterization` from the 3DGS repository:
```
cd diff-gaussian-rasterization
pip install -e .
cd ..
```
(Optional) RAFT-Stereo provides a faster CUDA implementation of the correlation sampler, which speeds up the model without impacting performance:
```
git clone https://github.com/princeton-vl/RAFT-Stereo.git
cd RAFT-Stereo/sampler && python setup.py install && cd ../..
```
If you compiled this CUDA implementation, set `corr_implementation='reg_cuda'` in `config/stereo_human_config.py`; otherwise set `corr_implementation='reg'`.
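Whether the compiled sampler is importable can be checked programmatically before editing the config. A minimal sketch, assuming the sampler installs as a module named `corr_sampler` (the module name and helper below are assumptions, not part of the repository):

```python
import importlib.util

def choose_corr_implementation(sampler_module="corr_sampler"):
    """Pick the correlation backend based on whether the CUDA sampler imports."""
    if importlib.util.find_spec(sampler_module) is not None:
        return "reg_cuda"  # compiled CUDA sampler is available
    return "reg"           # fall back to the default implementation

# Example: prints the value to put into config/stereo_human_config.py
print(choose_corr_implementation())
```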
Code related to dataset generation can be found inside the `prepare_data` folder.
- Use the `prepare_data/render_data` script to generate the dataset from THuman2.0.
- After rendering the images, use `detectron2/projects/DensePose` to generate DensePose annotations for each image. Use the code from the original detectron2 GitHub repository.
- Stage 1: pretrain the depth prediction model. Set `data_root` in `stage1.yaml` to the path of the unzipped `render_data` folder.

```
python train_stage1.py
```
- Stage 2: train the full model. Set `data_root` in `stage2.yaml` to the path of the unzipped `render_data` folder, and set `stage1_ckpt` in `stage2.yaml` to the path of the pretrained Stage 1 checkpoint.

```
python train_stage2.py
```
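For reference, the relevant `stage2.yaml` entries might look like the fragment below. Only the key names `data_root` and `stage1_ckpt` come from the steps above; the values are placeholders:

```
data_root: PATH/TO/render_data        # unzipped dataset folder
stage1_ckpt: PATH/TO/STAGE1_CKPT.pth  # pretrained Stage 1 checkpoint
```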
- THuman2.0:

```
python test_eval.py
```

- THuman4.0:

```
python test_eval_th4.py
```

- Real-world data:

```
python test_real_data.py \
    --test_data_root 'PATH/TO/REAL_DATA' \
    --ckpt_path 'PATH/TO/stage2_final.pth' \
    --src_view 0 1 \
    --ratio=0.5
```
- Freeview rendering: run the following command to interpolate novel views between the source views, and set `novel_view_nums` to choose the number of novel viewpoints.
```
python test_view_interp.py \
    --test_data_root 'PATH/TO/RENDER_DATA/val' \
    --ckpt_path 'PATH/TO/stage2_final.pth' \
    --novel_view_nums 5
```
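Conceptually, freeview interpolation blends the two source camera poses. Below is a generic, self-contained sketch of pose interpolation (linear interpolation for camera centres, quaternion slerp for rotations); it is an illustration of the idea, not the repository's actual `test_view_interp.py` logic:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:  # take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:  # nearly parallel: linear interpolation, then renormalise
        q = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def interpolate_poses(pose0, pose1, n):
    """Yield n novel poses strictly between two source views.

    A pose is (quaternion (w, x, y, z), camera centre (x, y, z)).
    """
    novel = []
    for k in range(1, n + 1):
        t = k / (n + 1)
        q = slerp(pose0[0], pose1[0], t)
        c = tuple(a + t * (b - a) for a, b in zip(pose0[1], pose1[1]))
        novel.append((q, c))
    return novel
```

Passing `n=5` mirrors the `--novel_view_nums 5` example above: five evenly spaced viewpoints between the two source cameras.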