haiantyz/HuGDiffusion

HuGDiffusion: Generalizable Single-Image Human Rendering via 3D Gaussian Diffusion

Accepted by IEEE Transactions on Visualization and Computer Graphics 2025

Please follow PointNet++, 3DGS, PointTransformerV3, and PVD to set up the Python environment.

Please follow GaussianCube or Trellis to prepare the rendered RGB images (saved as PNG) and the corresponding camera parameters (saved as JSON). We provide an example folder containing the PNG and JSON files. Note that you may need to slightly adapt some code to account for different camera parameter conventions.
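As a minimal sketch of what adapting to a camera convention involves, the snippet below parses one camera entry and builds a pinhole intrinsics matrix plus world-to-camera extrinsics. The key names (`fov`, `width`, `height`, `cam2world`) are hypothetical; the actual layout depends on whether the views were rendered with GaussianCube or Trellis.

```python
import json
import numpy as np

# Hypothetical camera-parameter layout; real key names depend on the
# renderer (GaussianCube vs. Trellis) that produced the JSON file.
sample_json = json.dumps({
    "fov": 40.0,                       # vertical field of view, degrees
    "width": 512, "height": 512,
    "cam2world": np.eye(4).tolist(),   # camera-to-world extrinsic matrix
})

def load_camera(entry):
    """Build intrinsics K and world-to-camera extrinsics E from one entry."""
    h, w = entry["height"], entry["width"]
    focal = 0.5 * h / np.tan(0.5 * np.radians(entry["fov"]))
    K = np.array([[focal, 0.0,   w / 2.0],
                  [0.0,   focal, h / 2.0],
                  [0.0,   0.0,   1.0]])
    # World-to-camera is the inverse of the stored camera-to-world matrix.
    E = np.linalg.inv(np.array(entry["cam2world"]))
    return K, E

K, E = load_camera(json.loads(sample_json))
```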

# First stage: per-person overfitting. This stage produces an npz file containing the 3DGS attributes.
python overfit_firststage.py

# Second stage: all-person alignment. This stage first produces a ckpt file.
python unifyalign_secondstage.py

# Load the ckpt file and run the inference script to obtain a distribution-unified proxy 3DGS dataset.
python unifyalign_inference.py
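To give a sense of what the per-person npz file holds, the sketch below writes and reads back a file with the standard 3DGS attribute set (centers, opacity, scales, rotation quaternions, color coefficients). These key names and shapes are assumptions for illustration; the keys actually written by the scripts above may differ.

```python
import io
import numpy as np

# Hypothetical attribute names/shapes for a per-person 3DGS npz file.
N = 4096
buf = io.BytesIO()
np.savez(buf,
         xyz=np.zeros((N, 3), np.float32),       # Gaussian centers
         opacity=np.zeros((N, 1), np.float32),   # per-Gaussian opacity
         scale=np.zeros((N, 3), np.float32),     # anisotropic scales
         rotation=np.zeros((N, 4), np.float32),  # rotation quaternions
         sh=np.zeros((N, 3), np.float32))        # DC color (SH) coefficients
buf.seek(0)

# Inspect what the file contains before feeding it to later stages.
gaussians = np.load(buf)
shapes = {k: gaussians[k].shape for k in gaussians.files}
```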

Please follow HaP to generate the human point cloud. Please refer to InstructPix2Pix, ControlNet, SiTH, and PSHuman for generating the back-view images.
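Point clouds produced by an external tool such as HaP typically need to be brought into a canonical coordinate range before use. The helper below centers a cloud and scales its longest bounding-box edge to 1; this is a common convention shown for illustration, and the repository's actual normalization may differ.

```python
import numpy as np

def normalize_to_unit_cube(points):
    """Center a point cloud and scale its longest bounding-box edge to 1,
    so all coordinates fall in [-0.5, 0.5]. Shown as a common convention;
    the repository's actual normalization may differ."""
    center = (points.max(axis=0) + points.min(axis=0)) / 2.0
    centered = points - center
    scale = np.ptp(centered, axis=0).max()   # longest bounding-box edge
    return centered / scale

rng = np.random.default_rng(0)
pts = rng.random((1000, 3)) * np.array([0.6, 1.8, 0.4])  # human-like extents
norm = normalize_to_unit_cube(pts)
```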

# Train the diffusion model.
python traindiffusion.py

# Train the refinement model.
python traindiffusionrefine.py
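For readers unfamiliar with the objective being trained here, the sketch below shows one schematic DDPM-style noise-prediction step over a per-point attribute tensor, written in plain NumPy for brevity. It only illustrates the shape of the computation; the actual model, noise schedule, and attribute layout are defined in traindiffusion.py.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 2e-2, T)              # linear variance schedule
alphas_cumprod = np.cumprod(1.0 - betas)

def training_step(model, x0):
    """x0: (B, N, C) clean attribute tensor; returns the MSE loss."""
    B = x0.shape[0]
    t = rng.integers(0, T, size=B)              # random timestep per sample
    noise = rng.standard_normal(x0.shape)
    a = alphas_cumprod[t].reshape(B, 1, 1)
    x_t = np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise  # forward diffusion
    pred = model(x_t, t)                        # network predicts the noise
    return np.mean((pred - noise) ** 2)

zero_model = lambda x, t: np.zeros_like(x)      # stand-in for a real network
loss = training_step(zero_model, rng.standard_normal((2, 64, 14)))
```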

If you find our paper helpful, please cite it:

@article{tang2025human,
  title={HuGDiffusion: Generalizable Single-Image Human Rendering via 3D Gaussian Diffusion},
  author={Tang, Yingzhi and Zhang, Qijian and Hou, Junhui},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2025},
  publisher={IEEE}
}

About

Official PyTorch implementation of HuGDiffusion
