Clone this repo:

```bash
git clone https://github.com/Ha0Tang/BiGraphGAN
cd BiGraphGAN/facial
```

Please follow C2GAN to directly download the facial dataset. This repository uses the same dataset format as SelectionGAN and C2GAN, so you can use the same data for all of these methods.
```bash
cd scripts/
sh download_bigraphgan_model.sh facial
```

Note: if the download does not work the first time, please run the command a second time.
Then, to test with the pretrained model:
- Change several parameters in `test_facial.sh`.
- Run `sh test_facial.sh` for testing.

To train and test your own model:
- Change several parameters in `train_facial.sh`.
- Run `sh train_facial.sh` for training.
- Change several parameters in `test_facial.sh`.
- Run `sh test_facial.sh` for testing.
For your convenience, you can directly download the images produced by the authors for qualitative comparisons in your own papers!

```bash
cd scripts/
sh download_bigraphgan_result.sh facial
```

We adopt SSIM, PSNR, and LPIPS for evaluation of Market-1501. Please refer to C2GAN for more details.
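As a rough illustration of the PSNR part of this evaluation protocol, a minimal NumPy sketch might look like the following. This is not the repo's actual evaluation script; SSIM and LPIPS are typically computed with dedicated libraries (e.g. scikit-image's `structural_similarity` and the `lpips` package).

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (in dB) between two same-shaped images.

    A hedged sketch of the standard definition, assuming 8-bit images
    (max_val=255): PSNR = 10 * log10(max_val^2 / MSE).
    """
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images have infinite PSNR
    return 10.0 * np.log10((max_val ** 2) / mse)
```

Higher PSNR indicates the generated image is closer (pixel-wise) to the ground truth.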
This source code is inspired by both C2GAN and SelectionGAN.
XingGAN | GestureGAN | C2GAN | SelectionGAN | Guided-I2I-Translation-Papers
If you use this code for your research, please cite our papers.
BiGraphGAN

```
@inproceedings{tang2020bipartite,
  title={Bipartite Graph Reasoning GANs for Person Image Generation},
  author={Tang, Hao and Bai, Song and Torr, Philip HS and Sebe, Nicu},
  booktitle={BMVC},
  year={2020}
}
```
If you use the original XingGAN, GestureGAN, C2GAN, and SelectionGAN model, please cite the following papers:
XingGAN

```
@inproceedings{tang2020xinggan,
  title={XingGAN for Person Image Generation},
  author={Tang, Hao and Bai, Song and Zhang, Li and Torr, Philip HS and Sebe, Nicu},
  booktitle={ECCV},
  year={2020}
}
```
GestureGAN

```
@article{tang2019unified,
  title={Unified Generative Adversarial Networks for Controllable Image-to-Image Translation},
  author={Tang, Hao and Liu, Hong and Sebe, Nicu},
  journal={IEEE Transactions on Image Processing (TIP)},
  year={2020}
}

@inproceedings{tang2018gesturegan,
  title={GestureGAN for Hand Gesture-to-Gesture Translation in the Wild},
  author={Tang, Hao and Wang, Wei and Xu, Dan and Yan, Yan and Sebe, Nicu},
  booktitle={ACM MM},
  year={2018}
}
```
C2GAN

```
@inproceedings{tang2019cycleincycle,
  title={Cycle In Cycle Generative Adversarial Networks for Keypoint-Guided Image Generation},
  author={Tang, Hao and Xu, Dan and Liu, Gaowen and Wang, Wei and Sebe, Nicu and Yan, Yan},
  booktitle={ACM MM},
  year={2019}
}
```
SelectionGAN

```
@inproceedings{tang2019multi,
  title={Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation},
  author={Tang, Hao and Xu, Dan and Sebe, Nicu and Wang, Yanzhi and Corso, Jason J and Yan, Yan},
  booktitle={CVPR},
  year={2019}
}

@article{tang2020multi,
  title={Multi-Channel Attention Selection GANs for Guided Image-to-Image Translation},
  author={Tang, Hao and Xu, Dan and Yan, Yan and Corso, Jason J and Torr, Philip HS and Sebe, Nicu},
  journal={arXiv preprint arXiv:2002.01048},
  year={2020}
}
```
If you have any questions, comments, or bug reports, feel free to open a GitHub issue, submit a pull request, or e-mail the author Hao Tang ([email protected]).
