by Yitong Deng, Hong-Xing Yu, Jiajun Wu, and Bo Zhu.
Our paper can be found at: https://arxiv.org/abs/2301.11494.
Video results can be found at: https://yitongdeng.github.io/vortex_learning_webpage.
The environment can be installed by conda via:
conda env create -f environment.yml
conda activate vortex_env

Our code is tested on Windows 10 and Ubuntu 20.04.
The 5 videos (2 synthetic and 3 real-world) used in our paper can be downloaded from Google Drive. Once downloaded, place the unzipped data folder to the project root directory.
First, run the command below to pretrain the trajectory network so that the initial vortices are regularly spaced, cover the simulation domain, and remain stationary.
python train.py --config configs/synthetic_1.txt --run_pretrain True

Once completed, navigate to pretrained/exp_synthetic_1/tests/ and check that the plotted dots are regularly spaced and remain roughly stationary. A file pretrained.tar should also appear at pretrained/exp_synthetic_1/ckpts/.
Then, run the command below to train.
python train.py --config configs/synthetic_1.txt

Checkpoints and testing results are written to logs/exp_synthetic_1/tests/ once every 1000 training iterations.
When run on our Windows machine with an AMD Ryzen Threadripper 3990X and an NVIDIA RTX A6000, this is the final testing result we get:
Note that since our PyTorch code includes nondeterministic components (e.g., the CUDA grid sampler), each training session is not expected to produce exactly the same outcome.
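If you want runs to be as repeatable as possible, you can seed the usual RNGs before training. The sketch below is a generic PyTorch seeding helper, not part of our codebase; note that nondeterministic CUDA kernels (such as the grid sampler backward pass) can still introduce small run-to-run differences even with fixed seeds.

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    # Seed Python, NumPy, and PyTorch RNGs. This narrows, but does not
    # eliminate, run-to-run variation: some CUDA ops (e.g. grid_sample's
    # backward) have no deterministic implementation.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds all CUDA devices

seed_everything(42)
a = torch.rand(3)
seed_everything(42)
b = torch.rand(3)
# With the same seed, CPU-side sampling is reproducible: a equals b
```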
To pretrain and train on the remaining examples, run:

python train.py --config configs/synthetic_2.txt --run_pretrain True
python train.py --config configs/synthetic_2.txt

python train.py --config configs/real_1.txt --run_pretrain True
python train.py --config configs/real_1.txt

python train.py --config configs/real_2.txt --run_pretrain True
python train.py --config configs/real_2.txt

python train.py --config configs/real_3.txt --run_pretrain True
python train.py --config configs/real_3.txt

We assume the input is a Numpy array of shape [num_frames, 256, 256, 3], with the last dimension representing RGB pixel values between 0.0 and 1.0, located at data/[your_name_here]/imgs.npy. For fluid videos with boundaries (like our real-world examples), a Numpy array of shape [256, 256] representing the signed distance field to the boundary must also be supplied at data/[your_name_here]/sdf.npy. We assume the signed distance has a unit of pixels.
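To sanity-check the expected layout before plugging in your own video, the sketch below writes placeholder arrays in the format described above. The dataset name my_video, the frame count, and the disk-shaped boundary are all illustrative assumptions, not anything our code requires.

```python
import os
import numpy as np

num_frames = 8  # arbitrary; use your video's actual frame count
os.makedirs("data/my_video", exist_ok=True)

# RGB frames: shape [num_frames, 256, 256, 3], values in [0.0, 1.0].
imgs = np.random.rand(num_frames, 256, 256, 3).astype(np.float32)
np.save("data/my_video/imgs.npy", imgs)

# Signed distance field to the boundary, in pixel units: shape [256, 256].
# Illustrative example: distance to a centered disk of radius 100 px
# (negative inside the fluid domain, positive outside).
yy, xx = np.mgrid[0:256, 0:256]
sdf = np.sqrt((xx - 128.0) ** 2 + (yy - 128.0) ** 2) - 100.0
np.save("data/my_video/sdf.npy", sdf.astype(np.float32))
```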
For videos of higher dynamical complexity, we also encourage experimenting with the number of vortex particles. This is controlled by the vorts_num_x and vorts_num_y parameters, hard-coded to 4 in train.py, which can be increased as needed.
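For intuition, a regularly spaced vorts_num_x by vorts_num_y initialization (the layout the pretraining step targets) can be sketched as follows. The function name and the unit square domain are illustrative assumptions, not the actual internals of train.py.

```python
import numpy as np

def init_vortex_grid(vorts_num_x: int = 4, vorts_num_y: int = 4) -> np.ndarray:
    # Place vortices at the cell centers of a regular grid over a unit
    # domain, mirroring the "regularly spaced" pretraining target.
    xs = (np.arange(vorts_num_x) + 0.5) / vorts_num_x
    ys = (np.arange(vorts_num_y) + 0.5) / vorts_num_y
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    return np.stack([gx.ravel(), gy.ravel()], axis=-1)  # shape [N, 2]

pos = init_vortex_grid(4, 4)  # 16 particles for the default 4x4 setting
```

Increasing vorts_num_x and vorts_num_y simply densifies this grid, giving the model more particles with which to represent complex vorticity fields.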
If you find our paper or code helpful, please consider citing:
@inproceedings{deng2023vortex,
title={Learning Vortex Dynamics for Fluid Inference and Prediction},
author={Yitong Deng and Hong-Xing Yu and Jiajun Wu and Bo Zhu},
booktitle={Proceedings of the International Conference on Learning Representations},
year={2023},
}