This project focuses on a multi-GPU parallel implementation of the Lattice Boltzmann Method (LBM). We use OpenACC to accelerate the code on a single GPU and MPI for inter-GPU communication.
Bold marks completed modules, italic marks modules that are coded but not yet documented, and bold italic marks the module we are currently working on.
- MPI: Pure MPI implementation for multiple processors, including communication overlap (non-blocking version) and scalable decomposition into sub-domains (sub-domains arranged in 3 dimensions). Contains:
  - Laplace: the well-known Jacobi iteration, used for testing
  - Lid_driven_cavity: lid-driven cavity flow
  - Thermal_flow: thermal flow
  - Particle_flow: particle flow
  - Multi_Phase_flow: multiphase flow
- OpenACC: OpenACC-accelerated codes for a single GPU.
- MPI + OpenACC: Multi-GPU solver.