High Performance Uncertainty Quantification (HPUQ) – StDDM provides deterministic and stochastic finite-element solvers for elliptic PDEs using two-level preconditioned conjugate gradient methods (PCGM) embedded inside a domain-decomposition (DDM) framework. The code couples hand-written FEM assembly routines with PETSc-based sparse blocks, supports both deterministic and stochastic (KLE/PCE) representations of coefficients, and scales to large MPI runs on Compute Canada clusters.
- Hybrid FEM/PETSc workflow: local dense assembly (`assembly.F90`, `variationalform.F90`) feeding sparse PETSc objects built in `PETScAssembly.F90`.
- Multiple DDM solvers: classic implementations in `solvers.F90` plus the PETSc-based two-level NNC PCGM (`PETScSolvers.F90` + `PETScommon.F90`).
- Automated mesh preprocessing (`preprocmesh*.F90`) for global and per-subdomain partitions exported by Gmsh.
- Turnkey scripts under `scripts/` to generate meshes/KLE data (see `data/kle_pce/`), run on desktops (`scripts/run_ddm.sh`), or submit SLURM jobs on Cedar, Graham, or Niagara with matching makefiles.
- Reproducible stochastic data inputs (defaults in `data/stochastic/`, plus many higher-order datasets stored under `data/kle_pce/`).
- `src/`: all Fortran sources. `main.F90` orchestrates MPI, data ingestion, PETSc solver selection, and I/O; `common.F90`/`myCommon.F90` expose shared utilities; `variationalform.F90`, `assembly.F90`, `PETScAssembly.F90`, `PETScommon.F90`, and `PETScSolvers.F90` implement the FEM/PETSc solver stack; `solvers.F90` provides legacy DDM solvers; `preprocmesh*_*.F90` extract global/local mesh metadata.
- `data/geometry/square.geo`: sample geometry for partitioning; adjust `lc` for refinement.
- `data/stochastic/{cijk,multipliers,omegas}`: default stochastic data consumed by `main.F90`.
- `data/kle_pce/`: KLE/PCE generators (`KLE_PCE_Data*.F90`) plus all precomputed `cijk****`, `multiIndex****`, `nZijk****`, `multipliers*`, and `omegas*` tables.
- `scripts/`: automation entry points (`generate_data*.sh`, `preprocess.sh`, `run_ddm*.sh`, `clean.sh`, `MeshDataClean.sh`, `NonPCEdatClean.sh`).
- `docs/PETSc_Install.pdf`: walkthrough for compiling PETSc on Compute Canada systems.
- `makefile` and `makefile_{cedar,graham,niagara}`: PETSc-aware build recipes for local and cluster environments.
- Compiler: GNU Fortran 4.9.2+ (Intel Fortran tested on CC clusters).
- MPI: Open MPI 1.8+, or vendor-provided MPI on SLURM systems.
- PETSc: ≥ 3.7.5 (3.9.2 run files included). Define `PETSC_DIR` and `PETSC_ARCH`.
- Gmsh: ≥ 2.8.5 for mesh generation. Scripts expect 3.0.4 but accept other versions with a matching CLI.
- ParaView: ≥ 4.1.0 for VTK visualization.
- MATLAB/UQTK (optional): for advanced stochastic post-processing of generated data sets.
Ensure PETSc libraries are visible (module load or manual installation) before invoking make.
- Clone the repository and switch into it.
- Pick the makefile closest to your machine (e.g., `makefile_cedar`, `makefile_graham`, `makefile_niagara`) and copy/rename it to `makefile`. Update `PETSC_DIR`, `PETSC_ARCH`, compiler, BLAS/LAPACK, and optimization flags.
- Export the PETSc environment variables and launcher path if PETSc is not already on your `PATH`, e.g.:

  ```shell
  export PETSC_DIR=/path/to/petsc
  export PETSC_ARCH=arch-linux2-c-debug
  export PATH=$PETSC_DIR/$PETSC_ARCH/bin:$PATH
  ```
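Before invoking `make`, it can help to sanity-check the environment. The helper below is illustrative (it is not part of the repository scripts); it verifies that `PETSC_DIR`/`PETSC_ARCH` are set and that PETSc's `mpiexec` is where the run scripts expect it:

```shell
# Illustrative helper (not part of the repository scripts): verify the
# PETSc environment variables before invoking make.
check_petsc_env() {
  if [ -z "$PETSC_DIR" ] || [ -z "$PETSC_ARCH" ]; then
    echo "error: PETSC_DIR and PETSC_ARCH must be set" >&2
    return 1
  fi
  if [ ! -x "$PETSC_DIR/$PETSC_ARCH/bin/mpiexec" ]; then
    # Not fatal: some clusters provide mpiexec via modules instead.
    echo "warning: no mpiexec under $PETSC_DIR/$PETSC_ARCH/bin" >&2
  fi
  echo "PETSc environment OK: $PETSC_DIR ($PETSC_ARCH)"
}

# Usage: check_petsc_env && make all
```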
All script invocations below assume you run them from the repository root so relative paths resolve correctly.
- Set the number of partitions (`NP`) inside the appropriate `scripts/generate_data*.sh` script and update the absolute path to your Gmsh binary.
- Run `bash scripts/generate_data.sh` (or the Cedar/Graham/Niagara variants). The script:
  - Invokes `gmsh -2 data/geometry/square.geo -part $NP -o gmsh.msh`.
  - Compiles and runs each `preprocmesh*.F90` program to emit global/local data (`points.dat`, `edges.dat`, `triangles.dat`, `meshdim.dat`, `nbnodes####.dat`, measurement-node lists, etc.).
- On workstations you can alternatively call `bash scripts/preprocess.sh`, which mirrors the same commands using the system `gmsh`.
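After preprocessing, each subdomain should have its own data files. A quick consistency check can catch a partition/`NP` mismatch before launching MPI. The helper below is illustrative and assumes the four-digit `nbnodes####.dat` numbering starts at 0001; adjust if your output starts at 0000:

```shell
# Illustrative helper: verify one nbnodes file exists per partition.
# Assumes subdomains are numbered 0001..NP; adjust if yours start at 0000.
check_partition_files() {
  np=$1
  missing=0
  i=1
  while [ "$i" -le "$np" ]; do
    f=$(printf 'nbnodes%04d.dat' "$i")
    if [ ! -f "$f" ]; then
      echo "missing: $f"
      missing=$((missing + 1))
    fi
    i=$((i + 1))
  done
  echo "$missing file(s) missing out of $np"
}

# Usage: check_partition_files "$NP"
```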
- Enter `data/kle_pce/`.
- Decide the stochastic dimension (`nDim`) and polynomial order (`nOrd`) and edit `KLE_PCE_Data*.F90` accordingly (or choose the pre-generated dataset that matches your case, e.g., `multiIndex00030025.dat`).
- Run `gfortran KLE_PCE_Data.F90 -O2 -o kle_gen && ./kle_gen`.
- Copy or link the produced `cijk****`, `multiIndex****`, `nZijk****`, `multipliers*`, and `omegas*` files expected by `main.F90`. The global defaults in `data/stochastic/` (`cijk`, `multipliers`, `omegas`) cover 9 KLE modes; override them if you use larger tables.
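Staging the generated tables can be scripted. The helper below is a sketch, not a repository script; `main.F90`'s exact expected filenames are defined in the source, so it simply copies every matching table into a destination directory:

```shell
# Illustrative helper: copy KLE/PCE tables from a source directory
# (e.g. data/kle_pce) into the directory where the solver runs.
stage_kle_tables() {
  src=$1
  dest=$2
  mkdir -p "$dest"
  copied=0
  for pat in 'cijk*' 'multiIndex*' 'nZijk*' 'multipliers*' 'omegas*'; do
    for f in "$src"/$pat; do
      if [ -f "$f" ]; then
        cp "$f" "$dest/"
        copied=$((copied + 1))
      fi
    done
  done
  echo "staged $copied table file(s) into $dest"
}

# Usage: stage_kle_tables data/kle_pce .
```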
```shell
make all   # builds a.out by linking the PETSc libraries listed in the makefile
```

The top-level makefile defines `LIST` with all Fortran objects and reuses PETSc's variables/rules. Ensure PETSc's `bin/mpiexec` is in `PATH` before building on shared clusters where login nodes restrict custom MPI binaries.
Pass the number of MPI ranks (`NP`, which must match the number of subdomains) to `scripts/run_ddm.sh` or the cluster-specific launch script.
```shell
bash scripts/preprocess.sh   # if not already done
make all
$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np $NP ./a.out [-ksp_monitor ...]
```

Enable PETSc diagnostics by appending the options shown at the bottom of each `scripts/run_ddm*.sh`.
- Cedar & Graham: Copy `scripts/generate_data_{cedar,graham}.sh` to `scripts/generate_data.sh`, run it in `$SCRATCH`, then adjust `scripts/run_ddm_{cedar,graham}.sh` (account, walltime, nodes, tasks, PETSc modules). Submit with `sbatch scripts/run_ddm_cedar.sh` or `sbatch scripts/run_ddm_graham.sh`. Both scripts load `nixpkgs/16.09` and `intel/2016.4` + OpenMPI 2.1.1 by default.
- Niagara: Files must reside in `$SCRATCH`. Load `CCEnv` and `StdEnv`, then `module load nixpkgs/16.09 intel/2016.4 openmpi/2.1.1`. Submit `sbatch scripts/run_ddm_niagara.sh`. Niagara nodes expose 40 cores, so choose `NP` in multiples of 40.
`scripts/run_ddm.sh` remains a template for other SLURM/PBS systems; adapt the walltime, modules, and `mpiexec` arguments as needed.
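For a new SLURM system, a minimal batch script might look like the sketch below. The account name, walltime, memory, and executable path are placeholders, and the module line follows the Cedar/Graham defaults above; adapt all of them to your site:

```shell
#!/bin/bash
#SBATCH --account=def-yourpi      # placeholder: your allocation
#SBATCH --time=01:00:00           # placeholder walltime
#SBATCH --nodes=1
#SBATCH --ntasks=40               # must equal the NP used for partitioning
#SBATCH --mem-per-cpu=4G          # placeholder; see Troubleshooting

module load nixpkgs/16.09 intel/2016.4 openmpi/2.1.1

cd "$SLURM_SUBMIT_DIR"
"$PETSC_DIR/$PETSC_ARCH/bin/mpiexec" -np "$SLURM_NTASKS" ./a.out -ksp_monitor
```

This is a configuration sketch, not a tested script; compare it against the bundled `scripts/run_ddm_*.sh` before submitting.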
- Solution vectors per subdomain plus assembled global data are written as `*.dat` files and `out_deterministic.vtk`. Load the VTK file with `paraview out_deterministic.vtk`.
- Measurement files (`measnodes*.dat`, `measlocs.dat`) are produced by `preprocmesh_meas*.F90` for Bayesian filtering or Kalman updates downstream.
- Mesh: `points.dat`, `edges.dat`, `triangles.dat`, per-domain `nodes####.dat`, `tri####.dat`, `bnodes####.dat`, etc.
- Boundary metadata: `boundary_nodes.dat`, `meshdim.dat`, `dbounds.dat`, `nbnodes####.dat`, `corn####.dat`, `rema####.dat`.
- Stochastic inputs: defaults in `data/stochastic/` (`cijk`, `multipliers`, `omegas`) plus the high-order tables under `data/kle_pce/` (`multiIndex****.dat`, `nZijk****.dat`, `multipliers*`, `omegas*`, `params.dat`).
- Measurement: `measnodes####.dat`, `nmeas####.dat`, `measlocs.dat`.
- Solver outputs: `Ui`, `Ub`, `Ub_g`, `out_deterministic.vtk`, and PETSc log files if `-log_summary` is enabled.
- `scripts/clean.sh`: removes compiled objects, modules, `.msh`, `.vtk`, and `.dat` files, and binaries.
- `scripts/MeshDataClean.sh`: deletes mesh-related `.dat` files when regenerating partitions.
- `scripts/NonPCEdatClean.sh`: extends the mesh clean to PCE/KLE-specific files so stochastic datasets can be re-derived safely.
Always clean stale mesh data before re-running `scripts/generate_data*.sh` with a different `NP`.
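A quick way to confirm the clean actually removed the old mesh data is to count leftover `.dat` files; the helper below is illustrative and checks only the top level of a directory:

```shell
# Illustrative helper: count leftover .dat files in a directory, a quick
# sanity check before regenerating partitions with a different NP.
count_stale_dat() {
  find "$1" -maxdepth 1 -name '*.dat' | wc -l
}

# Usage: count_stale_dat .   # should print 0 after a successful clean
```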
- MPI rank mismatch: Ensure the `NP` used in mesh partitioning matches `mpiexec -np`; otherwise `main.F90` will fail when trying to open per-domain files.
- PETSc paths: `scripts/run_ddm.sh` expects `${PETSC_DIR}/${PETSC_ARCH}/bin/mpiexec`; set `PETSC_ARCH` accordingly or edit the script to use the `mpiexec` on `PATH`.
- Memory footprint: Large `NP` × `npceout` combinations increase the size of the PETSc vectors. Monitor `--mem-per-cpu` in SLURM scripts.
- KLE/PCE selection: When changing `nDim`/`nOrd`, regenerate both the triplet products and the multi-index files inside `data/kle_pce/` and update the filenames consumed in `main.F90`.
- Documentation: For PETSc installation hints, see `docs/PETSc_Install.pdf`. For algorithmic details, refer to the thesis cited below.
Scalable Domain Decomposition Methods for Nonlinear and Time-Dependent Stochastic Systems
Authors: Vasudevan, Padillath and Sharma, Sudhi
Institution: Carleton University (2023)
DOI: 10.22215/etd/2023-15817
<details>
<summary>Click to expand BibTeX citation</summary>

```bibtex
@phdthesis{vasudevan2023scalable,
  title={Scalable Domain Decomposition Methods for Nonlinear and Time-Dependent Stochastic Systems},
  author={Vasudevan, Padillath and Sharma, Sudhi},
  year={2023},
  school={Carleton University},
  doi={10.22215/etd/2023-15817}
}
```

</details>
## Contact
Sudhi Sharma P V — sudhisharmapadillath@gmail.com