v2406: New and improved parallel operation
In addition to local agglomeration, the GAMG solver supports combining matrices across processors using processor agglomeration (see the configuration sketch after the list below). This has been shown to be particularly effective at larger core counts due to:
- lowering the number of cores acting on the coarsest level, where most of the global reductions happen;
- increasing the amount of implicitness for all operations, e.g. smoothing and preconditioning.
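As an indicative example, processor agglomeration is selected per solver in system/fvSolution via the processorAgglomerator keyword. The settings below are a minimal sketch based on recent OpenFOAM releases; check the GAMG documentation of your installation for the exact options available.

```
p
{
    solver          GAMG;
    smoother        GaussSeidel;
    tolerance       1e-06;
    relTol          0.01;

    // Local (per-processor) cell agglomeration
    agglomerator    faceAreaPair;
    nCellsInCoarsestLevel 10;
    mergeLevels     1;

    // Combine matrices across processors on coarse levels, so that fewer
    // cores act on the coarsest level and smoothing/preconditioning become
    // more implicit across former processor boundaries
    processorAgglomerator masterCoarsest;
}
```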
In the previous release (v2312), the mechanism was rewritten to allow any type of coupled boundary condition to be included. However, this broke the existing, processor-boundary-only, processor agglomeration; that regression has been fixed in this release.
The following image shows the performance benefit for an external aerodynamics investigation:
Source code
Tutorial
Gitlab
A fix has been applied to the overlapping communication introduced in merge request !641, which could hold references to MPI requests that had already completed. The problem could occur with large core counts, fast interconnects, and pipelined linear solvers.
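For context, the hazard is generic to non-blocking MPI rather than specific to the fixed code: a cached handle, or an index into a shared request list, becomes stale once the request completes and its slot is reused. The sketch below is an illustrative, self-contained C++ example, not the OpenFOAM source; all names in it are hypothetical.

```cpp
// Generic illustration of the stale-request hazard; not OpenFOAM code.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, nProcs = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nProcs);

    const int peer = (rank + 1) % nProcs;
    std::vector<double> sendBuf(8, rank), recvBuf(8);

    // Shared list of outstanding requests, as a communication layer might keep
    std::vector<MPI_Request> requests(2, MPI_REQUEST_NULL);
    const std::size_t myRecvSlot = 0;  // cached 'reference' into the list

    MPI_Irecv(recvBuf.data(), 8, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
              &requests[0]);
    MPI_Isend(sendBuf.data(), 8, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
              &requests[1]);

    // Completing the requests resets the handles to MPI_REQUEST_NULL...
    MPI_Waitall(2, requests.data(), MPI_STATUSES_IGNORE);

    // ...so the cached slot no longer refers to a live request. If the slot
    // is later reused for a different operation, code still holding
    // 'myRecvSlot' would wait on the wrong request. Defensive usage checks
    // the handle before waiting:
    if (requests[myRecvSlot] != MPI_REQUEST_NULL)
    {
        MPI_Wait(&requests[myRecvSlot], MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```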
Source code
- $FOAM_SRC/finiteVolume/fields/fvPatchFields/constraint/cyclicAMI/cyclicAMIFvPatchField.C
- $FOAM_SRC/finiteVolume/fields/fvPatchFields/constraint/cyclicACMI/cyclicACMIFvPatchField.C
- $FOAM_SRC/meshTools/AMIInterpolation/GAMG/interfaceFields/cyclicAMIGAMGInterfaceField/cyclicAMIGAMGInterfaceField.C
- $FOAM_SRC/meshTools/AMIInterpolation/GAMG/interfaceFields/cyclicACMIGAMGInterfaceField/cyclicACMIGAMGInterfaceField.C
Gitlab