
In a recent blog post, we discussed how to use the Domain Decomposition solver for computing large problems in the COMSOL Multiphysics® software and parallelizing computations on clusters. We showed how to save memory through a spatial decomposition of the degrees of freedom on clusters and single-node computers with the Recompute and clear option. To further illustrate the Domain Decomposition solver and highlight its reduced memory usage, let’s look at a thermoviscous acoustics problem: simulating the transfer impedance of a perforate.

A Thermoviscous Acoustics Example: Transfer Impedance of a Perforate

If you work with computationally large problems, the Domain Decomposition solver can increase efficiency by dividing the problem’s spatial domain into subdomains and computing the subdomain solutions either concurrently or sequentially on the fly. We have already learned about using the Domain Decomposition solver as a preconditioner for an iterative solver and discussed how it can enable simulations that would otherwise be constrained by the available memory. Today, we will take a detailed look at how to use this functionality with a thermoviscous acoustics example.
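To make the subdomain idea concrete, below is a minimal sketch of an overlapping additive Schwarz preconditioner, the building block behind domain decomposition preconditioning, used with GMRES on a 1D Poisson model problem in SciPy. The problem size, number of subdomains, and overlap are illustrative assumptions and are unrelated to the perforate model or to the actual COMSOL implementation.

```python
# A minimal sketch (illustrative assumptions throughout): an overlapping additive
# Schwarz preconditioner for GMRES on a 1D Poisson model problem. Each subdomain
# solve reuses a cached LU factorization of the local matrix A_i = R_i A R_i^T.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400                                    # number of unknowns in the model problem
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

n_sub, overlap = 4, 10                     # subdomain count and overlap width (assumed)
size = n // n_sub
subdomains = [np.arange(max(0, i * size - overlap),
                        min(n, (i + 1) * size + overlap)) for i in range(n_sub)]
local_lu = [spla.splu(sp.csc_matrix(A[idx][:, idx])) for idx in subdomains]

def apply_schwarz(r):
    """Additive Schwarz: z = sum_i R_i^T A_i^{-1} R_i r."""
    z = np.zeros_like(r)
    for idx, lu in zip(subdomains, local_lu):
        z[idx] += lu.solve(r[idx])         # local subdomain solve
    return z

M = spla.LinearOperator((n, n), matvec=apply_schwarz)
x, info = spla.gmres(A, b, M=M)            # domain-decomposition-preconditioned GMRES
print("converged" if info == 0 else f"gmres info = {info}")
```

In COMSOL Multiphysics®, the analogous pieces are the Domain Solver (the local subdomain solves) and the Coarse Solver, which we return to below.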

Let’s start with the Transfer Impedance of a Perforate tutorial model, which can be found in the Application Library of the Acoustics Module. This example model uses the Thermoviscous Acoustics, Frequency Domain interface to model a perforate, a plate with a distribution of small perforations or holes.

A simulation of transfer impedance in a perforate.

For this complex simulation, we are interested in the velocity, temperature, and total acoustic pressure fields in the Transfer Impedance of a Perforate model. Let’s see how we can use the Domain Decomposition solver to compute these quantities in situations where the required resolution would otherwise exceed the available memory.

Applying the Settings for the Domain Decomposition Solver in COMSOL Multiphysics®

Let’s take a closer look at how we can set up a Domain Decomposition solver for the perforate model. The original model uses a fully coupled solver with a GMRES iterative solver. As a preconditioner, two hybrid Direct Preconditioners are used; i.e., the preconditioners treat the temperature separately from the velocity and pressure. By default, the hybrid direct preconditioners use PARDISO.

As the mesh is refined, the amount of memory used continues to grow. An important parameter in the model is the minimum thickness of the viscous boundary layer (dvisc), which has a typical size of 50 μm. The perforates themselves are a few millimeters in size. The minimum mesh element size is taken to be dvisc/2. To refine the solution, we divide dvisc by the refinement factors r = 1, 2, 3, and 5. We can insert the domain decomposition preconditioner by right-clicking the Iterative node and selecting Domain Decomposition. Below the Domain Decomposition node, we find the Coarse Solver and Domain Solver nodes.
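As a quick check of the mesh sizing above, the small sketch below evaluates the viscous boundary layer thickness, dvisc = sqrt(2μ/(ρω)), and the resulting minimum element size for each refinement factor. The air properties and the 2 kHz evaluation frequency are illustrative assumptions, not values read from the model file.

```python
# A small check of the mesh sizing: the viscous boundary layer thickness
# dvisc = sqrt(2*mu/(rho*omega)) in air and the resulting minimum element size
# after dividing dvisc by the refinement factor r. Air properties and the 2 kHz
# evaluation frequency are assumed for illustration only.
import math

mu, rho = 1.81e-5, 1.2               # dynamic viscosity [Pa*s], density [kg/m^3] of air (~20 degC)
f = 2000.0                           # assumed frequency [Hz] near the top of the studied range
omega = 2.0 * math.pi * f

dvisc = math.sqrt(2.0 * mu / (rho * omega))    # viscous boundary layer thickness [m]
print(f"dvisc ~ {dvisc * 1e6:.0f} um")         # on the order of 50 um

for r in (1, 2, 3, 5):
    h_min = (dvisc / r) / 2.0                  # minimum element size for refinement factor r
    print(f"r = {r}: minimum element size ~ {h_min * 1e6:.1f} um")
```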

To accelerate the convergence, we need to use the coarse solver. Since we do not want to use an additional coarse mesh, we set Coarse Level > Use coarse level to Algebraic in order to use an algebraic coarse grid correction. On the Domain Solver node, we add two Direct Preconditioners and enable the hybrid settings as they were used in the original model. For the coarse solver, we take the direct solver PARDISO. If we use a Geometric coarse grid correction instead, we can also apply a hybrid direct coarse solver.
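To see why the coarse level accelerates convergence, the following self-contained sketch adds a simple aggregation-based coarse correction, a crude stand-in for an algebraic coarse grid correction, on top of the subdomain solves from the previous sketch. It is a conceptual analogy on the same 1D model problem, not the COMSOL implementation; the hybrid Direct Preconditioners and PARDISO are not represented here.

```python
# A sketch of the coarse-level idea (illustrative assumptions throughout): a
# piecewise-constant, aggregation-based coarse space acts as a stand-in for an
# algebraic coarse grid correction on top of the additive Schwarz subdomain solves.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, n_sub, overlap = 400, 4, 10
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

size = n // n_sub
subdomains = [np.arange(max(0, i * size - overlap),
                        min(n, (i + 1) * size + overlap)) for i in range(n_sub)]
local_lu = [spla.splu(sp.csc_matrix(A[idx][:, idx])) for idx in subdomains]

# Coarse space: one constant basis function per (non-overlapping) subdomain.
R0 = sp.lil_matrix((n_sub, n))
for i in range(n_sub):
    R0[i, i * size:(i + 1) * size] = 1.0
R0 = R0.tocsr()
A0 = sp.csc_matrix(R0 @ A @ R0.T)          # small Galerkin coarse matrix
coarse_lu = spla.splu(A0)                  # the "coarse solver" (a direct factorization)

def apply_two_level(r):
    """Subdomain solves plus coarse grid correction."""
    z = np.zeros_like(r)
    for idx, lu in zip(subdomains, local_lu):
        z[idx] += lu.solve(r[idx])
    return z + R0.T @ coarse_lu.solve(R0 @ r)

M = spla.LinearOperator((n, n), matvec=apply_two_level)
residuals = []
x, info = spla.gmres(A, b, M=M, callback=residuals.append, callback_type="pr_norm")
print(f"converged: {info == 0}, inner iterations: {len(residuals)}")
```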

Settings for the Domain Decomposition solver.

Comparing the Resource Consumption for Three Solvers

We can compare the default iterative solver with hybrid direct preconditioning to both the direct solver and the iterative solver with domain decomposition preconditioning on a single workstation. For the unrefined mesh with a refinement factor of r = 1, the model has 158,682 degrees of freedom. All three solvers use around 5-6 GB of memory to find the solution for a single frequency. For r = 2 with 407,508 degrees of freedom and r = 3 with 812,238 degrees of freedom, the direct solver uses somewhat more memory than the two iterative solvers (12-14 GB for r = 2 and 24-29 GB for r = 3). For r = 5 and 2,109,250 degrees of freedom, the direct solver uses 96 GB and the iterative solvers use around 80 GB on a single-node machine.

As we will learn in the subsequent discussion, the Recompute and clear option for the Domain Decomposition solver gives a significant advantage with respect to the total memory usage.

| Refinement | Degrees of Freedom | Direct Solver | Iterative Solver, Hybrid Direct Preconditioning | Iterative Solver, Domain Decomposition Preconditioning | Iterative Solver, Domain Decomposition Preconditioning with Recompute and clear |
| --- | --- | --- | --- | --- | --- |
| r = 1 | 158,682 | 5.8 GB | 5.3 GB | 5.4 GB | 3.6 GB |
| r = 2 | 407,508 | 14 GB | 12 GB | 13 GB | 5.5 GB |
| r = 3 | 812,238 | 29 GB | 24 GB | 26 GB | 6.4 GB |
| r = 5 | 2,109,250 | 96 GB | 79 GB | 82 GB | 12 GB |

Memory usage in the nondistributed case for the direct solver and the two iterative solvers, including domain decomposition preconditioning with the Recompute and clear option enabled.
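To make the scaling easier to compare, the short script below converts the memory figures from the table into memory per million degrees of freedom. It only rearranges the reported numbers; nothing here is measured or recomputed.

```python
# Memory per million degrees of freedom, computed from the table above
# (reported values only; nothing is measured here).
dofs = {1: 158_682, 2: 407_508, 3: 812_238, 5: 2_109_250}
memory_gb = {                 # refinement factor -> memory usage per solver [GB]
    1: (5.8, 5.3, 5.4, 3.6),
    2: (14, 12, 13, 5.5),
    3: (29, 24, 26, 6.4),
    5: (96, 79, 82, 12),
}
labels = ("direct", "iterative (hybrid direct)", "iterative (DD)",
          "iterative (DD, Recompute and clear)")

for r, mems in memory_gb.items():
    per_mdof = ", ".join(f"{lab}: {m / (dofs[r] / 1e6):.0f}" for lab, m in zip(labels, mems))
    print(f"r = {r}: GB per million DOF -> {per_mdof}")
```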

On a cluster, the memory load per node can be much lower than on a single-node computer. Let us consider the model with a refinement factor of r = 5. The direct solver scales nicely with respect to memory, using 65 GB and 35 GB per node on 2 and 4 nodes, respectively. On a cluster with 4 nodes, the iterative solver with domain decomposition preconditioning with 4 subdomains only uses around 24 GB per node.

| Number of Nodes | Direct Solver | Iterative Solver, Hybrid Direct Preconditioning | Iterative Solver, Domain Decomposition Preconditioning |
| --- | --- | --- | --- |
| 1 node | 96 GB | 79 GB | 82 GB (with 2 subdomains) |
| 2 nodes | 65 GB | 56 GB | 47 GB (with 2 subdomains) |
| 4 nodes | 35 GB | 35 GB | 24 GB (with 4 subdomains) |

Memory usage per node on a cluster for the direct solver and the two iterative solvers for refinement factor r = 5.

On a single-node computer, the Recompute and clear option for the Domain Decomposition solver gives us the benefit we expect: reduced memory usage. However, it comes at the additional cost of longer solution times. For r = 5, the memory usage is around 41 GB for 2 subdomains, 25 GB for 4 subdomains, and 12 GB for 22 subdomains (the default settings result in 22 subdomains). For r = 3, we use around 15 GB of memory for 2 subdomains, 10 GB for 4 subdomains, and 6 GB for 8 subdomains (the default settings).

Even on a single-node computer, the Recompute and clear option for the domain decomposition method gives significantly lower memory consumption than the direct solver: 12 GB instead of 96 GB for refinement factor r = 5 and 6 GB instead of 30 GB for refinement factor r = 3. Despite the performance penalty, the Domain Decomposition solver with the Recompute and clear option is a viable alternative to the out-of-core option for the direct solvers when there is insufficient memory.

| Refinement Factor | r = 3 | r = 5 |
| --- | --- | --- |
| Memory Usage | 30 GB | 96 GB |

Memory usage on a single-node computer with a direct solver for refinement factors r = 3 and r = 5.

| Number of Subdomains | Recompute and clear Option | Refinement r = 3 | Refinement r = 5 |
| --- | --- | --- | --- |
| 2 | Off | 24 GB | 82 GB |
| 2 | On | 15 GB | 41 GB |
| 4 | On | 10 GB | 25 GB |
| 8 | On | 6 GB | 20 GB |
| 22 | On | - | 12 GB |

Memory usage on a single-node computer with an iterative solver, domain decomposition preconditioning, and the Recompute and clear option enabled for refinement factors r = 3 and r = 5.
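As a conceptual picture of what Recompute and clear trades away, the sketch below reuses the same 1D model problem as the earlier snippets; again, it is an illustrative analogy, not the COMSOL implementation. In the cached variant, every subdomain factorization is stored for the entire solve; in the recompute-and-clear variant, each local matrix is refactorized inside every preconditioner application and then released, so only one subdomain factorization is held in memory at a time, at the cost of extra work per iteration.

```python
# Illustrative sketch of the Recompute and clear trade-off (assumed 1D model problem):
# cache all subdomain LU factorizations (fast, memory-hungry) versus refactorizing
# each subdomain on the fly and discarding it (slower, but low peak memory).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, n_sub, overlap = 400, 4, 10
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
size = n // n_sub
subdomains = [np.arange(max(0, i * size - overlap),
                        min(n, (i + 1) * size + overlap)) for i in range(n_sub)]

cached_lu = [spla.splu(sp.csc_matrix(A[idx][:, idx])) for idx in subdomains]

def apply_cached(r):
    """All subdomain factorizations stay in memory for the whole solve."""
    z = np.zeros_like(r)
    for idx, lu in zip(subdomains, cached_lu):
        z[idx] += lu.solve(r[idx])
    return z

def apply_recompute_and_clear(r):
    """Refactorize each subdomain on the fly and release it immediately."""
    z = np.zeros_like(r)
    for idx in subdomains:
        lu = spla.splu(sp.csc_matrix(A[idx][:, idx]))   # recompute ...
        z[idx] += lu.solve(r[idx])
        del lu                                          # ... and clear
    return z

for apply in (apply_cached, apply_recompute_and_clear):
    M = spla.LinearOperator((n, n), matvec=apply)
    x, info = spla.gmres(A, b, M=M)
    print(apply.__name__, "converged" if info == 0 else f"info = {info}")
```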

As demonstrated with this thermoviscous acoustics example, using the Domain Decomposition solver can greatly lower the memory footprint of your simulation. In this way, domain decomposition methods can enable the solution of large and complex problems. In addition, parallelism based on distributed subdomain processing is an important building block for improving computational efficiency when solving large problems.