OpenFOAM

The OpenFOAM (Open Field Operation and Manipulation) CFD Toolbox can simulate anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics, electromagnetics and the pricing of financial options.

Policy

OpenFOAM is produced by OpenCFD Ltd. It is freely available and open source, licensed under the GNU General Public Licence.

Citations

You can cite the www.openfoam.org website if web-referencing is allowed; otherwise, cite the OpenFOAM documentation.

Another option is to cite the original FOAM paper:

  • H. G. Weller, G. Tabor, H. Jasak and C. Fureby, "A tensorial approach to computational continuum mechanics using object-oriented techniques", Computers in Physics, 12(6), Nov/Dec 1998. (DOI)

Overview

The core technology of OpenFOAM is a flexible set of efficient C++ modules. These are used to build a wealth of:

  • solvers, to simulate specific problems in engineering mechanics;
  • utilities, to perform pre- and post-processing tasks ranging from simple data manipulations to visualisation and mesh processing;
  • libraries, to create toolboxes that are accessible to the solvers/utilities, such as libraries of physical models.

OpenFOAM is supplied with numerous pre-configured solvers, utilities and libraries and so can be used like any typical simulation package. However, it is open, not only in terms of source code, but also in its structure and hierarchical design, so that its solvers, utilities and libraries are fully extensible.

OpenFOAM uses finite volume numerics to solve systems of partial differential equations ascribed on any 3D unstructured mesh of polyhedral cells. The fluid flow solvers are developed within a robust, implicit, pressure-velocity, iterative solution framework, although alternative techniques are applied to other continuum mechanics solvers. Domain decomposition parallelism is fundamental to the design of OpenFOAM and integrated at a low level so that solvers can generally be developed without the need for any parallel-specific coding.

OpenFOAM at HPC2N

On HPC2N we have OpenFOAM and OpenFOAM-Extend available as modules on Kebnekaise. To see the available versions, log in to Kebnekaise and run ml spider OpenFOAM or ml spider OpenFOAM-Extend.

Usage at HPC2N

To use OpenFOAM, load the module to add it to your environment. To see which versions of OpenFOAM are available, give this command:

ml spider OpenFOAM

To see how to load a specific version, and its prerequisites, do:

ml spider OpenFOAM/<version>

or

ml spider OpenFOAM-Extend/<version>

Example

Loading OpenFOAM version 8:

ml GCC/10.2.0  OpenMPI/4.0.5
ml OpenFOAM/8

Note

After loading the module, you also need to do

source $FOAM_BASH

Important

  • WM_PROJECT_USER_DIR is set to $HOME/$WM_PROJECT/$USER-$WM_PROJECT_VERSION
  • FOAM_RUN is set to $WM_PROJECT_USER_DIR/run
  • It is best to make sure that the $FOAM_RUN directory is located in your project storage.
  • HPC2N has installed OpenFOAM compiled with the GCC compilers (and OpenFOAM-Extend with the Intel compilers), and some patches are applied, which means that some third-party applications/modules may not compile out of the box.
  • OpenFOAM(-Extend) provides a few handy environment variables that refer to various directories. Please use them instead of hard-coded paths, as this will make your life easier when OpenFOAM gets upgraded. You can find them with "env | grep FOAM" (see the example after this list).
  • The OpenFOAM(-Extend) documentation refers to editing shell settings such as .bashrc and .cshrc. This is not needed and might cause unexpected behaviour, since loading the module and sourcing $FOAM_BASH takes care of setting all needed variables.
  • Important! You should NOT attempt to use the cases and applications in the tutorials with a different version of OpenFOAM! Some of them may work, but most will not. Make a fresh copy of the applications and cases you wish to modify from the version of OpenFOAM you wish to run.
  • paraFoam does not work on HPC2N, so that part will have to be done on your own computer.
  • Use the environment variables ($FOAM_USER_APPBIN and $FOAM_USER_LIBBIN) to specify destination directories when building your own extensions. This ensures that the correct directories are used and eases future upgrades.
  • $FOAM_USER_APPBIN is included in the search path for binaries ($PATH).
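
As a quick check after loading the module and sourcing $FOAM_BASH, you can list the OpenFOAM environment variables mentioned above. A minimal sketch (the exact set of variables depends on the OpenFOAM version):

env | grep FOAM

echo $WM_PROJECT_USER_DIR   # your OpenFOAM user project directory
echo $FOAM_RUN              # $WM_PROJECT_USER_DIR/run
echo $FOAM_USER_APPBIN      # destination for your own application binaries
echo $FOAM_USER_LIBBIN      # destination for your own libraries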

Serial jobs

There are some examples in the tutorials directory of the OpenFOAM installation ($FOAM_TUTORIALS). Here we will look at $FOAM_TUTORIALS/incompressible/icoFoam/cavity and we will run OpenFOAM 8.

Note

In the example below, we are just running the case on the command line. This is a short example, so that is not a problem, but if you are doing something longer or more resource-intensive, you must always run it as a batch job (a sketch of a serial submit file is shown after the steps below).

In order to run a serial job like the one in the directory above, you should do the following:

  1. Load the module
    ml GCC/10.2.0  OpenMPI/4.0.5 OpenFOAM/8
    
  2. source $FOAM_BASH
    
  3. Create your working directory. This should be done in your project storage (name it any way you want - I just picked FOAMRUN):

    mkdir -p <MY-PROJ-STORAGE>/FOAMRUN

  4. Change to that directory:

    cd <MY-PROJ-STORAGE>/FOAMRUN

  5. Copy the tutorials there and change permissions:

    cp -r $FOAM_TUTORIALS tutorials
    chmod -R 755 tutorials

  6. Change to the cavity case directory. The path below is correct for OpenFOAM 4.0 and higher. For 1.5, the directory is tutorials/icoFoam/cavity and for 2.x and 3.x it is tutorials/incompressible/icoFoam/cavity:

    cd tutorials/incompressible/icoFoam/cavity/cavity

  7. There will always be (at least) three subdirectories. Cases for OpenFOAM are set up by editing case files. A case being simulated involves data for mesh, fields, properties, control parameters, etc. The structure can be seen here: File Structure of OpenFOAM cases.

    The three subdirectories that are always present are:

    - 0/"time" directories: containing the files p and U, with information about the boundary and initial conditions for the pressure and the velocity. More information can be found in the example here: Lid-driven cavity flow. There can be more than one "time" directory.
    - constant: containing the directory polyMesh and one or more files with the suffix ...Properties. polyMesh holds the mesh, and the ...Properties files specify the physical properties. In the case of icoFoam, the only property that needs to be specified is the kinematic viscosity (in transportProperties).
    - system: this directory contains files for controlling the case (controlDict), discretisation schemes (fvSchemes), the specification of linear solvers and tolerances (fvSolution), and other things like setting the initial field (setFields), depending on the case. They are described in the OpenFOAM User Guide.

  8. In order to run the case, you must either be located in the case directory or give the path to it.

  9. The first thing you must do is run blockMesh to generate the mesh and the files describing it:

    blockMesh

    NOTE: It is sometimes a good idea to view the mesh to check for any errors before running.

  10. You then run the application by typing the name of the solver or utility (here the solver icoFoam) while standing in the case directory, or with the path (here icoFoam -case $path_to/tutorials/incompressible/icoFoam/cavity/cavity):

    icoFoam
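
If the case is longer or more resource-intensive than this small example, you should submit it as a batch job instead (see the Note above). Below is a minimal sketch of a serial submit file for the cavity case; the project id and the path are placeholders you must change, and it assumes you have already run blockMesh as in step 9.

#!/bin/bash
# Change to your actual project id number
#SBATCH -A hpc2nXXXX-YYY
#SBATCH --output=openfoam_cavity.out
#SBATCH --error=openfoam_cavity.err
# Serial job - one task is enough for the small cavity case
#SBATCH -n 1
#SBATCH --time=00:10:00

# For OpenFOAM version 8
ml purge > /dev/null 2>&1 # Ignore some warnings from the purge command
ml GCC/10.2.0  OpenMPI/4.0.5 OpenFOAM/8

source $FOAM_BASH

# Run the solver, giving the path to the case directory
icoFoam -case <MY-PROJ-STORAGE>/FOAMRUN/tutorials/incompressible/icoFoam/cavity/cavity

Submit it with sbatch <job submit file>, just as for the parallel example below.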

Parallel jobs

The example we will look at is $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak. The example will be for OpenFOAM 8, but it should not be much different for earlier or later versions (1.6 has been tested to work in the same manner).

You probably need to make some changes to the example.

Note

This example assumes you have copied the tutorials from $FOAM_TUTORIALS to your own directory/project storage in the same way as was shown under the serial example.

  1. Load the module and prerequisites (here for OpenFOAM/8)
    ml GCC/10.2.0  OpenMPI/4.0.5 OpenFOAM/8
    
  2. Source the $FOAM_BASH
    source $FOAM_BASH
    
  3. First, make a copy to make the changes in:

    cd <MY-PROJ-STORAGE>/FOAMRUN/tutorials/multiphase/interFoam/laminar/

  4. Create a new directory to play in:

    mkdir damBreakFine

  5. Copy all the files to it (early versions have the files directly under the first "damBreak"):

    cp -r damBreak/damBreak/0 damBreakFine
    cp -r damBreak/damBreak/system damBreakFine
    cp -r damBreak/damBreak/constant damBreakFine

  6. Enter the new case directory and change the blocks description in the blockMeshDict dictionary (in the system directory) to:

    blocks
    (
        hex (0 1 5 4 12 13 17 16) (46 10 1) simpleGrading (1 1 1)
        hex (2 3 7 6 14 15 19 18) (40 10 1) simpleGrading (1 1 1)
        hex (4 5 9 8 16 17 21 20) (46 76 1) simpleGrading (1 2 1)
        hex (5 6 10 9 17 18 22 21) (4 76 1) simpleGrading (1 2 1)
        hex (6 7 11 10 18 19 23 22) (40 76 1) simpleGrading (1 2 1)
    );

  7. Run blockMesh (in the damBreakFine/ directory) to create the mesh:

    blockMesh

  8. As the mesh has now changed from the damBreak example, you must re-initialise the phase field alpha.water in the 0 time directory, since it contains a number of elements that is inconsistent with the new mesh. The best way to do this is to rerun the setFields utility. There is a backup copy of the initial uniform field, alpha.water.orig, that you should copy to 0/alpha.water before running setFields:

    cd <MY-PROJ-STORAGE>/FOAMRUN/tutorials/multiphase/interFoam/laminar/damBreakFine
    cp -r 0/alpha.water.orig 0/alpha.water
    setFields

  9. The method of parallel computing used by OpenFOAM is known as domain decomposition, in which the geometry and associated fields are broken into pieces and allocated to separate processors for solution. The first step required to run a parallel case is therefore to decompose the domain using the decomposePar utility. There is a dictionary associated with decomposePar, named decomposeParDict, which is located in the system directory of the tutorial case:

    cd <MY-PROJ-STORAGE>/FOAMRUN/tutorials/multiphase/interFoam/laminar/damBreakFine/system

  10. Open decomposeParDict in your favourite editor. The first entry is numberOfSubdomains, which specifies the number of subdomains into which the case will be decomposed, usually corresponding to the number of processors available for the case. In this example we are using 16 processors, so:

    numberOfSubdomains 16;

    We also need to adjust n = (nx ny nz) in the simpleCoeffs entry accordingly, so that nx*ny*nz = numberOfSubdomains. Here we change the values to (4 4 1). Section 3.2 of the User Guide has further details of how to run a case in parallel. A sketch of the resulting decomposeParDict entries is shown after these steps.

  11. After this parameter is set, run (in damBreakFine):

    decomposePar

    This automatically constructs the processor subdirectories in the case directory, one for each processor. The directories are named processorN, where N = 0, 1, ...

  12. To run this, you need to submit a batch job. Note that Kebnekaise has various numbers of cores per node, depending on the type, but none with fewer than 28 cores. See either the Kebnekaise hardware page or the section The different parts of the batch system for information about the various node types.

  13. Make a job submit file like this (works for 16 cores). Remember, SLURM exports the environment (including loaded modules), so you should do ml purge first to make sure that the submit file loads the expected module.

    #!/bin/bash
    # Change to your actual project id number
    #SBATCH -A hpc2nXXXX-YYY
    # (Project ids are of the form SNICXXX-YY-ZZ, NAISSXXXX-YY-ZZ, or hpc2nXXXX-YYY)
    #SBATCH --output=openfoam_dambreak.out
    #SBATCH --error=openfoam_dambreak.err
    # Asking for 16 cores
    #SBATCH -n 16
    #SBATCH --time=00:15:00

    # For OpenFOAM version 8
    ml purge > /dev/null 2>&1 # Ignore some warnings from the purge command
    ml GCC/10.2.0  OpenMPI/4.0.5 OpenFOAM/8

    source $FOAM_BASH

    srun interFoam -parallel
  14. Name the submit file something suitable and submit the job from the case directory (or give the path to the case in the submit file) with:

    sbatch <job submit file>
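
For reference, here is a sketch of the decomposeParDict entries used in step 10 above, assuming the simple decomposition method used by the tutorial. The exact contents of the dictionary differ somewhat between OpenFOAM versions, so compare with the file shipped with the case:

numberOfSubdomains 16;

method          simple;

simpleCoeffs
{
    n               (4 4 1);   // nx*ny*nz must equal numberOfSubdomains
}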

Example SLURM Job Scripts

Running on all 28 cores of 1 Skylake node on Kebnekaise.

#!/bin/bash
# Change to your actual project id number (of the form: hpc2nXXXX-YYY, SNICXXX-YY-ZZ, or NAISSXXXX-YY-ZZ)
#SBATCH -A hpc2nXXXX-YYY
# Asking for 28 tasks (all cores on one Skylake node on Kebnekaise)
#SBATCH -n 28 
#SBATCH --time=00:15:00

#For OpenFOAM version 8
ml purge > /dev/null 2>&1 # Ignore warnings from purge
ml GCC/10.2.0  OpenMPI/4.0.5 OpenFOAM/8    

source $FOAM_BASH

srun --cpu-bind=cores interFoam -parallel

Running on 16 cores on 1 node, using all the memory.

#!/bin/bash
# Change to your actual project id number (of the form: hpc2nXXXX-YYY, SNICXXX-YY-ZZ, or NAISSXXXX-YY-ZZ) 
#SBATCH -A hpc2nXXXX-YYY
# Asking for 1 node (the whole node) on Kebnekaise, but only using 16 cores
#SBATCH -n 16 
#SBATCH -N 1
#SBATCH --exclusive
#SBATCH --time=00:15:00

#For OpenFOAM version 8
ml purge > /dev/null 2>&1 # Ignore warnings from purge
ml GCC/10.2.0  OpenMPI/4.0.5 OpenFOAM/8    

source $FOAM_BASH

srun interFoam -parallel

Running spread over more nodes

If a job runs slowly due to memory bandwidth limitations, it can be a good idea to spread it over more nodes. In this example, each of the 16 MPI tasks gets a full NUMA island (14 cores on a Skylake node on Kebnekaise), which spreads the job over several nodes.

#!/bin/bash
# Change to your actual project id number (of the form: hpc2nXXXX-YYY, SNICXXX-YY-ZZ, or NAISSXXXX-YY-ZZ)
#SBATCH -A hpc2nXXXX-YYY 
#SBATCH -n 16
# use all 14 cores in a NUMA island per MPI task for increased memory bandwidth
# this is for a Skylake node. Change as appropriate for Zen3 or Zen4  
#SBATCH -c 14 
# Change the constraint below (and -c above) if you are using a different node
# type with a different number of cores per NUMA island
#SBATCH -C skylake
#SBATCH --time=00:15:00

#For OpenFOAM version 8
ml purge > /dev/null 2>&1 # Ignore warnings from purge
ml GCC/10.2.0  OpenMPI/4.0.5 OpenFOAM/8

source $FOAM_BASH

srun interFoam -parallel

Note

See the page The different parts of the batch system for information on number of cores on the different nodes as well as the constraints/features to use to make sure it runs on the type of node you want.

Additional info

  • OpenFOAM is often used as a framework when developing your own code for solving problems.
    • Applications come in two main categories: solvers and utilities.
    • It is often possible to find an already existing application that is similar to what you would like to do. Copy that and modify it for your purposes.
    • When reading the OpenFOAM documentation on this, please note the following regarding where to place your binaries/libraries in order for it to work:
    • $WM_PROJECT_USER_DIR is the root of the OpenFOAM user project directory and contains $FOAM_USER_APPBIN and $FOAM_USER_LIBBIN. The names of these directories are configurable, so do not trust the documentation where it uses absolute paths in some places; use the variables instead (see the sketch below).
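
As an illustration of the last point, here is a minimal sketch of the Make/files for a hypothetical solver myFoam (copied and modified from an existing solver), placing the binary in $FOAM_USER_APPBIN instead of a hard-coded path:

myFoam.C

EXE = $(FOAM_USER_APPBIN)/myFoam

Building with wmake then puts the binary in $FOAM_USER_APPBIN, which is already in your $PATH. For a library you would instead use LIB = $(FOAM_USER_LIBBIN)/libMyLib (again a hypothetical name) in Make/files.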

If you have questions about the different options available, please contact support@hpc2n.umu.se.

For further help, here is an example of compiling your own OpenFOAM application.

Useful links