WRF

The Advanced Research WRF (ARW) Modeling System is a flexible, state-of-the-art atmospheric simulation system.

Policy

WRF is available to users at HPC2N.

Citations

When referencing, in publications or other texts, any work that involves using WRF or WRF output, we encourage citing the model. This lets others understand what was used, and it helps the WRF support effort assess the scope of the model's use and its broader impacts.

See How to cite the WRF-ARW model for specifics on how to cite.

Overview

WRF features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture allowing for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers.

Usage at HPC2N

On HPC2N we have WRF (and WPS) available as modules.

Loading

To use the WRF (and WPS) modules, add them to your environment. You can find the available versions with

module spider WRF

and

module spider WPS

To find out how to load a specific version (including its prerequisites), use

module spider WRF/<version>

and

module spider WPS/<version> 
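For example, to see the prerequisites for the WRF version used later on this page:

module spider WRF/4.2.2-dmpar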

WRF/WPS are built with the GCC compilers and with support for both MPI and OpenMP.

Example: loading WRF 4.2.2 and WPS 4.2

ml GCC/10.2.0 OpenMPI/4.0.5
ml WRF/4.2.2-dmpar
ml WPS/4.2-dmpar 

Running

The name of the WRF binary is

wrf.exe

The geographical input tables (geog data) are located under /pfs/data/wrf/geog/

The Vtables are located in $EBROOTWPS/WPS/ungrib/Variable_Tables (the environment variable is only available after the WPS module has been loaded).

Files in $EBROOTWRF/WRFV3/run may need to be copied or linked to your case directory if the program complains about missing files.
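A minimal sketch of preparing a case directory, assuming a hypothetical directory ~/wrf_case and GFS input data (pick the Vtable that matches your own input data and adjust paths to your setup):

# Create and enter a case directory (the name is just an example)
mkdir -p ~/wrf_case && cd ~/wrf_case

# In namelist.wps, point geogrid to the static input tables, e.g.
#   geog_data_path = '/pfs/data/wrf/geog/'

# Link a Vtable matching your input data (Vtable.GFS shown as an example)
ln -sf $EBROOTWPS/WPS/ungrib/Variable_Tables/Vtable.GFS Vtable

# Link the WRF run-time files if wrf.exe complains about missing files
ln -sf $EBROOTWRF/WRFV3/run/* .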

Submit file example

#!/bin/bash
# Request 2 nodes exclusively
#SBATCH -N 2
# We want to run OpenMP within one NUMA unit (the cores that share local memory).
# The number of cores in a NUMA unit depends on the node type.
# See the section "The different parts of the batch system" under 
# "The Batch System" in the left side menu. 
# In this example we use an Intel Skylake node, which 
# has 14 cores/NUMA unit. Change -c accordingly. 
#SBATCH -c 14
#SBATCH -C skylake 
# Slurm will then figure out the correct number of MPI tasks available
#SBATCH --time=6:00:00 

# WRF version 4.2.2 
ml GCC/10.2.0  
ml OpenMPI/4.0.5 
ml WRF/4.2.2-dmpar

# Set OMP_NUM_THREADS to the same value as -c, i.e. 14 
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun --cpu_bind=rank_ldom wrf.exe 
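Assuming the script above is saved as wrf_job.sh (the file name is just an example), submit it and check its status with:

sbatch wrf_job.sh
squeue -u $USER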

Additional info

More information can be found on the official WRF website.