NAMD¶
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.
Policy¶
NAMD is available to users at HPC2N - we have a non-exclusive, non-commercial use license for academic purposes.
Citations
See the Acknowledge the TCB Group page for specifics on how to acknowledge the use of NAMD.
Overview¶
NAMD is currently being developed by the Theoretical and Computational Biophysics Group at the University of Illinois, USA. Together with VMD, its partner software for molecular visualization and simulation setup, NAMD is an essential tool for molecular modellers. NAMD is flexible software that lets users interact with its data structures through Tcl or Python scripts. It also supports input and parameter files from other packages such as CHARMM and X-PLOR.
NAMD is distributed either as source code or as binary executables for standard architectures, including shared-memory (SMP), MPI, and CUDA builds (all of these are available on our systems).
Besides the basic algorithms for classical molecular dynamics simulations (thermostats, barostats, accurate long-range electrostatics methods, etc.), NAMD offers advanced algorithms for enhanced-sampling simulations (for example, replica exchange) and for free energy computations.
Usage at HPC2N¶
On HPC2N we have NAMD available as a module.
Loading¶
To use the NAMD module, add it to your environment. You can find versions with
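For example, assuming the standard Lmod command used on our systems:

module spider NAMD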
and you can then find how to load a specific version (including prerequisites), with
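For example, for the version used in the job scripts below (any listed version works the same way):

module spider NAMD/2.14-CUDA-11.7.0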
Running¶
NAMD is compiled with MPI support; it can run on multiple nodes and can use several worker threads within each process.
The versions with “CUDA” in the name support GPUs.
Submit file examples¶
Below we describe our recommendations for running NAMD, depending on the version and the number of cores you want to use:
Single node using 8 cores
#!/bin/bash
#SBATCH -A hpc2nXXXX-YYY
# Asking for 10 min.
#SBATCH -t 00:10:00
# Number of nodes
#SBATCH -N 1
# Ask for 8 cores
#SBATCH -c 8
# Load modules necessary for running NAMD
module add GCC/11.3.0 OpenMPI/4.1.4
module add NAMD/2.14-CUDA-11.7.0
# Execute NAMD
namd2 +setcpuaffinity +p8 config_file > output_file
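Save the script to a file and submit it to the batch system with sbatch (the file name run_namd.sh is just an example):

sbatch run_namd.sh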
Single node all cores
#!/bin/bash
#SBATCH -A hpc2nXXXX-YYY
# Asking for 10 min.
#SBATCH -t 00:10:00
# Number of nodes
#SBATCH -N 1
# Ask for <total cores/node> where you can find the possibilities
# in the section "Different parts of the batch system" (under
# The Batch System in the left side menu)
#SBATCH -c <total cores/node>
# Constrain the type of node. You find the options in the same
# places as mentioned above for <total cores/node>
#SBATCH -C <NODE-TYPE>
# Load modules necessary for running NAMD
module add GCC/11.3.0 OpenMPI/4.1.4
module add NAMD/2.14-CUDA-11.7.0
# Execute NAMD
namd2 +setcpuaffinity +p28 config_file > output_file
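The value +p28 assumes a 28-core node; set it to match the core count you actually requested. As a sketch, you can instead read it from the standard Slurm environment variable set by -c:

namd2 +setcpuaffinity +p${SLURM_CPUS_PER_TASK} config_file > output_file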
Using single node GPU version
#!/bin/bash
#SBATCH -A hpc2nXXXX-YYY
# Asking for 10 min.
#SBATCH -t 00:10:00
# Number of nodes
#SBATCH -N 1
# Ask for <total cores/node> where you can find the possibilities
# in the section "Different parts of the batch system" (under
# The Batch System in the left side menu)
#SBATCH -c <total cores/node>
# Constrain the type of node. You find the options in the same
# places as mentioned above for <total cores/node>
#SBATCH -C <NODE-TYPE>
# Ask for 2 GPU cards
#SBATCH --gpus-per-node=2
# Load modules necessary for running NAMD
module add GCC/11.3.0 OpenMPI/4.1.4
module add NAMD/2.14-CUDA-11.7.0
# Execute NAMD
namd2 +setcpuaffinity +p28 +idlepoll +devices $CUDA_VISIBLE_DEVICES config_file > output_file
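CUDA_VISIBLE_DEVICES is normally set by Slurm when GPUs are requested with --gpus-per-node. If it is not set in your environment, a sketch for the 2 GPUs requested above is to list the devices explicitly:

namd2 +setcpuaffinity +p28 +idlepoll +devices 0,1 config_file > output_file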
Using multi-node CPU version (change as needed for a largemem node)
Here the user can execute several tasks on each node. In this example, we are asking for Skylake nodes, which have 28 CPU cores each. Change as needed:
#!/bin/bash
#SBATCH -A hpc2nXXXX-YYY
# Asking for 10 min.
#SBATCH -t 00:10:00
# Asking for skylake nodes
#SBATCH -C skylake
# Number of nodes
#SBATCH -N 2
# Ask for 56 processes (2x28 cores on the nodes)
#SBATCH -n 56
# Load modules necessary for running NAMD
module add GCC/11.3.0 OpenMPI/4.1.4
module add NAMD/2.14-CUDA-11.7.0
# Execute NAMD
srun namd2 +setcpuaffinity config_file > output_file
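To scale to more nodes, increase -N and -n together. As a sketch, four Skylake nodes (4 x 28 cores) would be:

#SBATCH -N 4
#SBATCH -n 112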
- The option +setcpuaffinity in these examples sets the affinity of the worker threads, which gives a performance increase of more than 20% compared to timings without that option.
Comparisons and benchmarks¶
The figure below shows the best performance of NAMD on CPUs and GPUs. The benchmark case consisted of 158944 particles, using a 1 fs time step and a 1.2 nm cutoff for real-space electrostatics. Particle mesh Ewald was used for long-range electrostatic interactions. Here, CL refers to the classical simulation setup, MTS means the multiple time stepping algorithm, and RM is the resident mode implementation. The full example can be found here.