Amber

Amber (Assisted Model Building with Energy Refinement) is a molecular dynamics software package that implements the Amber force fields.

Policy

Amber is freely available to users at HPC2N under a site-license agreement.

Citations

You must acknowledge use of Amber in any reports or publications of results obtained with the software. You can find instructions on how to cite Amber here.

Overview

AMBER is a molecular dynamics platform made up of independent sub-programs that perform a variety of tasks, ranging from integrating Newton's equations of motion to analyzing the resulting trajectories.

Originally it supported only the Amber force fields, but nowadays it can also be used with other common force fields.

The AMBER project is currently developed by several research groups, including those of David Case at Rutgers University, Thomas E. Cheatham III at the University of Utah, Thomas A. Darden at NIEHS, Kenneth Merz at Michigan State University, Carlos Simmerling at SUNY Stony Brook University, Ray Luo at UC Irvine, and Junmei Wang at Encysive Pharmaceuticals.

Usage at HPC2N

On HPC2N we have AMBER available as a module.

Loading

To use the Amber module, add it to your environment. You can find versions with

module spider Amber

and then see how to load a specific version (including its prerequisites) with

module spider Amber/<VERSION>
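For instance, the load commands might look like the sketch below. The version string and the prerequisite modules are placeholders only; use exactly what module spider reports on the system you are working on.

module load GCC/<version> OpenMPI/<version>
module load Amber/<VERSION>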

Note

If you need to use xleap, or otherwise run applications which open a display, you must log in with

ssh -X

or use ThinLinc (recommended).
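For example (the hostname below is shown only as an illustration; use the HPC2N login address you normally connect to):

ssh -X <your-username>@kebnekaise.hpc2n.umu.se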

Running

Note

There is rarely any benefit from running multi-GPU jobs on the newer NVIDIA cards. Here is a quote from the Amber developers:

“The code supports serial as well as parallel GPU simulations, but from Pascal (2016) onward, the benefit of running a simulation, with the exception of REMD based simulations, on two or more GPUs is marginal. On the latest Volta and Turing architectures our algorithms cannot scale to multiple GPUs. We therefore recommend executing independent simulations on separate GPUs in most cases. A key design feature of the GPU code is that the entirety of the molecular dynamics calculation is performed on the GPU. This means that only one CPU core is needed to drive a simulation and a server full of four or eight GPUs can run one independent simulation per card without loss of performance provided that there are at least the same number of free CPU cores available as GPUs in use. (Most commodity CPU chips have at least four cores.) The fact that GPU performance is unaffected by CPU performance means that any CPU compiler (the open source GNU C and Fortran compilers are adequate) will deliver comparable results with Amber’s premier engine, and sets Amber apart from other molecular dynamics codes. Another key feature of this design choice is that it means low cost CPUs can be used which coupled with custom designed precision models and bitwise reproducibility used to validate consumer cards gives AMBER unrivaled performance per dollar.”

Prepare the job on the login node (with nab, xleap, …), but do NOT run anything long or heavy there. Instead, submit a job script that runs sander or pmemd.

Submit script examples

When you have prepared your Amber job, use a batch script similar to the examples below to submit your job.

Note

The following examples use “LOAD-THE-MODULE” as a placeholder for the respective module load commands, which can be found with ml spider Amber and ml spider Amber/<version>.

Important

For starting MPI-enabled programs, one should use “srun”.

Note

Amber prefers the number of cores to be a power of 2, e.g., 2, 4, 8, 16, 32, 64, etc.

Example, sander.MPI

This example uses sander.MPI with 8 tasks (cores). It references a groupfile and is derived from Amber Tutorial A7, found here.

#!/bin/bash
# Project ID to charge the job to
#SBATCH -A <Your-Project-Here>
# 8 MPI tasks (cores)
#SBATCH -n 8
# Maximum runtime of 1 hour
#SBATCH --time=01:00:00

LOAD-THE-MODULE

# Run 8 groups, taking the input/output files from the groupfile
srun sander.MPI -ng 8 -groupfile equilibrate.groupfile

Example, pmemd.MPI

This example uses pmemd.MPI with 96 tasks (cores), 48 per node. It is derived from Amber Tutorial 17, found here.

#!/bin/bash
# Project ID to charge the job to
#SBATCH -A <Your-Project-Here>
# 96 MPI tasks in total, 48 per node
#SBATCH -n 96
#SBATCH --ntasks-per-node=48
# Maximum runtime of 1 hour
#SBATCH --time=01:00:00

LOAD-THE-MODULE

srun pmemd.MPI -O -i 02_heat.in -o 02_heat.out -p ala_tri.prmtop -c 01_min.rst -r 02_heat.rst -x 02_heat.nc

It is useful to note that a groupfile simply specifies input and output file details and is used for convenience. Both sander.MPI and pmemd.MPI can be used with or without a groupfile. If a groupfile is not used, the input and output files must be specified on the command line, as in the pmemd.MPI example above.
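As a rough sketch (the file names are placeholders, not taken from the tutorial), a groupfile contains one line of sander/pmemd command-line options per group, for example:

-O -i mdin.000 -o mdout.000 -p prmtop -c inpcrd.000 -r restrt.000 -x mdcrd.000
-O -i mdin.001 -o mdout.001 -p prmtop -c inpcrd.001 -r restrt.001 -x mdcrd.001

With -ng 2, the MPI tasks would then be divided evenly between these two groups.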

Example, using a single V100 card on Kebnekaise

#!/bin/bash
#SBATCH -J Amber
# Project ID to charge the job to
#SBATCH -A <Your-Project-Here>
# A single CPU core is enough to drive the GPU simulation
#SBATCH -n 1
# Request one V100 GPU card
#SBATCH --gres=gpu:v100:1
#SBATCH --time=1:00:00

LOAD-THE-MODULE

pmemd.cuda -O -i mdinfile -o mdoutfile -c inpcrdfile -p prmtopfile -r restrtfile
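Following the developers' recommendation quoted above to run independent simulations on separate GPUs, a job running two independent pmemd.cuda simulations on the same node could be sketched as follows (the input and output file names are placeholders):

#!/bin/bash
#SBATCH -J Amber
#SBATCH -A <Your-Project-Here>
# One CPU core per GPU is enough to drive pmemd.cuda
#SBATCH -n 2
#SBATCH --gres=gpu:v100:2
#SBATCH --time=1:00:00

LOAD-THE-MODULE

# Start one independent simulation per GPU card in the allocation and wait for both to finish
CUDA_VISIBLE_DEVICES=0 pmemd.cuda -O -i run0.in -o run0.out -p run0.prmtop -c run0.rst -r run0.restrt &
CUDA_VISIBLE_DEVICES=1 pmemd.cuda -O -i run1.in -o run1.out -p run1.prmtop -c run1.rst -r run1.restrt &
wait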

The job is submitted with

sbatch <submitscript.sh>
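You can then check the status of your job with the standard Slurm commands, for example

squeue -u $USER

and, if needed, cancel it with

scancel <jobid>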

Comparisons and benchmarks

A comparison of runs on the various types of nodes on Kebnekaise is displayed below. We evaluated the performance of different AMBER implementations, including Sander-MPI (28 cores), PMEMD-MPI (28 cores), and PMEMD-GPU (1 MPI process with 1 or 2 GPU cards). The figure below shows the best performance of AMBER. The benchmark case consisted of 158944 particles, using a 1 fs time step and a 1.2 nm cutoff for the real-space electrostatics. The particle mesh Ewald (PME) method was used for the long-range electrostatic interactions. The full example can be found here.

Figure: AMBER performance on the different Kebnekaise node types.

Additional info

More information can be found on the Amber website.