namd
Software Description
NAMD, recipient of a 2002 Gordon Bell Award, is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of processors on high-end parallel platforms. NAMD runs best on high-performance systems where it has been compiled with recent Intel compilers and MPI parallel libraries. Several compiled versions of NAMD are available, each using a different parallelization method.
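The different builds appear as separate versions of the namd module. A minimal sketch of listing and loading one (the version shown is taken from the module lists below):
module avail namd            # show all installed NAMD builds
module load namd/2.14-ompi   # load one build, e.g. the OpenMPI version
module list                  # confirm the loaded module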
Info
Module Name
namd
Last Updated On
08/29/2023
Support Level
Primary Support
Software Access Level
Open Access
Home Page
Documentation
Mesabi K40
Running a NAMD calculation on GPUs currently requires an IBVerbs-based NAMD module. An example job script for running NAMD on our k40 GPU partition follows:
#!/bin/bash -l
#SBATCH --time=2:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
#SBATCH --mem-per-cpu=2gb
#SBATCH -p k40
#SBATCH --gres=gpu:k40:2
module load namd/2.14-libverbs-CUDA
# Create a formatted list of nodes
rm -f namd.hostfile
HOSTNAMES=$(scontrol show hostnames)
while IFS= read -r HOST;
do
echo "host ${HOST}" >> namd.hostfile;
done <<< "$HOSTNAMES"
# Create a runscript for loading the namd module
# on each node
cat > runscript << EOL
#!/bin/bash
module load namd/2.14-libverbs-CUDA
\$*
EOL
chmod +x runscript
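# Use one fewer worker thread than cores per node, leaving
# a core free on each node for the Charm++ communication thread.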
let NCPUS=$SLURM_CPUS_ON_NODE-1
# Note: the following is a single command, continued
# across lines with backslashes.
`which charmrun` ++runscript ./runscript `which namd2` \
++nodelist namd.hostfile ++n $SLURM_JOB_NUM_NODES ++ppn $NCPUS \
+idlepoll +devices 0,1 stmv.namd > stmv.out
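Assuming the script above is saved as run_namd_k40.sh (a hypothetical filename) in the same directory as the stmv.namd input, the job can be submitted and monitored with standard SLURM commands:
sbatch run_namd_k40.sh   # submit the job
squeue -u $USER          # check queue status
tail -f stmv.out         # follow the NAMD log once the job starts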
General Linux
OpenMPI NAMD version
The following is an example job script for running an OpenMPI-based NAMD module under SLURM:
#!/bin/bash -l
#SBATCH -t 24:00:00
#SBATCH --mem-per-cpu=2gb
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=32
cd $SLURM_SUBMIT_DIR
module load namd/2.14-ompi
mpirun namd2 stmv.namd > stmv.out
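Each of the example scripts on this page reads a NAMD configuration file (stmv.namd) and redirects the log to stmv.out. A minimal sketch of such a configuration file, with hypothetical input file names and illustrative settings, might look like:
# Minimal NAMD configuration sketch (hypothetical file names and settings)
structure       stmv.psf
coordinates     stmv.pdb
paraTypeCharmm  on
parameters      par_all27_prot_na.prm
temperature     298
timestep        1.0
cutoff          12.0
switching       on
switchdist      10.0
pairlistdist    13.5
outputName      stmv-out
numsteps        500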
IBVerbs NAMD version
The following is an example SLURM script for running an IBVerbs version of NAMD. This script is more complex because, with IBVerbs, we are responsible for telling charmrun about the topology and environment in which the NAMD calculation should run.
#!/bin/bash -l
#SBATCH -t 24:00:00
#SBATCH --mem-per-cpu=2gb
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
module load namd/2.12-ibverbs
cd $SLURM_SUBMIT_DIR
# Create a formatted list of nodes
rm -f namd.hostfile
HOSTNAMES=$(scontrol show hostnames)
while IFS= read -r HOST;
do
echo "host ${HOST}" >> namd.hostfile;
done <<< "$HOSTNAMES"
# Create a runscript for loading the namd module
# on each node
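# (The \$* below is escaped so that it is written to the runscript
# literally; at run time it forwards the command and arguments
# that charmrun passes to each node.)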
cat > runscript << EOL
#!/bin/bash
module load namd/2.12-ibverbs
\$*
EOL
chmod +x runscript
# Note: the following is a single command, continued
# across lines with backslashes.
`which charmrun` ++runscript ./runscript `which namd2` \
++nodelist namd.hostfile ++p $SLURM_NPROCS stmv.namd > stmv.out
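For the four-node allocation requested above, the generated namd.hostfile contains one "host" line per allocated node; the hostnames below are illustrative:
host cn0001
host cn0002
host cn0003
host cn0004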
Using IBVerbs also makes it possible to run NAMD on GPUs, which in many cases offers significant performance improvements over purely CPU-based calculations. An example job script for running NAMD on MSI's k40 GPU nodes can be found in the Mesabi K40 tab on this page.
Additional Information
Additional information may be found at: http://www.ks.uiuc.edu/Research/namd
Performance hints may be found at: http://www.ks.uiuc.edu/Research/namd/wiki/?NamdPerformanceTuning
Agate Modules
Default
2.7-ompi
Other Modules
2.12-ibverbs-smp-CUDA, 2.12-multicore-CUDA, 2.13-ibverbs-smp-CUDA, 2.10-ibverbs, 2.11-ibverbs, 2.12-ibverbs, 2.13-ompi, 2.14-libverbs-CUDA, 2.14-ompi, 2.7-ompi, 2.8-ompi, 2.9-impi, 2.9-libverbs-CUDA, 2.9b1-libverbs, 3.0-alpha13-netlrts-smp-CUDA
Mangi Modules
Default
2.7-ompi
Other Modules
2.10-ibverbs, 2.11-ibverbs, 2.12-ibverbs, 2.13-ompi, 2.14-libverbs-CUDA, 2.14-ompi, 2.7-ompi, 2.8-ompi, 2.9-impi, 2.9-libverbs-CUDA, 2.9b1-libverbs, 3.0-alpha13-netlrts-smp-CUDA
Mesabi Modules
Default
2.7-ompi
Other Modules
2.12-ibverbs-smp-CUDA, 2.12-multicore-CUDA, 2.13-ibverbs-smp-CUDA, 2.10-ibverbs, 2.11-ibverbs, 2.12-ibverbs, 2.13-ompi, 2.14-libverbs-CUDA, 2.14-ompi, 2.7-ompi, 2.8-ompi, 2.9-impi, 2.9-libverbs-CUDA, 2.9b1-libverbs, 3.0-alpha13-netlrts-smp-CUDA