NAMD

Software Summary

Mesabi

Default Module: 

2.7-ompi

Other Modules Available: 

2.12-ibverbs-smp-CUDA, 2.12-multicore-CUDA, 2.13-ibverbs-smp-CUDA, 2.10-ibverbs, 2.11-ibverbs, 2.12-ibverbs, 2.13-ompi, 2.14-libverbs-CUDA, 2.14-ompi, 2.7-ompi, 2.8-ompi, 2.9-impi, 2.9-libverbs-CUDA, 2.9b1-libverbs, 3.0-alpha13-netlrts-smp-CUDA

Last Updated On: 

Tuesday, August 29, 2023

Mesabi K40

Default Module: 
Other Modules Available: 
Last Updated On: 

Mangi

Default Module: 
2.7-ompi
Other Modules Available: 

2.10-ibverbs, 2.11-ibverbs, 2.12-ibverbs, 2.13-ompi, 2.14-libverbs-CUDA, 2.14-ompi, 2.7-ompi, 2.8-ompi, 2.9-impi, 2.9-libverbs-CUDA, 2.9b1-libverbs, 3.0-alpha13-netlrts-smp-CUDA

Last Updated On: 

Tuesday, August 29, 2023

Mangi v100

Default Module: 
Other Modules Available: 
Last Updated On: 

NICE

Default Module: 
Other Modules Available: 
Last Updated On: 

Tuesday, August 29, 2023

Support Level: 
Primary Support
Software Access Level: 
Open Access
Software Categories: 
Drug Discovery
Structural Biology
Molecular Modeling and Simulation
Software Description

NAMD, recipient of a 2002 Gordon Bell Award, is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of processors on high-end parallel platforms. NAMD is best run on the high-performance systems, where it has been compiled with recent Intel compilers and MPI libraries. Several builds of NAMD are installed; they differ in parallelization method (OpenMPI, IBVerbs, multicore, and CUDA-accelerated variants), as reflected in the module names above.
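
For a CPU-only run, a minimal job script might look like the sketch below. This is a sketch, not an official example: it assumes the namd/2.14-ompi module listed above, the input file apoa1.namd is a placeholder, and the walltime, node count, and memory requests should be adjusted to your simulation. Installed builds can be listed with "module avail namd".

#!/bin/bash -l
#SBATCH --time=2:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
#SBATCH --mem-per-cpu=2gb

# Load the OpenMPI build of NAMD (one of the modules listed above).
module load namd/2.14-ompi

# Launch one NAMD process per allocated task through SLURM
# (mpirun can be used instead, depending on the MPI configuration).
# apoa1.namd is a placeholder input file; the log is redirected to a file.
srun namd2 apoa1.namd > apoa1.out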

Software Documentation

Mesabi K40
Mesabi K40 Documentation: 
Running a NAMD calculation on GPUs currently requires a CUDA-enabled, IBVerbs-based NAMD module. An example job script for running NAMD on the k40 GPU partition follows:
#!/bin/bash -l
#SBATCH --time=2:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
#SBATCH --mem-per-cpu=2gb
#SBATCH -p k40
#SBATCH --gres=gpu:k40:2

module load namd/2.14-libverbs-CUDA

# Create a formatted list of nodes
HOSTNAMES=$(scontrol show hostnames)
while IFS= read -r HOST; do
    echo "host ${HOST}" >> namd.hostfile
done <<< "$HOSTNAMES"

# Create a runscript for loading the namd module
# on each node
cat > runscript << EOL
#!/bin/bash
module load namd/2.14-libverbs-CUDA
\$*
EOL

chmod +x runscript
# Use one process/thread fewer than the number of cores on each node.
let NCPUS=$SLURM_CPUS_ON_NODE-1

# The charmrun invocation below is a single command;
# the trailing backslashes continue it across lines.

`which charmrun` ++runscript ./runscript `which namd2` \
++nodelist namd.hostfile ++n $SLURM_JOB_NUM_NODES ++ppn $NCPUS \
+idlepoll +devices 0,1 stmv.namd > stmv.out