Partitions


Slurm Partitions

Under MSI's new scheduler, Slurm, queues are known as partitions. The job partitions on our systems manage different sets of hardware and have different limits for quantities such as walltime, available processors, and available memory. When submitting a calculation, it is important to choose a partition whose hardware and resource limits suit the job.

Selecting a Partition

Each MSI system contains job partitions managing sets of hardware with different resource and policy limitations. MSI currently has two primary systems: the supercomputer Mesabi and its expansion, Mangi. Mesabi has high-performance hardware and a wide variety of partitions suitable for many different job types. Mangi expands Mesabi's capacity and should be your first choice for submitting jobs. Which system to choose depends largely on which one has partitions appropriate for your software or script. More information about selecting a partition and the different partition parameters can be found on the Choosing A Partition (Slurm) page.
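
In practice, the partition is selected inside the job script with the #SBATCH -p (or --partition) directive. Below is a minimal sketch of a CPU job script targeting the small partition; the resource requests, job name, module, and program are placeholders to adapt to your own work, not MSI-specific requirements.

#!/bin/bash -l
#SBATCH -p small                # partition chosen from the table below
#SBATCH --nodes=1               # example resource requests; adjust for your job
#SBATCH --ntasks=4
#SBATCH --mem=8G
#SBATCH --time=02:00:00         # must fit within the partition's walltime limit
#SBATCH --job-name=cpu_example  # placeholder job name

module load mymodule            # placeholder; load the software your job needs
./my_program                    # placeholder executable

The script is then submitted with sbatch, for example: sbatch job_script.sh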

Below is a summary of the available partitions and their associated limits, organized by system. The quantities listed are totals or upper limits.

Mangi and Mesabi 

Partition name | Node sharing? | Cores per node | Walltime limit | Total node memory | Advised memory per core | Local scratch per node | Maximum nodes per job
amdsmall (1) | Yes | 128 | 96:00:00 | 248 GB | 1900 MB | 415 GB | 1
amdlarge | No | 128 | 24:00:00 | 248 GB | 1900 MB | 415 GB | 32
amd512 | Yes | 128 | 96:00:00 | 499 GB | 4000 MB | 415 GB | 1
amd2tb | Yes | 128 | 96:00:00 | 1995 GB | 15 GB | 415 GB | 1
v100 (1) | Yes | 24 | 24:00:00 | 374 GB | 15 GB | 859 GB | 1
small | Yes | 24 | 96:00:00 | 60 GB | 2500 MB | 429 GB | 10
large | No | 24 | 24:00:00 | 60 GB | 2500 MB | 429 GB | 48
max | Yes | 24 | 696:00:00 | 60 GB | 2500 MB | 429 GB | 1
ram256g | Yes | 24 | 96:00:00 | 248 GB | 10 GB | 429 GB | 2
ram1t (2) | Yes | 32 | 96:00:00 | 1002 GB | 31 GB | 380 GB | 2
k40 (1) | Yes | 24 | 24:00:00 | 123 GB | 5 GB | 429 GB | 40
interactive (3) | Yes | 24 | 24:00:00 | 60 GB | 2 GB | 228 GB | 2
interactive-gpu (3) | Yes | 24 | 24:00:00 | 60 GB | 2 GB | 228 GB | 2
preempt (4) | Yes | 24 | 24:00:00 | 60 GB | 2 GB | 228 GB | 2
preempt-gpu (4) | Yes | 24 | 24:00:00 | 60 GB | 2 GB | 228 GB | 2


(1) Note: In addition to selecting a GPU partition, all GPU jobs must explicitly request the GPUs they need. A K40 GPU can be requested by including the following two lines in your submission script:

#SBATCH -p k40
#SBATCH --gres=gpu:k40:1

A V100 GPU can be requested by including the following two lines in your submission script:

#SBATCH -p v100
#SBATCH --gres=gpu:v100:1
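
Putting these pieces together, a complete GPU job script might look like the following sketch; the walltime, memory, module, and program names are illustrative placeholders rather than MSI requirements.

#!/bin/bash -l
#SBATCH -p v100                 # GPU partition from the table above
#SBATCH --gres=gpu:v100:1       # request one V100 GPU
#SBATCH --ntasks=1
#SBATCH --mem=16G               # placeholder memory request
#SBATCH --time=08:00:00         # must fit within the v100 partition's 24:00:00 limit
#SBATCH --job-name=gpu_example  # placeholder job name

module load cuda                # placeholder; load whatever software your GPU job needs
./my_gpu_program                # placeholder executable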


(2) Note: The ram1t nodes contain Intel Ivy Bridge processors, which do not support all of the optimized instructions of the Haswell processors. Programs compiled with Haswell instructions will run only on Haswell processors and therefore will not run on the ram1t nodes.


(3) Note: Users are limited to 2 jobs in the interactive and interactive-gpu partitions.
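
As a sketch (the exact invocation is not covered on this page), an interactive session in one of these partitions can be started with srun's --pty option; the resource values below are illustrative.

srun -p interactive --nodes=1 --ntasks=4 --mem=8G --time=01:00:00 --pty bash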

(4) Note: Jobs in the preempt and preempt-gpu partitions may be killed at any time to make room for jobs in the interactive or interactive-gpu partitions.