Slurm Partitions
Under MSI's new scheduler, Slurm, queues are known as partitions. The partitions on our systems manage different sets of hardware and have different limits on quantities such as walltime, available processors, and available memory. When submitting a calculation, it is important to choose a partition whose hardware and resource limits suit the job.
Selecting a Partition
Each MSI system contains job partitions managing sets of hardware with different resource and policy limitations. MSI currently has two primary systems: the supercomputer Mesabi and its expansion, Mangi. Mesabi has high-performance hardware and a wide variety of partitions suitable for many different job types. Mangi expands Mesabi and should be your first choice when submitting jobs. Which system to choose depends largely on which one has partitions appropriate for your software or script. More information about selecting a partition and the different partition parameters can be found on the Choosing A Partition (Slurm) page.
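For example, a minimal submission script targeting the small partition might look like the sketch below (the partition name, walltime, task count, and memory values are illustrative and should be adjusted to the limits in the table that follows; my_program.sh is a placeholder for your own executable):
#!/bin/bash -l
#SBATCH -p small
#SBATCH --time=01:00:00
#SBATCH --ntasks=4
#SBATCH --mem=8g
cd $SLURM_SUBMIT_DIR
./my_program.sh
The script is submitted with sbatch, for example sbatch myjob.sh, and running sinfo on a login node lists the partitions and their time limits.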
Below is a summary of the available partitions, organized by system, along with their associated limits. The quantities listed are totals or upper limits.
Mangi and Mesabi
Partition name | Node sharing? | Cores per node | Walltime limit | Total node memory | Advised memory per core | Local scratch per node | Maximum nodes per job
---|---|---|---|---|---|---|---
amdsmall (1) | Yes | 128 | 96:00:00 | 248.7 GB | 1900 MB | 429 GB | 1
amdlarge | No | 128 | 24:00:00 | 248.7 GB | 1900 MB | 429 GB | 32
amd2tb | Yes | 128 | 96:00:00 | 2010 GB | 15 GB | 429 GB | 1
v100 (1) | Yes | 24 | 24:00:00 | 376.4 GB | 15 GB | 875 GB | 1
small | Yes | 24 | 96:00:00 | 60.4 GB | 2500 MB | 390 GB | 10
large | No | 24 | 24:00:00 | 60.4 GB | 2500 MB | 390 GB | 48
max | Yes | 24 | 696:00:00 | 60.4 GB | 2500 MB | 390 GB | 1
ram256g | Yes | 24 | 96:00:00 | 248.9 GB | 10 GB | 390 GB | 2
ram1t (2) | Yes | 32 | 96:00:00 | 1003.9 GB | 31 GB | 228 GB | 2
k40 (1) | Yes | 24 | 24:00:00 | 123.2 GB | 5 GB | 390 GB | 40
interactive (3) | Yes | 24 | 24:00:00 | 60.4 GB | 2 GB | 228 GB | 2
interactive-gpu (3) | Yes | 24 | 24:00:00 | 60.4 GB | 2 GB | 228 GB | 2
preempt (4) | Yes | 24 | 24:00:00 | 60.4 GB | 2 GB | 228 GB | 2
preempt-gpu (4) | Yes | 24 | 24:00:00 | 60.4 GB | 2 GB | 228 GB | 2
(1) Note: In addition to selecting a GPU partition, GPUs need to be requested for all GPU jobs. A k40 GPU can be requested by including the following two lines in your submission script:
#SBATCH -p k40
#SBATCH --gres=gpu:k40:1
A V100 GPU can be requested by including the following two lines in your submission script:
#SBATCH -p v100
#SBATCH --gres=gpu:v100:1
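Putting the pieces together, a sketch of a complete V100 job script might look like the following (the walltime, task count, and memory values are illustrative, and my_gpu_program is a placeholder for your own executable):
#!/bin/bash -l
#SBATCH -p v100
#SBATCH --gres=gpu:v100:1
#SBATCH --time=04:00:00
#SBATCH --ntasks=4
#SBATCH --mem=16g
nvidia-smi
./my_gpu_program
The nvidia-smi call simply prints the GPU(s) visible to the job, which is a quick way to confirm that the --gres request was honored.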
(2) Note: The ram1t nodes contain Intel Ivy Bridge processors, which do not support all of the optimized instructions available on Haswell processors. Programs compiled with Haswell-specific instructions will run only on Haswell processors.
(3) Note: Users are limited to a single job in the interactive and interactive-gpu partitions.
(4) Note: Jobs in the preempt and preempt-gpu partitions may be killed at any time to make room for jobs in the interactive or interactive-gpu partitions.
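One common way to start a session in the interactive partitions is with srun rather than sbatch. A sketch (the walltime, task count, and memory values are illustrative, and the generic gpu:1 request assumes whichever GPU type backs the interactive-gpu partition):
srun -p interactive --time=1:00:00 --ntasks=1 --mem=4g --pty bash
srun -p interactive-gpu --gres=gpu:1 --time=1:00:00 --ntasks=1 --mem=4g --pty bash
Once the job starts, the prompt is a shell on the assigned compute node; the session ends when you exit the shell or reach the walltime limit.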