Interactive queue use with srun

It is possible to start an interactive job on any of MSI’s clusters that support job submission. Interactive jobs can be useful for tasks such as data exploration, code development, or (with X11 forwarding) visualization.

The basic syntax for submitting an interactive job is illustrated by the following example for Mesabi:

srun -N 1 --ntasks-per-node=4  --mem-per-cpu=1gb -t 1:00:00 -p interactive --pty bash
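
Here -N 1 requests one node, --ntasks-per-node=4 requests four tasks (CPU cores) on that node, --mem-per-cpu=1gb requests 1 GB of memory per core, -t 1:00:00 sets a one-hour walltime limit, -p interactive selects the interactive partition, and --pty bash starts an interactive bash shell on the allocated node once the job begins.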

If you are submitting an interactive job from a session with X11 forwarding and would like to continue using X11 forwarding inside your job, add the ‘--x11’ flag, as follows:

srun -N 1 --ntasks-per-node=4  --mem-per-cpu=1gb -t 1:00:00 -p interactive --x11 --pty bash
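
Once the job starts, you can verify that forwarding works by launching a simple X client such as xclock (assuming it is installed on the compute nodes); a clock window should appear on your local display:

xclock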

The basic syntax for requesting two nodes, for instance to test MPI code across nodes, is:

srun -N 2 --ntasks-per-node=1  --mem-per-cpu=1gb -t 1:00:00 -p interactive --pty bash
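
Inside the two-node job you can then compile and run a small MPI test. The sketch below makes assumptions: the module name (ompi) and the test program (hello_mpi.c) are placeholders, so check ‘module avail’ for the MPI implementations on your cluster:

module load ompi                 # assumed module name for an MPI implementation
mpicc -o hello_mpi hello_mpi.c   # compile a small MPI test program
mpirun -np 2 ./hello_mpi         # launch one rank on each of the two nodes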

The syntax for requesting an interactive GPU node with an a40 GPU is:

srun -n 12 -t 1:00:00 -p interactive-gpu --gres=gpu:a40:1 --pty bash
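
Once on the GPU node, you can confirm that a GPU was allocated to the job:

nvidia-smi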

You may also submit a job to the interactive partition using a submission script. For instance, to test an MPI code, you could create a script (here named ‘interactive.sh’) with the following contents:

#!/bin/bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=1
#SBATCH --mem-per-cpu=1gb
#SBATCH -t 1:00:00
#SBATCH -p interactive
srun ./my_application

You would then submit this job using:

sbatch interactive.sh
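
You can then monitor the job's position in the queue with squeue, and cancel it with scancel if needed:

squeue -u $USER
scancel <jobid>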

Interactive jobs are subject to the same rules and limitations as batch jobs, and they have access only to the partitions available on the cluster they are submitted from. Jobs that request more resources may wait in the queue longer, so request only the resources you need.
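
To see which partitions are available on the cluster you are logged into, along with their time limits, use sinfo:

sinfo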

An interactive job supports all of the usual Slurm options: any directive you would place in a batch script can instead be supplied as a flag when requesting an interactive job.
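
For example, to receive an email notification when the interactive job starts (useful when the queue is busy), you can add Slurm's standard mail flags to the request; the address shown is a placeholder:

srun -N 1 --ntasks-per-node=4 --mem-per-cpu=1gb -t 1:00:00 -p interactive --mail-type=BEGIN --mail-user=username@umn.edu --pty bash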

Example interactive srun session

username@mydesktop$ ssh -Y msiusername@mesabi.msi.umn.edu

Password:

Last login: Tue May 10 15:16:06 2018 from mydesktop.mydept.umn.edu

-------------------------------------------------------------------------------

            University of Minnesota Supercomputing Institute

                                Mesabi

                        HP Haswell Linux Cluster

-------------------------------------------------------------------------------

[... output truncated ...]

---------------

username@ln0006 [~] % srun -N 2 --ntasks-per-node=1  --mem-per-cpu=1gb --x11 -t 1:00:00 --pty bash

username@cn0001 [~] % 

username@cn0001 [~] % echo $SLURM_JOB_NODELIST

cn[0001-0002]

username@cn0001 [~] %

Note that this interactive job is using two nodes (-N 2). At this point you are in your home directory and can load modules and launch software, including GUI software.
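
For example, you could load a module and launch its GUI over the X11 connection; the module name here is illustrative, so run ‘module avail’ to see what is installed:

module load matlab   # assumed module name
matlab               # the GUI displays on your local machine via X11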