Cascade - Quickstart Guide

This guide provides the basic information needed to get up and running with jobs on Cascade.

Important Information for Cascade Users:

  • Cascade is a General-Purpose GPU resource. You can find more details on the Cascade page.
  • PIs and group administrators must grant access to Cascade for their users.
  • Jobs that do not use the GPU resources are strictly prohibited.
  • Cascade is a billable resource and will be charged at the same rate as Itasca (1.5 CPU hours / SU).
  • There is currently no additional charge for GPU use.

Login Procedure:

To log in from an MSI resource:

ssh -X cascade.msi.umn.edu

To log in from a system outside of MSI, users must first use the MSI login host to connect to Cascade.
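
For example, a two-step login from outside MSI might look like the following; the login host name shown here (login.msi.umn.edu) is given only as an illustration, so substitute the MSI login host you normally use.

ssh -X username@login.msi.umn.edu
ssh -X cascade.msi.umn.edu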

Available Software:

A module system is used on Cascade to control the run-time environment for individual applications. Please type module avail to see the software available on Cascade. Please note that the cuda-sdk module is designed to operate only on the compute nodes; the head node does not have any GPU devices, so the cuda-sdk module will not work there.
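
A typical sequence for inspecting and loading the CUDA software, using only the module commands and module names mentioned above, looks like this:

module avail
module load cuda
module load cuda-sdk
module list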

Cascade queue:

The system includes modules for CUDA and the CUDA-SDK for the Tesla cards. The cascade queue uses the Tesla GPU cards. The CUDA-SDK is pre-compiled, and all of its examples are included in your path variables once the module is loaded. The head node does not have any GPU resources, but you may compile there. To gain access to a Tesla node, you must start an interactive qsub session by typing the following commands at the Linux command prompt.

qsub -I -l walltime=1:00:00,nodes=1:ppn=12:gpus=4:cascade

module load cuda

module load cuda-sdk

deviceQuery

This example requests 1 hour of wall clock time on one node with 12 cores and 4 Tesla cards. It is recommended that you use the GPU compilers for all CUDA work. OpenMPI/gnu is recommended for users who want to run across multiple nodes.
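
As a minimal sketch of compiling on the head node, assuming the cuda module places the nvcc compiler on your PATH and your source is in a file named mycudaprogram.cu (both names are placeholders for illustration):

module load cuda
nvcc -o mycudaprogram mycudaprogram.cu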

If you would like to submit a job to the cascade queue, use a PBS script similar to the one given below.

#!/bin/bash -l
#PBS -l walltime=01:00:00,pmem=2500mb,nodes=1:ppn=8:gpus=4:cascade
#PBS -m abe
#PBS -q cascade
module load cuda 
module load cuda-sdk 
cd /LOCATION/OF/FILES
./mycudaprogram
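
Assuming the script above is saved as cascade_job.pbs (the file name is arbitrary), it can be submitted and monitored with the standard PBS commands:

qsub cascade_job.pbs
qstat -u $USER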

To access the other queues, try the following qsub options:

Kepler queue:

To gain access to a Kepler node, you must start an interactive qsub session by typing the following command at the Linux command prompt.

 qsub -I -l walltime=1:00:00,nodes=1:ppn=16:gpus=2:kepler,pmem=200mb

This example requests 1 hour wall clock time on one node with 16 cores and 2 Kepler cards.
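
Once the interactive session starts, you can confirm that both Kepler cards are visible by loading the CUDA modules and running the SDK's deviceQuery example, just as on the Tesla nodes:

module load cuda
module load cuda-sdk
deviceQuery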

If you would like to submit a job to the kepler queue, use a PBS script similar to the one given below.

#!/bin/bash -l
#PBS -l walltime=1:00:00,nodes=1:ppn=16:gpus=2:kepler,pmem=200mb
#PBS -m abe
#PBS -q kepler
module load cuda 
module load cuda-sdk 
cd /LOCATION/OF/FILES
./mycudaprogram

Phi queue:

For a summary of how to use the Phi nodes in native mode, see the Intel Phi Quickstart.
To gain access to a Phi node, you must start an interactive qsub session by typing the following command at the Linux command prompt.

qsub -I -l walltime=24:00:00,nodes=1:ppn=12:phi,pmem=200mb
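
This example requests 24 hours of wall clock time on one node with 12 cores and a Phi card. As a minimal sketch of native Phi compilation, assuming the Intel compilers are provided by a module named intel and your source is in a file named myphiprogram.c (both names are illustrative), the -mmic flag builds a binary that runs natively on the Phi:

module load intel
icc -mmic -o myphiprogram.mic myphiprogram.c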