CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the graphics processing units (GPUs) that they produce. CUDA gives developers direct access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.

At MSI, CUDA is installed on Mesabi, our main cluster, which has 40 nodes with two NVIDIA Tesla K40 GPUs each. To request GPU nodes, submit to the k40 queue and include the number of GPUs needed for the job in the PBS resource options. Below is an example of an interactive session requesting 1 node with 2 GPUs for 20 minutes.

% qsub -I -l nodes=1:ppn=16:gpus=2,walltime=20:00 -q k40
qsub: waiting for job to start
qsub: job ready

[~] %
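The same resources can also be requested in a batch script built from the qsub options above. A minimal sketch is shown below; the script body, working-directory change, and the executable name `my_cuda_program` are placeholders for your own job.

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=16:gpus=2,walltime=20:00
#PBS -q k40

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# Load the CUDA modules (same as in the interactive session below)
module load cuda cuda-sdk

# Placeholder: replace with your own CUDA executable
./my_cuda_program
```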


Load the CUDA modules:

[~] % module load cuda cuda-sdk

Here, the deviceQuery utility from the CUDA SDK shows that there are 2 GPUs available:

[~] % deviceQuery | grep NumDevs
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.0, CUDA Runtime Version = 7.0, NumDevs = 2, Device0 = Tesla K40m, Device1 = Tesla K40m
[~] %
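You can perform the same check from your own code with the CUDA runtime API. A minimal sketch, compiled with `nvcc` after the modules above are loaded (the file name `count_devices.cu` is just an example):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaError_t err = cudaGetDeviceCount(&n);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }
    printf("NumDevs = %d\n", n);

    // Print the name of each visible GPU (e.g. "Tesla K40m")
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device%d = %s\n", i, prop.name);
    }
    return 0;
}
```

Compile and run on a GPU node with, for example, `nvcc count_devices.cu -o count_devices && ./count_devices`.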
