CUDA

CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers direct access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.

At MSI, CUDA is installed on Cascade, a heterogeneous computing environment with NVIDIA Kepler, NVIDIA Tesla, and Intel Xeon Phi processors.

SW Documentation: 

To run CUDA, use the following commands at the Linux command prompt:

module load cuda
module load cuda-sdk
deviceQuery

The last command, deviceQuery, returns the details of the GPUs in the system. It is a simple way to test that the GPUs are accessible.
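The same information can also be queried from your own code through the CUDA runtime API. The following is only an illustrative sketch (the file name listdevices.cu is an example, not part of MSI's documentation or the CUDA SDK samples):

#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: list the CUDA devices visible to the job,
// similar to what deviceQuery reports.
int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Found %d CUDA device(s)\n", count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s, compute capability %d.%d, %zu MB global memory\n",
               dev, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}

With the cuda module loaded, such a file could be compiled with, for example, nvcc -o listdevices listdevices.cu.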

There are several versions installed on different systems. To find out which versions are available on the system you are using, run the following:

module avail cuda

Jobs can be submitted to the cascade queue, which uses the Tesla cards, with a PBS script such as the one shown below.

#!/bin/bash -l
# Request 1 hour of walltime, 2500 MB of memory per process,
# and 1 node with 8 processor cores and 4 GPUs on cascade.
#PBS -l walltime=01:00:00,pmem=2500mb,nodes=1:ppn=8:gpus=4:cascade
# Send email when the job aborts, begins, and ends.
#PBS -m abe
# Submit to the cascade queue.
#PBS -q cascade
module load cuda 
module load cuda-sdk 
# Change to the directory containing the executable and run it.
cd /LOCATION/OF/FILES
./mycudaprogram
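The executable mycudaprogram in the script stands for your own compiled CUDA code. As an illustrative sketch only (the file name vecadd.cu and the program itself are examples, not part of MSI's documentation), a simple GPU vector-addition program might look like this:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Allocate and initialize host arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device memory and copy the inputs to the GPU.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch the kernel with enough blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

After loading the cuda module, it could be compiled with, for example, nvcc -o mycudaprogram vecadd.cu and then run through the PBS script above.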

In addition to the cascade queue, Cascade also has a kepler queue and a phi queue. You can find more details about the three queues and the corresponding PBS scripts in the Cascade quickstart guide.

Short Name: 
cuda
SW Module: 
cuda
Service Level: 
Primary
SW Category