Minnesota Supercomputing Institute
6.5, 7.0, 7.5, 8.0, 9.0, 9.1, 10.0, 10.1
Friday, May 15, 2020
Wednesday, December 7, 2016
CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the graphics processing units (GPUs) that it produces. CUDA gives developers direct access to the virtual instruction set and memory of the parallel computational elements in CUDA-capable GPUs.
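To illustrate the programming model (this is a generic sketch, not code specific to MSI systems), a minimal CUDA program launches a kernel in which each GPU thread handles one element of the data:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host (CPU) buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Device (GPU) buffers.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[10] = %f\n", hc[10]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

A program like this is compiled with NVIDIA's nvcc compiler (e.g. `nvcc vecadd.cu -o vecadd`) and must be run on a node with a GPU.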
At MSI, CUDA is installed on Mesabi, our main cluster, which has 40 nodes with two NVIDIA Tesla K40 GPUs each. To request GPU nodes, submit to the k40 queue and include the number of GPUs needed in the PBS resource options. Below is an example of an interactive session requesting 1 node with 2 GPUs for 20 minutes.
NOTE: GPU nodes are not shared, which means any job running in the k40 queue will be charged for 24 cores of utilization.
(This assumes you are already on a Mesabi login node.)
[ln0003:~] % qsub -I -l nodes=1:gpus=2,walltime=20:00 -q k40
qsub: waiting for job 469592.mesabim3.msi.umn.edu to start
qsub: job 469592.mesabim3.msi.umn.edu ready
[cn3006:~] %
Load the CUDA modules:
[cn3006:~] % module load cuda cuda-sdk
Here, the deviceQuery program shows that there are 2 GPUs available:
[cn3006:~] % deviceQuery | grep NumDevs
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.0, CUDA Runtime Version = 7.0, NumDevs = 2, Device0 = Tesla K40m, Device1 = Tesla K40m
[cn3006:~] %
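The same resources can also be requested non-interactively. Below is a sketch of a PBS batch script using the resource options and modules from the interactive example above (the script name is illustrative; submit it with `qsub gpu_job.pbs`):

```shell
#!/bin/bash -l
#PBS -l nodes=1:gpus=2,walltime=20:00
#PBS -q k40

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR

module load cuda cuda-sdk

# Example workload: report the number of GPUs visible to the job.
deviceQuery | grep NumDevs
```

Remember that k40 nodes are not shared, so a batch job in this queue is charged for the full 24 cores regardless of how many it uses.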