Cascade consists of a Dell R710 head/login node with 48 GiB of memory, eight Dell compute nodes each with dual six-core Intel Xeon X5675 3.06 GHz processors and 96 GiB of memory, and 32 Nvidia M2070 GPGPUs. Each compute node is connected to four GPGPUs, each of which has 448 cores at 1.15 GHz and 5 GB of memory. Each GPGPU is capable of 1.2 TFLOPS single-precision and 0.5 TFLOPS double-precision.

User Guide and Tutorials

  • For a quick introduction to working on Cascade, see the quick start guide.
  • See the hands-on tutorial materials.
  • For a presentation on the current state of applications ported to GPU computing, including performance-improvement information, see the NVIDIA tutorial presentation.
  • For an overview of GPU ports of molecular dynamics (Ch. 1) and quantum chemistry (Ch. 2) codes, see the report mentioned in the NVIDIA presentation above.

Hardware and Configuration

  • Dell R710 head node with 48 GiB memory
  • Eight compute nodes in two Dell C6100 chassis with 32 Nvidia M2070 GPGPUs (nominally four GPGPUs per node, but configurable). These nodes are referred to as tesla nodes (cascade queue).
  • Four HP compute nodes (ProLiant SL250s Gen8), each with two Nvidia Kepler (K20m) GPGPUs. These nodes are referred to as kepler nodes (kepler queue).
  • Two nodes, each with one Intel Xeon Phi (5110P) coprocessor card. These nodes are referred to as phi nodes (phi queue).
  • QDR-IB connectivity between nodes

Node Descriptions

Tesla nodes (cascade queue):

  • Two Intel(R) Xeon(R) CPU X5675 @ 3.06GHz (each with six cores)
  • 96 GiB of memory

Kepler nodes (kepler queue):

  • Two Intel(R) Xeon(R) E5-2670 (Sandy Bridge) CPUs @ 2.60 GHz (eight cores each, 16 cores per node)
  • 124 GB of memory

Phi nodes (phi queue):

  • Two Intel(R) Xeon(R) E5-2670 (Sandy Bridge) CPUs @ 2.60 GHz (eight cores each, 16 cores per node)
  • 124 GB of memory


All nodes are connected to both IP (Ethernet) and InfiniBand networks.

Home Directories and Disks

Cascade home directories are as described on the MSI Home Directories page and are shared with Itasca. MSI central project spaces are available on the Cascade nodes by request.

Scratch Space

Cascade shares a Lustre scratch filesystem with Itasca. The Lustre filesystem is mounted at /lustre and has a capacity of 590 TB.


Queues

Cascade has three queues, named cascade, kepler, and phi. The kepler and phi queues have a 24-hour run-time limit; the cascade queue has a 120-hour limit. A more detailed queue description is provided on the queues page.
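Jobs are submitted to these queues through the batch scheduler. As a rough sketch, a minimal PBS-style job script for the kepler queue might look like the following; the resource values, the module name, and the executable name are illustrative placeholders, not site-verified settings — consult the queues page for the exact resource syntax required on Cascade.

```shell
#!/bin/bash
# Example job script for the kepler queue (24-hour run-time limit).
# walltime/nodes values, the module name, and ./my_gpu_app are placeholders.
#PBS -q kepler
#PBS -l walltime=24:00:00
#PBS -l nodes=1:ppn=16

cd "$PBS_O_WORKDIR"   # start in the directory the job was submitted from
module load cuda      # load the CUDA toolchain (exact module name may differ)
./my_gpu_app          # replace with your GPU executable
```

A script like this would be submitted with `qsub job.pbs`; substitute `-q cascade` or `-q phi` (with an appropriate walltime) to target the other queues.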