High-Performance Computing Resources

MSI provides access to a variety of high-performance computing (HPC) systems. The HPC Hardware Policies apply to all HPC resources. See each HPC resource's page for hardware and configuration details, a quickstart guide, and user resources. All MSI software, including software on the HPC systems, is documented on the software page.


Mesabi

MSI has selected HP to deploy the next major supercomputer, Mesabi, at the University of Minnesota. The delivery, installation, and testing timeline is posted on the system's page.


Itasca

Itasca is an HP Linux cluster with a 40-gigabit QDR InfiniBand (IB) interconnect. It comprises 1,091 HP ProLiant BL280c G6 blade servers, each with two quad-core 2.8 GHz Intel Xeon X5560 "Nehalem EP" processors sharing 24 GiB of system memory, plus 51 HP ProLiant BL460c G8 blade servers, each with two 8-core 2.6 GHz Intel Xeon E5-2670 "Sandy Bridge EP" processors and 64, 128, or 256 GiB of memory. In total, Itasca consists of 9,712 compute cores and 25 TiB of main memory.
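As a rough guide for sizing jobs, the memory available per core on each Itasca node type follows directly from the figures above. The sketch below is illustrative arithmetic only, not an official MSI sizing tool:

```python
# Per-core memory for Itasca node types, derived from the specs above.
# This is illustrative arithmetic, not an official MSI sizing guide.

def mem_per_core(total_gib, sockets, cores_per_socket):
    """Return GiB of memory available per core on one node."""
    return total_gib / (sockets * cores_per_socket)

# HP ProLiant BL280c G6: two quad-core Xeon X5560, 24 GiB per node.
nehalem = mem_per_core(24, sockets=2, cores_per_socket=4)

# HP ProLiant BL460c G8: two 8-core Xeon E5-2670, 64/128/256 GiB per node.
sandy_bridge = [mem_per_core(g, 2, 8) for g in (64, 128, 256)]

print(nehalem)        # 3.0 GiB per core
print(sandy_bridge)   # [4.0, 8.0, 16.0] GiB per core
```

A memory-bound job that needs more than about 3 GiB per core would therefore be a candidate for the larger-memory Sandy Bridge nodes.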


Cascade

Cascade is a mixed GPU and Intel Xeon Phi cluster. It has eight compute nodes with four Tesla GPUs per node and four nodes with dual Kepler GPUs. In addition, three nodes feature a Xeon Phi co-processor.

Red Nodes

Red Nodes is an MSI Hadoop cluster that supports Big Data analytics. It is composed of 50 nodes, each with six-core 2 GHz "Sandy Bridge EP" Intel Xeon E5-2620 processors, QDR InfiniBand, 8 GB of memory, and a 500 GB hard drive. MSI has configured 40 of these compute nodes into two Hadoop clusters of 20 nodes each.
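Jobs on a Hadoop cluster follow the MapReduce model: a map phase emits key-value pairs and a reduce phase aggregates them per key. A minimal local word-count sketch in Python illustrates the idea (illustrative only; real jobs on Red Nodes would use the Hadoop APIs or Hadoop Streaming):

```python
from collections import defaultdict

# Map phase: emit (word, 1) pairs for every word on every input line.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

# Reduce phase: sum the emitted counts for each distinct word.
def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data analytics", "big clusters for big data"]
print(reduce_phase(map_phase(lines)))
# {'big': 3, 'data': 2, 'analytics': 1, 'clusters': 1, 'for': 1}
```

On a real cluster, Hadoop distributes the map tasks across nodes, shuffles the intermediate pairs by key, and runs the reducers in parallel; the single-process sketch above only shows the programming model.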

Access to HPC Resources

Current MSI users with a Service Unit (SU) allocation should use myMSI to request access to additional HPC resources as needed.

If you are not yet an MSI user or are a user whose group does not yet have an SU allocation, please sign up for access.