The Minnesota Supercomputing Institute (MSI) is happy to announce that it has added 550 TB of usable disk space to its flagship Itasca system with the addition of a Dell | Terascala HPC Storage Solution (HSS). This is a complete Lustre appliance comprising hardware, the file system, and the Terascala ISIS management solution. The HSS consists of 360 2-TB disk drives configured as RAID 5 storage with small RAID sets, three pairs of data movers, and a pair of metadata servers, all with redundancy and managed failover capabilities.
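The capacity figures above can be sanity-checked with simple arithmetic. The article does not state the exact RAID-5 group size, hot-spare count, or formatting overhead, so the group geometries in this sketch are illustrative assumptions only:

```python
# Back-of-the-envelope check of the HSS capacity figures.
# Known from the article: 360 drives x 2 TB, 550 TB usable.
# The RAID-5 group size and spare/overhead policy are assumptions.

DRIVES = 360
DRIVE_TB = 2

raw_tb = DRIVES * DRIVE_TB  # 720 TB raw

def usable_tb(group_size, spares=0, overhead=0.0):
    """Usable capacity for RAID-5 groups of `group_size` drives
    (one parity drive per group), minus hot spares and a fractional
    formatting/filesystem overhead."""
    data_drives = DRIVES - spares
    groups = data_drives // group_size
    data_per_group = group_size - 1  # RAID 5 sacrifices one drive per group
    return groups * data_per_group * DRIVE_TB * (1 - overhead)

for g in (5, 6, 9):
    print(f"{g}-drive RAID-5 groups: ~{usable_tb(g):.0f} TB before overhead")
```

Small 5-drive (4+1) groups, consistent with the "small RAID sets" mentioned above, yield 576 TB before spares and formatting overhead, which lands near the quoted 550 TB usable.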
This new storage solution runs I/O-intensive applications five times faster, and it is expected to enhance MSI users' computations by making data input and output significantly faster.
Performance testing showed:
- 1 GB/s read and 620 MB/s write for single clients;
- 16.2 GB/s read and 13 GB/s write for sets of 50 clients;
- 11.4 GB/s read and 8.1 GB/s write for MPI-IO with 50 clients (4 TB files).
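The aggregate 50-client figures above imply the following sustained per-client rates (simple division; actual per-client throughput will vary with load balance and striping):

```python
# Per-client throughput implied by the 50-client aggregate results.
CLIENTS = 50

aggregate_gbs = {
    "read (50 clients)":  16.2,
    "write (50 clients)": 13.0,
    "MPI-IO read":        11.4,
    "MPI-IO write":        8.1,
}

for name, gbs in aggregate_gbs.items():
    per_client_mbs = gbs * 1000 / CLIENTS  # GB/s aggregate -> MB/s per client
    print(f"{name}: ~{per_client_mbs:.0f} MB/s per client")
```

For example, the 16.2 GB/s aggregate read works out to roughly 324 MB/s per client.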
"Terascala has intelligently and effectively streamlined the formidable tasks of Lustre installation, deployment and management," said MSI's Assistant Director of HPC Operations, Jeff McDonald. "Terascala has turned Lustre into an appliance that just works once you plug it in. The system was fully deployed in two racks when it arrived on 24 September. After unpacking, the system was powered and connected and minutes later we were writing to the filesystem!"
Terascala has also integrated a management console with its solution. In one application, the console displays the current state of the system by key component and issues email alerts when problems arise. "With the management console, we no longer have to look for issues ourselves; the system nearly manages itself -- a striking contrast from 'roll your own' Lustre deployments," said McDonald.
The new Dell | Terascala HSS storage appliance is connected to MSI's Itasca system, an HP Linux cluster with 1,091 HP ProLiant BL280c G6 blade servers, each with two quad-core 2.8 GHz Intel Xeon X5560 "Nehalem EP" processors sharing 24 GB of system memory, linked by a 40-gigabit QDR InfiniBand (IB) interconnect. In total, Itasca consists of 8,728 compute cores and 24 TB of main memory.
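The quoted core count follows directly from the per-node specification:

```python
# Itasca's total core count from its per-node configuration.
NODES = 1091
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 4  # Xeon X5560 is a quad-core part

total_cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
print(total_cores)  # 8728, matching the figure in the article
```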
This new system will be used by the research community for applications including weather modeling, galactic modeling, computational fluid dynamics, bioinformatics research and computational chemistry.