University of Minnesota
Minnesota Supercomputing Institute

Tutorial Details: Parallel Programming Using MPI

Date: Tuesday, March 3, 2009, 10:00 am - 4:00 pm
Location: 575 Walter
Instructor(s): David H Porter, MSI, Hakizumwami Birali Runesha, MSI

This one-day workshop on MPI will help researchers write better, more portable parallel codes for distributed-memory machines such as Linux clusters, including MSI's 248-core Intel-based cluster (Calhoun) and the 1,000-core AMD cluster (Blade); the same MPI codes also run on the SGI Altix Itanium-based shared-memory system. The workshop will focus on basic point-to-point communication and collective communication, the most commonly used MPI routines in high-performance scientific computation. In addition, the advantages of MPI non-blocking communication will be introduced. Each session of the workshop will combine a lecture with hands-on practice: the lecture will introduce basic principles, and the hands-on portion will apply those principles through examples.

Session One: Introduction to the basic concepts of "MPI is small" (the handful of core routines sufficient for most programs), centering on point-to-point communication.

Session Two: MPI collective communications, including broadcast, gather, scatter, and all-to-all. Programming will be done in Fortran and C, so background in either language will be helpful.

Prerequisites: Familiarity with UNIX/Linux and knowledge of Fortran, C, or C++.