
Minnesota Supercomputing Institute


Tutorial Details: Parallel Programming Using MPI

Date: Tuesday, October 9, 2007, 10:00 am - 4:00 pm
Location: 575 Walter
Instructor(s): Shuxia Zhang, MSI; Hakizumwami Birali Runesha, MSI

This one-day workshop on MPI will help researchers write better, more portable parallel codes for distributed-memory machines such as Linux clusters, including the Institute's IBM Power4, SGI Altix, BladeCenter, and Calhoun systems. It will focus on basic point-to-point and collective communication, the most commonly used MPI routines in high-performance scientific computation. The advantages of MPI non-blocking communication will also be introduced. Each session of the workshop combines a lecture with hands-on practice: the lecture introduces the basic principles, and the hands-on portion applies those principles through examples.
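
To give a flavor of the non-blocking communication mentioned above, here is a minimal C sketch (an illustrative example, not part of the course materials) in which each of two processes starts a send and a receive with MPI_Isend/MPI_Irecv, could overlap them with other work, and then waits for completion. The tag value and two-process assumption are illustrative only.

    /* Illustrative sketch of MPI non-blocking point-to-point communication.
       Typically compiled with an MPI wrapper (e.g. mpicc) and run with
       something like "mpirun -np 2 ./a.out"; exact commands vary by system. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, other, sendbuf, recvbuf;
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        other = 1 - rank;              /* assumes exactly 2 processes */
        sendbuf = rank;

        /* Start the send and receive, then wait; independent computation
           that does not touch the buffers could run in between. */
        MPI_Isend(&sendbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&recvbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        printf("rank %d received %d\n", rank, recvbuf);
        MPI_Finalize();
        return 0;
    }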

Session One: Introduction to the basic concepts of "MPI is Small," centering on point-to-point communication.
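
To give a rough sense of how small that basic subset is, the following C sketch (an illustrative example, not the workshop's own code) uses only MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, and MPI_Finalize, with rank 0 sending one integer to rank 1; the payload value is arbitrary.

    /* Illustrative "MPI is Small" sketch: a complete program built from
       the six basic MPI routines. Assumes at least 2 processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;                                  /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }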

Session Two: MPI collective communications, including broadcast, gather, scatter, and all-to-all (MPI_Alltoall).
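
As a hedged C sketch of two such collectives (again illustrative, not taken from the course materials): rank 0 broadcasts a value to every process, each process computes a local result, and MPI_Gather collects the results back on rank 0.

    /* Illustrative sketch of two collective calls: MPI_Bcast and MPI_Gather. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size, n = 0, result, *all = NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) n = 10;                        /* value chosen by the root */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD); /* every rank now has n */

        result = rank * n;                            /* each rank computes locally */

        if (rank == 0) all = malloc(size * sizeof(int));
        MPI_Gather(&result, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            for (int i = 0; i < size; i++)
                printf("result from rank %d: %d\n", i, all[i]);
            free(all);
        }

        MPI_Finalize();
        return 0;
    }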

Programming will be done in Fortran and C, so a background in either of these languages will be helpful.

Level:
Prerequisites: Familiarity with UNIX/Linux and knowledge of Fortran, C, or C++