Minnesota Supercomputing Institute

Tutorial Details: Parallel Programming Using MPI

Date: Tuesday, October 4, 2005, 10:00 am - 4:00 pm
Location: 585 Walter
Instructor(s): Shuxia Zhang, MSI, Hakizumwami Birali Runesha, MSI

This one-day workshop on MPI will help researchers write better, more portable parallel codes for distributed-memory machines such as Linux clusters. It focuses on basic point-to-point and collective communications, the most commonly used MPI routines in high-performance scientific computation. In addition, the advantages of MPI nonblocking communication will be introduced.
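
As a rough preview of the kind of code the workshop covers, the sketch below (in C, written for this description rather than taken from the workshop materials) has rank 0 send a small array to rank 1 with blocking MPI_Send/MPI_Recv, then repeats the exchange with nonblocking MPI_Isend/MPI_Irecv, whose immediate return lets a process overlap other work with the communication until MPI_Wait. The buffer size, tags, and message contents are arbitrary illustrative choices; run it with at least two processes, e.g. mpirun -np 2 ./a.out.

/* Minimal point-to-point sketch (illustrative only, not workshop code). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;
    int n = 4;
    double buf[4] = {0.0, 1.0, 2.0, 3.0};
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Blocking exchange: MPI_Send and MPI_Recv return only when the
     * buffer is safe to reuse. */
    if (rank == 0)
        MPI_Send(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Nonblocking exchange: the calls return immediately, so useful
     * computation can overlap the communication before MPI_Wait. */
    if (rank == 0)
        MPI_Isend(buf, n, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, &req);
    else if (rank == 1)
        MPI_Irecv(buf, n, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, &req);

    if (rank == 0 || rank == 1) {
        /* ... independent work could go here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    if (rank == 1)
        printf("rank 1 received %g %g %g %g\n", buf[0], buf[1], buf[2], buf[3]);

    MPI_Finalize();
    return 0;
}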

The workshop combines a lecture with hands-on practice: the lecture introduces the basic principles, and the hands-on portion applies those principles through example programs.

1. Introduction to basic concepts of "MPI Is Small", centering on point-to-point communication.

2. MPI collective communications, including broadcast, gather, scatter, and all-to-all; a short sketch of these routines follows this list. Programming will be done in Fortran and C, so a background in either of these languages will be helpful.
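
As an illustration of the collective routines named above, the minimal C sketch below (again an example written for this description, not workshop material) broadcasts a scale factor from the root with MPI_Bcast, hands each rank its own value with MPI_Scatter, and collects the per-rank results back on the root with MPI_Gather; MPI_Alltoall follows the same pattern, with every rank both sending to and receiving from all others.

/* Minimal collective-communication sketch (illustrative only). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size, i;
    int root = 0;
    double scale = 0.0, mine;
    double *all = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Broadcast: after the call, every rank holds the root's value of scale. */
    if (rank == root)
        scale = 2.0;
    MPI_Bcast(&scale, 1, MPI_DOUBLE, root, MPI_COMM_WORLD);

    /* Scatter: the root distributes one double to each rank. */
    if (rank == root) {
        all = malloc(size * sizeof(double));
        for (i = 0; i < size; i++)
            all[i] = (double)i;
    }
    MPI_Scatter(all, 1, MPI_DOUBLE, &mine, 1, MPI_DOUBLE, root, MPI_COMM_WORLD);

    /* Each rank works on its own piece using the broadcast factor. */
    mine = scale * mine;

    /* Gather: the root collects the results from all ranks. */
    MPI_Gather(&mine, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, root, MPI_COMM_WORLD);

    if (rank == root) {
        for (i = 0; i < size; i++)
            printf("result from rank %d: %g\n", i, all[i]);
        free(all);
    }

    MPI_Finalize();
    return 0;
}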

Level:
Prerequisites: Familiarity with UNIX/Linux and knowledge of Fortran, C, or C++