Tutorial Details: Parallel Programming Using MPI
|Date:||Tuesday, March 1, 2005, 10:00 am - 4:00 pm|
|Instructor(s):||Shuxia Zhang, MSI, Hakizumwami Birali Runesha, MSI|
This one-day workshop on MPI will help researchers write better, portable parallel code for distributed-memory machines such as Linux clusters. It will focus on basic point-to-point and collective communication, the most commonly used MPI routines in high-performance scientific computation. In addition, the advantages of MPI non-blocking communication will be introduced.
The workshop will combine lecture with hands-on practice. The lecture introduces basic principles, and the hands-on portion applies those principles through examples.
Session One: Introduction to the basic concepts behind "MPI is small," centering on point-to-point communication.
Session Two: MPI collective communications, including broadcast, gather, scatter, and all-to-all (MPI_Alltoall).
Programming will be done in Fortran and C, so a background in either language will be helpful.
|Prerequisites:||Familiarity with Unix/Linux and knowledge of either Fortran, C, or C++|