Minnesota Supercomputing Institute
This one-day, hands-on workshop introduces how to write a parallel program using MPI and will help researchers write better, more portable parallel code for distributed-memory Linux clusters. The tutorial will focus on basic point-to-point communication and collective communications, which are the most commonly used MPI routines in high-performance scientific computation. In addition, the advantages of MPI non-blocking communication will be introduced. Each session of the workshop will combine a lecture with hands-on practice: the lecture will introduce basic principles, and the hands-on portion will apply those principles through examples.
Session 1: Introduction to basic concepts of MPI, centering on point-to-point communication.
Session 2: MPI collective communications including broadcast, gather, scatter, and All-to-All.
Programming exercises will be offered in Fortran and C, so a background in either language will be helpful.