This tutorial introduces parallel programming with OpenMP and will help researchers write better, more portable parallel codes for shared-memory Linux nodes. The course will cover the OpenMP compiler directives (44), runtime library routines (35), and environment variables (13). OpenMP supports C/C++ and Fortran implementations. Examples of how to enable OpenMP with the Intel, GNU, and PGI compilers will be given, and the fork-join model of thread-parallel execution will be described.
This one-day, hands-on workshop introduces parallel programming with MPI and will help researchers write better, more portable parallel codes for distributed-memory Linux clusters. The tutorial will focus on basic point-to-point and collective communication, the most commonly used MPI routines in high-performance scientific computation. In addition, the advantages of MPI non-blocking communication will be introduced. Each session of the workshop will combine a lecture with hands-on practice.
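The two communication styles can be sketched in a short C program, a minimal sketch rather than course material. It is typically compiled with the `mpicc` wrapper and launched with something like `mpirun -np 4 ./a.out`; the exact launcher name can vary by MPI installation.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    /* Point-to-point: rank 0 sends one integer to rank 1, if it exists. */
    int token = 42;
    if (size > 1) {
        if (rank == 0)
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    /* Collective: every rank contributes a value; rank 0 gets the sum. */
    int local = rank + 1, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

The non-blocking counterparts, `MPI_Isend` and `MPI_Irecv` completed later by `MPI_Wait`, let a program overlap communication with computation instead of stalling at each transfer.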
This tutorial will help users learn the basics of parallel computation, including strategies for grouping calculations together for parallel execution. Brief introductions to parallel programming with MPI message passing, with OpenMP, and with the hybrid MPI/OpenMP model will be given. This is a crash course in the most basic parallel computation and programming methods, with examples of how to compile and execute simple parallel programs.
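As a hedged sketch of the hybrid model (names and flags are illustrative), each MPI rank opens its own OpenMP thread team. Such a program is typically built with both mechanisms enabled, e.g. `mpicc -fopenmp hybrid.c`, and run with a few ranks while `OMP_NUM_THREADS` sets the threads per rank.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank;
    /* Request a thread-aware MPI: FUNNELED means only the master
       thread of each rank will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI rank forks its own OpenMP thread team. */
    #pragma omp parallel
    printf("rank %d: thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}
```

With 2 ranks and 4 threads each, this prints eight lines, one per rank/thread pair, which makes the two levels of parallelism visible.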
This tutorial will help users learn the basics of compiling and debugging their code on MSI systems. Particular attention will be paid to code written in Fortran, C, and C++. Basic methods for debugging will be outlined, with users being able to explore different debugging tools. This tutorial will focus primarily on compiling serial programs, but brief information on compiling and debugging parallel programs will also be given. Attendees should have a basic knowledge of Linux and rudimentary knowledge of a programming language.
This two-part tutorial will first introduce you to the concept of interactive high-performance computing, as distinct from batch computing. We will cover the Citrix (Windows) and NICE EnginFrame (Linux) interactive computing environments hosted by MSI. Attendees will learn how to launch virtual desktops at MSI, connect to a variety of resources, load software modules, and build complex research workflows.
Intel will provide training to help software developers optimize their codes for the Xeon Phi coprocessor. The training will be held at the Minnesota Supercomputing Institute at the University of Minnesota. Current University students, staff, and faculty are invited.
See the program description. While the training is free, you must pre-register.
The training will cover:
This tutorial will provide an introduction to the Linux operating system, with particular attention paid to working from the command line. The tutorial will cover basics such as fundamental commands, editing files, understanding directories and permissions, and remote access. No previous Linux experience is required.
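A few of the fundamental commands the tutorial introduces can be sketched in a short session (the directory and file names below are hypothetical examples):

```shell
mkdir -p project             # create a directory
cd project                   # move into it
echo "hello" > notes.txt     # create a small file
ls -l notes.txt              # long listing: permissions, owner, size
chmod 600 notes.txt          # restrict access to the owner only
grep hello notes.txt         # search for text inside the file
cd ..                        # go back up one directory
```

Remote access follows the same pattern from another machine, typically via `ssh username@hostname`, after which the commands above work identically.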
This tutorial will introduce users to MSI supercomputers, and provide an overview of how to submit calculations to the job schedulers. Topics covered include creating job scripts, types of jobs, job queues, differences in available hardware, checking job status, and choosing an appropriate place to submit a calculation.
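As one hedged illustration of a job script, here is a minimal Slurm-style batch script; MSI's actual scheduler, queue/partition names, module names, and submission commands may differ, so every directive below should be treated as a placeholder to adapt.

```shell
#!/bin/bash -l
#SBATCH --time=00:15:00        # wall-clock time limit
#SBATCH --ntasks=1             # number of tasks (cores) requested
#SBATCH --mem=2g               # memory requested
#SBATCH --job-name=example     # name shown in the queue listing

module load gcc                # load a software module (name may differ)
./my_program                   # hypothetical executable to run
```

Under Slurm such a script is submitted with `sbatch job.sh` and its status checked with `squeue -u $USER`; the matching calculation size, queue choice, and hardware request are exactly the topics this tutorial covers.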
This tutorial is geared toward new MSI users and will provide a high-level introduction to the facilities and computational resources at MSI.