Python is a modern, general-purpose programming language that is popular in scientific computing for its readable syntax and rich ecosystem of scientific and mathematical modules. The morning section will provide an introduction to some widely used packages, including common idioms for manipulating and visualizing data. The afternoon section will cover advanced modules and techniques relevant to high-performance computing.
This tutorial is paired with Analyzing ChIP-Seq Data using Galaxy and will take the user through the same steps, but using the command-line versions of the tools used in the Galaxy environment. This tutorial will:
1. Provide a brief introduction to MSI systems.
2. Provide a very brief introduction to UNIX.
3. Take users step-by-step through the process needed to analyze ChIP-Seq data.
4. Provide users with a basic PBS script to automate the mapping and peak calling.
5. Teach users how to edit and run the script to be used in the future.
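The PBS script in step 4 might look like the following minimal sketch. The module names, tool choices (BWA for mapping, MACS for peak calling), file names, and resource requests here are illustrative placeholders, not the exact script distributed in the tutorial:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=8,walltime=08:00:00
#PBS -l mem=16gb
#PBS -m abe                      # mail on abort, begin, and end

cd "$PBS_O_WORKDIR"

# Module and tool names are placeholders; run `module avail`
# on MSI systems to see the versions actually installed.
module load bwa
module load macs

# Map the reads (hypothetical file names).
bwa mem -t 8 reference.fa sample.fastq > sample.sam

# Call peaks against a control sample.
macs2 callpeak -t sample.sam -c control.sam -n sample
```

A script like this is submitted with `qsub script.pbs`; editing the file names and resource requests adapts it to future datasets, which is the goal of step 5.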
In this tutorial you will learn about the data storage systems available for academic research at the University of Minnesota. An overview of the kinds of storage systems that are available, policies for getting access to them, a comparison of their characteristics, and examples of how they can be accessed will be presented. You will also be given an overview of how the characteristics of UMN storage will impact the stability and throughput of various applications and workflows.
This tutorial provides an introduction on how to write a parallel program using OpenMP and will help researchers write better and more portable parallel code for shared-memory Linux nodes. The course will cover the compiler directives (44), runtime library routines (35), and environment variables (13) relevant to OpenMP. OpenMP supports C/C++ and Fortran implementations. Examples of how to enable OpenMP on the Intel, GNU, and PGI compilers will be given. The fork-join model of thread-parallel execution will be described.
This one-day, hands-on workshop provides an introduction on how to write a parallel program using MPI and will help researchers write better and more portable parallel codes for distributed-memory Linux clusters. The tutorial will focus on basic point-to-point communication and collective communications, which are the most commonly used MPI routines in high-performance scientific computation. In addition, the advantages of using MPI non-blocking communication will be introduced. Each session of the workshop will combine a lecture with hands-on practice.
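The three routine families named above can be sketched in one short C program, assuming an MPI installation is available (the token value and rank layout are illustrative). `MPI_Reduce` is a collective, `MPI_Send`/`MPI_Recv` are blocking point-to-point calls, and `MPI_Isend` with `MPI_Wait` shows the non-blocking form that lets computation overlap communication:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Collective: sum one value per rank onto rank 0. */
    int local = rank + 1, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    /* Point-to-point: rank 0 passes a token to rank 1, if present. */
    if (size > 1) {
        int token;
        if (rank == 0) {
            token = 42;
            /* Non-blocking send: rank 0 may do useful work
               between MPI_Isend and MPI_Wait. */
            MPI_Request req;
            MPI_Isend(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            /* ... computation could overlap the transfer here ... */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Blocking receive: returns once the token has arrived. */
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }

    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);
    MPI_Finalize();
    return 0;
}
```

A typical build-and-run cycle is `mpicc example.c -o example` followed by `mpirun -np 4 ./example`; exact launcher names vary by MPI implementation and site configuration.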
This tutorial will help users learn the basics of parallel computation methods, including strategies for collecting calculations together for parallel execution. Brief descriptions of parallel programming with MPI message passing and with OpenMP will be given, and the hybrid MPI/OpenMP model will be outlined. This will be a fast crash course on the most basic parallel computation and programming methods. Examples of how to compile and execute simple parallel programs will be given.
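The hybrid model combines the two approaches: MPI distributes ranks across nodes while OpenMP threads share memory within each node. The following minimal C sketch (loop bounds and counts are illustrative) assumes an MPI library and an OpenMP-capable compiler, e.g. `mpicc -fopenmp hybrid.c -o hybrid` followed by `mpirun -np 2 ./hybrid`, with flags varying by compiler:

```c
#include <mpi.h>
#include <stdio.h>

/* Hybrid model: typically one MPI rank per node, with OpenMP
   threads inside each rank. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long local = 0;
    /* Shared-memory level: threads on this rank split the loop. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1;

    long global = 0;
    /* Distributed-memory level: ranks combine results by
       message passing. */
    MPI_Reduce(&local, &global, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global count = %ld\n", global);
    MPI_Finalize();
    return 0;
}
```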
Undergraduate and graduate students with some familiarity with the finite element method, plus faculty interested in finite element analysis, optimization, or fatigue.
The SIMULIA Central’s Minneapolis office invites you to a two-part seminar held on campus to provide an introductory, hands-on workshop with Abaqus and to introduce you to additional simulation technology recently made available to the University of Minnesota.
This tutorial will help users learn the basics of compiling and debugging their code on MSI systems. Particular attention will be paid to code written in Fortran, C, and C++. Basic methods for debugging will be outlined, and users will be able to explore different debugging tools. This tutorial will focus primarily on compiling serial programs, but brief information on compiling and debugging parallel programs will also be given. Attendees should have a basic knowledge of Linux and rudimentary knowledge of a programming language.
This two-part tutorial will first introduce you to the concept of interactive high-performance computing, as distinct from batch computing. We will cover the Citrix (Windows) and NICE EnginFrame (Linux) interactive computing environments hosted by MSI. Attendees will learn how to launch virtual desktops at MSI, connect to a variety of resources, load software modules, and build complex research workflows.
Intel will be providing training to help software developers optimize their code for the Xeon Phi coprocessor. This training is being held at the Minnesota Supercomputing Institute at the University of Minnesota. Current University students, staff, and faculty are invited.
See the program description. While the training is free, you must pre-register.
The training will cover: