Gamma Ray Astrophysics; Zooniverse Crowdsourcing Science
The Fortson research group is focused on two main research areas, each of which can require MSI resources.
- Gamma Ray Astrophysics: VERITAS is an array of four imaging atmospheric Cherenkov telescopes (IACTs), located at the F. L. Whipple Observatory in southern Arizona. The array has been detecting extraterrestrial gamma rays since 2007. In order to properly calibrate the results, large amounts of simulation and data processing are required. In addition to VERITAS, the next-generation gamma-ray experiment CTA, with a factor of 10 improvement in sensitivity over existing arrays, is finalizing development of its low-level systems. One key system is the triggering and event building stage, which collects and associates information from telescopes spread over several square kilometers.
The Fortson group at UMN has responsibilities for both VERITAS and CTA development. For VERITAS, they produce a large fraction of the simulations necessary for calibrating the instrument and performing analysis on the data. More processing capability allows them to explore a larger parameter space of observational conditions. Seasonal differences in atmospheric humidity and aerosol content between summer and winter require these simulations to be repeated. Simulations are also needed to track the performance of the array as its hardware is upgraded.
For CTA, the group is developing a novel use of self-assembly algorithms to generate a self-annealing event building architecture. These algorithms are meant to better cope with the high data rate and correspondingly high failure rates. These failures include network errors, timing errors, and other hardware errors. The ability of the CTA event builder to correctly identify the information associated with a particular gamma-ray atmospheric shower is vital to the success of this large-scale project.
Supercomputing resources are also required for running NASA Fermi LAT gamma-ray analysis. This analysis is typically run in several stages, depending on the data products required, such as counts maps, test statistic maps, spectra, and light curves. For example, a standard binned analysis of a single gamma-ray source (using all the photons collected by the Fermi satellite to date) typically requires about 2 GB of disk storage, 2-4 GB of memory, and approximately 15 CPU hours. This example is for a log-likelihood analysis of an object situated away from the Galactic plane, where the relative number of nearby Fermi sources is smaller and the diffuse background emission is low. For an object on or close to the Galactic plane, the same analysis could easily take 30 CPU hours, depending on the number of sources included in the log-likelihood fit. Data products such as a test statistic map, which can only be generated once the standard analysis is complete, require significantly more CPU time (~168 CPU hours), because a maximum likelihood computation is performed on each and every pixel of the requested map. Typically, computing jobs using the Fermi LAT analysis tools are submitted serially to a batch management system; a sketch of such a job script is shown below. The group expects to analyze several dozen Fermi LAT sources this year.
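A minimal sketch of such a serial PBS job, assuming a Fermi Science Tools module is available on the system (the module name, file names, coordinates, and cut values below are illustrative placeholders, not the group's actual configuration):

    #!/bin/bash
    #PBS -l nodes=1:ppn=1,mem=4gb,walltime=30:00:00
    #PBS -N fermi_lat_src
    # Hypothetical module name; check `module avail` for the actual Fermi tools module.
    module load fermi
    cd $PBS_O_WORKDIR
    # Stage 1: select photons around the source of interest (cuts illustrative).
    gtselect infile=photons.fits outfile=filtered.fits ra=194.05 dec=-5.79 \
             rad=15 tmin=INDEF tmax=INDEF emin=100 emax=300000 zmax=90
    # Stage 2: apply good-time-interval cuts using the spacecraft file.
    gtmktime scfile=spacecraft.fits filter="(DATA_QUAL>0)&&(LAT_CONFIG==1)" \
             roicut=no evfile=filtered.fits outfile=filtered_gti.fits
    # Later stages (binning, exposure, source maps, and the binned
    # log-likelihood fit with gtlike) are chained serially in the same way.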
- The Zooniverse is the world’s largest online citizen science platform, and several members of the Fortson group are involved in the development and analysis of Zooniverse project data. The Fortson group will likely need to use MSI resources two to three times during 2016 to batch process hundreds of thousands of images in preparation for their upload to the Zooniverse site.
This PI's work in translational informatics and the Zooniverse project was featured in an MSI Research Spotlight in November 2014.
- What MPI packages are available on MSI systems?
- Which MPI can one use on a given system?
- How do I compile MPI codes?
- How do I run an MPI job built with a specific MPI package?
- Quick test examples
- Useful online MPI tutorials and hands-on page
The Message Passing Interface (MPI) has become an essential component of high-performance distributed parallel computing environments. All MSI systems are capable of running MPI applications, but different systems may offer different MPI implementations. Presented below is a summary of the MPI implementations available from different vendors and developers, so that users can run their applications on the right system with the MPI package under which those applications were developed; a short module-loading example follows the tables.
Table 1: Names and modules available for different MPI packages
|MPI Package|Name|Run *one* of these modules|
|---|---|---|
|Intel MPI|impi|`module load intel impi/intel` or `module load gcc impi/gcc`|
|Platform MPI|pmpi|`module load intel pmpi/intel` or `module load gcc pmpi/gcc`|
|Open MPI|ompi|`module load intel ompi/intel` or `module load gcc ompi/gnu`|
|QLogic MPI|qmpi|`module load intel qmpi/intel` or `module load gcc qmpi/gcc`|
|SGI MPT|mpt|`module load intel mpt/intel` or `module load gcc mpt/gcc`|
Table 2: MPI packages available on different systems
|System name|Available MPI packages|
|---|---|
| |ompi, pmpi, impi|
|Lab Linux workstations|ompi|
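To verify which implementation is active after loading modules, one can run a quick check like the following (module versions and exact availability vary by system):

    module load intel impi/intel   # Intel compilers plus Intel MPI
    module list                    # confirm what is loaded
    which mpicc mpirun             # wrappers should resolve to the Intel MPI installation
    # To switch MPI packages, clear the environment first:
    module purge
    module load gcc ompi/gnu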
MSI has simplified the compiling procedure by providing the same compiler wrapper commands on all systems:
Fortran: mpif90 -O3 your.f90
C:       mpicc -O3 your.c
C++:     mpicxx -O3 your.cpp
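These wrappers invoke the underlying compiler with the MPI include and library paths added. To see the actual compile/link line, an inspection flag can be used, but note that it is implementation-specific (a brief sketch; `-show` works for Intel MPI, while Open MPI uses `--showme`):

    module load intel impi/intel
    mpicc -O3 your.c -o test
    mpicc -show          # print the underlying compiler command (Intel MPI)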
Although MPI has become a well-accepted standard for coding parallel applications, it is up to the vendor who developed each package to decide how MPI jobs are launched. In Table 3, we present the launch commands used by the different vendors; please pay close attention to the differences between them. One can use the man page (e.g., man mpirun) to find the details of the options available for a specific MPI implementation. A complete job-script sketch follows the table.
Table 3: Commands for launching MPI jobs with different MPI packages

|Name|Commands for launching an MPI job under PBS|
|---|---|
|ompi|`mpirun -bynode -npernode 8 ./test` or `mpirun -np 1024 ./test`|
|mpt|`mpirun -np 192 dplace -c 0-191 -x2 ./a.out` or `mpirun -np 192 ./test`|
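Putting this together, a PBS job script for an Open MPI build might look like the following minimal sketch (node counts, walltime, and the executable name are illustrative; on systems where Open MPI is built with PBS support, mpirun obtains the node list from PBS automatically):

    #!/bin/bash
    #PBS -l nodes=16:ppn=8,walltime=01:00:00
    #PBS -N mpi_test
    cd $PBS_O_WORKDIR
    # Load the same compiler/MPI pairing used to build the executable.
    module load intel ompi/intel
    # 16 nodes x 8 cores per node = 128 MPI ranks.
    mpirun -np 128 ./test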
A Fortran example:

program hello
   use mpi
   integer :: nprocs, irank, namelen, ierr
   character(len=MPI_MAX_PROCESSOR_NAME) :: node_name
   call MPI_INIT(ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, irank, ierr)
   call MPI_GET_PROCESSOR_NAME(node_name, namelen, ierr)
   ! Print a greeting from the first 10 ranks only.
   if (irank < 10) print *, ' Hello from ', irank, ' on ', node_name(1:namelen)
   call MPI_FINALIZE(ierr)
end program hello
A C example:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, nprocs, len;
    char name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &len);
    /* Print a greeting from the first 10 ranks only. */
    if (rank < 10) {
        printf("Hello, world. I am %d of %d on %s\n", rank, nprocs, name);
        fflush(stdout);
    }
    MPI_Finalize();
    return 0;
}
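To build and run either quick test, one possible workflow is (queue limits and core counts are illustrative):

    module load intel ompi/intel
    mpicc -O3 hello.c -o hello                    # or: mpif90 -O3 hello.f90 -o hello
    qsub -I -l nodes=2:ppn=4,walltime=00:10:00    # short interactive PBS session
    cd $PBS_O_WORKDIR && mpirun -np 8 ./hello     # greetings from ranks 0-7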
Interested users may find working examples developed by MSI staff on the following webpage: