
Gamma Ray Astrophysics; Zooniverse Crowdsourcing Science

Abstract: 


The Fortson research group is focused on two main research areas, each of which can require MSI resources.

  • Gamma Ray Astrophysics: VERITAS is an array of four imaging atmospheric Cherenkov telescopes (IACTs), located at the F. L. Whipple Observatory in southern Arizona. The array has been detecting extraterrestrial gamma rays since 2007. In order to properly calibrate the results, large amounts of simulation and data processing are required. In addition to VERITAS, the next-generation gamma-ray experiment CTA, with a factor of 10 improvement in sensitivity over existing arrays, is finalizing development of its low-level systems. One key system is the triggering and event building stage, which collects and associates information from telescopes spread over several square kilometers.  

    The Fortson group at UMN has responsibilities for both VERITAS and CTA development. For VERITAS, they produce a large fraction of the simulations necessary for calibrating the instrument and performing analysis on the data. More processing capability allows them to explore a larger parameter space of observational conditions; for example, seasonal differences in atmospheric humidity and aerosol content between summer and winter require the simulations to be repeated. Simulations are also important for tracking the performance of the array as its hardware is upgraded.

    For CTA, the group is developing a novel use of self-assembly algorithms to generate a self-annealing event building architecture. These algorithms are meant to better cope with the high data rate and correspondingly high failure rates. These failures include network errors, timing errors, and other hardware errors. The ability of the CTA event builder to correctly identify the information associated with a particular gamma-ray atmospheric shower is vital to the success of this large-scale project.

    Supercomputing resources are also required for running NASA Fermi LAT gamma-ray analysis. The analysis is typically run in several stages, depending on the data products required, such as counts maps, test statistic maps, spectra, and light curves. For example, a standard binned analysis of a single gamma-ray source (using all of the photons collected by the Fermi satellite to date) typically requires about 2 GB of disk storage and 2-4 GB of memory, and uses approximately 15 CPU-hours. This example is for a log-likelihood analysis of an object situated away from the Galactic plane, where the relative number of nearby Fermi sources is smaller and the diffuse background emission is low. For an object on or close to the Galactic plane, the same analysis could easily take 30 CPU-hours, depending on the number of sources to be included in the log-likelihood fit. Data products such as a test statistic map, which can only be generated once the standard analysis is complete, require significantly more CPU time (~168 CPU-hours), because a maximum-likelihood computation is performed on every pixel in the requested map. Typically, computing jobs using the Fermi LAT analysis tools are submitted serially to a batch management system; a sketch of such a submission is given after this list. The group expects to analyze several dozen Fermi LAT sources this year.

  • Zooniverse Crowdsourcing Science: The Zooniverse is the world’s largest online citizen science platform, and several members of the Fortson group are involved in the development and analysis of Zooniverse project data. It is likely that the Fortson group will need to use MSI resources about two to three times during 2016 to batch process hundreds of thousands of images in preparation for their upload to the Zooniverse site.
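
As a rough illustration of the serial Fermi LAT jobs described above, a single-source binned analysis might be submitted to a PBS batch system along the following lines (a minimal sketch, not the group's actual submission script; the driver script name and resource limits are assumptions based on the figures quoted above):

    #!/bin/bash
    #PBS -l nodes=1:ppn=1         # the Fermi LAT analysis stages here run serially on one core
    #PBS -l mem=4gb               # upper end of the 2-4 GB memory estimate
    #PBS -l walltime=20:00:00     # generous margin over the ~15 CPU-hour estimate

    cd $PBS_O_WORKDIR
    # run_binned_analysis.py is a hypothetical driver that chains the standard stages
    # (counts map, exposure, log-likelihood fit, spectrum, light curve) for one source.
    python run_binned_analysis.py --source SOURCE_NAME --outdir results/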

This PI's work in translational informatics and the Zooniverse project was featured in an MSI Research Spotlight in November 2014.


Group name: fortson

Materials Studio

MS Modeling is Materials Studio's modeling and simulation product suite. It is designed for structural and computational researchers in chemicals and materials R&D who need to perform expert-level modeling and simulation tasks in an easy-to-learn yet powerful environment, and it provides flexible and validated tools for the study of materials at various length and time scales.

MSI Users Bulletin – March 2016

The Users Bulletin provides a summary of new policies, procedures, and events of interest to MSI users. It is published quarterly. To request technical assistance with your MSI account, please contact help@msi.umn.edu. 1. User Accounts: MSI is making changes that will consolidate user accounts and...

MPI packages available on MSI systems

What MPI packages are available on MSI systems?

The Message Passing Interface (MPI) has become an essential component of high-performance distributed parallel computing. All MSI systems can run MPI applications, but different systems provide different MPI implementations. Presented below is a summary of the MPI implementations available from different vendors and developers, so that users can run their applications on a system that provides the MPI package against which they were developed.

 

 Table 1: Names and modules available for different MPI packages

MPI Package     Name    Run *one* of these modules
Intel MPI       impi    module load intel impi/intel
                        module load gcc intel/gcc
Platform MPI    pmpi    module load intel pmpi/intel
                        module load gcc pmpi/gcc
Open MPI        ompi    module load intel ompi/intel
                        module load gcc ompi/gnu
QLogic MPI      qmpi    module load intel qmpi/intel
                        module load gcc qmpi/gcc
SGI MPT         mpt     module load intel mpt/intel
                        module load gcc/mpt
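
For example, to use the GNU build of Open MPI in an interactive session (a minimal sketch; the verification commands are generic and not specific to MSI):

    module load gcc ompi/gnu    # select the GNU compilers and the matching Open MPI build
    which mpirun                # confirm the Open MPI launcher is now on the PATH
    module list                 # show the modules currently loaded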

 

Which MPI implementations can I use on different systems?

Table 2: MPI packages available on different systems

System name               Available MPI packages
Itasca                    ompi, pmpi, impi
Cascade                   ompi, impi, qmpi
Lab Linux workstations    ompi
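
To check what is actually installed on the system you are logged into, the standard module query can be used (a quick sketch; the package name argument is optional):

    module avail          # list all modules available on the current system
    module avail ompi     # list only the Open MPI modules, if any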

 

How to compile the code?

MSI has simplified the build procedure by providing the same compiler wrapper commands on all systems:

Fortran: mpif90 -O3 your.f90

C:       mpicc -O3 your.c

C++:     mpicxx -O3 your.cpp
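
For example, to build the C test program listed later on this page with the GNU Open MPI stack (a sketch; the source and executable names are illustrative, with ./test chosen to match the launch commands in Table 3 below):

    module load gcc ompi/gnu
    mpicc -O3 hello_mpi.c -o test       # hello_mpi.c contains the C example shown below
    mpif90 -O3 hello_mpi.f90 -o test    # alternatively, build the Fortran example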

 

How to run MPI applications with different MPI packages?

Although MPI is a well-accepted standard for coding applications, it is up to the vendor who developed each package to decide how MPI jobs are launched. Table 3 presents the launch commands defined by the different vendors; please pay close attention to the differences between them. The man page (e.g. man mpirun) gives details about the options available for a specific MPI implementation.

Table 3: Commands for launching MPI jobs with different MPI packages

Name    Commands for launching an MPI job under PBS
impi    mpirun -r ssh -f $PBS_NODEFILE -np 1024 ./test
pmpi    mpirun -np 1024 -hostfile $PBS_NODEFILE ./test
ompi    mpirun -bynode -npernode 8 ./test
        or: mpirun -np 1024 ./test
qmpi    cpn=4
        sort -u $PBS_NODEFILE | sed -e "s/$/:${cpn}/" > new.nodefile
        mpirun -ssh -ppn ${cpn} -np 16 -m ./new.nodefile ./test
mpt     mpirun -np 192 dplace -c 0-191 -x2 ./a.out
        or: mpirun -np 192 ./test
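
Putting the pieces together, a complete PBS job script for an Open MPI run might look roughly like the following (a minimal sketch; the node count, walltime, and process count are illustrative rather than recommended values):

    #!/bin/bash
    #PBS -l nodes=2:ppn=8        # request 2 nodes with 8 processor cores each
    #PBS -l walltime=1:00:00     # one hour of walltime

    cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
    module load gcc ompi/gnu     # GNU compilers plus Open MPI (see Table 1)
    mpirun -bynode -npernode 8 ./test    # launch 8 processes per node (see the ompi row above)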

 

Quick test examples

A Fortran example:

        Program HellofromEachNode
        ! Each MPI rank reports its rank number and the name of the node it runs on.
        INCLUDE 'mpif.h'
        CHARACTER*8 node_name
        CHARACTER*(MPI_MAX_PROCESSOR_NAME) name
        INTEGER LEN
        CALL MPI_INIT(ierr)
        CALL MPI_COMM_SIZE(MPI_COMM_WORLD, mprocs, ierr)
        CALL MPI_COMM_RANK(MPI_COMM_WORLD, irank, ierr)
        CALL MPI_GET_PROCESSOR_NAME(name, LEN, ierr)
        node_name = name(1:LEN)
        ! Print from the first ten ranks only, to keep the output short.
        IF (irank < 10) THEN
           PRINT *, ' Hello from ', irank, node_name
        END IF
        CALL MPI_FINALIZE(ierr)
        STOP
        END
 

A C example:

#include "mpi.h"
#include <stdio.h>

/* Each MPI rank reports its rank, the total number of ranks, and its node name. */
int main(int argc, char *argv[])
{
    int rank, nprocs, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &len);
    /* Print from the first ten ranks only, to keep the output short. */
    if (rank < 10) {
        printf("Hello, world.  I am %d of %d on %s\n", rank, nprocs, name);
        fflush(stdout);
    }
    MPI_Finalize();
    return 0;
}
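
Either program can be compiled with the wrapper commands described above and launched interactively for a quick check (a sketch; the source file name and process count are only illustrative):

    mpicc -O3 hello_mpi.c -o test    # or: mpif90 -O3 hello_mpi.f90 -o test
    mpirun -np 4 ./test              # each of the first ten ranks prints one "Hello" line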
 

Useful online MPI tutorials and hands-on webpage

Interested users can find working examples developed by MSI staff on the following webpage:

https://www.msi.umn.edu/content/mpi-hands-workshop
 

PacBio SMRT Analysis Portal

The PacBio Single Molecule Real Time (SMRT) analysis portal is an easy-to-use web-based platform for analyzing 3rd-generation sequencing data generated from the PacBio SMRT platform. Currently, workflows for microbial whole genome assembly, resequencing analysis, transcriptome analysis, and various data processing steps are available through the portal. For more information on the analysis portal itself, see http://www.pacb.com/devnet/ and the tutorial materials. The software must be run from a browser in the MSI network. This can be achieved via connection through the NICE interface, or by...

Translational Informatics

Application of Informatics to Transcription of Ancient Papyri. While computers can do many things, there are still a few areas in which humans excel, such as the discriminatory power of the eye and the natural human ability to quickly classify objects. The visual ability of recognizing patterns is at...
