Gamma Ray Astrophysics; Zooniverse Crowdsourcing Science
The Fortson research group is focused on two main research areas, each of which can require MSI resources.
- Gamma Ray Astrophysics: VERITAS is an array of four imaging atmospheric Cherenkov telescopes (IACTs), located at the F. L. Whipple Observatory in southern Arizona. The array has been detecting extraterrestrial gamma rays since 2007. In order to properly calibrate the results, large amounts of simulation and data processing are required. In addition to VERITAS, the next-generation gamma-ray experiment CTA, with a factor of 10 improvement in sensitivity over existing arrays, is finalizing development of its low-level systems. One key system is the triggering and event building stage, which collects and associates information from telescopes spread over several square kilometers.
The Fortson group at UMN has responsibilities for both VERITAS and CTA development. For VERITAS, they produce a large fraction of the simulations necessary for calibrating the instrument and performing analysis on the data. More processing capability allows them to explore a larger parameter space of observational conditions. Seasonal differences in atmospheric humidity and aerosol content between summer and winter require these simulations to be repeated. Simulations are also important for tracking the array's performance as its hardware is upgraded.
For CTA, the group is developing a novel use of self-assembly algorithms to generate a self-annealing event building architecture. These algorithms are meant to better cope with the high data rate and correspondingly high failure rates. These failures include network errors, timing errors, and other hardware errors. The ability of the CTA event builder to correctly identify the information associated with a particular gamma-ray atmospheric shower is vital to the success of this large-scale project.
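The core task of an event builder is to associate information arriving from many telescopes into single gamma-ray shower events. As a rough illustration (this is not the group's self-assembly architecture, and the coincidence window below is a made-up value, not a CTA parameter), a minimal sketch might group telescope hits by timestamp proximity:

```python
def build_events(hits, window_ns=100):
    """Group telescope hits whose timestamps fall within a coincidence
    window into candidate shower events.

    `hits` is a list of (telescope_id, timestamp_ns) pairs. The window
    size is purely illustrative. A real event builder must also handle
    the failure modes described above (network errors, timing errors,
    hardware errors), which this sketch ignores.
    """
    events = []
    current = []
    for tel, t in sorted(hits, key=lambda h: h[1]):
        # Start a new event when this hit falls outside the window
        # anchored at the first hit of the current group.
        if current and t - current[0][1] > window_ns:
            events.append(current)
            current = []
        current.append((tel, t))
    if current:
        events.append(current)
    return events

hits = [(1, 0), (2, 40), (3, 90), (1, 5000), (4, 5060)]
events = build_events(hits)
# Two candidate events: three hits near t=0, two near t=5000.
```

The self-annealing approach the group is developing addresses what this sketch cannot: recovering a correct association even when some of the contributing data streams fail or arrive late.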
Supercomputing resources are also required for running NASA Fermi LAT gamma-ray analysis. This analysis is typically run in several stages, depending on the data products required, such as counts maps, test statistic maps, spectra, and light curves. For example, performing a standard binned analysis on a single gamma-ray source (using all the photons collected by the Fermi satellite to date) typically requires about 2 GB of disk storage, 2-4 GB of memory, and approximately 15 CPU hours. This example is for a Log Likelihood analysis of an object situated away from the Galactic plane, where the relative number of nearby Fermi sources is smaller and the diffuse background emission is low. For an object on or close to the Galactic plane, the same analysis could easily take 30 CPU hours, depending on the number of sources included in the Log Likelihood fit. Data products such as a test statistic map, which can only be generated once the standard analysis is complete, require significantly more CPU time (roughly 168 CPU hours), because a maximum likelihood computation is performed on every pixel of the requested map. Typically, computing jobs using the Fermi LAT analysis tools are submitted serially to a batch management system. The group expects to analyze several dozen Fermi LAT sources this year.
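The per-source figures above translate into a substantial annual CPU budget. A back-of-the-envelope estimate (the per-source hours come from the text; the source counts below are illustrative assumptions, not the group's actual plan):

```python
# Per-source CPU-hour figures quoted in the text.
OFF_PLANE_HOURS = 15   # standard binned analysis, away from the Galactic plane
ON_PLANE_HOURS = 30    # same analysis, on or near the Galactic plane
TS_MAP_HOURS = 168     # test statistic map, per source

def cpu_budget(n_off_plane, n_on_plane, n_ts_maps):
    """Total CPU hours for a mix of Fermi LAT analyses."""
    return (n_off_plane * OFF_PLANE_HOURS
            + n_on_plane * ON_PLANE_HOURS
            + n_ts_maps * TS_MAP_HOURS)

# Hypothetical breakdown of "several dozen" sources:
# 24 off-plane, 12 on-plane, plus 6 test statistic maps.
total = cpu_budget(24, 12, 6)
print(total)  # 24*15 + 12*30 + 6*168 = 1728 CPU hours
```

Even a modest number of test statistic maps dominates the budget, which is why that product is generated only when needed.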
- The Zooniverse is the world’s largest online citizen science platform, and several members of the Fortson group are involved in the development and analysis of Zooniverse project data. The Fortson group will likely need to use MSI resources two to three times during 2016 to batch process hundreds of thousands of images in preparation for their upload to the Zooniverse site.
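Batch processing at that scale usually means splitting the image list into fixed-size chunks, one per batch job. A minimal sketch (the batch size and filename pattern are illustrative assumptions, not the group's actual pipeline):

```python
def chunk(paths, batch_size):
    """Split a list of image paths into fixed-size batches,
    one batch per submitted job."""
    return [paths[i:i + batch_size]
            for i in range(0, len(paths), batch_size)]

# Hypothetical run: 250,000 images, 10,000 per batch job.
paths = [f"image_{i:06d}.png" for i in range(250_000)]
batches = chunk(paths, 10_000)
# 25 batches of 10,000 images each, ready for submission.
```

Each batch would then be handed to the cluster's batch management system as an independent job.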
This PI's work in translational informatics and the Zooniverse project was featured in an MSI Research Spotlight in November 2014.
The Users Bulletin provides a summary of new policies, procedures, and events of interest to MSI users. It is published quarterly.
To request technical assistance with your MSI account, please contact email@example.com.
1. MSI Primary File System: On November 9, 2016, Director of Research Computing Claudia Neuhauser sent an email to MSI users concerning issues with MSI systems. The information below repeats some of that information and provides some updates:
Starting in late August, we experienced an unusually high number of outages, and you felt the impact. We immediately started working with the vendor to figure out why the system was behaving differently. After extensive testing, our storage vendor was able to replicate in their labs the conditions that led to our outages. A bug in the file system software is triggered when the following conditions occur together: a certain type of hardware failure occurs, new storage shelves are being integrated into the current system, and the file system is under a heavy load. These conditions are not unusual at MSI: we frequently add new storage shelves to keep up with demand, we expect parts to fail in a system as large as ours, and our systems are at least at 90% load all of the time.
Our storage vendor recognizes the criticality of this issue and has made creating a patch for this bug a top priority. On December's maintenance day, we fully migrated the one volume that is needed across all of MSI to a different bladeset. That migration now allows us to work with the vendor on further debugging. There is no significant risk of an MSI-wide outage at this time. The vendor is continuing to test and is developing a patch.
We continue to address system availability by addressing other issues as well. For example, contractors will soon begin work to upgrade the cooling zones in Walter Library, so that we are much less susceptible to general building outages. We’re also actively working on new types of systems that offer a higher level of availability by leveraging some of the features of a cloud-based infrastructure. We’ll provide more details on these developments in the weeks and months to come. In the meantime, please don’t hesitate to contact me (firstname.lastname@example.org) if you have any questions or comments about what we are doing to support your research requirements.
2. Account Renewal Reminders:
a. The MSI 2016 allocation renewal period ended on December 9, 2016. MSI user accounts must be renewed if you wish to continue using MSI during 2017. PIs and Group Administrators who still wish to renew but who have not submitted a renewal request should contact email@example.com as soon as possible. Non-renewed accounts will be locked in early 2017.
b. Accounts for Non-UMN MSI Users: MSI is transitioning from using sponsored accounts for non-UMN affiliated users to a “Person of Interest (POI)” designation. This change will create a greater level of security for accounts.
MSI is no longer accepting sponsored accounts as valid UMN Internet IDs for new users. As of January 1, 2017, sponsored accounts will no longer be allowed to log in to MSI resources. PI groups who have users with sponsored accounts must convert the accounts to POI. PI groups will need to get POI status for any external user they wish to add to their group. See the FAQ for more information.
The MSI Tech Support staff (firstname.lastname@example.org) will assist non-University affiliated PIs with creating POIs. University-affiliated PIs are authorized to set up POIs with the University.
3. Printing at MSI: MSI has removed the printers from the Scientific Development and Visualization Laboratory in Walter Library. We will no longer provide printing capability for MSI users. There are multiple other places on campus where you can print.
4. Spring Tutorials: MSI will resume tutorials at the start of the spring semester. They will be posted on the Events page of the MSI website when the schedule is finalized.
5. 2017 MSI Research Exhibition: Save the Date! MSI will host the annual Research Exhibition on April 25, 2017, in Walter Library. The event includes a judged poster session with prizes awarded to the finalists. We will post information on our website and send out a Call for Posters in January 2017.
6. Jobs Available at MSI:
7. Useful Webpages: Looking for help with using MSI? One of these pages may have the information you need:
b. Getting Started (includes Quickstart Guides)
c. MSI Systems
Hoard is a fast, scalable, and memory-efficient memory allocator for Linux, Solaris, Mac OS X, and Windows. It is a drop-in replacement for malloc that can dramatically improve application performance, especially for multithreaded programs running on multiprocessor and multicore CPUs.
To load this software in a Linux environment run the command(s):
module load hoard
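Because Hoard is a drop-in replacement for malloc, the usual way to use it with an unmodified binary is to preload the Hoard shared library at launch. The library path below is illustrative, not the actual install location; check the module's settings (e.g. with module show hoard) for the real path on MSI systems:

```shell
module load hoard
# Preload Hoard so it replaces the system malloc for this run.
# Replace the path with the libhoard.so location reported by the module.
LD_PRELOAD=/path/to/libhoard.so ./my_multithreaded_app
```

No recompilation or source changes are needed; removing the LD_PRELOAD setting reverts the program to the system allocator.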