Research in Domain Decomposition Methods for Eigenvalue Problems
This group is conducting two major research efforts under the general theme of parallel sparse matrix computations, as well as a new project that applies data mining to materials science.
- Novel algorithms for solving large eigenvalue problems, specifically those arising in electronic structure calculations. Current efforts emphasize Domain Decomposition techniques on the one hand and spectrum slicing methods on the other. The researchers investigate parallel methods based on these two ideas to improve runtime without sacrificing the accuracy of the computations. They will continue to develop eigenvalue solvers that reduce the time needed to compute eigenvalues of the Hamiltonian matrices, emphasizing solvers that compute a very large number of eigenvalues as well as parallel eigensolvers. They also have an NSF-supported collaboration with Eric Polizzi (UMass Amherst), the developer of FEAST, a novel approach to the problem geared toward electronic structure calculations.
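To make the spectrum slicing idea concrete, here is a minimal, self-contained sketch of the inertia-counting step that such methods rely on. By Sylvester's law of inertia, the number of negative pivots in an LDL^T factorization of A - sigma*I equals the number of eigenvalues of A below sigma, so counting pivots at both ends of a slice [a, b] gives the eigenvalue count inside the slice, and each slice can then be processed in parallel. The function names, the dense factorization, and the test matrix are illustrative assumptions, not the group's actual implementation (which would use sparse factorizations with pivoting):

```python
def eigs_below(A, sigma):
    """Count eigenvalues of symmetric A below sigma via the signs of
    the pivots in a (pivot-free, illustrative) LDL^T factorization."""
    n = len(A)
    # Form A - sigma*I as a mutable copy.
    M = [[A[i][j] - (sigma if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    negatives = 0
    for k in range(n):
        pivot = M[k][k]
        if pivot < 0:
            negatives += 1
        # Symmetric Gaussian elimination of column k.
        for i in range(k + 1, n):
            factor = M[i][k] / pivot
            for j in range(k, n):
                M[i][j] -= factor * M[k][j]
    return negatives

def eigs_in_slice(A, a, b):
    """Number of eigenvalues of A in the interval (a, b]."""
    return eigs_below(A, b) - eigs_below(A, a)

# 1D Laplacian: eigenvalues are 2 - 2*cos(k*pi/(n+1)), all in (0, 4).
n = 5
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(n)] for i in range(n)]
print(eigs_in_slice(A, 0.0, 1.5))  # eigenvalues in the lower slice -> 2
print(eigs_in_slice(A, 0.0, 4.0))  # all five eigenvalues -> 5
```

In a parallel setting, each slice's counts tell a worker how many eigenpairs to extract there, which is what lets the slices be handled independently.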
- Parallel robust iterative solvers. As three-dimensional models gradually become commonplace, iterative methods for solving the large sparse linear systems that arise from the discretization of partial differential equations are gaining popularity. In the past, direct methods were often preferred for two-dimensional problems, especially on computers with large memories. There is now a general consensus, however, that iterative methods become almost mandatory for three-dimensional problems because of their challenging memory and computational requirements. With the use of iterative solvers comes the need for effective preconditioners. A new direction the group is taking is the use of low-rank approximation techniques for approximating Schur complement systems. The researchers recently developed this technique sequentially, and a recursive version was implemented in MATLAB. The primary initial motivation for this research was the development of preconditioners for GPUs; however, the researchers are now broadening this viewpoint, having realized that these preconditioners also have excellent potential in a Domain Decomposition approach. They now plan to develop these techniques for linear systems arising from realistic applications such as computational fluid dynamics. Special methods, such as eigenvalue deflation, will be used to enhance robustness.
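The Schur complement viewpoint above can be sketched in a few lines. Ordering interior unknowns before interface unknowns gives a block system A = [[B, E], [E^T, C]]; eliminating the interior block leaves the Schur complement S = C - E^T B^{-1} E on the interface. The low-rank idea is to replace the correction term E^T B^{-1} E by a truncated SVD. Everything here (matrix sizes, the retained rank, the random test data) is an invented toy setup, not the group's method:

```python
import numpy as np

rng = np.random.default_rng(0)
ni, nb = 8, 4                              # interior / interface unknowns
B = np.diag(rng.uniform(2.0, 3.0, ni))     # interior block (SPD here)
E = rng.standard_normal((ni, nb)) * 0.1    # interior-interface coupling
C = np.eye(nb) * 4.0                       # interface block

# Exact correction term and its rank-k truncated-SVD approximation.
G = E.T @ np.linalg.solve(B, E)            # E^T B^{-1} E
U, s, Vt = np.linalg.svd(G)
k = 2                                      # retained rank (an assumption)
G_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

S_exact = C - G                            # exact Schur complement
S_approx = C - G_k                         # low-rank-corrected approximation

# The approximation error is exactly the dropped singular values.
print(np.linalg.norm(S_exact - S_approx))
```

The appeal for preconditioning is that applying C and a rank-k correction is far cheaper than forming or factoring the dense S, and the dropped singular values bound the approximation error directly.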
- Materials informatics. The researchers have been applying data-mining methods to materials. Machine learning is a broad discipline comprising a variety of techniques for extracting meaningful information and patterns from data; it draws on knowledge and "know-how" from scientific areas such as statistics, graph theory, linear algebra, databases, mathematics, and computer science. Recently, materials scientists have begun to explore data-mining ideas for discovery in materials. This project will explore the power of these methods for studying materials properties such as melting points, structures, and formation energies. The group also considers unsupervised learning, such as clustering compounds based on data obtained, e.g., from the constituent atoms or band diagrams.
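As a toy illustration of the clustering idea, the sketch below groups compounds by two invented numeric descriptors using a bare-bones k-means loop. The compound names, feature values, and initial centers are all made up for the example and are not real materials data or the group's actual pipeline:

```python
import math

# (electronegativity difference, mean atomic radius) -- hypothetical values
compounds = {
    "NaCl": (2.23, 1.54), "KBr": (2.14, 1.96), "LiF": (3.00, 1.06),
    "Si":   (0.00, 1.11), "Ge":  (0.00, 1.25), "GaAs": (0.37, 1.22),
}

def nearest(point, centers):
    """Index of the center closest to point (Euclidean distance)."""
    return min(range(len(centers)), key=lambda i: math.dist(point, centers[i]))

def kmeans(points, centers, iters=25):
    """Plain k-means: alternate assignment and center-update steps."""
    for _ in range(iters):
        labels = [nearest(p, centers) for p in points]
        for i in range(len(centers)):
            members = [p for p, lab in zip(points, labels) if lab == i]
            if members:  # recompute center as the mean of its members
                centers[i] = tuple(sum(c) / len(members)
                                   for c in zip(*members))
    return labels

names = list(compounds)
points = [compounds[n] for n in names]
labels = kmeans(points, [(2.5, 1.5), (0.1, 1.2)])

clusters = {}
for name, lab in zip(names, labels):
    clusters.setdefault(lab, []).append(name)
print(clusters)  # ionic and covalent compounds end up in separate groups
```

With these descriptors the ionic salts and the covalent semiconductors separate cleanly; real studies would of course use many more features, e.g. derived from band diagrams.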
The Users Bulletin provides a summary of new policies, procedures, and events of interest to MSI users. It is published quarterly.
To request technical assistance with your MSI account, please contact email@example.com.
1. New Storage Limits: Early this year, MSI started a campaign to help groups with large data requirements manage their data using MSI's Second Tier storage. The campaign is helping groups get below, and stay below, the new 20 TB limit on primary storage. MSI staff are also helping groups that use between 5 and 20 TB move older data (more than one year old) to our Second Tier storage. These efforts have freed up space on our Primary Storage system and have made it possible for us to grant new storage allocations to 39 user groups.
See the Storage Allocations page of the MSI website for complete information on MSI storage limits.
2. Stratus: This spring, MSI has been testing a locally hosted cloud environment, called Stratus, designed to store and analyze protected data, such as dbGaP data. Stratus is isolated from other MSI storage and compute resources in order to meet the data use requirements of some of our funding agencies. Stratus will be available starting July 1, 2017 under a fee-for-service model with a rate structure similar to popular commercial cloud providers. For example, a basic annual subscription will cost $626 for 16 vCPUs and 2 TB of storage. Additional storage can be purchased as needed.
See the Stratus page on the MSI website for more details concerning MSI’s cloud environment.
3. New Archive Storage: MSI has been testing a big data archive storage solution that will give researchers a robust, secure, and inexpensive place to store very large datasets for five years or more. Think of this as a good alternative to purchasing a stack of USB hard drives to back up important data, or as a place to archive data that you do not need to access regularly. The new archive system will go into production starting July 1, 2017. The cost of MSI archival storage is $456.12 for 6 TB of replicated storage for five years. Replication means that data are written to two tapes, so that data are not lost if one tape fails. Access to the archive storage will be available via Globus and other tools that help automate workflows.
4. Summer Tutorials: The Summer tutorial schedule is posted on the MSI website.
5. Acknowledgment of MSI in Publications: Please acknowledge MSI in your published works (e.g., posters, research reports, journal articles, abstracts, and talks) where MSI resources (computing, data storage, visualization, staff, etc.) contributed to your published research results.
- You can either list MSI in your affiliations in the byline, or cite MSI in the acknowledgments section (including, at a minimum, "Minnesota Supercomputing Institute (MSI)" and "University of Minnesota").
- On posters or in slide presentations, you can use the MSI wordmark; please contact Tracey Bartlett (firstname.lastname@example.org) to get a copy of the wordmark. Please do not use old MSI logos or wordmarks, as these do not meet current branding standards.
The following is a more complete example of how you could acknowledge MSI:
The authors acknowledge the Minnesota Supercomputing Institute (MSI) at the University of Minnesota for providing resources that contributed to the research results reported within this paper. URL: www.msi.umn.edu
This text can also be found in a couple of locations on the MSI website: FAQ - How do I properly cite MSI to acknowledge the use I have made of MSI’s resources for my research? and the Acknowledgments page in the Research @ MSI section.
6. Jobs at MSI:
7. Useful Webpages: Looking for help with MSI? One of these pages may have the information you need:
c. MSI systems