Professor Yousef Saad

Project Title: 
Research in High Performance Algorithms for Eigenvalue Problems and Sparse Linear Systems

The Saad group uses MSI supercomputers to develop high-performance software in numerical linear algebra. Among these are EVSL (a library of eigensolvers based on spectrum slicing) and GeMSLR (a library of linear system solvers based on domain decomposition and multilevel low-rank corrections). The researchers develop new parallel algorithms for solving linear systems, linear eigenvalue problems, and nonlinear eigenvalue problems. The group is also carrying out research in machine learning, which involves the use of GPU resources for training Graph Neural Networks (GNNs). This research focuses on designing an efficient coarsening algorithm to serve as a preprocessing step, aimed at scaling existing GNN-based graph classification algorithms to large graphs. The datasets used in the experiments cover a variety of real-world applications, including identifying protein functions, classifying movies, and finding possible connections in social networks.
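
To illustrate the coarsening idea, the following minimal sketch performs a single heavy-edge-matching pass on a weighted graph. It is an illustrative example only, not the group's coarsening algorithm; the function name coarsen_once and the SciPy adjacency-matrix representation are assumptions made for this sketch.

    # Illustrative sketch only: one heavy-edge-matching coarsening pass,
    # not the group's actual coarsening algorithm. Assumes the graph is
    # given as a symmetric SciPy sparse adjacency matrix with edge weights.
    import numpy as np
    import scipy.sparse as sp

    def coarsen_once(A):
        """Merge each vertex with its heaviest unmatched neighbor.

        Returns the coarse adjacency matrix and the fine-to-coarse map,
        which can be reused to pool node features before GNN training.
        """
        n = A.shape[0]
        A = A.tocsr()
        match = -np.ones(n, dtype=int)
        for u in np.random.permutation(n):          # random visit order
            if match[u] != -1:
                continue
            nbrs = A.indices[A.indptr[u]:A.indptr[u + 1]]
            wts = A.data[A.indptr[u]:A.indptr[u + 1]]
            best, best_w = u, 0.0
            for v, w in zip(nbrs, wts):
                if v != u and match[v] == -1 and w > best_w:
                    best, best_w = v, w
            match[u], match[best] = best, u          # pair u, or leave it alone
        # Assign one coarse index per matched pair / singleton.
        coarse_id = -np.ones(n, dtype=int)
        k = 0
        for u in range(n):
            if coarse_id[u] == -1:
                coarse_id[u] = coarse_id[match[u]] = k
                k += 1
        # P is the n-by-k aggregation matrix; the coarse graph is P^T A P.
        P = sp.csr_matrix((np.ones(n), (np.arange(n), coarse_id)), shape=(n, k))
        Ac = (P.T @ A @ P).tocsr()
        Ac.setdiag(0)
        Ac.eliminate_zeros()
        return Ac, coarse_id

The fine-to-coarse map produced this way can also be used to pool the node-feature matrix (for example by aggregating rows with the same matrix P), so that the downstream GNN classifier operates on a much smaller graph.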

This group has three main research projects that require MSI resources:

  • Numerical methods for eigenvalue problems: The researchers are developing new techniques based on polynomial and/or rational filtering, domain decomposition, and Schur complement strategies (the filtering idea is sketched after this list). The group is also working on extending these techniques to nonlinear eigenvalue problems, for example by approximating the nonlinear problem with rational functions and linearizing the result, and by developing advanced Rayleigh quotient iteration-like methods. They are also developing the EigenValues Slicing Library (EVSL), a high-performance and robust package for solving large and difficult eigenvalue problems.
  • Solution of large sparse and dense linear systems: This includes the group's new generalized multilevel low-rank scalable preconditioners (GeMSLR) for general complex sparse linear systems, new algorithms based on multicoloring, and methods for problems with multiple right-hand sides. The group is working on improving the convergence of these algorithms on hard problems, as well as their parallel performance. The researchers are developing the GeMSLR library, a parallel implementation of generalized multilevel low-rank scalable preconditioners for general complex sparse linear systems (see the second sketch after this list).
  • New optimization methods for machine learning: This project includes completion of large, high-dimensional tensors and GNN-based graph classification algorithms for large graphs.
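
The spectrum-slicing idea referenced in the first bullet can be summarized in a generic textbook form (a sketch only, not the specific filters implemented in EVSL). For a symmetric matrix A = \sum_i \lambda_i u_i u_i^T and a target slice [a, b], one constructs a polynomial (or rational) filter \rho so that

\[
\rho(A)\,v \;=\; \sum_{i}\rho(\lambda_i)\,(u_i^{T}v)\,u_i,
\qquad
\rho(\lambda_i)\approx
\begin{cases}
1, & \lambda_i\in[a,b],\\
0, & \text{otherwise}.
\end{cases}
\]

Subspace iteration or Lanczos applied to \rho(A) then converges to the eigenpairs inside the slice; with a rational filter, applying \rho(A) requires shifted solves (A - \zeta_j I)^{-1}, which is where the domain-decomposition and Schur complement machinery enters.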
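
The second sketch, referenced in the second bullet, outlines the generic Schur complement and low-rank correction ideas behind multilevel preconditioners of this type; it is a simplified illustration, not the precise GeMSLR construction. With a two-block domain-decomposition ordering,

\[
A=\begin{pmatrix} B & F\\ E & C\end{pmatrix},
\qquad
S = C - E B^{-1} F,
\qquad
S^{-1} = C^{-1} + C^{-1} E B^{-1} F\, S^{-1}.
\]

The correction term C^{-1} E B^{-1} F S^{-1} is often close to low rank for such partitionings, so it can be approximated by a rank-k factorization; applying the same splitting recursively to the block C yields a multilevel scheme.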

Project Investigators

Professor Yousef Saad
Ziyuan Tang
Zechen Zhang
 