MSI, with the support of the University of Minnesota’s NTS (Network and Telecommunication Services), has recently deployed a system to take advantage of the ten-gigabit Northern Lights GigaPoP network. With this system in place, MSI was able to help Peter J. Mendygral, a post-doctoral member of MSI principal investigator Tom Jones’s research group and an employee of Cray, with some important work. Pete needed to transfer about seven terabytes of data in less than two days. He was able to do this successfully by utilizing MSI’s high-performance TeraScala storage system to store his remote data and using Globus GridFTP software to effect the transfer. Pete sat down with an MSI staffer to talk about his research, his experience with MSI, and how MSI resources have helped him succeed.
MSI: What type of research are you doing?
Pete: My research is in astrophysics. I study magnetohydrodynamics, which means fluid dynamics simulations of ionized plasma with magnetic fields and their effects. I also study galaxy clusters, which are the largest gravitationally bound systems in the universe. So, specifically, what I look at are the interactions between the clusters and the outflows of the supermassive black holes that we believe exist at the centers of some of their large galaxies. Within these galaxies are black holes that emit jets, supersonic jets. On a very large scale, these jets can have an enormous influence on the evolution of the clusters as well as the individual galaxies and stars.
MSI: What resources have you utilized here at MSI, previously and currently?
Pete: Primarily we’ve run these simulations of jets and galaxy clusters on Itasca. I’ve also focused on other more specific phenomena, like the build-up of magnetic fields in galaxy clusters and the creation and evolution of cosmic-ray particles. We have also done some smaller, kind of one-off type of things, mainly for verifying the codes that I have written. Collaborators and I have done simulations of the solar wind and of galactic superbubbles, the giant bubbles created in galaxies by supernovas. My primary focus, in terms of the simulations I have run on MSI systems, has been on the outflows of black holes in galaxy clusters. In addition, I use the LMVL systems a lot for my movie creations. In the LMVL I do a lot of stereo renderings because it really gives me an edge in understanding what’s going on inside a simulation. Seeing that third dimension is critical for understanding these simulations.
MSI: How has your use of the Northern Lights GigaPoP network in coordination with MSI systems and our high-performance TeraScala storage system enhanced your capabilities as a researcher?
Pete: I did my simulation this particular time on the Kraken system [the image above is from that simulation] at the National Institute for Computational Sciences (NICS) in Tennessee, and it produced around seven terabytes of data. It’s often taken for granted in high-performance computing that you can produce your calculation relatively quickly, but then you come across the problem of dealing with this enormous dataset, especially when you run your simulation at a non-local site. So getting it in-house for analysis and movie creation becomes a difficult challenge. The traditional route would be to copy the data over the network with secure copy (i.e., the ‘scp’ command), but this process would have taken at least 90 days. In this particular instance I had a conference coming up in a couple of weeks and I needed the data as soon as possible, so 90 days was not going to work for me. The other option was to try to coordinate someone on the other end filling up a large number of hard drives and mailing them to me. That would carry a large personnel cost and, in the best-case scenario, it would have still taken about two weeks. So what MSI’s Jeff McDonald and David Porter did for me was set up an MSI hardware interface with the Northern Lights GigaPoP network, and I was able to move the entire dataset in less than two days. Because of this I was able to start working on my data immediately. This is an absolutely critical ability to have.
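The difference comes down to sustained throughput. The back-of-the-envelope Python sketch below (ours, not MSI’s actual tooling) shows what the quoted figures imply for the average data rate in each case.

```python
# Rough arithmetic only: what moving ~7 TB in ~90 days (the scp estimate) versus
# ~2 days (the GridFTP transfer over the Northern Lights GigaPoP path) implies
# for the sustained transfer rate. Decimal units are assumed for the terabyte.
TB = 1e12  # bytes

def sustained_gbps(size_bytes, days):
    """Average rate, in gigabits per second, needed to move size_bytes in the given number of days."""
    return size_bytes * 8 / (days * 24 * 3600) / 1e9

for label, days in [("scp estimate (90 days)", 90), ("GridFTP transfer (2 days)", 2)]:
    print(f"{label}: ~{sustained_gbps(7 * TB, days):.3f} Gb/s sustained")

# Output:
#   scp estimate (90 days): ~0.007 Gb/s sustained   (roughly 7 Mb/s)
#   GridFTP transfer (2 days): ~0.324 Gb/s sustained (comfortably within a 10 Gb/s link)
```

GridFTP’s parallel TCP streams and tunable buffers are a large part of why it can approach the capacity of a ten-gigabit path, whereas a single scp stream typically cannot.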
With the ever-changing needs of our users, MSI is proud to help meet their challenges and to support researchers like Pete Mendygral in their pursuit of cutting-edge research.
Deep brain stimulation (DBS) is a surgical therapy used to treat many neurological disorders that are refractory to medication. Patients with severe Parkinson’s disease, dystonia, essential tremor, obsessive-compulsive disorder, and depression have all benefited from DBS. The therapy involves implanting small electrodes in brain regions that exhibit pathological activity, and then stimulating those regions with continuous pulses of electricity. One of the ongoing challenges with this therapy is how to best minimize damage to brain tissue during the electrode implantation process.
Ben Teplitzky and Allison Connolly, student researchers in the group of Assistant Professor Matt Johnson (Biomedical Engineering), presented a poster at the 2012 MSI Research Exhibition on a project developing DBS electrodes that can be implanted within arterial and venous vessels of the brain. With the support of MSI resources, the Johnson group has developed three-dimensional models of the cerebral vasculature and coupled them with computational neuron models of DBS. These models now enable researchers to prospectively evaluate and optimize electrode geometries and targeting strategies for endovascular deep brain stimulation, as shown in the figure above.
Minnesota Supercomputing Institute (MSI) Provides Modeling and Analysis Support for the Development of INVELOX, a Wind Energy Technology from SheerWind, Inc.
SheerWind, Inc., a Minnesota-based company, has been developing novel approaches for generating electric power from the wind. In the fall of 2011, SheerWind contracted MSI to do an independent study of their new INVELOX tower, which is designed to channel and focus wind kinetic energy. For this analysis, MSI generated computational fluid dynamics (CFD) simulations on high-resolution computational meshes, run on MSI’s Itasca high-performance computing (HPC) system. These models produced the detailed air-flow and pressure information needed to assess the INVELOX tower’s ability to channel and focus wind power. This work led to the development of custom software for generating the solid geometry of a wide class of structures, as well as the assembly and tuning of a suite of software tools for massively parallel CFD computations on MSI’s HPC resources. Potential uses for this suite of software on MSI’s HPC systems include parameter-space searches for optimal wind tower design and, on a much larger scale, modeling air flow over terrain for optimal siting of wind towers. The image above shows streamlines viewed from the top of an INVELOX tower.
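As a purely illustrative sketch of what such a parameter-space search might look like, the Python loop below sweeps over hypothetical tower dimensions and scores each candidate with a toy one-dimensional continuity estimate. The parameter names, numbers, and scoring function are all placeholders for what would, in practice, come from the geometry-generation software and the massively parallel CFD runs described above.

```python
# Illustrative only -- not MSI's or SheerWind's software. A parameter-space search
# over candidate tower geometries, with the CFD call replaced by a toy 1-D
# incompressible-continuity estimate (throat speed ~ free-stream speed times the
# inlet-to-throat area ratio). Real flows fall far short of this idealized value,
# which is exactly why the high-resolution CFD simulations are needed.
import itertools

inlet_diameters = [6.0, 8.0, 10.0]    # m, hypothetical design values
throat_diameters = [1.5, 2.0, 2.5]    # m, hypothetical design values
free_stream_speed = 5.0               # m/s, assumed ambient wind

def toy_throat_speed(inlet_d, throat_d, v_in):
    """Stand-in for a CFD figure of merit: v_throat = v_in * (A_inlet / A_throat)."""
    return v_in * (inlet_d / throat_d) ** 2

best = max(
    ((toy_throat_speed(d_in, d_th, free_stream_speed), d_in, d_th)
     for d_in, d_th in itertools.product(inlet_diameters, throat_diameters)),
    key=lambda candidate: candidate[0],
)

print(f"Best toy candidate: inlet {best[1]} m, throat {best[2]} m, "
      f"idealized throat speed {best[0]:.0f} m/s")
```

In a real search the scoring function would be replaced by a submitted CFD job per candidate geometry, with the optimizer comparing the resulting flow and pressure fields rather than an idealized area ratio.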
MSI is an authorized External Sales Organization of the University of Minnesota, which allows it to sell services to parties outside the University, including access to its supercomputing facilities, technical consulting, and training. MSI is currently working with a number of different firms and looks to develop relationships with additional parties.
MSI is proud to spotlight some of our researchers in a new series of articles that are aimed both at highlighting their research and illustrating how MSI facilitates it. Christopher Cramer’s group uses supercomputing resources to push the limits of present-day computational chemistry tools in order to examine large systems that are of relevance to one or more areas in chemistry. The group develops, codes, and applies novel molecular and quantum mechanical methodologies to model chemical structures, properties, and reactivities. Recently Professor Cramer sat down with an MSI staffer to talk about two Department of Energy grants that will fund his work, what research they will be funding, and what role MSI has played in his experience as a U of M researcher.
MSI: What research are the DOE grants funding?
Cramer: The two DOE grants will both start in September. One is a multi-institutional grant. It will provide a total of $8.1M in funding for what’s called the Nanoporous Materials Genome Center, with my Chemistry colleague Professor Laura Gagliardi as Director. The center is designed to make predictions about and explain the properties of things called metal-organic frameworks (MOFs), which are a type of what are known as nanoporous materials. The other grant will be funded under DOE’s SciDAC program. SciDAC stands for “Scientific Discovery through Advanced Computing.” The related project is a collaboration between the University of Minnesota and Pacific Northwest National Laboratory and will focus on excited-state processes of molecules and excited-state dynamics. In layman’s terms, the focus will be on how to take solar energy, which can be captured by molecules that absorb solar photons to attain electronically excited states, and transfer and use that energy effectively, in, say, a chemical reaction that generates a so-called solar fuel.
Cramer: There are two separate projects, so let me talk about each of them separately, because they’re a little bit different.
The Nanoporous Materials Genome Center project has a database aspect to it. While we want to predict novel properties, at the same time we’re going to have to train our predictions on what’s known about existing properties. So, one goal of the project will be to create a database that, essentially, would allow one to look up everything that’s been done, and we’ll then be using that data in order to validate the models that we make, thus predicting things that aren’t yet known. But, will that involve terabytes of data? Probably not. There are only so many properties we would store, and there are only so many known systems. So it’s not quite a data-mining operation. In the very long run, though, we’d like to have enough confidence in the predictive models that the database can then be extended by blue-sky predictions. That is, for any combination of metal and organic frameworks, what structures and properties can be expected?
The SciDAC project, which, again, deals with excited-state chemistry and developing new models and putting them into code, does not really pose a data storage challenge. The computational demands are more associated with speed and memory. With quantum chemistry, the issue is generally the need for large memory and fast processors rather than terabytes of storage. So, we’re going to design some software as well as new algorithms that will allow us to treat these excited states, and do that in combination with a code developed at PNNL called NWChem, which, I believe, is already running on MSI machines. NWChem is a massively parallel quantum chemistry code that PNNL makes available under a free license all around the world.
MSI: Is there any other software that MSI offers that you will utilize?
Cramer: Yes, we’re heavy users of all sorts of quantum chemistry codes. Gaussian09 is probably the one my group uses the most, and some of the other codes installed at MSI that do quantum chemistry are MOLCAS, Turbomole, ORCA, and ADF (Amsterdam Density Functional) – those are probably the workhorses that get the most use in my group. They all sort of do the same thing, but each one of them has something that it does better than the others. That’s why, for certain problems, we would use one over the others. We also write our own code, typically modules that interface with larger production codes where we have access to the source.
MSI: Let’s change directions a little bit: Is any of your group’s research being done by graduate students for their dissertations?
Cramer: Oh, absolutely. My group usually has around five graduate students at any given time; that’s sort of my historical average. I also have post-doctoral researchers who work with me, and quite a number of undergrads. I think I’ve probably advised 60 undergraduates in my time at Minnesota, and they’re all doing computational work at MSI.
And then, actually, although this is a research story, I guess I would mention that I teach a computational chemistry course every year in the department. Well, it’s not always I, but, anyway, I have taught it a lot. The students in that course get MSI accounts and they do practical labs, if you will, as part of the course, and that’s both seniors and first-year graduate students, who are the people who tend to take the course.
I have also taught a freshman seminar where – you know, freshman seminars, they’re pretty introductory – we read some books, and one of the books is called The Billion-Dollar Molecule, which is an interesting book written by the journalist Barry Werth. It’s about a start-up pharmaceutical company in Boston. It’s a very readable book, not a science book, but it does discuss the science of this molecule they’re going after. They’ve got an x-ray structure of the molecule, and they’re trying to figure out how to get the drug to bind to it. When we get to this point in the freshman seminar, we go on a little field trip to the MSI visualization lab because we can view the crystal structure there. The students get to see the protein rotating in 3D space, and really, the freshmen love it. They say, “Wow! This is what the University is all about.”
MSI’s mission is to work with researchers like Christopher Cramer and his group to help promote the use of high-performance computing in cutting-edge research while also promoting the education of postdoctoral, graduate, and undergraduate students here at the University of Minnesota. You can learn more about Professor Cramer and the Department of Chemistry’s Chemical Theory Center at their website. Another article about the DOE grants can be found on the College of Science and Engineering website.
The image above shows the time evolution of an electronic excited state of the dye alizarin attached to a model of solid titania. With time, an electron is shown to move from the molecule (above) into the titania cluster (below), a process known as "charge injection" and important for the capture of solar energy. (Graphic courtesy of K. Lopata and N. Govind, Cramer group collaborators at Pacific Northwest National Laboratory.)
Researchers in higher mathematics have long used supercomputers to handle the huge numbers of calculations necessary for their work. Professor Doug Arnold (Mathematics; MSI Fellow) is using Itasca to work on various aspects of finite-element methods for partial differential equations. Professor Arnold developed the finite element exterior calculus (FEEC), and FEniCS (fenicsproject.org), one of the most impressive open-source projects in numerical partial differential equations, has been designed to be well suited to implementing and testing FEEC-based algorithms. FEniCS, which allows one to implement finite elements from a high-level mathematical perspective, includes a Python/C++ problem-solving environment that provides access to advanced solver libraries such as PETSc, uBLAS, and Trilinos; these in turn use MPI and so are in a position to make efficient use of Itasca.
Professor Arnold and his group are studying such areas as elastodynamics, dynamic viscoelastic computations, and harmonic forms, among others. The image above shows Stokes flow through a double pipe (in Stokes flow, fluid velocities are very slow, viscosities are very large, or the length scales of the flow are very small). The flow enters through the smaller pipe and exits through the larger one. Since starting work on Itasca, the Arnold group has been able to implement FEniCS and tune its performance, and they have been getting good results for a number of problems.
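To give a flavor of what that high-level specification looks like, the sketch below sets up a Stokes problem with the legacy FEniCS/DOLFIN Python interface, using Taylor-Hood elements on a simple straight channel. This is only an assumed, minimal illustration; it is not the Arnold group’s double-pipe geometry or their FEEC-based discretizations.

```python
# Minimal FEniCS (legacy DOLFIN) sketch of a Stokes problem on a straight channel,
# assuming Taylor-Hood (P2 velocity / P1 pressure) elements. Illustrative only.
from dolfin import *

mesh = RectangleMesh(Point(0.0, 0.0), Point(4.0, 1.0), 80, 20)

P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)   # quadratic velocity
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)   # linear pressure
W = FunctionSpace(mesh, P2 * P1)

(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)
f = Constant((0.0, 0.0))

# Weak form of the Stokes equations, written much as it appears on paper
a = (inner(grad(u), grad(v)) - div(v) * p + q * div(u)) * dx
L = inner(f, v) * dx

# Parabolic inflow on the left, no-slip walls, natural (do-nothing) outflow on the right
inflow = DirichletBC(W.sub(0),
                     Expression(("4.0*x[1]*(1.0 - x[1])", "0.0"), degree=2),
                     "on_boundary && near(x[0], 0.0)")
walls = DirichletBC(W.sub(0), Constant((0.0, 0.0)),
                    "on_boundary && (near(x[1], 0.0) || near(x[1], 1.0))")

w = Function(W)
solve(a == L, w, [inflow, walls])   # PETSc/Krylov solvers can be configured here
u_h, p_h = w.split()
```

Because FEniCS hands the assembled systems to back-ends such as PETSc, the same few lines of problem specification can be scaled out with MPI on a machine like Itasca.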