5/8/13: Virtual School Summer Computational Science Courses


The Virtual School of Computational Science and Engineering (VSCSE) is holding two courses this summer. These courses are open to graduate students, post-docs, and young professionals who want to expand their skills with advanced computational resources. The courses are offered at institutions around the country, allowing participants to attend at the location most convenient for them.


Descriptions of the courses are given below. You can register at the XSEDE portal. Questions can be emailed to info@vscse.org.


Summer 2013 VSCSE Courses:


Data Intensive Summer School (July 8 – 10, 2013)

From the VSCSE website: The Data Intensive Summer School focuses on the skills needed to manage, process, and gain insight from large amounts of data. It is targeted at researchers from the physical, biological, economic, and social sciences who are beginning to drown in data. We will cover the nuts and bolts of data-intensive computing, common tools and software, predictive analytics algorithms, data management, and non-relational database models. Given the short duration of the summer school, the emphasis will be on providing a solid foundation that attendees can use as a starting point for advanced topics of particular relevance to their work.


Proven Algorithmic Techniques for Many-Core Processors (July 29 – August 2, 2013)

From the VSCSE website: Studying many current GPU computing applications, we have learned that the limits of an application's scalability are often related to some combination of memory bandwidth saturation, memory contention, imbalanced data distribution, or data structure/algorithm interactions. Successful GPU application developers often adjust their data structures and problem formulations specifically for massive threading, and execute their threads leveraging shared on-chip memory resources for greater impact. We looked for patterns among those transformations, and here present the seven most common and crucial algorithm and data optimization techniques we discovered. Each can improve the performance of applicable kernels by 2-10X on current processors while improving future scalability.
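One of the transformations the description alludes to (restructuring data so that threads can reuse values staged in shared on-chip memory) is commonly known as tiling. The sketch below is not from the course materials; it is a plain-Python illustration of the idea, with the tile staging step standing in for a GPU thread block's cooperative load into shared memory. The function name, tile size, and structure are all hypothetical choices for illustration.

```python
TILE = 2  # hypothetical tile width; a real GPU kernel might use 16 or 32

def tiled_matmul(a, b, n):
    """Multiply two n x n matrices (lists of lists) tile by tile.

    On a GPU, each thread block would copy one tile of `a` and one tile
    of `b` into shared on-chip memory; here the tile copies below stand
    in for that staging step.
    """
    c = [[0.0] * n for _ in range(n)]
    for ti in range(0, n, TILE):          # tile row of C
        for tj in range(0, n, TILE):      # tile column of C
            for tk in range(0, n, TILE):  # tiles along the shared dimension
                # "Stage" the two input tiles: each staged element is
                # then reused TILE times by the inner loops, instead of
                # being re-fetched from (slow, global) memory each time.
                a_tile = [row[tk:tk + TILE] for row in a[ti:ti + TILE]]
                b_tile = [row[tj:tj + TILE] for row in b[tk:tk + TILE]]
                for i in range(len(a_tile)):
                    for j in range(len(b_tile[0])):
                        for k in range(len(b_tile)):
                            c[ti + i][tj + j] += a_tile[i][k] * b_tile[k][j]
    return c
```

The arithmetic is identical to a naive triple loop; only the data access pattern changes, which is exactly the kind of data structure/algorithm interaction the course targets.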