Dr. Katya Kovalenko

Natural Resources Research Institute
University of Minnesota Duluth
Project Title: Machine Learning Applications in Landscape-Level Ecology

This group applies machine learning models to predict land use, land conversion, forest type, and wetland restoration scenarios at landscape scales, and uses other statistical approaches for community ecology analyses of paleolimnological big data.

  • The Restorable Wetlands Inventory Project focuses on applying Random Forests models to predict the locations of restorable wetlands from hydrologic and geomorphologic variables using high-resolution data. The resulting map and an online utility will serve as a decision support tool for prioritizing wetland restoration across multiple agencies.
  • The Forest Structural Complexity Project focuses on a key habitat attribute driving conservation success for a variety of species. Quantifying habitat complexity from high-resolution LiDAR data is a novel, cost-saving application of a cutting-edge technology in conservation. The researchers test a variety of models to predict canopy complexity from variables including stand age, variability in tree diameter, and tree diversity. They then apply the successful models to the full multi-layer LiDAR dataset, looped over several forest types, to derive regional-scale maps outlining the areas of highest forest complexity, which can be prioritized in future conservation efforts.
  • In the paleoecology project cluster, the researchers are applying assemblage dissimilarity, directional statistics, and community threshold analyses to diatom paleorecords from more than 100 Minnesota lakes to characterize periods of spatially consistent assemblage change over the last 200 years and to identify the relative importance of climate change and land use in driving these shifts. In addition, the lead PI is an MSI liaison for the Natural Resources Research Institute at the University of Minnesota Duluth.
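The Random Forests workflow in the first project can be sketched as a standard supervised-classification pipeline. This is a minimal illustration, not the group's actual code: the predictor names, label rule, and synthetic data are invented stand-ins for the real hydrologic and geomorphologic layers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for hydrologic/geomorphologic predictors;
# the real project extracts these from high-resolution geospatial layers.
n = 1000
X = np.column_stack([
    rng.normal(5, 2, n),      # slope (degrees)
    rng.normal(8, 3, n),      # topographic wetness index
    rng.normal(1.5, 0.5, n),  # depth to water table (m)
])
# Illustrative label rule: flatter, wetter cells are "restorable" (1).
y = ((X[:, 0] < 5) & (X[:, 1] > 8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```

In a real application, the fitted model would be applied to every raster cell to produce the restorable-wetlands map.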
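The model-comparison step in the second project, testing a variety of models to predict canopy complexity from stand attributes, can be sketched with cross-validation. The predictor set mirrors the variables named above (stand age, diameter variability, tree diversity), but the data and the response function are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand attributes: age, SD of tree diameter, species richness.
n = 500
X = np.column_stack([
    rng.uniform(10, 150, n),  # stand age (years)
    rng.uniform(2, 20, n),    # variability in tree diameter (cm)
    rng.integers(1, 12, n),   # tree species richness
])
# Illustrative response: complexity grows nonlinearly with age and
# with the diameter-variability x diversity interaction, plus noise.
y = np.log(X[:, 0]) + 0.1 * X[:, 1] * X[:, 2] + rng.normal(0, 0.5, n)

# Compare candidate models by mean cross-validated R^2.
for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```

The winning model would then be applied across the full LiDAR-derived predictor stack, forest type by forest type, to map complexity at regional scale.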
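The assemblage-dissimilarity step in the paleoecology cluster can be illustrated with the Bray-Curtis index, a common choice for community count data (the source does not state which metric the group uses, and the diatom counts below are invented):

```python
import numpy as np

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors:
    sum(|a_i - b_i|) / sum(a_i + b_i); 0 = identical, 1 = no shared taxa."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.abs(a - b).sum() / (a + b).sum()

# Illustrative diatom counts for five taxa in one sediment core,
# comparing a recent slice with a pre-disturbance slice.
top = np.array([40, 25, 20, 10, 5])
bottom = np.array([5, 10, 20, 30, 35])

print(bray_curtis(top, top))     # -> 0.0 (identical assemblages)
print(bray_curtis(top, bottom))  # -> 0.5
```

Computed down-core and across lakes, such pairwise dissimilarities are what allow periods of spatially consistent assemblage change to be detected.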

Project Investigators

Will Bartsch
Dr. Jessica Gorzo
Dr. Katya Kovalenko
Kristi Nixon