Research Focus

The field of computer vision has changed dramatically over the past decade, making it possible to automatically extract detailed information from vast collections of images. While challenges remain, this presents an opportunity to improve our understanding of Earth processes and their impact on plants and animals. The Multimodal Vision Research Laboratory (MVRL) is developing computer vision techniques to address pressing problems in ecology, in close collaboration with domain experts. We have identified several exciting projects:

  • Species Distribution Mapping (Collaborators: Beth Brio, Solny Adalsteinsson, and Erin O’Connell): This project, in collaboration with the Tyson Research Center (TRC), would use computer vision techniques to map species distributions from a combination of citizen science and remote sensing data. It would build upon prior work on low-resolution global distribution models developed at MVRL and high-resolution regional distribution models developed at TRC (see the first code sketch following this list).
  • Camera-Trap Imagery Analysis (Collaborators: Beth Brio, Solny Adalsteinsson, and Erin O’Connell): This project, also in collaboration with TRC, would develop a system for detecting and classifying animals in camera-trap imagery. Camera traps are motion-triggered infrared cameras that document animal presence with minimal disturbance to study subjects. While passive monitoring devices continue to improve and become more affordable, ecologists cannot keep pace with the enormous volumes of data these devices collect: human annotation is time-consuming and costly, limiting the practical utility of passive monitoring. The goal of this project would be to extend an existing automated classification model to be more accurate for the species present in the St. Louis region and to support user-in-the-loop interaction (see the second code sketch following this list).
  • Forest Canopy Structure Monitoring (Collaborator: Roger Michaelides): This project, in collaboration with the Michaelides Radar Laboratory, would develop computer vision algorithms that integrate imagery from multiple remote sensing sources, including Sentinel-1/2 and space-based LiDAR, to better understand the spatial distribution of trees and the canopy structure of particular forest plots. These characteristics are critical for understanding global biodiversity and carbon sequestration (see the third code sketch following this list).
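
To give a flavor of the first project, the sketch below shows a common baseline for species distribution modeling in Python: a presence-vs-background classifier trained on remote sensing features sampled at citizen science observation locations. The data here are synthetic placeholders, and the feature set and random forest model are illustrative assumptions rather than the methods developed at MVRL or TRC.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-ins for remote sensing features (e.g., spectral
    # bands, land cover, elevation) sampled at citizen science presence
    # records and at random background points.
    rng = np.random.default_rng(0)
    X_presence = rng.normal(1.0, 1.0, size=(500, 8))
    X_background = rng.normal(0.0, 1.0, size=(2000, 8))

    X = np.vstack([X_presence, X_background])
    y = np.concatenate([np.ones(len(X_presence)), np.zeros(len(X_background))])
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0)

    # Presence-vs-background classification is a standard baseline
    # formulation for species distribution modeling.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    probs = model.predict_proba(X_test)[:, 1]
    print(f"held-out AUC: {roc_auc_score(y_test, probs):.3f}")

Applying predict_proba over a grid of feature vectors would yield a relative habitat suitability map, the kind of output a distribution model produces.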
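
For the camera-trap project, the following sketch shows a generic transfer-learning recipe in PyTorch: start from an ImageNet-pretrained backbone and retrain its final layer for a regional species list. The directory layout, species count, and choice of ResNet-18 are hypothetical; the project itself would extend an existing classification model rather than build one from scratch.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    NUM_SPECIES = 12  # placeholder count of St. Louis-area species

    # Assumes camera-trap crops organized one folder per species,
    # e.g., data/train/white-tailed_deer/img001.jpg (hypothetical path).
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    dataset = datasets.ImageFolder("data/train", transform=transform)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    # Replace the final layer so predictions cover only the regional
    # species list, then fine-tune end to end.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.3f}")

A user-in-the-loop extension might route low-confidence predictions to a human annotator and periodically fine-tune on the corrected labels.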
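
For the canopy structure project, one common pattern is to treat sparse space-based LiDAR footprints as training labels and regress canopy height from co-registered Sentinel-1/2 features. The sketch below uses synthetic data and an off-the-shelf regressor purely to illustrate that setup, not the fusion algorithms the project would develop.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-ins: per-footprint features from co-registered
    # Sentinel-1 backscatter and Sentinel-2 reflectance, with canopy
    # height labels (meters) from space-based LiDAR.
    rng = np.random.default_rng(0)
    n_footprints, n_features = 1000, 12
    X = rng.normal(size=(n_footprints, n_features))
    height = 20 + 3 * X[:, 0] + rng.normal(0, 2, n_footprints)

    model = RandomForestRegressor(n_estimators=300, random_state=0)
    scores = cross_val_score(
        model, X, height, scoring="neg_root_mean_squared_error", cv=5)
    print(f"cross-validated RMSE: {-scores.mean():.2f} m")

A model trained this way on LiDAR footprints could then predict canopy height at every Sentinel pixel, producing wall-to-wall structure maps for the forest plots of interest.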

We expect each project to produce three outputs: a public software repository capable of reproducing the main methods developed by the fellow, a presentation delivered to the other summer research fellows and Center for the Environment associates, and a manuscript intended for academic publication.

Skills, Techniques, Methods

Experience with Python programming, data wrangling, and machine learning is required. Experience with computer vision, image processing, and deep learning would be helpful but is not required.

Research Conditions

The research will be conducted primarily in person in the Multimodal Vision Research Laboratory (MVRL) in McKelvey Hall. It will involve software development, dataset curation, machine learning model training, and model performance analysis. For projects with the Tyson Research Center, expect at least three meetings at the center (approximately a 30-minute drive).

Team Structure and Opportunities

The undergraduate fellow will work closely with a Ph.D. student mentor from MVRL, meet weekly with Nathan Jacobs, and meet every few weeks with our ecology collaborators. The fellow will also be exposed to other computer vision research taking place in the lab through a weekly journal club and work-in-progress meeting.