Multimodal Vision for Synthetic Aperture Radar

Synthetic Aperture Radar (SAR) is a radar imaging technology capable of producing high-resolution images of landscapes. Because it can produce images in all weather and lighting conditions, SAR imaging has advantages over optical systems in Humanitarian Assistance and Disaster Relief (HADR) missions. This project aims to improve the human interpretability of SAR images and the performance of SAR object detection and Automatic Target Recognition (ATR) by leveraging complementary information from related modalities (e.g., EO/IR, LiDAR, MODIS), simulated data, and physics-based models. Project findings and resulting technologies will be shared across the government enterprise so that multiple partners across the services can apply them in the HADR problem space.

Published Research

To learn more about Multimodal Vision for Synthetic Aperture Radar research and other AI Accelerator projects, view our published research here.

Are you up for a Challenge?

The AI Accelerator challenge is to monitor the Amazon rainforest in all weather and lighting conditions using our multimodal remote sensing dataset, which includes a time series of multispectral and synthetic aperture radar (SAR) images. To support interpretation and analysis of the rainforest, participants are asked to develop the following:
  • Image-to-Image Translation: Given a SAR image, predict a set of possible corresponding cloud-free electro-optical (EO) images
  • Matrix Completion: Given images taken at different locations and times, and in different modalities, predict the appearance at a novel (time, location, modality) query
  • Downstream Task: Environmental change estimation (e.g., deforestation, fire, water coverage) from the cloud-free view of the Amazon
Learn more about the challenge here.
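To make the matrix-completion task concrete, here is a minimal toy sketch of the underlying idea: arrange observations into a matrix (here, rows and columns stand in for hypothetical location and time/modality indices, with scalar entries standing in for images), then recover the missing entries by assuming the matrix is low-rank. This is an illustrative hard-impute baseline under those simplifying assumptions, not the challenge's actual data layout or a reference solution.

```python
import numpy as np

# Toy stand-in for the (time, location, modality) grid: a rank-1 matrix
# whose entries play the role of observations; some "views" are missing.
u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([1.0, 0.5, 2.0, 1.5, 1.0])
M_true = np.outer(u, v)  # rank-1 "ground truth"

observed = np.ones_like(M_true, dtype=bool)
for i, j in [(0, 0), (1, 2), (3, 4)]:  # pretend these views were never captured
    observed[i, j] = False

# Hard-impute: alternate a rank-1 truncated SVD with re-imposing the
# observed entries, so the low-rank structure fills in the gaps.
X = np.where(observed, M_true, 0.0)
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = s[0] * np.outer(U[:, 0], Vt[0])  # best rank-1 approximation
    X[observed] = M_true[observed]       # keep known entries fixed

print(np.max(np.abs(X - M_true)))  # small: missing entries are recovered
```

Real imagery would replace the scalar entries with image patches and the fixed rank-1 assumption with a learned model, but the query structure, predicting an unobserved (time, location, modality) cell from the observed ones, is the same.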