Earth Intelligence Engine

The Earth Intelligence (EI) Engine for weather and climate is a novel AI testbed platform that supports rapid, effective decision-making and long-term strategic planning and operations for the USAF. It helps close the gap between AI researchers and available Earth systems data by connecting data and models, novel algorithms, and image gap-filling tasks that bridge lower-quality and higher-quality weather and climate data sets. The EI Engine will provide the USAF with improved algorithms for anomaly detection; critical remote access to centralized Earth intelligence data; intuitive supercomputer visualizations of Earth intelligence for mission support; improved nowcasting for mission operations; and identification of strategic locations affected by climate change to improve resource allocation. The goal is a platform that delivers Earth weather and climate data from global scale down to high-resolution local scale, with visceral visualizations that better inform policy decision-makers and leaders in government and business.
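
To make the gap-filling idea concrete, the sketch below fills masked (for example, cloud-occluded) regions of a weather field with a small encoder-decoder. This is a minimal sketch in PyTorch, not the EI Engine's actual model; all shapes, channel counts, and the GapFiller name are illustrative assumptions.

```python
# Minimal sketch of image gap-filling: an encoder-decoder that fills masked
# (e.g., cloud-occluded) pixels of a single-channel weather field. All shapes,
# channel counts, and the GapFiller name are illustrative assumptions.
import torch
import torch.nn as nn

class GapFiller(nn.Module):
    def __init__(self, channels: int = 1, hidden: int = 32):
        super().__init__()
        # The input is the masked field concatenated with its validity mask.
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, field: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        filled = self.net(torch.cat([field * mask, mask], dim=1))
        # Keep observed pixels as-is; predict only where the mask is zero.
        return field * mask + filled * (1 - mask)

field = torch.randn(4, 1, 64, 64)                # batch of weather fields
mask = (torch.rand(4, 1, 64, 64) > 0.3).float()  # 1 = observed, 0 = missing
print(GapFiller()(field, mask).shape)            # torch.Size([4, 1, 64, 64])
```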

Published Research

To learn more about Earth Intelligence Engine research and other AI Accelerator projects, view our published research here.

Workshop on Monitoring the World Through an Imperfect Lens (MONTI), June 3-4, 2026, in Denver

Modern deep learning approaches show promising results in meteorological applications like precipitation nowcasting, synthetic radar generation, front detection, and several others. The DAF-MIT AI Accelerator is working to rapidly develop new approaches to these challenges.
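
For a sense of what the nowcasting task looks like in code, here is a deliberately simple baseline, assuming a PyTorch environment: stack the last few radar frames as channels and regress the next frame with a small CNN. Production nowcasters (for example, ConvLSTM- or U-Net-based models) are far more sophisticated; every size and name below is an illustrative assumption.

```python
# Sketch of a naive nowcasting baseline: treat the last T radar frames as
# input channels and predict the next frame. Purely illustrative.
import torch
import torch.nn as nn

T = 4  # number of past frames used as input (an arbitrary illustrative choice)

model = nn.Sequential(
    nn.Conv2d(T, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, kernel_size=3, padding=1),  # predicted next frame
)

past = torch.randn(8, T, 128, 128)    # batch of past radar sequences
target = torch.randn(8, 1, 128, 128)  # observed next frame
loss = nn.functional.mse_loss(model(past), target)
loss.backward()
print(float(loss))
```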

Multimodal remote sensing has the potential to deliver comprehensive Earth observation by combining complementary sensor capabilities, yet fundamental challenges prevent this potential from being realized. While computer vision has made remarkable progress in multimodal learning with aligned, simultaneously collected data (e.g., RGB-D cameras), remote sensing operates under far more challenging constraints. Satellites collect data asynchronously, at different resolutions, and through fundamentally different imaging physics. For example, synthetic aperture radar (SAR) actively transmits microwave pulses whereas electro-optical (EO) sensors passively capture reflected sunlight. These disparities create a critical gap between theoretical multimodal methods and practical Earth observation systems.
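
One common way to handle such disparities, sketched below under stated assumptions, is to give each modality its own encoder and align co-located patches in a shared embedding space with a contrastive objective. The channel counts, embedding dimension, and temperature are illustrative choices, not drawn from any specific system.

```python
# Sketch of modality-specific encoders projecting SAR and EO patches into a
# shared embedding space; all sizes here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def encoder(in_ch: int, dim: int = 128) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, dim),
    )

sar_enc = encoder(in_ch=1)  # single-channel radar backscatter
eo_enc = encoder(in_ch=3)   # RGB reflectance

sar = torch.randn(16, 1, 64, 64)
eo = torch.randn(16, 3, 64, 64)
z_sar = F.normalize(sar_enc(sar), dim=1)
z_eo = F.normalize(eo_enc(eo), dim=1)

# Contrastive-style alignment: co-located SAR/EO pairs should be similar.
logits = z_sar @ z_eo.t() / 0.07
loss = F.cross_entropy(logits, torch.arange(16))
print(float(loss))
```

Separate encoders reflect the different imaging physics noted above, while the shared projection space is what lets downstream tasks mix modalities.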

The core challenge lies not in sensor alignment or co-registration, but in learning meaningful representations across modalities that differ in their fundamental observational properties. When monitoring dynamic Earth processes, we rarely have the luxury of complete, synchronized observations. Instead, we must extract insights from whatever data is available: for example, pre-event optical imagery paired with post-event SAR, high-resolution commercial imagery combined with frequent but coarse images from public satellites, or clear-sky observations from weeks apart bracketing a critical cloudy period.
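 
Continuing the illustrative encoders above, one minimal way to exploit a pre-event optical / post-event SAR pair is to score change by embedding distance between the aligned representations. The function name and threshold below are hypothetical.

```python
# Sketch of heterogeneous change detection: score change by embedding distance
# between a pre-event EO tile and a post-event SAR tile, assuming encoders
# aligned as in the sketch above. Names and threshold are illustrative.
import torch
import torch.nn.functional as F

def change_score(z_pre: torch.Tensor, z_post: torch.Tensor) -> torch.Tensor:
    """Cosine distance per tile; higher means more likely changed."""
    return 1 - F.cosine_similarity(z_pre, z_post, dim=1)

z_pre = F.normalize(torch.randn(16, 128), dim=1)   # pre-event EO embeddings
z_post = F.normalize(torch.randn(16, 128), dim=1)  # post-event SAR embeddings
scores = change_score(z_pre, z_post)
changed = scores > 0.5  # threshold is an arbitrary illustrative choice
print(scores.shape, changed.sum().item())
```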

The goal of this workshop is to gather a wide audience of researchers in academia, industry, and related fields to address real-world constraints in multimodal remote sensing. While many recent multimodal remote sensing publications have focused on adapting computer vision algorithms to satellite imagery, fewer have tackled the unique challenges intrinsic to the remote sensing domain, such as irregular data collection intervals, disparities in modality and resolution, and non-ideal monitoring environments.

The workshop will solicit short papers applying machine learning to Earth and environmental science monitoring, particularly focused on multimodal learning under imperfect conditions.

Topics will include, but will not be limited to:
 
  • Multimodal fusion combining EO, SAR, LiDAR, and other sensors
  • Heterogeneous change detection across different modalities
  • Temporal analysis for event monitoring
  • Domain adaptation and cross-sensor generalization
  • Self-/unsupervised learning with limited data
  • Foundation models for remote sensing and Earth observation
  • Uncertainty quantification
  • Real-time multimodal satellite processing
  • Infrastructure monitoring and hazard prediction using incomplete data
  • Multimodal remote sensing analysis
  • Change detection and multi-temporal analysis
  • Geographic Information Science
  • Multimodal generative modeling
  • Multimodal representation learning
 
Submission Guidelines:
  • We accept submissions of up to 8 pages (excluding references) on the aforementioned and related topics. We encourage authors to submit 4-page papers.
  • Submitted manuscripts should follow the CVPR 2026 paper template. Accepted papers are not archival and will not be included in the proceedings of CVPR 2026.
  • Submissions will be rejected without review if they exceed 8 pages (excluding references), or if they violate the double-blind or dual-submission policies.
  • Paper submissions must contain substantial original content not submitted to any other conference, workshop, or journal.
  • Papers will be peer-reviewed under a double-blind policy and must be submitted online through the OpenReview submission website.
For more information, please contact CVPR directly here.