Improved Human-Machine Collaboration through Explainable AI and Transparent Value Alignment

The project addresses the challenges of effective human-machine collaboration in complex, uncertain environments relevant to the Department of the Air Force (DAF). The goal is to develop shared mental models that allow human-machine teams to anticipate each other’s actions and adapt to dynamic scenarios, enabling calibrated reliance and seamless collaboration. Central to this effort is the investigation of principled ways to refine the representations employed by complex models and decision-making systems to meet the informational and cognitive needs of human teammates. Enabling a human and machine to iteratively refine the representations and abstractions used in explanation systems helps calibrate trust between users and AI systems: it gives users the right level of information to assess the system’s performance and limitations, and it enables more efficient communication between AI systems and humans. This project will develop and study interactive, iterative processes for refining these representations and abstractions, and will empirically validate that such processes support the formation of shared mental models that allow human-machine teams to anticipate each other’s actions.
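As an illustration, here is a minimal sketch in Python of one interactive refinement loop of this kind. Everything in it is hypothetical: the toy line-world domain, the `explain`, `human_feedback`, and `refine` functions, and the split-on-disagreement heuristic are assumptions for exposition, not the project's actual method. What it shows is the loop structure: the system summarizes its policy at the current level of abstraction, the human flags clusters where the summary mispredicts behavior, and the abstraction is refined only where the flags landed.

```python
"""Minimal sketch of iterative abstraction refinement for explanations.
The domain, functions, and heuristics here are hypothetical placeholders."""

# Toy domain: states 0..15 on a line; the agent's policy moves
# right below state 8 and left at or above it.
STATES = range(16)
true_policy = {s: "right" if s < 8 else "left" for s in STATES}


def explain(abstraction):
    """Summarize the policy at the abstraction's granularity:
    one (cluster -> majority action) rule per cluster of states."""
    summary = {}
    for cluster in abstraction:
        actions = [true_policy[s] for s in cluster]
        summary[tuple(cluster)] = max(set(actions), key=actions.count)
    return summary


def human_feedback(summary):
    """Stand-in for a human teammate: flag clusters where the coarse
    rule mispredicts the policy in at least one covered state."""
    return [list(cluster) for cluster, action in summary.items()
            if any(true_policy[s] != action for s in cluster)]


def refine(abstraction, flagged):
    """Split each flagged cluster in half; keep the rest as-is."""
    refined = []
    for cluster in abstraction:
        if cluster in flagged and len(cluster) > 1:
            mid = len(cluster) // 2
            refined.extend([cluster[:mid], cluster[mid:]])
        else:
            refined.append(cluster)
    return refined


# Start with one coarse cluster covering every state, then iterate
# explain -> feedback -> refine until no cluster is flagged.
abstraction = [list(STATES)]
for step in range(10):
    summary = explain(abstraction)
    flagged = human_feedback(summary)
    print(f"step {step}: {len(abstraction)} clusters, {len(flagged)} flagged")
    if not flagged:
        break
    abstraction = refine(abstraction, flagged)
```

In an actual study, `human_feedback` would come from a real teammate and `refine` would be a principled update to the representation; the point of the sketch is only that the granularity of the explanation is negotiated iteratively rather than fixed in advance.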

Published Research

To learn more about Guardian Autonomy research and other AI Accelerator projects, view our published research here.

Are you up for a Challenge?

Learn more about AI Accelerator challenges here.