Abstract
In this work, we investigate how flight instructors observe aviator scan patterns and assign a quality rating to an aviator's gaze. We first establish that instructors reliably assign similar quality ratings to an aviator's scan patterns, and then investigate methods to automate this quality assessment using machine learning. In particular, we focus on classifying the gaze of aviators in a mixed-reality flight simulation. We create and evaluate two machine learning models for classifying aviator gaze quality: a task-agnostic model and a multi-task model. Both models use deep convolutional neural networks to classify the quality of gaze patterns for 40 pilots, operators, and novices, as compared to visual inspection by three experienced flight instructors. Our multi-task model automates the process of gaze inspection with an average accuracy of over 93.0% across three separate flight tasks. Our approach could assist flight instructors in providing feedback to learners, or it could open the door to more automated feedback for pilots learning to carry out different maneuvers.
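The abstract does not specify the network architecture, so the following is only an illustrative sketch of the general multi-task idea it describes: a shared convolutional trunk over a gaze-pattern input with one classification head per flight task. The class name, input shape, layer sizes, and number of tasks are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's model): shared CNN trunk
# plus one "gaze quality" head per flight task.
import torch
import torch.nn as nn

class MultiTaskGazeClassifier(nn.Module):
    def __init__(self, n_tasks=3, n_classes=2):
        super().__init__()
        # Shared feature extractor over an assumed 1x64x64 gaze-pattern image
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        feat_dim = 32 * 16 * 16
        # One quality-classification head per flight task (e.g., three maneuvers)
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n_classes) for _ in range(n_tasks)]
        )

    def forward(self, x, task_idx):
        # Route shared features through the head for the given flight task
        features = self.trunk(x)
        return self.heads[task_idx](features)

model = MultiTaskGazeClassifier()
dummy = torch.randn(8, 1, 64, 64)   # batch of 8 illustrative gaze images
logits = model(dummy, task_idx=0)   # quality logits for the first task
print(logits.shape)                 # torch.Size([8, 2])
```

A shared trunk with task-specific heads is one common way to realize a multi-task classifier; the task-agnostic model mentioned in the abstract would instead use a single head for all flight tasks.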
Scholarly Commons Citation
Wilson, J., Scielzo, S., Nair, S., & Larson, E. C. (2020). Automatic Gaze Classification for Aviators: Using Multi-task Convolutional Networks as a Proxy for Flight Instructor Observation. International Journal of Aviation, Aeronautics, and Aerospace, 7(3). https://doi.org/10.15394/ijaaa.2020.1499