Submitting Campus
Worldwide
Department
Aeronautics
Document Type
Book Chapter
Publication/Presentation Date
2025
Abstract/Description
Artificial intelligence (AI) is pervasive in scholarly publications, internet sites, and public discourse. AI is a broad term that refers to machines that can learn and perform tasks typically requiring human intelligence. The specter of AI intruding into many aspects of aviation has raised alarms, concerns, and prodigious misunderstanding of potential and contemplated applications in systems and processes. The EASA AI Roadmap (EASA, 2023), a linear projection with three levels extending into 2050, places the human-AI teaming period (through 2035) at Level 2. This suggests a ten-year span to resolve the interactive issues needed to accomplish reliable and transparent generative AI in aviation flight operations. To help disentangle some of the current angst regarding popular conceptions of how generative AI will become increasingly invasive in systems, a twin-trajectory approach has been proposed (Holley et al., 2024), which follows a continuum of automation evolution that interacts with a domain-specific approach for current and near-current applications and with a more complex approach that addresses human cognition and machine testing.
The authors envision trajectories showing a progression or roadmap of AI technologies across two main categories: (1) complex AI and (2) domain-specific AI. Each category has a series of concepts or technologies arranged linearly, suggesting an evolution of increasing complexity. The pathways start with more fundamental or specific technologies and progress toward more advanced, integrated, and generalized AI systems. For instance, the complex AI path moves from Augmented Cognition through CAPS/CHARM and Autonomous Operations to Generative AI Neurospace. The automation path begins with basic Automation and progresses through stages like Cognitive Echelons and Machine Learning, ultimately leading to Adaptive Allocation. The paths converge, implying that these AI approaches may eventually integrate or lead to similar advanced outcomes in AI development. The domain-specific AI path starts with more specialized systems and progresses toward more advanced and adaptable technologies: Auto-pilot, GPWS (Ground Proximity Warning System), and TCAS (Traffic Collision Avoidance System); Augmented Automation, suggesting enhancement of automated systems with additional capabilities, possibly integrating more data or decision-making abilities; Adaptive Automation, implying systems that can adjust their behavior based on changing conditions or requirements; and Future Systems, a broader term suggesting more advanced AI systems that build upon the previous stages.
Lastly, Adaptive Allocation, the final stage shared with the automation path, likely refers to AI systems capable of dynamically allocating resources or making decisions across various domains. This progression traces a path from highly specific, rule-based systems to more flexible and broadly applicable AI technologies in specialized domains (Miller et al., 2023; Miller et al., 2024).
Critical issues of reliability, transparency, and trust are at the core of this evolution, particularly concerning the implementation of AI systems. As the commercial aviation industry cautiously explores AI applications, understanding public perception and acceptance becomes paramount. Of particular interest is how different generational cohorts with distinct technological experiences and attitudes may vary in their trust of AI in aviation contexts. This study investigates the intersection of AI trust and generational attitudes within the promising field of AI adoption in US commercial aviation. By examining the unique characteristics and technological inclinations of various age groups, we seek to uncover insights that could inform the development and implementation of AI systems in aviation, ultimately contributing to this critical industry’s safe and accepted progression.
Publication Title
Trust in Generative Artificial Intelligence: Human-Robot Interaction and Ethical Considerations
DOI
https://doi.org/10.4324/9781003586937
Publisher
Abingdon, Oxon; New York: Routledge
Scholarly Commons Citation
Halawi, L., Miller, M., & Holley, S. (2025). Cultivating Confidence. In J. Paliszkiewicz et al. (Eds.), Trust in Generative Artificial Intelligence: Human-Robot Interaction and Ethical Considerations (pp. 161–175). Routledge. https://doi.org/10.4324/9781003586937