Is this project an undergraduate, graduate, or faculty project?
Graduate
Individual
What campus are you from?
Daytona Beach
Authors' Class Standing
Graduate
Lead Presenter's Name
Katie Stubits
Faculty Mentor Name
Elizabeth H. Lazzara
Abstract
Semi-automated drug delivery systems, such as closed-loop insulin pumps, smart infusion devices, and patient-controlled analgesia (PCA) tools, are transforming healthcare by improving dosing precision and empowering patients with greater autonomy. However, their safety and efficacy depend critically on trust calibration, the dynamic alignment between patient trust in automation and actual system reliability (Hoff & Bashir, 2015; Lee & See, 2004). Miscalibrated trust can lead to misuse or disuse, jeopardizing patient safety in high-stakes contexts (Fallon et al., 2010). Grounded in theories of human–automation interaction (Lee & See, 2004) and recent empirical research findings (Zheng et al., 2022), this work conceptualizes trust as a malleable construct shaped by transparency, feedback, and training (Adams et al., 2020; Hancock et al., 2023; Yang et al., 2021). Despite its centrality, trust in automation is often under-measured or poorly defined in medical device research. Brzowski and Nathan-Roberts (2019) identified significant inconsistencies in conceptual definitions, measurement methods, and experimental rigor, emphasizing the need for standardized, continuous approaches. Addressing this gap, this work advocates for integrated trust assessment frameworks, combining self-report, behavioral measures, and trust dynamics modeling, to inform both system design and real-time calibration interventions. In insulin delivery and continuous glucose monitoring, automation improves clinical outcomes but introduces user burden, alert fatigue, and shifting expectations (Anandhakrishnan, 2024). Control transitions, such as manual overrides or system failures, pose cognitive and emotional challenges (Parasuraman et al., 2000). This work recognizes the role of patient variability in experience, health literacy, and risk tolerance, acknowledging that trust is not “one size fits all” (Dzindolet et al., 2003).
Ultimately, continuous monitoring and trust adjustment are essential to ensure safe, effective transitions when automation “lets go,” aligning human trust with system capability for safer patient outcomes. This work advances understanding of trust calibration as a continuous and measurable process critical to semi-automated drug delivery safety. Learning objectives include: (1) defining trust calibration as a human factors design priority; (2) identifying influences on trust formation, such as feedback, transparency, and user experience; (3) demonstrating strategies for monitoring and maintaining optimal trust; and (4) supporting recovery during automation handovers. While this work focuses on medical applications, parallels between trust calibration in healthcare and other domains that employ varying levels of automation (e.g., aviation, surface transportation) will be identified and discussed.
Did this research project receive funding support from the Office of Undergraduate Research?
No
When the Machine Lets Go: Designing Drug Delivery Devices Patients Can Rely On