Robot Autonomy and Malfunction Effects on Trust During Human-Robot Interaction

Presentation Type
Poster
Abstract
Many factors affect how much trust a person places in a robotic system they work with. In particular, a robot's reliability is currently among the strongest predictors of trust in Human-Robot Interaction (HRI) (Hancock et al., 2011). Because robots are built and programmed by fallible humans, malfunctions are likely to occur periodically during operation. However, the question remains as to how a robot's decision-making capacity (level of autonomy), combined with the level of information it provides to the operator (status and projected end state), affects operator trust in the system. Previous studies have focused on the level and modality of information provided (Sanders et al., 2014), but none has yet combined system malfunctions, system autonomy level, and level of information. The findings of this study will be especially relevant to those operating robots in team settings (e.g., the military). In particular, they may help programmers decide how much information a robot should give its teammates, and what level of autonomy it needs, to engender appropriate trust in the operator. The study design and implementation will be discussed, along with preliminary data.