Presentation Type

Poster

Abstract

Many factors affect how much trust a person places in a robotic system they work with. In particular, a robot's reliability is currently the leading predictor of trust in Human-Robot Interaction (HRI) (Hancock et al., 2011). Because robots are built and programmed by fallible humans, malfunctions will likely occur periodically during operation. However, the question remains how a robot's decision-making capacity (level of autonomy), combined with the level of information (status and projected end state) it provides to the operator, affects operator trust in the system. Previous studies have examined the level and modality of information provided (Sanders et al., 2014), but none has yet combined system malfunctions, system autonomy level, and level of information. The findings of this study will be principally salient for those operating robots in team settings (e.g., the military). In particular, they may help programmers decide how much information a robot should give its teammates, and what level of autonomy it needs, in order to engender appropriate trust in the operator. Study design and implementation will be discussed along with preliminary data.


Robot Autonomy and Malfunction Effects on Trust During Human-Robot Interaction

