Submitting Campus

Daytona Beach

Student Status

Graduate Student Works

Advisor Name

Andrew Dattel, Ph.D.


Numerous studies have used trolley dilemmas to better understand moral decision making. The first classical dilemma was introduced in 1967 as a philosophical thought experiment (Foot, 1967). It observed how humans decided between the lesser of two evils, sacrificing one person to save many or vice versa, by controlling which track a trolley would travel along. Modified versions of the trolley dilemma have been adapted to human-driven cars to help decide how an automated vehicle’s ethical decisions should be set (Faulhaber et al., 2018). Other recent work has shown that humans, within a simulated environment, prove to be more utilitarian than they claim to be (Patil, Cogoni, Zangrando, Chittaro, & Silani, 2014). Contissa, Lagioia, and Sartor (2017) outlined various proposals and observations regarding what kind of ethical technology should be implemented in automated vehicles to address this issue. Automated vehicles (AVs) continue to increase in today’s market, and researchers have modified trolley dilemmas to account for them. Analysis of participants’ decision making within these modified trolley dilemmas has led researchers to propose numerous ethical theories upon which to base algorithms that may be programmed into an AV, dictating what actions the vehicle will take in an inevitable crash event. One suggestion is to allow users of AVs to pre-program their own customizable algorithm, but this may cause unwanted outcomes, and a mandatory ethics setting (MES) has been suggested as best for society (Gogoll & Müller, 2017). Limited research has explored exactly how the public’s affect would respond to being given the ability to program their own algorithm. This study compared differences in affect and willingness to ride (WTR) between participants in either a congruent or incongruent group using an AV in a modified trolley dilemma.
The congruent group rode in a simulated AV that performed actions consistent with the algorithm the user preselected; the incongruent group rode in a simulated AV that performed actions opposite to the preselected algorithm. The groups thus represented either complete control (congruent) or no control (incongruent) over the selection of the algorithm. The study used an experimental 2 x 2 mixed design with 44 participants. Tests were conducted in the CERTS lab at Embry-Riddle Aeronautical University using STISIM Drive simulator software and a Logitech driving assembly. Statistical analysis included a 2 x 2 mixed ANOVA. Affect and WTR scores were predicted to differ significantly between the congruent and incongruent groups, and the emotions of happiness, anger, and fear were expected to differ significantly between groups. Results showed that although the null hypotheses were retained, several two-way interactions were revealed between the following categories:

  • SUFES Happiness and congruency group: F(1, 42) = 5.142, p = .029, η² = .109
  • Affect Total and congruency group: F(1, 42) = 4.199, p = .047, η² = .091
  • Affect Favorable and congruency group: F(1, 42) = 10.017, p = .003, η² = .193
  • WTR Confident and congruency group: F(1, 42) = 6.021, p = .018, η² = .125
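The interaction F ratios reported above come from a 2 (between-subjects: congruency group) x 2 (within-subjects) mixed ANOVA. As a minimal sketch of how such an interaction term is computed, the function below partitions the sums of squares for a two-group, two-level repeated-measures design; the function name, variable layout, and sample data are illustrative assumptions, not the study's actual analysis code or data.

```python
import numpy as np

def mixed_anova_2x2(group1, group2):
    """Interaction term of a 2 (between) x 2 (within) mixed ANOVA.

    group1, group2: (n_subjects, 2) arrays holding each subject's score
    at the two within-subject levels. Returns the sum-of-squares
    partition, the interaction degrees of freedom, and the interaction
    F ratio (MS_interaction / MS_within_error).
    """
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    data = np.vstack([g1, g2])                    # (N, 2) scores
    groups = np.array([0] * len(g1) + [1] * len(g2))
    N, b = data.shape                             # subjects, within levels
    a = 2                                         # between levels
    gm = data.mean()                              # grand mean

    grp_means = np.array([data[groups == k].mean() for k in range(a)])
    lvl_means = data.mean(axis=0)                 # within-level means
    cell_means = np.array([[data[groups == k, j].mean() for j in range(b)]
                           for k in range(a)])
    subj_means = data.mean(axis=1)
    n_per = np.array([np.sum(groups == k) for k in range(a)])

    # Partition the total variability into the five classic components.
    ss_between = b * np.sum(n_per * (grp_means - gm) ** 2)
    ss_subj = b * np.sum((subj_means - grp_means[groups]) ** 2)
    ss_within = N * np.sum((lvl_means - gm) ** 2)
    ss_inter = np.sum(n_per[:, None] * (cell_means - grp_means[:, None]
                                        - lvl_means[None, :] + gm) ** 2)
    resid = (data - cell_means[groups] - subj_means[:, None]
             + grp_means[groups][:, None])
    ss_err = np.sum(resid ** 2)                   # within-subjects error

    df_inter, df_err = (a - 1) * (b - 1), (N - a) * (b - 1)
    F = (ss_inter / df_inter) / (ss_err / df_err)
    return {"ss_total": float(np.sum((data - gm) ** 2)),
            "ss_parts": float(ss_between + ss_subj + ss_within
                              + ss_inter + ss_err),
            "df": (df_inter, df_err), "F": float(F)}
```

With 44 participants split into two groups, the interaction degrees of freedom work out to (1, 42), matching the F statistics listed above.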


Daytona Beach, FL