individual
What campus are you from?
Daytona Beach
Authors' Class Standing
Matthew Bernhardt, Senior
Lead Presenter's Name
Matthew Bernhardt
Faculty Mentor Name
Elizabeth Lazzara
Abstract
The rapid growth in popularity of artificial intelligence, especially large language models (LLMs), has fueled a widespread desire to harness this technology for a wide range of purposes. There are, however, a variety of concerns surrounding the adoption of LLMs that warrant an examination of the factors that influence user trust in these technologies. This study conducts a literature review of research on LLMs to evaluate the impact that design features such as modality, user interface, anthropomorphism, and explainability have on user trust. During analysis of the included articles, the striking heterogeneity of the definitions and measures used for trust emerged as a challenge, warranting a robust examination of this subject before further analysis could be conducted.
Did this research project receive funding support from the Office of Undergraduate Research?
No
Designing for Trust: Evaluating the Conceptualizations of Trust in Literature Towards LLM Design