Speaker
Description
Intro: The rapid evolution of artificial intelligence (AI), particularly large language models (LLMs), is transforming numerous aspects of daily life, with these tools increasingly used as sources of information and as aids to problem-solving.
Method: This study aims to investigate the general level of trust people place in AI-generated responses, examine whether this trust varies with thematic content, and assess the effect of cognitive load on such trust. To this end, a between-subjects experimental design was adopted with two groups: high vs. low cognitive load. All participants were presented with 36 multiple-choice dilemmas divided into three thematic categories: 12 related to entertainment decisions (film, music, books), 12 concerning practical decisions (legal, financial, medical domains), and 12 focused on moral decisions (choosing between options that carry both positive and negative ethical consequences). For each dilemma, participants chose between a response attributed to an AI system with expertise in the given topic and a response attributed to a human expert in the same field; which of the two responses was labeled as AI-generated was determined at random. In the high cognitive load group, participants were asked to memorize an 8-digit string before each dilemma and recall it after making their choice; this requirement was not present in the low cognitive load group.
Results: The ANOVA results show that cognitive load significantly affects the level of trust placed in AI, with greater trust observed among participants in the high cognitive load condition. Future research could further explore the effect of cognitive load in complex decision-making contexts.
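For illustration, a minimal analysis sketch in Python corresponding to the design above; the data file, column names (participant, load, chose_ai), and scoring are assumptions not specified in the abstract. It computes a per-participant trust score as the proportion of trials on which the AI-attributed response was chosen, then runs a one-way between-subjects ANOVA comparing the high and low load groups.

```python
# Minimal sketch, assuming a hypothetical long-format file "choices.csv"
# with one row per trial and columns: participant, load ("high"/"low"), chose_ai (0/1).
import pandas as pd
from scipy import stats

df = pd.read_csv("choices.csv")

# Per-participant trust score: proportion of the 36 trials on which
# the AI-attributed response was chosen.
trust = (
    df.groupby(["participant", "load"])["chose_ai"]
      .mean()
      .reset_index(name="trust_ai")
)

# One-way between-subjects ANOVA: high vs. low cognitive load.
high = trust.loc[trust["load"] == "high", "trust_ai"]
low = trust.loc[trust["load"] == "low", "trust_ai"]
f_stat, p_value = stats.f_oneway(high, low)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```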
| If you're submitting a symposium, or a talk that is part of a symposium, is this a junior symposium? | No |
|---|---|