Author: Molina, Maria D.; Sundar, S. Shyam
Description: When evaluating automated systems, some users apply the “positive machine heuristic” (i.e., machines are more accurate and precise than humans), whereas others apply the “negative machine heuristic” (i.e., machines lack the ability to make nuanced subjective judgments), but little is known about the characteristics that predict whether a user will apply the positive or negative machine heuristic. We conducted a study in the context of content moderation and found that individual differences relating to trust in humans, fear of artificial intelligence (AI), power usage, and political ideology predict whether a user will invoke the positive or negative machine heuristic. For example, users who distrust other humans tend to be more positive toward machines. Our findings advance theoretical understanding of user responses to AI systems for content moderation and hold practical implications for the design of interfaces that appeal to users who are differentially predisposed toward trusting machines over humans.
Subject headings: Human-AI interaction; Machine heuristic; Content moderation; Individual differences
Publication year: 2024
Journal or book title: New Media & Society
Volume: 26
Issue: 6
Pages: 3638-3656
Find the full text: https://journals.sagepub.com/doi/abs/10.1177/14614448221103534
Find more like this one (cited by): https://scholar.google.com/scholar?cites=18331606262731667499&as_sdt=1000005&sciodt=0,16&hl=en
Serial number: 4029