Date of Award

9-22-2025

Document Type

Open Access Dissertation

Department

Linguistics

First Advisor

Amit Almor

Abstract

Over the past decade, deep learning AI systems have gone from academic obscurity to near cultural centrality in the developed world. As these technologies become increasingly integrated into daily life, there is growing cause for concern about AI anthropomorphism—i.e., the perception of AIs as more human than they really are. Despite the potential risks, AI developers continue to market their technologies using agentive language that seems designed to encourage anthropomorphism. The current dissertation uses behavioral and eye-tracking methods to examine the relationship between agentive linguistic framing and AI anthropomorphism in both controlled judgments and automatic inferences. The results show that exposure to agentive linguistic framing can 1) increase the perceived responsibility of AIs for both negative and positive outcomes, 2) cause readers to see AIs as agents rather than objects, as evidenced by a decrease in negative agency bias, and 3) facilitate the processing of animate verbs with AI subjects. Overall, these findings show that agentive linguistic framing carries a high risk of causing unconscious AI anthropomorphism, especially for readers with preexisting anthropomorphic beliefs.

Rights

© 2025, Dawson Petersen

Included in

Linguistics Commons
