AIML Research Seminar: Perceiving Mind in Machine
- Date: Tue, 23 Sep 2025, 10:30 am - 11:15 am
- Location: AIML
- Speaker: Oliver Lack, PhD student
Abstract: Humans are prone to anthropomorphising AI, especially as systems grow more human-like. Perceiving mind and agency in machines raises concerns about trust, attachment, manipulation, and other ethical and behavioural consequences. The long-term effects of making machines more human-like are potentially profound, likely shaping how we assign moral responsibility and moral significance to entities. Existing theories related to anthropomorphism, such as Theory of Mind, dimensional models of Mind Perception, the Intentional Stance, and Mindreading, are vexed and overlapping. The state of anthropomorphism theory, together with a lack of ecologically valid empirical work, complicates the prediction and interpretation of the consequences of human-like AI systems. This talk examines what current theory can tell us about the consequences of advancing human-like chat/assistant systems, and about what should make a system more human-like. To better answer such questions, and to advance empirical work in human-AI interaction more broadly, the talk will introduce chatPsych: an open-source AI interface built to facilitate cognitive, behavioural, and psychological research. This new research tool connects to a wide range of prevalent chat/assistant models, enabling extensive interaction and survey data collection and opening up many opportunities for new experimental designs. Lastly, I will present evidence from a chatPsych experiment, including unexpected findings on how the temperature parameter and system message can shape human perceptions of system unpredictability. These findings offer insights towards establishing the influence of perceived unpredictability on anthropomorphism.

AIML PhD student Oliver Lack presents before the AIML community.