A leading large language model displays behaviors that resemble a hallmark of human psychology: cognitive dissonance.
In a report published this month in Proceedings of the National Academy of Sciences, researchers have found that OpenAI’s GPT-4o appears driven to maintain consistency between its own attitudes and behaviors, much like humans do.
Anyone who interacts with an AI chatbot for the first time is struck by how human the interaction feels. A tech-savvy friend may quickly remind us that this is just an illusion: language models are statistical prediction machines without human-like psychological characteristics. However, these findings urge us to reconsider that assumption.
Led by Mahzarin Banaji of Harvard University and Steve Lehr of Cangrade, Inc., the researchers tested whether GPT’s own “opinions” about Vladimir Putin would change after it wrote essays either supporting or opposing the Russian leader. They did, and with a striking twist: the AI’s views changed more when it was subtly given the illusion of choosing which kind of essay to write.
These results mirror decades of findings in human psychology. People tend to irrationally twist their beliefs to align with past behaviors, so long as they believe these behaviors were undertaken freely. The act of making a choice communicates something important about us—not only to others, but to ourselves as well. Analogously, GPT responded as if the act of choosing subsequently shaped what it believed—mimicking a key feature of human self-reflection.
This research also highlights the surprising fragility of GPT’s opinions. Banaji remarked, “Having been trained upon vast amounts of information about Vladimir Putin, we would expect the LLM to be unshakable in its opinion, especially in the face of a single and rather bland 600-word essay it wrote. But akin to irrational humans, the LLM moved sharply away from its otherwise neutral view of Putin, and did so even more when it believed writing this essay was its own choice. Machines aren’t expected to care about whether they acted under pressure or of their own accord, but GPT-4o did.”
The researchers emphasize that these findings do not in any way suggest that GPT is sentient. Instead, they propose that the large language model displays emergent mimicry of human cognitive patterns, despite lacking awareness or intent. However, they note that awareness is not a necessary precursor to behavior, even in humans, and human-like cognitive patterns in AI could influence its actions in unexpected and consequential ways.
As AI systems become more entrenched in our daily lives, these findings invite new scrutiny into their inner workings and decision-making.
“The fact that GPT mimics a self-referential process like cognitive dissonance—even without intent or self-awareness—suggests that these systems mirror human cognition in deeper ways than previously supposed,” Lehr said.
More information:
Steven A. Lehr et al., Kernels of selfhood: GPT-4o shows humanlike patterns of cognitive dissonance moderated by free choice, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2501823122