A team of roboticists at the University of Canberra’s Collaborative Robotics Lab, working with a sociologist colleague from The Australian National University, has found that humans interacting with an LLM-enabled humanoid robot had mixed reactions. In their paper published in the journal Scientific Reports, the group describes what they observed while watching interactions with an LLM-enabled humanoid robot stationed at an innovation festival, along with feedback from the people who participated.
Over the past couple of years, LLMs such as ChatGPT have taken the world by storm, with some going so far as to suggest that the new technology will soon make many human workers obsolete. Despite such fears, scientists continue to improve the technology, sometimes deploying it in new settings—such as inside an existing humanoid robot. That is what the team in Australia did: they added ChatGPT to the interaction facilities of a robot named Pepper and then stationed the robot at an innovation festival in Canberra, where attendees were encouraged to interact with it.
Before it was given an LLM, Pepper was already capable of moving around autonomously and interacting with people on a relatively simple level. One of its hallmarks is its ability to maintain eye contact. Such abilities, the team suggested, made the robot a good target for testing human interactions with LLM-enabled humanoid robots “in the wild.”
The researchers watched and recorded the interactions between the deployed robot and festival-goers, and also asked each person who had engaged with the robot for feedback, which was likewise recorded.
In studying the recordings, the research team found quite a mix of reactions: some people found the LLM-enabled robot fascinating, while others felt it left much to be desired, with some even offering suggestions for improving the robot and its skills.
The team was able to group the reactions into four major themes: ideas for improvement, emotional responses, expectations, and ideas about the robot's form. The researchers suggest that the widely varying reactions appeared to be tied to preconceptions and to robot glitches, such as taking too long to respond.
Some also thought that LLM abilities did not match up with the advanced design of the robot overall. Many also seemed uneasy with a robot that could maintain eye contact but was unable to recognize or respond to human facial expressions.
More information:
Damith Herath et al, First impressions of a humanoid social robot with natural language capabilities, Scientific Reports (2025). DOI: 10.1038/s41598-025-04274-z
© 2025 Science X Network
Citation: Debut of LLM-enabled humanoid robot at event met with mixed reviews by human attendees (2025, June 10), retrieved 10 June 2025
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.