AI-powered headphones offer group translation with voice cloning and 3D spatial audio

Credit: University of Washington

Tuochao Chen, a University of Washington doctoral student, recently toured a museum in Mexico. Chen doesn’t speak Spanish, so he ran a translation app on his phone and pointed the microphone at the tour guide. But even in a museum’s relative quiet, the surrounding noise was too much. The resulting text was useless.

Various technologies promising fluent translation have emerged lately, but none of them solved Chen's problem of noisy public spaces. Meta's new glasses, for instance, work only with an isolated speaker, and they play an automated voice translation only after the speaker finishes.

Now, Chen and a team of UW researchers have designed a headphone system that translates several speakers at once while preserving the direction and qualities of each person's voice. The team built the system, called Spatial Speech Translation, with off-the-shelf noise-canceling headphones fitted with microphones. The team's algorithms separate the different speakers in a space, follow them as they move, translate their speech, and play it back with a 2- to 4-second delay.
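Conceptually, that description implies a three-stage pipeline run continuously over each chunk of microphone audio: localize and separate the voices, translate each one, then play each translation back from its source direction. The sketch below is a hypothetical outline of those stages, not the team's released code; every function and type name here is a placeholder.

```python
# Hypothetical outline of the three-stage pipeline described above.
# None of these names come from the released code; they are placeholders.

from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class SpeakerStream:
    azimuth_deg: float   # estimated direction of arrival
    audio: np.ndarray    # mono waveform for this speaker


def separate_speakers(binaural_frame: np.ndarray) -> List[SpeakerStream]:
    """Stage 1: split a stereo microphone frame into per-speaker
    streams, each tagged with its estimated direction (placeholder)."""
    raise NotImplementedError


def translate_speech(stream: SpeakerStream) -> SpeakerStream:
    """Stage 2: speech-to-speech translation that clones the speaker's
    voice, running entirely on-device (placeholder)."""
    raise NotImplementedError


def render_binaural(streams: List[SpeakerStream]) -> np.ndarray:
    """Stage 3: re-spatialize each translated voice at its source
    direction and mix into a stereo output frame (placeholder)."""
    raise NotImplementedError


def process_frame(binaural_frame: np.ndarray) -> np.ndarray:
    streams = separate_speakers(binaural_frame)          # who is talking, and where
    translated = [translate_speech(s) for s in streams]  # one translation per voice
    return render_binaural(translated)                   # play each from its direction
```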

University of Washington researchers designed a headphone system that translates several people speaking at once, following them as they move and preserving the direction and qualities of their voices. The team built the system, called Spatial Speech Translation, with off-the-shelf noise-canceling headphones fitted with microphones. Credit: Chen et al./CHI '25

The team presented its research Apr. 30 at the ACM CHI Conference on Human Factors in Computing Systems in Yokohama, Japan. The code for the proof-of-concept device is available for others to build on. “Other translation tech is built on the assumption that only one person is speaking,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But in the real world, you can’t have just one robotic voice talking for multiple people in a room. For the first time, we’ve preserved the sound of each person’s voice and the direction it’s coming from.”

The system introduces three innovations. First, when turned on, it immediately detects how many speakers are present in an indoor or outdoor space.

“Our algorithms work a little like radar,” said lead author Chen, a UW doctoral student in the Allen School. “So they’re scanning the space in 360 degrees and constantly determining and updating whether there’s one person or six or seven.”
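As a rough illustration of that radar-like scan, a classic way to count and localize talkers with two ear-mounted microphones is to steer the pair across candidate directions and count the peaks in correlation energy. The sketch below is a generic steered-response approach under assumed constants (mic spacing, sample rate), not the paper's algorithm; two microphones alone can't disambiguate front from back, so it scans only a frontal half-plane rather than the full 360 degrees.

```python
# Generic steered-response sketch of speaker counting, not the authors'
# method. Assumed constants: ~0.18 m mic spacing, 16 kHz sample rate.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.18       # m, assumed ear-to-ear distance on a headset
SAMPLE_RATE = 16_000


def steered_power(left: np.ndarray, right: np.ndarray, angles_deg) -> np.ndarray:
    """Correlation energy of the two ear signals at the time lag each
    candidate direction of arrival implies."""
    powers = []
    for angle in np.deg2rad(angles_deg):
        # Interaural time difference for a source at this azimuth.
        itd = MIC_SPACING * np.sin(angle) / SPEED_OF_SOUND
        lag = int(round(itd * SAMPLE_RATE))
        # Shift one ear's signal by that lag and measure alignment.
        powers.append(float(np.dot(left, np.roll(right, lag))))
    return np.array(powers)


def count_speakers(left: np.ndarray, right: np.ndarray, threshold: float = 0.5):
    """Count local maxima in the steered-response curve that exceed a
    fraction of the global peak; each surviving peak ~ one speaker."""
    angles = np.arange(-90, 91, 5)   # frontal half-plane scan, 5-degree steps
    power = steered_power(left, right, angles)
    power = power / (np.max(np.abs(power)) + 1e-9)
    peaks = [
        i for i in range(1, len(power) - 1)
        if power[i] > power[i - 1]
        and power[i] > power[i + 1]
        and power[i] > threshold
    ]
    return len(peaks), angles[peaks]
```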

The system then translates the speech and maintains the expressive qualities and volume of each speaker's voice while running entirely on-device, on hardware with an Apple M2 chip such as laptops and the Apple Vision Pro. (The team avoided using cloud computing because of the privacy concerns that voice cloning raises.) Finally, when speakers move their heads, the system continues to track the direction and qualities of their voices as they change.
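Keeping each translated voice anchored to its speaker means re-rendering the playback every frame at that speaker's latest estimated angle. Below is a minimal sketch of such re-spatialization using only the two simplest binaural cues, interaural time and level differences, rather than the measured head-related transfer functions a production system would likely use; the ear spacing and sample rate are assumptions.

```python
# Minimal ITD/ILD spatialization sketch, not the paper's renderer.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
EAR_SPACING = 0.18       # m, assumed distance between the two ears
SAMPLE_RATE = 16_000


def spatialize(mono: np.ndarray, azimuth_deg: float):
    """Return (left, right) channels that place `mono` at `azimuth_deg`
    (0 = straight ahead, positive = listener's right)."""
    az = np.deg2rad(azimuth_deg)
    # Interaural time difference: the far ear hears the sound slightly later.
    itd = EAR_SPACING * np.sin(az) / SPEED_OF_SOUND
    lag = int(round(abs(itd) * SAMPLE_RATE))
    pad = np.zeros(lag)
    if itd >= 0:  # source on the right: delay the left ear
        left, right = np.concatenate([pad, mono]), np.concatenate([mono, pad])
    else:         # source on the left: delay the right ear
        left, right = np.concatenate([mono, pad]), np.concatenate([pad, mono])
    # Interaural level difference via constant-power panning.
    g_left = np.sqrt((1 - np.sin(az)) / 2)
    g_right = np.sqrt((1 + np.sin(az)) / 2)
    return g_left * left, g_right * right
```

Rendering each output frame with the most recent angle estimate would let a speaker who walks across the room drag their translated voice along with them, which matches the tracking behavior described above.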

The system functioned when tested in 10 indoor and outdoor settings. And in a 29-participant test, the users preferred the system over models that didn’t track speakers through space.

In a separate user test, most participants preferred a delay of 3-4 seconds, since the system made more errors when translating with a delay of 1-2 seconds. The team is working to shrink that delay in future iterations. The system currently handles only commonplace speech, not specialized language such as technical jargon. For this paper, the team worked with Spanish, German and French, but previous work on translation models has shown they can be trained to translate around 100 languages.
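The delay participants were choosing is essentially a buffering window: the longer the system waits before committing to a translation, the more context the model sees and the fewer errors it tends to make. A toy sketch of that knob, where `translate_chunk` is a hypothetical stand-in for any on-device speech-translation model:

```python
# Toy illustration of the latency/accuracy knob: buffer `delay_s` seconds
# of audio before each translation call. A 1-2 s buffer reacts faster but
# gives the model less context; 3-4 s lags more but translates better.

import numpy as np


def translate_chunk(chunk: np.ndarray) -> np.ndarray:
    """Placeholder for an on-device speech-to-speech translation model."""
    return chunk  # identity stand-in


def stream_translate(frames, sample_rate=16_000, delay_s=3.0):
    """Yield one translated chunk per `delay_s` seconds of input frames."""
    buffered, samples = [], 0
    for frame in frames:
        buffered.append(frame)
        samples += len(frame)
        if samples >= delay_s * sample_rate:
            yield translate_chunk(np.concatenate(buffered))
            buffered, samples = [], 0
```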

“This is a step toward breaking down the language barriers between cultures,” Chen said. “So if I’m walking down the street in Mexico, even though I don’t speak Spanish, I can translate all the people’s voices and know who said what.”

Qirui Wang, a research intern at HydroX AI who was a UW undergraduate in the Allen School while completing this research, and Runlin He, a UW doctoral student in the Allen School, are also co-authors on the paper.

More information:
Tuochao Chen et al., Spatial Speech Translation: Translating Across Space With Binaural Hearables, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (2025). DOI: 10.1145/3706598.3713745

Provided by University of Washington


Citation: AI-powered headphones offer group translation with voice cloning and 3D spatial audio (2025, May 10), retrieved 10 May 2025.