UW News

May 9, 2025

AI headphones translate multiple speakers at once, cloning their voices in 3D sound


Tuochao Chen, a University of Washington doctoral student, recently toured a museum in Mexico. Chen doesn’t speak Spanish, so he ran a translation app on his phone and pointed the microphone at the tour guide. But even in a museum’s relative quiet, the surrounding noise was too much. The resulting text was useless.

Various technologies promising fluent translation have emerged lately, but none of them solved Chen’s problem with public spaces. Meta’s new glasses, for instance, work only with an isolated speaker; they play an automated voice translation after the speaker finishes.

Now, Chen and a team of UW researchers have designed a headphone system that translates several speakers at once, while preserving the direction and qualities of people’s voices. The team built the system, called Spatial Speech Translation, with off-the-shelf noise-cancelling headphones fitted with microphones. The team’s algorithms separate out the different speakers in a space and follow them as they move, translate their speech and play it back with a 2-4 second delay.
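At a high level, the system works as a per-speaker pipeline: separate and localize each voice, translate it, then play the translation back from the same direction. The sketch below only illustrates that structure in Python; the function names and components are placeholders standing in for the team’s models, not their actual code.

```python
# Illustrative structure only: the separation, translation and spatial-playback
# components below are placeholders for the models described in the paper.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class SeparatedSpeaker:
    audio: np.ndarray       # mono waveform for one speaker
    azimuth_deg: float      # estimated direction of arrival, 0-360 degrees


def separate_and_localize(mic_frames: np.ndarray) -> List[SeparatedSpeaker]:
    """Placeholder: split the microphone mix into per-speaker tracks with directions."""
    raise NotImplementedError


def translate_speech(audio: np.ndarray, src: str, dst: str) -> np.ndarray:
    """Placeholder: speech-to-speech translation that keeps the speaker's voice."""
    raise NotImplementedError


def play_from_direction(audio: np.ndarray, azimuth_deg: float) -> None:
    """Placeholder: render the clip binaurally so it arrives from azimuth_deg."""
    raise NotImplementedError


def process_chunk(mic_frames: np.ndarray, src: str = "es", dst: str = "en") -> None:
    # 1. Separate every active speaker in the chunk and estimate their directions.
    for speaker in separate_and_localize(mic_frames):
        # 2. Translate each speaker independently, preserving voice qualities.
        translated = translate_speech(speaker.audio, src, dst)
        # 3. Re-spatialize the translation at the speaker's tracked direction.
        play_from_direction(translated, speaker.azimuth_deg)
```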

The team presented its research Apr. 30 at the ACM CHI Conference on Human Factors in Computing Systems in Yokohama, Japan. The code for the proof-of-concept device is available for others to build on. “Other translation tech is built on the assumption that only one person is speaking,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But in the real world, you can’t have just one robotic voice talking for multiple people in a room. For the first time, we’ve preserved the sound of each person’s voice and the direction it’s coming from.”


The system incorporates three innovations. First, when it’s turned on, it immediately detects how many speakers are in an indoor or outdoor space.

“Our algorithms work a little like radar,” said lead author Chen, a UW doctoral student in the Allen School. “So it’s scanning the space in 360 degrees and constantly determining and updating whether there’s one person or six or seven.”
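One way to picture that kind of scan, under simplified assumptions, is a delay-and-sum sweep over candidate directions: steer a two-microphone beam around the circle and count the directions where speech energy peaks. The team’s system uses its own localization methods; the two-microphone geometry, thresholds and peak picking below are illustrative assumptions only.

```python
# A minimal sketch of a 360-degree direction scan with delay-and-sum beamforming.
# The microphone spacing, thresholds and peak picking are illustrative assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.15       # assumed distance between the two ear microphones (m)
SAMPLE_RATE = 16_000


def steering_delay(angle_deg: float) -> float:
    """Inter-microphone delay (seconds) for a source at the given azimuth."""
    return MIC_SPACING * np.cos(np.deg2rad(angle_deg)) / SPEED_OF_SOUND


def scan_directions(left: np.ndarray, right: np.ndarray, step_deg: int = 5):
    """Return the steered beam energy for each candidate direction."""
    angles = np.arange(0, 360, step_deg)
    energies = []
    for angle in angles:
        shift = int(round(steering_delay(angle) * SAMPLE_RATE))
        aligned = left + np.roll(right, shift)   # delay-and-sum beam
        energies.append(float(np.mean(aligned ** 2)))
    return angles, np.array(energies)


def count_speakers(left: np.ndarray, right: np.ndarray, rel_threshold: float = 0.7):
    """Rough speaker count: number of strong local peaks in the energy scan."""
    angles, energy = scan_directions(left, right)
    peaks = ((energy > rel_threshold * energy.max())
             & (energy >= np.roll(energy, 1))
             & (energy >= np.roll(energy, -1)))
    return int(peaks.sum()), angles[peaks]
```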

The system then translates the speech and maintains the expressive qualities and volume of each speaker’s voice while running on a mobile device with an Apple M2 chip, such as a laptop or the Apple Vision Pro. (The team avoided using cloud computing because of the privacy concerns with voice cloning.) Finally, when speakers move their heads, the system continues to track the direction and qualities of their voices as they change.
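To give a flavor of that last piece, the sketch below pans a translated mono clip toward a tracked direction using a simple interaural time and level difference model in Python. The head-radius constant, the Woodworth-style delay formula and the 6 dB level difference are textbook approximations chosen for illustration, not the system’s actual spatial rendering.

```python
# Illustrative binaural panning with simple interaural time and level differences.
# Constants and formulas are textbook approximations, not the system's renderer.
import numpy as np

SAMPLE_RATE = 16_000
HEAD_RADIUS = 0.09        # assumed head radius in meters
SPEED_OF_SOUND = 343.0    # m/s


def spatialize(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Return a stereo (N, 2) signal panned toward azimuth_deg (0 = front, + = right)."""
    az = np.deg2rad(azimuth_deg)
    # Woodworth-style interaural time difference for the far ear.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (np.sin(abs(az)) + abs(az))
    delay_samples = int(round(itd * SAMPLE_RATE))
    # Simple level difference: the far ear is up to about 6 dB quieter.
    near_gain = 1.0
    far_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20)
    delayed = np.concatenate([np.zeros(delay_samples), mono])[: len(mono)]
    if azimuth_deg >= 0:      # source on the right: left ear is the far ear
        left, right = far_gain * delayed, near_gain * mono
    else:                     # source on the left: right ear is the far ear
        left, right = near_gain * mono, far_gain * delayed
    return np.stack([left, right], axis=1)
```

Because the direction estimate is updated continuously, re-running a rendering step like this on each audio chunk is what lets the translated voice appear to move with the speaker.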

The system functioned when tested in 10 indoor and outdoor settings. And in a 29-participant test, the users preferred the system over models that didn’t track speakers through space.

In a separate user test, most participants preferred a delay of 3-4 seconds, since the system made more errors when translating with a delay of 1-2 seconds. The team is working to shrink that delay in future iterations. The system currently works only on commonplace speech, not specialized language such as technical jargon. For this paper, the team worked with Spanish, German and French — but previous work on translation models has shown they can be trained to translate around 100 languages.

“This is a step toward breaking down the language barriers between cultures,” Chen said. “So if I’m walking down the street in Mexico, even though I don’t speak Spanish, I can translate all the people’s voices and know who said what.”

Qirui Wang, a research intern at HydroX AI and a UW undergraduate in the Allen School while completing this research, and Runlin He, a UW doctoral student in the Allen School, are also co-authors on this paper. This research was funded by a Moore Inventor Fellow award and a UW CoMotion Innovation Gap Fund.

For more information, contact the researchers at babelfish@cs.washington.edu
