Research

Real-time brain-controlled selective hearing isolates one voice in a crowd in first human study

Vishal Choudhari, Maximilian Nentwich, and colleagues at Columbia University’s Zuckerman Institute, working with collaborators at NYU, UCSF, and Northwell Health, published the first human-study evidence that a brain-computer interface decoding a listener’s auditory attention can isolate a single voice from a multi-talker environment in real time. The paper, “Real-time brain-controlled selective hearing enhances speech perception in multi-talker environments,” appeared in Nature Neuroscience on 11 May 2026. The technology is built around auditory attention decoding (AAD), an approach that reads neural signals from the auditory cortex to detect which speaker a listener is focused on and selectively enhances that voice in the listener’s audio output.

How the study worked

The team used intracranial electrodes already implanted in epilepsy patients for clinical reasons, recording neural activity from the auditory cortex while subjects listened to two overlapping conversations. The system detected the attended speaker from the neural signature produced by the auditory cortex, enhanced that voice in real time, and tracked both instructed attention switches and self-initiated attention shifts. Subjects reported reduced listening effort and improved speech intelligibility, and consistently preferred the enhanced output over unprocessed audio.
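
To make the mechanism concrete, here is a minimal, illustrative Python sketch of one AAD-driven enhancement step. It assumes a speech-separation front end that yields per-talker audio streams and a pre-trained linear stimulus-reconstruction decoder; the paper’s actual pipeline, decoder, and gain settings may differ, and the function names and 9 dB attenuation below are assumptions for illustration only.

```python
import numpy as np

def decode_attended_speaker(neural_window, decoder_weights, speaker_envelopes):
    """Guess which talker the listener is attending to in one analysis window.

    neural_window      -- (time, channels) array of auditory-cortex activity
    decoder_weights    -- (channels,) linear stimulus-reconstruction filter,
                          assumed pre-trained on single-talker data
    speaker_envelopes  -- list of (time,) speech envelopes, one per talker
    """
    # Reconstruct an estimate of the attended speech envelope from neural data.
    reconstructed = neural_window @ decoder_weights

    # Correlate the reconstruction with each talker's envelope; the attended
    # talker should match best.
    scores = [np.corrcoef(reconstructed, env)[0, 1] for env in speaker_envelopes]
    return int(np.argmax(scores)), scores

def remix(separated_sources, attended_idx, boost_db=9.0):
    """Re-mix separated talker streams, attenuating everyone but the attended one."""
    gains = np.full(len(separated_sources), 10 ** (-boost_db / 20.0))
    gains[attended_idx] = 1.0
    return sum(gain * src for gain, src in zip(gains, separated_sources))
```

In practice, AAD decoders are usually trained on time-lagged neural features, the correlation is computed over short sliding windows so the output gain can follow attention switches, and a speech-separation stage is needed to produce the per-talker streams in the first place.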

Auditory attention decoding has been an active research method for almost a decade, with Mesgarani’s earlier work and subsequent papers from other labs establishing the underlying neuroscience. Until this paper, the demonstrations had been offline: the attended speaker was reconstructed from neural recordings after the fact. The Nature Neuroscience publication is the first peer-reviewed evidence that AAD works in real time in human subjects.

Why this matters for consumer audio

Conventional hearing aids amplify everything in the surrounding environment, which often makes speech harder rather than easier to follow when multiple voices overlap. Brain-controlled selective hearing inverts that approach. Instead of filtering audio at the device level, the system reads the listener’s neural focus and selectively enhances the voice the listener is already trying to attend to. The clinical-application target is hearing-aid users in noisy environments such as restaurants, classrooms, and public transit. The broader category target is anyone wearing an audio device that could carry the same processing layer, including consumer headphones, augmented-reality glasses with audio output, and future neural-controlled earpieces.

The author network

Nima Mesgarani at Columbia leads the AAD work as corresponding author. Edward F. Chang at UCSF is named on the paper, which links the work to the speech-BCI programmes running at UCSF, including the Pancho and Ann Johnson neuroprosthetic speech projects. Ashesh Mehta and Stephan Bickel at Northwell Health provided the intracranial recordings, and Adeen Flinker is named alongside Daniel Friedman from NYU. Jose Herrero rounds out the team. The cross-institutional collaboration is the kind that typically signals a research programme entering its clinical-readiness phase rather than its early-investigation phase.

What this is not yet

The study used intracranial electrodes implanted for medical reasons, a setup that cannot translate directly into a consumer product. The next generation of the work needs to reproduce these results on non-invasive measurement systems, with scalp EEG or ear-EEG as the leading candidates. The Mesgarani lab and other groups have published prior work on ear-EEG and surface electrode arrays for AAD, and the path to a commercial brain-controlled hearing aid runs through that translation. Until non-invasive AAD reaches the same selectivity that intracranial recordings produced in this paper, the technology stays in the research-grade domain.

How it fits the broader BCI thesis

Apple announced its Brain-Computer Interface Human Interface Device protocol on 13 May 2025 with Synchron as the integrating partner, positioning Apple as the consumer-side endpoint of the BCI category at the operating-system level. The Columbia AAD work extends the consumer-audio BCI thread to the input side, where the device reads neural attention rather than rendering neural-controlled output. Hearing-aid manufacturers (Cochlear, Sonova, GN, Demant, Starkey) and consumer-audio platforms (Apple, Samsung, Google) are the commercial counterparties watching this research path most closely. If non-invasive sensing can eventually deliver the selectivity intracranial electrodes produced in this paper, brain-controlled selective hearing becomes a commercial product category. If it cannot, the technology stays alongside the rest of clinical-stage BCI in the implantable-electrode domain.

Sources: “Real-time brain-controlled selective hearing enhances speech perception in multi-talker environments,” Nature Neuroscience, 11 May 2026; Columbia Zuckerman Institute announcement, May 2026; EurekAlert release on the Columbia / Mesgarani lab study; Medical Xpress coverage of the multi-talker isolation results.
