Macaque BCI Navigates Virtual Forest With a Single Fixed Decoder Across Task Switches

Researchers at KU Leuven have trained three rhesus macaques to navigate an avatar through a virtual forest using only their implanted brain-computer interfaces, with a single fixed decoder that generalised across changing environments, viewpoints, avatar shapes, and moving targets. The work, published in Science Advances on 17 April (DOI 10.1126/sciadv.adw3876), was led by Peter Janssen, Professor of Neurophysiology at KU Leuven, and is among the longest-running nonhuman-primate motor BCI studies to demonstrate generalisation on this scale without re-training the decoder between conditions.

Each animal was implanted with three 96-channel Utah arrays — one each in primary motor cortex (M1), dorsal premotor cortex (dPMC), and ventral premotor cortex (vPMC) — for 288 channels per monkey. After a training period, the macaques learned to move avatars through virtual forest and city environments by modulating neural activity that would normally drive arm and hand movement. The same decoder that handled forest navigation transferred to city environments, to first-person and third-person views, to different avatar bodies, and to tasks with moving rather than static targets.
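To make the evaluation protocol concrete, here is a minimal toy sketch of what "one fixed decoder across conditions" means: a linear decoder is fit once on one condition, then scored on new conditions with no weight updates. All names, data, and the ridge-regression choice are illustrative assumptions, not the KU Leuven pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 288, 2000  # 3 x 96-channel arrays, as in the study

# Hypothetical ground-truth mapping from neural activity to 2-D velocity
W_true = rng.normal(size=(n_channels, 2))

def make_condition(noise=0.5, gain=1.0):
    """Simulate one task condition: same underlying neural-to-velocity
    mapping, different noise level / signal gain (a stand-in for a changed
    environment, viewpoint, or avatar)."""
    X = rng.normal(size=(n_samples, n_channels)) * gain
    Y = X @ W_true + rng.normal(scale=noise, size=(n_samples, 2))
    return X, Y

# Fit the decoder ONCE on the "forest" condition (ridge regression)
X_train, Y_train = make_condition()
lam = 1.0
W_hat = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_channels),
    X_train.T @ Y_train,
)

# Evaluate the same fixed weights on new conditions, with no retraining
for name, kwargs in [("forest", {}), ("city", {"noise": 0.8}),
                     ("third-person", {"gain": 1.2})]:
    X, Y = make_condition(**kwargs)
    pred = X @ W_hat
    r2 = 1 - np.sum((Y - pred) ** 2) / np.sum((Y - Y.mean(0)) ** 2)
    print(f"{name}: R^2 = {r2:.3f}")
```

In this toy, transfer succeeds because the underlying neural-to-velocity mapping is shared across conditions; the study's claim is that the macaques' cortical control signal behaved analogously, holding a stable mapping while the visual context changed.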

Premotor cortex beat primary motor cortex

The most informative BCI-relevant result from the study is that premotor cortex, both dorsal and ventral, produced more reliable navigation signals than primary motor cortex across the task set. M1 is the standard target for motor BCIs in humans (Neuralink, Synchron, Blackrock, Paradromics all primarily target motor regions, with cursor and speech decoding drawing largely on M1 or adjacent hand-knob representations). The KU Leuven finding does not overturn that choice for the specific motor tasks where M1 has been validated, but it does suggest that for higher-level navigation-type decoding, premotor regions may carry more generalisable signal. For a wheelchair-control or locomotion-restoration indication, which is closer to the avatar-navigation task than to cursor-click decoding, that distinction matters.

Janssen was quoted in the KU Leuven press office announcement saying the brain “adapts to the system surprisingly fast.” The generalisation behaviour — one decoder holding across environment, viewpoint, and avatar changes — is consistent with the animals learning an abstract control mapping rather than one tied to a specific visual context.

Why avatar navigation is a useful proxy

Nonhuman-primate BCI work has historically focused on reaching, grasping, and cursor control — tasks that map cleanly onto human motor prosthetic indications. Avatar navigation in a virtual environment is a different class of task: it is continuous, open-ended, involves changing visual scenes, and resembles the motor-planning problem of piloting a wheelchair or a teleoperated robot more than it resembles a pick-and-place arm movement. The KU Leuven group argues that demonstrating decoder stability across this broader task space is a prerequisite for clinical systems that will have to operate in uncontrolled real-world environments with changing visual and motor demands.

The team positions the work as two years out from human trials, with ALS and Parkinson’s disease named as initial target indications. That timeline is a research-team estimate, not a regulatory commitment, and sits on top of the usual nonhuman-primate to human translation risks. The animals in this study were healthy; patients with advanced ALS or Parkinson’s present different decoding problems because the underlying motor cortex representations degrade or fluctuate.

How this sits in the field

Three 96-channel Utah arrays per animal is a denser implant than any currently approved human device. Paradromics, Neuralink, and Precision Neuroscience are all working toward higher channel counts in human trials, but the field is still scaling up. The KU Leuven result is a useful upper bound on what’s achievable when channel count is not the limiting factor, and a data point on which cortical regions are worth pursuing as channel density increases.

The group did not report energy consumption, wireless implementation, or chronic stability metrics in detail in the press materials; those questions are more central to a clinical system than to a primate behavioural demonstration. The paper itself covers signal analysis and decoder performance; biocompatibility and long-term stability of the Utah arrays in these animals belong to a separate literature.

Funding and collaborators

The research was conducted at KU Leuven’s Laboratory for Neuro- and Psychophysiology within the Department of Neurosciences, with virtual environment engineering support and standard Utah-array hardware from Blackrock Neurotech. Funding sources listed in the KU Leuven announcement include the Research Foundation Flanders (FWO) and KU Leuven internal grants. Janssen’s group has published nonhuman-primate motor and visual cortex work since the early 2000s; this is the first from the laboratory to demonstrate single-decoder generalisation across a full navigation task suite.

The clinical-stage BCI companies are not yet running navigation decoders in humans, and the systems in human trials operate at lower channel counts than the 288-channel configuration used here. What the KU Leuven study adds to the picture is evidence that the decoder-generalisation problem, which has been a persistent concern for chronic BCI use, is tractable with current-generation arrays if the cortical targeting is right. Whether premotor cortex becomes a clinical target in the next generation of human devices is a decision each of the clinical-stage players will make on their own timelines and trial data.
