The latest brain–computer interfaces in pre-clinical testing receive — and send — signals, training the brains of participants.

Brain–computer interfaces (BCIs) gather information from the brain and interpret it using artificial intelligence (AI) tools, but the latest BCIs are flipping this flow of information, with AI systems that can train the brains of participants. BCI research combines neurophysiology, computational neuroscience, machine learning, theoretical neurobiology and neurosurgery. Bi-directional BCIs, most of which are still in preclinical studies, could reshape human thinking and assist with complex mental tasks, but ethical and regulatory challenges abound.
Human intelligence could be reshaped by AI in some specialized cases, says BCI physiologist Aaron Batista, including for patients who need cognitive assistance, and possibly for healthcare practitioners. “These are really cutting-edge issues,” Batista says with a chuckle when asked about the research. “The holy grail of a bi-directional interface is something that stimulates the brain in order to create a realistic percept or drive neural activity in a configuration that stores a new memory.”
Machines in our image
Few people know more about AI than Yoshua Bengio. Along with Geoffrey Hinton and Yann LeCun, Bengio, who is a professor at the University of Montreal, was dubbed one of AI’s ‘godfathers’ when they shared the 2018 A.M. Turing Award for their successes in coupling brain research with AI research. Since then, Bengio, whose industry collaborators include Microsoft, Samsung, Google, Meta and IBM, has continued to probe how brain studies can help refine and empower AI. His quest, he says, is “building machines in our own image.”
Bengio takes a special interest in how AI affects healthcare. In a 2022 paper, he emphasized that because AI systems are limited by the data provided to them, even newly graduated physicians can easily outthink them. But after recent breakthroughs in AI research, Bengio now foresees a time when AI may match the clinical acuity of the world’s top medical professionals. “It is plausible that there will come a time where some of the most advanced AIs will match the best humans at clinical decision-making,” Bengio says. “There is no scientific reason to think that it won’t be possible.” Although the timeline for this development remains uncertain, Bengio noted during a panel discussion in Vancouver in December 2024 that AI may match human cognition in just a few years.
Some healthcare practitioners may see Bengio’s assessments as warnings. Others may welcome them as promises. Either way, Bengio’s perspectives are worth heeding. Writing alongside LeCun (who co-leads AI healthcare research for Meta), as well as researchers backed by Google DeepMind, Blackbird Neuroscience and other AI companies, Bengio recently charted a ‘roadmap for the next generation of AI’ in which brain science serves as an increasingly robust framework for discovering how AI can be crafted to replicate the biological efficiency of human intelligence and analysis.
According to Bengio’s roadmap, investment in neuroscience will move AI toward emulating natural intelligence and will yield AI systems that match human cognitive capabilities, which he says are “inherited from over 500 million years of evolution” and “shared with all animals.”
Animal training
Palo Alto-based Neuralink is currently conducting clinical trials in the USA and Canada on a fully implantable, wireless BCI that will enable people with quadriplegia to control external devices with their thoughts. In a recent paper co-authored by researchers from Google, Neuralink and the Allen Institute in Seattle, Laura Driscoll, who runs an AI research lab at the Allen Institute, describes a form of AI ‘reverse engineering’ in which research into animal brain behavior and biology helps to refine AI systems that are integrated into BCIs in animal studies. “Some people study mice or fruit flies for different kinds of questions,” Driscoll explains, “[and] these artificial systems you can also think of as a different model of an organism.”
Driscoll’s aim, she says, is to develop AI systems that are “more flexible, and more dynamically updated” than existing systems, to create AI systems that think computationally for themselves. “Although there’s been a huge boom in large language models, and people are so impressed with their skills,” she says, with reference to ChatGPT and DeepLink, “I think it’s an open question about whether the [large language models] are spitting out anything besides their training data, which would suggest there’s no actual computation.”
Driscoll argues that carefully refined AI systems can be honed to deliver training information to animal brains. By better understanding the tasks that researchers train animals to perform, Driscoll suggests, researchers can learn what a more optimal AI training curriculum might be for those tasks. “When you think of the brain, it’s a flexible, rapid learning system compared to rigid artificial systems,” she says. “We understand what structures need to be learned in the brain by reverse engineering the artificial networks that we’ve trained to perform similar tasks. By doing this, we can design optimal AI training protocols for animals. The goal is to guide systems that are both more energetically efficient and also more flexible and able to do faster learning.”
At the University of Pittsburgh, Aaron Batista agrees that learning ability could be improved through a BCI (Fig. 1). Working alongside researchers at the Gatsby Computational Neuroscience Unit and University College London, Batista recently proposed a theory of ‘BCI learning via low-dimensional control’ in which computer calibration tasks for BCIs serve not just “as a source of information for constructing the BCI decoder” on the machine side of the BCI; they “additionally serve as a source of information for the subject itself, that is, for the BCI learner” on the human side.
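To make the decoder-calibration idea concrete: in many BCI studies, a decoder is fitted by regressing intended movements onto recorded neural activity during a calibration session. The sketch below, a hypothetical illustration not drawn from Batista’s paper, fits a simple linear (ridge-regression) decoder on simulated neural data; the variable names and simulation are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration session: neural features (e.g. per-channel firing
# rates) and the intended 2D cursor velocities they correspond to.
n_samples, n_channels = 500, 30
true_W = rng.normal(size=(n_channels, 2))       # ground-truth neural-to-velocity mapping
X = rng.normal(size=(n_samples, n_channels))    # recorded neural features
Y = X @ true_W + 0.1 * rng.normal(size=(n_samples, 2))  # intended velocities + noise

# Ridge-regression calibration: W = (X'X + lambda*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Decode a new stretch of neural activity into cursor velocities.
X_new = rng.normal(size=(10, n_channels))
decoded = X_new @ W
print(decoded.shape)  # (10, 2)
```

The point of the ‘low-dimensional control’ framing is that this calibration step informs both sides: the machine learns `W`, and, over repeated sessions, the subject’s brain adapts its activity to the decoder it is driving.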
To bring the discussion home, Batista points to a grocery list and suggests that a BCI could “burn it into a short-term memory so that if we get distracted, you remember the milk, the eggs and the apples,” adding that “that would be the holy grail two-way interface.”
Policy challenges
Policymakers are watching with interest. In a December 2024 research report on BCIs for the US Congress, the US Government Accountability Office (GAO) estimated that the global market for BCI research investment will increase by approximately 10–17% annually through 2030. In 2023, the GAO noted, the US government earmarked almost US$700 million for BCI research. The report warned that “without a unified privacy framework for all BCIs, or standards on data ownership and control, companies that develop and sell BCIs may have access to sensitive brain signal data without users’ understanding or consent. In addition, agreements between developers and users may be predatory or unclear.”
Although all implantable and many wearable medical BCIs must still clear regulatory benchmarks before they can be prescribed or used widely, the GAO noted, several BCIs in development may qualify for US Food and Drug Administration breakthrough device status. The report also observed that “some people with lived experience may consider their BCIs as part of their person or self. For example, one BCI user with paralysis, who uses a BCI to control a computer, video games and a robotic arm, considers himself a cyborg. Some may consider such changes to the concept of personhood undesirable.”
Medical ethicist Marcello Ienca, professor of ethics of AI and neuroscience at the TUM School of Medicine and Health in Munich, agrees that BCIs present an array of ethical concerns. At present, the ethical questions raised by BCIs remain squarely within the domain of medical ethics, he notes. “Neuralink is claiming that they will produce neural implants for the general population at some point,” he explains. “But for now, all these considerations primarily apply to medicine because of the proximity that BCI medical devices operate within the human.”
Although Ienca says that BCIs, as AI-embedded healthcare devices, will expand access to high-quality healthcare, he also sees a problem emerging with doctor–patient relationships. “The more we embed AI into devices that operate in close proximity with the human, be it brain–computer interfaces, other neural interfaces, as well as wearables and virtual reality tools,” he says, “it will become very hard to determine whether we made a certain decision, we had a certain thought, or the AI system did that.”
Another challenge, says Ienca, arises if AI tools become the best standard of care. “We have a principle in medical ethics that says that doctors always have to apply the best standard of care. If AI will outperform all the other tools, then will it become necessary for doctors to use it?” he asks, “and if so, who will be responsible for AI-made diagnostic or therapeutic decisions?”
Although AI “may become mandatory at some point if it really proves to be the best standard of care,” Ienca avers, “it should always remain a tool in the hands of human doctors. It should never become the decision-maker. Human doctors should always be responsible for their decisions.”
But physician control and agency may ultimately be checkmated not by the BCI technologies themselves, Ienca warns, but by the companies that own these technologies’ intellectual property rights.
“Companies will do their best to try to outsource responsibilities,” he warns, “because they don’t want AI systems that are liable for their decisions. So my fear is that if at some point we will make AI systems where the very algorithm is responsible for decisions, then basically nobody is responsible anymore.”
Webster, P. Can AI-powered brain–computer interfaces boost human intelligence? Nat Med 31, 1045–1047 (2025). https://doi.org/10.1038/s41591-025-03641-7