How AI Is Changing Neuroscience Discovery
by Rachel Tompa
AI is altering how scientists study the brain

These days, AI is everywhere.
Every industry, every field of science, every new start-up seems to pivot on artificial intelligence — or more specifically, generative AI, the form of AI that includes large language models like ChatGPT.
Neuroscience is no exception.
For those trying to understand the brain, generative AI is opening new doors to discovery. Scientists are using AI to ask and answer questions they might never have dreamed of just a few years ago, to handle massive troves of brain data, and to give more scientists access to powerful computational neuroscience tools through AI’s ability to craft custom code.
“Maybe AI is our mathematics,” said Mackenzie Mathis, the Bertarelli Foundation Chair of Integrative Neuroscience and an Assistant Professor at the Swiss Federal Institute of Technology in Lausanne, Switzerland. “Physics has theory and mathematics. Maybe AI is neuroscience’s equivalent language, and we just don’t know how to find the QED at the end of the equation yet.”
Through its Open Data in Neuroscience theme, The Kavli Foundation’s Science program is funding several projects that integrate AI and neuroscience. These projects and tools are both accelerating the pace of discovery about the brain and enabling new kinds of research questions.
At the same time, Kavli’s Science and Society program is helping scientists consider how these powerful tools intersect with society, raising questions of openness, ethics, and trust in science.
“Across Kavli-supported projects, we’re seeing how open data standards, scalable analysis platforms, and AI-driven discovery are helping researchers ask entirely new kinds of questions and answer them faster, and with greater precision, than ever before,” said Stephanie Albin, Senior Program Officer at The Kavli Foundation. “AI isn’t just a powerful tool. It’s starting to reshape how science gets done.”
Digital behavior
Mackenzie Mathis and her collaborator Alexander Mathis are the brains behind DeepLabCut, a popular software package that uses deep learning, a form of AI, to digitally track animal movement. Developed to track mouse snouts as the animals sniffed out certain scents, mouse hands as they reached for joysticks, and flies in 3D chambers, the package has been installed nearly 1 million times since its release in 2018.
Once it is trained on a handful of human-labeled images, DeepLabCut can recognize different animal body parts in videos, superimposing colorful dots over animals’ toes, heads, tails, and more. It’s not limited to specific species, as users can instruct the software to track anything that is visible. It can even detect octopuses and insects that are highly camouflaged against their natural backgrounds.
Researchers have used the software in unexpected ways that never would have occurred to Mathis when the team first released DeepLabCut, she said. Medical researchers are using it to analyze gait patterns in clinical trials of new treatments for patients with Parkinson’s disease. Professional golfers are using it to analyze their swings. A research team is using it to understand mosquito flight patterns.
Mathis’ team has also recently published SuperAnimal, a method that can build models using large, often publicly available, animal behavior datasets to estimate poses on a wide variety of animal species with DeepLabCut, without any additional training data.
With the help of funding from The Kavli Foundation, they’ve now launched AmadeusGPT, an AI agent designed to enhance what scientists can do with DeepLabCut (and other AI models). It was built to bridge the gap between natural language descriptions of animal behaviors and custom code to analyze those behaviors through DeepLabCut. For example, Mathis said, a researcher can ask AmadeusGPT to write a script to track how long a lab mouse spends in one part of its cage.
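The kind of script such a request might produce can be sketched in a few lines. This is purely illustrative (it is not AmadeusGPT's actual output, and the function name and rectangular region definition are invented for this example): given per-frame keypoint coordinates like those DeepLabCut produces, it totals the time the animal spends inside one area of the cage.

```python
# Illustrative sketch, not AmadeusGPT's real output: given per-frame
# (x, y) positions of a tracked body part, total up the time the mouse
# spends inside a rectangular region of its cage.

def time_in_region(positions, region, fps):
    """positions: list of (x, y) tuples, one per video frame.
    region: (x_min, y_min, x_max, y_max) bounding box in pixels.
    fps: video frame rate. Returns seconds spent inside the region."""
    x_min, y_min, x_max, y_max = region
    frames_inside = sum(
        1 for x, y in positions
        if x_min <= x <= x_max and y_min <= y <= y_max
    )
    return frames_inside / fps

# Toy track: 6 frames at 30 fps, mouse in the left corner for 3 frames.
track = [(10, 5), (12, 6), (15, 8), (60, 40), (70, 42), (80, 45)]
print(time_in_region(track, (0, 0, 30, 30), fps=30))  # 0.1 seconds
```

The point of a tool like AmadeusGPT is that a researcher never has to write or even see this code: they describe the behavior in plain language, and the agent translates it into an analysis over the tracked keypoints.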
AmadeusGPT allows more scientists to benefit from DeepLabCut because it lowers the barrier to writing custom code for new scientific questions. But Mathis hopes it will also broaden the potential of the software for all neuroscientists.
“We were inspired by asking how we could leverage AI in neuroscience to go beyond the standard questions that we ask with our own data,” she said. “In the future, a corpus of models that have ingested massive amounts of data could propose new experiments and new analyses that we wouldn’t have thought of. Currently, I see its utility in two ways: an interface for people who don’t code, and a place anyone can go for exploration and inspiration.”
To keep the science rigorous, the team built in transparency checks, including measures of uncertainty and user validation checks.
“There’s a lot of hype and excitement with AI right now,” Mathis said. “The excitement is real, but we all have to continue wearing our scientific hats.”
They’re working on other algorithms that reveal how AI models work to further increase the transparency of their software.
Open data meets AI
AI tools like DeepLabCut and SuperAnimal rely on well-structured, machine-readable data (and, in some cases, create it). But that kind of data doesn’t just happen automatically. It requires standards and infrastructure.
A decade ago, The Kavli Foundation helped launch Neurodata Without Borders (NWB), a standard to make neuroscience data shareable and reusable across labs. While NWB wasn’t built with AI in mind, its standardized structure now positions it to support the growing use of AI in brain research. Through its partnership with the BRAIN Initiative’s DANDI Archive for data storage and sharing, NWB enables neuroscientists to collaborate across labs and reuse datasets to ask and answer new scientific questions.
“Standards build the foundations for being able to use AI with the data,” said Oliver Rübel, a staff scientist at the Lawrence Berkeley National Laboratory and one of the lead developers at NWB. “AI is only useful if it can interpret the data. If I don’t structure my data properly, the AI would not be able to understand the data, just as a human would not be able to understand the data.”
NWB developers are keeping an eye on how their tools can best support what researchers are looking for now and what they might need in the near future. That means adapting the data standards to emerging kinds of data as new techniques come online, and building tools that allow comparisons across different types of data and data standards. It also means making sure the data is ready for AI consumption, as well as developing tools that use AI to analyze and glean understanding from the large datasets.
“We’re at an inflection point for AI,” said Ben Dichter, a research software engineer and NWB community liaison. “We’re going to see all sorts of different use cases that are going to require state of the art computing. We’re going to need to make sure that this data not only has all the correct metadata but also is sorted in an efficient way that enables supercomputers to be able to process it and cloud computers to be able to work on it.”
As neuroscientists prepare to use AI in their research, Dichter and Rübel hope they keep in mind the power of data sharing. Even in projects that generate new data, having multiple datasets for comparison can confirm and strengthen findings, they say.
How neural activity changes in the Alzheimer’s brain
A collaboration between researchers at the University of California, Los Angeles (UCLA) and Harvard Medical School is using AI to find meaning in massive amounts of data related to Alzheimer’s disease. More than 7 million Americans are estimated to be living with Alzheimer’s, a progressive neurodegenerative disease for which there is no cure. Although scientists are learning more about which neurons die in the disease, how these cells’ electrical activity changes over the course of Alzheimer’s remains a mystery. That knowledge gap is due in large part to a technical hurdle: Electrical probes to measure neural activity in animal models are fairly blunt instruments and can’t be used for long without damaging neurons. Therefore, tracking the same neurons as the disease progresses has to date been impossible.
Theodore Zwang, an assistant professor of neurology at Harvard, recently developed tiny, flexible mesh probes that mimic neurons, allowing them to remain in the brain and record from the same cells for more than a year. Zwang teamed up with Andrew Holbrook, an associate professor of biostatistics at UCLA, to use these probes to track how neural activity changes over time in the aging mouse brain and as animals develop Alzheimer’s-like pathology. The two scientists met through the Toffler Scholars program for early career scientists, run by the Karen Toffler Charitable Trust. They have since received funding from The Kavli Foundation, the Cure Alzheimer’s Fund and the Trust to support their new collaboration.
The scientists are tracking the activity of hundreds of neurons, measured once a week over the course of a year, as animals perform different behaviors in the lab. The resulting dataset is massive and requires AI models to parse through it, Holbrook said. He’s developing custom models to better understand dependencies among the data — how does one neuron’s firing or lack of firing affect other neurons, and how do groups of neurons’ activity affect that of other groups? What is the order of dependencies in a network of neurons? How do these dependencies change as Alzheimer’s develops?
“We’re extracting information from this extremely complex symphony of signals,” Holbrook said. “The question is, how to build a model that incorporates all this data and responds to the structure that’s inherent in the data? Then we can answer questions that we wouldn’t even be able to ask were it not for this technology.”
The team is specifically focusing on neurons in the entorhinal cortex and the hippocampus, two regions of the brain involved in memory formation and which are known to be especially vulnerable to degeneration in Alzheimer’s. In an initial study released as a preprint, the researchers found that as mouse brains accumulate tau, a protein that builds up in sticky tangles in Alzheimer’s in humans, neurons undergo prolonged periods of silence. Although that lack of activity is reversible, with neurons becoming active again weeks later, the silencing appears to destabilize entire circuits and networks. This silencing is independent of tau tangle formation and neuron death and could represent the early stages of the disease. While this kind of recording isn’t possible in humans, the team’s AI models may inform the development of models to interpret human data, such as EEG or fMRI, revealing brain changes long hidden from view.
Considering public opinion
Of course, generative AI is making waves not only in neuroscience, but across all of society. And opinions on its utility are mixed, depending on how it’s used. With support from The Kavli Foundation, Samantha Blickhan, co-director and humanities research lead at Zooniverse — the world’s largest platform for online participatory research — is exploring the ethical implications of AI use in citizen science projects. The effort came out of chatter on Zooniverse message boards, where some volunteers questioned the use of AI in the platform’s projects and others encouraged it.
Even in research projects that don’t directly involve members of the public, it is critical that these conversations happen in the open, Blickhan said. It can be an unusual situation for scientists when it’s the method rather than the topic of their research that’s proving controversial, she noted. With the advent of ChatGPT and similar tools, public awareness of AI has skyrocketed — and so have polarized opinions, which don’t often fully reflect the actual ways technology is being used in research. Scientists using AI should be prepared to have those conversations, maintain transparency around their work, and be aware that public interest in AI is very high right now.
“There’s always going to be a little bit of tension between innovation and scientific rigor, and that’s amplified by the speed at which we’re moving with this technology. Part of that tension is public opinion, and that’s creating a fairly unique environment for this conversation,” Blickhan said. “Are we prepared to be held accountable for the decisions we make regarding our research processes, and are we prepared to explain those choices publicly?”
As scientists increasingly turn to AI to power their projects, organizations like The Kavli Foundation can help them navigate this new world, Albin said.
“As AI continues to evolve, we want to make sure the neuroscience community has what it needs: not just cutting-edge tools, but the infrastructure and support to fully harness AI’s potential to accelerate discovery,” she said. “We see ourselves as partners in helping build a research ecosystem that’s open, forward-looking, and ready for what’s next.”