Report on the Kavli Futures Symposium



Kavli Futures Symposium: The Novel Neurotechnologies

November 3–4, 2014, Columbia University, New York

The Kavli Futures Symposium: The Novel Neurotechnologies focused on emerging tools for brain research and treatment. It was a highly interdisciplinary meeting that brought together experimental and computational neuroscientists and engineers, all of whom are pioneering techniques to unravel the complexity of the brain.

The symposium was organized and hosted by the NeuroTechnology Center (NTC) at Columbia University, which launched in October 2014. The NTC is directed by Rafael Yuste, professor of neuroscience and co-director of the Kavli Institute for Brain Science. The aims of the NTC are to connect the diverse tool-building laboratories at Columbia, train the next generation of neurotechnologists, seed a local neurotechnology sector and educate the public about the importance of brain research tools.

The symposium welcomed 14 scientists who gave presentations on three main topics—neuroimaging, "nano" neuroscience and neurocomputation technologies. Neuroscientist John Donoghue, Director of the Brown Institute for Brain Science, and geneticist George Church, Director of the Wyss Institute at Harvard, gave keynote presentations. Neurobiologist Cornelia Bargmann, a Kavli Prize laureate and professor at Rockefeller University, opened the symposium with an overview of the United States BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, a 12-year federal research program to spur the development of brain research tools and advance our understanding of the brain.

BRAIN Initiative: An Update

When President Obama announced the BRAIN Initiative in April 2013, he said it would give scientists “the tools they need to get a dynamic picture of the brain in action and better understand how we think and how we learn and how we remember.” In response, the National Institutes of Health (NIH) convened a working group, co-chaired by Cornelia Bargmann (Rockefeller University) and Bill Newsome (Stanford University), to decide how to accelerate the development of these tools.

Several of the researchers who attended the Kavli Futures Symposium — including Rafael Yuste, Michael Roukes, John Donoghue and George Church — championed, together with Miyoung Chun, Executive Vice President of Science Programs at The Kavli Foundation, the Brain Activity Map project, the precursor to the BRAIN Initiative. So it was fitting to launch the event with an update on the Initiative’s progress.

The NIH working group very quickly concluded that the critical problem to solve in neuroscience is how the brain’s circuits work, said Bargmann. It outlined seven high-priority research areas, but the BRAIN Initiative really pivots around the goal of observing the brain in action through large-scale monitoring of neural circuits. To that end, the first five years will emphasize technology development. In the first year of funding, the NIH awarded $46 million to 58 projects, including new methods for mapping and modulating ensembles of neurons.

The BRAIN Initiative is often called President Obama’s moon shot because it was pitched to and endorsed by lawmakers as a basic science project: a fact Bargmann called “remarkable.” However, the impact of the BRAIN Initiative on our ability to treat neurological and psychiatric disorders, potentially by fine-tuning neural circuits, is expected to be profound.

Opening Keynote: Human Neurotechnology

In a keynote address that opened the scientific portion of the symposium, John Donoghue (Brown University) emphasized the potential of “human neurotechnologies,” which he defined as devices that are coupled to the nervous system to diagnose, treat, restore and possibly even enhance neural function. These devices are not only providing new pathways to treatment for brain disorders but helping researchers understand human brain function in health and in disease. Echoing Bargmann, Donoghue said the biggest impediment to developing high-quality human neurotechnologies is our lack of knowledge about the “meso,” or middle scale, of the brain, at the level of neural circuits.

The focus of Donoghue’s presentation was on implantable devices that sense, or read out, the activity of the nervous system. Donoghue has led the development of BrainGate, a brain-computer interface that reads out the activity of the motor cortex, the part of the brain that plans movements, in individuals with paralysis. These brain signals are then fed into a computer that interprets them, for example, to flex a person’s muscles, type on a keyboard, drive a wheelchair or control a robotic arm.

Two major challenges to advancing BrainGate and similar devices exist, said Donoghue. The first is technical: neuroscientists need more sensitive and longer-lasting tools to record neural activity. The second is scientific: they need to better understand neural circuits in order to make sense of those signals. Donoghue is working on a high-bandwidth, wireless version of BrainGate that should be available to patients in one to two years. He is also developing better computational methods to analyze the activity of ensembles of neurons, which will improve the device’s performance.  

Session 1: Neuroimaging Technologies

The theme of the first session of the symposium was the development and application of new optical imaging tools for brain research, an area where researchers are making rapid progress. Darcy Peterka, Director of Technologies for the NeuroTechnology Center, organized this portion of the symposium.

Chris Harvey (Harvard University) is using some of the newest optical imaging techniques to study the brain circuits that underlie cognitive processes such as decision making, motor planning and working memory in mice. He is interested in the activity of the posterior parietal cortex (PPC), an area that integrates the sensory signals coming into the brain and the motor signals going out of it. By monitoring and manipulating large groups of neurons in the PPC, his laboratory is testing long-standing models of how the brain processes information that couldn’t be tested before.

In particular, he uses a technique called two-photon microscopy, which can capture the activity of more than 1,000 nerve cells, or neurons, at once in a live mouse. The mice, which have their heads fixed, run atop a spherical ball that’s surrounded by a video screen. This creates a virtual reality environment, in which Harvey can ask how an animal’s brain lights up in response to certain stimuli or tasks, such as the choice to turn left or right. He can also interfere with the brain’s response using optogenetics, a technique that turns neurons on and off using light. The combination of these three techniques — two-photon microscopy, a virtual reality maze and optogenetics — is allowing neuroscientists such as Harvey to test cause and effect, something that was very difficult to do just a few years ago.

Andreas Tolias (Baylor College of Medicine) is adapting two-photon microscopy to study the densely packed columns of neurons that are the elementary functional units of the cerebral cortex—the outermost layer of the brain. Neuroscientists still don’t have a complete list of the cell types and connections that make up these microcolumns or a clear understanding of how these cells interact to process information. To sort this out, they simply need better tools, said Tolias. He has developed a way to steer the beam of light from a two-photon microscope in three dimensions using crystals known as acousto-optic deflectors. This allows him to rapidly image larger numbers of cortical neurons in the brains of live mice than ever before.

But perhaps the biggest problem with existing optical imaging methods for brain research is depth. Brain tissue scatters visible light, making it impossible to see much beyond the most superficial layers of the brain with conventional microscopes. In the mouse, this means that only about 5 percent of the brain is accessible to neuroscientists. To get around this, some researchers have inserted gradient-index (GRIN) lenses into tissue, which Jerome Mertz (Boston University) said is “like going down into tissue with a telephone pole.” Instead, he has demonstrated that it’s possible to use a single optical fiber, thinner than a human hair, to see a light-emitting object (such as cells that fluoresce) deep in tissue. The technique is still at the proof-of-concept stage and hasn’t been tested in the brain.

Session 2: Nano Neuro Technologies

The use of multielectrode arrays to directly measure the electrical activity of the brain has been the gold standard in neuroscience. But even state-of-the-art arrays contain only about 200 electrodes, or recording sites, and therefore can capture only a tiny fraction of the activity in a neural circuit. As a result, nanotechnologists are working to shrink the arrays’ components to the scale of nanometers, or billionths of a meter. Doing so would increase the number of recording sites, allowing researchers to eavesdrop on tens of thousands of neurons simultaneously, possibly more. Ken Shepard, Co-Director of the NeuroTechnology Center and an expert in electrical design and nanofabrication, invited four nanotechnologists to discuss their progress.

Tim Harris (Janelia Research Campus, Howard Hughes Medical Institute (HHMI)) is striving to optimize the measurement of neural circuits and their connections in laboratory animals such as the rat, mouse, fruit fly and C. elegans, a small worm. He discussed an ongoing collaboration between HHMI and IMEC, a Belgium-based nanoelectronics research center, to develop multielectrode arrays with more recording sites, using state-of-the-art manufacturing techniques employed by the semiconductor industry. The partners are currently testing four probe designs, several of which will enter production and be available to the wider neuroscience research community in 2016. The current challenge, Harris said, is “how do you test such a thing and decide how well it works?”

The “workhorse” neural probes used in neuroscience today are usually made of metals, glass or silicon and are rigid. When they’re inserted in the brain, they tend to cause tissue damage and evoke an immune response, which limits the amount of time the devices can be implanted in the brain — for research or clinical purposes. “We need new technology strategies to improve the lifetime of fixed, implanted microelectrodes,” said bioengineer Daryl Kipke, CEO of NeuroNexus, a neural probe manufacturer based in Michigan.

Kipke described two design strategies his company is exploring to limit tissue damage and reactivity caused by electrodes: The first puts the recording sites on ultrathin platforms that extend off the side of the fine needles, or shanks, that are inserted into brain tissue; the second uses carbon fiber instead of silicon to make the electrodes and coats them with bioactive substances. Both approaches are proving successful, said Kipke.

Charles Lieber (Harvard University) is taking a different tack to reduce tissue injury, adapting for neuroscience tools he originally developed for tissue engineering. Specifically, Lieber has developed nanoelectronic devices, networked together using ultrathin wires, that integrate with cells in a laboratory dish. These tiny sensors and their wires can also be mixed with a polymer and injected into the brain, where they unfold and become enmeshed with neurons. Lieber has shown that they can be used to record the electrical activity of nearby neurons and are well tolerated by the brain over long periods of time.

However, nanosizing electronic tools for measuring neural activity will only take the field so far, said Michael Roukes (California Institute of Technology); he estimates the practical limit at about 100,000 neurons—several orders of magnitude higher than today’s technologies. To simultaneously record from more neurons than that, he said, will require new optical tools such as “integrated neurophotonics,” which he is developing along with several other researchers who attended the symposium. The multidisciplinary team will still use silicon chip technology to build dense arrays, but instead of electrodes, the arrays will contain light-emitting and light-sensing probes—“the elements of an imaging system,” said Roukes. The project, which is funded by the BRAIN Initiative, aims to measure one million neurons in the primary visual cortex of a mouse.

Session 3: Neurocomputation Technologies

Computational neuroscientists such as Liam Paninski, Co-Director of the NeuroTechnology Center, are working alongside experimentalists to develop models of how the brain functions at different scales, ranging from a single cell to the whole brain, and to test these models using data. Paninski organized the third session on new computational tools for brain research.  

Two fundamental and long-standing questions in neuroscience are how many different types of cells make up the brain, and how they are connected to each other. Some of the 19th century’s most accomplished scientists were neuroanatomists who first described the brain’s cells and architecture. Now, finely detailed three-dimensional maps of nerve cells and their connections, known as connectomes, are allowing neuroscientists to tackle these questions anew.

Sebastian Seung (Princeton University) is working on the connectome of the retina, a readily accessible and relatively simple part of the nervous system. He has developed an online video game, called EyeWire, to crowdsource the work of tracing neurons in three dimensions. He is also working on automating the task with computers, and on training the computers to perform the job more accurately by “learning” from the gamers who play EyeWire. Seung discussed a one-year project he launched in January 2015 called “Countdown to Neuropia.” It aims to create a comprehensive catalog of the cells in a small cube of retinal tissue. Eventually, he aims to apply the technique to the cerebral cortex.

Seung also described progress he has made in modeling a fundamental property of starburst amacrine cells, one of the cell types in the retina: directional selectivity. These neurons preferentially respond to a visual stimulus that is moving in a specific direction. Neuroscientists have been trying to explain how this property is computed for about 50 years. Seung’s team recently came up with a model based on information about how these cells are connected to other retinal cells, and it is currently being tested by experimentalists.

Konrad Kording (Northwestern University) has developed a statistical model that can distinguish cell types in the retina as well as anatomists can, by factoring in new connectomics data about the distances between cells. Understanding cell types and their connections may help neuroscientists arrive at the organizational principles that the brain employs. Kording is also working with Harvard geneticist George Church on a project they call Rosetta Brain, an integrated map of a single brain (see below).

Another neuroscientist who is studying the retina, Eero Simoncelli (New York University), builds computational models to describe how ensembles of sensory neurons encode scenes from the natural environment. His models assume that the retina has evolved ways to efficiently process these signals and minimize the amount of energy it uses in doing so. To test these models, his laboratory uses techniques to record from ensembles of neurons in the visual system of primates. His team has shown that the circuits that process visual information in the retina are indeed highly efficient.

Gyorgy Buzsaki (New York University), who Rafael Yuste called “an all-terrain neurotechnologist,” has pioneered the development of new computational and experimental tools for brain research. In particular, Buzsaki has devised silicon-based probes to study the circuits in the hippocampus and the cerebral cortex that are involved in memory; he has also developed an important model of how memories are consolidated in the brain by bursts of intense brain activity called sharp-wave ripples.

At the symposium, Buzsaki discussed a new neurotechnology, called NeuroGrid, for making dense electrical recordings from the outermost layer of the brain, or cortex. NeuroGrid is made from an extremely thin cellophane-like paper that contains a high density of electrodes. It’s remarkably scalable, said Buzsaki: “There’s no reason why we can’t produce a high-density piece of paper like this and put it on an entire hemisphere [of the brain].”

Closing Keynote: Rosetta Brain

In the closing keynote, George Church (Harvard University), one of the architects of the Human Genome Project, discussed a handful of molecular tools based on DNA sequencing techniques that may help neuroscientists understand the brain. He outlined two in particular that could be transformative.

The first is FISSEQ (Fluorescent In Situ Sequencing), a method of sequencing DNA or RNA — the molecules that carry genetic information — in intact neurons in a slice of brain tissue. Researchers usually grind up the cells they are interested in and put them in a test tube to read out the genetic code, or the sequence of As, Cs, Ts and Gs. With FISSEQ, they gain important data about where the RNA and DNA molecules come from. In principle, the technique could be adapted to create a connectome, a wiring diagram of the brain, and determine cell types, said Church. It could also be used to trace the family tree of a cell that has divided, as the brain develops, to produce generations of daughter cells.

The second is a method for reading out neural activity using DNA. As fanciful as it sounds, Church and his colleagues have envisioned a way to use an enzyme called polymerase, which strings the letters A, C, G and T into strands of DNA, to create a record of activity patterns over time that can be read out like a ticker tape.

Where might these early-stage tools take neuroscience? Church’s answer is the Rosetta Brain—a master dataset that would capture five kinds of information about every neuron in a single brain: its activity, cell type, connections, developmental lineage, or family tree, and how it relates to behavior. It is important because it would allow neuroscientists to integrate the types of data that they are now collecting and analyzing independently, for example, brain activity and connectivity. Just as the Rosetta Stone helped scholars decipher the ancient language of Egypt, a tool such as the Rosetta Brain could finally provide the key to deciphering the body’s most complex organ.


As the presentations underscored, it is a time of bold ideas and rapid progress in neuroscience. Fundamental discoveries in chemistry, physics, molecular biology, genetics and computer science are being applied to brain research, giving neuroscientists powerful new tools with which to examine the brain in action.  “Many of the techniques we’ve heard about in the last two days were almost science fiction a few years ago,” said the NTC’s Director Rafael Yuste at the close of the symposium.

He also emphasized that neuroscience is experiencing an unprecedented fusion between tool builders and tool users. “What you’ve seen is a beautiful mixture of biology and physical sciences that’s creating the new field of neurotechnology,” he said. The NTC’s Kavli Futures Symposium will continue to bring together multidisciplinary groups of researchers and showcase emerging brain research tools. Yuste and his colleagues are currently planning the next meeting, scheduled for later this year.


Organizers

  • Rafael Yuste
    Director, NeuroTechnology Center
    Co-Director, Kavli Institute for Brain Science (KIBS)
    Columbia University

  • Ken Shepard
    Professor of Engineering and Biomedical Engineering
    Co-Director, NeuroTechnology Center
    Columbia University

  • Virginia Cornish
    Professor of Chemistry
    Co-Director, NeuroTechnology Center
    Columbia University

  • Liam Paninski
    Professor of Statistics and of Neuroscience
    Co-Director, NeuroTechnology Center
    Columbia University

  • Darcy Peterka
    Research Scientist in the laboratory of Rafael Yuste
    Director of Technologies, NeuroTechnology Center
    Columbia University

  • Julia Sable
    Senior Research Associate in the laboratory of Rafael Yuste
    Coordinator, NeuroTechnology Center
    Columbia University


Speakers

  • Cornelia Bargmann
    Torsten N. Wiesel Professor
    Rockefeller University

  • Gyorgy Buzsaki
    Biggs Professor of Neural Sciences
    New York University School of Medicine

  • George Church
    Professor of Genetics
    Director, Wyss Institute
    Harvard University

  • John Donoghue
    Director, Brown Institute for Brain Science
    Brown University

  • Tim Harris
    Director, Applied Physics and Instrumentation Group
    Janelia Research Campus
    Howard Hughes Medical Institute

  • Chris Harvey
    Assistant Professor of Neurobiology
    Harvard University

  • Daryl Kipke
    Founder and CEO, NeuroNexus

  • Konrad Kording
    Associate Professor of Physical Medicine and Rehabilitation,
    Physiology, and Applied Mathematics
    Northwestern University

  • Charles Lieber
    Mark Hyman Professor of Chemistry
    Harvard University

  • Jerome Mertz
    Professor of Biomedical Engineering
    Boston University

  • Michael Roukes
    Robert M. Abbey Professor of Physics, Applied Physics and Biological Engineering
    California Institute of Technology
    Founding Director, Kavli Nanoscience Institute

  • Sebastian Seung
    Professor of Computer Science and Professor at
    the Princeton Neuroscience Institute
    Princeton University

  • Eero Simoncelli
    Professor of Neural Science, Mathematics, and Psychology
    New York University
    Investigator, Howard Hughes Medical Institute

  • Andreas Tolias
    Associate Professor of Neuroscience
    Baylor College of Medicine

— April 2015