Dr. Randal A. Koene is CEO and Founder of the not-for-profit science foundation Carboncopies as well as the neural interfaces company NeuraLink Co. Dr. Koene is Science Director of the 2045 Initiative and a scientific board member in several neurotechnology companies and organizations.
Dr. Koene is a neuroscientist with a focus on neural interfaces, neuroprostheses and the precise functional reconstruction of neural tissue, a multi‑disciplinary field known as (whole) brain emulation. Koene’s work has emphasized the promotion of feasible technological solutions and “big‑picture” roadmapping aspects of the field. His activities since 1994 include science curation, such as bringing together experts and projects in cutting‑edge research and development that advance key portions of the field.
Randal Koene was Director of Analysis at the Silicon Valley nanotechnology company Halcyon Molecular (2010-2012) and Director of the Department of Neuroengineering at Tecnalia, the third largest private research organization in Europe (2008-2010). Dr. Koene founded the Neural Engineering Corporation (Massachusetts) and was a research professor at Boston University’s Center for Memory and Brain. Dr. Koene earned his Ph.D. in Computational Neuroscience at the Department of Psychology at McGill University, as well as an M.Sc. in Electrical Engineering with a specialization in Information Theory at Delft University of Technology. He is a core member of the University of Oxford working group that convened in 2007 to create the first roadmap toward whole brain emulation (a term Koene proposed in 2000). Dr. Koene’s professional expertise includes computational neuroscience, neural engineering, psychology, information theory, electrical engineering and physics.
In collaboration with the VU University Amsterdam, Dr. Koene led the creation of NETMORPH, a computational framework for the simulated morphological development of large‑scale high‑resolution neuroanatomically realistic neuronal circuitry.
Luke Muehlhauser: You were a participant in the 2007 workshop that led to FHI’s Whole Brain Emulation: A Roadmap report. The report summarizes the participants’ views on several issues. Would you mind sharing your own estimates on some of the key questions from the report? In particular, at what level of detail do you think we’ll need to emulate a human brain to achieve WBE? (molecules, proteome, metabolome, electrophysiology, spiking neural network, etc.)
(By “WBE” I mean what the report calls success criterion 6a (“social role-fit emulation”), so as to set aside questions of consciousness and personal identity.)
Randal Koene: It would be problematic to base your questions largely on the 2007 report. Those of us involved are pretty much in agreement that the report did not constitute a “roadmap,” because it did not actually lay out a concrete, well‑devised theoretical plan by which whole brain emulation is both possible and feasible. The 2007 white paper focuses almost exclusively on structural data acquisition and does not explicitly address the problem of system identification in an unknown (“black box”) system. That problem is fundamental to questions about “levels of detail” and more. It immediately forces you to think about constraints: what counts as successful or satisfactory brain emulation?
System identification (in small) is demonstrated by the neuroprosthetic work of Ted Berger. Taking that example and proof-of-principle, and applying it to the whole brain leads to a plan for decomposition into feasible parts. That’s what the actual roadmap is about.
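To make the idea of black-box system identification concrete, here is a toy sketch (my own illustration, not Berger’s actual method): probe an unknown input-output system whose internals you cannot observe, record its responses, and fit a predictive model from the recorded data alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Black box": an unknown causal linear filter standing in for a small
# neural subcircuit whose internal workings we cannot observe directly.
true_kernel = np.array([0.5, 0.3, -0.2])

def black_box(x):
    return np.convolve(x, true_kernel)[:len(x)]

# Probe the box with a known input signal and record its output.
x = rng.standard_normal(500)
y = black_box(x)

# System identification: estimate the input-output mapping purely from
# the recorded data (here, a least-squares fit of a length-3 kernel).
K = 3
X = np.column_stack(
    [np.concatenate([np.zeros(k), x[:len(x) - k]]) for k in range(K)]
)
est_kernel, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(est_kernel, true_kernel))  # True: mapping recovered
```

The point of the toy example is the workflow, not the model class: a real neuroprosthesis must identify a nonlinear, stochastic mapping, but the logic of characterizing a subsystem by its measurable input-output behavior, then replacing it with a fitted model, is the same.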
I don’t know whether you’ve encountered these two papers, but you might want to read them and contrast them with the 2007 report:
I think that a range of different levels of detail will be involved in WBE. For example, as Ted Berger’s work on a prosthetic hippocampus has already shown, it may often be adequate to emulate at the level of the timing and patterns of neural spikes. It is quite possible that, from a functional perspective, emulation at that level can capture that which is perceptible to us. Consider: differences between pre- and post-synaptic spike times are the basis for synaptic strengthening (spike-timing-dependent plasticity), i.e. the encoding of long-term memory. Trains of spikes communicate sensory input (visual, auditory, etc.). Patterns of spikes drive groups of muscles (locomotion, speech, etc.).
That said, a good emulation will probably require a deeper level of data acquisition for parameter estimation, and possibly also a deeper level of emulation in some cases, for example if we try to distinguish different types of synaptic receptors, and therefore how particular neurons can communicate with each other. I’m sure there are many other examples.
So, my hunch (strictly a hunch!) is that whole brain emulation will ultimately involve a combination of tools that carry out most data acquisition at one level, but which in some places or at some times dive deeper to pick up local dynamics.
I think it is likely that we will need to acquire structure data at least at the level of current connectomics that enables identification of small axons/dendrites and synapses. I also think it is likely that we will need to carry out much electrophysiology, amounting to what is now called the Brain Activity Map (BAM).
I think it is less likely that we will need to map all proteins or molecules throughout an entire brain – though it is very likely that we will be studying each of those thoroughly in representative components of brains in order to learn how best to relate measurable quantities with the parameters and dynamics to be represented in emulation.
(Please don’t interpret my answer as “spiking neural networks”, because that does not refer to a data acquisition level, but a certain type of network abstraction for artificial neural networks.)