Randal Koene on whole brain emulation

Dr. Randal A. Koene is CEO and Founder of the not-for-profit science foundation Carboncopies as well as the neural interfaces company NeuraLink Co. Dr. Koene is Science Director of the 2045 Initiative and a scientific board member in several neurotechnology companies and organizations.

Dr. Koene is a neuroscientist with a focus on neural interfaces, neuroprostheses and the precise functional reconstruction of neural tissue, a multi-disciplinary field known as (whole) brain emulation. Koene’s work has emphasized the promotion of feasible technological solutions and “big-picture” roadmapping aspects of the field. His activities since 1994 include science curation, such as bringing together experts and projects in cutting-edge research and development that advance key portions of the field.

Randal Koene was Director of Analysis at the Silicon Valley nanotechnology company Halcyon Molecular (2010-2012) and Director of the Department of Neuroengineering at Tecnalia, the third largest private research organization in Europe (2008-2010). Dr. Koene founded the Neural Engineering Corporation (Massachusetts) and was a research professor at Boston University’s Center for Memory and Brain. Dr. Koene earned his Ph.D. in Computational Neuroscience at the Department of Psychology at McGill University, as well as an M.Sc. in Electrical Engineering with a specialization in Information Theory at Delft University of Technology. He is a core member of the University of Oxford working group that convened in 2007 to create the first roadmap toward whole brain emulation (a term Koene proposed in 2000). Dr. Koene’s professional expertise includes computational neuroscience, neural engineering, psychology, information theory, electrical engineering and physics.

In collaboration with the VU University Amsterdam, Dr. Koene led the creation of NETMORPH, a computational framework for the simulated morphological development of large‑scale high‑resolution neuroanatomically realistic neuronal circuitry.

Luke Muehlhauser: You were a participant in the 2007 workshop that led to FHI’s Whole Brain Emulation: A Roadmap report. The report summarizes the participants’ views on several issues. Would you mind sharing your own estimates on some of the key questions from the report? In particular, at what level of detail do you think we’ll need to emulate a human brain to achieve WBE? (molecules, proteome, metabolome, electrophysiology, spiking neural network, etc.)

(By “WBE” I mean what the report calls success criterion 6a (“social role-fit emulation”), so as to set aside questions of consciousness and personal identity.)


Randal Koene: It would be problematic to base your questions largely on the 2007 report. All of those involved are pretty much in agreement that said report did not constitute a “roadmap”, because it did not actually lay out a concrete, well-devised theoretical plan by which whole brain emulation is both possible and feasible. The 2007 white paper focuses almost exclusively on structural data acquisition and does not explicitly address the problem of system identification in an unknown (“black box”) system. That problem is fundamental to questions about “levels of detail” and more. It immediately forces you to think about constraints: what is successful/satisfactory brain emulation?

System identification (in the small) is demonstrated by the neuroprosthetic work of Ted Berger. Taking that example and proof of principle and applying it to the whole brain leads to a plan for decomposition into feasible parts. That’s what the actual roadmap is about.

I don’t know if you’ve encountered these two papers, but you might want to read them and contrast them with the 2007 report:

I think that a range of different levels of detail will be involved in WBE. For example, as work by Ted Berger on a prosthetic hippocampus has already shown, it may often be adequate to emulate at the level of spike timing and patterns of neural spikes. It is quite possible that, from a functional perspective, emulation at that level can capture that which is perceptible to us. Consider: differences of pre- and post-synaptic spike times are the basis for synaptic strengthening (spike-timing-dependent potentiation), i.e. encoding of long-term memory. Trains of spikes are used to communicate sensory input (visual, auditory, etc.). Patterns of spikes are used to drive groups of muscles (locomotion, speech, etc.).
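
To make the role of spike-time differences concrete, here is the standard textbook pairwise STDP rule (an illustrative formulation, not one taken from Koene’s or Berger’s models), written in LaTeX. With Δt = t_post − t_pre, the change in synaptic weight depends only on that timing difference:

    \Delta w =
    \begin{cases}
      A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \quad \text{(pre fires before post: strengthen)}\\[4pt]
      -A_{-}\, e^{\Delta t/\tau_{-}}, & \Delta t < 0 \quad \text{(post fires before pre: weaken)}
    \end{cases}

The point for emulation is that only spike times enter the rule; the detailed molecular state of the synapse is summarized by a handful of parameters (A+, A−, τ+, τ−).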

That said, a good emulation will probably require a deeper level of data acquisition for parameter estimation and possibly also a deeper level of emulation in some cases, for example if we try to distinguish different types of synaptic receptors, and therefore how particular neurons can communicate with each other. I’m sure there are many other examples.
So, my hunch (strictly a hunch!) is that whole brain emulation will ultimately involve a combination of tools that carry out most data acquisition at one level, but which in some places or at some times dive deeper to pick up local dynamics.

I think it is likely that we will need to acquire structure data at least at the level of current connectomics that enables identification of small axons/dendrites and synapses. I also think it is likely that we will need to carry out much electrophysiology, amounting to what is now called the Brain Activity Map (BAM).
I think it is less likely that we will need to map all proteins or molecules throughout an entire brain – though it is very likely that we will be studying each of those thoroughly in representative components of brains in order to learn how best to relate measurable quantities with parameters and dynamics to be represented in emulation.

(Please don’t interpret my answer as “spiking neural networks”, because that does not refer to a data acquisition level, but to a certain type of network abstraction for artificial neural networks.)


Luke: Which of these “assumptions” on page 15 of the report do you generally agree with? (physicalism, multiple realizability, computability, non-organicism, scale separation, component tractability, simulation tractability, brain-centeredness)


Randal:

Philosophical physicalism: Yes. With the caveat that we are assuming that mind function can take place on different functional substrates (just as you could move a program from one type of computer to another). Since we already know that the brain can work around some types of brain damage by carrying out, in some other piece of brain (after retraining), a function previously performed by the damaged piece, leading to a resumption of normal mental operations, we have some evidence of such a distinction between brain and mind functions even in the biological implementation. (That specific example does not work for all mind functions.)

Computability: Yes – at the level that we are interested in. Rather than addressing the matter of Turing computability, I’d highlight the problem of replication: if you try to duplicate something accurately, by analog or digital means, it is pretty much impossible to do so to infinite precision. A good example is digital audio. Despite that constraint, we can capture what we are interested in. The assumption here is that what we would consider a satisfactory whole brain emulation does not require replication at infinite precision. From my experience in neuroscience, I see a lot of evidence that the brain itself goes to a lot of trouble to make itself more predictable, to make it possible for brain regions to talk to each other and cooperate intelligibly. In essence, the brain uses tricks like spike bursts, nested oscillatory modulation, redundant ensembles of neurons active in spike patterns, etc., to ensure reliability and be more “computable”.
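
The digital-audio analogy can be made concrete with a small numerical sketch (my illustration, not from the interview): quantizing a signal to 16 bits discards information, yet the reconstruction error can be driven far below anything a listener would notice, which is the sense in which finite-precision replication can still be satisfactory.

    import numpy as np

    # Illustrative sketch: finite-precision replication of an "analog" signal.
    fs = 44_100                                   # sample rate (Hz), as in CD audio
    t = np.arange(fs) / fs                        # one second of samples
    analog = 0.8 * np.sin(2 * np.pi * 440 * t)    # a 440 Hz tone

    # Quantize to 16-bit integer levels and reconstruct.
    quantized = np.round(analog * 32767) / 32767

    # The replica is not infinitely precise, but the error is tiny.
    error = analog - quantized
    snr_db = 10 * np.log10(np.mean(analog**2) / np.mean(error**2))
    print(f"peak error: {np.abs(error).max():.2e}, SNR: {snr_db:.1f} dB")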

Non-organicism: Yes (I was unfamiliar with the term, but I agree with the description). What is understanding? When do we understand something? If we can make a good model of something, is that understanding? In the biological sciences, that is increasingly becoming the measure of understanding due to the increase in complexity with the number of components (unlike basic physics questions such as “pressure”, where, for example, adding more particles can make a system easier to describe/understand than working with a single particle).

Scale-separation: Yes-ish, but see my longer answer along those lines above where I talked about what sort of data I think would be needed.

Component tractability: Yes, but not really the components: it’s the signals that matter! This is about system identification; see, for example, how Ted Berger uses it to create Volterra expansions that capture system function for neural prostheses. You can draw a box around some part of the brain at some resolution and call that a ‘black box’. The important thing is to know which signals you are interested in that are going into and coming out of the box. For an unknown CMOS component, we could say those signals are 1s and 0s (though actually voltages below and above some threshold). We wouldn’t necessarily care about other signals there, such as infrared EM radiation (heat). Similarly, you need to consider the signals that are interesting when you examine black boxes at some level in the brain biology. I do believe that we will be able to carry out system identification (i.e. make a model of / understand a component), giving us the functions that describe what that component does in terms of its transformation of input into output.
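
For readers unfamiliar with the term, a Volterra expansion describes a black box purely by its input-output relationship. A second-order, single-input single-output version (a generic textbook form, not Berger’s specific multi-input formulation) looks like this in LaTeX:

    y(t) = k_0
         + \int_0^{\infty} k_1(\tau)\, x(t-\tau)\, d\tau
         + \int_0^{\infty}\!\!\int_0^{\infty} k_2(\tau_1,\tau_2)\, x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2
         + \cdots

System identification then amounts to estimating the kernels k_1, k_2, ... from recorded inputs x(t) and outputs y(t), without ever opening the box.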

Simulation tractability: Yes. Having carried out system identification we will learn which functions or parts of functions are most common and thereby determine how to build efficient processors for them (call them neuromorphic if you like, though they may be quite different from what goes by that name… since it’s really about whatever the functions turn out to be, not about neurons per se). The biological brain is able to run functions of the mind fairly efficiently, so we should be able to engineer the same.

Brain-centeredness: Maybe, but not quite. I find this an odd thing to state as an important assumption, because it isn’t. Focusing on emulating the brain is a choice, just like the choice of which black boxes to select at higher resolution. What is ‘me’? Is me my brain, or my body, or my interaction with the whole universe and everyone in it? It seems to me like you can move that definition in or out as much as you’d like. But… given a universe in which I exist, I can take a part of that, such as my brain, declare it a volume to be emulated, build the emulator, and it should fit in nicely with the rest. So, perhaps, to exist properly you do need to have at least the perception of legs and arms and visual input, etc. I don’t think that is a fundamental problem for brain emulation. But the assumption is a weird thing to include, because it (falsely, in my opinion) makes it look like it could be a possible roadblock to whole brain emulation. Do we need bodies or not? How much of a body? Does the answer to those questions say much at all about the feasibility of whole brain emulation? I doubt it.


Luke: Table 4 of the report (p. 39) summarizes possible modeling complications, along with estimates of how likely they are to be necessary for WBE and how hard they would be to implement. Which of those estimates do you disagree with most strongly?


Randal: Spinal cord: don’t know. I agree with all estimates from synaptic adaptation through ephaptic effects.

Dynamical state: I agree it’s probably not necessary as such, but capturing dynamics through functional characterization may well be needed, hence brain activity mapping. This is not a show-stopper, on account of recent new interface prototypes.

I agree with the estimate about analog computation and about “true” randomness.


Luke: Neuroscientist Miguel Nicolelis says whole brain emulation is incomputable because the “most important features [of consciousness] are the result of unpredictable, nonlinear interactions among billions of cells.” What’s your reply to that?


Randal: I’d say that Miguel is making a bit of a wild claim without actual evidence. What is he actually saying there? Remember the bit where I explained my take on its computability and the lengths the brain goes to to make itself more predictable, plus the issue of picking what you consider satisfactory for whole brain emulation (if you can’t be satisfied by anything less than atomic-precision duplication… well, then you can’t make a Mac emulator to run on a PC either).

It’s a strange thing for a neuroscientist to say, given that almost all of them – Miguel included – use computational neuroscience models and keep asking for ones that more rigorously replicate what is going on in the neurophysiology.

How does Miguel know if it makes any difference if the “most important features [of consciousness]” (whatever those are) are the result of unpredictable, possibly nonlinear interactions among billions of biological cells or billions of components of an emulation?

The short answer is: Miguel’s statement rings of strong feeling, but I can’t parse his argument to connect with any demonstrated evidence.


Luke: In two recent papers (2012a, 2012b) you explain the general challenge of whole brain emulation (WBE), and outline some experimental research that would make progress toward successful WBE. In your view, what are some specific, tractable “next research projects” that, if funded, could show substantial progress toward WBE in the next 10 years?


Randal: To be honest, I’m in the process of revamping the roadmap right now, because events of 2013 have altered what is likely to be the area most in need of focused attention.

With that caveat, I’ll propose a few:

  1. Develop a platform for wireless free-floating neural interfaces based on prototypes presently at UC Berkeley [1] and MIT, but sufficiently open and standardized that many research labs can use the platform even if the interface hardware undergoes several iterations of improvement. This is one component of work on the brain activity map (BAM) that will be essential to carry out system identification in general brain tissue.
  2. Demonstrate the ability to theoretically “break” a piece of neural tissue into a collection of tractable small subsystems, where characterization of a.) the connectivity between the pieces, and b.) the system functions identified within each piece allows reconstruction of the function of the whole (see the sketch after this list). You can start this very small and work your way up, though validation will be tough if small pieces of tissue are arbitrarily chosen. It might be best to work in something like retina or a very small animal (not sure if I should suggest C. elegans for this, due to its oddities).
  3. Attempt to use neural interfaces and neural prostheses in increasing numbers of patient populations for treatment of mental disorders, brain damage in select areas (hippocampus is one interesting candidate, obviously), and nerve damage (paralysis) as a way to accelerate experience with and development of those tools that share most of the same ultimate specifications as the ones needed to acquire activity data for WBE.
  4. Combine multiple microscopy modalities, such as EM and light microscopy (we see some of this already in Brainbow), with protein tagging to improve the identification of physiological components in sections of brain tissue.
  5. Develop automated segmentation, identification, 3D reconstruction and translation to functional model parameters from stacks of brain slice images (especially for EM images used in connectomics). As this proceeds, it becomes possible to test the limits of such identification and reconstruction, which relies (when not paired with BAM data) on sampling from probability distributions in libraries of correlations between structure & function.
  6. Improve throughput in connectomics by further developing mechanical automation of serial imaging (closely related to Ken Hayworth’s work [2], of course). Related to this, further improve the quality of brain tissue preparations for connectomics work (see the Brain Preservation Foundation).
  7. Explore the whole tree of signal detection and stimulation modalities and their possible implementations for a) BAM and b) connectomics, a process that has already begun with the work of the so-called PoBAM group [3] (Physics of Brain Activity Mapping) and the various publications by Adam Marblestone et al. As this progresses, you get to the development of hybrid technology applications where the strengths of each modality (e.g. ultrasound, opto-electrics, biochemical sensing, etc.) are employed where best suited, so that hurdles that any one of them faces individually are overcome. There is a pretty strong group working on this right now, which promises advances in BAM in the next 5 years similar to the ones we’ve seen in connectomics over the last 5 years.
  8. Systematize (even more than the Allen Institute already does) the process of fundamental neurophysiological inquiry, in order to continue and complete the identification and mechanistic understanding of all physiological components involved in brain function (all receptor types, proteins, etc.). While this may not all end up being needed if BAM and connectomics work well enough to identify system function and system interactivity, more understanding may sometimes turn out to be a game changer and certainly helps with validation, as well as with treatments for issues in the biological brain.
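
To illustrate item 2 above, here is a minimal Python sketch (my own hypothetical illustration with invented names, not an existing tool or Koene’s method): each piece of tissue is treated as a black box with an identified input-to-output transfer function, and the behavior of the whole is reconstructed by wiring those boxes together according to the measured connectivity.

    import math
    from typing import Callable, Dict, List, Tuple

    class Subsystem:
        """One black-box piece of tissue with an identified transfer function."""
        def __init__(self, name: str, transfer: Callable[[float], float]):
            self.name = name
            self.transfer = transfer   # identified input -> output function
            self.output = 0.0

    def step(subsystems: Dict[str, Subsystem],
             connections: List[Tuple[str, str, float]],   # (source, target, weight)
             external_input: Dict[str, float]) -> None:
        """Advance the composed system by one time step."""
        # Each piece's input = external drive + weighted outputs of connected pieces.
        total_input = {name: external_input.get(name, 0.0) for name in subsystems}
        for src, dst, w in connections:
            total_input[dst] += w * subsystems[src].output
        # Apply each identified transfer function.
        for name, sub in subsystems.items():
            sub.output = sub.transfer(total_input[name])

    # Toy usage: two pieces with simple identified functions, wired in a loop.
    subs = {
        "A": Subsystem("A", lambda x: math.tanh(x)),
        "B": Subsystem("B", lambda x: max(0.0, x - 0.1)),
    }
    wiring = [("A", "B", 0.5), ("B", "A", 0.3)]
    for _ in range(5):
        step(subs, wiring, {"A": 1.0})
    print(subs["A"].output, subs["B"].output)

Validation in the real case would mean checking that the composed model reproduces recordings from the intact tissue, which is why validation is tough if the pieces are chosen arbitrarily.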

There are certainly more, but as I already mentioned, this is presently undergoing heavy revamping. Also, I think there is still merit to completing the mapping and modeling of C. elegans, though I suspect that something like further success in brain-piece neural prostheses (such as the hippocampal and prefrontal cortex work of the Berger group) will have more impact in accelerating research and is a better proof of principle that neuroprosthetic replacement of mammalian brain tissue is possible.


Luke: In suggestion #2 you referred to C. elegans’ “oddities.” What oddities are you thinking of?


Randal: Firstly, neurons in C. elegans don’t spike. The system operates on sub-threshold communications. Secondly, the 302 neurons of C. elegans are all quite unique and different, basically different types that carry out rather sophisticated functions (non-redundantly and non-distributed) by themselves. If you were to compare this with the human brain, it would be more like having 302 different brain regions.

A much better “small” animal that compares with mammals in terms of the sorts of functions its brain can carry out (like visual processing, locomotion using limbs, etc.) would be the fruit fly Drosophila, for example, which is the primary animal studied at the Janelia Farm Research Campus.


Luke: What’s your guess as to when scientists will successfully emulate a Drosophila brain? (Use whichever interpretation of “successfully” you prefer.)


Randal: Obviously, I can’t be very detailed about this sort of long-range guesswork.

I’d say that if the brain activity map stuff develops in the next 5 years the way connectome stuff developed in the past 5 years, then in about 5-7 years it might be a conceivable / feasible / budgetable thing to propose a project to map the Drosophila brain and to emulate it in a first, draft version of an emulation. I would expect that such a proposal would map out something on a duration of 8-10 years. So, I would think that the earliest date by which we could expect first emulations of a Drosophila brain is about 15-17 years out.

There are two interesting things to consider there:

  1. Meanwhile, what is happening on the parallel track where useful neural prostheses are built for human patients even if those are piece-wise and not “whole brain”?
  2. Is there any distinction between a Drosophila brain emulation and a human brain emulation besides scale? I.e., once you start iterating through improved versions of Drosophila emulation, how does that affect motivation and funding for a race to scale the procedure to humans?

Luke: What research into various aspects of WBE do you personally hope to be doing over the next 5 years?


Randal: I personally hope to always apply myself where my analysis of the roadmap indicates that my activity can have the most useful impact. That is foremost for me, and I’ve been adjusting my activities accordingly for a few years now.

Aside from that, I’m very interested to get involved with the development of neural interface platforms that can get to the resolution and bandwidth required for system identification.

Both of those considerations, the big-picture focus with application wherever it is most needed, and work on neural interfaces, are in my field of view right now when I think about the next 5 years.


Luke: Thanks, Randal!


  1. See Neural Dust, presently at 126 µm in size.
  2. Also see: Hayworth, K. (2013). Preserving and Mapping the Brain’s Connectome. Global Future 2045 Proceedings.
  3. See: