MIRI’s October Newsletter

Newsletters


Machine Intelligence Research Institute

Research Updates

  • Our major project last month was our Friendly AI technical agenda overview and supporting papers, the former of which is now in late draft form but not yet ready for release.
  • 4 new expert interviews, including John Fox on AI safety.
  • MIRI research fellow Nate Soares has begun to explain some of the ideas motivating MIRI’s current research agenda at his blog. See especially Newcomblike problems are the norm.

News Updates

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director


Kristinn Thórisson on constructivist AI

Conversations

Dr. Kristinn R. Thórisson is an Icelandic artificial intelligence researcher, founder of the Icelandic Institute for Intelligent Machines (IIIM) and co-founder and former co-director of CADIA: Center for Analysis and Design of Intelligent Agents. Thórisson is one of the leading proponents of artificial intelligence systems integration; other proponents of this approach include researchers such as Marvin Minsky, Aaron Sloman and Michael A. Arbib. Thórisson is a proponent of Artificial General Intelligence (AGI), also referred to as Strong AI, and has proposed a new methodology for achieving it. A demonstration of this constructivist AI methodology was given in the FP7-funded HUMANOBS project, in which an artificial system autonomously learned how to conduct spoken multimodal interviews by observing humans participate in a TV-style interview. The system, called AERA, autonomously expands its capabilities through self-reconfiguration. Thórisson has also worked extensively on systems integration for artificial intelligence systems in the past, contributing architectural principles for infusing dialogue and human-interaction capabilities into the Honda ASIMO robot.

Kristinn R. Thórisson is currently managing director of the Icelandic Institute for Intelligent Machines and an associate professor at the School of Computer Science at Reykjavik University. He was a co-founder of the semantic web startup Radar Networks, and served as its Chief Technology Officer from 2002 to 2003.

 

Luke Muehlhauser: In some recent articles (1, 2, 3) you contrast “constructionist” and “constructivist” approaches in AI. Constructionist AI builds systems piece by piece, by hand, whereas constructivist AI builds and grows systems largely by automated methods.

Constructivist AI seems like a more general form of the earlier concept of “seed AI.” How do you see the relation between the two concepts?


Kristinn Thórisson: We sometimes use “seed AI”, or even “developmental AI”, when we describe what we are doing – it is often difficult to find a good term for an interdisciplinary research program, because each term brings up different things in people’s minds depending on their background. There are subtle differences in both the meanings and histories of these terms, and each brings along its own pros and cons.

I had been working on integrated constructionist systems for close to two decades, where the main focus was on how to integrate many things into a coherent system. When my collaborators and I started to seriously think about how to achieve artificial general intelligence, we tried to explain, among other things, how transversal functions – functions of mind that seem to touch pretty much everything in a mind, such as attention, reasoning, and learning – could efficiently and sensibly be implemented in a single AI system. We also looked deeper into autonomy than I had done previously. This brought up all sorts of questions that were new to me, like: What is needed to implement a system that can act relatively autonomously *after it leaves the lab*, without the constant intervention of its designers – a system that is capable of learning a pretty broad range of relatively unrelated things on its own, and of dealing with new tasks, scenarios and environments that were relatively unforeseen by its designers? Read more »

Nate Soares speaking at Purdue University

News

On Thursday, September 18th, Purdue University is hosting the seminar Dawn or Doom: The New Technology Explosion. Speakers include James Barrat, author of Our Final Invention, and MIRI research fellow Nate Soares.

Nate’s talk title and abstract are:

Why ain’t you rich?: Why our current understanding of “rational choice” isn’t good enough for superintelligence.

The fate of humanity could one day depend upon the choices of a superintelligent AI. How will those choices be made? Philosophers have long attempted to define what it means to make rational decisions, but in the context of machine intelligence, these theories turn out to have undesirable consequences.

For example, there are many games where modern decision theories lose systematically. New decision procedures are necessary in order to fully capture an idealization of the way we make decisions.
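The abstract doesn’t name specific games, but Newcomb’s problem (the subject of the “Newcomblike problems are the norm” post linked in the research updates above) is the standard illustration: a reliable predictor fills an opaque box with $1,000,000 only if it predicts you will take just that box, while a transparent box always holds $1,000. A minimal expected-value sketch in Python; the payoffs and the 99% predictor accuracy are illustrative assumptions, not numbers from the talk:

```python
# Newcomb's problem: the predictor fills the opaque box with $1M iff it
# predicts you will one-box; the transparent box always holds $1k.
# Accuracy and payoffs below are illustrative assumptions.

ACCURACY = 0.99          # probability the predictor guesses your choice correctly
BIG, SMALL = 1_000_000, 1_000

# One-boxing pays off only when the predictor correctly foresaw it.
ev_one_box = ACCURACY * BIG

# Two-boxing always yields $1k, plus $1M in the rare case the
# predictor wrongly expected one-boxing.
ev_two_box = SMALL + (1 - ACCURACY) * BIG

print(f"one-box: ${ev_one_box:,.0f}")   # ~$990,000
print(f"two-box: ${ev_two_box:,.0f}")   # ~$11,000
```

Causal decision theory recommends two-boxing, since the boxes are already filled when you choose; yet agents who one-box predictably walk away far richer. That gap is the sense in which standard theories “lose systematically,” and the source of the talk title’s question.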

Furthermore, existing decision theories are not stable under reflection: a self-improving machine intelligence using a modern decision theory would tend to modify itself to use a different decision theory instead. It is not yet clear what sort of decision process it would end up using, nor whether the end result would be desirable. This indicates that our understanding of decision theories is inadequate for the construction of a superintelligence.

Can we find a formal theory of “rationality” that we would want a superintelligence to use? This talk will introduce the concepts above in more detail, discuss some recent progress in the design of decision theories, and then give a brief overview of a few open problems.

For details on how to attend Nate’s talk and others, see here.

Ken Hayworth on brain emulation prospects

Conversations

Kenneth Hayworth is president of the Brain Preservation Foundation (BPF), an organization formed to skeptically evaluate cryonic and other potential human preservation technologies by examining how well they preserve the brain’s neural circuitry at the nanometer scale. Hayworth is also a Senior Scientist at HHMI’s Janelia Farm Research Campus, where he is currently researching ways to extend Focused Ion Beam Scanning Electron Microscopy (FIBSEM) of brain tissue to encompass much larger volumes than are currently possible. Hayworth is co-inventor of the ATUM-SEM process for high-throughput volume imaging of neural circuits at the nanometer scale, and he designed and built several automated machines to implement this process. Hayworth received his PhD in Neuroscience from the University of Southern California for research into how the human visual system encodes spatial relations among objects. Hayworth is a vocal advocate for brain preservation and mind uploading and, through the BPF’s Brain Preservation Prize, he has challenged scientists and medical researchers to develop a reliable, scientifically verified surgical procedure which can demonstrate long-term ultrastructure preservation across an entire human brain. Once the prize is won, Hayworth advocates for the widespread implementation of such a procedure in hospitals. Several research labs are currently attempting to win the prize.

 

Luke Muehlhauser: One interesting feature of your own thinking (Hayworth 2012) about whole brain emulation (WBE) is that you are more concerned with modeling high-level cognitive functions accurately than is e.g. Sandberg (2013). Whereas Sandberg expects WBE will be achieved by modeling low-level brain function in exact detail (at the level of scale separation, wherever that is), you instead lean heavily on modeling higher-level cognitive processes using a cognitive architecture called ACT-R. Is that because you think this will be easier than Sandberg’s approach, or for some other reason?


Kenneth Hayworth: I think the key distinction is that philosophers are focused on whether mind uploading (a term I prefer to WBE) is possible in principle, and, to a lesser extent, on whether it is of such technical difficulty as to put its achievement off so far into the future that its possibility can be safely ignored for today’s planning. With these motivations, philosophers tend to gravitate toward arguments with the fewest possible assumptions, i.e. modeling low-level brain functions in exact detail.

As a practicing cognitive scientist and neuroscientist, I have fundamentally different motivations. From my training, I am already totally convinced that the functioning of the brain can be understood at a fully mechanistic level, with sufficient precision to allow for mind uploading. I just want to work toward making mind uploading happen in reality. To do this I need to start with an understanding of the requirements, based not on the fewest assumptions but on the field’s current best theories. Read more »

Friendly AI Research Help from MIRI

News

Earlier this year, a student told us he was writing an honors thesis on logical decision theories such as TDT and UDT — one of MIRI’s core research areas. Our reply was “Why didn’t you tell us this earlier? When can we fly you to Berkeley to help you with it?”

So we flew Danny Hintze to Berkeley, where he spent a couple of days with Eliezer Yudkowsky clarifying the ideas for the thesis. Then Danny went home and wrote what is probably the best current introduction to logical decision theories.

Inspired by this success, today we are launching the Friendly AI Research Help program, which encourages students of mathematics, computer science, and formal philosophy to collaborate and consult with our researchers to help steer and inform their work.

Apply for research help here.


John Fox on AI safety

Conversations

John Fox is an interdisciplinary scientist with theoretical interests in AI and computer science, and an applied focus in medicine and medical software engineering. After training in experimental psychology at Durham and Cambridge Universities, and post-doctoral fellowships in the USA (CMU and Cornell) and the UK (MRC), he joined the Imperial Cancer Research Fund (now Cancer Research UK) in 1981 as a researcher in medical AI. The group’s research was explicitly multidisciplinary, and it subsequently made significant contributions in basic computer science, AI, and medical informatics, and developed a number of successful technologies which have been commercialised.

In 1996 he and his team were awarded the 20th Anniversary Gold Medal of the European Federation of Medical Informatics for the development of PROforma, arguably the first formal computer language for modeling clinical decisions and processes. Fox has published widely in computer science, cognitive science, and biomedical engineering, and was the founding editor of the Knowledge Engineering Review (Cambridge University Press). Recent publications include the research monograph Safe and Sound: Artificial Intelligence in Hazardous Applications (MIT Press, 2000), which deals with the use of AI in safety-critical fields such as medicine.

Luke Muehlhauser: You’ve spent many years studying AI safety issues, in particular in medical contexts, e.g. in your 2000 book with Subrata Das, Safe and Sound: Artificial Intelligence in Hazardous Applications. What kinds of AI safety challenges have you focused on in the past decade or so?


John Fox: From my first research job, as a post-doc with AI founders Allen Newell and Herb Simon at CMU, I have been interested in computational theories of high-level cognition. As a cognitive scientist I have been drawn to theories that subsume a range of cognitive functions, from perception and reasoning to the uses of knowledge in autonomous decision-making. After I came back to the UK in 1975, I began to combine my theoretical interests with the practical goals of designing and deploying AI systems in medicine.

Since our book was published in 2000 I have been committed to testing the ideas in it by designing and deploying many kinds of clinical systems, and demonstrating that AI techniques can significantly improve the quality and safety of clinical decision-making and process management. Patient safety is fundamental to clinical practice, so, alongside the goals of building systems that can improve on human performance, safety and ethics have always been near the top of my research agenda.


Luke Muehlhauser: Was it straightforward to address issues like safety and ethics in practice?


John Fox: While our concepts and technologies have proved to be clinically successful, we have not achieved everything we hoped for. Our attempts to ensure, for example, that practical and commercial deployments of AI technologies explicitly honor ethical principles and carry out active safety management have not yet gained the traction we need. I regard this as a serious cause for concern, and as unfinished business in both scientific and engineering terms.

The next generation of large-scale knowledge based systems and software agents that we are now working on will be more intelligent and will have far more autonomous capabilities than current systems. The challenges for human safety and ethical use of AI that this implies are beginning to mirror those raised by the singularity hypothesis. We have much to learn from singularity researchers, and perhaps our experience in deploying autonomous agents in human healthcare will offer opportunities to ground some of the singularity debates as well.

Read more »

Daniel Roy on probabilistic programming and AI

Conversations

Daniel Roy is an Assistant Professor of Statistics at the University of Toronto. Roy earned an S.B. and M.Eng. in Electrical Engineering and Computer Science, and a Ph.D. in Computer Science, from MIT. His dissertation on probabilistic programming received the department’s George M. Sprowls Thesis Award. Subsequently, he held a Newton International Fellowship of the Royal Society, hosted by the Machine Learning Group at the University of Cambridge, and then held a Research Fellowship at Emmanuel College. Roy’s research focuses on theoretical questions that mix computer science, statistics, and probability.

Luke Muehlhauser: The abstract of Ackerman, Freer, and Roy (2010) begins:

As inductive inference and machine learning methods in computer science see continued success, researchers are aiming to describe even more complex probabilistic models and inference algorithms. What are the limits of mechanizing probabilistic inference? We investigate the computability of conditional probability… and show that there are computable joint distributions with noncomputable conditional distributions, ruling out the prospect of general inference algorithms.

In what sense does your result (with Ackerman & Freer) rule out the prospect of general inference algorithms?


Daniel Roy: First, it’s important to highlight that when we say “probabilistic inference” we are referring to the problem of computing conditional probabilities, while highlighting the role of conditioning in Bayesian statistical analysis.

Bayesian inference centers around so-called posterior distributions. From a subjectivist standpoint, the posterior represents one’s updated beliefs after seeing (i.e., conditioning on) the data. Mathematically, a posterior distribution is simply a conditional distribution (and every conditional distribution can be interpreted as a posterior distribution in some statistical model), and so our study of the computability of conditioning also bears on the problem of computing posterior distributions, which is arguably one of the core computational problems in Bayesian analyses.
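In symbols (a standard statement of Bayes’ theorem, added here for concreteness): for a model with parameter $\theta$, prior $p(\theta)$, and likelihood $p(x \mid \theta)$, the posterior given observed data $x$ is the conditional distribution

$$
p(\theta \mid x) \;=\; \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, d\theta'},
$$

where the denominator is the marginal probability of the data. Computing this conditional is exactly the operation whose computability the paper investigates.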

Second, it’s important to clarify what we mean by “general inference”. In machine learning and artificial intelligence (AI), there is a long tradition of defining formal languages in which one can specify probabilistic models over a collection of variables. Defining distributions can be difficult, but these languages can make it much more straightforward.

The goal is then to design algorithms that can use these representations to support important operations, like computing conditional distributions. Bayesian networks can be thought of as such a language: You specify a distribution over a collection of variables by specifying a graph over these variables, which breaks down the entire distribution into “local” conditional distributions corresponding with each node, which are themselves often represented as tables of probabilities (at least in the case where all variables take on only a finite set of values). Together, the graph and the local conditional distributions determine a unique distribution over all the variables.
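For concreteness, here is a minimal sketch of the classic rain/sprinkler network (my own illustrative example, with made-up probabilities), showing how a graph plus local conditional tables determines a unique joint distribution:

```python
import itertools

# Classic network: Rain -> Sprinkler, (Rain, Sprinkler) -> GrassWet.
# Local conditional probability tables (illustrative numbers).
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: {True: 0.01, False: 0.99},   # P(S | R=True)
               False: {True: 0.40, False: 0.60}}  # P(S | R=False)
p_wet = {(True, True): 0.99, (True, False): 0.80,
         (False, True): 0.90, (False, False): 0.00}  # P(W=True | R, S)

def joint(r, s, w):
    """The joint probability factorizes into the local tables along the graph."""
    pw = p_wet[(r, s)]
    return p_rain[r] * p_sprinkler[r][s] * (pw if w else 1 - pw)

# Sanity check: the probabilities of all eight assignments sum to 1.
total = sum(joint(r, s, w)
            for r, s, w in itertools.product([True, False], repeat=3))
print(total)  # 1.0
```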

An inference algorithm that supports the entire class of finite, discrete Bayesian networks might be called general, but as a class of distributions, those representable by finite, discrete Bayesian networks form a rather small one.

In this work, we are interested in the prospect of algorithms that work on very large classes of distributions. Namely, we are considering the class of samplable distributions, i.e., the class of distributions for which there exists a probabilistic program that can generate a sample using, e.g., uniformly distributed random numbers or independent coin flips as a source of randomness. The class of samplable distributions is a natural one: indeed it is equivalent to the class of computable distributions, i.e., those for which we can devise algorithms to compute lower bounds on probabilities from descriptions of open sets. The class of samplable distributions is also equivalent to the class of distributions for which we can compute expectations from descriptions of bounded continuous functions.
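For instance (a hypothetical sketch, not an example from the paper), a samplable distribution is represented simply by a program that produces draws from it using uniform random numbers and coin flips:

```python
import random

def sample_xy():
    """A probabilistic program: each call returns one draw from a joint
    distribution over (X, Y). X is uniform on [0, 1]; Y is X plus
    coin-flip noise. The program itself is the representation of the
    distribution."""
    x = random.random()               # uniform source of randomness
    coin = random.random() < 0.5      # an independent fair coin flip
    y = x + (0.1 if coin else -0.1)
    return x, y

draws = [sample_xy() for _ in range(5)]
print(draws)
```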

The class of samplable distributions is, in a sense, the richest class you might hope to deal with. The question we asked was: is there an algorithm that, given a samplable distribution on two variables X and Y, represented by a program that samples values for both variables, can compute the conditional distribution of, say, Y given X=x, for almost all values of X? When X takes values in a finite, discrete set, e.g., when X is binary valued, there is a general algorithm, although it is inefficient. But when X is continuous, e.g., when it can take on every value in the unit interval [0,1], then problems can arise. In particular, there exists a distribution on a pair of numbers in [0,1] from which one can generate perfect samples, but for which it is impossible to compute conditional probabilities for one of the variables given the other. As one might expect, the proof reduces the halting problem to that of conditioning a specially crafted distribution.
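The general-but-inefficient algorithm for the discrete case can be read as rejection sampling: run the sampler repeatedly and keep only the runs where X matches the observed value. A minimal sketch under that reading (the toy distribution and function names are mine):

```python
import random

def sample_xy():
    """Toy samplable joint: X is a fair coin; Y depends on X."""
    x = random.random() < 0.5
    y = random.random() < (0.9 if x else 0.2)
    return x, y

def condition(sampler, x_obs, n=100_000):
    """Estimate P(Y=True | X=x_obs) by rejection: discard runs whose X
    differs from the observation. Works whenever X is discrete, but the
    acceptance rate (and hence efficiency) shrinks with P(X=x_obs);
    it fails outright if no sample is ever accepted."""
    accepted = [y for x, y in (sampler() for _ in range(n)) if x == x_obs]
    return sum(accepted) / len(accepted)

print(condition(sample_xy, True))   # approximately 0.9
```

When X is continuous, the event X = x has probability zero, so this recipe breaks down immediately; the result described above shows that, in general, no cleverer replacement can exist.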

This pathological distribution rules out the possibility of a general algorithm for conditioning (equivalently, for probabilistic inference). The paper ends by giving some further conditions that, when present, allow one to devise general inference algorithms. Those familiar with computing conditional distributions for finite-dimensional statistical models will not be surprised that conditions necessary for Bayes’ theorem are one example.
Read more »