MIRI’s May 2014 Newsletter
During his time as a MIRI researcher, Kaj Sotala contributed to a paper now published in the Journal of Experimental & Theoretical Artificial Intelligence: “The errors, insights and lessons of famous AI predictions – and what they mean for the future.”
Abstract:
Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. Then it constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus’s criticism of AI, Searle’s Chinese room paper, Kurzweil’s predictions in the Age of Spiritual Machines, and Omohundro’s ‘AI drives’ paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.
Dr. Suresh Jagannathan joined DARPA in September 2013. His research interests include programming languages, compilers, program verification, and concurrent and distributed systems.
Prior to joining DARPA, Dr. Jagannathan was a professor of computer science at Purdue University. He has also served as visiting faculty at Cambridge University, where he spent a sabbatical year in 2010; and as a senior research scientist at the NEC Research Institute in Princeton, N.J.
Dr. Jagannathan has published more than 125 peer-reviewed conference and journal publications and has co-authored one textbook. He holds three patents. He serves on numerous program and steering committees, and is on the editorial boards of several journals.
Dr. Jagannathan holds Doctor of Philosophy and Master of Science degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. He earned a Bachelor of Science degree in Computer Science from the State University of New York, Stony Brook.
Luke Muehlhauser: From your perspective, what are some of the most interesting or important developments in higher-order verification in the past decade?
Suresh Jagannathan: I would classify the developments of the past decade into four broad categories:
Ruediger Schack is a Professor at the Department of Mathematics at Royal Holloway, University of London. He obtained his PhD in Theoretical Physics at the University of Munich in 1991 and held postdoctoral positions at the Max Planck Institute for Quantum Optics, the University of Southern California, the University of New Mexico, and Queen Mary and Westfield College before joining Royal Holloway in 1995. His research interests are quantum information theory, quantum cryptography and quantum Bayesianism.
Luke Muehlhauser: In Fuchs et al. (2013), you and your co-authors provide an introduction to quantum Bayesianism aka “QBism,” which you more or less co-invented with Carlton Caves and Christopher Fuchs. But before I ask about QBism, let me ask one of the questions asked of the interviewees in Elegance and Enigma: The Quantum Interviews (including Fuchs): “What first stimulated your interest in the foundations of quantum mechanics?”
Ruediger Schack: I can trace the beginning of my interest in quantum foundations to reading one paper: “Where do we stand on maximum entropy?” by Ed Jaynes, and one book: Du Microscopique au Macroscopique by Roger Balian. Jaynes’s paper introduced me to Bayesian probability theory, and Balian’s book taught me that one can think of quantum states as representing Bayesian probabilities.
David J. Atkinson, Ph.D., is a Senior Research Scientist at the Florida Institute for Human and Machine Cognition (IHMC). His current research envisions future applications of intelligent, autonomous agents, perhaps embodied as robots, that work alongside humans as partners in teamwork or provide services. Dr. Atkinson’s major focus is on fostering appropriate reliance and interdependency between humans and agents, and on the role of social interaction in building a foundation for mutual trust between humans and intelligent, autonomous agents. He is also interested in cognitive robotics, meta-reasoning, self-awareness, and affective computing. Previously, he held several positions at the Jet Propulsion Laboratory (JPL, a NASA center managed by the California Institute of Technology), where his work spanned basic research in artificial intelligence, autonomous systems, and robotics, with applications to robotic spacecraft, control center automation, and science data analysis. Recently, Dr. Atkinson delivered an invited plenary lecture on the topic of “Trust Between Humans and Intelligent Autonomous Agents” at the 2013 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT 2013). Dr. Atkinson holds a Bachelor’s degree in Psychology from the University of Michigan, dual Master of Science and Master of Philosophy degrees in Computer Science (Artificial Intelligence) from Yale University, and the Doctor of Technology degree (d.Tekn) in Computer Systems Engineering from Chalmers University of Technology in Sweden.
Luke Muehlhauser: One of your projects at IHMC is “The Role of Benevolence in Trust of Autonomous Systems”:
The exponential combinatorial complexity of the near-infinite number of states possible in autonomous systems voids the applicability of traditional verification and validation techniques for complex systems. New and robust methods for assessing the trustworthiness of autonomous systems are urgently required if we are to have justifiable confidence in such applications both pre-deployment and during operations… The major goal of the proposed research is to operationalize the concept of benevolence as it applies to the trustworthiness of an autonomous system…
Some common approaches for ensuring desirable behavior from AI systems include testing, formal methods, hybrid control, and simplex architectures. Where does your investigation of “benevolence” in autonomous systems fit into this landscape of models and methods?
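For readers unfamiliar with the last approach named above, here is a minimal sketch of the simplex-architecture idea: an unverified, high-performance controller runs by default, and a decision module switches to a simple, verified safety controller whenever the advanced action would push the system outside a proven-safe envelope. The gains, bounds, and dynamics below are illustrative assumptions, not details from Atkinson’s project.

```python
# Minimal sketch of a simplex architecture. All numbers and dynamics
# here are illustrative assumptions for a toy one-dimensional system.

SAFE_LIMIT = 1.0  # verified-safe envelope: |state| must stay below this


def advanced_controller(state, target):
    # High-performance but unverified: aggressively chases the target,
    # even when the target lies outside the safe envelope.
    return 5.0 * (target - state)


def safety_controller(state):
    # Simple and conservative: verifiably drives the state toward zero.
    return -0.5 * state


def decision_module(state, target, dt):
    """Prefer the advanced controller; fall back to the verified one
    if a one-step prediction says its action would leave the envelope."""
    u = advanced_controller(state, target)
    if abs(state + dt * u) < SAFE_LIMIT:
        return u
    return safety_controller(state)


if __name__ == "__main__":
    state, target, dt = 0.0, 1.5, 0.1  # target sits outside the envelope
    for _ in range(8):
        state += dt * decision_module(state, target, dt)
        print(f"state = {state:+.3f}")  # stays inside |state| < 1.0
```

The design point of the sketch is that trust rests entirely on the small safety controller and the switching rule, so only those two pieces need formal verification, no matter how complex the advanced controller is.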
Update: We’re now liveblogging the fundraiser here.
On May 6th, MIRI is participating in Silicon Valley Gives. We were selected to participate along with other local Bay Area charities by the Silicon Valley Community Foundation. On this day, we recommend donors make gifts to MIRI through the SV Gives portal so we can qualify for some of the matching and bonus funds provided by dozens of Bay Area philanthropists.
Why is this exciting for supporters of MIRI? Many reasons, but here are a few:
Making the most of this opportunity will require some cleverness and a lot of coordination. We are going to need all the help we can get. Here are some ways you can help:
Roland Siegwart (born in 1959) is a Professor for Autonomous Systems and Vice President of Research and Corporate Relations at ETH Zurich. After studying mechanics and mechatronics at ETH, he was engaged in starting up a spin-off company, spent ten years as a professor for autonomous microsystems at EPFL Lausanne, and held visiting positions at Stanford University and NASA Ames.
His research interests are in the creation and control of intelligent robots operating in complex and highly dynamic environments. Prominent examples include personal and service robots, inspection devices, autonomous micro-aircraft, and walking robots. He has coordinated several European projects, is a co-founder of half a dozen spin-off companies, and serves as a board member of various high-tech companies.
Roland Siegwart is a member of the Swiss Academy of Engineering Sciences, an IEEE Fellow, and an officer of the International Federation of Robotics Research (IFRR). He serves on the editorial boards of multiple robotics journals and has been general chair of several robotics conferences, including IROS 2002, AIM 2007, FSR 2007, and ISRR 2009.
Luke Muehlhauser: In 2004 you co-authored Introduction to Autonomous Mobile Robots, which offers tutorials on many of the basic tasks of autonomous mobile robots: locomotion, kinematics, perception, localization, navigation, and planning.
In your estimation, what are the most common approaches to “gluing” these functions together? E.g. are most autonomous mobile robots designed using an agent architecture, or some other kind of architecture?
Roland Siegwart: Mobile robots are very complex systems that have to operate in real-world environments and make decisions based on uncertain and only partially available information. To do so, the robot’s locomotion, perception, and navigation systems have to be well adapted to the environment and the application setting. Robotics is therefore above all a systems engineering task, requiring broad knowledge and creativity. A poorly chosen sensor setup cannot be compensated for by the control algorithms. In my view, the only proven concepts for autonomous decision making with mobile robots are Gaussian Processes and Bayes Filters. They make it possible to deal with uncertain and partial information in a consistent way, and they enable learning. Gaussian Processes and Bayes Filters can model a large variety of estimation and decision processes and can be implemented in different forms, e.g. as the well-known Kalman Filter estimator.
Most mobile robots use some sort of agent architecture. However, this is not a key issue for mobile robots, but rather an implementation matter for systems that run multiple tasks in parallel. The main perception, navigation, and control algorithms have to adapt to unknown situations in a predictable and consistent manner. The algorithms and navigation concepts should therefore also allow the robotics engineer to learn from experiments. This is only possible if navigation, control, and decision making are implemented not as a black box but via a model-based approach that takes best advantage of prior knowledge and system models.
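To make the Bayes Filter point concrete, here is a minimal sketch of a one-dimensional Kalman Filter, the special case Siegwart names. The noise variances and motion commands are illustrative assumptions, not values from any particular robot.

```python
# Minimal 1-D Kalman Filter: a Bayes Filter for linear-Gaussian models.
# All numbers (process/measurement noise, commands) are illustrative.

def kalman_step(mean, var, u, z, motion_var=0.1, sensor_var=0.5):
    """One predict-update cycle of a 1-D Kalman Filter.

    mean, var : current Gaussian belief over the robot's position
    u         : commanded displacement (motion model: x' = x + u)
    z         : noisy position measurement
    """
    # Predict: push the belief through the motion model; uncertainty grows.
    mean, var = mean + u, var + motion_var

    # Update: fuse prediction and measurement via the Kalman gain.
    k = var / (var + sensor_var)
    mean = mean + k * (z - mean)
    var = (1.0 - k) * var
    return mean, var


if __name__ == "__main__":
    mean, var = 0.0, 1.0  # initial belief: position ~ N(0, 1)
    for u, z in [(1.0, 1.2), (1.0, 2.1), (1.0, 2.9)]:
        mean, var = kalman_step(mean, var, u, z)
        print(f"belief: N({mean:.3f}, {var:.3f})")
```

Note how the filter handles exactly the situation Siegwart describes: each step blends an uncertain motion prediction with an uncertain measurement, weighting each by its variance, so partial information is used consistently rather than discarded.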
Read more »
Domitilla Del Vecchio received the Ph.D. degree in Control and Dynamical Systems from the California Institute of Technology, Pasadena, in 2005, and the Laurea degree in Electrical Engineering from the University of Rome Tor Vergata in 1999. From 2006 to 2010, she was an Assistant Professor in the Department of Electrical Engineering and Computer Science and in the Center for Computational Medicine and Bioinformatics at the University of Michigan, Ann Arbor. In 2010, she joined the Department of Mechanical Engineering and the Laboratory for Information and Decision Systems (LIDS) at the Massachusetts Institute of Technology (MIT), where she is currently an Associate Professor. She is a recipient of the Donald P. Eckman Award from the American Automatic Control Council (2010), the NSF CAREER Award (2007), the Crosby Award from the University of Michigan (2007), the American Control Conference Best Student Paper Award (2004), and the Bank of Italy Fellowship (2000). Her research interests include analysis and control of networked dynamical systems, with applications to biomolecular networks and transportation networks.
Luke Muehlhauser: In Verma & del Vecchio (2011), you and your co-author summarize some recent work in semiautonomous multivehicle safety from the perspective of hybrid systems control. These control systems will “warn the driver about incoming collisions, suggest safe actions, and ultimately take control of the vehicle to prevent an otherwise certain collision.”
I’d like to ask about the application of hybrid control to self-driving cars in particular. Presumably, self-driving cars will operate in two modes: “semi-autonomous” (human driver, with the vehicle providing warnings and preventing some actions) and “fully autonomous” (no human driver). Do you think hybrid control will be used for both purposes, in commercial self-driving cars released (e.g.) 10 years from now? Or do you think hybrid control will be competing with other approaches aimed at ensuring safe behavior in autonomous and semi-autonomous vehicles?
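For readers new to the formalism: a hybrid system combines discrete modes with continuous dynamics. The sketch below shows a toy supervisor that switches among driver-control, warn, and override modes based on an estimated time to collision, in the spirit of the behavior quoted above. The mode names and thresholds are illustrative assumptions, not details from Verma & del Vecchio (2011).

```python
# Minimal sketch of a hybrid (discrete modes + continuous state) safety
# supervisor for a semi-autonomous vehicle. Thresholds are illustrative.

from enum import Enum


class Mode(Enum):
    DRIVER = "driver in control"
    WARN = "warn driver, suggest safe action"
    OVERRIDE = "vehicle takes control"


def supervise(gap_m, closing_speed_mps, warn_ttc=4.0, override_ttc=1.5):
    """Pick a discrete control mode from the continuous state.

    gap_m             : distance to the leading vehicle (meters)
    closing_speed_mps : positive when the gap is shrinking (m/s)
    """
    if closing_speed_mps <= 0:
        return Mode.DRIVER  # gap is stable or growing
    ttc = gap_m / closing_speed_mps  # time to collision (seconds)
    if ttc < override_ttc:
        return Mode.OVERRIDE  # brake autonomously: collision otherwise likely
    if ttc < warn_ttc:
        return Mode.WARN  # alert the driver before intervening
    return Mode.DRIVER


if __name__ == "__main__":
    for gap, v in [(60.0, 5.0), (20.0, 8.0), (8.0, 8.0)]:
        print(f"gap={gap:5.1f} m, closing={v:.1f} m/s ->", supervise(gap, v).value)
```

The hybrid-systems machinery in the paper is about proving that such mode-switching logic keeps the joint vehicle states out of unsafe sets; this toy version only illustrates the structure of the discrete-continuous interaction.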