David J. Atkinson on autonomous systems
David J. Atkinson, Ph.D., is a Senior Research Scientist at the Florida Institute for Human and Machine Cognition (IHMC). His current research envisions future applications of intelligent, autonomous agents, perhaps embodied as robots, who work alongside humans as partners in teamwork or provide services. Dr. Atkinson’s major focus is on fostering appropriate reliance and interdependency between humans and agents, and on the role of social interaction in building a foundation for mutual trust between humans and intelligent, autonomous agents. He is also interested in cognitive robotics, meta-reasoning, self-awareness, and affective computing. Previously, he held several positions at the California Institute of Technology’s Jet Propulsion Laboratory (JPL), a NASA center, where his work spanned basic research in artificial intelligence, autonomous systems, and robotics, with applications to robotic spacecraft, control center automation, and science data analysis. Recently, Dr. Atkinson delivered an invited plenary lecture on the topic of “Trust Between Humans and Intelligent Autonomous Agents” at the 2013 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT 2013). Dr. Atkinson holds a Bachelor’s degree in Psychology from the University of Michigan, dual Master of Science and Master of Philosophy degrees in Computer Science (Artificial Intelligence) from Yale University, and the Doctor of Technology degree (d.Tekn) in Computer Systems Engineering from Chalmers University of Technology in Sweden.
Luke Muehlhauser: One of your projects at IHMC is “The Role of Benevolence in Trust of Autonomous Systems”:
The exponential combinatorial complexity of the near-infinite number of states possible in autonomous systems voids the applicability of traditional verification and validation techniques for complex systems. New and robust methods for assessing the trustworthiness of autonomous systems are urgently required if we are to have justifiable confidence in such applications both pre-deployment and during operations… The major goal of the proposed research is to operationalize the concept of benevolence as it applies to the trustworthiness of an autonomous system…
Some common approaches for ensuring desirable behavior from AI systems include testing, formal methods, hybrid control, and simplex architectures. Where does your investigation of “benevolence” in autonomous systems fit into this landscape of models and methods?
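(For readers unfamiliar with the last item in that list: a simplex architecture pairs a high-performance but hard-to-verify controller with a simple, verified fallback controller and a runtime decision module that switches between them. The sketch below is purely illustrative and not from the interview; the controllers, the one-dimensional plant model, and the safety envelope are all hypothetical stand-ins.)

```python
# Illustrative sketch of a simplex-style architecture: an untrusted
# "advanced" controller is monitored at runtime, and a simple verified
# fallback takes over whenever the proposed action would leave a
# predefined safe envelope. All numbers and models here are hypothetical.

SAFE_MIN, SAFE_MAX = -10.0, 10.0  # hypothetical safe state envelope

def advanced_controller(state: float) -> float:
    # Stand-in for a complex, hard-to-verify policy (e.g. a learned one).
    return 5.0 * (1.0 - state)

def safe_controller(state: float) -> float:
    # Simple, verifiable baseline: gently drive the state toward 0.
    return -0.5 * state

def predict_next(state: float, action: float) -> float:
    # Trivial plant model for illustration: next state = state + action.
    return state + action

def simplex_step(state: float) -> float:
    """Accept the advanced action only if its predicted outcome stays safe."""
    proposed = advanced_controller(state)
    if SAFE_MIN <= predict_next(state, proposed) <= SAFE_MAX:
        return proposed
    return safe_controller(state)  # verified fallback takes over

state = 8.0
for _ in range(5):
    state = predict_next(state, simplex_step(state))
    assert SAFE_MIN <= state <= SAFE_MAX  # invariant the fallback guarantees
```

The point of the pattern is that only the small fallback controller and the switching logic need formal verification; the advanced controller can be arbitrarily complex because it is never trusted beyond one lookahead step.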