David J. Atkinson on autonomous systems


David J. Atkinson, Ph.D., is a Senior Research Scientist at the Florida Institute for Human and Machine Cognition (IHMC). His current area of research envisions future applications of intelligent, autonomous agents, perhaps embodied as robots, who work alongside humans as partners in teamwork or provide services. Dr. Atkinson’s major focus is on fostering appropriate reliance and interdependency between humans and agents, and on the role of social interaction in building a foundation for mutual trust between humans and intelligent, autonomous agents. He is also interested in cognitive robotics, meta-reasoning, self-awareness, and affective computing. Previously, he held several positions at the California Institute of Technology’s Jet Propulsion Laboratory (JPL), a NASA center, where his work spanned basic research in artificial intelligence, autonomous systems, and robotics, with applications to robotic spacecraft, control center automation, and science data analysis. Recently, Dr. Atkinson delivered an invited plenary lecture on the topic of “Trust Between Humans and Intelligent Autonomous Agents” at the 2013 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT 2013). Dr. Atkinson holds a Bachelor’s degree in Psychology from the University of Michigan, dual Master of Science and Master of Philosophy degrees in Computer Science (Artificial Intelligence) from Yale University, and a Doctor of Technology degree (d.Tekn) in Computer Systems Engineering from Chalmers University of Technology in Sweden.

Luke Muehlhauser: One of your projects at IHMC is “The Role of Benevolence in Trust of Autonomous Systems”:

The exponential combinatorial complexity of the near-infinite number of states possible in autonomous systems voids the applicability of traditional verification and validation techniques for complex systems. New and robust methods for assessing the trustworthiness of autonomous systems are urgently required if we are to have justifiable confidence in such applications both pre-deployment and during operations…  The major goal of the proposed research is to operationalize the concept of benevolence as it applies to the trustworthiness of an autonomous system…

Some common approaches for ensuring desirable behavior from AI systems include testing, formal methods, hybrid control, and simplex architectures. Where does your investigation of “benevolence” in autonomous systems fit into this landscape of models and methods?


David J. Atkinson: First, let me point out that testing, formal methods, and other such techniques have little to do with ensuring desirable behavior and more to do with avoiding errors in behavior due to design or implementation flaws. These are not equivalent. Existing techniques improve reliability, but that is only one component of trust, and it is only concerned with behavior “as designed”. Furthermore, as your quote from my material points out, when it comes to the near-infinite state spaces of autonomy, those “common approaches” are inherently limited and cannot make the same strong claims regarding system behavior that they could with machines whose envelope of behavior could be completely known.

Therefore, I chose to look for answers regarding trust, trustability, and trustworthiness in the operations phase of the autonomous system lifecycle, because it is here, not in testing and evaluation, that the full complexity of autonomous behavior will manifest itself in response to the uncertainty, dynamics, and real-world complexity that is its major strength. My approach is to focus on the applicability of human interpersonal trust to the operation of autonomous systems. The principal reasons for this are: 1) Autonomy is limited — humans make the decisions to rely upon an intelligent agent, subject to a variety of individual and situational factors and constraints; and 2) Eons of evolution have created a reasonably good mechanism in humans for trust. It is fundamental to every human social transaction and it works. Beyond reliability, studies in multiple disciplines have shown that people want evidence of capability, predictability, openness (aka transparency), and safety before granting a measure of trustworthiness to a machine. Key questions revolve around the nature of that evidence, how it is provided, and what inferences can be reasonably made. People use both cognitive and affective mental processes for evaluating trustworthiness. My central claim is that if we can reverse engineer these mechanisms and make intelligent autonomous agents (IATs) that are compliant with the human trust process (two big ifs), then we will have created a new way for humans to have trusting relationships with the machines they rely upon. It will be a transition from seeing IATs as tools to treating them as partners.

Benevolence is interesting for a couple of reasons. First, it is a complex attribution built upon a structure of beliefs about another person that includes good will, competency, predictability, lack of a hidden agenda, agency, and other beliefs, most of which are likely to play a role in many kinds of interactions between human and IAT. Second, just looking at that list of beliefs makes it a hard problem, although my colleagues think people will have no problem attributing agency to a machine. Third, there are important applications of IATs, perhaps embodied as robots, where the unique psychology of benevolence is critical to success. For example, disaster rescue. It has long been known that disaster victims have a unique psychology born of stress, fear, and the psycho-physiological effects these evoke. Human first responders undergo special training on victim psychology. One of the reasons is that a victim, justifiably afraid for their life, may be very reluctant to trust a rescuer, and without trust there is no cooperation and rescue can become very difficult. Benevolence seems to be a part of the trust that is required. Today, we have no idea whatsoever whether a real disaster victim will trust and cooperate with a robot rescuer.

Circling back to your question about “fitting into the landscape of models and methods”, the ultimate goal of my research is to formulate design requirements, interaction methods, operations concepts, guidelines, and more that, if followed, will result in an IAT that can itself engender well-justified human trust.


Luke: As you say, formal methods and other techniques are of limited help in cases where we don’t know how to formulate comprehensive and desirable design requirements, as is often the case for autonomous systems operating in unknown, dynamic environments. What kinds of design requirements might your approach to trustworthy systems suggest? Would these be formal design requirements, or informal ones?


David: There will certainly be gaps in what we can do today regarding the specific requirements related to unknown, dynamic environments. One of our objectives is to narrow those gaps so precise questions can be studied. It is one thing to wave hands and moan about uncertainty, and another entirely to do something about it.

Generally speaking, we are working towards formal specification of the traditional types of requirements: Functional, Performance, Design Constraint, and Interface (both Internal and External). By “formal”, I mean “complete according to best practices”. I do not mean “expressed in formal logic or according to a model-based language” — that is a step beyond. Requirements may be linked in numerous ways to each other forming a directed graph (hopefully with no cycles!). Relationships include Source, Required-By, Depends-upon and so forth.
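To make that structure concrete, here is a minimal sketch (illustrative names only, not the project’s actual schema) of requirements as typed nodes in a directed graph, with a simple check that the dependency links stay acyclic:

```python
from dataclasses import dataclass, field

# Illustrative only -- not the project's actual requirement schema.
REQ_TYPES = {"Functional", "Performance", "Design Constraint", "Interface"}
LINK_TYPES = {"Source", "Required-By", "Depends-Upon", "Derived-From"}

@dataclass
class Requirement:
    req_id: str
    req_type: str                               # one of REQ_TYPES
    title: str
    text: str
    links: dict = field(default_factory=dict)   # link type -> list of target req_ids

def has_cycle(reqs):
    """Return True if the Depends-Upon/Derived-From links contain a cycle."""
    visiting, done = set(), set()

    def visit(rid):
        if rid in done:
            return False
        if rid in visiting:
            return True                          # back edge found -> cycle
        visiting.add(rid)
        for link_type in ("Depends-Upon", "Derived-From"):
            for target in reqs[rid].links.get(link_type, []):
                if target in reqs and visit(target):
                    return True
        visiting.discard(rid)
        done.add(rid)
        return False

    return any(visit(rid) for rid in reqs)
```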

For example, an attribution of benevolence by a human “Trustor” requires a belief (among numerous others) that the candidate “Trustee” (the autonomous system) has “no hidden ill will”. This is a very anthropomorphic concept that some might scoff at, but studies have demonstrated its importance to the attribution of benevolence, and so it is a factor we must reckon with in designing trustworthy and trustable autonomous systems. But what does it even mean?

Just to give you an idea of how we are breaking this down, here are some of the derived requirements. I should emphasize that it is very premature to make any claims about the quality or completeness of what we have done with requirements engineering thus far — mostly that work is on the schedule for next year, so I’ll only give you the types and titles. We will have, as a matter of course, a generic Level 1 Functional requirement to “Provide Information to Human User”. Derived from this is an Interface requirement something like “Volunteer Information: The Autonomous System shall initiate communication and provide information that is important to humans” (actually, that is two requirements). This in turn is elaborated by a number of Design Constraints such as “Disposition: The Autonomous System shall disclose any disposition that could result in harm to the interests of a human”. These Design Constraints are in turn linked to numerous other detailed requirements, such as this Performance Requirement: “Behavior – Protective: The Autonomous System shall recognize when its behavior could result in harm to the interests of a human”. A designer of an autonomous system will recognize in this simple example a number of very hard problems that need to be solved to effectively address the hypothetical need for a benevolent autonomous system. Our focus in this project is on “what” needs to be done, not “how” to do it.
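As a toy illustration of that chain (the IDs and abbreviated wording below are hypothetical), the derivation from the Level 1 functional requirement down to the performance requirement can be captured as simple “Derived-From” links and replayed as a trace:

```python
# Hypothetical requirement IDs and abbreviated wording, purely for illustration.
requirements = {
    "F-1":  ("Functional",        "Provide Information to Human User"),
    "I-1a": ("Interface",         "Volunteer Information: initiate communication with humans"),
    "I-1b": ("Interface",         "Volunteer Information: provide information important to humans"),
    "DC-7": ("Design Constraint", "Disposition: disclose any disposition that could harm a human's interests"),
    "P-12": ("Performance",       "Behavior - Protective: recognize when behavior could harm a human's interests"),
}

# Child requirement -> the requirement it is derived from ("Derived-From" links).
derived_from = {
    "I-1a": "F-1",
    "I-1b": "F-1",
    "DC-7": "I-1b",
    "P-12": "DC-7",
}

def trace(req_id):
    """Walk the Derived-From links from a requirement back up to its top-level source."""
    chain = [req_id]
    while chain[-1] in derived_from:
        chain.append(derived_from[chain[-1]])
    return " -> ".join(f"{rid}: {requirements[rid][1]}" for rid in reversed(chain))

print(trace("P-12"))   # prints the chain F-1 -> I-1b -> DC-7 -> P-12 with titles
```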

Without a doubt, this requirements process will generate plenty of questions that will require further study. It is also likely that some requirements may conflict and any particular application of autonomy must do tradeoff studies to prioritize. Nevertheless, our goal is to spell out as much as we can. We will differentiate mandatory requirements from goals or objectives. Where it is possible, we will allocate requirements to particular elements of an autonomous system’s architecture, such as “goal selection mechanism” (where issues relating to prioritization may arise and affect predictability and therefore trust). For every requirement, we will provide a rationale with links to studies and empirical data or other discussion that can be used in analysis. That part is very high priority in my mind. Too many times I have encountered requirements where it is impossible to understand how they were derived. I’d also like to take a risk-driven view of the requirements so that individual requirements, or groups of requirements, can be associated with particular risks. That is another area that will have to be quantified by further application-specific analysis. Finally, a good requirement has to be verifiable. There is considerable work to be done on this topic with respect to autonomous systems.
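In the same illustrative spirit, the additional bookkeeping just described (rationale links, allocation to architecture elements, associated risks, a verification method, and the mandatory-versus-goal distinction) amounts to a few extra fields on each requirement record; again, this is a hypothetical sketch, not our actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class RequirementRecord:
    """Hypothetical record illustrating the metadata discussed above."""
    req_id: str
    text: str
    mandatory: bool = True                         # mandatory requirement vs. goal/objective
    rationale: list = field(default_factory=list)  # links to studies, empirical data, or discussion
    allocated_to: str = ""                         # architecture element, e.g. "goal selection mechanism"
    risks: list = field(default_factory=list)      # associated risks, to be quantified per application
    verification: str = "TBD"                      # how the requirement will be verified

example = RequirementRecord(
    req_id="P-12",
    text="Recognize when behavior could harm the interests of a human",
    allocated_to="goal selection mechanism",       # illustrative allocation only
    rationale=["<link to supporting study or empirical data>"],
)
```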

Requirements engineering is a lot of work, of course, and the history of large-scale system development is replete with horror stories of poor requirements. That’s why I want to express our trust-related requirements formally, following requirements engineering standards to the extent possible. From my previous experience at NASA, I know that the less work a project has to do, the more likely it is to adopt existing requirements. So we are developing our trust-related requirements consciously, with the goal of making them easy to understand and easy to reuse. Finally, given the scope of what is required, it is likely that we will only be able to go just so far under the auspices of our current project, providing a vector for future work (hint, hint to potential sponsors out there!).


Luke: From your description, this research project seems somewhat interdisciplinary, and the methodology seems less clear-cut than is the case with many other lines of research aimed at similar goals (e.g. some new project to model-check a particular software design for use in robotics). It’s almost “pre-paradigmatic,” in the Kuhnian sense. Do you agree? If not, are there other research groups who are using this methodology to explore related problems?


David: Yes, the project is very interdisciplinary, with the primary disciplines being social and cognitive psychology, social robotics, and artificial intelligence, as well as the rigor contributed by solid systems engineering. The relevant content within each of those disciplines can be quite broad. It is not a small undertaking. My hope is that I can plant some memes in each community to help bring them together on this topic of trust, and that these memes will foster further work. As far as methodology goes, we are pursuing both theoretical development and experimentation, and trying to be as rigorous as exploratory work of this nature will permit. We have to understand the previous psychological work on human interpersonal trust, and human factors studies on human-automation trust, to find those results that may have important implications for human trust of an intelligent, autonomous agent. The importance of agency is a good example.

We know from psychological studies that an attribution of “free will”, or more narrowly, the ability to choose, is an important component of deciding whether someone else is benevolent or not. That is, the more a person feels the other is compelled to help, the less likely they are to believe that other person is benevolent. Apart from philosophers, most people don’t think very deeply about this. They make a presumption of free will and then look to see if there are reasons it is limited; for example, is the person just following orders, or required by their profession or social norms to act in a particular way? With machines, we start from the other side: people assume machines have no free will because they believe machines are (just) programmed. However, there are some studies that suggest that when machine behavior is complex (enough), and somewhat unpredictable because there are numerous possible courses of action, then people begin to attribute mental states, including the ability to choose. My hunch is that this is an evoked response of our innate demand to interpret the actions of others in a framework of intentionality. At some point, enough features are present that those evolution-designed heuristics kick in to aid in understanding. WHY that may be, I will leave to the evolutionary socio-biologists.

Back to methodology now: this is a phenomenon for which we can design a study involving humans and machines, with systematic variation of various factors to see what qualities a machine actually requires in order to evoke a human attribution of the ability to choose to help. This year we will be conducting just such a study. I hope to begin running participants this summer. We have been rigorous with the experimental design, the choice of what data to collect, and the statistical methods we will use to analyze the results. While we will fudge a little bit on the robot implementation, using simulation in places and “wizard-of-oz” techniques for language interaction, for example, the products of the study ought to be recognized as solid science by researchers in multiple disciplines if we do it right. In general, I think this is a very hard goal to achieve because each discipline community has certain preferences and biases about what they like to see in methodology before they are convinced. There are a couple of other groups working on social robotics who use this approach, and a very small number of psychologists who are working on human-centered design of automation. I don’t want to start listing names because I’m sure there are others of whom I’m not yet aware. I do know that I am certainly not the first to confront this challenge. Multidisciplinary research of this type is always, in some sense, pre-paradigmatic because it is a struggle for understanding and legitimacy at the boundaries of separate disciplines, not at the core. Artificial intelligence has always been multidisciplinary, a strength as well as a weakness as far as broader acceptance among related disciplines is concerned. I don’t worry too much about what other people think. I just do what I believe has to be done.


Luke: What are some concrete outcomes you hope will be achieved during the next 5 years of this kind of research?


David: As a wise man once said, “It’s hard to make predictions, especially about the future.”

My hopes. This one is concrete to me, but perhaps not what you had in mind: I hope that the value of our approach will be convincingly demonstrated and other research groups and early-career researchers will join in. There is much to be explored and plenty of opportunity to make discoveries that can have a real impact.

On balance, it seems that in many potential application domains the biggest challenge is not mistrust or absence of trust, but excessive trust. It is abundantly clear that people have no trouble trusting machines in most contexts, and, as this is somewhat tied to generational attitudes, that tendency will probably increase. Sometimes this leads to over-reliance and complacency, and then to surprising (and potentially dangerous) conditions if things go sour. A funny but real example is the driver who set the cruise control on his RV on a long stretch of straight highway, got out of the driver’s seat, and went in back to make a sandwich. You can guess what happened. Effective teamwork and cooperation require that each team member understands the strengths and limitations of the others so that under- and over-reliance do not occur. Intelligent, autonomous teammates need that same ability to participate effectively in this essential team-building process — a process that takes time and experience to build mutual familiarity. We will contribute to that solution.

I believe our work will lead directly to an understanding of When, Why, What, and (some of) How a machine needs to interact with human teammates to inoculate against (and/or correct for) under- and over-reliance. This technology is key for solving what many people claim is a lack of transparency in intelligent systems. It will help “users” to better understand the competency of a machine (the most important quality) in a given context consisting of dynamic situational factors, tasks and goals. This will in turn increase predictability (another important quality) and thereby help mitigate concerns about risks and safety. Ultimately, a deep solution that is broadly applicable will require a higher degree of machine meta-reasoning and self-awareness than we can engineer today, but this is an area of active research where useful results ought to be appearing more and more frequently. (The field of cognitive (developmental) robotics is very exciting.) However, I do expect concrete and useful results for early applications in some semi-structured task domains. A few examples of domains containing “low hanging fruit” for applications are transportation (e.g., autonomy-assisted driving, long haul trucking), healthcare (patient monitoring, therapy assistance, assisted living), and some defense-related applications. My group is actively working towards all of these possibilities. I don’t want to leave you with the impression that creating effective applications will be easy because many hard basic research challenges remain, and we will undoubtedly discover others when we start to transition the technology into real-world applications. Nevertheless, I’m optimistic!


Luke: Thanks, David!