Professor Sandor Veres was born and educated in Hungary as an applied mathematician. He completed his PhD in dynamical modelling of stochastic systems in 1983 and worked in industry on computer-controlled systems. In 1987-1988 he held two consecutive scholarships, at Imperial College London and at Linacre College, Oxford.
Between 1989 and 1999 he was a lecturer in the Department of Electronic and Electrical Engineering at the University of Birmingham, where he pursued research in system identification, adaptive control, and embedded electronic control systems. In 2000 he joined the University of Southampton to work in the areas of active vibration control systems, adaptive and learning systems, satellite formation flying, and autonomous control. Since 2002 he has held a chair in control systems engineering, and he was chairman of the IFAC Technical Committee on Adaptive and Learning Systems.
At Southampton he established the Centre for Complex Autonomous Systems Engineering, where he is now a visiting professor. Today his main research interest is agent-based control systems, and since 2013 he has led the Autonomous Systems and Robotics Group in the Department of Automatic Control and Systems Engineering at the University of Sheffield. He has published about 200 papers, authored 4 books, and co-authored numerous software packages.
Luke Muehlhauser: In “Autonomous Asteroid Exploration by Rational Agents” (2013), you discuss a variety of agent architectures for use in contexts where autonomous or semi-autonomous operation is critical — in particular, in outer space, where there are long communication delays between a robot and a human operator on Earth. Before we talk about rational agent architectures in particular, could you explain what an “agent” architecture is from your perspective, and why it is superior to other designs for many contexts?
Sandor Veres: First of all I would like to say that I am not sure that all kinds of agent architectures are “superior”. Some agent paradigms do, however, have advantages as a way of organizing the complex software system that controls an intelligent machine such as a robot. The feature I like most in some agent architectures is that they exhibit anthropomorphic operations: they can handle statements about the world in terms of space and time, past and future; express possibilities, necessities, and behaviour rules; represent knowledge of other agents’ knowledge, including that of humans; and handle intentions and beliefs about a situation. If such constructs are part of the software itself, as opposed to being translated from a different kind of software, that makes things simpler in terms of programming. Also, we would like robots to share their knowledge about the world with us, and to share our knowledge with them. It is an advantage if we share our ways of reasoning: a robot can be made easier to understand and control if it is programmed in an anthropomorphic manner.
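To make the kind of anthropomorphic constructs Veres describes concrete, here is a minimal, illustrative sketch in Python of an agent that holds beliefs, applies behaviour rules, and adopts intentions. It is not taken from the interview or from any particular agent package; names such as `Belief`, `Rule`, and `deliberate` are invented for illustration.

```python
# A minimal, illustrative sketch of BDI-style constructs: beliefs about the
# world, behaviour rules, and intentions derived from them. All names here
# are hypothetical, not drawn from any real agent framework.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass(frozen=True)
class Belief:
    statement: str   # e.g. "human_nearby" or "door(open)"
    time: float      # when the belief was formed

@dataclass
class Rule:
    # A behaviour rule: if the condition holds over current beliefs,
    # the agent adopts the named intention.
    condition: Callable[[List[Belief]], bool]
    intention: str

@dataclass
class Agent:
    beliefs: List[Belief] = field(default_factory=list)
    rules: List[Rule] = field(default_factory=list)
    intentions: List[str] = field(default_factory=list)

    def perceive(self, belief: Belief) -> None:
        self.beliefs.append(belief)

    def deliberate(self) -> None:
        # Derive intentions from beliefs via the behaviour rules.
        for rule in self.rules:
            if rule.condition(self.beliefs) and rule.intention not in self.intentions:
                self.intentions.append(rule.intention)

# Example: an agent that intends to greet when it believes a human is nearby.
agent = Agent(rules=[Rule(
    condition=lambda bs: any(b.statement == "human_nearby" for b in bs),
    intention="greet_human")])
agent.perceive(Belief("human_nearby", time=0.0))
agent.deliberate()
print(agent.intentions)  # ['greet_human']
```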
Luke: At one point in that paper you write that “Creating a new software architecture with significant benefits for autonomous robot control is a difficult problem when there are excellent software packages around…” What kinds of software architectures for autonomous robot control are out there already, and how generally applicable are they?
Sandor: By excellent packages I meant robot operating system foundations such as ROS for Linux and the CCR for Windows, on which one can build robot programs. Both of these have extensive libraries that enable the programming of robot skills. For programming autonomous behaviour we have agent-oriented programming, defined in exact terms by Yoav Shoham in 1993, and by now there are whole books devoted just to reviewing the many approaches to agent programming. There is also CLARAty by JPL, which is based on continuous re-planning rather than being agent-oriented. To summarise, GOFAI has been transformed over the past three decades into various logic-based, subsumption, multi-layered, and belief-desire-intention robot programming approaches, so there is a long and interesting history of great effort in software architectures for robots.
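As a small illustration of one of the architecture families Veres lists, here is a sketch of a subsumption-style layered controller in Python, in which higher-priority behaviours suppress lower ones. It is not from the interview or from any named package; the layer names and sensor keys are invented for the example.

```python
# An illustrative sketch of a subsumption-style controller: layers are ordered
# by priority, and the first layer that produces an action suppresses the rest.
# All names (avoid_obstacles, wander, "range_m") are hypothetical.
from typing import Callable, Optional, Sequence

# A layer maps sensor readings to an action, or None if it stays silent.
Layer = Callable[[dict], Optional[str]]

def avoid_obstacles(sensors: dict) -> Optional[str]:
    # Highest-priority layer: back off if something is too close.
    return "reverse" if sensors.get("range_m", float("inf")) < 0.3 else None

def wander(sensors: dict) -> Optional[str]:
    # Lowest-priority layer: default exploratory behaviour.
    return "drive_forward"

def subsumption_step(layers: Sequence[Layer], sensors: dict) -> str:
    # The first (highest-priority) layer with an action wins, subsuming the rest.
    for layer in layers:
        action = layer(sensors)
        if action is not None:
            return action
    return "idle"

print(subsumption_step([avoid_obstacles, wander], {"range_m": 0.2}))  # reverse
print(subsumption_step([avoid_obstacles, wander], {"range_m": 2.0}))  # drive_forward
```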
Luke: Where do you think the future of autonomous agents is headed?
Sandor: I hope that soon not only agent programming languages but also standardised agents for various robot applications will be available. These can then be either programmed further or simply trained further before being deployed in an application. Eventually such agents could provide a high level of integrity, capability, and safety in autonomous robot operations. A next step will be to make these physically capable agents truly social agents, so that they behave in an appropriate manner and cooperate where required.
Luke: How might we get high-assurance autonomous agents for safety-critical applications?
Sandor: Determinism is a fundamental feature of digital computing so far, though we do have random phenomena in some of the parallel computing we run on robots. Determinism of software has the advantage that our robots are also deterministic. We currently make every effort to ensure robots always respond in an appropriate manner, solve problems, and remain predictable despite often complex and disturbing environments. The more intelligent they are, the more complex an environment they can handle. A more complex environment, however, also means that it becomes more difficult to formally verify and test all of their possible deterministic responses. So the trade-off is between the complexity of the agent software and the “determinism” of intelligence, i.e. suitable actions by the robot at all times. Though the agent is deterministic, the sensors it uses may not be reliable, which makes its responses probabilistic to some degree. The challenge we face is hence to build up conceptual abstractions of complex environments that enable reliable decision making by our robotic agents. On the question of whether this is always possible in practice, the jury is still out.
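As a rough illustration of the point about deterministic responses and verification, the sketch below abstracts the environment into a finite set of situations and exhaustively checks a deterministic decision rule against a safety requirement. It is not from the interview; the states, policy, and safety property are all invented for the example.

```python
# Illustrative sketch: when the environment is abstracted into finitely many
# situations, every response of a deterministic policy can be checked
# exhaustively against a safety requirement. All names are hypothetical.
from itertools import product

SPEEDS = ("stopped", "slow", "fast")
OBSTACLES = ("none", "far", "near")

def policy(speed: str, obstacle: str) -> str:
    # A deterministic decision rule over the abstracted situation.
    if obstacle == "near":
        return "brake"
    if obstacle == "far" and speed == "fast":
        return "slow_down"
    return "continue"

def is_safe(speed: str, obstacle: str, action: str) -> bool:
    # Safety requirement: never simply continue when an obstacle is near.
    return not (obstacle == "near" and action == "continue")

# Exhaustive check over every abstract state the agent can face.
assert all(is_safe(s, o, policy(s, o)) for s, o in product(SPEEDS, OBSTACLES))
print("policy verified over all", len(SPEEDS) * len(OBSTACLES), "abstract states")
```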
Luke: Will research progress on autonomy capabilities outpace progress on safety research?
Sandor: This is a very good question, and I believe this is likely to happen. High levels of capability will, however, reduce the probability of an inappropriate response by a robot, as long as they are accompanied by a formally verifiable decision-making process in the agent. For agent development, the most we can do is to make the agent “perfect”, meaning that it should never intentionally take the wrong action and, in case of failing hardware, it should always take the most likely positive action. This is easier said than done, as the environment can create conflicting requirements for a robot’s response, and in such cases it needs to behave as a moral agent. Moral agents can use models of the broader context of a situation and apply principles over a wide range of knowledge. Moral agents of the future are likely to need the knowledge of an educated adult.
Luke: Thanks, Sandor!