Roland Siegwart (born in 1959) is Professor of Autonomous Systems and Vice President for Research and Corporate Relations at ETH Zurich. After studying mechanics and mechatronics at ETH, he helped start up a spin-off company, spent ten years as a professor for autonomous microsystems at EPFL Lausanne, and held visiting positions at Stanford University and NASA Ames.
His research interests lie in the creation and control of intelligent robots operating in complex and highly dynamic environments. Prominent examples are personal and service robots, inspection devices, autonomous micro-aircraft, and walking robots. He has coordinated several European research projects, co-founded half a dozen spin-off companies, and serves as a board member of various high-tech companies.
Roland Siegwart is a member of the Swiss Academy of Engineering Sciences, an IEEE Fellow, and an officer of the International Foundation of Robotics Research (IFRR). He serves on the editorial boards of multiple robotics journals and was general chair of several robotics conferences, including IROS 2002, AIM 2007, FSR 2007, and ISRR 2009.
Luke Muehlhauser: In 2004 you co-authored Introduction to Autonomous Mobile Robots, which offers tutorials on many of the basic tasks of autonomous mobile robots: locomotion, kinematics, perception, localization, navigation, and planning.
In your estimation, what are the most common approaches to “gluing” these functions together? E.g. are most autonomous mobile robots designed using an agent architecture, or some other kind of architecture?
Roland Siegwart: Mobile robots are very complex systems that have to operate in real-world environments and make decisions based on uncertain and only partially available information. In order to do so, the robot’s locomotion, perception and navigation systems have to be well adapted to the environment and the application setting. So robotics is above all a systems engineering task, requiring broad knowledge and creativity. A badly chosen sensor setup cannot be compensated for by the control algorithms. In my view, the only proven concepts for autonomous decision making with mobile robots are Gaussian Processes and Bayes filters. They make it possible to deal with uncertain and partial information in a consistent way, and they enable learning. Gaussian Processes and Bayes filters can model a large variety of estimation and decision processes and can be implemented in different forms, e.g. as the well-known Kalman Filter estimator.
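The Kalman Filter mentioned above is the Gaussian special case of a Bayes filter, and its predict/update cycle fits in a few lines. Here is a minimal illustrative 1-D sketch; the noise variances, motion command, and measurement below are invented for the example, not taken from any real robot:

```python
# Minimal 1-D Kalman filter: a Gaussian instance of the Bayes filter.
# All numeric values here are illustrative only.

def predict(mean, var, u, motion_var):
    """Prediction step: propagate the belief through the motion model."""
    return mean + u, var + motion_var

def update(mean, var, z, sensor_var):
    """Correction step: fuse a noisy measurement z into the belief."""
    k = var / (var + sensor_var)               # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

# The robot believes it is at x = 0 m, with high uncertainty.
mean, var = 0.0, 10.0
mean, var = predict(mean, var, u=1.0, motion_var=0.5)  # commanded 1 m move
mean, var = update(mean, var, z=1.2, sensor_var=1.0)   # noisy range reading
print(round(mean, 3), round(var, 3))  # prints: 1.183 0.913
```

Note how the update step handles uncertain, partial information consistently: the posterior variance shrinks after every measurement, which is exactly the property that makes these filters the workhorse of robot state estimation.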
Most mobile robots use some sort of agent architecture. However, this is not a key issue in mobile robots, but rather an implementation issue for systems that run multiple tasks in parallel. The main perception, navigation and control algorithms have to adapt to unknown situations in a somewhat predictable and consistent manner. Therefore the algorithms and navigation concepts should also allow the robotics engineer to learn from experiments. This is only possible if navigation, control and decision making are implemented not in a black-box manner, but in a model-based approach that takes best advantage of prior knowledge and system models.
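The "multiple tasks in parallel" point can be made concrete with a toy sketch (my illustration, not from the interview): two independent agent tasks, a localization loop and a bumper-triggered safety stop, each running in its own thread. The pose estimator here is a trivial stand-in that just integrates fake odometry.

```python
# Illustrative agent-architecture sketch: two independent parallel tasks.
import threading
import queue
import time

stop_event = threading.Event()     # shared emergency-stop flag
bumper_hits = queue.Queue()        # channel for bumper signals

def localization_task(poses):
    """Stand-in pose estimator: integrates fake odometry until stopped."""
    x = 0.0
    while not stop_event.is_set():
        x += 0.1
        poses.append(x)
        time.sleep(0.01)

def safety_task():
    """Reacts to bumper signals independently of the localization loop."""
    while not stop_event.is_set():
        try:
            bumper_hits.get(timeout=0.01)
            stop_event.set()       # emergency stop for all tasks
        except queue.Empty:
            pass

poses = []
threads = [threading.Thread(target=localization_task, args=(poses,)),
           threading.Thread(target=safety_task)]
for t in threads:
    t.start()
time.sleep(0.05)
bumper_hits.put("hit")             # simulate a collision
for t in threads:
    t.join()
print("stopped after", len(poses), "pose updates")
```

The design point is that the safety task can halt the robot without the localization code knowing anything about bumpers, which is what makes the architecture an implementation convenience rather than the intellectual core of the system.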
Luke: So are you saying that the glue which holds together the perception, navigation, and control algorithms is typically an agent architecture, and this is largely because you need to integrate those functions in a model-based manner which can reveal to the engineer what’s going wrong (in early experiments) and how to improve it? Or are you saying something else?
Roland: Your understanding is only partially correct. Yes, most robot systems make use of some sort of agent architecture, because it is the most obvious way to implement independent parallel tasks, like for example robot localization and a safety stop using the bumper signals. However, I don’t see agent architectures as a major issue in robotics or as the main glue. The glue for designing and implementing autonomous robots lies in the robotics engineer’s fundamental understanding of all the key elements and their interplay. Furthermore, Gaussian Processes and Bayes filters are today the most promising and proven approach for autonomous navigation, especially Simultaneous Localization and Mapping.
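For navigation problems with discrete hypotheses, the Bayes filter can also be run directly on a histogram over map cells. The sketch below (my own toy example, with an invented corridor map and sensor accuracies) localizes a robot in a 1-D cyclic corridor by alternating measurement updates and motion updates:

```python
# Discrete Bayes filter (histogram filter) localizing a robot in a
# 1-D cyclic corridor. Map and sensor probabilities are illustrative.

world = ['door', 'wall', 'door', 'wall', 'wall']   # known map
belief = [0.2] * 5                                  # uniform prior

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Bayes update: reweight cells that match the measurement."""
    new = [b * (p_hit if world[i] == measurement else p_miss)
           for i, b in enumerate(belief)]
    s = sum(new)
    return [b / s for b in new]                     # normalize

def move(belief, step):
    """Motion update: exact cyclic shift of the belief by `step` cells."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

belief = sense(belief, 'door')    # robot sees a door
belief = move(belief, 1)          # moves one cell to the right
belief = sense(belief, 'wall')    # now sees a wall
print([round(b, 3) for b in belief])
```

After the door-then-wall sequence, the probability mass concentrates on the cells consistent with both observations (here cells 1 and 3, since the map contains two door-wall transitions); more measurements would disambiguate further. Replacing the known map with one that is estimated simultaneously is what turns this localization filter into SLAM.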
Luke: As robotic systems are made increasingly general and capable, do you think a shift in techniques will be required? E.g. 15 years from now do you expect Gaussian Processes and Bayes filters to be even more dominant in robotics than they are today, or do you expect rational agent architectures to ascend, or do you expect hybrid systems control to take over, or what? (Wild speculation is allowed; I know you’re not a crystal ball!)
Roland: I consider Gaussian Processes and Bayes filters the most powerful tools for creating rational agents. They make it possible to learn correlations and models, and to reason about situations and future goals. These model-based approaches will gain importance in contrast to behavior-based approaches. However, there will probably never be a single unifying approach for creating intelligent agents.
Robotics is the art of combining sensing, actuation and intelligent control in the most creative and optimal way.
Luke: Why do you expect model-based approaches to gain importance relative to behavior-based approaches?
Roland: In order to make “wise” decisions and plan actions, a robot has to be able to anticipate the reactions its decisions and actions might provoke. This can only be realized with models, which form the basis for predictions. Furthermore, unsupervised learning also requires models that enable the robot system to learn from experience. Models enable the robot to generalize experiences, which is not really possible with behavior-based approaches.
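The anticipation idea can be sketched in a few lines (a toy illustration of mine, not Siegwart's method): a model-based agent runs each candidate action through a forward model and commits only to the one whose predicted outcome best serves the goal, whereas a purely behavior-based agent would map the current sensor reading directly to an action with no prediction step. The 1-D motion model and goal below are invented for the example.

```python
# Model-based action selection: predict outcomes before acting.
# Forward model, goal, and candidate actions are illustrative only.

def forward_model(state, action):
    """Predict the next state (here: 1-D position) for a candidate action."""
    return state + action

def choose_action(state, goal, candidate_actions):
    """Pick the action whose predicted outcome lies closest to the goal."""
    return min(candidate_actions,
               key=lambda a: abs(goal - forward_model(state, a)))

best = choose_action(state=0.0, goal=2.0,
                     candidate_actions=[-1.0, 0.5, 1.5, 3.0])
print(best)  # prints: 1.5
```

Because the behavior is derived from the model rather than hand-coded per situation, the same agent generalizes to any new goal or state without new rules, which is the generalization advantage the answer above points to.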
Luke: From your perspective, what has been some of the most interesting work in model-based approaches to autonomous robots in the past 5 years?
Roland: I think the most prominent model-based approach in robotics is within SLAM (Simultaneous Localization and Mapping), which can be considered pretty much solved.
Thanks to the consistent application of Gaussian Processes and Bayes filters, and to appropriate error modelling, SLAM is today feasible with different sensors (laser, vision) and on both wheeled and flying platforms.
Large-scale maps with considerable dynamics, changes in lighting conditions, and loop closures have been demonstrated by groups from Oxford, the University of Sydney, MIT, ETH, and many more.
Another robotics field where much progress has been achieved with model-based approaches is imitation learning of complex manipulation tasks. By combining physical models of human arms and robot manipulators with probabilistic processes, learning of various manipulation tasks has been demonstrated by groups at USC, DLR, KIT, EPFL, and many other places.
Luke: Thanks, Roland!