Pervasive Artificial Intelligence Research (PAIR) Labs
Safe Explainable AI via Behavior Decomposition
Principal Investigator: Prof. Jacky Baltes
Summary
The goal of this project is to develop novel algorithms that transform learned neural networks and deep learning architectures into representations that are amenable to analysis and verification. Artificial neural networks, and especially deep learning approaches, are currently popular because of their outstanding performance on a wide variety of tasks. One important drawback of artificial neural networks is that they act as a black box: in many cases it is impossible to extract the learned knowledge from the network. A user can therefore never be sure whether the network has learned the correct function. This may lead to poor and incorrect performance of the network, or even to biases against certain groups of users (e.g., systematically misclassifying images of people from particular ethnic groups). Since more and more AI algorithms directly and substantially impact people’s lives, there has been a recent push towards explainable AI, that is, AI algorithms whose performance can be interpreted, analyzed, and understood by humans.

We propose behavior tree programs, an extension of Brooks’ subsumption architecture, as an intermediate representation suitable for modeling the important aspects of perception, motion planning, and goal reasoning of several important classes of robot systems. We investigate and evaluate our approach in three domains: (a) self-driving cars, (b) nuclear power plant operation robots, and (c) service robots. These robot systems pose unique and important challenges for AI. In the self-driving car application, we investigate methods for converting and visualizing the mapping from images (e.g., a color image) to perceptions (e.g., a traffic light that is currently green is 30 m in front of the car); the goal is to evaluate the robustness of the perception against other images. In the nuclear power plant operation robot domain, we investigate the motion plans generated by a robot through reinforcement learning for decommissioning a generator set (e.g., evaluating the spatial distribution of radiation to plan the decommissioning strategy, or separating waste by radiation level). In the service robot (AGV) domain, we transpile goal-directed behavior into plans, i.e., action sequences that achieve a manufacturing task such as carrying a product box to an assigned location.

Initially, we use a white-box approach, that is, we use knowledge about the internal structure of the system in our conversion; for example, the output of the deep learning network or the reinforcement learner is used directly. In the last stage, we will treat the system as a black box and infer an approximate behavior tree program for a robot system without knowledge of its internal structure.
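To make the intermediate representation concrete, the following minimal sketch shows how symbolic perceptions produced by a perception network could drive a small behavior tree for the traffic-light example above. The node types, names (e.g., light_green_ahead), and the hand-written state are illustrative assumptions, not the project’s actual implementation.

```python
# Minimal behavior-tree sketch (illustrative only): symbolic perceptions such as
# "traffic light is green and 30 m ahead" drive simple condition/action nodes.

from dataclasses import dataclass
from typing import Callable, List

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

@dataclass
class Condition:
    name: str
    test: Callable[[dict], bool]        # reads the symbolic world state
    def tick(self, state: dict) -> str:
        return SUCCESS if self.test(state) else FAILURE

@dataclass
class Action:
    name: str
    run: Callable[[dict], None]         # writes a command into the state
    def tick(self, state: dict) -> str:
        self.run(state)
        return SUCCESS

@dataclass
class Sequence:                         # succeeds only if all children succeed
    children: List
    def tick(self, state: dict) -> str:
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

@dataclass
class Selector:                         # tries children in priority order
    children: List
    def tick(self, state: dict) -> str:
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

# Hypothetical tree for the traffic-light example in the summary.
tree = Selector([
    Sequence([
        Condition("light_green_ahead",
                  lambda s: s.get("light") == "green" and s.get("distance_m", 1e9) < 50.0),
        Action("keep_driving", lambda s: s.update(command="drive")),
    ]),
    Action("brake", lambda s: s.update(command="brake")),
])

# The symbolic state would come from the perception network; here it is hand-written.
state = {"light": "green", "distance_m": 30.0}
tree.tick(state)
print(state["command"])   # -> "drive"
```

Because every decision is an explicit condition or action node, the control flow can be inspected, tested, and verified directly, which is not possible with the weights of the original network.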
Keywords
Explainable AI, behavior tree decomposition, nuclear power plant operation robot, autonomous guided vehicle, self-driving cars.
Innovations
- We propose behavior tree programs, an extension of Brooks’ subsumption architecture, as an intermediate representation suitable for modeling the important aspects of perception, motion planning, and goal reasoning of several important classes of robot systems (a small extraction sketch follows this list).
- We investigate and evaluate our approach in three domains: (a) self-driving cars, (b) nuclear power plant operation robots, and (c) service robots.
- The first application is the new and exciting area of self-driving cars, that is, cars that can drive fully autonomously and safely on highways and through city traffic.
- The second application that we use to develop and test our approach is operation robots in industrial plants.
- The last application that we employ in our research is service robots, that is, robots that cooperate with humans and other robots on common tasks found in the home or at work.
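The black-box stage described in the summary could, for instance, be approached by distilling the opaque controller into an interpretable surrogate. The sketch below is only an illustration under stated assumptions: black_box_policy is a hypothetical stand-in for a learned controller, and the decision-tree distillation shown here is one common surrogate technique, not necessarily the algorithm the project will adopt.

```python
# Illustrative black-box extraction sketch: query an opaque policy on sampled states
# and distil it into a shallow decision tree whose branches can be read off as
# behavior-tree conditions and actions.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def black_box_policy(state: np.ndarray) -> int:
    """Hypothetical stand-in for a learned controller we cannot inspect.
    state = [distance_to_light_m, light_is_green]; returns 0 = brake, 1 = drive."""
    distance, green = state
    return int(green > 0.5 or distance > 60.0)

# 1. Sample states and record the black box's decisions.
rng = np.random.default_rng(0)
states = np.column_stack([rng.uniform(0, 100, 5000),   # distance to light (m)
                          rng.integers(0, 2, 5000)])   # light green? (0/1)
actions = np.array([black_box_policy(s) for s in states])

# 2. Fit a small, human-readable surrogate.
surrogate = DecisionTreeClassifier(max_depth=3).fit(states, actions)

# 3. Inspect the rules; each path is a candidate condition/action pair for a behavior tree.
print(export_text(surrogate, feature_names=["distance_m", "light_green"]))
print("agreement with black box:", surrogate.score(states, actions))
```

Each branch of the printed tree can then be translated into a behavior-tree condition/action pair and checked against safety requirements.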
Benefits
- In our project, we focus on a programming language that allows us to easily express the control-flow behavior of robotic applications.
- The programming language should be abstract enough to allow us to reason about the beliefs and intentions of the robot, since this is important for explainable AI (see the sketch at the end of this list).
- Through the development of an integrated and applied robotics project, the students acquire knowledge in a practical and contextualized way. This systemic approach not only helps consolidate knowledge but also, and more importantly, prepares the students by developing their social and interdisciplinary skills.
- Problem solving requires research, creativity, logical reasoning, and action planning. Group work helps the students develop their capacity to communicate ideas, reason, and negotiate.
- Leadership skills are also developed in this way. All of these social skills are as important as the technical ones, and perhaps even more important in the long term: given the fast pace of technological advance, tools, methods, and technologies are likely to change, be updated, or be replaced within a short period of time. Social skills like the ones mentioned above help prepare the students for these changes.
- In two of the sample applications that we propose, we address the themes of self-driving cars and nuclear power plant robots, both of which will play a major role in society in the future.
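As a concrete illustration of the abstraction level mentioned above (reasoning about the robot’s beliefs and intentions), the following minimal sketch uses hypothetical names such as AGVAgent and is not the project’s implementation. It shows how making beliefs and intentions explicit yields a human-readable trace of why each action was taken.

```python
# Minimal sketch (hypothetical names): explicit beliefs and intentions make every
# action traceable back to the beliefs that triggered it.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AGVAgent:
    beliefs: dict                                  # e.g. {"carrying_box": True, "at": "A"}
    intention: str                                 # e.g. "deliver box to station B"
    trace: List[Tuple[str, str]] = field(default_factory=list)

    def step(self) -> str:
        if not self.beliefs.get("carrying_box"):
            action = "pick_up_box"
        elif self.beliefs.get("at") != "B":
            action = "drive_to_B"
        else:
            action = "drop_box"
        # Record which beliefs justified the action: this is the explanation.
        self.trace.append((f"beliefs={self.beliefs}, intention={self.intention}", action))
        return action

agent = AGVAgent(beliefs={"carrying_box": True, "at": "A"},
                 intention="deliver box to station B")
print(agent.step())        # -> "drive_to_B"
for why, what in agent.trace:
    print(f"{what} because {why}")
```

Such traces are exactly the kind of explanation that a behavior tree program can expose to a human operator.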