Machine learning allows automated systems to identify structures and physical laws from measured data, which is particularly useful in areas where an analytic derivation of a model is too tedious or not possible. Research in reinforcement learning has led to impressive results and superhuman performance in well-structured tasks and games. However, to this day, data-driven models are rarely employed in the control of safety-critical systems, because the success of a controller based on such models cannot be guaranteed. Therefore, the research presented in this talk analyzes the closed-loop behavior of learning control laws by means of rigorous proofs. More specifically, we propose a control law based on Gaussian process (GP) models, which actively avoids uncertain regions of the state space and favors trajectories along the training data, where the system is well known. We show that this behavior is optimal in the sense that it maximizes the probability of asymptotic stability. Additionally, we consider an event-triggered online learning control law, which safely explores an initially unknown system: it takes new training data only when the uncertainty of the model becomes too large. As the control law requires only a locally precise model, this novel learning strategy achieves high data efficiency and provides safety guarantees.
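The event-triggered idea above can be sketched in a few lines: maintain a GP model of the unknown system and query new training data only when the GP's posterior variance exceeds a threshold. The following is a minimal illustrative sketch, not the method from the talk; the kernel, threshold, and toy system `f` are assumptions chosen for demonstration.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential (RBF) kernel matrix between point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

class GPModel:
    """Minimal GP regressor exposing the posterior variance used as a trigger."""
    def __init__(self, noise_var=1e-4):
        self.X = np.empty((0, 1))
        self.y = np.empty(0)
        self.noise_var = noise_var

    def add_data(self, x, y):
        self.X = np.vstack([self.X, x.reshape(1, -1)])
        self.y = np.append(self.y, y)

    def predict(self, x):
        x = x.reshape(1, -1)
        if len(self.y) == 0:
            return 0.0, 1.0  # prior mean and variance
        K = rbf_kernel(self.X, self.X) + self.noise_var * np.eye(len(self.y))
        k = rbf_kernel(self.X, x)
        mean = float(k.T @ np.linalg.solve(K, self.y))
        var = float(rbf_kernel(x, x) - k.T @ np.linalg.solve(K, k))
        return mean, max(var, 0.0)

# Event-triggered learning loop: query the (hypothetical) unknown system f
# only at states where the model's predictive uncertainty is too large.
f = lambda x: float(np.sin(x[0]))  # stand-in for the unknown dynamics
gp = GPModel()
threshold = 0.1                    # trigger on posterior variance (assumed value)
queries = 0
for s in np.linspace(0.0, 2 * np.pi, 50):
    x = np.array([s])
    _, var = gp.predict(x)
    if var > threshold:            # event: model too uncertain here
        gp.add_data(x, f(x))       # take a new training sample
        queries += 1
print(queries)
```

Because the model only needs to be precise locally along the visited states, far fewer than the 50 visited states trigger a query, which is the source of the data efficiency mentioned above.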
Biography: Jonas Umlauft received the B.Sc. and M.Sc. degrees in electrical engineering and information technology from the Technical University of Munich, Germany, in 2013 and 2015, respectively. His Master's thesis was completed at the Computational and Biological Learning Group at the University of Cambridge, UK. Since May 2015, he has been a PhD student at the Chair of Information-oriented Control, Department of Electrical and Computer Engineering, Technical University of Munich, Germany. His current research interests include the safety of data-driven control loops and active exploration using self-learning autonomous systems.