Intelligent Systems
Note: This research group has relocated.


2022


The Wheelbot: A Jumping Reaction Wheel Unicycle

Geist, A. R., Fiene, J., Tashiro, N., Jia, Z., Trimpe, S.

IEEE Robotics and Automation Letters, 7(4):9683-9690, IEEE, 2022 (article)

Abstract
Combining off-the-shelf components with 3D printing, the Wheelbot is a symmetric reaction wheel unicycle that can jump onto its wheels from any initial position. With non-holonomic and under-actuated dynamics, as well as two coupled unstable degrees of freedom, the Wheelbot provides a challenging platform for nonlinear and data-driven control research. This letter presents the Wheelbot's mechanical and electrical design, its estimation and control algorithms, as well as experiments demonstrating both self-erection and disturbance rejection while balancing.

link (url) DOI [BibTex]


2021


Learning-enhanced robust controller synthesis with rigorous statistical and control-theoretic guarantees

Fiedler, C., Scherer, C. W., Trimpe, S.

In 60th IEEE Conference on Decision and Control (CDC), IEEE, December 2021 (inproceedings), accepted

Abstract
The combination of machine learning with control offers many opportunities, in particular for robust control. However, due to strong safety and reliability requirements in many real-world applications, providing rigorous statistical and control-theoretic guarantees is of utmost importance, yet difficult to achieve for learning-based control schemes. We present a general framework for learning-enhanced robust control that allows for systematic integration of prior engineering knowledge, is fully compatible with modern robust control and still comes with rigorous and practically meaningful guarantees. Building on the established Linear Fractional Representation and Integral Quadratic Constraints framework, we integrate Gaussian Process Regression as a learning component and state-of-the-art robust controller synthesis. In a concrete robust control example, our approach is demonstrated to yield improved performance with more data, while guarantees are maintained throughout.

link (url) [BibTex]



Local policy search with Bayesian optimization

Müller, S., von Rohr, A., Trimpe, S.

In Advances in Neural Information Processing Systems 34, 25, pages: 20708-20720, (Editors: Ranzato, M. and Beygelzimer, A. and Dauphin, Y. and Liang, P. S. and Wortman Vaughan, J.), Curran Associates, Inc., Red Hook, NY, 35th Conference on Neural Information Processing Systems (NeurIPS 2021), December 2021 (inproceedings)

Abstract
Reinforcement learning (RL) aims to find an optimal policy by interaction with an environment. Consequently, learning complex behavior requires a vast number of samples, which can be prohibitive in practice. Nevertheless, instead of systematically reasoning and actively choosing informative samples, policy gradients for local search are often obtained from random perturbations. These random samples yield high variance estimates and hence are sub-optimal in terms of sample complexity. Actively selecting informative samples is at the core of Bayesian optimization, which constructs a probabilistic surrogate of the objective from past samples to reason about informative subsequent ones. In this paper, we propose to join both worlds. We develop an algorithm utilizing a probabilistic model of the objective function and its gradient. Based on the model, the algorithm decides where to query a noisy zeroth-order oracle to improve the gradient estimates. The resulting algorithm is a novel type of policy search method, which we compare to existing black-box algorithms. The comparison reveals improved sample complexity and reduced variance in extensive empirical evaluations on synthetic objectives. Further, we highlight the benefits of active sampling on popular RL benchmarks.

arXiv GitHub link (url) [BibTex]



Using Physics Knowledge for Learning Rigid-Body Forward Dynamics with Gaussian Process Force Priors

Rath, L., Geist, A. R., Trimpe, S.

In Proceedings of the 5th Conference on Robot Learning, 164, pages: 101-111, Proceedings of Machine Learning Research, (Editors: Faust, Aleksandra and Hsu, David and Neumann, Gerhard), PMLR, 5th Conference on Robot Learning (CoRL 2021), November 2021 (inproceedings)

link (url) [BibTex]



GoSafe: Globally Optimal Safe Robot Learning

Baumann, D., Marco, A., Turchetta, M., Trimpe, S.

In 2021 IEEE International Conference on Robotics and Automation (ICRA 2021), pages: 4452-4458, IEEE, Piscataway, NJ, IEEE International Conference on Robotics and Automation (ICRA 2021), October 2021 (inproceedings)

DOI [BibTex]



Probabilistic robust linear quadratic regulators with Gaussian processes

von Rohr, A., Neumann-Brosig, M., Trimpe, S.

Proceedings of the 3rd Conference on Learning for Dynamics and Control, pages: 324-335, Proceedings of Machine Learning Research (PMLR), Vol. 144, (Editors: Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.), PMLR, Brookline, MA 02446, 3rd Annual Conference on Learning for Dynamics and Control (L4DC), June 2021 (conference)

Abstract
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design. While learning-based control has the potential to yield superior performance in demanding applications, robustness to uncertainty remains an important challenge. Since Bayesian methods quantify uncertainty of the learning results, it is natural to incorporate these uncertainties in a robust design. In contrast to most state-of-the-art approaches that consider worst-case estimates, we leverage the learning methods’ posterior distribution in the controller synthesis. The result is a more informed and thus efficient trade-off between performance and robustness. We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin. The formulation is based on a recently proposed algorithm for linear quadratic control synthesis, which we extend by giving probabilistic robustness guarantees in the form of credibility bounds for the system’s stability. Comparisons to existing methods based on worst-case and certainty-equivalence designs reveal superior performance and robustness properties of the proposed method.

link (url) DOI [BibTex]



On exploration requirements for learning safety constraints

Massiani, P., Heim, S., Trimpe, S.

In Proceedings of the 3rd Conference on Learning for Dynamics and Control, pages: 905-916, Proceedings of Machine Learning Research (PMLR), Vol. 144, (Editors: Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie), PMLR, 3rd Annual Conference on Learning for Dynamics and Control (L4DC), June 2021 (inproceedings)

Abstract
Enforcing safety for dynamical systems is challenging, since it requires constraint satisfaction along trajectory predictions. Equivalent control constraints can be computed in the form of sets that enforce positive invariance, and can thus guarantee safety in feedback controllers without predictions. However, these constraints are cumbersome to compute from models, and it is not yet well established how to infer constraints from data. In this paper, we shed light on the key objects involved in learning control constraints from data in a model-free setting. In particular, we discuss the family of constraints that enforce safety in the context of a nominal control policy, and expose that these constraints do not need to be accurate everywhere. They only need to correctly exclude a subset of the state-actions that would cause failure, which we call the critical set.

link (url) [BibTex]



Structured learning of rigid-body dynamics: A survey and unified view from a robotics perspective

Geist, A. R., Trimpe, S.

GAMM-Mitteilungen, 44(2):e202100009, Special Issue: Scientific Machine Learning, 2021 (article)

Abstract
Accurate models of mechanical system dynamics are often critical for model-based control and reinforcement learning. Fully data-driven dynamics models promise to ease the process of modeling and analysis, but require considerable amounts of data for training and often do not generalize well to unseen parts of the state space. Combining data-driven modeling with prior analytical knowledge is an attractive alternative as the inclusion of structural knowledge into a regression model improves the model's data efficiency and physical integrity. In this article, we survey supervised regression models that combine rigid-body mechanics with data-driven modeling techniques. We analyze the different latent functions (such as kinetic energy or dissipative forces) and operators (such as differential operators and projection matrices) underlying common descriptions of rigid-body mechanics. Based on this analysis, we provide a unified view on the combination of data-driven regression models, such as neural networks and Gaussian processes, with analytical model priors. Furthermore, we review and discuss key techniques for designing structured models such as automatic differentiation.

DOI [BibTex]



Practical and Rigorous Uncertainty Bounds for Gaussian Process Regression

Fiedler, C., Scherer, C. W., Trimpe, S.

In The Thirty-Fifth AAAI Conference on Artificial Intelligence, the Thirty-Third Conference on Innovative Applications of Artificial Intelligence, the Eleventh Symposium on Educational Advances in Artificial Intelligence, 8, pages: 7439-7447, AAAI Press, Palo Alto, CA, Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021), Thirty-Third Conference on Innovative Applications of Artificial Intelligence (IAAI 2021), Eleventh Symposium on Educational Advances in Artificial Intelligence (EAAI 2021), May 2021 (inproceedings)

Abstract
Gaussian Process regression is a popular nonparametric regression method based on Bayesian principles that provides uncertainty estimates for its predictions. However, these estimates are of a Bayesian nature, whereas for some important applications, like learning-based control with safety guarantees, frequentist uncertainty bounds are required. Although such rigorous bounds are available for Gaussian Processes, they are too conservative to be useful in applications. This often leads practitioners to replace these bounds with heuristics, thus breaking all theoretical guarantees. To address this problem, we introduce new uncertainty bounds that are rigorous, yet practically useful at the same time. In particular, the bounds can be explicitly evaluated and are much less conservative than state-of-the-art results. Furthermore, we show that certain model misspecifications lead to only graceful degradation. We demonstrate these advantages and the usefulness of our results for learning-based control with numerical examples.

link (url) [BibTex]



A little damping goes a long way

Heim, S., Millard, M., Le Mouel, C., Badri-Spröwitz, A.

In Integrative and Comparative Biology, 61(Supplement 1):E367-E367, Oxford University Press, Society for Integrative and Comparative Biology Annual Meeting (SICB Annual Meeting 2021), March 2021 (inproceedings)

link (url) DOI [BibTex]



Robot Learning with Crash Constraints

Marco, A., Baumann, D., Khadiv, M., Hennig, P., Righetti, L., Trimpe, S.

IEEE Robotics and Automation Letters, 6(2):1439-1446, IEEE, February 2021 (article)

Abstract
In the past decade, numerous machine learning algorithms have been shown to successfully learn optimal policies to control real robotic systems. However, it is common to encounter failing behaviors as the learning loop progresses. Specifically, in robot applications where failing is undesired but not catastrophic, many algorithms struggle with leveraging data obtained from failures. This is usually caused by (i) the failed experiment ending prematurely, or (ii) the acquired data being scarce or corrupted. Both complicate the design of proper reward functions to penalize failures. In this paper, we propose a framework that addresses those issues. We consider failing behaviors as those that violate a constraint and address the problem of learning with crash constraints, where no data is obtained upon constraint violation. The no-data case is addressed by a novel GP model (GPCR) for the constraint that combines discrete events (failure/success) with continuous observations (only obtained upon success). We demonstrate the effectiveness of our framework on simulated benchmarks and on a real jumping quadruped, where the constraint threshold is unknown a priori. Experimental data is collected, by means of constrained Bayesian optimization, directly on the real robot. Our results outperform manual tuning and GPCR proves useful for estimating the constraint threshold.

link (url) DOI [BibTex]



Joint State and Dynamics Estimation With High-Gain Observers and Gaussian Process Models

Buisson-Fenet, M., Morgenthaler, V., Trimpe, S., Di Meglio, F.

IEEE Control Systems Letters, 5(5):1627-1632, 2021 (article)

Abstract
With the rising complexity of dynamical systems generating ever more data, learning dynamics models appears as a promising alternative to physics-based modeling. However, the data available from physical platforms may be noisy and not cover all state variables. Hence, it is necessary to jointly perform state and dynamics estimation. In this letter, we propose interconnecting a high-gain observer and a dynamics learning framework, specifically a Gaussian process state-space model. The observer provides state estimates, which serve as the data for training the dynamics model. The updated model, in turn, is used to improve the observer. Joint convergence of the observer and the dynamics model is proved for high enough gain, up to the measurement and process perturbations. Simultaneous dynamics learning and state estimation are demonstrated on simulations of a mass-spring-mass system.

DOI [BibTex]



Wireless Control for Smart Manufacturing: Recent Approaches and Open Challenges

Baumann, D., Mager, F., Wetzker, U., Thiele, L., Zimmerling, M., Trimpe, S.

Proceedings of the IEEE, 109(4):441-467, 2021 (article)

arXiv DOI [BibTex]



Learning Event-triggered Control from Data through Joint Optimization

Funk, N., Baumann, D., Berenz, V., Trimpe, S.

IFAC Journal of Systems and Control, 16, pages: 100144, 2021 (article)

Abstract
We present a framework for model-free learning of event-triggered control strategies. Event-triggered methods aim to achieve high control performance while only closing the feedback loop when needed. This enables resource savings, e.g., network bandwidth if control commands are sent via communication networks, as in networked control systems. Event-triggered controllers consist of a communication policy, determining when to communicate, and a control policy, deciding what to communicate. It is essential to jointly optimize the two policies since individual optimization does not necessarily yield the overall optimal solution. To address this need for joint optimization, we propose a novel algorithm based on hierarchical reinforcement learning. The resulting algorithm is shown to accomplish high-performance control in line with resource savings and scales seamlessly to nonlinear and high-dimensional systems. The method’s applicability to real-world scenarios is demonstrated through experiments on a six degrees of freedom real-time controlled manipulator. Further, we propose an approach towards evaluating the stability of the learned neural network policies.
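The split into a communication policy (when to communicate) and a control policy (what to communicate) can be illustrated with a minimal, hand-designed event trigger. This is an illustrative sketch, not the learned policies of the paper: the loop only transmits the state when it has drifted from the last transmitted value, and the controller acts on that last value. All numbers (system gains, threshold, noise level) are assumptions chosen for the demo.

```python
import numpy as np

def simulate(trigger_threshold, steps=200, a=1.05, b=1.0, k=0.7):
    """Event-triggered control of a scalar system x+ = a*x + b*u.

    The communication policy sends the state only when it deviates from
    the last transmitted value by more than the threshold; the control
    policy always acts on the last transmitted value. All parameters
    are illustrative assumptions, not taken from the paper.
    """
    rng = np.random.default_rng(0)
    x, x_last = 1.0, 1.0   # true state, last communicated state
    comms = 0              # number of times the loop was closed
    peak = 0.0             # largest state magnitude observed
    for _ in range(steps):
        # communication policy: when to close the feedback loop
        if abs(x - x_last) > trigger_threshold:
            x_last = x
            comms += 1
        # control policy: act on the communicated information only
        u = -k * x_last
        x = a * x + b * u + 0.01 * rng.standard_normal()
        peak = max(peak, abs(x))
    return comms, peak

comms, peak = simulate(trigger_threshold=0.1)
```

Even with far fewer transmissions than the 200 time steps, the unstable open loop (a > 1) stays bounded, which is the resource-saving trade-off the abstract describes.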

arXiv link (url) DOI [BibTex]


Event-triggered Learning for Linear Quadratic Control

Schlüter, H., Solowjow, F., Trimpe, S.

IEEE Transactions on Automatic Control, 66(10):4485-4498, 2021 (article)

arXiv DOI [BibTex]



Controller Design via Experimental Exploration With Robustness Guarantees

Holicki, T., Scherer, C. W., Trimpe, S.

IEEE Control Systems Letters, 5(2):641-646, 2021 (article)

DOI [BibTex]


2020


A Learnable Safety Measure

Heim, S., Rohr, A. V., Trimpe, S., Badri-Spröwitz, A.

Proceedings of the Conference on Robot Learning, 100, pages: 627-639, Proceedings of Machine Learning Research, (Editors: Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei), PMLR, Conference on Robot Learning, October 2020 (article)

Arxiv [BibTex]



A little damping goes a long way: a simulation study of how damping influences task-level stability in running

Heim, S., Millard, M., Le Mouel, C., Badri-Spröwitz, A.

Biology Letters, 16(9):20200467, September 2020 (article)

Abstract
It is currently unclear if damping plays a functional role in legged locomotion, and simple models often do not include damping terms. We present a new model with a damping term that is isolated from other parameters: that is, the damping term can be adjusted without retuning other model parameters for nominal motion. We systematically compare how increased damping affects stability in the face of unexpected ground-height perturbations. Unlike most studies, we focus on task-level stability: instead of observing whether trajectories converge towards a nominal limit-cycle, we quantify the ability to avoid falls using a recently developed mathematical measure. This measure allows trajectories to be compared quantitatively instead of only being separated into a binary classification of 'stable' or 'unstable'. Our simulation study shows that increased damping contributes significantly to task-level stability; however, this benefit quickly plateaus after only a small amount of damping. These results suggest that the low intrinsic damping values observed experimentally may have stability benefits and are not simply minimized for energetic reasons. All Python code and data needed to generate our results are available open source.

link (url) DOI Project Page [BibTex]


Event-triggered Learning

Solowjow, F., Trimpe, S.

Automatica, 117, pages: 109009, Elsevier, July 2020 (article)

arXiv PDF DOI [BibTex]



Learning of sub-optimal gait controllers for magnetic walking soft millirobots

Culha, U., Demir, S. O., Trimpe, S., Sitti, M.

In Robotics: Science and Systems XVI, pages: P070, (Editors: Toussaint, Marc and Bicchi, Antonio and Hermans, Tucker), RSS Foundation, Robotics: Science and Systems 2020 (RSS 2020), 2020 (inproceedings)

Abstract
Untethered small-scale soft robots have promising applications in minimally invasive surgery, targeted drug delivery, and bioengineering applications as they can access confined spaces in the human body. However, due to highly nonlinear soft continuum deformation kinematics, inherent stochastic variability during fabrication at the small scale, and lack of accurate models, the conventional control methods cannot be easily applied. Adaptivity of robot control is additionally crucial for medical operations, as operation environments show large variability, and robot materials may degrade or change over time, which would have deteriorating effects on the robot motion and task performance. Therefore, we propose using a probabilistic learning approach for millimeter-scale magnetic walking soft robots using Bayesian optimization (BO) and Gaussian processes (GPs). Our approach provides a data-efficient learning scheme to find controller parameters while optimizing the stride length performance of the walking soft millirobot within a small number of physical experiments. We demonstrate adaptation to fabrication variabilities in three different robots and to walking surfaces with different roughness. We also show an improvement in the learning performance by transferring the learning results of one robot to the others as prior information.

link (url) DOI Project Page [BibTex]



Actively Learning Gaussian Process Dynamics

Buisson-Fenet, M., Solowjow, F., Trimpe, S.

Proceedings of the 2nd Conference on Learning for Dynamics and Control, 120, pages: 5-15, Proceedings of Machine Learning Research (PMLR), (Editors: Bayen, Alexandre M. and Jadbabaie, Ali and Pappas, George and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire and Zeilinger, Melanie), PMLR, 2nd Annual Conference on Learning for Dynamics and Control (L4DC), June 2020 (conference)

Abstract
Despite the availability of ever more data enabled through modern sensor and computer technology, it still remains an open problem to learn dynamical systems in a sample-efficient way. We propose active learning strategies that leverage information-theoretical properties arising naturally during Gaussian process regression, while respecting constraints on the sampling process imposed by the system dynamics. Sample points are selected in regions with high uncertainty, leading to exploratory behavior and data-efficient training of the model. All results are verified in an extensive numerical benchmark.
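The core idea above, querying where the model is most uncertain, can be sketched with a toy Gaussian process in plain NumPy. This is an illustrative sketch, not the paper's implementation: the kernel, its lengthscale, and the one-dimensional candidate grid are all assumptions, and the dynamics-imposed sampling constraints discussed in the abstract are omitted.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel matrix between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def posterior_variance(x_train, x_cand, noise=1e-4, ls=0.5):
    """Posterior variance of a zero-mean GP at the candidate inputs."""
    K_inv = np.linalg.inv(rbf(x_train, x_train, ls)
                          + noise * np.eye(len(x_train)))
    k_star = rbf(x_cand, x_train, ls)
    # prior variance is 1 (kernel at zero distance), minus the reduction
    # from the observed inputs
    return 1.0 - np.einsum("ij,jk,ik->i", k_star, K_inv, k_star)

# Greedy active learning: always query where the model is most uncertain.
candidates = np.linspace(0.0, 1.0, 101)
X = np.array([0.5])                      # initial sample
for _ in range(5):
    var = posterior_variance(X, candidates)
    X = np.append(X, candidates[np.argmax(var)])
```

Because the posterior variance grows with distance from past samples, the selected points spread out over the input domain instead of clustering, which is the exploratory, data-efficient behavior the abstract describes.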

ArXiv link (url) [BibTex]



Learning Constrained Dynamics with Gauss Principle adhering Gaussian Processes

Geist, A. R., Trimpe, S.

In Proceedings of the 2nd Conference on Learning for Dynamics and Control, 120, pages: 225-234, Proceedings of Machine Learning Research (PMLR), (Editors: Bayen, Alexandre M. and Jadbabaie, Ali and Pappas, George and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire and Zeilinger, Melanie), PMLR, 2nd Annual Conference on Learning for Dynamics and Control (L4DC), June 2020 (inproceedings)

Abstract
The identification of the constrained dynamics of mechanical systems is often challenging. Learning methods promise to ease an analytical analysis, but require considerable amounts of data for training. We propose to combine insights from analytical mechanics with Gaussian process regression to improve the model's data efficiency and constraint integrity. The result is a Gaussian process model that incorporates a priori constraint knowledge such that its predictions adhere to Gauss' principle of least constraint. In return, predictions of the system's acceleration naturally respect potentially non-ideal (non-)holonomic equality constraints. As corollary results, our model enables inferring the acceleration of the unconstrained system from data of the constrained system and enables knowledge transfer between differing constraint configurations.

Proceedings of Machine Learning Research link (url) [BibTex]



Data-efficient Autotuning with Bayesian Optimization: An Industrial Control Study

Neumann-Brosig, M., Marco, A., Schwarzmann, D., Trimpe, S.

IEEE Transactions on Control Systems Technology, 28(3):730-740, May 2020 (article)

Abstract
Bayesian optimization is proposed for automatic learning of optimal controller parameters from experimental data. A probabilistic description (a Gaussian process) is used to model the unknown function from controller parameters to a user-defined cost. The probabilistic model is updated with data, which is obtained by testing a set of parameters on the physical system and evaluating the cost. In order to learn fast, the Bayesian optimization algorithm selects the next parameters to evaluate in a systematic way, for example, by maximizing information gain about the optimum. The algorithm thus iteratively finds the globally optimal parameters with only few experiments. Taking throttle valve control as a representative industrial control example, the proposed auto-tuning method is shown to outperform manual calibration: it consistently achieves better performance with a low number of experiments. The proposed auto-tuning framework is flexible and can handle different control structures and objectives.
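The iterative loop described above (fit a GP surrogate to past evaluations, select the next parameters systematically, run the experiment, update) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's method: a synthetic one-dimensional cost stands in for experiments on the physical throttle valve, and a simple lower-confidence-bound acquisition replaces the information-gain criterion mentioned in the abstract.

```python
import numpy as np

def gp_predict(X, y, Xs, ls=0.2, noise=1e-3):
    """Zero-mean GP regression: posterior mean and variance at Xs."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)
    K_inv = np.linalg.inv(k(X, X) + noise * np.eye(len(X)))
    ks = k(Xs, X)
    mean = ks @ K_inv @ y
    var = 1.0 - np.einsum("ij,jk,ik->i", ks, K_inv, ks)
    return mean, var

def cost(gain):
    """Synthetic stand-in for one experiment on the plant (assumed)."""
    return (gain - 0.6) ** 2 + 0.01 * np.sin(20 * gain)

grid = np.linspace(0.0, 1.0, 201)        # candidate controller gains
X = np.array([0.1, 0.9])                 # two initial experiments
y = cost(X)
for _ in range(10):
    mean, var = gp_predict(X, y - y.mean(), grid)
    # optimistic (lower-confidence-bound) estimate of the cost
    lcb = mean + y.mean() - 2.0 * np.sqrt(np.maximum(var, 0.0))
    x_next = grid[np.argmin(lcb)]        # next parameters to test
    X = np.append(X, x_next)
    y = np.append(y, cost(x_next))
best = X[np.argmin(y)]
```

Each iteration trades off low predicted cost against high uncertainty, so the loop homes in on good parameters with only a handful of (here simulated) experiments, mirroring the data-efficient auto-tuning idea.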

arXiv (PDF) DOI Project Page [BibTex]



Robust Model-free Reinforcement Learning with Multi-objective Bayesian Optimization

Turchetta, M., Krause, A., Trimpe, S.

In 2020 IEEE International Conference on Robotics and Automation (ICRA 2020), pages: 10702-10708, IEEE, Piscataway, NJ, IEEE International Conference on Robotics and Automation (ICRA 2020), May 2020 (inproceedings)

DOI [BibTex]



Sliding Mode Control with Gaussian Process Regression for Underwater Robots

Lima, G. S., Trimpe, S., Bessa, W. M.

Journal of Intelligent & Robotic Systems, 99(3-4):487-498, January 2020 (article)

DOI [BibTex]



Hierarchical Event-triggered Learning for Cyclically Excited Systems with Application to Wireless Sensor Networks

Beuchert, J., Solowjow, F., Raisch, J., Trimpe, S., Seel, T.

IEEE Control Systems Letters, 4(1):103-108, January 2020 (article)

arXiv PDF DOI [BibTex]



Control-guided Communication: Efficient Resource Arbitration and Allocation in Multi-hop Wireless Control Systems

Baumann, D., Mager, F., Zimmerling, M., Trimpe, S.

IEEE Control Systems Letters, 4(1):127-132, January 2020 (article)

arXiv PDF DOI [BibTex]


Excursion Search for Constrained Bayesian Optimization under a Limited Budget of Failures

Marco, A., Rohr, A. V., Baumann, D., Hernández-Lobato, J. M., Trimpe, S.

2020 (proceedings), in revision

Abstract
When learning to ride a bike, a child falls down a number of times before achieving the first success. As falling down usually has only mild consequences, it can be seen as a tolerable failure in exchange for a faster learning process, as it provides rich information about an undesired behavior. In the context of Bayesian optimization under unknown constraints (BOC), typical strategies for safe learning explore conservatively and avoid failures by all means. On the other side of the spectrum, non-conservative BOC algorithms that allow failing may fail an unbounded number of times before reaching the optimum. In this work, we propose a novel decision maker grounded in control theory that controls the amount of risk we allow in the search as a function of a given budget of failures. Empirical validation shows that our algorithm uses the failures budget more efficiently in a variety of optimization experiments, and generally achieves lower regret, than state-of-the-art methods. In addition, we propose an original algorithm for unconstrained Bayesian optimization inspired by the notion of excursion sets in stochastic processes, upon which the failures-aware algorithm is built.

arXiv code (python) PDF [BibTex]


Online learning with stability guarantees: A memory-based warm starting for real-time MPC

Schwenkel, L., Gharbi, M., Trimpe, S., Ebenbauer, C.

Automatica, 122, pages: 109247, 2020 (article)

DOI [BibTex]


Spatial Scheduling of Informative Meetings for Multi-Agent Persistent Coverage

Haksar, R. N., Trimpe, S., Schwager, M.

IEEE Robotics and Automation Letters, 5(2):3027-3034, 2020 (article)

DOI [BibTex]



Safe and Fast Tracking on a Robot Manipulator: Robust MPC and Neural Network Control

Nubert, J., Koehler, J., Berenz, V., Allgöwer, F., Trimpe, S.

IEEE Robotics and Automation Letters, 5(2):3050-3057, 2020 (article)

Abstract
Fast feedback control and safety guarantees are essential in modern robotics. We present an approach that achieves both by combining novel robust model predictive control (MPC) with function approximation via (deep) neural networks (NNs). The result is a new approach for complex tasks with nonlinear, uncertain, and constrained dynamics as are common in robotics. Specifically, we leverage recent results in MPC research to propose a new robust setpoint tracking MPC algorithm, which achieves reliable and safe tracking of a dynamic setpoint while guaranteeing stability and constraint satisfaction. The presented robust MPC scheme constitutes a one-layer approach that unifies the often separated planning and control layers, by directly computing the control command based on a reference and possibly obstacle positions. As a separate contribution, we show how the computation time of the MPC can be drastically reduced by approximating the MPC law with a NN controller. The NN is trained and validated from offline samples of the MPC, yielding statistical guarantees, and used in lieu thereof at run time. Our experiments on a state-of-the-art robot manipulator are the first to show that both the proposed robust and approximate MPC schemes scale to real-world robotic systems.

arXiv PDF DOI [BibTex]


2019


Controlling Heterogeneous Stochastic Growth Processes on Lattices with Limited Resources

Haksar, R., Solowjow, F., Trimpe, S., Schwager, M.

In Proceedings of the 58th IEEE International Conference on Decision and Control (CDC) , pages: 1315-1322, 58th IEEE International Conference on Decision and Control (CDC), December 2019 (conference)

PDF [BibTex]



Fast Feedback Control over Multi-hop Wireless Networks with Mode Changes and Stability Guarantees

Baumann, D., Mager, F., Jacob, R., Thiele, L., Zimmerling, M., Trimpe, S.

ACM Transactions on Cyber-Physical Systems, 4(2):18, November 2019 (article)

arXiv PDF DOI [BibTex]



Predictive Triggering for Distributed Control of Resource Constrained Multi-agent Systems

Mastrangelo, J. M., Baumann, D., Trimpe, S.

In Proceedings of the 8th IFAC Workshop on Distributed Estimation and Control in Networked Systems, pages: 79-84, 8th IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys), September 2019 (inproceedings)

arXiv PDF DOI [BibTex]



Event-triggered Pulse Control with Model Learning (if Necessary)

Baumann, D., Solowjow, F., Johansson, K. H., Trimpe, S.

In Proceedings of the American Control Conference, pages: 792-797, American Control Conference (ACC), July 2019 (inproceedings)

arXiv PDF [BibTex]



Data-driven inference of passivity properties via Gaussian process optimization

Romer, A., Trimpe, S., Allgöwer, F.

In Proceedings of the European Control Conference, European Control Conference (ECC), June 2019 (inproceedings)

PDF [BibTex]



Trajectory-Based Off-Policy Deep Reinforcement Learning

Doerr, A., Volpp, M., Toussaint, M., Trimpe, S., Daniel, C.

In Proceedings of the International Conference on Machine Learning (ICML), International Conference on Machine Learning (ICML), June 2019 (inproceedings)

Abstract
Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks. However, these methods are also data-inefficient, afflicted with high-variance gradient estimates, and frequently get stuck in local optima. This work addresses these weaknesses by combining recent improvements in the reuse of off-policy data and exploration in parameter space with deterministic behavioral policies. The resulting objective is amenable to standard neural network optimization strategies like stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo. Incorporation of previous rollouts via importance sampling greatly improves data-efficiency, whilst stochastic optimization schemes facilitate the escape from local optima. We evaluate the proposed approach on a series of continuous control benchmark tasks. The results show that the proposed algorithm is able to successfully and reliably learn solutions using fewer system interactions than standard policy gradient methods.
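The off-policy reuse via importance sampling mentioned above can be illustrated with a toy trajectory-level re-weighting. The linear-Gaussian "policy", data, and reward below are illustrative assumptions, not the paper's algorithm (which, notably, explores in parameter space with deterministic policies).

```python
import numpy as np

# Toy sketch of trajectory-level importance sampling for off-policy
# reuse of rollouts; all quantities here are illustrative.
rng = np.random.default_rng(0)

def traj_log_prob(theta, states, actions, sigma=0.1):
    """Log-density (up to a constant that cancels in the ratio) of the
    recorded actions under a Gaussian policy with mean theta*state."""
    return np.sum(-0.5 * ((actions - theta * states) / sigma) ** 2)

# One rollout collected under old (behavior) parameters theta_old.
theta_old = 0.5
states = rng.normal(size=20)
actions = theta_old * states + 0.1 * rng.normal(size=20)
ret = np.sum(-np.abs(actions))        # toy return of the rollout

# Re-weight the stored rollout to estimate the return new parameters
# theta_new would obtain, without a new system interaction.
theta_new = 0.6
log_w = (traj_log_prob(theta_new, states, actions)
         - traj_log_prob(theta_old, states, actions))
weighted_return = np.exp(log_w) * ret
print(weighted_return)
```

Action-space importance weights like this degenerate quickly as policies diverge, which is one motivation for the parameter-space exploration used in the paper.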

arXiv PDF [BibTex]



Resource-aware IoT Control: Saving Communication through Predictive Triggering

Trimpe, S., Baumann, D.

IEEE Internet of Things Journal, 6(3):5013-5028, June 2019 (article)

Abstract
The Internet of Things (IoT) interconnects multiple physical devices in large-scale networks. When the 'things' coordinate decisions and act collectively on shared information, feedback is introduced between them. Multiple feedback loops are thus closed over a shared, general-purpose network. Traditional feedback control is unsuitable for the design of IoT control because it relies on high-rate periodic communication and is ignorant of the shared network resource. Therefore, recent event-based estimation methods are applied herein for resource-aware IoT control, allowing agents to decide online whether communication with other agents is needed or not. While this can reduce network traffic significantly, a severe limitation of typical event-based approaches is the need for instantaneous triggering decisions that leave no time to reallocate freed resources (e.g., communication slots), which hence remain unused. To address this problem, novel predictive and self-triggering protocols are proposed herein. From a unified Bayesian decision framework, two schemes are developed: self-triggers that predict, at the current triggering instant, the next one; and predictive triggers that check, at every time step, whether communication will be needed at a given prediction horizon. The suitability of these triggers for feedback control is demonstrated in hardware experiments on a cart-pole, and scalability is discussed with a multi-vehicle simulation.
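A predictive trigger of the kind described above can be sketched by propagating the open-loop estimation-error covariance to the prediction horizon and requesting a communication slot only if it would grow too large. The dynamics, noise levels, and threshold below are illustrative assumptions, not the paper's Bayesian decision rule.

```python
import numpy as np

# Sketch of a predictive trigger: without measurements, the error
# covariance grows as P <- A P A^T + Q; trigger if its trace at the
# prediction horizon would exceed a tolerance.
def predictive_trigger(A, Q, P0, horizon, threshold):
    P = P0.copy()
    for _ in range(horizon):
        P = A @ P @ A.T + Q   # covariance growth without updates
    return np.trace(P) > threshold

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like dynamics
Q = 0.01 * np.eye(2)                     # process-noise covariance
P0 = 0.05 * np.eye(2)                    # current error covariance

print(predictive_trigger(A, Q, P0, horizon=10, threshold=0.3))   # -> True
print(predictive_trigger(A, Q, P0, horizon=1, threshold=0.3))    # -> False
```

Because the decision is made a horizon ahead, the network has time to reallocate the slot if no communication is needed, which is precisely the advantage over instantaneous event triggers.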

PDF arXiv DOI [BibTex]


Feedback Control Goes Wireless: Guaranteed Stability over Low-power Multi-hop Networks

(Best Paper Award)

Mager, F., Baumann, D., Jacob, R., Thiele, L., Trimpe, S., Zimmerling, M.

In Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems, pages: 97-108, 10th ACM/IEEE International Conference on Cyber-Physical Systems, April 2019 (inproceedings)

Abstract
Closing feedback loops fast and over long distances is key to emerging applications; for example, robot motion control and swarm coordination require update intervals below 100 ms. Low-power wireless is preferred for its flexibility, low cost, and small form factor, especially if the devices support multi-hop communication. Thus far, however, closed-loop control over multi-hop low-power wireless has only been demonstrated for update intervals on the order of multiple seconds. This paper presents a wireless embedded system that tames imperfections impairing control performance such as jitter or packet loss, and a control design that exploits the essential properties of this system to provably guarantee closed-loop stability for linear dynamic systems. Using experiments on a testbed with multiple cart-pole systems, we are the first to demonstrate the feasibility and to assess the performance of closed-loop control and coordination over multi-hop low-power wireless for update intervals from 20 ms to 50 ms.

arXiv PDF DOI Project Page [BibTex]


2018


Deep Reinforcement Learning for Event-Triggered Control

Baumann, D., Zhu, J., Martius, G., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), pages: 943-950, 57th IEEE International Conference on Decision and Control (CDC), December 2018 (inproceedings)

arXiv PDF DOI Project Page [BibTex]



Learning Gaussian Processes by Minimizing PAC-Bayesian Generalization Bounds

Reeb, D., Doerr, A., Gerwinn, S., Rakitsch, B.

In Proceedings of Neural Information Processing Systems, Neural Information Processing Systems (NIPS), December 2018 (inproceedings)

Abstract
Gaussian Processes (GPs) are a generic modelling tool for supervised learning. While they have been successfully applied on large datasets, their use in safety critical applications is hindered by the lack of good performance guarantees. To this end, we propose a method to learn GPs and their sparse approximations by directly optimizing a PAC-Bayesian bound on their generalization performance, instead of maximizing the marginal likelihood. Besides its theoretical appeal, we find in our evaluation that our learning method is robust and yields significantly better generalization guarantees than other common GP approaches on several regression benchmark datasets.

[BibTex]



Efficient Encoding of Dynamical Systems through Local Approximations

Solowjow, F., Mehrjou, A., Schölkopf, B., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), pages: 6073-6079, Miami, FL, USA, December 2018 (inproceedings)

arXiv PDF DOI [BibTex]



Depth Control of Underwater Robots using Sliding Modes and Gaussian Process Regression

Lima, G. S., Bessa, W. M., Trimpe, S.

In Proceedings of the 15th Latin American Robotics Symposium, João Pessoa, Brazil, 15th Latin American Robotics Symposium, November 2018 (inproceedings)

Abstract
The development of accurate control systems for underwater robotic vehicles relies on the adequate compensation for hydrodynamic effects. In this work, a new robust control scheme is presented for remotely operated underwater vehicles. In order to meet both robustness and tracking requirements, sliding mode control is combined with Gaussian process regression. The convergence properties of the closed-loop signals are analytically proven. Numerical results confirm the improved performance of the proposed control scheme.
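The sliding-mode part of the scheme can be sketched on a one-degree-of-freedom toy model: drive a sliding surface to zero with a (smoothed) switching law that dominates a bounded disturbance. Gains, boundary layer, model, and disturbance below are illustrative assumptions; the paper additionally learns the unmodeled hydrodynamics with a GP.

```python
import numpy as np

# Toy sliding-mode depth regulator for z_ddot = u + d, with z the
# depth error; all numbers are illustrative, not from the paper.
lam, k, dt = 2.0, 3.0, 0.01
z, zd = 1.0, 0.0                           # depth error and its rate
for _ in range(2000):                      # 20 s of Euler simulation
    s = zd + lam * z                       # sliding surface s = e_dot + lam*e
    u = -lam * zd - k * np.tanh(s / 0.05)  # smoothed switching law
    d = 0.2                                # bounded unmodeled hydrodynamic effect
    zd += dt * (u + d)
    z += dt * zd
print(round(abs(z), 4))                    # near zero despite the disturbance
```

Since k exceeds the disturbance bound, s is driven into a small boundary layer and the error then decays along the surface; the smoothed `tanh` term trades exact convergence for chattering-free inputs.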

[BibTex]



Gait learning for soft microrobots controlled by light fields

Rohr, A. V., Trimpe, S., Marco, A., Fischer, P., Palagi, S.

In International Conference on Intelligent Robots and Systems (IROS) 2018, pages: 6199-6206, Piscataway, NJ, USA, International Conference on Intelligent Robots and Systems 2018, October 2018 (inproceedings)

Abstract
Soft microrobots based on photoresponsive materials and controlled by light fields can generate a variety of different gaits. This inherent flexibility can be exploited to maximize their locomotion performance in a given environment and used to adapt them to changing environments. However, because of the lack of accurate locomotion models, and given the intrinsic variability among microrobots, analytical control design is not possible. Common data-driven approaches, on the other hand, require running prohibitive numbers of experiments and lead to very sample-specific results. Here we propose a probabilistic learning approach for light-controlled soft microrobots based on Bayesian Optimization (BO) and Gaussian Processes (GPs). The proposed approach results in a learning scheme that is highly data-efficient, enabling gait optimization with a limited experimental budget, and robust against differences among microrobot samples. These features are obtained by designing the learning scheme through the comparison of different GP priors and BO settings on a semi-synthetic data set. The developed learning scheme is validated in microrobot experiments, resulting in a 115% improvement in a microrobot’s locomotion performance with an experimental budget of only 20 tests. These encouraging results lead the way toward self-adaptive microrobotic systems based on light-controlled soft microrobots and probabilistic learning control.
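The data-efficient BO loop described above can be sketched with a tiny GP posterior and an upper-confidence-bound acquisition on a synthetic 1-D "locomotion performance" objective. The RBF kernel, bounds, acquisition, and objective are illustrative assumptions; the paper tunes these choices by comparing GP priors and BO settings.

```python
import numpy as np

# Minimal GP + UCB Bayesian optimization sketch (all choices assumed).
def rbf(X1, X2, ell=0.3):
    return np.exp(-0.5 * ((X1[:, None] - X2[None, :]) / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mu = Ks.T @ alpha
    var = np.ones(len(Xs)) - np.sum(v ** 2, axis=0)   # prior variance 1
    return mu, np.maximum(var, 0.0)

def performance(x):               # stand-in for one microrobot experiment
    return np.sin(3 * x) - x ** 2 + 0.7 * x

X = np.array([0.1, 0.9])          # two initial experiments
y = performance(X)
grid = np.linspace(0.0, 1.0, 200)

for _ in range(10):               # small experimental budget
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * np.sqrt(var))]  # UCB acquisition
    X = np.append(X, x_next)
    y = np.append(y, performance(x_next))

print(round(float(X[np.argmax(y)]), 2), round(float(y.max()), 2))
```

Each iteration spends one "experiment" where the surrogate is either promising (high mean) or uncertain (high variance), which is what keeps the required experimental budget small.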

arXiv IEEE Xplore link (url) DOI Project Page [BibTex]



Learning-Based Robust Model Predictive Control with State-Dependent Uncertainty

Soloperto, R., Müller, M. A., Trimpe, S., Allgöwer, F.

In Proceedings of the IFAC Conference on Nonlinear Model Predictive Control (NMPC), Madison, Wisconsin, USA, 6th IFAC Conference on Nonlinear Model Predictive Control, August 2018 (inproceedings)

PDF [BibTex]



Learning an Approximate Model Predictive Controller with Guarantees

Hertneck, M., Koehler, J., Trimpe, S., Allgöwer, F.

IEEE Control Systems Letters, 2(3):543-548, July 2018 (article)

Abstract
A supervised learning framework is proposed to approximate a model predictive controller (MPC) with reduced computational complexity and guarantees on stability and constraint satisfaction. The framework can be used for a wide class of nonlinear systems. Any standard supervised learning technique (e.g. neural networks) can be employed to approximate the MPC from samples. In order to obtain closed-loop guarantees for the learned MPC, a robust MPC design is combined with statistical learning bounds. The MPC design ensures robustness to inaccurate inputs within given bounds, and Hoeffding’s Inequality is used to validate that the learned MPC satisfies these bounds with high confidence. The result is a closed-loop statistical guarantee on stability and constraint satisfaction for the learned MPC. The proposed learning-based MPC framework is illustrated on a nonlinear benchmark problem, for which we learn a neural network controller with guarantees.
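The Hoeffding-based validation step can be sketched directly: from n i.i.d. validation samples of the learned controller, lower-bound the true probability that it stays within the admissible input-error bound. The sample counts and confidence level below are illustrative, not values from the paper.

```python
import math

# One-sided Hoeffding bound: P(p_hat - p >= t) <= exp(-2 n t^2), so
# with probability >= 1 - delta the true success probability is at
# least p_hat - sqrt(ln(1/delta) / (2 n)).
def validated_success_rate(num_ok, n, delta):
    p_hat = num_ok / n
    return p_hat - math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Toy numbers: all 5000 validation samples of the learned controller
# stayed within the error bound tolerated by the robust MPC design.
lower = validated_success_rate(num_ok=5000, n=5000, delta=0.01)
print(round(lower, 4))   # -> 0.9785
```

Even a perfect empirical record only certifies a success probability strictly below one, which is why the robust MPC design must tolerate the residual approximation error.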

arXiv PDF DOI [BibTex]
