Worst-case Satisfaction of STL Specifications Using Feedforward Neural Network Controllers

Publisher: Association for Computing Machinery
Copyright: © 2019 ACM
ISSN: 1539-9087
eISSN: 1558-3465
DOI: 10.1145/3358239

Abstract

In this paper, a reinforcement learning approach for designing feedback neural network controllers for nonlinear systems is proposed. Given a Signal Temporal Logic (STL) specification that must be satisfied by the system over a set of initial conditions, the neural network parameters are tuned to maximize the satisfaction of the STL formula. The framework is based on a max-min formulation of the robustness of the STL formula: the maximization is solved through a Lagrange multipliers method, while the minimization corresponds to a falsification problem. We present our results on a vehicle and a quadrotor model and demonstrate that our approach reduces the training time by more than 50 percent compared to the baseline approach.
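
To make the max-min structure concrete: the training objective is max over the controller parameters theta of min over initial states x0 in X0 of rho_phi(xi(x0; theta)), where rho_phi is the STL robustness of the closed-loop trajectory. Below is a minimal, hypothetical Python sketch of that loop, not the paper's algorithm: the toy plant, the spec G_[0,T] (|x| < 0.5), the random-sampling "falsifier", and the finite-difference gradient ascent are all illustrative assumptions standing in for the authors' Lagrange multipliers method and a proper falsification tool.

```python
# Illustrative max-min sketch (NOT the paper's method): the inner
# "falsification" is plain random sampling over the initial set, and the
# outer maximization is finite-difference ascent instead of the Lagrange
# multipliers approach the authors propose.
import numpy as np

rng = np.random.default_rng(0)

# --- Assumed toy plant: scalar nonlinear system x' = x + sin(x) + u ---
DT, STEPS = 0.05, 60

def simulate(x0, theta):
    """Roll out the closed loop from initial state x0 under the controller."""
    xs, x = [x0], x0
    for _ in range(STEPS):
        x = x + DT * (x + np.sin(x) + controller(x, theta))
        xs.append(x)
    return np.array(xs)

# --- Feedforward NN controller: one tanh hidden layer with H units ---
H = 8

def unpack(theta):
    return theta[:H], theta[H:2*H], theta[2*H:3*H], theta[3*H]

def controller(x, theta):
    w1, b1, w2, b2 = unpack(theta)
    return float(w2 @ np.tanh(w1 * x + b1) + b2)

# --- STL robustness of G_[0,T] (|x| < 0.5):
#     rho = min_t (0.5 - |x_t|), positive iff the run satisfies the formula ---
def robustness(traj):
    return np.min(0.5 - np.abs(traj))

# --- Inner problem: approximate falsification over X0 = [-0.4, 0.4] by
#     random sampling (a real falsifier would search far more cleverly) ---
def worst_case(theta, n_samples=64):
    x0s = rng.uniform(-0.4, 0.4, n_samples)
    rhos = [robustness(simulate(x0, theta)) for x0 in x0s]
    i = int(np.argmin(rhos))
    return x0s[i], rhos[i]

# --- Outer problem: maximize worst-case robustness over theta via
#     finite-difference gradient ascent (stand-in for the paper's method) ---
theta = 0.1 * rng.standard_normal(3 * H + 1)
lr, eps = 0.05, 1e-3
for it in range(200):
    x0_star, rho = worst_case(theta)
    grad = np.zeros_like(theta)
    for j in range(len(theta)):
        d = np.zeros_like(theta); d[j] = eps
        grad[j] = (robustness(simulate(x0_star, theta + d)) - rho) / eps
    theta += lr * grad
    if it % 50 == 0:
        print(f"iter {it:3d}  worst-case robustness {rho:+.4f}")
```

Training the controller against the worst sampled initial condition, rather than a fixed or average one, is what gives the max-min objective its worst-case flavor: the controller only "wins" when the minimum robustness over the initial set becomes positive.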

Journal

ACM Transactions on Embedded Computing Systems (TECS), Association for Computing Machinery

Published: Oct 8, 2019

Keywords: Reinforcement learning
