
Real-time control for fuel-optimal Moon landing based on an interactive deep reinforcement learning algorithm


Astrodynamics, Volume 3 (4): 12 – Dec 1, 2019

Publisher: Springer Journals
Copyright: 2019 Tsinghua University Press
ISSN: 2522-008X
eISSN: 2522-0098
DOI: 10.1007/s42064-018-0052-2

Abstract

In this study, a real-time optimal control approach is proposed using an interactive deep reinforcement learning algorithm for the Moon fuel-optimal landing problem. Given the remote communication restrictions and environmental uncertainties, advanced landing control techniques are required to meet the high demands for real-time performance and autonomy in Moon landing missions. Deep reinforcement learning (DRL) algorithms have recently been developed for real-time optimal control but suffer from slow convergence and the difficulty of reward function design. To address these problems, a DRL algorithm with an actor-indirect method architecture is developed to achieve optimal control of the Moon landing mission. In this algorithm, an indirect method is employed to generate the optimal control actions for deep neural network (DNN) learning, while the trained DNNs provide good initial guesses for the indirect method to improve the efficiency of training data generation. Through sufficient learning of the state-action relationship, the trained DNNs can approximate the optimal actions and steer the spacecraft to the target in real time. Additionally, a nonlinear feedback controller is developed to improve the terminal landing accuracy. Numerical simulations verify the effectiveness of the proposed DRL algorithm and demonstrate the performance of the developed optimal landing controller.
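The abstract outlines an interactive loop: an indirect (costate-based) optimal control solver generates fuel-optimal state-action pairs to supervise the DNN, and the trained DNN in turn supplies good initial guesses back to the solver. The Python sketch below illustrates that loop under stated assumptions only; the state and action dimensions, the names solve_indirect_ocp and policy, and the stubbed solver that returns synthetic data are all hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only: assumed names, shapes, and a stubbed solver.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 7, 3   # assumed: position, velocity, mass -> thrust command

# Small MLP standing in for the policy DNN that maps lander states to optimal actions.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def solve_indirect_ocp(initial_state, costate_guess):
    """Stand-in for the indirect (costate/shooting) fuel-optimal solver.

    In the paper this would return optimal state-action pairs along a landing
    trajectory; here it emits synthetic data so the loop is runnable.
    """
    states = torch.randn(50, STATE_DIM)
    actions = torch.randn(50, ACTION_DIM)
    refined_costates = costate_guess  # pretend the guess was refined to convergence
    return states, actions, refined_costates

costate_guess = torch.zeros(STATE_DIM)       # cold start for the very first solve
for iteration in range(100):
    # 1) The indirect method solves a dispersed landing case and returns
    #    fuel-optimal state-action pairs. Here the previous converged costates
    #    warm-start the solver; in the paper the trained DNNs supply this
    #    initial guess, which is the "interactive" part that speeds up
    #    training data generation.
    initial_state = torch.randn(STATE_DIM)   # sampled initial landing condition
    states, actions, costate_guess = solve_indirect_ocp(initial_state, costate_guess)

    # 2) Supervised update of the DNN on the newly generated optimal actions;
    #    with enough data the DNN approximates the optimal state-action map
    #    and can steer the lander in real time without calling the solver.
    optimizer.zero_grad()
    loss = loss_fn(policy(states), actions)
    loss.backward()
    optimizer.step()
```

In a full implementation the stub would be replaced by a shooting method that integrates the state-costate dynamics from Pontryagin's minimum principle, and the DNN outputs would be mapped back into the solver's initial guess as described in the abstract.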

Journal

Astrodynamics, Springer Journals

Published: Dec 1, 2019
