Recent advances in high-performance computing have allowed sampling-based motion planning methods to be applied successfully to practical robot control problems. In these methods, a graph representing the local connectivity among states is constructed from a mathematical model of the controlled target, and the motion is planned over this graph. However, it is difficult to obtain an appropriate mathematical model in advance when the robot's behavior is affected by unanticipated factors, so the model must instead be built from motion data gathered by monitoring the robot in operation. When these data are sparse, the resulting model is uncertain. To deal with this uncertainty, we propose a motion planning method that uses Gaussian process regression as the mathematical model. Experimental results show that satisfactory robot motion can be achieved from limited data.
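The approach the abstract describes can be sketched in a minimal form: fit a Gaussian process to sparse motion data, then build a roadmap graph whose edge costs are inflated by the GP's predictive variance, so a shortest-path search prefers regions where the model is confident. Everything below (the RBF kernel, the grid roadmap, the variance-weighted cost) is an illustrative assumption, not the authors' implementation.

```python
import heapq
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    """Squared-exponential kernel between row-stacked point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

class GPRegressor:
    """Minimal Gaussian process regression: predictive mean and variance."""
    def __init__(self, X, y, noise=1e-4):
        self.X = X
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))

    def predict(self, Xs):
        Ks = rbf_kernel(Xs, self.X)            # (m, n)
        mean = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)      # (n, m)
        var = 1.0 - (v ** 2).sum(axis=0)       # prior variance is 1.0
        return mean, np.maximum(var, 0.0)

def dijkstra(edges, start, goal):
    """Shortest path; edges: dict node -> [(neighbor, cost), ...]."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, np.inf):
            continue
        for w, c in edges.get(u, []):
            nd = d + c
            if nd < dist.get(w, np.inf):
                dist[w], prev[w] = nd, u
                heapq.heappush(pq, (nd, w))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Sparse "motion data": states the robot has visited, with a scalar
# observation standing in for a learned dynamics quantity.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(30, 2))
y_train = np.sin(3.0 * X_train[:, 0]) * np.cos(3.0 * X_train[:, 1])
gp = GPRegressor(X_train, y_train)

# Roadmap: a 5x5 grid over the unit square, 4-connected. Each edge's cost
# is its step length inflated by the GP's predictive variance at the edge
# midpoint, so the planner avoids regions the model knows little about.
side = 5
step = 1.0 / (side - 1)
coords = np.array([(i * step, j * step) for i in range(side) for j in range(side)])
edges = {}
for a in range(len(coords)):
    for b in range(len(coords)):
        if np.isclose(np.linalg.norm(coords[a] - coords[b]), step):
            mid = (coords[a] + coords[b]) / 2.0
            _, var = gp.predict(mid[None, :])
            edges.setdefault(a, []).append((b, step * (1.0 + 10.0 * var[0])))

start, goal = 0, side * side - 1           # corner (0, 0) -> corner (1, 1)
path, cost = dijkstra(edges, start, goal)
```

The weight `10.0` on the variance term is an arbitrary trade-off parameter; raising it makes the planner detour further around poorly modeled regions, at the price of longer paths.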
Autonomous Robots – Springer Journals
Published: Aug 4, 2016