Robot Policy Improvement With Natural Evolution Strategies for Stable Nonlinear Dynamical System


Abstract:

Robot learning through kinesthetic teaching is a promising way of cloning human behaviors, but its performance on complex tasks is limited when only small amounts of data are available, due to compounding errors. To improve the robustness and adaptability of imitation learning, a hierarchical learning strategy is proposed: low-level learning performs behavioral cloning with supervised learning, and high-level learning performs policy improvement. First, a Gaussian mixture model (GMM)-based dynamical system is formulated to encode a motion from the demonstration. We then derive sufficient conditions on the GMM parameters that guarantee global stability of the dynamical system from any initial state, using the Lyapunov stability theorem. Because imitation learning must reason about the motion well into the future for a wide range of tasks, it is important to improve the adaptability of the learned model through policy improvement. Finally, a method based on exponential natural evolution strategies is proposed to optimize the parameters of the dynamical system together with the stiffness of a variable impedance controller, in which the exploration noise is constrained by the stability conditions of the dynamical system in the exploration space, thus preserving global stability. Empirical evaluations are conducted on manipulators in different scenarios, including motion planning with obstacle avoidance and stiffness learning.
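The abstract compresses three technical ingredients: a GMM-based dynamical system that encodes the demonstrated motion, Lyapunov-based sufficient conditions on its parameters that guarantee global stability, and exponential natural evolution strategies (xNES) whose exploration noise is kept inside the stable region. The sketch below illustrates the general shape of such a pipeline; it is not the authors' implementation, and it assumes SEDS-style stability conditions (b_k = -A_k x*, with every A_k + A_k^T negative definite) as the concrete stability test. The helper names gmm_ds_velocity, is_stable, and xnes_improve are hypothetical.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import multivariate_normal


def gmm_ds_velocity(x, priors, means, covs, A_list, target):
    """Velocity of the GMM-based dynamical system x_dot = sum_k h_k(x) A_k (x - x*)."""
    resp = np.array([p * multivariate_normal.pdf(x, m, c)
                     for p, m, c in zip(priors, means, covs)])
    h = resp / (resp.sum() + 1e-12)           # mixture responsibilities h_k(x)
    return sum(hk * (A @ (x - target)) for hk, A in zip(h, A_list))


def is_stable(A_list, eps=1e-6):
    """SEDS-style sufficient condition: every A_k + A_k^T is negative definite."""
    return all(np.all(np.linalg.eigvalsh(A + A.T) < -eps) for A in A_list)


def xnes_improve(theta0, fitness, accept, iters=100, pop=10, sigma0=0.1, seed=0):
    """Plain exponential NES on a flat parameter vector theta.

    `accept(theta)` rejects exploration samples that violate the stability
    condition, so every evaluated candidate keeps the dynamical system stable
    (the loop assumes accept() succeeds reasonably often).
    """
    rng = np.random.default_rng(seed)
    d = theta0.size
    eta_mu = 1.0
    eta_sigma = eta_B = 0.6 * (3 + np.log(d)) / (d * np.sqrt(d))
    mu, sigma, B = theta0.astype(float), sigma0, np.eye(d)
    # fixed rank-based utilities (best sample gets the largest utility)
    u = np.maximum(0.0, np.log(pop / 2 + 1) - np.log(np.arange(1, pop + 1)))
    u = u / u.sum() - 1.0 / pop
    for _ in range(iters):
        zs, fs = [], []
        while len(zs) < pop:                  # resample until the candidate is stable
            z = rng.standard_normal(d)
            theta = mu + sigma * (B @ z)
            if accept(theta):
                zs.append(z)
                fs.append(fitness(theta))
        order = np.argsort(fs)[::-1]          # sort samples by descending fitness
        zs = np.array(zs)[order]
        grad_mu = (u[:, None] * zs).sum(axis=0)
        grad_M = sum(ui * (np.outer(z, z) - np.eye(d)) for ui, z in zip(u, zs))
        grad_sigma = np.trace(grad_M) / d
        mu = mu + eta_mu * sigma * (B @ grad_mu)
        sigma *= np.exp(0.5 * eta_sigma * grad_sigma)
        B = B @ expm(0.5 * eta_B * (grad_M - grad_sigma * np.eye(d)))
    return mu
```

Here, fitness(theta) would roll out the dynamical system and score task performance (e.g., obstacle clearance or variable-stiffness effort), and accept(theta) would rebuild the A_k matrices from theta and call is_stable, so that only stability-preserving parameters are ever evaluated.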
Published in: IEEE Transactions on Cybernetics (Volume: 53, Issue: 6, June 2023)
Page(s): 4002 - 4014
Date of Publication: 05 August 2022

PubMed ID: 35930520

I. Introduction

Ever since the onset of pioneering research into robot learning, methods of learning by demonstration have attracted much attention. Robot learning can facilitate applications in industry, manufacturing, healthcare, and other areas, because it directly clones motor skills by extracting task-relevant information that can be transferred to the robot [1]–[3]. Traditional imitation methods generally use supervised learning to obtain the regression parameters of dynamic motion primitives (DMPs) [4] and Gaussian mixture models (GMMs) [5]. A major drawback of these methods, however, is that they offer limited adaptability and depend heavily on large amounts of data [2]–[6], which restricts their use in real-world robotic applications. In practice, it is therefore important to design a more efficient learning policy from a finite amount of expert data.
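As a concrete illustration of "supervised learning to obtain the regression parameters", the following is a minimal, textbook-style sketch of fitting the forcing term of a one-dimensional discrete DMP to a single demonstration with locally weighted least squares. It follows the common DMP formulation rather than the exact one in [4], and the function name and default gains are assumptions.

```python
import numpy as np


def fit_dmp_weights(y, dt, n_basis=20, alpha_y=25.0, beta_y=6.25, alpha_s=3.0):
    """Fit the forcing-term weights of a 1-D discrete DMP to a demonstration y(t)."""
    T = len(y) * dt                            # movement duration (time constant tau)
    yd = np.gradient(y, dt)                    # demonstrated velocity
    ydd = np.gradient(yd, dt)                  # demonstrated acceleration
    y0, g = y[0], y[-1]                        # start and goal of the demonstration
    t = np.arange(len(y)) * dt
    s = np.exp(-alpha_s * t / T)               # canonical phase variable s(t)
    # forcing term that would exactly reproduce the demonstration:
    # tau^2 * ydd = alpha_y * (beta_y * (g - y) - tau * yd) + f(s)
    f_target = T**2 * ydd - alpha_y * (beta_y * (g - y) - T * yd)
    # Gaussian basis functions spread over the phase variable
    c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))   # centers
    h = 1.0 / np.diff(c, append=c[-1] / 2) ** 2          # widths
    psi = np.exp(-h * (s[:, None] - c) ** 2)              # (timesteps, n_basis)
    xi = s * (g - y0)                          # amplitude-scaled phase
    # locally weighted least squares: one scalar weight per basis function
    w = (psi * (xi * f_target)[:, None]).sum(0) / ((psi * (xi**2)[:, None]).sum(0) + 1e-12)
    return w, c, h
```

Reproducing the motion then amounts to integrating the same transformation system forward with f(s) = s (g - y0) * sum_i psi_i(s) w_i / sum_i psi_i(s); the GMM route of [5] instead obtains its regression parameters via Gaussian mixture regression over the demonstrated states.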
