I. Introduction
In recent decades, the use of robotic manipulators has spread across many applications, such as intervention in hazardous environments, robotic surgery, prosthetics, and industrial assembly, among others [1]. In these scenarios, the human operator is often essential, especially in unstructured environments where automating robot actions can be a complex task. In this context, the development of human-machine interfaces (HMIs) can help to improve the performance and versatility of teleoperation of robotic manipulators. Several HMIs have been studied to control these mechatronic platforms using different interface devices, such as keyboards, joysticks, inertial measurement units (IMUs) [2], cameras [3], Kinect sensors [4], haptic devices [5], and electromyography (EMG) based systems [2], [6]. In fact, the control of manipulators using advanced sensing modalities such as EMG-based hand gesture recognition (HGR) systems remains an open research problem.