
Whole-Body Teleoperation for Mobile Manipulation at Zero Added Cost



Abstract:

Demonstration data plays a key role in learning complex behaviors and training robotic foundation models. While effective control interfaces exist for static manipulators, data collection remains cumbersome and time intensive for mobile manipulators due to their large number of degrees of freedom. While specialized hardware, avatars, or motion tracking can enable whole-body control, these approaches are either expensive, robot-specific, or suffer from the embodiment mismatch between robot and human demonstrator. In this work, we present MoMa-Teleop, a novel teleoperation method that infers end-effector motions from existing interfaces and delegates the base motions to a previously developed reinforcement learning agent, leaving the operator to focus fully on the task-relevant end-effector motions. This enables whole-body teleoperation of mobile manipulators with no additional hardware or setup costs via standard interfaces such as joysticks or hand guidance. Moreover, the operator is not bound to a tracked workspace and can move freely with the robot over spatially extended tasks. We demonstrate that our approach results in a significant reduction in task completion time across a variety of robots and tasks. As the generated data covers diverse whole-body motions without embodiment mismatch, it enables efficient imitation learning. By focusing on task-specific end-effector motions, our approach learns skills that transfer to unseen settings, such as new obstacles or changed object positions, from as little as five demonstrations.
Published in: IEEE Robotics and Automation Letters (Volume: 10, Issue: 4, April 2025)
Page(s): 3198 - 3205
Date of Publication: 10 February 2025


I. Introduction

While robots have reached the hardware capabilities to tackle a wide range of household tasks, generating and executing such motions remains an open problem. The efficient collection of diverse robotic data has become a key factor in teaching such motions via imitation learning [1], [2], [3], [4], [5]. Although a wide variety of interfaces, teleoperation methods, and kinesthetic teaching approaches exist for static manipulators, collecting demonstrations for mobile manipulation platforms is still challenging. Their large number of degrees of freedom (DoF) often overwhelms standard input methods such as joysticks and keyboards, or imposes a large cognitive load on the operator trying to coordinate all the necessary buttons and joysticks. While motion tracking systems [6], [7], [8], [9] and exoskeletons [4], [10], [11] provide more intuitive interfaces, they face the correspondence problem when the morphologies of robot and human do not match. Furthermore, exoskeletons are highly specialized, expensive equipment, and tracking-based methods confine the operator to the tracked area, preventing them from moving freely with the mobile robot and forcing them to operate from afar.
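To make the division of labor described in the abstract concrete, the following is a minimal sketch, not the authors' implementation, of one whole-body control tick under such a scheme: the operator supplies only an end-effector twist through a standard interface, a pretrained RL agent proposes base velocities, and the arm resolves the remaining end-effector motion. All function names, the damped least-squares resolution, and the toy Jacobians are illustrative assumptions; only numpy is required.

import numpy as np

def damped_pinv(J, damping=1e-2):
    # Damped least-squares pseudoinverse: a standard, singularity-robust
    # way to invert a manipulator Jacobian.
    JJt = J @ J.T
    return J.T @ np.linalg.inv(JJt + (damping ** 2) * np.eye(JJt.shape[0]))

def teleop_step(ee_twist_cmd, base_cmd, J_arm, J_base):
    # One control tick of the hypothesized decomposition:
    #   ee_twist_cmd : 6-DoF end-effector twist from the operator's
    #                  interface (joystick, hand guidance, ...)
    #   base_cmd     : base velocities proposed by a pretrained RL agent
    #   J_arm, J_base: Jacobians mapping arm joint velocities / base
    #                  velocities to end-effector twist
    # The arm realizes whatever end-effector motion the base does not
    # already contribute, so the whole body tracks the operator's command.
    ee_from_base = J_base @ base_cmd
    q_dot_arm = damped_pinv(J_arm) @ (ee_twist_cmd - ee_from_base)
    return q_dot_arm

# Toy usage with random Jacobians (7-DoF arm, 3-DoF planar base).
rng = np.random.default_rng(0)
J_arm = rng.standard_normal((6, 7))
J_base = rng.standard_normal((6, 3))
cmd = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.0])  # 10 cm/s along x
base = np.array([0.05, 0.0, 0.0])               # agent drives forward
print(teleop_step(cmd, base, J_arm, J_base))

Under this decomposition the operator never coordinates base and arm explicitly, which is the property the paper attributes to delegating base motion to the learned agent.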

