
Imitating Tool-Based Garment Folding From a Single Visual Observation Using Hand-Object Graph Dynamics



Abstract:

Garment folding is a ubiquitous domestic task that is difficult to automate due to the highly deformable nature of fabrics. In this article, we propose a novel method of learning from demonstrations that enables robots to autonomously manipulate an assistive tool to fold garments. In contrast to traditional methods that rely on low-level pixel features, our proposed solution uses a dense visual descriptor to encode the demonstration into a high-level hand-object graph (HoG), which efficiently represents the interactions between the robot and the manipulated tool. We then leverage a graph neural network to learn the forward dynamics model from HoGs and, given only a single demonstration, optimize the imitation policy with a model predictive controller to accomplish the folding task. To validate the proposed approach, we conducted a detailed experimental study on a robotic platform instrumented with vision sensors and a custom-made end-effector that interacts with the folding board.
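
As a rough illustration of the representation described above, the following minimal sketch shows how per-frame keypoints (e.g., obtained by matching a dense visual descriptor) could be assembled into a hand-object graph. All names (HoG, build_hog, contact_radius) and the distance-based edge rule are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class HoG:
    # Hypothetical hand-object graph: node features plus an interaction edge list.
    nodes: np.ndarray              # (N, 3) keypoint positions (end-effector + folding board)
    node_type: np.ndarray          # (N,) 0 = hand/end-effector node, 1 = folding-board node
    edges: List[Tuple[int, int]]   # index pairs of nodes considered to interact


def build_hog(hand_kp: np.ndarray, tool_kp: np.ndarray, contact_radius: float = 0.05) -> HoG:
    # hand_kp: (Nh, 3), tool_kp: (Nt, 3) keypoints extracted from one image,
    # e.g., by a dense descriptor network; contact_radius is an assumed threshold.
    nodes = np.vstack([hand_kp, tool_kp])
    node_type = np.array([0] * len(hand_kp) + [1] * len(tool_kp))
    edges = []
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if np.linalg.norm(nodes[i] - nodes[j]) < contact_radius:
                edges.append((i, j))
    return HoG(nodes, node_type, edges)

A sequence of such graphs, one per observed frame, is the kind of structured state on which a graph-based forward dynamics model can be trained.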
Published in: IEEE Transactions on Industrial Informatics (Volume: 20, Issue: 4, April 2024)
Page(s): 6245 - 6256
Date of Publication: 01 January 2024


I. Introduction

Robots have been extensively used to support people in a variety of activities of daily living. Garment folding is a clear example of a monotonous service task that can theoretically be performed by robots but which, in practice, remains difficult to solve with state-of-the-art strategies [1], [2]. One possible solution to alleviate the complexity of manipulating fabrics is to enable the robot to learn how to use an assistive tool by observing an expert demonstration and then imitating the behavior. This approach is typically referred to as imitation learning (IL) [3], [4], a technique that enables autonomous agents (e.g., robots) to acquire complex skills from simple sensory data without requiring the manipulation strategies to be hard-coded. Our aim in this work is to solve the garment folding problem by using an assistive tool under the IL paradigm.
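
To make the model-based pipeline summarized in the abstract concrete, the sketch below shows the general pattern of rolling out a learned forward dynamics model inside a sampling-based model predictive controller and scoring candidate actions against the single demonstration. The dynamics function, cost, action space, and sampling scheme are placeholders under stated assumptions, not the paper's actual formulation.

import numpy as np


def mpc_step(hog_state, demo_goal, dynamics_fn, horizon=5, n_samples=64, rng=None):
    # One planning step of a generic sampling-based MPC over a learned forward model.
    # hog_state:   current graph state (e.g., flattened node positions).
    # demo_goal:   target graph state extracted from the single demonstration.
    # dynamics_fn: learned GNN forward dynamics, dynamics_fn(state, action) -> next_state,
    #              treated here as a black box.
    rng = rng or np.random.default_rng(0)
    best_cost, best_action = np.inf, None
    for _ in range(n_samples):
        # Sample a candidate sequence of end-effector displacements (assumed action space).
        actions = rng.normal(scale=0.02, size=(horizon, 3))
        state = hog_state
        for a in actions:
            state = dynamics_fn(state, a)               # roll the learned model forward
        cost = float(np.sum((state - demo_goal) ** 2))  # deviation from the demonstrated graph
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action  # execute only the first action, then replan (receding horizon)

Note that the paper's citation of interior-point nonlinear programming methods [29], [30] suggests the actual controller is optimized with a gradient-based NLP solver rather than random sampling; the sampling loop above is used only to keep the sketch self-contained.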

References
[1] J. Borràs, G. Alenyà and C. Torras, "A grasping-centered analysis for cloth manipulation", IEEE Trans. Robot., vol. 36, no. 3, pp. 924-936, Jun. 2020.
[2] R. Jangir, G. Alenyà and C. Torras, "Dynamic cloth manipulation with deep reinforcement learning", Proc. IEEE Int. Conf. Robot. Autom., pp. 4630-4636, 2020.
[3] S. Schaal, "Is imitation learning the route to humanoid robots?", Trends Cogn. Sci., vol. 3, no. 6, pp. 233-242, 1999.
[4] B. D. Argall et al., "A survey of robot learning from demonstration", Robot. Auton. Syst., vol. 57, no. 5, pp. 469-483, 2009.
[5] J. Kober and J. Peters, "Learning motor primitives for robotics", Proc. IEEE Int. Conf. Robot. Autom., pp. 2112-2118, 2009.
[6] P. Abbeel, A. Coates and A. Y. Ng, "Autonomous helicopter aerobatics through apprenticeship learning", Int. J. Robot. Res., vol. 29, no. 13, pp. 1608-1639, 2010.
[7] Y. Zhang, F. Qiu, T. Hong, Z. Wang and F. Li, "Hybrid imitation learning for real-time service restoration in resilient distribution systems", IEEE Trans. Ind. Informat., vol. 18, no. 3, pp. 2089-2099, Mar. 2022.
[8] J. Stria et al., "Garment perception and its folding using a dual-arm robot", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 61-67, 2014.
[9] Y. C. Hou, K. S. Mohamed Sahari, L. Y. Weng, D. N. T. How and H. Seki, "Particle-based perception of garment folding for robotic manipulation purposes", Int. J. Adv. Robot. Syst., vol. 14, no. 6, 2017.
[10] A. Doumanoglou et al., "Folding clothes autonomously: A complete pipeline", IEEE Trans. Robot., vol. 32, no. 6, pp. 1461-1478, Dec. 2016.
[11] A. Hussein et al., "Imitation learning: A survey of learning methods", ACM Comput. Surveys, vol. 50, no. 2, pp. 1-35, 2017.
[12] T. M. Moerland et al., "Model-based reinforcement learning: A survey", Found. Trends Mach. Learn., vol. 16, no. 1, pp. 1-118, 2023.
[13] A. Billard and M. J. Matarić, "Learning human arm movements by imitation: Evaluation of a biologically inspired connectionist architecture", Robot. Auton. Syst., vol. 37, no. 2/3, pp. 145-160, 2001.
[14] F. Torabi, G. Warnell and P. Stone, "Behavioral cloning from observation", Proc. 27th Int. Joint Conf. Artif. Intell., pp. 4950-4957, 2018.
[15] P. Zhou, J. Zhu, S. Huo and D. Navarro-Alarcon, "LaSeSOM: A latent and semantic representation framework for soft object manipulation", IEEE Robot. Autom. Lett., vol. 6, no. 3, pp. 5381-5388, Jul. 2021.
[16] D. Huang, B. Li, Y. Li and C. Yang, "Cooperative manipulation of deformable objects by single-leader-dual-follower teleoperation", IEEE Trans. Ind. Electron., vol. 69, no. 12, pp. 13162-13170, Dec. 2022.
[17] Y. Liu, A. Gupta, P. Abbeel and S. Levine, "Imitation from observation: Learning to imitate behaviors from raw video via context translation", Proc. IEEE Int. Conf. Robot. Autom., pp. 1118-1125, 2018.
[18] S. Arora and P. Doshi, "A survey of inverse reinforcement learning: Challenges, methods and progress", Artif. Intell., vol. 297, 2021.
[19] J. Ramírez, W. Yu and A. Perrusquía, "Model-free reinforcement learning from expert demonstrations: A survey", Artif. Intell. Rev., vol. 55, no. 4, pp. 3213-3241, 2022.
[20] Y. Chebotar, K. Hausman, M. Zhang, G. Sukhatme, S. Schaal and S. Levine, "Combining model-based and model-free updates for trajectory-centric reinforcement learning", Proc. Int. Conf. Mach. Learn., pp. 703-711, 2017.
[21] P. Englert, A. Paraschos, J. Peters and M. P. Deisenroth, "Model-based imitation learning by probabilistic trajectory matching", Proc. IEEE Int. Conf. Robot. Autom., pp. 1922-1927, 2013.
[22] C. Wang et al., "Offline-online learning of deformation model for cable manipulation with graph neural networks", IEEE Robot. Autom. Lett., vol. 7, no. 2, pp. 5544-5551, Apr. 2022.
[23] P. Florence, L. Manuelli and R. Tedrake, "Self-supervised correspondence in visuomotor policy learning", IEEE Robot. Autom. Lett., vol. 5, no. 2, pp. 492-499, Apr. 2020.
[24] P. R. Florence, L. Manuelli and R. Tedrake, "Dense object nets: Learning dense visual object descriptors by and for robotic manipulation", Proc. Conf. Robot. Learn., pp. 373-385, 2018.
[25] Q. Chen, J. Xu and V. Koltun, "Fast image processing with fully-convolutional networks", Proc. IEEE Int. Conf. Comput. Vis., pp. 2497-2506, 2017.
[26] I. M. Pelayo, Geodesic Convexity in Graphs, Berlin, Germany: Springer, 2013.
[27] J. Zhou et al., "Graph neural networks: A review of methods and applications", AI Open, vol. 1, pp. 57-81, 2020.
[28] A. Sanchez-Gonzalez et al., "Learning to simulate complex physics with graph networks", Proc. Int. Conf. Mach. Learn., pp. 8459-8468, 2020.
[29] A. Wächter and L. T. Biegler, "Line search filter methods for nonlinear programming: Motivation and global convergence", SIAM J. Optim., vol. 16, no. 1, pp. 1-31, 2005.
[30] A. Wächter and L. T. Biegler, "On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming", Math. Program., vol. 106, no. 1, pp. 25-57, 2006.
