
UniDexGrasp++: Improving Dexterous Grasping Policy Learning via Geometry-aware Curriculum and Iterative Generalist-Specialist Learning



Abstract:

We propose a novel, object-agnostic method for learning a universal policy for dexterous object grasping from realistic point cloud observations and proprioceptive information under a table-top setting, namely UniDexGrasp++. To address the challenge of learning a vision-based policy across thousands of object instances, we propose Geometry-aware Curriculum Learning (GeoCurriculum) and Geometry-aware iterative Generalist-Specialist Learning (GiGSL), which leverage the geometric features of the task and significantly improve generalizability. With our proposed techniques, our final policy achieves universal dexterous grasping on thousands of object instances with 85.4% and 78.2% success rates on the train and test sets, outperforming the state-of-the-art baseline UniDexGrasp by 11.7% and 11.3%, respectively.
Date of Conference: 01-06 October 2023
Date Added to IEEE Xplore: 15 January 2024
Conference Location: Paris, France

1. Introduction

In this work, we present a novel dexterous grasping policy learning pipeline, UniDexGrasp++. Like UniDexGrasp [69], UniDexGrasp++ is trained on 3,000+ different object instances with random object poses under a table-top setting. It significantly outperforms the previous state of the art, achieving 85.4% and 78.2% success rates on the train and test sets.
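To make the two components named in the abstract concrete, the following is a minimal Python sketch of the overall structure, not the authors' implementation: encode_geometry, train_policy, and distill are hypothetical callables standing in for the paper's point-cloud encoder, RL training, and policy distillation stages, and k-means clustering of geometry features is only an assumed stand-in for the paper's actual object grouping and curriculum scheduling.

import numpy as np
from sklearn.cluster import KMeans

def geometry_features(objects, encode_geometry):
    # Embed each object's point cloud into a fixed-length geometry feature vector.
    return np.stack([encode_geometry(obj) for obj in objects])

def geo_curriculum(objects, feats, n_stages=4, seed_idx=0):
    # Order objects from geometrically similar (to a seed object) to dissimilar,
    # yielding progressively larger training sets, one per curriculum stage.
    order = np.argsort(np.linalg.norm(feats - feats[seed_idx], axis=1))
    for k in range(1, n_stages + 1):
        cutoff = max(1, int(len(objects) * k / n_stages))
        yield [objects[i] for i in order[:cutoff]]

def gigsl(objects, feats, train_policy, distill, n_iters=2, n_clusters=8):
    # Iterative generalist-specialist learning: cluster objects by geometry,
    # fine-tune one specialist per cluster starting from the current generalist,
    # then distill all specialists back into a single generalist policy.
    generalist = train_policy(objects, init=None)
    for _ in range(n_iters):
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
        specialists = [
            train_policy([o for o, l in zip(objects, labels) if l == c],
                         init=generalist)
            for c in range(n_clusters)
        ]
        generalist = distill(specialists, objects, init=generalist)
    return generalist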

References
1.
Pieter Abbeel and Andrew Y Ng, "Apprenticeship learning via inverse reinforcement learning", Proceedings of the twenty-first international conference on Machine learning, pp. 1, 2004.
2.
Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas et al., "Solving rubik’s cube with a robot hand", 2019.
3.
Sheldon Andrews and Paul G Kry, "Goal directed multi-finger manipulation: Control policies and analysis", Computers & Graphics, vol. 37, no. 7, pp. 830-839, 2013.
4.
Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray et al., "Learning dexterous in-hand manipulation", The International Journal of Robotics Research, vol. 39, no. 1, pp. 3-20, 2020.
5.
Yunfei Bai and C. Karen Liu, "Dexterous manipulation using both palm and fingers", 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 1560-1565, 2014.
6.
Michel Breyer, Jen Jen Chung, Lionel Ott, Roland Siegwart and Juan Nieto, "Volumetric grasping network: Real-time 6 dof grasp detection in clutter", 2021.
7.
Tao Chen, Megha Tippur, Siyang Wu, Vikash Kumar, Edward Adelson and Pulkit Agrawal, "Visual dexterity: In-hand dexterous manipulation from depth", 2022.
8.
Tao Chen, Jie Xu and Pulkit Agrawal, "A system for general in-hand object re-orientation", Conference on Robot Learning, 2021.
9.
Sammy Christen, Muhammed Kocabas, Emre Aksan, Jemin Hwangbo, Jie Song and Otmar Hilliges, "D-grasp: Physically plausible dynamic grasp synthesis for hand-object interactions", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
10.
Karl Cobbe, Chris Hesse, Jacob Hilton and John Schulman, "Leveraging procedural generation to benchmark reinforcement learning", International Conference on machine learning, pp. 2048-2056, 2020.
11.
Nikhil Chavan Dafle, Alberto Rodriguez, Robert Paolini, Bowei Tang, Siddhartha S Srinivasa, Michael Erdmann, et al., "Extrinsic dexterity: In-hand manipulation with external forces", 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 1578-1585, 2014.
12.
Qiyu Dai, Yan Zhu, Yiran Geng, Ciyu Ruan, Jiazhao Zhang and He Wang, "Graspnerf: Multiview-based 6-dof grasp detection for transparent and specular objects using generalizable nerf", 2022.
13.
Mehmet R Dogar and Siddhartha S Srinivasa, "Push-grasping with dexterous hands: Mechanics and a method", 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2123-2130, 2010.
14.
Mehmet R. Dogar and Siddhartha S. Srinivasa, "Push-grasping with dexterous hands: Mechanics and a method", 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2123-2130, 2010.
15.
Yan Duan, Xi Chen, Rein Houthooft, John Schulman and Pieter Abbeel, "Benchmarking deep reinforcement learning for continuous control", International Conference on machine learning, pp. 1329-1338, 2016.
16.
Hongjie Fang, Hao-Shu Fang, Sheng Xu and Cewu Lu, "Transcg: A large-scale real-world dataset for transparent object depth completion and a grasping baseline", IEEE Robotics and Automation Letters, pp. 1-8, 2022.
17.
Hao-Shu Fang, Chenxi Wang, Minghao Gou and Cewu Lu, "Graspnet-1billion: A large-scale benchmark for general object grasping", Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11444-11453, 2020.
18.
Justin Fu, Katie Luo and Sergey Levine, "Learning robust rewards with adversarial inverse reinforcement learning", 2017.
19.
Haoran Geng, Ziming Li, Yiran Geng, Jiayi Chen, Hao Dong and He Wang, "Partmanip: Learning cross-category generalizable part manipulation policy from point cloud observations", 2023.
20.
Haoran Geng, Helin Xu, Chengyang Zhao, Chao Xu, Li Yi, Siyuan Huang, et al., "Gapartnet: Cross-category domain-generalizable object perception and manipulation via generalizable and actionable parts", 2022.
21.
Yiran Geng, Boshi An, Haoran Geng, Yuanpei Chen, Yaodong Yang and Hao Dong, "End-to-end affordance learning for robotic manipulation", 2022.
22.
Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar and Sergey Levine, "Divide-and-conquer reinforcement learning", 2017.
23.
Minghao Gou, Hao-Shu Fang, Zhanda Zhu, Sheng Xu, Chenxi Wang and Cewu Lu, "Rgb matters: Learning 7-dof grasp poses on monocular rgbd images", 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 13459-13466, 2021.
24.
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel and Sergey Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", International Conference on machine learning, pp. 1861-1870, 2018.
25.
Jonathan Ho and Stefano Ermon, "Generative adversarial imitation learning", Advances in neural information processing systems, vol. 29, 2016.
26.
Wenlong Huang, Igor Mordatch, Pieter Abbeel and Deepak Pathak, "Generalization in dexterous manipulation via geometry-aware multi-task learning", 2021.
27.
Stephen James, Zicong Ma, David Rovick Arrojo and Andrew J Davison, "Rlbench: The robot learning benchmark & learning environment", IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3019-3026, 2020.
28.
Zhiwei Jia, Xuanlin Li, Zhan Ling, Shuang Liu, Yiran Wu and Hao Su, "Improving policy optimization with generalist-specialist learning", International Conference on Machine Learning, pp. 10104-10119, 2022.
29.
Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke et al., "Scalable deep reinforcement learning for vision-based robotic manipulation", Conference on Robot Learning, pp. 651-673, 2018.
30.
Michael Kelly, Chelsea Sidrane, Katherine Driggs-Campbell and Mykel J Kochenderfer, "Hg-dagger: Interactive imitation learning with human experts", 2019 International Conference on Robotics and Automation (ICRA), pp. 8077-8083, 2019.
