Visual Completion Of 3D Object Shapes From A Single View For Robotic Tasks


Abstract:

The goal of this paper is to predict 3D object shape in order to improve the visual perception of robots in grasping and manipulation tasks. Planning an image-based robotic manipulation task depends on recognizing the object's shape, yet manipulator robots usually carry a camera in an eye-in-hand configuration, which restricts grasp computation to the visible part of the object. In this paper, we present a 3D deep convolutional neural network that predicts the hidden parts of objects from a single view and thereby recovers their complete shape. We have tested our proposal with both previously seen objects and novel objects from a well-known dataset.
Date of Conference: 06-08 December 2019
Date Added to IEEE Xplore: 20 January 2020
Conference Location: Dali, China

I. INTRODUCTION

Knowing the complete 3D geometry of an object is indispensable for physical interaction between robots and the outside world, in tasks such as object recognition, grasping, and manipulation. In this work, we aim to tackle the problem of occlusion in grasping and manipulation tasks by predicting the complete 3D shape from a single 2.5D depth view. If the full shape of the object were known, a robot could decide which actions to consider, such as planning a path or generating stable grasps. To this end, we designed and trained a 3D convolutional neural network to perform the shape reconstruction. This is a very challenging task, because many different 3D models are consistent with the same single view; our solution must therefore be able to generalize.
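The paper does not reproduce its preprocessing code here, but the first step of such a pipeline, turning a single 2.5D depth view into a partial voxel occupancy grid that a 3D CNN can consume, can be sketched as follows. The grid resolution, depth range, and function name are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def depth_to_voxels(depth, grid=32, near=0.2, far=1.0):
    """Convert a single 2.5D depth map (H x W, in metres) into a partial
    binary occupancy grid. Pixels outside (near, far) are treated as
    invalid background. Resolution and range are assumed values."""
    h, w = depth.shape
    vox = np.zeros((grid, grid, grid), dtype=np.uint8)
    valid = (depth > near) & (depth < far)
    ys, xs = np.nonzero(valid)
    # Map image coordinates and metric depth into [0, grid) voxel indices.
    vx = (xs / w * grid).astype(int)
    vy = (ys / h * grid).astype(int)
    vz = ((depth[ys, xs] - near) / (far - near) * grid).astype(int)
    vox[np.clip(vx, 0, grid - 1),
        np.clip(vy, 0, grid - 1),
        np.clip(vz, 0, grid - 1)] = 1
    return vox

# Toy depth image: a flat 32x32-pixel patch 0.5 m away; the rest is invalid.
depth = np.zeros((64, 64), dtype=np.float32)
depth[16:48, 16:48] = 0.5
partial = depth_to_voxels(depth)
print(partial.shape)  # (32, 32, 32)
```

The resulting grid encodes only the surface visible from the camera; the completion network's job is to output a full occupancy grid from this partial one.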

Cites in Papers - IEEE (2)

1. Mohamed Tahoun, Omar Tahri, Juan Antonio Corrales Ramón, Youcef Mezouar, "Visual-Tactile Fusion for 3D Objects Reconstruction from a Single Depth View and a Single Gripper Touch for Robotics Tasks", 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6786-6793, 2021.
2. Liang Pan, "ECG: Edge-aware Point Cloud Completion with Graph Convolution", IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 4392-4398, 2020.
