Dataset Information

Learning Suction Graspability Considering Grasp Quality and Robot Reachability for Bin-Picking.


ABSTRACT: Deep learning has been widely used for inferring robust grasps. Although human-labeled RGB-D datasets were initially used to learn grasp configurations, preparing such large datasets is expensive. To address this problem, images were generated by a physical simulator, and a physically inspired model (e.g., a contact model between a suction vacuum cup and an object) was used as a grasp quality evaluation metric to annotate the synthesized images. However, this kind of contact model is complicated and requires parameter identification through experiments to ensure real-world performance. In addition, previous studies have not considered manipulator reachability, such as when a grasp configuration with high grasp quality cannot be reached due to collisions or the physical limitations of the robot. In this study, we propose an intuitive, geometric analytic-based grasp quality evaluation metric and further incorporate a reachability evaluation metric. We annotate pixel-wise grasp quality and reachability with the proposed metrics on images synthesized in a simulator to train an auto-encoder-decoder called suction graspability U-Net++ (SG-U-Net++). Experimental results show that our intuitive grasp quality evaluation metric is competitive with a physically inspired metric. Learning reachability helps reduce motion-planning computation time by removing obviously unreachable candidates. The system achieves an overall picking speed of 560 PPH (pieces per hour).
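
The abstract describes training an encoder-decoder that predicts pixel-wise grasp quality and reachability maps from simulated images. The record does not include architectural details, so the snippet below is only a minimal PyTorch sketch of that idea: a simplified U-Net-style network (not the paper's SG-U-Net++, whose nested skip connections are omitted) with two output heads, one heatmap per metric. The class name, layer sizes, and the final candidate-selection step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's SG-U-Net++): an encoder-decoder that maps a depth
# image to two pixel-wise heatmaps, one for suction grasp quality and one for robot
# reachability. PyTorch and all layer sizes are assumptions made for illustration.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic building block of U-Net-style models.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class SuctionGraspabilityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)          # depth image -> 32 feature maps
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)        # input is upsampled features + skip
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        # Two heads: pixel-wise grasp quality and pixel-wise reachability.
        self.quality_head = nn.Conv2d(32, 1, kernel_size=1)
        self.reachability_head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, depth):
        e1 = self.enc1(depth)                  # (B, 32, H, W)
        e2 = self.enc2(self.pool(e1))          # (B, 64, H/2, W/2)
        b = self.bottleneck(self.pool(e2))     # (B, 128, H/4, W/4)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        quality = torch.sigmoid(self.quality_head(d1))        # values in [0, 1]
        reachability = torch.sigmoid(self.reachability_head(d1))
        return quality, reachability

if __name__ == "__main__":
    net = SuctionGraspabilityNet()
    depth = torch.rand(1, 1, 128, 128)         # stand-in for a simulated depth image
    q, r = net(depth)
    # One simple way to combine the two maps: keep only candidates that are both
    # high quality and reachable, then pick the best pixel as the suction point.
    best = (q * r).flatten(1).argmax(dim=1)
    print(q.shape, r.shape, best)
```

In this sketch the reachability head serves the role the abstract describes: multiplying the two maps discards candidates that are obviously unreachable before any motion planning is attempted.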

SUBMITTER: Jiang P 

PROVIDER: S-EPMC8987443 | biostudies-literature | 2022

REPOSITORIES: biostudies-literature

Publications

Learning Suction Graspability Considering Grasp Quality and Robot Reachability for Bin-Picking.

Jiang Ping, Oaki Junji, Ishihara Yoshiyuki, Ooga Junichiro, Han Haifeng, Sugahara Atsushi, Tokura Seiji, Eto Haruna, Komoda Kazuma, Ogawa Akihito

Frontiers in Neurorobotics, 2022-03-24

Similar Datasets

| S-EPMC7038393 | biostudies-literature
| S-EPMC10819387 | biostudies-literature
| S-EPMC7806048 | biostudies-literature
| S-EPMC6111311 | biostudies-literature
| S-EPMC7670580 | biostudies-literature
| S-EPMC6690730 | biostudies-literature
| S-EPMC9237678 | biostudies-literature
| S-EPMC9834356 | biostudies-literature
| S-EPMC4143804 | biostudies-literature
| S-EPMC8940679 | biostudies-literature