Full metadata record
DC Field | Value | Language
dc.contributor.author | Su, Yung-Shan | en_US
dc.contributor.author | Lu, Shao-Huang | en_US
dc.contributor.author | Ser, Po-Sheng | en_US
dc.contributor.author | Hsu, Wei-Ting | en_US
dc.contributor.author | Lai, Wei-Cheng | en_US
dc.contributor.author | Xie, Biao | en_US
dc.contributor.author | Huang, Hong-Ming | en_US
dc.contributor.author | Lee, Teng-Yok | en_US
dc.contributor.author | Chen, Hung-Wen | en_US
dc.contributor.author | Yu, Lap-Fai | en_US
dc.contributor.author | Wang, Hsueh-Cheng | en_US
dc.date.accessioned | 2020-10-05T02:01:29Z | -
dc.date.available | 2020-10-05T02:01:29Z | -
dc.date.issued | 2019-01-01 | en_US
dc.identifier.isbn | 978-1-7281-4004-9 | en_US
dc.identifier.issn | 2153-0858 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/155264 | -
dc.description.abstract | The Amazon Picking Challenge and the Amazon Robotics Challenge have shown significant progress in object picking from a cluttered scene, yet object placement remains challenging. It is useful to have pose-aware placement based on human- and machine-readable cues on an object; for example, the brandname of an object placed on a shelf should face human customers. The robotic vision challenges in the object placement task are: a) the semantics and geometry of the object to be placed must be analyzed jointly; and b) occlusions among objects in a cluttered scene can hinder proper understanding and manipulation. To overcome these challenges, we develop a pose-aware placement approach that spots the semantic labels (e.g., brandnames) of objects in a cluttered tote and then carries out a sequence of actions to place the objects on a shelf or a conveyor with desired poses. Our major contributions include 1) providing an open benchmark dataset of objects and brandnames with multi-view segmentation for training and evaluation; 2) carrying out comprehensive evaluations of our brandname-based fully convolutional network (FCN), which predicts the affordance and grasp for pose-aware placement and whose success rates decrease as clutter increases; 3) showing that active manipulation with two cooperative manipulators and grippers can effectively handle the occlusion of brandnames. We analyze the success rates and discuss the failure cases to provide insights for future applications. | en_US
dc.language.iso | en_US | en_US
dc.title | Pose-Aware Placement of Objects with Semantic Labels - Brandname-based Affordance Prediction and Cooperative Dual-Arm Active Manipulation | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | en_US
dc.citation.spage | 4760 | en_US
dc.citation.epage | 4767 | en_US
dc.contributor.department | 電機工程學系 | zh_TW
dc.contributor.department | Department of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000544658403126 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
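
As an illustration of the label-aware grasp selection described in the abstract above, the following is a minimal, hypothetical sketch, not the authors' code: it scores candidate grasp pixels by how much of a segmented brandname stays visible around them, and flags a heavily occluded label for a re-orientation step. The function names, window size, and occlusion threshold are assumptions for illustration; only NumPy is required. In the paper, the affordance and grasp come from the brandname-based FCN and occluded labels are handled by the cooperative dual-arm manipulation; this sketch only mirrors that decision logic on a binary mask.

# Hypothetical sketch (not the authors' released code): pick a grasp pixel
# that keeps a segmented brandname visible, or flag the object for
# re-orientation when the label is mostly occluded.
import numpy as np

def affordance_from_mask(brand_mask: np.ndarray, window: int = 15) -> np.ndarray:
    """Score each pixel by how much visible brandname surrounds it.

    brand_mask : HxW boolean array, True where a brandname is segmented
                 (e.g., an FCN output thresholded per class).
    Returns an HxW float map; higher means a grasp centred there keeps
    more of the label visible.
    """
    h, w = brand_mask.shape
    # Integral image gives O(1) window sums.
    integral = np.pad(brand_mask.astype(np.float64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    scores = np.zeros((h, w))
    r = window // 2
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            area = (y1 - y0) * (x1 - x0)
            s = (integral[y1, x1] - integral[y0, x1]
                 - integral[y1, x0] + integral[y0, x0])
            scores[y, x] = s / area
    return scores

def pick_grasp(brand_mask: np.ndarray, occlusion_thresh: float = 0.05):
    """Return (grasp_pixel, needs_reorientation) for a single object.

    If too little of the brandname is visible, the object is flagged for an
    extra re-orientation step before placement.
    """
    if brand_mask.mean() < occlusion_thresh:
        return None, True  # label occluded: re-orient before placing
    scores = affordance_from_mask(brand_mask)
    y, x = np.unravel_index(np.argmax(scores), scores.shape)
    return (int(y), int(x)), False

if __name__ == "__main__":
    # Toy example: a 64x64 image with a 20x12 "brandname" patch.
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:40, 30:42] = True
    grasp, reorient = pick_grasp(mask)
    print("grasp pixel:", grasp, "needs re-orientation:", reorient)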