Full metadata record
DC Field | Value | Language
dc.contributor.author | Lo, Yuan-Mau | en_US
dc.contributor.author | Chang, Chin-Chen | en_US
dc.contributor.author | Way, Der-Lor | en_US
dc.contributor.author | Shih, Zen-Chung | en_US
dc.date.accessioned | 2020-07-01T05:22:09Z | -
dc.date.available | 2020-07-01T05:22:09Z | -
dc.date.issued | 2020-05-01 | en_US
dc.identifier.uri | http://dx.doi.org/10.3390/app10093101 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/154572 | -
dc.description.abstract | The conventional warping method only considers translations of pixels to generate stereo images. In this paper, we propose a model that can generate stereo images from a single image, considering both translation as well as rotation of objects in the image. We modified the appearance flow network to make it more general and suitable for our model. We also used a reference image to improve the inpainting method. The quality of images resulting from our model is better than that of images generated using conventional warping. Our model also better retained the structure of objects in the input image. In addition, our model does not limit the size of the input image. Most importantly, because our model considers the rotation of objects, the resulting images appear more stereoscopic when viewed with a device. | en_US
dc.language.iso | en_US | en_US
dc.subject | stereo images | en_US
dc.subject | view synthesis | en_US
dc.subject | neural network | en_US
dc.subject | semantic segmentation | en_US
dc.subject | depth estimation | en_US
dc.title | Generation of Stereo Images Based on a View Synthesis Network | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.3390/app10093101 | en_US
dc.identifier.journal | APPLIED SCIENCES-BASEL | en_US
dc.citation.volume | 10 | en_US
dc.citation.issue | 9 | en_US
dc.citation.spage | 0 | en_US
dc.citation.epage | 0 | en_US
dc.contributor.department | 多媒體工程研究所 | zh_TW
dc.contributor.department | Institute of Multimedia Engineering | en_US
dc.identifier.wosnumber | WOS:000535541900114 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Journal Articles
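The abstract contrasts the proposed network with conventional warping, which only translates pixels horizontally according to estimated depth. As background, the following is a minimal sketch of that baseline idea (disparity-based forward warping), not the paper's implementation; the function name `warp_right_view`, the disparity scaling factor, and the random test data are illustrative assumptions.

```python
import numpy as np

def warp_right_view(left_image, depth, baseline_scale=8.0):
    """Hypothetical baseline: synthesize a right view by translating each
    pixel horizontally by a disparity inversely proportional to its depth.
    Only translation is modeled (no object rotation), which is the
    conventional warping the abstract contrasts against."""
    h, w = depth.shape
    # Disparity in pixels; the scale factor is an arbitrary illustrative choice.
    disparity = baseline_scale / np.maximum(depth, 1e-6)

    right = np.zeros_like(left_image)
    filled = np.zeros((h, w), dtype=bool)

    ys, xs = np.mgrid[0:h, 0:w]
    target_x = np.round(xs - disparity).astype(int)  # shift left for a right-eye view
    valid = (target_x >= 0) & (target_x < w)

    # Forward-splat pixels; later writes overwrite earlier ones (no z-buffering),
    # and unfilled pixels remain holes that a real pipeline would inpaint.
    right[ys[valid], target_x[valid]] = left_image[ys[valid], xs[valid]]
    filled[ys[valid], target_x[valid]] = True
    return right, filled

# Example usage with placeholder data.
left = np.random.rand(480, 640, 3).astype(np.float32)
depth = np.random.uniform(1.0, 10.0, size=(480, 640)).astype(np.float32)
right, mask = warp_right_view(left, depth)
```

The holes left in `right` (where `mask` is False) are exactly the disocclusions that the paper addresses with a reference-image-guided inpainting step.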