Full metadata record
DC Field | Value | Language
dc.contributor.author | Chang, Rong-Jie | en_US
dc.contributor.author | Chang, Chin-Chen | en_US
dc.contributor.author | Way, Der-Lor | en_US
dc.contributor.author | Shih, Zen-Chung | en_US
dc.date.accessioned | 2018-08-21T05:56:26Z | -
dc.date.available | 2018-08-21T05:56:26Z | -
dc.date.issued | 2018-01-01 | en_US
dc.identifier.issn | 2306-2274 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/146200 | -
dc.description.abstract | In this paper, we present an improved approach to transfer style for videos based on semantic segmentation. We segment foreground objects and background, and then apply different styles respectively. A fully convolutional neural network is used to perform semantic segmentation. We increase the reliability of the segmentation, and use the information of segmentation and the relationship between foreground objects and background to improve segmentation iteratively. We also use segmentation to improve optical flow, and apply different motion estimation methods between foreground objects and background. This improves the motion boundaries of optical flow, and solves the problems of incorrect and discontinuous segmentation caused by occlusion and shape deformation. | en_US
dc.language.iso | en_US | en_US
dc.subject | Semantic segmentation | en_US
dc.subject | Motion estimation | en_US
dc.subject | Neural network | en_US
dc.subject | Style transfer | en_US
dc.title | An Improved Style Transfer Approach for Videos | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2018 INTERNATIONAL WORKSHOP ON ADVANCED IMAGE TECHNOLOGY (IWAIT) | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | 多媒體工程研究所 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.contributor.department | Institute of Multimedia Engineering | en_US
dc.identifier.wosnumber | WOS:000434996800121 | en_US
Appears in Collections: Conference Papers
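The abstract above describes applying different styles to the segmented foreground objects and the background and then compositing the results. As an illustration only, and not the authors' implementation, the following minimal Python sketch blends two pre-stylized renderings of a frame using a binary segmentation mask; the function composite_styles, its parameters, and the simple box-filter feathering are hypothetical and assume only NumPy.

```python
# Illustrative sketch: region-wise composition of two stylized versions of a
# frame, guided by a foreground/background segmentation mask.
import numpy as np

def composite_styles(fg_stylized: np.ndarray,
                     bg_stylized: np.ndarray,
                     fg_mask: np.ndarray,
                     feather: int = 0) -> np.ndarray:
    """Blend two stylized renderings of the same frame using a segmentation mask.

    fg_stylized, bg_stylized: H x W x 3 float arrays in [0, 1].
    fg_mask: H x W array, 1 where the pixel belongs to a foreground object.
    feather: optional half-width (in pixels) of a soft transition band,
             implemented here with a crude separable box blur of the mask.
    """
    mask = fg_mask.astype(np.float64)
    if feather > 0:
        k = 2 * feather + 1
        kernel = np.ones(k) / k
        for axis in (0, 1):
            # Blur the mask along one axis at a time to soften the boundary.
            mask = np.apply_along_axis(
                lambda m: np.convolve(m, kernel, mode="same"), axis, mask)
    mask = mask[..., None]  # broadcast the mask over the colour channels
    return mask * fg_stylized + (1.0 - mask) * bg_stylized

# Toy usage with random data standing in for real stylized frames and a mask.
h, w = 64, 64
fg = np.random.rand(h, w, 3)   # frame stylized with the foreground style
bg = np.random.rand(h, w, 3)   # frame stylized with the background style
mask = np.zeros((h, w))
mask[16:48, 16:48] = 1.0       # pretend segmentation result
out = composite_styles(fg, bg, mask, feather=2)
print(out.shape)  # (64, 64, 3)
```

The paper's fully convolutional segmentation network, iterative mask refinement, and segmentation-aware optical flow are not modeled here; the sketch only shows the final per-region composition step under the stated assumptions.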