Full metadata record
DC Field | Value | Language
dc.contributor.author | Chiu, Yen-Chia | en_US
dc.contributor.author | Liu, Li-Yi | en_US
dc.contributor.author | Wang, Tsaipei | en_US
dc.date.accessioned | 2018-08-21T05:53:42Z | -
dc.date.available | 2018-08-21T05:53:42Z | -
dc.date.issued | 2018-05-01 | en_US
dc.identifier.issn | 1380-7501 | en_US
dc.identifier.uri | http://dx.doi.org/10.1007/s11042-017-4910-8 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/145037 | -
dc.description.abstract | This paper discusses the topic of automatic segmentation and extraction of important segments of videos taken with Google Glass. Using the information from both the video images and additional sensor data that are recorded concurrently, we devise methods that automatically divide the video into coherent segments and estimate the importance of each segment. Such information then enables automatic generation of a video summary that contains only the important segments. The features used include colors, image details, motions, and speeches. We then train multi-layer perceptrons for the two tasks (segmentation and importance estimation) according to human annotations. We also present a systematic evaluation procedure that compares the automatic segmentation and importance estimation results with those given by multiple users and demonstrate the effectiveness of our approach. | en_US
dc.language.iso | en_US | en_US
dc.subject | Google Glass | en_US
dc.subject | Smart glasses | en_US
dc.subject | Egocentric video | en_US
dc.subject | Video abstraction | en_US
dc.subject | Video segmentation | en_US
dc.subject | Video summarization | en_US
dc.subject | Video diary | en_US
dc.title | Automatic segmentation and summarization for videos taken with smart glasses | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1007/s11042-017-4910-8 | en_US
dc.identifier.journal | MULTIMEDIA TOOLS AND APPLICATIONS | en_US
dc.citation.volume | 77 | en_US
dc.citation.spage | 12679 | en_US
dc.citation.epage | 12699 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000433202100046 | en_US
Appears in Collections: Journal Articles
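The abstract above describes a two-stage pipeline: per-segment features (colors, image details, motion, speech) are fed to multi-layer perceptrons trained on human annotations, and the predicted importance scores drive summary generation. The following is a minimal sketch of that idea, not the authors' implementation: the feature dimensions, network sizes, and the top-20% selection threshold are all hypothetical, and scikit-learn's MLPRegressor stands in for whatever MLP the paper uses.

```python
# Sketch of MLP-based segment importance estimation and summary selection.
# All data here is synthetic; feature layout and thresholds are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: one 16-dim feature vector per video segment
# (e.g., color statistics, edge density, motion magnitude, speech ratio),
# paired with human-annotated importance scores in [0, 1].
X_train = rng.random((200, 16))
y_train = rng.random(200)

# Train the importance estimator (the abstract's second task).
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)

# Score the segments of a new video and keep the top 20% as the summary.
X_video = rng.random((25, 16))  # 25 segments from one video
scores = mlp.predict(X_video)
keep = np.argsort(scores)[::-1][: max(1, len(scores) // 5)]
print("segments selected for the summary:", sorted(keep.tolist()))
```

In the paper's setting the segment boundaries themselves come from a separate MLP trained on the same kinds of features; the sketch assumes segmentation has already been done and only illustrates the importance-scoring and selection step.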