Full metadata record
DC Field | Value | Language
dc.contributor.author | 邱彥嘉 | zh_TW
dc.contributor.author | 王才沛 | zh_TW
dc.contributor.author | Chiu, Yen-Chia | en_US
dc.contributor.author | Wang, Tsai-Pei | en_US
dc.date.accessioned | 2018-01-24T07:38:02Z | -
dc.date.available | 2018-01-24T07:38:02Z | -
dc.date.issued | 2016 | en_US
dc.identifier.uri | http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070356625 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/139454 | -
dc.description.abstract | 本論文的題目是自動分段和精簡使用Google Glass拍出來的影片,主要是使用Google Glass拍攝的影片和紀錄的感測器資訊,在事後將影片分割成幾個片段,並且評估每一段影片的重要性,之後使用者可以利用這些資訊做影片精簡。演算法使用的特徵有顏色特徵、細節特徵、移動特徵和對話特徵四大類,之後透過兩個多層的Neural Network分別做影片分段和重要性評估的訓練和辨識,最後我們使用一個有系統的評估方式,將本論文自動產生的結果對比多位使用者手動標記的結果,呈現本方法實作效果的好壞。 | zh_TW
dc.description.abstract | This thesis addresses the automatic segmentation and summarization of videos taken with Google Glass. Using information from both the video images and the sensor data recorded concurrently, we devise methods that automatically divide a video into coherent segments and estimate the importance of each segment. This information then enables the automatic generation of a video summary. The features used fall into four categories: color, image detail, motion, and speech. We train two multi-layer perceptrons, one for segmentation and one for importance estimation, on expert annotations. Finally, we present a systematic evaluation procedure that compares the automatically generated results against manual annotations from multiple users, demonstrating the effectiveness of our approach. | en_US
dc.language.iso | zh_TW | en_US
dc.subject | Google Glass | zh_TW
dc.subject | Smart Glasses | zh_TW
dc.subject | 影片精簡 | zh_TW
dc.subject | 影片分段 | zh_TW
dc.subject | Google Glasses | en_US
dc.subject | Smart Glasses | en_US
dc.subject | Video Abstraction | en_US
dc.subject | Video Segmentation | en_US
dc.subject | Video Summarization | en_US
dc.title | 基於Google Glass之影片自動分段與精簡方法 | zh_TW
dc.title | Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses | en_US
dc.type | Thesis | en_US
dc.contributor.department | 多媒體工程研究所 | zh_TW
Appears in Collections: Thesis
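The abstract above outlines a two-stage pipeline: per-frame features (color, image detail, motion, speech) feed one multi-layer perceptron that detects segment boundaries, and pooled per-segment features feed a second one that estimates segment importance. The sketch below only illustrates that overall structure; it is not the thesis's implementation. The feature dimensions, the synthetic stand-in data, and the use of scikit-learn's MLP models are all illustrative assumptions.

```python
# Minimal sketch of the two-MLP pipeline described in the abstract.
# All data here is random stand-in data; a real system would extract
# color/detail/motion/speech features from the video and sensor logs,
# and use human annotations as training labels.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
frame_features = rng.random((1000, 16))     # 1000 frames, assumed 16-dim features
boundary_labels = rng.integers(0, 2, 1000)  # 1 = annotated segment boundary

# Stage 1: classify whether each frame starts a new segment.
segmenter = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
segmenter.fit(frame_features, boundary_labels)

# Group frames into segments at the predicted boundaries,
# then mean-pool the frame features within each segment.
is_boundary = segmenter.predict(frame_features).astype(bool)
is_boundary[0] = True  # the first frame always opens a segment
segment_ids = np.cumsum(is_boundary) - 1
n_segments = int(segment_ids.max()) + 1
segment_features = np.vstack(
    [frame_features[segment_ids == s].mean(axis=0) for s in range(n_segments)]
)

# Stage 2: regress an importance score per segment (annotated in [0, 1]).
importance_labels = rng.random(n_segments)  # stand-in for user/expert ratings
scorer = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
scorer.fit(segment_features, importance_labels)

# A summary keeps the highest-scoring segments (top 20% here, arbitrarily).
scores = scorer.predict(segment_features)
summary = np.argsort(scores)[::-1][: max(1, n_segments // 5)]
print("segments kept in summary:", sorted(summary.tolist()))
```

The top-k selection at the end is one simple summarization rule; the thesis leaves the final cut to the user, who can combine the predicted boundaries and importance scores however they prefer.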