Title: Automatic Methods for Segmenting and Summarizing Videos Taken with Google Glasses
Authors: Chiu, Yen-Chia (邱彥嘉)
Wang, Tsai-Pei (王才沛)
Department: Institute of Multimedia Engineering
Keywords: Google Glass; Smart Glasses; Video Abstraction; Video Segmentation; Video Summarization
Date of Issue: 2016
Abstract: This thesis discusses the automatic segmentation and summarization of videos taken with Google Glasses. Using information from both the video images and the additional sensor data recorded concurrently, we devise methods that automatically divide a video into coherent segments and estimate the importance of each extracted segment; this information then enables automatic generation of video summaries. The features used fall into four groups: colors, image details, motions, and speech. We train multi-layer perceptrons for the two tasks (segmentation and importance estimation) from expert annotations. Finally, we present a systematic evaluation procedure that compares the automatic segmentation and importance-estimation results with those given by multiple users, demonstrating the effectiveness of our approach.
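The abstract describes feeding four feature groups (color, detail, motion, speech) into multi-layer perceptrons that score segment importance. A minimal sketch of such a forward pass is shown below; the layer sizes, weights, and four-element feature layout are illustrative assumptions for this catalog entry, not the thesis's actual trained model.

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh, sigmoid output in (0, 1)."""
    # hidden layer: one tanh unit per row of w1
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    # output: squash to an importance score between 0 and 1
    z = sum(wi * hi for wi, hi in zip(w2, h)) + b2
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical per-segment features: [color, detail, motion, speech]
features = [0.8, 0.3, 0.6, 0.9]
# hand-set illustrative weights (the thesis learns these from annotations)
w1 = [[0.5, -0.2, 0.3, 0.7], [-0.4, 0.6, 0.1, 0.2]]
b1 = [0.1, -0.1]
w2 = [0.9, -0.3]
b2 = 0.0
score = mlp_forward(features, w1, b1, w2, b2)  # importance in (0, 1)
```

Segments scored this way can then be ranked, with the top-scoring ones concatenated into a summary.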
URI: http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070356625
http://hdl.handle.net/11536/139454
Appears in Collections: Thesis