Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 林峪鋒 | en_US |
dc.contributor.author | Lin, Yu-Feng | en_US |
dc.contributor.author | 林進燈 | en_US |
dc.contributor.author | Lin, Chin-Teng | en_US |
dc.date.accessioned | 2014-12-12T02:38:20Z | - |
dc.date.available | 2014-12-12T02:38:20Z | - |
dc.date.issued | 2012 | en_US |
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT070060036 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/73604 | - |
dc.description.abstract | 本篇論文中,我們提出了一個實用,可靠度高且通用的人體動作辨識系統。首先在各種動作的特徵分析上,我們從每段動作內的不同時間範圍找出我們感興趣點並加以累積,描述出我們感興趣點的集合。做不同動作時,這些集合在不同時間和空間中會有不同的分布,使其具有高度辨識性。接續運用這些集合組合出本篇的強健特徵來代表人體不同的動作,因為這些集合具有高識別度的特性,以及組合出的特徵在多重視角下做同一個動作時會有相似的值,讓我們可以運用在多視角人體動作辨識上。在實際應用上,我們提出的方法能夠自動在一段未知的影片中,根據已學習的多項指定動作,自動分析並判斷該動作在影片中的開始與結束時間,而不必透過人為的方式,手動標定欲測動作的開始與結束,進而達到完全自動化的功能。在實驗中,我們用KTH, WEIZMANN 以及MuHAVi 這三種公開的人體動作資料庫來證明我們提出的方法比大部分現有的方法的辨識效果還要好。並且用這三種資料庫做跨資料庫的實驗,實驗結果顯示本篇所提出的系統在不同的場景做同個動作時,還是具備不受場景影響的特性,此點更證明了我們提出的系統可以在多重角度以及多重場景下達到動作辨識。 | zh_TW |
dc.description.abstract | In this paper, we propose a practical, reliable, and generic system for video-based human action recognition. To describe actions performed from different viewpoints, we use view-invariant features to address multi-view action recognition. These features are obtained by extracting holistic descriptors from clouds of interest points accumulated over different temporal scales, which explicitly model the global spatial and temporal distribution of the interest points. The resulting features are highly discriminative and robust for recognizing actions under view changes. For practical application, we propose a mechanism that observes the actions a person performs in an image sequence and separates them according to the training data; with this scheme, the beginning and end of each action sequence are labeled automatically, without manual annotation. Experiments on the KTH, WEIZMANN, and MuHAVi datasets demonstrate that our approach outperforms most existing methods. In addition, cross-dataset experiments, in which training and testing use different datasets, show that our system performs well without retraining when the scenario changes; the trained database is applicable to a variety of environments. | en_US |
dc.language.iso | zh_TW | en_US |
dc.subject | 動作辨識 | zh_TW |
dc.subject | ACTION | en_US |
dc.subject | RECOGNITION | en_US |
dc.title | 強健特徵之自動化多視角動作辨識 | zh_TW |
dc.title | Automatic Multi-View Action Recognition with Robust Features | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | 電控工程研究所 | zh_TW |
Appears in Collections: | Thesis |