Full metadata record
DC Field | Value | Language
dc.contributor.author | 陳冠樺 | en_US
dc.contributor.author | Chen, Kuan-Hua | en_US
dc.contributor.author | 林奕成 | en_US
dc.contributor.author | Lin, I-Chen | en_US
dc.date.accessioned | 2014-12-12T02:44:39Z | -
dc.date.available | 2014-12-12T02:44:39Z | -
dc.date.issued | 2014 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT070156607 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/76018 | -
dc.description.abstract | 即時且準確地估計人體姿勢是一個相當具挑戰性的問題。近年來最尖端的研究使用隨機森林訓練方法,即時且準確地辨識出人體各個部位的資訊。然而,要使用此種訓練方法得到好的辨識結果,需要相當大量的訓練資料,而獲得大量標示好的人體訓練資料是一件相當昂貴且難以達成的工作。在隨機森林訓練方法中,訓練資料的缺少往往會導致不準確的結果。為了解決這個問題,我們提出兩階段訓練模型的方法,即時地估計人體上半身姿勢。在我們的實驗中,兩階段訓練方法可以在少量有限的資料下得到令人滿意的辨識結果;之後再利用這些辨識資訊形成各個部位的機率分布圖來估計人體骨架,最後以隨機抽樣一致(RANSAC)最佳化,即時地得到上半身人體姿勢。 | zh_TW
dc.description.abstract | Estimating upper-body human poses accurately and in real time is a challenging problem. Recent state-of-the-art work adopts a randomized forest training method to recognize human body parts accurately in real time. However, this approach needs an enormous amount of training data to obtain favorable results, and training images with high-quality ground-truth labels are very expensive to acquire. With a randomized forest, a lack of training data leads to inaccurate results. To solve this problem, we propose a two-stage training model to estimate human upper-body poses in real time. In our experiments, the method obtains satisfactory recognition results from a small, limited amount of training data while remaining accurate in real time. We then use the recognized part information to form per-part probability maps and use these maps to formulate our estimation function. Finally, we apply RANSAC to optimize this function and acquire the final upper-body pose estimate. | en_US
dc.language.iso | en_US | en_US
dc.subject | 估計人體半身姿勢 | zh_TW
dc.subject | 隨機森林 | zh_TW
dc.subject | 兩階段訓練模型 | zh_TW
dc.subject | Estimating human upper body pose | en_US
dc.subject | Randomized forest | en_US
dc.subject | Two-stage training model | en_US
dc.title | 從單一深度攝影機即時估計上半身位置 | zh_TW
dc.title | Real-time upper body pose estimation from a single depth camera | en_US
dc.type | Thesis | en_US
dc.contributor.department | 多媒體工程研究所 | zh_TW
Appears in Collections: Thesis
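
The abstract above outlines a three-step pipeline: per-pixel body-part recognition with a randomized forest, per-part probability maps, and a RANSAC optimization of the final upper-body pose. As a rough illustration of the last two steps for a single part, here is a minimal sketch in Python. It is not the thesis implementation: the helper names (part_center_proposals, ransac_center), the synthetic probability map, and all parameter values are assumptions made for this example.

```python
# Minimal sketch (assumed, not the thesis code): turn one part's
# probability map into candidate pixel positions, then use RANSAC-style
# consensus to pick a part center that is robust to outlier pixels.
import numpy as np

def part_center_proposals(prob_map, n_samples=300, rng=None):
    """Sample candidate (x, y) positions for one body part,
    weighted by the per-pixel part probability."""
    if rng is None:
        rng = np.random.default_rng(0)
    p = prob_map.ravel()
    p = p / p.sum()
    idx = rng.choice(p.size, size=n_samples, p=p)
    ys, xs = np.unravel_index(idx, prob_map.shape)
    return np.stack([xs, ys], axis=1).astype(float)

def ransac_center(points, inlier_radius=5.0, n_iters=100, rng=None):
    """RANSAC-style estimate: repeatedly pick a random candidate as a
    hypothesis, count inliers within a radius, and refine the winning
    hypothesis as the mean of its inliers."""
    if rng is None:
        rng = np.random.default_rng(1)
    best_inliers = None
    for _ in range(n_iters):
        hypothesis = points[rng.integers(len(points))]
        dists = np.linalg.norm(points - hypothesis, axis=1)
        inliers = points[dists < inlier_radius]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers.mean(axis=0)

if __name__ == "__main__":
    # Synthetic 64x64 probability map: a Gaussian blob near (40, 30)
    # plus a uniform floor standing in for misclassified pixels.
    ys, xs = np.mgrid[0:64, 0:64]
    prob = np.exp(-((xs - 40.0) ** 2 + (ys - 30.0) ** 2) / 50.0) + 0.01
    pts = part_center_proposals(prob)
    print("estimated part center:", ransac_center(pts))  # close to (40, 30)
```

In the full pipeline described by the abstract, one such estimate (or distribution) would be produced per body part and the upper-body skeleton fitted jointly across parts; this sketch only shows the consensus idea for a single part.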