Title: Character Animation Driven by Sparse Motion Sensors
Authors: 劉峻豪
林奕成
Institute of Multimedia Engineering
Keywords: Computer animation; interactive interfaces; Wii remotes; sensors
Issue Date: 2010
Abstract: Motion capture systems provide an accurate, high-fidelity technique for driving a virtual character's motion from real performances. However, their high cost and tedious post-processing make them difficult to popularize and to deploy in interactive applications. In this thesis, we propose a data-driven approach that drives character animation with only a few motion sensors. First, to acquire the motion characteristics of a subject (player), we ask the user to wear sparse sensors and follow exemplar motions while we record the acceleration and angular velocity from the sensors. The motion data are then divided into clips, and we construct a motion graph that connects pairs of clips with smooth transitions. Next, by applying a hidden Markov model to compute the probability of transitioning from one clip to another, we find that the reconstructed motion becomes more realistic and reliable. Finally, to extend the variety of motion, we further blend clips that are similar to the user's action, so that motions unseen in the database can be reproduced. As a result, a user can drive an avatar's motion in real time with a few inexpensive sensors, and such an easy-to-use setup can also be applied to more advanced interaction systems.
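The abstract walks through a four-step pipeline: clip segmentation, motion-graph construction, HMM-style transition selection, and clip blending. The Python sketch below illustrates that pipeline under simplifying assumptions: fixed-length clips of 6-D sensor features (3-axis acceleration plus 3-axis angular velocity), a uniform transition prior over graph edges, and a Gaussian emission model for the current sensor reading. All function names and parameters here are illustrative inventions for this sketch, not the thesis implementation.

import numpy as np

def split_into_clips(stream, clip_len):
    # Divide a (frames x features) sensor stream into fixed-length clips.
    n = len(stream) // clip_len
    return [stream[i * clip_len:(i + 1) * clip_len] for i in range(n)]

def build_motion_graph(clips, threshold):
    # Connect clip i -> j when the last frame of i is close to the first
    # frame of j, so the transition between them is smooth.
    edges = {i: [] for i in range(len(clips))}
    for i, a in enumerate(clips):
        for j, b in enumerate(clips):
            if i != j and np.linalg.norm(a[-1] - b[0]) < threshold:
                edges[i].append(j)
    return edges

def transition_probs(edges, clips, observation):
    # HMM-style step: a uniform transition prior over the graph edges,
    # weighted by a Gaussian emission likelihood of the observed frame.
    probs = {}
    for i, succs in edges.items():
        for j in succs:
            mean = clips[j].mean(axis=0)
            emission = np.exp(-0.5 * np.sum((observation - mean) ** 2))
            probs[(i, j)] = emission / len(succs)
    return probs

def next_clip(current, probs):
    # Pick the most probable successor of the current clip.
    cands = {j: p for (i, j), p in probs.items() if i == current}
    return max(cands, key=cands.get) if cands else current

def blend(clip_a, clip_b, w):
    # Linearly blend two clips to approximate a motion unseen in the database.
    return (1 - w) * clip_a + w * clip_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stream = rng.normal(size=(120, 6))   # stand-in for recorded accel + gyro
    clips = split_into_clips(stream, 10)
    graph = build_motion_graph(clips, threshold=4.0)
    obs = rng.normal(size=6)             # stand-in for the live sensor reading
    probs = transition_probs(graph, clips, obs)
    nxt = next_clip(0, probs)
    print("next clip:", nxt)
    unseen = blend(clips[0], clips[nxt], 0.5)
    print("blended clip shape:", unseen.shape)

In the thesis, the emission and transition terms would come from sensor features of the recorded exemplar motions rather than the synthetic data used here; the sketch only shows how a motion graph restricts the candidate transitions and how an observation-weighted probability selects and blends the next clip.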
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT079757543
http://hdl.handle.net/11536/46082
Appears in Collections: Thesis


Files in This Item:

  1. 754301.pdf

If the file is a zip archive, please download and unzip it, then open index.html in the extracted folder with a browser to read the full text.