Title: Statistical Approaches for 2D Character Animation
Authors: Chou, Yun-Feng (周芸鋒); Shih, Zen-Chung (施仁忠)
Department: Institute of Computer Science and Engineering
Keywords: Image deformation; Nonparametric regression; Elliptic radial basis functions; Functional approximation; Bayesian inference; Time series
Issue Date: 2010
Abstract:
Traditionally, 2D animation production is a labor-intensive artisan process: sequences of images are drawn by hand and, when shown one after the other at a fixed frame rate, resemble movement. Most of the work, and hence most of the time, is spent drawing, inking, and coloring the individual characters in each frame. Instead of generating animation by hand, we introduce a novel method that enhances still pictures and makes their characters move in convincing ways. The proposed method is based on statistical analysis and inference, and it minimizes user intervention. We adopt nonparametric regression to efficiently analyze the displacements of pre-sampled data from characters in still pictures and use it to generate 2D character animation directly. Furthermore, 2D character animation is regarded as a 3D transformation problem, consisting of a 2D spatial displacement and a 1D shift in time. Hence, we focus on the temporal relationship among the different poses of the same character in a sequence of still pictures. Time series analysis is applied to analyze the character's movement and to forecast a suitable sequence of limb movements. In this dissertation, 2D character animation covers novel view generation, expressive talking face simulation, and limb movement synthesis. For characters in still pictures, we use nonparametric regression to generate a novel view and an expressive facial animation synchronized with the character's input speech. Kernel regression with elliptic radial basis functions (ERBFs) is proposed to describe and deform the shape of the character in image space; this parametric representation models observations of the shape sampled on the unit ellipse. To preserve patterns within the deformed shape, locally weighted regression (LOESS) is applied to fit the details with local control.
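The deformation step described above can be sketched as kernel regression over pre-sampled points, with an elliptic radial basis replacing the usual single isotropic bandwidth. This is a minimal illustration under stated assumptions, not the dissertation's implementation: the function names, the per-axis semi-axis scaling, and the Gaussian-shaped kernel are all assumptions made for the sketch.

```python
import numpy as np

def erbf_kernel(x, centers, axes):
    # Elliptic radial basis (assumed form): the squared distance is
    # scaled per axis by the ellipse semi-axes, so a sample's influence
    # falls off anisotropically rather than in circles.
    d = (x - centers) / axes           # shape (n, 2) after broadcasting
    return np.exp(-np.sum(d * d, axis=-1))

def kernel_regression(x, samples, displacements, axes=(1.0, 1.0)):
    # Nadaraya-Watson estimate: the displacement at query point x is the
    # ERBF-weighted average of the pre-sampled displacements.
    w = erbf_kernel(x, samples, np.asarray(axes, dtype=float))
    return (w / w.sum()) @ displacements
```

A query halfway between two samples receives equal weights and hence the average of their displacements. LOESS, mentioned above for detail preservation, would refine such an estimate by fitting a weighted low-degree polynomial in a neighborhood of each query point instead of taking a weighted average.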
Furthermore, time series analysis is used to analyze the limb movement of a character and to represent its motion trajectory. A character's motion can be described by a series of discrete poses taken from a sequence of contiguous frames. From these poses, we develop a nonparametric Bayesian approach to construct a time series model of the character's motion trajectory, and we then synthesize a motion sequence from that trajectory. Last but not least, we investigate how to apply the proposed statistical approaches to animate passive elements, whose movements respond to natural forces, such as trees swaying and water rippling. Given a picture of a tree, we make it sway; given a picture of a pond, we make it ripple. These solutions animate photographs or paintings effectively. Experimental results show that our method simulates plausible movements for 2D character animation and that the estimated motion trajectory closely matches the given still frames. Compared with previous approaches, our method synthesizes smooth animations while minimizing unnatural distortion and offering more control. Moreover, the proposed method is especially suitable for intelligent multimedia applications such as virtual human generation. We believe the provided solutions are easy to use and enable much quicker animation production.
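The trajectory estimation described above can be illustrated with the posterior mean of a Gaussian process over time, a common nonparametric Bayesian regressor. This is a hedged stand-in for the dissertation's model, not its actual formulation; the squared-exponential covariance, the bandwidth, and the noise level are assumptions of this sketch.

```python
import numpy as np

def gp_trajectory_mean(t_obs, poses, t_new, length=1.0, noise=1e-6):
    # Gaussian-process posterior mean over time (assumed stand-in model):
    # poses observed at times t_obs are interpolated/extrapolated to t_new.
    def k(a, b):
        # Squared-exponential covariance between two sets of time stamps.
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    # Small diagonal jitter keeps the solve numerically stable.
    K = k(t_obs, t_obs) + noise * np.eye(len(t_obs))
    return k(t_new, t_obs) @ np.linalg.solve(K, poses)
```

Here `poses` could be scalars or per-frame joint-angle vectors; evaluating `t_new` beyond the last observed frame uses the same posterior mean to forecast the next step of the trajectory.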
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT079223819
http://hdl.handle.net/11536/40416
Appears in Collections: Thesis


Files in This Item:

  1. 381901.pdf

If the file is a zip archive, download and extract it, then open index.html in the extracted folder with a browser to view the full text.