Title: | 區域式主動表現模型演算法應用於人臉特徵匹配 Local Active Appearance Models for Face Feature Matching |
Authors: | 林育弘 Yu-Hung Lin; 林進燈 Chin-Teng Lin; Institute of Electrical and Control Engineering |
Keywords: | face tracking; facial features; active appearance model; appearance model; feature matching |
Issue Date: | 2006 |
Abstract: | Based on the active appearance model (AAM), we propose a new model-building method: the local active appearance model (LAAM). The AAM is widely used; it combines shape and texture information and applies principal component analysis (PCA) to build a statistical model. During AAM feature matching, the texture residual between the model and the target image is used to predict how the model parameters should change, yielding the best possible match.
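A minimal sketch of building such a combined shape-and-texture PCA model is shown below, assuming aligned landmark vectors and shape-normalized texture vectors as inputs. The function names, the variance-kept threshold, and the shape-parameter weighting heuristic are illustrative assumptions, not the thesis's exact formulation.

```python
# Sketch: combined PCA model of shape and texture in the spirit of the AAM.
import numpy as np

def pca(data, var_kept=0.95):
    """Return (mean, eigenvectors, eigenvalues) keeping var_kept of the variance."""
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD of the centered data gives the principal axes directly.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigvals = (s ** 2) / max(len(data) - 1, 1)
    keep = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_kept) + 1
    return mean, vt[:keep], eigvals[:keep]

def build_combined_model(shapes, textures, var_kept=0.95):
    """shapes: (N, 2*n_points) aligned landmarks; textures: (N, n_pixels) warped grey levels."""
    s_mean, s_basis, s_vals = pca(shapes, var_kept)
    g_mean, g_basis, g_vals = pca(textures, var_kept)
    # Project each sample onto its basis, weight the shape parameters so that
    # shape and texture are commensurate, then run a second PCA on the
    # concatenated parameter vectors (the "combined" appearance model).
    b_s = (shapes - s_mean) @ s_basis.T
    b_g = (textures - g_mean) @ g_basis.T
    w = np.sqrt(g_vals.sum() / s_vals.sum())      # common heuristic weight (assumption)
    c_mean, c_basis, _ = pca(np.hstack([w * b_s, b_g]), var_kept)
    return dict(shape=(s_mean, s_basis), texture=(g_mean, g_basis),
                combined=(c_mean, c_basis), weight=w)

# Toy usage with random data just to show the expected input shapes.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = build_combined_model(rng.normal(size=(40, 2 * 68)),
                                 rng.normal(size=(40, 5000)))
    print(model["combined"][1].shape)   # (n_modes, n_shape_modes + n_texture_modes)
```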
Observation shows that the regions of the face that vary most are the eyes and the mouth, and that they vary largely independently of each other. After extracting the shape and texture data from manually labeled images, we therefore divide them into three regions: the eyes, the mouth, and the rest of the face. Each region has its own shape and texture information. PCA is applied to each region separately to build three independent models: an eye model, a mouth model, and a model of the remaining region. Combining the three models gives the proposed local active appearance model (LAAM).
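A minimal sketch of this region-wise model building, assuming the landmark and texture vectors have already been extracted and the column indices belonging to each region are known. For brevity the sketch concatenates each region's shape and texture directly before a single PCA rather than using a weighted two-stage combination; the region index ranges in the toy usage are hypothetical.

```python
# Sketch: one independent PCA model per region (eyes, mouth, other).
import numpy as np

def pca_model(data, n_modes):
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:n_modes]

def build_laam(shapes, textures, shape_idx, texture_idx, n_modes=10):
    """shape_idx / texture_idx: dicts mapping region name -> column indices."""
    models = {}
    for region in ("eyes", "mouth", "other"):
        region_data = np.hstack([shapes[:, shape_idx[region]],
                                 textures[:, texture_idx[region]]])
        models[region] = pca_model(region_data, n_modes)  # independent model per region
    return models

# Toy usage: 30 training faces, 68 landmarks, 3000 texture samples,
# with made-up region index ranges.
rng = np.random.default_rng(1)
shapes, textures = rng.normal(size=(30, 136)), rng.normal(size=(30, 3000))
shape_idx = {"eyes": np.arange(0, 48), "mouth": np.arange(48, 88), "other": np.arange(88, 136)}
texture_idx = {"eyes": np.arange(0, 1000), "mouth": np.arange(1000, 1800), "other": np.arange(1800, 3000)}
laam = build_laam(shapes, textures, shape_idx, texture_idx)
print({region: basis.shape for region, (mean, basis) in laam.items()})
```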
The LAAM uses the local texture residual of each region to predict that region's model parameters, so the convergence of each model runs independently; the translation parameters are predicted from the global texture residual between the model and the target image. The experiments lead to the following conclusions: 1) LAAM modeling performs better than AAM modeling. 2) Because the eye, mouth, and remaining-region models are independent, combinations of variation across regions need not be covered by the training data, so the same modeling quality is reached with fewer training samples; with fewer training samples, fewer eigenvectors need to be retained and less storage is required for the model parameters, i.e. the same modeling quality is achieved with less space. 3) The LAAM model parameters are independent of each other, so after face feature matching the parameters of the eye or mouth region can be fed directly into a classifier for expression recognition. |
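A minimal sketch of one iteration of this residual-driven matching loop: each region's parameters are updated from that region's own texture residual, while the translation is updated from the global residual. The residual-to-update matrices R and R_t, the sampling and synthesis callbacks, and all dimensions in the toy usage are assumptions made for illustration; how the update matrices are learned offline is not shown here.

```python
# Sketch: one LAAM search iteration (local residuals per region, global residual for translation).
import numpy as np

def laam_search_step(params, translation, sample_texture, model_texture, R, R_t, regions):
    """params:         dict region -> current appearance parameter vector
       translation:    current (tx, ty) estimate
       sample_texture: f(translation) -> dict region -> texture sampled from the image
       model_texture:  f(params)      -> dict region -> texture synthesised by the model
       R, R_t:         precomputed residual-to-update matrices (per region / for translation)"""
    sampled = sample_texture(translation)
    synthesised = model_texture(params)
    new_params = {}
    for region in regions:
        residual = sampled[region] - synthesised[region]
        # Each region converges on its own local residual, independently of the others.
        new_params[region] = params[region] - R[region] @ residual
    # The translation update uses the residual over the whole face.
    global_residual = np.concatenate([sampled[r] - synthesised[r] for r in regions])
    new_translation = translation - R_t @ global_residual
    return new_params, new_translation

# Toy usage with random matrices and stub callbacks, just to exercise the shapes.
rng = np.random.default_rng(2)
regions = ("eyes", "mouth", "other")
dims = {"eyes": 50, "mouth": 40, "other": 60}
params = {r: np.zeros(5) for r in regions}
R = {r: 0.01 * rng.normal(size=(5, dims[r])) for r in regions}
R_t = 0.01 * rng.normal(size=(2, sum(dims.values())))
sample = lambda t: {r: rng.normal(size=dims[r]) for r in regions}
synth = lambda p: {r: np.zeros(dims[r]) for r in regions}
print(laam_search_step(params, np.zeros(2), sample, synth, R, R_t, regions))
```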
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT009412538 http://hdl.handle.net/11536/80668 |
Appears in Collections: | Thesis |