Title: | Measurement of Face Recognizability for Visual Surveillance |
Author(s): | Tsao Yu-Cheng; Hsi-Jian Lee (Institute of Computer Science and Engineering) |
Keywords: | Face Recognizability; Visual Surveillance |
Issue Date: | 2003 |
Abstract: | The goal of this thesis is to propose a mechanism for evaluating the recognizability of a human face. In a surveillance environment, the monitoring system may need to recognize detected faces. A face that is hard to recognize, such as a profile or otherwise non-frontal face, usually produces incorrect recognition results and degrades system performance. If the degree to which a face can be correctly recognized is estimated before recognition is attempted, the recognition system can skip faces of low recognizability, reducing unnecessary work and improving the overall performance of the surveillance system. The proposed system consists of four stages: foreground segmentation, head-region extraction, facial feature detection, and recognizability measurement.
In the first stage, a statistical method is used to build the background model. Given a sequence of background samples, the model analyzes the gray-level values of every pixel and estimates their probability distribution. For an input image, if the probability of a pixel's observed gray value falls below a threshold, the pixel is unlikely to belong to the background and is marked as foreground. Finally, the connected components of all foreground pixels form the foreground regions.
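The first stage can be sketched as follows. This is a minimal illustration, not the thesis's exact estimator: it assumes a per-pixel gray-level histogram as the probability model, and the function names (`build_background_model`, `segment_foreground`) and the threshold value are hypothetical; the connected-component grouping step is omitted.

```python
import numpy as np

def build_background_model(samples, bins=256):
    """Estimate a per-pixel gray-level probability distribution
    from a stack of background frames (shape N x H x W, uint8)."""
    n, h, w = samples.shape
    model = np.zeros((h, w, bins), dtype=np.float64)
    for v in range(bins):
        # relative frequency of gray value v at each pixel
        model[:, :, v] = (samples == v).sum(axis=0) / n
    return model

def segment_foreground(frame, model, threshold=0.01):
    """Mark pixels whose observed gray value is improbable under
    the background model as foreground (boolean mask)."""
    h, w = frame.shape
    probs = model[np.arange(h)[:, None], np.arange(w)[None, :], frame]
    return probs < threshold
```

In practice a parametric model (e.g. a Gaussian per pixel) would need far less memory than a full histogram; the histogram is used here only to mirror the "probability of each gray value" description directly.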
In the second stage, we propose a method that finds an ellipse in the head area of the foreground. Because the shape of the head is close to an ellipse, using an ellipse as the template allows the head image to be extracted more accurately.
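One simple way to realize such template matching is to score candidate ellipses by how well their boundary lies on the foreground mask and keep the best one. This sketch is an assumption about the approach, not the thesis's algorithm; `ellipse_score` and `find_head_ellipse` are hypothetical names, and a real system would generate candidates from the foreground's top region rather than take them as input.

```python
import numpy as np

def ellipse_score(mask, cx, cy, a, b, n=64):
    """Fraction of sample points on the ellipse boundary that land
    on foreground pixels -- a crude goodness-of-fit score."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.round(cx + a * np.cos(t)).astype(int)
    ys = np.round(cy + b * np.sin(t)).astype(int)
    h, w = mask.shape
    ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    return float(mask[ys[ok], xs[ok]].mean()) if ok.any() else 0.0

def find_head_ellipse(mask, candidates):
    """Return the candidate (cx, cy, a, b) with the best score."""
    return max(candidates, key=lambda c: ellipse_score(mask, *c))
```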
In the third stage, the system locates the facial features, namely the eyes and the mouth. The eyes and mouth exhibit strong directional characteristics on a face, and we exploit this property to locate them. First, the face region is extracted from the head region according to skin-tone gray values. An orientation histogram is then computed over the face region to analyze its distribution of directions. From the several strongest orientations, line segments on the face aligned with those directions are collected; such segments may have been formed by the eyes or the mouth. After filtering out the other segments, regions where segment centers cluster densely are identified. The relative positions and color characteristics of eyes and mouth are then used to verify whether the remaining segments were indeed formed by the eyes and the mouth. Finally, the positions of the eyes and the mouth are inferred from the verified segments.
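The orientation-histogram step can be illustrated as below. This is a minimal sketch under common conventions (gradient orientations folded into 0–180 degrees, weighted by gradient magnitude); the bin count, magnitude threshold, and function names are illustrative assumptions, not values from the thesis, and the subsequent line-segment collection is not shown.

```python
import numpy as np

def orientation_histogram(gray, bins=18, mag_thresh=10.0):
    """Histogram of gradient orientations (0..180 degrees), weighted
    by gradient magnitude, over a grayscale face region."""
    gray = gray.astype(np.float64)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    # central differences for the image gradient
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    keep = mag > mag_thresh          # ignore near-flat pixels
    hist, _ = np.histogram(ang[keep], bins=bins, range=(0, 180),
                           weights=mag[keep])
    return hist

def dominant_orientations(hist, k=2):
    """Indices of the k strongest orientation bins."""
    return np.argsort(hist)[::-1][:k]
```

A horizontal edge (such as an eye or mouth line) concentrates its gradient energy in the bin around 90 degrees, which is why the dominant bins point to candidate eye/mouth directions.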
In the fourth stage, the triangle formed by the two eyes and the mouth, the degree of orientation symmetry, and the degree of regional gray-level symmetry are used to classify the facial pose into three classes: frontal, rotated, and non-frontal. The recognizability of a face is then computed from the face size and the similarity between the eye–mouth triangle and an isosceles triangle.
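A possible form of such a score is sketched below. The thesis does not spell out the formula in this record, so the ratio-based isosceles similarity, the reference face area, and the equal weighting are all illustrative assumptions, as are the function names.

```python
import math

def isosceles_similarity(left_eye, right_eye, mouth):
    """1.0 when the eye-mouth triangle is perfectly isosceles
    (both eye-to-mouth distances equal), decreasing toward 0 as
    the face turns and one distance shrinks."""
    dl = math.dist(left_eye, mouth)
    dr = math.dist(right_eye, mouth)
    return min(dl, dr) / max(dl, dr) if max(dl, dr) > 0 else 0.0

def recognizability(face_area, left_eye, right_eye, mouth,
                    ref_area=64 * 64):
    """Combine a size term (saturating at ref_area pixels) with the
    triangle-symmetry term; the 0.5/0.5 weights are illustrative."""
    size_term = min(face_area / ref_area, 1.0)
    sym_term = isosceles_similarity(left_eye, right_eye, mouth)
    return 0.5 * size_term + 0.5 * sym_term
```

A frontal face with symmetric eye positions scores near 1.0, while a turned face, whose mouth drifts toward one eye, scores lower, matching the intended use as a gate before recognition.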
In the experiments, we tested the relation between the measured recognizability and the actual recognition rate. The results show that the recognition rate drops as the recognizability decreases, indicating that the proposed recognizability measure provides the recognition system with an effective reference for deciding whether to perform face recognition. |
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT009117573 http://hdl.handle.net/11536/50125 |
Appears in Collections: | Thesis |