Title: | Video Human Silhouette Extraction and Human Head Detection Based on Temporal Difference
Authors: | 陳嘉臨 Chen, Chia-Lin; 張志永 Chang, Jyh-Yeong; Institute of Electrical and Control Engineering
Keywords: | temporal difference; foreground segmentation; human head detection
Issue Date: | 2009 |
Abstract: | Foreground-background segmentation separates the objects of interest (the foreground) from the rest of the image (the background). It is often the first step in vision-based surveillance systems, and subsequent processes such as object tracking, pose estimation, and action recognition depend heavily on the accuracy of its results. In this thesis, we propose a human silhouette extraction method based on temporal differencing that extracts a complete human silhouette without a pre-built background model; the method adapts to changes in scene brightness and works in incompletely controlled environments (outdoors, or indoors under varying illumination). We compute temporal differences over three successive video frames (the previous, current, and next frames) and combine them with edge information to extract the outline of moving objects. Because this outline may be incomplete, and therefore a non-closed curve, we propose a background region growing algorithm that grows the background region and recovers the foreground silhouette even when the object's edge contour is incomplete.
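The thesis body gives the full algorithm; purely as a rough illustration of the idea summarized above, the following Python/OpenCV sketch combines three-frame temporal differencing with a Canny edge map and then grows the background inward from the image border by flood filling. The thresholds, the dilation step, and the flood-fill formulation of region growing are assumptions made for this sketch, not the thesis's actual implementation.

```python
import cv2
import numpy as np

def moving_edge_map(prev_gray, curr_gray, next_gray,
                    diff_thresh=15, canny_lo=50, canny_hi=150):
    """Edges of moving objects from three successive grayscale frames.

    The two temporal differences (previous vs. current, current vs. next)
    are intersected with the Canny edge map of the current frame, so only
    edges lying in changed regions survive. Thresholds are illustrative.
    """
    d1 = cv2.absdiff(curr_gray, prev_gray) > diff_thresh
    d2 = cv2.absdiff(next_gray, curr_gray) > diff_thresh
    motion = np.logical_and(d1, d2)                      # changed in both differences
    edges = cv2.Canny(curr_gray, canny_lo, canny_hi) > 0
    return np.logical_and(motion, edges).astype(np.uint8) * 255

def silhouette_by_background_growing(moving_edges):
    """Background region growing sketch: flood-fill the background from the
    image border, treating moving edges as barriers; pixels never reached
    (plus the edge pixels themselves) form the foreground silhouette."""
    h, w = moving_edges.shape
    barrier = cv2.dilate(moving_edges, np.ones((3, 3), np.uint8))   # close small gaps
    region = (barrier > 0).astype(np.uint8) * 255                   # 255 = barrier, 0 = free
    mask = np.zeros((h + 2, w + 2), np.uint8)                       # floodFill bookkeeping
    border_seeds = [(x, y) for x in range(w) for y in (0, h - 1)] + \
                   [(x, y) for y in range(h) for x in (0, w - 1)]
    for x, y in border_seeds:
        if region[y, x] == 0:                                       # unvisited background pixel
            cv2.floodFill(region, mask, (x, y), 128)
    return np.where(region == 128, 0, 255).astype(np.uint8)         # foreground silhouette
```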
The shape of a human body usually differs greatly from the shapes of other objects, so shape is a powerful cue for human detection, and the head is the most important feature of the human shape. We use the temporal-difference foreground segmentation as a pre-processing step for head detection; it simplifies complex backgrounds and narrows the search area. We then propose a fuzzy-theory-based template matching method that combines shape and color information to locate the head. We first build a left head-edge model, a right head-edge model, a skin color model, and a hair color model. Building separate left and right head models lets the detectable head width vary over a range and adapts to different viewing angles of the head (e.g., frontal, lateral, and rear views). Matching the left and right head-edge models against the edge image yields a shape score measuring how similar each region in the search area is to the models, while the skin and hair color models yield a color score measuring the degree to which pixels in the region belong to skin or hair. Finally, the shape and color scores are combined to locate the human head. Locating the head confirms that the extracted foreground is a person and provides useful information for subsequent face recognition, face tracking, and action recognition.
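As a minimal sketch of how the shape and color scores might be fused for a single candidate region, the function below assumes pre-built binary half-head edge templates and Gaussian fuzzy membership functions over Cr/Cb for the skin and hair models; the membership form, the coverage-style shape score, and the weights are placeholders rather than the thesis's actual models and parameters.

```python
import numpy as np

def gaussian_membership(x, mu, sigma):
    """Illustrative fuzzy membership in [0, 1] for a color component."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def head_candidate_score(edge_patch, crcb_patch,
                         left_model, right_model,
                         skin_mu, skin_sigma, hair_mu, hair_sigma,
                         w_shape=0.5, w_color=0.5):
    """Combined fuzzy shape + color score for one head candidate region.

    edge_patch:  binary edge map of the candidate region (uint8, 0/255).
    crcb_patch:  (H, W, 2) Cr/Cb values of the same region.
    left_model, right_model: binary half-head edge templates resized to the
        candidate size (hypothetical pre-built models).
    skin_*/hair_*: per-channel mean and spread of the color models
        (placeholder parameters).
    """
    e = (edge_patch > 0).astype(np.float32)
    lm = (left_model > 0).astype(np.float32)
    rm = (right_model > 0).astype(np.float32)
    # Shape score: how well the candidate's edges cover each half-head model;
    # scoring the two halves separately tolerates varying head widths and views.
    shape = 0.5 * ((e * lm).sum() / (lm.sum() + 1e-6)
                   + (e * rm).sum() / (rm.sum() + 1e-6))
    # Color score: per-pixel membership in the skin OR hair class (fuzzy max),
    # averaged over the candidate region.
    skin = gaussian_membership(crcb_patch, skin_mu, skin_sigma).min(axis=-1)
    hair = gaussian_membership(crcb_patch, hair_mu, hair_sigma).min(axis=-1)
    color = float(np.maximum(skin, hair).mean())
    return w_shape * float(shape) + w_color * color
```

Scanning such a score over candidate windows and keeping the maximum would localize the head; in the thesis the search is restricted to the foreground region produced by the segmentation step, which keeps the number of candidates small.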
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT079712605 http://hdl.handle.net/11536/44497 |
Appears in Collections: | Thesis |