Title: | Part Model with Active Learning and Fusion for Multiview Vehicle Detection
Authors: | Liu, Chih-Ling; Lin, Chin-Teng; Chen, Hung-Chi (Institute of Electrical and Control Engineering)
Keywords: | vehicle detection; active learning; part model
Issue Date: | 2017 |
Abstract: | Vehicle detection techniques based on computer vision are widely used in daily life. However, most such systems detect vehicles from a single viewpoint only, and their performance is easily degraded by partial occlusion. This thesis proposes a novel multi-view vehicle detection system that divides vehicles into five symmetric views, each trained and detected separately, and adopts part-model detection to cope with partial occlusion and the high appearance variance among vehicles.
Detection proceeds in two stages. The first stage filters out most of the background to speed up detection and raise accuracy: a color-space transform combined with Bayes' rule removes green vegetation, and smooth-region detection removes background such as roads and sky. In the second stage, the vehicle is divided into several part models; for each part, training samples are selected by the proposed active learning algorithm and used to train a support vector machine, making each part classifier more robust. When the part classifiers are combined, statistics of the parts' relative positions serve as fusion weights, fitting the system more closely to the dataset; if some part performs poorly, it can be replaced to raise detection accuracy.
This thesis validates the effectiveness of both the preprocessing and the part-model detector. On vehicle detection in the public Pascal VOC 2007 dataset, the system achieves an average precision of 60.2%, outperforming many other vehicle detection methods. (Hedged sketches of the main stages are given after this record.)
URI: | http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070460016 http://hdl.handle.net/11536/142222 |
Appears in Collections: | Thesis |
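
The abstract's first stage combines a color-space transform with Bayes' rule to discard green vegetation, plus a smoothness test to discard roads and sky. The sketch below is one plausible reading in Python, not the thesis's implementation: the record does not state the color space, priors, block size, or thresholds, so HSV, the hue histograms, and all numeric constants here are assumptions.

```python
# Minimal sketch of stage-1 background filtering: a per-pixel Bayes rule over
# hue histograms for vegetation, and a block-variance test for smooth regions.
# Color space (HSV), priors, and thresholds are assumptions, not thesis values.
import numpy as np
import cv2  # OpenCV, assumed available for the color conversion

def train_color_likelihoods(veg_pixels, bg_pixels, bins=32):
    """Estimate P(hue | vegetation) and P(hue | background) from labeled pixels.

    veg_pixels, bg_pixels: 1-D arrays of hue values in [0, 180) (OpenCV range).
    Returns two normalized per-bin likelihood histograms.
    """
    h_veg, _ = np.histogram(veg_pixels, bins=bins, range=(0, 180), density=True)
    h_bg, _ = np.histogram(bg_pixels, bins=bins, range=(0, 180), density=True)
    return h_veg, h_bg

def vegetation_mask(bgr_image, h_veg, h_bg, prior_veg=0.3, threshold=0.5, bins=32):
    """Mark pixels as vegetation via Bayes' rule:
    P(veg | hue) = P(hue | veg)P(veg) / (P(hue | veg)P(veg) + P(hue | bg)P(bg)).
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue_bin = (hsv[:, :, 0].astype(int) * bins // 180).clip(0, bins - 1)
    p_veg = h_veg[hue_bin] * prior_veg
    p_bg = h_bg[hue_bin] * (1.0 - prior_veg)
    posterior = p_veg / np.maximum(p_veg + p_bg, 1e-12)
    return posterior > threshold  # True where the pixel is likely a tree/plant

def smooth_background_mask(gray_image, block=16, var_threshold=50.0):
    """Mark low-variance blocks (e.g., road, sky) as background.
    Block size and variance threshold are illustrative, not thesis settings."""
    h, w = gray_image.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = gray_image[y:y + block, x:x + block].astype(np.float64)
            if patch.var() < var_threshold:
                mask[y:y + block, x:x + block] = True
    return mask
```

Pixels flagged by either mask would be excluded from the sliding-window search, which is how the abstract's claimed speedup and accuracy gain would arise.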
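The second stage trains one SVM per vehicle part on samples chosen by the proposed active learning algorithm. The record does not state the selection criterion, so the sketch below assumes the common uncertainty-sampling strategy (query the pool samples closest to the current SVM hyperplane) with scikit-learn's LinearSVC; part-patch feature extraction (e.g., HOG) is abstracted away as precomputed vectors, and `y_pool_oracle` stands in for manual annotation.

```python
# Minimal sketch of active learning for one part classifier, assuming
# uncertainty sampling; the thesis's actual selection criterion is not given
# in the abstract. Features are assumed precomputed (one row per sample).
import numpy as np
from sklearn.svm import LinearSVC

def active_learning_svm(X_labeled, y_labeled, X_pool, y_pool_oracle,
                        rounds=5, batch=50):
    """Iteratively train a linear SVM, query labels for the pool samples the
    current model is least certain about, and retrain on the enlarged set."""
    X, y = X_labeled.copy(), y_labeled.copy()
    pool_X, pool_y = X_pool.copy(), y_pool_oracle.copy()
    clf = LinearSVC(C=1.0)
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool_X) == 0:
            break
        # Uncertainty = distance to the SVM hyperplane; smaller = less certain.
        margins = np.abs(clf.decision_function(pool_X))
        query = np.argsort(margins)[:batch]        # most ambiguous samples
        X = np.vstack([X, pool_X[query]])          # "ask the oracle" and add
        y = np.concatenate([y, pool_y[query]])
        keep = np.setdiff1d(np.arange(len(pool_X)), query)
        pool_X, pool_y = pool_X[keep], pool_y[keep]
    return clf
```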
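Finally, the part scores are fused with weights derived from statistics of the parts' relative positions. The exact weighting formula is not in the record; the sketch below assumes one natural instantiation, a Gaussian weight on each part's deviation from its mean offset measured on the training set, so all names and the Gaussian form are hypothetical.

```python
# Minimal sketch of fusing part-classifier scores with position-based weights.
# The Gaussian weighting on positional deviation is an assumption, not the
# thesis's stated formula.
import numpy as np

def fuse_part_scores(part_scores, part_offsets, mean_offsets, offset_std):
    """part_scores:  (P,) SVM scores, one per part.
    part_offsets: (P, 2) detected (dx, dy) of each part within the window.
    mean_offsets: (P, 2) average offsets measured on the training set.
    offset_std:   scalar spread of offsets; controls how fast weights decay.
    """
    dev = np.linalg.norm(part_offsets - mean_offsets, axis=1)
    weights = np.exp(-0.5 * (dev / offset_std) ** 2)   # positional consistency
    weights /= weights.sum() + 1e-12                   # normalize to sum to 1
    return float(np.dot(weights, part_scores))         # fused detection score
```

Under this reading, a part detected far from where it usually appears contributes little to the final score, which also makes it easy to swap out a consistently weak part, as the abstract describes.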