Title: | 利用架設在視訊監控車上成對的全方位攝影裝置作週遭環境監控之研究 A Study on Surrounding Environment Monitoring by a Video Surveillance Car with Two 2-camera Omni-imaging Devices |
Authors: | 陳俊甫 Chen, Chun-Fu; 蔡文祥 Tsai, Wen-Hsiang; Institute of Computer Science and Engineering |
Keywords: | optical flow; omni-camera; car detection |
Issue Date: | 2010 |
Abstract: | This study uses two omni-imaging devices mounted on the roof of a video surveillance car to perform video surveillance, with emphasis on monitoring the driver's blind spots and the vehicles nearby.
In this study, the two omni-imaging devices can be used to monitor the vehicle's surroundings from any viewing angle. Optical flow analysis is applied directly to consecutively captured images, and the resulting motion vectors are used to estimate the vehicle's current moving direction, from which a perspective-view image in the corresponding direction is generated for the driver to inspect. In addition, a perspective mapping table is proposed to transform omni-images into perspective-view images quickly, allowing the driver to review the driving history.
Meanwhile, with the images captured by the omni-imaging system, static nearby cars can be monitored and their 3D information computed. Image processing techniques are used to extract the car-body region from the image and to detect corresponding points on the bottom edge of the car window, from which the car's position is computed, yielding a top-view map of the surveillance car's surrounding environment.
Besides detecting static cars, this study also proposes a method for a moving video surveillance car to detect stationary or moving nearby cars. Optical flow analysis is again applied to the consecutive images captured by the omni-cameras; exploiting the property that objects with height produce larger motion vectors, the car body is roughly segmented out, k-means clustering is used to detect the car-body pixels, region growing is applied to recover a more complete car shape, and finally the result is matched against pre-built car models to obtain the positions of the nearby cars and draw a top-view surround map for the driver.
The experimental results of all the above methods are good, showing that the proposed video surveillance system is indeed feasible. In this study, methods are proposed for video surveillance by a video surveillance vehicle equipped with a pair of two-camera omni-imaging devices on its roof, with emphasis on monitoring of blind spots and nearby cars around the vehicle. First, for generating perspective-view images to facilitate inspection of the vehicle's surrounding environment, a space-mapping table and an r-rho mapping table are created to accelerate the related coordinate transformation process. Also, a method for generating the perspective-view image of the surrounding area of the vehicle by estimating the vehicle's moving direction using optical flow analysis is proposed. For off-line inspection of the driving history, a method is proposed as well that uses a perspective mapping table to generate a series of perspective-view images in any view direction selected by mouse clicking. Furthermore, a method for monitoring a nearby static car around the surveillance vehicle is proposed, which employs image processing and pattern recognition techniques such as ground region elimination, moment-preserving thresholding, and region growing to segment a car shape out of the omni-image. Also proposed is a method for extracting the bottom-edge points of the car window and eliminating the outlier points by simple linear regression, in order to compute the 3D data of the detected car and generate a surround map. In addition, a method for monitoring a nearby static or moving car from a moving video surveillance vehicle is proposed, which may be used to segment the nearby car region in the omni-image by the use of motion vector lengths. To further grow a complete car shape from the segmented regions, a method for finding the pixels of the car body by a k-means algorithm and using the pixels as seed points to grow the entire car region by the use of color information is also proposed.
With the aid of a space-mapping table, car masks derived from a simple car model are used for locating the position of the detected car. Finally, a top-view surround map showing the relative position of the detected car with respect to the video surveillance vehicle is generated. Good experimental results are also presented, which demonstrate the feasibility of the proposed methods for real video surveillance applications. |
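The mapping tables described in the abstract can be illustrated with a small sketch. The idea is to precompute, once, the omni-image source coordinates for every pixel of the desired perspective/panoramic view, so that each frame's conversion reduces to a table lookup. The linear radius-to-row relation below is a stand-in assumption; the thesis's actual r-rho table depends on the mirror geometry, which the abstract does not specify.

```python
import numpy as np

def build_perspective_mapping_table(out_w, out_h, cx, cy, r_min, r_max,
                                    pan_deg, fov_deg):
    """Precompute, for each pixel (u, v) of the output view, the (x, y)
    source coordinates in the omni-image.

    Simplified model (assumption): the omni-image is an annulus centered
    at (cx, cy), and image radius is interpolated linearly with the output
    row, standing in for the true r-rho relation of the mirror.
    """
    table = np.empty((out_h, out_w, 2), dtype=np.int32)
    pan = np.deg2rad(pan_deg)    # viewing direction of the output image
    fov = np.deg2rad(fov_deg)    # horizontal field of view of the output
    for v in range(out_h):
        rho = r_max - (r_max - r_min) * v / (out_h - 1)
        for u in range(out_w):
            theta = pan + fov * (u / (out_w - 1) - 0.5)
            table[v, u, 0] = int(round(cx + rho * np.cos(theta)))
            table[v, u, 1] = int(round(cy + rho * np.sin(theta)))
    return table

def warp_with_table(omni_img, table):
    """Per-frame conversion is a single fancy-indexing lookup."""
    xs = np.clip(table[..., 0], 0, omni_img.shape[1] - 1)
    ys = np.clip(table[..., 1], 0, omni_img.shape[0] - 1)
    return omni_img[ys, xs]
```

Because the table depends only on the camera setup and the chosen view direction, regenerating a view for a different pan angle (e.g. one selected by mouse clicking during off-line inspection) only requires building a new table, not re-deriving the geometry per frame.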
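The abstract's moving-direction estimation from optical flow can be reduced, in its simplest form, to the angle of the mean motion vector over the frame. This is an assumed simplification; the thesis's exact analysis scheme is not described in the abstract.

```python
import numpy as np

def estimate_moving_direction(flow):
    """flow: (H, W, 2) array of per-pixel motion vectors (dx, dy).

    Returns the apparent moving direction in degrees, taken here as the
    angle of the mean motion vector (a simplification of the thesis's
    optical-flow analysis, whose details are not given in the abstract).
    """
    mean_v = flow.reshape(-1, 2).mean(axis=0)
    return np.degrees(np.arctan2(mean_v[1], mean_v[0]))
```

The estimated angle would then be fed to the table-building step as the pan direction of the perspective view shown to the driver.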
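The moving-car detection pipeline (large motion vectors mark tall objects, k-means picks out the car-body cluster, and region growing completes the shape using color) can be sketched as follows. All thresholds and the 4-neighbor, single-channel-distance growing rule are assumptions for illustration, not the thesis's exact parameters.

```python
import numpy as np
from collections import deque

def kmeans_1d(values, k=2, iters=20):
    """Tiny k-means on a 1-D array; returns labels (0 = smallest-center
    cluster, ascending) and the sorted cluster centers."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    order = np.argsort(centers)
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels], centers[order]

def segment_car(color_img, flow_mag, color_tol=30.0):
    """Sketch of the abstract's pipeline (details assumed): pixels whose
    motion-vector length falls in the 'large' k-means cluster seed a
    region growing that expands to color-similar 4-neighbors."""
    h, w = flow_mag.shape
    labels, _ = kmeans_1d(flow_mag.ravel(), k=2)
    seeds = labels.reshape(h, w) == 1          # cluster with larger motion
    mask = seeds.copy()
    queue = deque(zip(*np.nonzero(seeds)))
    img = color_img.astype(np.float64)
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if np.linalg.norm(img[ny, nx] - img[y, x]) < color_tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```

The resulting mask would then be matched against the pre-built car masks (via the space-mapping table) to localize the car and place it on the top-view surround map.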
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT079855594 http://hdl.handle.net/11536/48329 |
Appears in Collections: | Thesis |