Title: Geometric Constraint Image Feature Tracking and Sensor Fusion Technique with Visual-IMU Information
Author: Tseng, Chin-Yuan
Advisors: Jwu-Sheng Hu; Yu-Lun Huang
Institute of Electrical and Control Engineering
Keywords: feature tracking; locating anchors; visual odometry; Kalman filter; trifocal tensor
Issue Date: 2014
Abstract: This dissertation applies a single camera combined with an inertial measurement unit (IMU) to three problems: 1) image feature tracking; 2) anchor localization in wireless networks; and 3) visual odometry. Most existing feature tracking and matching algorithms rely on color-gradient changes or feature descriptors to associate features across images; such correspondences ignore the geometric constraints imposed by the relative 3D positions of the camera and the feature points. This dissertation proposes a feature tracking algorithm that combines a single camera with an IMU. When the camera undergoes pure translation, the epipole remains at a fixed position in the image, called the focus of expansion (FOE), and the projected trajectory of a static 3D feature point lies on its epipolar line, which restricts the feature search range. We use the gyroscope measurements of the IMU to synthesize rotation-compensated (virtually translating) images, and exploit the geometric constraints between these images to bound the region where a static feature can project onto the image plane. Under these geometric constraints, together with a check of the color-gradient pattern around each feature, the correctness of feature tracking and matching is improved. In the IMU-aided camera (IMU-camera) trajectory estimation algorithms, we apply sensor fusion to: 1) anchor node localization in wireless sensor networks. To estimate the positions of the anchors, the signal strength emitted by each anchor must be collected at different positions in space. This dissertation therefore proposes a trajectory estimation algorithm with true metric scale, which replaces double integration of the accelerometer with a human gait model, fuses the measurements of the inertial, visual, and wireless sensors with a Kalman filter, and finally computes the anchor positions by trilateration. 2) A visual-IMU odometry that combines a multi-state constraint Kalman filter with trifocal tensor geometry. We make no assumptions about the environment structure and perform no scene reconstruction; instead, the geometric constraints of spatial feature points across three images are used to estimate a camera trajectory with metric scale. Experimental results show that the proposed IMU-aided camera device effectively improves accuracy and reliability in feature tracking, anchor localization, and visual odometry.
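The pure-translation constraint described above can be illustrated with a minimal sketch (function names, intrinsics, and the angular tolerance are illustrative assumptions, not the dissertation's implementation): after gyroscope-based rotation compensation, the optical flow of a static feature must be collinear with the line joining the FOE and the feature.

```python
import numpy as np

def derotate(points, K, R):
    """Remove the camera rotation R from pixel points, simulating the
    rotation-compensated (pure-translation) image.
    points: (N,2) pixel coordinates, K: 3x3 intrinsics, R: frame-to-frame rotation."""
    h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous pixels
    rays = np.linalg.inv(K) @ h.T                       # back-project to rays
    warped = K @ R.T @ rays                             # undo rotation, reproject
    return (warped[:2] / warped[2]).T

def is_static_feature(p0, p1, foe, tol_deg=2.0):
    """Under pure translation, the flow vector p1 - p0 of a static point lies
    on the epipolar line through the FOE and p0; check collinearity."""
    flow = p1 - p0
    epi = p0 - foe
    cosang = np.dot(flow, epi) / (np.linalg.norm(flow) * np.linalg.norm(epi) + 1e-12)
    return abs(cosang) > np.cos(np.radians(tol_deg))
```

The collinearity test is what turns the 2-D feature search into a 1-D scan along the epipolar line; features whose flow violates it can be rejected as unstable or dynamic.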
This dissertation presents three applications of visual-inertial sensor information: 1) image feature tracking, 2) anchor location estimation in wireless sensor networks (WSN), and 3) visual-IMU odometry. In image feature tracking, epipolar geometry is an important constraint that limits the area in which a feature can move. This dissertation exploits the following property: the optical flow vector of a static feature point lies on the epipolar line when the camera undergoes pure translation. For monocular camera motion, the epipolar line then becomes a scan line for feature searching. A constrained feature selection method that uses the direction of the epipolar line to filter out unstable feature points is proposed. The geometric constraints are independent of the scene structure and of the inlier/outlier ratio of the feature points. To realize the proposed idea, an inertial measurement unit (IMU) is needed to provide the rotational information among camera poses. We propose an IMU-aided geometric constraint (IGC) feature tracking algorithm. The IGC feature tracking algorithm imposes a strong geometric constraint during the tracking procedure, reducing the feature search to a one-dimensional scan along the epipolar line. Given the geometric constraints, verification of the tracking result becomes very simple. Using the IMU-camera device, we propose two sensor fusion algorithms: anchor node location estimation in a wireless sensor network, and visual-IMU odometry. In anchor node location estimation, we combine a camera trajectory estimation algorithm with a human walking model to realize a metrically scaled visual odometry (VO). Instead of double-integrating the acceleration, the scale factor is obtained from a walking-speed estimate that uses only the body acceleration. A loosely coupled approach fuses the RSSI data and the VO attitude to provide an accurate motion trajectory and the anchor node locations simultaneously.
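Once the scaled trajectory provides receiver positions and the RSSI data provide range estimates, the anchor position follows from trilateration. A common linear least-squares formulation (a sketch under assumed 2-D coordinates; the dissertation's exact formulation may differ) subtracts the first range equation from the others to eliminate the quadratic term:

```python
import numpy as np

def trilaterate(positions, distances):
    """Least-squares anchor location from known receiver positions along the
    trajectory and measured ranges (e.g. derived from RSSI).
    positions: (N,2) receiver positions, distances: (N,) ranges, N >= 3."""
    p = np.asarray(positions, float)
    d = np.asarray(distances, float)
    # |x - p_i|^2 = d_i^2; subtracting the i = 0 equation linearizes in x:
    # 2 (p_i - p_0) . x = (|p_i|^2 - |p_0|^2) - (d_i^2 - d_0^2)
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)) - (d[1:] ** 2 - d[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With noisy RSSI-derived ranges, the overdetermined system (many trajectory points per anchor) is what makes the least-squares solve worthwhile.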
In visual-IMU odometry, the proposed method uses a multi-state constraint Kalman filter together with the geometric constraints of the trifocal tensor and pure-translation geometry. The multi-state constraint Kalman filter fuses the information from the camera and the IMU, while the trifocal tensor and pure-translation constraints provide reliable static feature selection without scene reconstruction. The feature tracking experiments cover the aperture problem, the repeated-pattern problem, and the low-texture problem; trajectory estimation results and analyses are also given for anchor localization in WSN and for visual-IMU odometry. The experimental results show the effectiveness and robustness of the proposed methods.
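As a toy illustration of the IMU-camera fusion idea (a deliberately simplified 1-D filter, not the dissertation's multi-state constraint Kalman filter; all parameters are assumptions), IMU acceleration can drive the prediction step while visual position measurements correct the accumulated drift:

```python
import numpy as np

def kf_fuse(accels, vis_pos, dt=0.01, r_vis=0.05, q_acc=0.5):
    """1-D Kalman filter: state [position, velocity], IMU acceleration as the
    control input, visual position as the measurement."""
    x = np.zeros(2)
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt ** 2, dt])       # acceleration input mapping
    H = np.array([[1.0, 0.0]])              # camera observes position only
    Q = q_acc * np.outer(B, B)              # process noise from accel noise
    R = np.array([[r_vis]])                 # visual measurement noise
    est = []
    for a, z in zip(accels, vis_pos):
        x = F @ x + B * a                   # predict with IMU input
        P = F @ P @ F.T + Q
        if z is not None:                   # visual update (may arrive slower)
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

The actual filter in the dissertation keeps a window of past camera poses in the state and applies trifocal-tensor constraints across image triplets, but the predict/update structure is the same.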
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT079612826
http://hdl.handle.net/11536/76491
Appears in Collections: Thesis