Title: Geometric Constraint Image Feature Tracking and Sensor Fusion Technique with Visual-IMU Information
Author: Tseng, Chin-Yuan
Advisors: Hu, Jwu-Sheng; Huang, Yu-Lun
Department: Institute of Electrical and Control Engineering
Keywords: feature tracking; locating anchors; visual odometry; Kalman filter; trifocal tensor
Issue Date: 2014
Abstract: This dissertation presents three applications that combine a single camera with an inertial measurement unit (IMU): 1) image feature tracking, 2) anchor localization in wireless networks, and 3) visual odometry. Most existing feature tracking or matching algorithms rely on intensity-gradient variations or feature descriptors to establish correspondences between images; such correspondences ignore the geometric constraints imposed by the relative positions of the camera and the feature points in 3-D space. This dissertation proposes a feature tracking algorithm that combines a single camera with an IMU. When the camera undergoes pure translation, the epipole stays at a fixed image location called the focus of expansion (FOE), and the projected trajectory of a static 3-D feature point in the translating views lies on its epipolar line, which restricts the feature search range. The proposed method uses the gyroscope measurements of the IMU to synthesize rotation-compensated, translation-only image pairs and exploits the geometric constraints between them to bound the image region onto which static feature points can project. Under these geometric constraints, combined with a check of the intensity-gradient variations around each feature, the correctness of feature tracking and matching is improved.
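The rotation compensation and FOE constraint described above can be sketched in a few lines of numpy. The snippet below is an illustration, not code from the dissertation: it assumes a known intrinsic matrix K and a gyro-integrated rotation R between two frames, warps candidate matches with the infinite homography K R^T K^{-1} to simulate a pure-translation pair, estimates the FOE as the least-squares intersection of the derotated flow vectors, and scores each correspondence by its distance to the line through the FOE. All function and variable names are assumptions made for the example.

```python
import numpy as np

def derotate_points(K, R, pts2):
    """Warp frame-2 pixels into a rotation-compensated frame.

    Applies the infinite homography H = K R^T K^{-1}, removing the
    gyro-measured rotation R so that the remaining image motion of a
    static point is due to translation only.
    """
    pts2 = np.asarray(pts2, float)
    H = K @ R.T @ np.linalg.inv(K)
    pts_h = np.hstack([pts2, np.ones((len(pts2), 1))])   # homogeneous pixels
    warped = (H @ pts_h.T).T
    return warped[:, :2] / warped[:, 2:3]

def estimate_foe(pts1, pts2_derot):
    """FOE as the least-squares intersection of the derotated flow lines."""
    A = np.zeros((2, 2)); b = np.zeros(2)
    for p, q in zip(np.asarray(pts1, float), pts2_derot):
        d = q - p
        n = np.linalg.norm(d)
        if n < 1e-6:
            continue                                     # negligible flow, skip
        d /= n
        P = np.eye(2) - np.outer(d, d)                   # projector orthogonal to the flow
        A += P; b += P @ p
    return np.linalg.lstsq(A, b, rcond=None)[0]

def collinearity_residual(p1, p2_derot, foe):
    """Distance of the derotated match from the line through the FOE and p1."""
    d = np.asarray(p1, float) - foe
    d /= np.linalg.norm(d) + 1e-12
    v = np.asarray(p2_derot, float) - foe
    return abs(v[0] * d[1] - v[1] * d[0])                # 2-D cross product = point-line distance
```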
For trajectory estimation with the IMU-aided camera (IMU-camera) device, we apply sensor fusion to: 1) an anchor node localization system for wireless sensor networks. To estimate the anchor positions, the signal strength of each anchor must be collected at different locations in space, so this dissertation proposes a trajectory estimation algorithm with metric scale that replaces the double integration of accelerometer readings with a human gait model, fuses the measurements of the inertial sensors, the camera, and the wireless receiver with a Kalman filter, and finally locates the anchors with a trilateration algorithm; 2) a visual-IMU odometry that combines a multi-state constraint Kalman filter with trifocal tensor geometry. We make no assumptions about the structure of the environment and perform no scene reconstruction; instead, the geometric constraints on feature points across three views are used to estimate a metrically scaled camera trajectory. Experimental results show that the proposed IMU-camera device effectively improves accuracy and reliability in feature tracking, anchor localization, and visual odometry.
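As a rough illustration of the final localization step only, the sketch below converts RSSI readings to ranges with a generic log-distance path-loss model and then solves the trilateration problem by linear least squares, given trajectory positions that already carry metric scale. The model parameters and function names are placeholders, not values from the dissertation.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.5):
    """Log-distance path-loss model; both parameters are placeholders."""
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(positions, distances):
    """Linear least-squares anchor position from ranges taken at known positions.

    positions : (N, 2) or (N, 3) trajectory points with metric scale
    distances : (N,) range estimates to one anchor
    Subtracting the first range equation from the others linearizes
    ||a - p_i||^2 = d_i^2 in the unknown anchor position a.
    """
    p = np.asarray(positions, float)
    d = np.asarray(distances, float)
    p0, d0 = p[0], d[0]
    A = 2.0 * (p[1:] - p0)
    b = d0**2 - d[1:]**2 + np.sum(p[1:]**2, axis=1) - np.sum(p0**2)
    anchor, *_ = np.linalg.lstsq(A, b, rcond=None)
    return anchor
```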
This dissertation presents three applications of visual-inertial sensor information: 1) image feature tracking, 2) anchor location estimation in wireless sensor networks (WSN), and 3) visual-IMU odometry. In image feature tracking, epipolar geometry is an important constraint that limits the region in which a feature can move. The following property is explored: under pure camera translation, the optical flow vector of a static feature point lies on its epipolar line. For monocular camera motion, the epipolar line therefore becomes a scan line for feature searching. A constrained feature selection method that uses the direction of the epipolar line to filter out unstable feature points is also proposed. These geometric constraints are independent of the scene structure and of the inlier/outlier ratio of the feature points. To realize the idea, an inertial measurement unit (IMU) provides the rotational information between camera poses. We propose an IMU-aided geometric constraint (IGC) feature tracking algorithm. The IGC algorithm enforces a strong geometric constraint during tracking and reduces the tracking complexity, and under these constraints the verification of the tracking result becomes very simple.
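To make the scan-line idea concrete, the sketch below searches for a feature along the ray from the FOE through its previous location using a simple sum-of-squared-differences patch score. It assumes the current frame has already been rotation-compensated with the IMU data and that the feature lies away from the image border; the patch size, search range, and matching score are illustrative choices, not the parameters of the IGC algorithm.

```python
import numpy as np

def scanline_search(img_prev, img_cur_derot, p1, foe, patch=7, search_range=60):
    """1-D feature search along the epipolar scan line through the FOE.

    img_cur_derot is the rotation-compensated current frame, so a static
    feature previously at pixel p1 can only move along the ray FOE -> p1.
    Returns the best-matching pixel position under an SSD patch score.
    """
    h = patch // 2
    d = np.asarray(p1, float) - np.asarray(foe, float)
    d /= np.linalg.norm(d) + 1e-12                        # scan-line direction
    x0, y0 = int(round(p1[0])), int(round(p1[1]))
    ref = img_prev[y0 - h:y0 + h + 1, x0 - h:x0 + h + 1].astype(float)

    best, best_cost = (x0, y0), np.inf
    for s in range(-search_range, search_range + 1):      # step along the line
        x = int(round(p1[0] + s * d[0]))
        y = int(round(p1[1] + s * d[1]))
        cand = img_cur_derot[y - h:y + h + 1, x - h:x + h + 1].astype(float)
        if cand.shape != ref.shape:
            continue                                       # patch left the image
        cost = np.sum((cand - ref) ** 2)                   # SSD over the patch
        if cost < best_cost:
            best, best_cost = (x, y), cost
    return best
```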
We propose two sensor fusion algorithms based on the IMU-camera device: anchor node location estimation in a wireless sensor network and visual-IMU odometry. For anchor node location estimation, we combine a camera trajectory estimation algorithm with a human walking model to realize metrically scaled visual odometry. Instead of double-integrating the acceleration, the scale factor is obtained from a walking speed estimate that uses only the body acceleration. A loosely coupled approach fuses the RSSI data with the pose estimates of the visual odometry to provide an accurate motion trajectory and the anchor node locations simultaneously. For visual-IMU odometry, the proposed method uses a multi-state constraint Kalman filter together with the geometric constraints of the trifocal tensor and of pure-translation geometry. The multi-state constraint Kalman filter fuses the information from the camera and the IMU, while the trifocal tensor and pure-translation constraints provide reliable static feature selection without scene reconstruction. The feature tracking experiments cover the aperture problem, repeated patterns, and low-texture scenes, and the evaluation also includes trajectory estimation results and analysis for anchor locating in the WSN and for visual-IMU odometry. The experimental results show the effectiveness and robustness of the proposed methods.
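A minimal sketch of a trifocal-tensor consistency check for static feature selection could look as follows, assuming camera matrices P1 = [I | 0], P2, and P3 taken from the filter's pose estimates. It builds the tensor with the standard formula from multiple-view geometry and measures the point-transfer error of a candidate feature across three views; the transfer-line choice and the gating threshold used in the dissertation may differ.

```python
import numpy as np

def trifocal_tensor(P2, P3):
    """Trifocal tensor for cameras P1 = [I | 0], P2, P3 (each 3x4).

    Standard construction T_i = a_i b4^T - a4 b_i^T, where a_i and b_i
    are the i-th columns of P2 and P3 (Hartley & Zisserman).
    """
    T = np.zeros((3, 3, 3))
    for i in range(3):
        T[i] = np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
    return T

def transfer_point(T, x1, x2):
    """Point-line-point transfer: predict the view-3 image of a static point.

    x1, x2: image coordinates of the feature in views 1 and 2, in the same
    (calibrated) frame as P2 and P3. A vertical line through x2 is used as
    the transfer line; in practice a line not passing through the epipole
    in view 2 must be chosen.
    """
    x1h = np.array([x1[0], x1[1], 1.0])
    l2 = np.array([1.0, 0.0, -x2[0]])                     # vertical line through x2
    x3 = np.einsum('i,j,ijk->k', x1h, l2, T)              # x3^k = x1^i * l2_j * T_i^{jk}
    return x3[:2] / x3[2]

def transfer_residual(T, x1, x2, x3_obs):
    """Transfer error used to reject moving or mismatched features."""
    return np.linalg.norm(transfer_point(T, x1, x2) - np.asarray(x3_obs, float))
```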
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT079612826
http://hdl.handle.net/11536/76491
Appears in Collections: Thesis