Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chou, Chih-Chung | en_US |
dc.contributor.author | Seo, Young Woo | en_US |
dc.contributor.author | Wang, Chieh-Chih | en_US |
dc.date.accessioned | 2018-08-21T05:53:50Z | - |
dc.date.available | 2018-08-21T05:53:50Z | - |
dc.date.issued | 2018-08-01 | en_US |
dc.identifier.issn | 1556-4959 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1002/rob.21778 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/145230 | - |
dc.description.abstract | For any visual feature-based SLAM (simultaneous localization and mapping) solution, estimating the relative camera motion between two images requires finding correct correspondences between features extracted from those images. Given a set of feature correspondences, one can use an n-point algorithm with a robust estimation method to produce the best estimate of the relative camera pose. The accuracy of a motion estimate is heavily dependent on the accuracy of the feature correspondence. Such a dependency is even more significant when features are extracted from images of scenes with drastic changes in viewpoint and illumination and the presence of occlusions. To make feature matching robust to such challenging scenes, we propose a new feature matching method that incrementally chooses five pairs of matched features for full DoF (degree of freedom) camera motion estimation. In particular, in the first stage, we use our 2-point algorithm to estimate a camera motion and, in the second stage, use this estimated motion to choose three more matched features. In addition, we use a planar constraint, instead of the epipolar constraint, for more accurate outlier rejection. With this set of five matched features, we estimate a full DoF camera motion with scale ambiguity. Through experiments with three real-world data sets, our method demonstrates its effectiveness and robustness by successfully matching features (1) from images of a night market with frequent occlusions and varying illumination, (2) from images of a night market taken by a handheld camera and by Google Street View, and (3) from images of the same location taken during the daytime and at night. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | mapping | en_US |
dc.subject | perception | en_US |
dc.title | A two-stage sampling for robust feature matching | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1002/rob.21778 | en_US |
dc.identifier.journal | JOURNAL OF FIELD ROBOTICS | en_US |
dc.citation.volume | 35 | en_US |
dc.citation.spage | 779 | en_US |
dc.citation.epage | 801 | en_US |
dc.contributor.department | 電機工程學系 | zh_TW |
dc.contributor.department | Department of Electrical and Computer Engineering | en_US |
dc.identifier.wosnumber | WOS:000437836900009 | en_US |
Appears in Collections: | Journal Articles |
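The abstract describes a two-stage sampling scheme: a coarse motion hypothesis from a minimal set of matches, a second stage that uses that hypothesis to admit further matches while rejecting outliers, and a final full-DoF (up-to-scale) motion estimate from the resulting five-point set. The snippet below is only a minimal sketch of that general idea, not the authors' implementation: it assumes OpenCV ORB features, uses a RANSAC essential-matrix fit on the most confident matches as a stand-in for the paper's 2-point algorithm, and gates the remaining matches with an epipolar residual rather than the paper's planar constraint. All function names, thresholds, and the seed-set heuristic are assumptions for illustration.

```python
# Illustrative two-stage matching sketch (assumptions noted above), using OpenCV.
import cv2
import numpy as np

def two_stage_matching(img1, img2, K, reproj_thresh=1.0):
    """Return an up-to-scale relative pose (R, t) between two grayscale images."""
    # Detect and describe features (ORB is a convenient stand-in detector).
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Cross-checked brute-force matching gives the initial putative match set.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Stage 1: coarse motion hypothesis from the most confident matches only
    # (the paper uses a 2-point algorithm; this RANSAC fit is a stand-in).
    n_seed = max(8, len(matches) // 10)
    E_seed, _ = cv2.findEssentialMat(pts1[:n_seed], pts2[:n_seed], K,
                                     method=cv2.RANSAC, prob=0.999,
                                     threshold=reproj_thresh)

    # Stage 2: gate all matches by their distance to the epipolar line implied
    # by the stage-1 hypothesis (the paper uses a planar constraint instead).
    F_seed = np.linalg.inv(K).T @ E_seed @ np.linalg.inv(K)
    ones = np.ones((len(pts1), 1), dtype=np.float32)
    x1 = np.hstack([pts1, ones])
    x2 = np.hstack([pts2, ones])
    lines2 = (F_seed @ x1.T).T                      # epipolar lines in image 2
    num = np.abs(np.sum(lines2 * x2, axis=1))
    den = np.linalg.norm(lines2[:, :2], axis=1)
    keep = (num / np.maximum(den, 1e-9)) < reproj_thresh

    # Final full-DoF motion (up to scale) from the gated correspondences.
    E, mask = cv2.findEssentialMat(pts1[keep], pts2[keep], K,
                                   method=cv2.RANSAC, prob=0.999,
                                   threshold=reproj_thresh)
    _, R, t, _ = cv2.recoverPose(E, pts1[keep], pts2[keep], K, mask=mask)
    return R, t
```

A production version would check for degenerate or failed estimates (e.g., `E_seed is None`) and, to follow the paper, replace the seed RANSAC fit with the 2-point motion model and the epipolar gating with the planar constraint described in the abstract.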