Full metadata record
DC Field    Value    Language
dc.contributor.author    Chou, Chih-Chung    en_US
dc.contributor.author    Seo, Young Woo    en_US
dc.contributor.author    Wang, Chieh-Chih    en_US
dc.date.accessioned    2018-08-21T05:53:50Z    -
dc.date.available    2018-08-21T05:53:50Z    -
dc.date.issued    2018-08-01    en_US
dc.identifier.issn    1556-4959    en_US
dc.identifier.uri    http://dx.doi.org/10.1002/rob.21778    en_US
dc.identifier.uri    http://hdl.handle.net/11536/145230    -
dc.description.abstract    For any visual feature-based SLAM (simultaneous localization and mapping) solution, estimating the relative camera motion between two images requires finding correct correspondences between the features extracted from those images. Given a set of feature correspondences, one can use an n-point algorithm with a robust estimation method to produce the best estimate of the relative camera pose. The accuracy of a motion estimate is heavily dependent on the accuracy of the feature correspondences. This dependency is even more significant when features are extracted from images of scenes with drastic changes in viewpoint and illumination and the presence of occlusions. To make feature matching robust to such challenging scenes, we propose a new feature matching method that incrementally chooses five pairs of matched features for a full DoF (degree of freedom) camera motion estimation. In particular, at the first stage, we use our 2-point algorithm to estimate a camera motion and, at the second stage, use this estimated motion to choose three more matched features. In addition, instead of the epipolar constraint, we use a planar constraint for more accurate outlier rejection. With this set of five matching features, we estimate a full DoF camera motion with scale ambiguity. Through experiments with three real-world data sets, our method demonstrates its effectiveness and robustness by successfully matching features (1) from images of a night market with frequent occlusions and varying illumination, (2) from images of a night market taken by a handheld camera and by Google Street View, and (3) from images of the same location taken in daytime and at nighttime.    en_US
dc.language.iso    en_US    en_US
dc.subject    mapping    en_US
dc.subject    perception    en_US
dc.title    A two-stage sampling for robust feature matching    en_US
dc.type    Article    en_US
dc.identifier.doi    10.1002/rob.21778    en_US
dc.identifier.journal    JOURNAL OF FIELD ROBOTICS    en_US
dc.citation.volume    35    en_US
dc.citation.spage    779    en_US
dc.citation.epage    801    en_US
dc.contributor.department    電機工程學系    zh_TW
dc.contributor.department    Department of Electrical and Computer Engineering    en_US
dc.identifier.wosnumber    WOS:000437836900009    en_US
Appears in Collections: Journal Articles
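The abstract describes a two-stage sampling scheme: a minimal 2-point hypothesis for the camera motion, followed by selection of three more matches consistent with that hypothesis, and a final estimate from all five pairs. The sketch below is only a rough illustration of that idea, not the paper's actual 2-point relative-pose algorithm: it substitutes a planar 2D rigid motion for the full DoF camera motion, and all function names and the synthetic data are hypothetical.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares 2D rigid transform (R, t) mapping src -> dst,
    via the Kabsch/Procrustes method."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def two_stage_sample(src, dst, seed_idx):
    """Stage 1: hypothesize a motion from a minimal 2-pair seed.
    Stage 2: rank the remaining correspondences by residual under
    that motion, keep the 3 most consistent, refit with all 5."""
    R, t = fit_rigid(src[seed_idx], dst[seed_idx])
    rest = [i for i in range(len(src)) if i not in seed_idx]
    resid = [np.linalg.norm(dst[i] - (R @ src[i] + t)) for i in rest]
    best3 = [rest[i] for i in np.argsort(resid)[:3]]
    chosen = list(seed_idx) + best3
    return fit_rigid(src[chosen], dst[chosen]), chosen
```

Ranking candidates against a minimal-sample hypothesis before the final fit is what lets a gross outlier (e.g. a mismatched feature) be excluded even though only two pairs seeded the motion estimate.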