Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Jiun-Fu | en_US |
dc.contributor.author | Wang, Chieh-Chih | en_US |
dc.contributor.author | Chou, Cheng-Fu | en_US |
dc.date.accessioned | 2018-08-21T05:53:36Z | - |
dc.date.available | 2018-08-21T05:53:36Z | - |
dc.date.issued | 2018-05-01 | en_US |
dc.identifier.issn | 0921-8890 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1016/j.robot.2018.02.004 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/144904 | - |
dc.description.abstract | Multiple target tracking in crowded urban environments is a daunting task. High crowdedness complicates motion modeling, and occlusion further hinders tracking. Based on the variable-structure multiple-model (VSMM) estimation framework, this paper extends an interacting object tracking (IOT) scheme with occlusion detection and a virtual measurement model for occluded areas. IOT is composed of a scene interaction model and a neighboring object interaction model. The scene interaction model captures the long-term interactions between a moving object and its surroundings, while the neighboring object interaction model captures three kinds of short-term interactions. With these interacting object models, the motion feature of a moving object can be represented by the weight of each model. A virtual measurement model is proposed to exploit this motion feature within the IOT scheme under occlusion. The proposed approach was validated using a stationary 2D LIDAR. To verify performance under occlusion, a 3D LIDAR-based benchmark system was developed to extract occluded moving segments. Ample experimental results show that the proposed IOT scheme tracks over 57% of occluded moving objects at an urban intersection. (C) 2018 Elsevier B.V. All rights reserved. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Multitarget tracking | en_US |
dc.subject | Interaction | en_US |
dc.subject | LIDAR | en_US |
dc.title | Multiple target tracking in occlusion area with interacting object models in urban environments | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1016/j.robot.2018.02.004 | en_US |
dc.identifier.journal | ROBOTICS AND AUTONOMOUS SYSTEMS | en_US |
dc.citation.volume | 103 | en_US |
dc.citation.spage | 68 | en_US |
dc.citation.epage | 82 | en_US |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.contributor.department | Department of Computer Science | en_US |
dc.identifier.wosnumber | WOS:000430764100006 | en_US |
Appears in Collections: | Articles |