Full metadata record
DC Field	Value	Language
dc.contributor.author	Lee, Tzu-Kuang	en_US
dc.contributor.author	Kuo, Yu-Chiao	en_US
dc.contributor.author	Huang, Shih-Hsuan	en_US
dc.contributor.author	Wang, Guan-Sheng	en_US
dc.contributor.author	Lin, Chih-Yu	en_US
dc.contributor.author	Tseng, Yu-Chee	en_US
dc.date.accessioned	2020-05-05T00:01:59Z	-
dc.date.available	2020-05-05T00:01:59Z	-
dc.date.issued	2019-01-01	en_US
dc.identifier.isbn	978-1-5386-7646-2	en_US
dc.identifier.issn	1525-3511	en_US
dc.identifier.uri	http://hdl.handle.net/11536/154041	-
dc.description.abstract	Collecting vehicle surrounding information is a key issue for accident prevention and autonomous driving applications. Although GPS and 4G/LTE are widely accepted, it is still a challenge for a vehicle to get complete information of its surrounding vehicles. In this work, we consider the integration of multi-sensory data through V2V communications to help a vehicle to understand its complex surroundings. We propose a fusion algorithm that can integrate four types of sensory inputs: V2V communications, GPS, camera, and inertial data. We show that through such fusion, it is possible for a vehicle to visually see the driving states of its surrounding vehicles.	en_US
dc.language.iso	en_US	en_US
dc.subject	Autonomous Driving	en_US
dc.subject	Data Fusion	en_US
dc.subject	Sensing	en_US
dc.subject	V2V communication	en_US
dc.title	Augmenting Car Surrounding Information by Inter-Vehicle Data Fusion	en_US
dc.type	Proceedings Paper	en_US
dc.identifier.journal	2019 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC)	en_US
dc.citation.spage	0	en_US
dc.citation.epage	0	en_US
dc.contributor.department	資訊工程學系	zh_TW
dc.contributor.department	Department of Computer Science	en_US
dc.identifier.wosnumber	WOS:000519086300048	en_US
dc.citation.woscount	0	en_US
Appears in Collections:	Conference Papers