Full metadata record
DC Field | Value | Language
---|---|---
dc.contributor.author | Lee, Tzu-Kuang | en_US |
dc.contributor.author | Kuo, Yu-Chiao | en_US |
dc.contributor.author | Huang, Shih-Hsuan | en_US |
dc.contributor.author | Wang, Guan-Sheng | en_US |
dc.contributor.author | Lin, Chih-Yu | en_US |
dc.contributor.author | Tseng, Yu-Chee | en_US |
dc.date.accessioned | 2020-05-05T00:01:59Z | - |
dc.date.available | 2020-05-05T00:01:59Z | - |
dc.date.issued | 2019-01-01 | en_US |
dc.identifier.isbn | 978-1-5386-7646-2 | en_US |
dc.identifier.issn | 1525-3511 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/154041 | - |
dc.description.abstract | Collecting vehicle surrounding information is a key issue for accident prevention and autonomous driving applications. Although GPS and 4G/LTE are widely accepted, it is still a challenge for a vehicle to get complete information of its surrounding vehicles. In this work, we consider the integration of multi-sensory data through V2V communications to help a vehicle understand its complex surroundings. We propose a fusion algorithm that can integrate four types of sensory inputs: V2V communications, GPS, camera, and inertial data. We show that through such fusion, it is possible for a vehicle to visually see the driving states of its surrounding vehicles. | en_US
dc.language.iso | en_US | en_US |
dc.subject | Autonomous Driving | en_US |
dc.subject | Data Fusion | en_US |
dc.subject | Sensing | en_US |
dc.subject | V2V communication | en_US |
dc.title | Augmenting Car Surrounding Information by Inter-Vehicle Data Fusion | en_US |
dc.type | Proceedings Paper | en_US |
dc.identifier.journal | 2019 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC) | en_US |
dc.citation.spage | 0 | en_US |
dc.citation.epage | 0 | en_US |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.contributor.department | Department of Computer Science | en_US |
dc.identifier.wosnumber | WOS:000519086300048 | en_US |
dc.citation.woscount | 0 | en_US |
Appears in Collections: | Conference Papers