Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lin, Yong-Xiang | en_US |
dc.contributor.author | Tan, Daniel Stanley | en_US |
dc.contributor.author | Chen, Yung-Yao | en_US |
dc.contributor.author | Huang, Ching-Chun | en_US |
dc.contributor.author | Hua, Kai-Lung | en_US |
dc.date.accessioned | 2020-10-05T02:01:57Z | - |
dc.date.available | 2020-10-05T02:01:57Z | - |
dc.date.issued | 2020-07-01 | en_US |
dc.identifier.issn | 1070-986X | en_US |
dc.identifier.uri | http://dx.doi.org/10.1109/MMUL.2020.3008529 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/155364 | - |
dc.description.abstract | Self-driving cars leverage semantic segmentation to understand an urban scene. However, it is costly to collect segmentation labels, so synthetic datasets are used to train segmentation models. Unfortunately, the synthetic-to-real domain shift causes these models to perform poorly. Prior works use adversarial training to align features of both synthetic and real-world images. We observe that background objects tend to be similar across domains, while foreground objects tend to have more variations. Using this insight, we propose an adaptation method that uses foreground and background cues and adapts them separately. We also propose a mask-aware gated discriminator that learns soft masks from the input foreground and background masks instead of naively performing binary masking, which immediately removes information outside of the predicted masks. We evaluate our method on two different datasets and show that it outperforms several state-of-the-art baselines, verifying the effectiveness of our approach. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Image segmentation | en_US |
dc.subject | Semantics | en_US |
dc.subject | Adaptation models | en_US |
dc.subject | Logic gates | en_US |
dc.subject | Automobiles | en_US |
dc.subject | Training data | en_US |
dc.subject | Computer science | en_US |
dc.subject | Autonomous automobiles | en_US |
dc.title | Domain Adaptation With Foreground/Background Cues and Gated Discriminators | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/MMUL.2020.3008529 | en_US |
dc.identifier.journal | IEEE MULTIMEDIA | en_US |
dc.citation.volume | 27 | en_US |
dc.citation.issue | 3 | en_US |
dc.citation.spage | 44 | en_US |
dc.citation.epage | 53 | en_US |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.contributor.department | Department of Computer Science | en_US |
dc.identifier.wosnumber | WOS:000567430300006 | en_US |
dc.citation.woscount | 0 | en_US |
Appears in Collections: | Journal Articles |
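
The abstract above describes a mask-aware gated discriminator: instead of hard-multiplying features by a binary foreground/background mask (which zeroes out everything outside the mask), a small learned module turns the mask into a soft gate that modulates the features. The following is a minimal PyTorch sketch of that idea only; all names (`MaskAwareGatedDiscriminator`, `gate_conv`, channel sizes) are hypothetical, since this record does not specify the paper's actual architecture.

```python
# Hypothetical sketch of a mask-aware gated discriminator (PyTorch).
# The record gives only the high-level idea; layer shapes and names here
# are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class MaskAwareGatedDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 256):
        super().__init__()
        # Learn a soft gate from the binary mask instead of applying
        # the mask directly, so information outside the predicted mask
        # is attenuated rather than immediately removed.
        self.gate_conv = nn.Sequential(
            nn.Conv2d(1, in_channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # soft gate values in (0, 1)
        )
        # Standard patch-style domain classifier head.
        self.classifier = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, features: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # features: (N, C, H, W) segmentation features
        # mask: (N, 1, H, W) binary foreground or background mask,
        # assumed already resized to the feature resolution.
        gate = self.gate_conv(mask)    # learned soft mask
        gated = features * gate        # modulate rather than zero out
        return self.classifier(gated)  # per-patch synthetic/real logits

if __name__ == "__main__":
    disc = MaskAwareGatedDiscriminator(in_channels=256)
    feats = torch.randn(2, 256, 32, 32)
    fg_mask = (torch.rand(2, 1, 32, 32) > 0.5).float()
    print(disc(feats, fg_mask).shape)  # torch.Size([2, 1, 8, 8])
```

Per the abstract, one such discriminator would be fed foreground cues and another background cues, so each domain-alignment loss adapts its region separately.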