Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ma, Chih-Yao | en_US |
dc.contributor.author | Hang, Hsueh-Ming | en_US |
dc.date.accessioned | 2019-04-03T06:38:51Z | - |
dc.date.available | 2019-04-03T06:38:51Z | - |
dc.date.issued | 2015-01-01 | en_US |
dc.identifier.issn | 1534-7362 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1167/15.6.19 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/128110 | - |
dc.description.abstract | Most previous studies of visual saliency have focused on two-dimensional (2D) scenes. With the rapid growth of three-dimensional (3D) video applications, it is important to understand how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images viewed by 16 subjects; a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) recorded each subject's eye movements. The database also contains 475 computed depth maps. Given the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. We then designed a learning-based visual attention model to predict human attention. In addition to popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information improves saliency estimation accuracy, particularly for close-up objects hidden in a complex-texture background. We also examined the effectiveness of various low-, mid-, and high-level features for saliency prediction. Compared with state-of-the-art 2D and 3D saliency estimation models, our method performs better on the 3D test images. The eye-tracking database and the MATLAB source code for the proposed saliency model and evaluation methods are available on our website. | en_US |
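The abstract describes augmenting conventional 2D saliency features with a depth map and its derived features. As a rough illustration of that idea (not the authors' actual learning-based model, whose MATLAB implementation is available on their website), the sketch below linearly combines normalized 2D and depth feature maps with hypothetical weights; the feature values, weights, and `combine_saliency` helper are all assumptions for illustration only.

```python
import numpy as np

def combine_saliency(feature_maps, weights):
    """Linearly combine min-max-normalized feature maps into one saliency map.

    A toy stand-in for a learned combination: each map (e.g. a 2D contrast
    map or a depth-derived map) is normalized to [0, 1], weighted, and summed.
    """
    combined = np.zeros_like(feature_maps[0], dtype=float)
    for fmap, w in zip(feature_maps, weights):
        rng = fmap.max() - fmap.min()
        norm = (fmap - fmap.min()) / rng if rng > 0 else np.zeros_like(fmap, dtype=float)
        combined += w * norm
    # Renormalize the final map to [0, 1]
    return combined / combined.max() if combined.max() > 0 else combined

# Hypothetical 2x2 feature maps: a 2D intensity-contrast map plus a depth map
# (larger depth values here mean closer to the viewer).
intensity = np.array([[0.2, 0.8], [0.4, 0.1]])
depth = np.array([[0.9, 0.1], [0.5, 0.3]])
sal = combine_saliency([intensity, depth], weights=[0.6, 0.4])
```

In the paper the relative importance of each feature is learned from eye-fixation data rather than fixed by hand; the point of the sketch is only that a close, low-contrast region can gain saliency once the depth channel contributes.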
dc.language.iso | en_US | en_US |
dc.subject | visual attention | en_US |
dc.subject | saliency map | en_US |
dc.subject | depth saliency | en_US |
dc.subject | eye-fixation database | en_US |
dc.title | Learning-based saliency model with depth information | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1167/15.6.19 | en_US |
dc.identifier.journal | JOURNAL OF VISION | en_US |
dc.citation.volume | 15 | en_US |
dc.citation.issue | 6 | en_US |
dc.citation.spage | 0 | en_US |
dc.citation.epage | 0 | en_US |
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW |
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US |
dc.identifier.wosnumber | WOS:000357858600019 | en_US |
dc.citation.woscount | 7 | en_US |
Appears in Collections: | Articles |