Full metadata record
DC Field | Value | Language
dc.contributor.author | Ma, Chih-Yao | en_US
dc.contributor.author | Hang, Hsueh-Ming | en_US
dc.date.accessioned | 2019-04-03T06:38:51Z | -
dc.date.available | 2019-04-03T06:38:51Z | -
dc.date.issued | 2015-01-01 | en_US
dc.identifier.issn | 1534-7362 | en_US
dc.identifier.uri | http://dx.doi.org/10.1167/15.6.19 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/128110 | -
dc.description.abstract | Most previous studies on visual saliency focused on two-dimensional (2D) scenes. With the rapid growth of three-dimensional (3D) video applications, it is desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movements of each subject. In addition, this database contains 475 computed depth maps. Given the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. We then designed a learning-based visual attention model to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance saliency estimation accuracy, particularly for close-up objects hidden in complex-texture backgrounds. In addition, we examined the effectiveness of various low-, mid-, and high-level features for saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source code for the proposed saliency model and evaluation methods are available on our website. | en_US
dc.language.iso | en_US | en_US
dc.subject | visual attention | en_US
dc.subject | saliency map | en_US
dc.subject | depth saliency | en_US
dc.subject | eye-fixation database | en_US
dc.title | Learning-based saliency model with depth information | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1167/15.6.19 | en_US
dc.identifier.journal | JOURNAL OF VISION | en_US
dc.citation.volume | 15 | en_US
dc.citation.issue | 6 | en_US
dc.citation.spage | 0 | en_US
dc.citation.epage | 0 | en_US
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US
dc.identifier.wosnumber | WOS:000357858600019 | en_US
dc.citation.woscount | 7 | en_US
Appears in Collections: Articles


Files in This Item:

  1. eafaeade50c84e03bb57d43a9b3f8c21.pdf

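The abstract notes that MATLAB source code for the proposed saliency model and its evaluation methods is distributed on the authors' website; this record itself contains no code. Purely as an illustration, and not the authors' released implementation, the sketch below shows one fixation-based evaluation that is standard in the saliency literature: an AUC score that treats saliency values at fixated pixels as positives and values at randomly sampled non-fixated pixels as negatives. The function name and its inputs (salMap, fixMap, nNeg) are hypothetical.

  % Illustrative sketch only, not the released code: AUC-style evaluation of a
  % saliency map against a binary fixation map for one image.
  function auc = saliency_auc(salMap, fixMap, nNeg)
      % Normalize the saliency map to [0, 1].
      salMap = double(salMap);
      salMap = (salMap - min(salMap(:))) / (max(salMap(:)) - min(salMap(:)) + eps);

      % Positives: saliency at fixated pixels. Negatives: saliency at up to
      % nNeg randomly sampled non-fixated pixels.
      pos = salMap(fixMap > 0);
      negIdx = find(fixMap == 0);
      negIdx = negIdx(randperm(numel(negIdx), min(nNeg, numel(negIdx))));
      neg = salMap(negIdx);

      % Rank-based AUC (Wilcoxon-Mann-Whitney statistic, ties ignored).
      scores = [pos(:); neg(:)];
      nPos = numel(pos);
      [~, order] = sort(scores, 'ascend');
      ranks = zeros(numel(scores), 1);
      ranks(order) = (1:numel(scores)).';
      auc = (sum(ranks(1:nPos)) - nPos * (nPos + 1) / 2) / (nPos * numel(neg));
  end

In practice such a score is averaged over all test images when comparing models; a shuffled-AUC variant, which draws the negative samples from fixations recorded on other images, is also widely used to discount center bias.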