Title: | Learning-based saliency model with depth information |
Authors: | Ma, Chih-Yao; Hang, Hsueh-Ming; Department of Electronics Engineering and Institute of Electronics |
Keywords: | visual attention; saliency map; depth saliency; eye-fixation database |
Issue Date: | 1-Jan-2015 |
Abstract: | Most previous studies of visual saliency have focused on two-dimensional (2D) scenes. With the rapid growth of three-dimensional (3D) video applications, it is highly desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images viewed by 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to record the eye movements of each subject. In addition, the database contains 475 computed depth maps. Given the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. We then designed a learning-based visual attention model to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can improve saliency estimation accuracy, particularly for close-up objects hidden in a complex-texture background. We also examined the effectiveness of various low-, mid-, and high-level features for saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source code for the proposed saliency model and evaluation methods are available on our website. |
URI: | http://dx.doi.org/10.1167/15.6.19 http://hdl.handle.net/11536/128110 |
ISSN: | 1534-7362 |
DOI: | 10.1167/15.6.19 |
Journal: | JOURNAL OF VISION |
Volume: | 15 |
Issue: | 6 |
Start Page: | 0 |
End Page: | 0 |
Appears in Collections: | Journal Articles |
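The abstract above describes a learning-based model that combines conventional 2D features with a depth map and depth-derived features, trained against human fixation data. The sketch below is a minimal, hypothetical illustration of that general idea in Python; it is not the authors' released MATLAB code. The particular per-pixel features (intensity, color channels, raw depth), the logistic-regression learner, and all function names are assumptions made only for illustration.

```python
# Minimal sketch: learn a per-pixel saliency predictor from 2D + depth features.
# All feature choices and the logistic-regression learner are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_features(rgb, depth):
    """Stack simple per-pixel features: intensity, three color channels, and depth."""
    intensity = rgb.mean(axis=2)
    feats = np.dstack([intensity, rgb[..., 0], rgb[..., 1], rgb[..., 2], depth])
    h, w, f = feats.shape
    return feats.reshape(h * w, f)

def train_saliency_model(images, depths, fixation_maps):
    """Fit a linear model mapping per-pixel features to binary fixation labels."""
    X = np.vstack([build_features(img, d) for img, d in zip(images, depths)])
    y = np.concatenate([fm.reshape(-1) > 0 for fm in fixation_maps])
    model = LogisticRegression(max_iter=200)
    model.fit(X, y.astype(int))
    return model

def predict_saliency(model, rgb, depth):
    """Return a saliency map with values in [0, 1] for one image/depth pair."""
    h, w = depth.shape
    probs = model.predict_proba(build_features(rgb, depth))[:, 1]
    return probs.reshape(h, w)
```

In a sketch like this, the depth channel simply enters the learned weighting alongside the 2D features; a richer implementation would add depth-derived features (e.g., depth contrast) and a stronger learner, as the paper's description suggests.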