Full metadata record
DC Field    Value    Language
dc.contributor.author    Tsai, Chia-Ming    en_US
dc.contributor.author    Lin, Chia-Wen    en_US
dc.contributor.author    Lin, Weisi    en_US
dc.contributor.author    Peng, Wen-Hsiao    en_US
dc.date.accessioned    2017-04-21T06:49:58Z    -
dc.date.available    2017-04-21T06:49:58Z    -
dc.date.issued    2009    en_US
dc.identifier.isbn    978-1-4244-5653-6    en_US
dc.identifier.uri    http://hdl.handle.net/11536/134908    -
dc.description.abstract    We conduct subjective tests to evaluate the performance of scalable video coding with different spatial-domain bit-allocation methods, visual attention models, and motion feature extractors in the literature. For spatial-domain bit allocation, we use the selective enhancement and quality layer assignment methods. For characterizing visual attention, we use the motion attention model and perceptual quality significant map. For motion features, we adopt motion vectors from hierarchical B-picture coding and optical flow. Experimental results show that a more accurate visual attention model leads to better perceptual quality. In cooperation with a visual attention model, the selective enhancement method, compared to the quality layer assignment, achieves better subjective quality when an ROI has enough bit allocation and its texture is not complex. The quality layer assignment method is suitable for region-wise quality enhancement due to its frame-based allocation nature.    en_US
dc.language.iso    en_US    en_US
dc.subject    Visual attention model    en_US
dc.subject    Scalable video coding    en_US
dc.subject    Perceptual coding    en_US
dc.subject    Video adaptation    en_US
dc.title    A COMPARATIVE STUDY ON ATTENTION-BASED RATE ADAPTATION FOR SCALABLE VIDEO CODING    en_US
dc.type    Proceedings Paper    en_US
dc.identifier.journal    2009 16TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-6    en_US
dc.citation.spage    969    en_US
dc.citation.epage    +    en_US
dc.contributor.department    資訊工程學系    zh_TW
dc.contributor.department    Department of Computer Science    en_US
dc.identifier.wosnumber    WOS:000280464300242    en_US
dc.citation.woscount    0    en_US
Appears in Collections: Conference Papers