Full metadata record
DC Field | Value | Language
dc.contributor.author | Tsai, Wen-Jiin | en_US
dc.contributor.author | Chen, Jian-Yu | en_US
dc.date.accessioned | 2014-12-08T15:38:21Z | -
dc.date.available | 2014-12-08T15:38:21Z | -
dc.date.issued | 2010-12-01 | en_US
dc.identifier.issn | 1051-8215 | en_US
dc.identifier.uri | http://dx.doi.org/10.1109/TCSVT.2010.2087816 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/26262 | -
dc.description.abstract | Transmission of compressed video signals over error-prone networks exposes the information to losses and errors. To reduce the effects of these losses and errors, this paper presents a joint spatial-temporal estimation method which takes advantage of data correlation in these two domains for better recovery of the lost information. The method is designed for the hybrid multiple description coding which splits video signals along spatial and temporal dimensions. In particular, the proposed method includes fixed and content-adaptive approaches for estimation method selection. The fixed approach selects the estimation method based on description loss cases, while the adaptive approach selects the method according to pixel gradients. The experimental results demonstrate that improved error resilience can be accomplished by the proposed estimation method. | en_US
dc.language.iso | en_US | en_US
dc.subject | Lost description estimation | en_US
dc.subject | multiple description coding | en_US
dc.subject | spatial segmentation | en_US
dc.subject | temporal segmentation | en_US
dc.title | Joint Temporal and Spatial Error Concealment for Multiple Description Video Coding | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/TCSVT.2010.2087816 | en_US
dc.identifier.journal | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY | en_US
dc.citation.volume | 20 | en_US
dc.citation.issue | 12 | en_US
dc.citation.spage | 1822 | en_US
dc.citation.epage | 1833 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000286932600015 | -
dc.citation.woscount | 5 | -
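As a rough illustration of the content-adaptive selection described in the abstract, the sketch below switches between spatial interpolation and temporal replacement for a single lost pixel based on a local gradient measure. This is a minimal sketch under stated assumptions, not the paper's actual algorithm: the function name, the NaN convention for lost samples, the neighbour-spread gradient measure, and the threshold are all hypothetical.

```python
# Illustrative sketch only: gradient-driven choice between spatial and
# temporal concealment for one lost pixel. Names, threshold, and the
# gradient proxy are assumptions, not the method from the paper.
import numpy as np

def conceal_pixel(cur, ref, y, x, grad_thresh=20.0):
    """Estimate a lost pixel at (y, x).

    cur : 2-D float array of the damaged frame (lost samples are NaN).
    ref : 2-D float array of the previous (reference) frame.
    """
    # Collect the 4-connected neighbours that were received correctly.
    neigh = [cur[y + dy, x + dx]
             for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
             if 0 <= y + dy < cur.shape[0]
             and 0 <= x + dx < cur.shape[1]
             and not np.isnan(cur[y + dy, x + dx])]
    if not neigh:
        # No usable spatial neighbours: fall back to the co-located
        # pixel of the reference frame (temporal estimation).
        return float(ref[y, x])

    # Crude local-gradient proxy: spread of the received neighbours.
    local_gradient = max(neigh) - min(neigh)

    if local_gradient < grad_thresh:
        # Smooth region: spatial interpolation of the received neighbours.
        return float(np.mean(neigh))
    # Strong edge or texture: prefer the temporal predictor.
    return float(ref[y, x])
```

In this toy version, low neighbour spread is taken to mean a smooth region where spatial averaging is safe, while high spread suggests an edge that temporal copying preserves better; the paper's fixed approach (driven by which description was lost) is not modelled here.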
Appears in Collections: Journal Articles


Files in This Item:

  1. 000286932600015.pdf

If the file is a zip archive, download and unzip it, then open index.html in the extracted folder with a browser to view the full text.