Full metadata record
DC Field | Value | Language
dc.contributor.author | Shen, YC | en_US
dc.contributor.author | You, SCD | en_US
dc.date.accessioned | 2014-12-08T15:26:13Z | -
dc.date.available | 2014-12-08T15:26:13Z | -
dc.date.issued | 2003 | en_US
dc.identifier.isbn | 0-7803-8185-8 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/18612 | -
dc.description.abstract | Rendering spatial sound using a headset for five-channel signals requires convolving the incoming signals with the head-related impulse responses representing the sound directions of the five channels. Although the idea is simple, the implementation requires a large amount of computation. In this paper, we propose a simple approach to reduce the computational burden to about one-fifth of that of direct implementation. Compared with the approach based on the CAPZ (Common-Acoustical-Pole and Zero) model, the proposed approach has a better error performance with almost the same computation. | en_US
dc.language.iso | en_US | en_US
dc.title | Rendering spatial sound on headsets for five-channel audio | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | ICICS-PCM 2003, VOLS 1-3, PROCEEDINGS | en_US
dc.citation.spage | 715 | en_US
dc.citation.epage | 718 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000222026600146 | -
Appears in Collections: Conference Papers
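The abstract in the record above describes the direct implementation that the paper sets out to accelerate: each of the five channel signals is convolved with the left- and right-ear head-related impulse responses (HRIRs) for its loudspeaker direction, and the per-ear results are summed. The following is a minimal sketch of that direct approach only, for orientation; it assumes NumPy/SciPy and hypothetical HRIR data, and it does not reproduce the paper's proposed computation-saving method or the CAPZ-based approach it is compared against.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(channels, hrirs):
    """Direct binaural rendering of five-channel audio (illustrative sketch).

    channels: dict mapping a channel name (e.g. "L", "R", "C", "Ls", "Rs")
              to a 1-D array of samples (all the same length).
    hrirs:    dict mapping the same names to a (left-ear HRIR, right-ear HRIR)
              pair of 1-D arrays for that channel's direction (all the same length).
    Returns an (N + hrir_len - 1, 2) array of left/right headphone signals.
    """
    out_left = out_right = None
    for name, signal in channels.items():
        h_left, h_right = hrirs[name]
        # One convolution per ear per channel: ten convolutions for five channels,
        # which is the computational burden the paper aims to reduce.
        left = fftconvolve(signal, h_left)
        right = fftconvolve(signal, h_right)
        if out_left is None:
            out_left, out_right = left, right
        else:
            out_left += left
            out_right += right
    return np.stack([out_left, out_right], axis=1)
```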