Full metadata record
DC Field | Value | Language
dc.contributor.author | Lai, Chen-Yen | en_US
dc.contributor.author | Lo, Yu-Wen | en_US
dc.contributor.author | Shen, Yih-Liang | en_US
dc.contributor.author | Chi, Tai-Shih | en_US
dc.date.accessioned | 2018-08-21T05:57:02Z | -
dc.date.available | 2018-08-21T05:57:02Z | -
dc.date.issued | 2017-01-01 | en_US
dc.identifier.issn | 2309-9402 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/146964 | -
dc.description.abstract | In this paper, we propose a plastic auditory model based neural network for speech enhancement. The proposed system integrates a spectro-temporal analytical auditory model with a multi-layer fully-connected network to form a quasi-CNN structure. The initial kernels of the convolutional layer are derived from the neuro-physiological auditory model. To simulate the plasticity of cortical neurons for attentional hearing, the kernels are allowed to adjust themselves according to the task at hand. For the application of speech enhancement, the Fourier spectrogram instead of the auditory spectrogram is used as input to the proposed neural network such that the cleaned speech signal can be well reconstructed. The proposed system performs comparably with standard DNN and CNN systems when plenty of resources are available. Meanwhile, under the limited-resource condition, the proposed system outperforms the standard systems in all test settings. | en_US
dc.language.iso | en_US | en_US
dc.title | Plastic multi-resolution auditory model based neural network for speech enhancement | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2017 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC 2017) | en_US
dc.citation.spage | 605 | en_US
dc.citation.epage | 609 | en_US
dc.contributor.department | 電機工程學系 | zh_TW
dc.contributor.department | Department of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000425879400103 | en_US
Appears in Collections: Conference Papers
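The abstract above describes a quasi-CNN whose convolutional kernels are initialized from a spectro-temporal auditory model and then left trainable ("plastic") for the enhancement task. The following is a minimal, hypothetical sketch of that idea only, not the authors' implementation: PyTorch, the names `PlasticAuditoryNet` and `gabor_like_kernel`, the Gabor-like stand-in filters, and all kernel shapes, modulation rates/scales, and layer sizes are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): conv kernels initialized from
# Gabor-like spectro-temporal filters, kept trainable, followed by
# fully-connected layers mapping a Fourier-spectrogram patch to an
# enhanced spectrum for the center frame.
import math

import torch
import torch.nn as nn


def gabor_like_kernel(rate: float, scale: float, size: int = 15) -> torch.Tensor:
    """Stand-in for an auditory-model spectro-temporal filter: a 2-D Gaussian
    envelope modulated along the time and frequency axes."""
    t = torch.linspace(-1.0, 1.0, size)
    f = torch.linspace(-1.0, 1.0, size)
    T, F = torch.meshgrid(t, f, indexing="ij")
    envelope = torch.exp(-(T ** 2 + F ** 2) / 0.5)
    carrier = torch.cos(2 * math.pi * (rate * T + scale * F))
    kernel = envelope * carrier
    return kernel / kernel.norm()


class PlasticAuditoryNet(nn.Module):
    def __init__(self, n_kernels: int = 8, kernel_size: int = 15,
                 freq_bins: int = 257, context: int = 11):
        super().__init__()
        # Convolutional front end: 1 input channel (log-magnitude Fourier
        # spectrogram patch), n_kernels spectro-temporal output channels.
        self.conv = nn.Conv2d(1, n_kernels, kernel_size, padding="same")
        with torch.no_grad():
            for i in range(n_kernels):
                rate = 1.0 + (i % 4)        # assumed temporal modulation rates
                scale = 0.5 * (1 + i // 4)  # assumed spectral modulation scales
                self.conv.weight[i, 0] = gabor_like_kernel(rate, scale, kernel_size)
            self.conv.bias.zero_()
        # The kernels remain trainable, so back-propagation can adapt them to
        # the enhancement task ("plasticity" of the auditory filters).
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_kernels * freq_bins * context, 1024),
            nn.ReLU(),
            nn.Linear(1024, freq_bins),  # enhanced spectrum, center frame
        )

    def forward(self, spec_patch: torch.Tensor) -> torch.Tensor:
        # spec_patch: (batch, 1, freq_bins, context) noisy log-magnitude patches
        return self.fc(torch.relu(self.conv(spec_patch)))


if __name__ == "__main__":
    net = PlasticAuditoryNet()
    noisy = torch.randn(4, 1, 257, 11)  # dummy noisy spectrogram patches
    print(net(noisy).shape)             # torch.Size([4, 257])
```

Because the filters are used as an initialization rather than fixed weights, the same network reduces to a standard CNN if the initialization is replaced with random weights, which is the comparison the abstract draws under full- and limited-resource conditions.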