Full metadata record
DC Field | Value | Language
dc.contributor.author | Chou, Hsin | en_US
dc.contributor.author | Chen, Ming-Tso | en_US
dc.contributor.author | Chi, Tai-Shih | en_US
dc.date.accessioned | 2019-04-02T06:04:14Z | -
dc.date.available | 2019-04-02T06:04:14Z | -
dc.date.issued | 2018-01-01 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/150759 | -
dc.description.abstract | In this paper, we build a hybrid neural network (NN) for singing melody extraction from polyphonic music by imitating human pitch perception. Human hearing involves two pitch perception models, the spectral model and the temporal model, depending on whether harmonics are resolved. We first use NNs to implement the individual models and evaluate their performance on the task of singing melody extraction. We then combine the NNs into a composite NN that simulates the duplex model, in which the temporal model complements the spectral model's pitch perception of unresolved harmonics. Simulation results show that the proposed composite NN outperforms other conventional methods in singing melody extraction. | en_US
dc.language.iso | en_US | en_US
dc.subject | pitch perception | en_US
dc.subject | duplex model | en_US
dc.subject | melody extraction | en_US
dc.subject | deep neural network | en_US
dc.subject | CNN | en_US
dc.title | A HYBRID NEURAL NETWORK BASED ON THE DUPLEX MODEL OF PITCH PERCEPTION FOR SINGING MELODY EXTRACTION | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | en_US
dc.citation.spage | 381 | en_US
dc.citation.epage | 385 | en_US
dc.contributor.department | 電機工程學系 | zh_TW
dc.contributor.department | Department of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000446384600076 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conferences Paper