Full metadata record
DC Field | Value | Language
dc.contributor.author | Chien, Jen-Tzung | en_US
dc.contributor.author | Kuo, Che-Yu | en_US
dc.date.accessioned | 2019-10-05T00:09:45Z | -
dc.date.available | 2019-10-05T00:09:45Z | -
dc.date.issued | 2019-01-01 | en_US
dc.identifier.isbn | 978-1-4799-8131-1 | en_US
dc.identifier.issn | 1520-6149 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/152935 | -
dc.description.abstract | Monaural source separation based on a recurrent neural network is learned to characterize the sequential patterns in source signals through dynamic states that are propagated through time. The hidden states are assumed to be deterministic along a single path where a shared long short-term memory (LSTM) is used. Such assumptions may not faithfully reflect the randomness and variety of temporal features in mixed signals. To strengthen the capability of the LSTM for source separation, we propose a stochastic Markov LSTM in which the regression from the mixed signal to its source signals is learned with a stochastic indicator of the Markov state, which selects the state-dependent LSTM for signal separation at each time step. A set of LSTMs is learned to capture the structural diversity of temporal signals or the stochastic trajectory of state transitions for sequential prediction. A new state machine is constructed to learn the complicated latent semantics in the heterogeneous and structural mappings between mixed signals and source signals. Gumbel-softmax sampling is implemented in the backpropagation algorithm with discrete Markov states. Experiments on speech enhancement illustrate the merit of the proposed stochastic Markov LSTM in terms of the short-time objective intelligibility measure of the separated speech. | en_US
dc.language.iso | en_US | en_US
dc.subject | Source separation | en_US
dc.subject | deep sequential learning | en_US
dc.subject | stochastic transition | en_US
dc.subject | Markov state | en_US
dc.subject | latent variable model | en_US
dc.title | STOCHASTIC MARKOV RECURRENT NEURAL NETWORK FOR SOURCE SEPARATION | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | en_US
dc.citation.spage | 8072 | en_US
dc.citation.epage | 8076 | en_US
dc.contributor.department | 電機工程學系 | zh_TW
dc.contributor.department | Department of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000482554008062 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conferences Paper
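
The abstract describes a set of state-dependent LSTMs in which a discrete Markov state indicator, sampled with Gumbel-softmax, selects which LSTM separates the mixed signal at each time step. Below is a minimal sketch of that idea in PyTorch. It is not the authors' implementation; the class name, the number of states, the temperature tau, and the feature dimensions are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticMarkovLSTMSketch(nn.Module):
    """Illustrative sketch: K state-dependent LSTM cells gated by a sampled Markov state."""

    def __init__(self, input_dim, hidden_dim, num_states, tau=1.0):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.tau = tau  # Gumbel-softmax temperature (assumed hyperparameter)
        # One LSTM cell per discrete Markov state.
        self.cells = nn.ModuleList(
            nn.LSTMCell(input_dim, hidden_dim) for _ in range(num_states)
        )
        # Transition logits: previous hidden state -> distribution over the next state.
        self.transition = nn.Linear(hidden_dim, num_states)
        # Regression head from the hidden state to a separated-source frame.
        self.output = nn.Linear(hidden_dim, input_dim)

    def forward(self, mixed):
        # mixed: (batch, time, input_dim), e.g. spectral frames of the mixed signal.
        batch, time, _ = mixed.shape
        h = mixed.new_zeros(batch, self.hidden_dim)
        c = mixed.new_zeros(batch, self.hidden_dim)
        frames = []
        for t in range(time):
            x_t = mixed[:, t, :]
            # Sample a one-hot Markov state indicator with Gumbel-softmax so the
            # discrete choice stays differentiable during backpropagation.
            z = F.gumbel_softmax(self.transition(h), tau=self.tau, hard=True)  # (batch, K)
            # Run every state-dependent cell, then keep the one picked by z.
            hs, cs = zip(*(cell(x_t, (h, c)) for cell in self.cells))
            h = torch.einsum('bk,kbd->bd', z, torch.stack(hs))
            c = torch.einsum('bk,kbd->bd', z, torch.stack(cs))
            frames.append(self.output(h))
        return torch.stack(frames, dim=1)  # estimated source frames

# Toy usage on random tensors standing in for mixed-signal features.
model = StochasticMarkovLSTMSketch(input_dim=129, hidden_dim=64, num_states=3)
estimate = model(torch.randn(8, 50, 129))
print(estimate.shape)  # torch.Size([8, 50, 129])
```

With hard=True, F.gumbel_softmax returns a one-hot selection in the forward pass while backpropagating through the underlying softmax (a straight-through estimator), which is one common way to train through discrete state choices as the abstract suggests.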