Full metadata record
DC Field: Value (Language)
dc.contributor.author: Zhao, Zhongyao (en_US)
dc.contributor.author: Liu, Chengyu (en_US)
dc.contributor.author: Li, Yaowei (en_US)
dc.contributor.author: Li, Yixuan (en_US)
dc.contributor.author: Wang, Jingyu (en_US)
dc.contributor.author: Lin, Bor-Shyh (en_US)
dc.contributor.author: Li, Jianqing (en_US)
dc.date.accessioned: 2019-05-02T00:25:57Z
dc.date.available: 2019-05-02T00:25:57Z
dc.date.issued: 2019-01-01 (en_US)
dc.identifier.issn: 2169-3536 (en_US)
dc.identifier.uri: http://dx.doi.org/10.1109/ACCESS.2019.2900719 (en_US)
dc.identifier.uri: http://hdl.handle.net/11536/151670
dc.description.abstract: Progress in wearable techniques makes long-term daily electrocardiogram (ECG) monitoring possible. However, long-term wearable ECGs can be significantly contaminated by various noises, which affect the detection and diagnosis of cardiovascular diseases (CVDs). The situation is more serious for wearable ECG screening, where the data volume is huge and doctors cannot visually check the signal quality episode by episode. Therefore, automatic and accurate noise rejection for big-data wearable ECGs is urgently needed. This paper addresses this issue and proposes a noise rejection method for wearable ECGs based on the combination of the modified frequency slice wavelet transform (MFSWT) and a convolutional neural network (CNN). Wearable ECGs were recorded using the newly developed 12-lead Lenovo smart ECG vest at a sampling rate of 500 Hz and a resolution of 16 bits. One thousand 10-s ECG segments were selected and manually labeled into three quality types: clinically useful segments with good signal quality (type A), clinically useful segments with poor signal quality (type B), and clinically useless segments (pure noise, type C). Each of the 1,000 10-s ECG segments was transformed into a 2-D time-frequency (T-F) image using the MFSWT, with a pixel size of 200 x 50. The 2-D grayscale images from the MFSWT were then fed into a 13-layer CNN model to train the classification models. Results from standard five-fold cross-validation showed that the proposed combination of MFSWT and CNN achieved the highest classification accuracy of 86.3%, which was higher than comparable methods based on the continuous wavelet transform (CWT) and artificial neural networks (ANN). The combination of MFSWT and CNN also had good computational efficiency. This paper indicates that the combination of MFSWT and CNN is a promising method for automatic identification of noisy segments in wearable ECG recordings. (en_US)
dc.language.iso: en_US (en_US)
dc.subject: Wearable ECG (en_US)
dc.subject: signal quality assessment (SQA) (en_US)
dc.subject: convolutional neural network (CNN) (en_US)
dc.subject: modified frequency slice wavelet transform (MFSWT) (en_US)
dc.title: Noise Rejection for Wearable ECGs Using Modified Frequency Slice Wavelet Transform and Convolutional Neural Networks (en_US)
dc.type: Article (en_US)
dc.identifier.doi: 10.1109/ACCESS.2019.2900719 (en_US)
dc.identifier.journal: IEEE ACCESS (en_US)
dc.citation.volume: 7 (en_US)
dc.citation.spage: 34060 (en_US)
dc.citation.epage: 34067 (en_US)
dc.contributor.department: 影像與生醫光電研究所 (zh_TW)
dc.contributor.department: Institute of Imaging and Biomedical Photonics (en_US)
dc.identifier.wosnumber: WOS:000463247500001 (en_US)
dc.citation.woscount: 0 (en_US)
Appears in Collections: Articles
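The abstract describes a concrete preprocessing step: each 10-s ECG segment sampled at 500 Hz is converted into a 200 x 50 grayscale time-frequency image before being fed to the CNN. The sketch below illustrates that shape of pipeline on a synthetic signal. It is only a rough stand-in: the paper uses the MFSWT, which is not available in standard libraries, so a plain STFT magnitude is substituted here, and the function names, window/hop parameters, and the synthetic "ECG-like" test signal are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

FS = 500          # sampling rate (Hz), as stated in the abstract
SEG_SECONDS = 10  # segment length (s), as stated in the abstract

def tf_image(sig, n_fft=256, hop=25, out_shape=(200, 50)):
    """Crude STFT-based time-frequency image (a stand-in for the MFSWT).

    Returns a grayscale image of shape out_shape (time x frequency),
    normalized to [0, 1], matching the 200 x 50 pixel size in the paper.
    All parameter values here are illustrative assumptions.
    """
    win = np.hanning(n_fft)
    frames = []
    for start in range(0, len(sig) - n_fft + 1, hop):
        # Magnitude spectrum of one windowed frame.
        frames.append(np.abs(np.fft.rfft(sig[start:start + n_fft] * win)))
    img = np.array(frames)  # shape: (n_frames, n_fft // 2 + 1)

    # Downsample by block-averaging onto the target 200 x 50 pixel grid.
    t_idx = np.linspace(0, img.shape[0], out_shape[0] + 1).astype(int)
    f_idx = np.linspace(0, img.shape[1], out_shape[1] + 1).astype(int)
    out = np.empty(out_shape)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            block = img[t_idx[i]:t_idx[i + 1], f_idx[j]:f_idx[j + 1]]
            out[i, j] = block.mean() if block.size else 0.0

    # Normalize to [0, 1] so the result can be treated as a grayscale image.
    out -= out.min()
    rng = out.max()
    return out / rng if rng > 0 else out

# Synthetic 10-s "ECG-like" segment: a 1.2 Hz rhythm plus white noise.
t = np.arange(FS * SEG_SECONDS) / FS
sig = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
img = tf_image(sig)  # 200 x 50 grayscale image, ready for a CNN classifier
```

In the paper's pipeline, each such image would then be passed to the 13-layer CNN for three-way quality classification (types A, B, and C).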