Full metadata record
DC Field | Value | Language
dc.contributor.author | Chou, Kuang-Pen | en_US
dc.contributor.author | Prasad, Mukesh | en_US
dc.contributor.author | Wu, Di | en_US
dc.contributor.author | Sharma, Nabin | en_US
dc.contributor.author | Li, Dong-Lin | en_US
dc.contributor.author | Lin, Yu-Feng | en_US
dc.contributor.author | Blumenstein, Michael | en_US
dc.contributor.author | Lin, Wen-Chieh | en_US
dc.contributor.author | Lin, Chin-Teng | en_US
dc.date.accessioned | 2018-08-21T05:53:31Z | -
dc.date.available | 2018-08-21T05:53:31Z | -
dc.date.issued | 2018-01-01 | en_US
dc.identifier.issn | 2169-3536 | en_US
dc.identifier.uri | http://dx.doi.org/10.1109/ACCESS.2018.2809552 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/144800 | -
dc.description.abstract | Automated human action recognition has the potential to play an important role in public security, for example in relation to multiview surveillance videos taken in public places such as train stations or airports. This paper compares three practical, reliable, and generic systems for multiview video-based human action recognition: the nearest neighbor classifier, the Gaussian mixture model classifier, and the nearest mean classifier. To describe actions performed in different views, view-invariant features are proposed for multiview action recognition. These features are obtained by extracting holistic features at different temporal scales, which are modeled as points of interest representing the global spatial-temporal distribution. Experiments and cross-data testing are conducted on the KTH, WEIZMANN, and MuHAVi datasets. The system does not need to be retrained when the scenario changes, which means the trained database can be applied in a wide variety of environments, such as when the view angle or background changes. The experimental results show that the proposed approach outperforms existing methods on the KTH and WEIZMANN datasets. | en_US
dc.language.iso | en_US | en_US
dc.subject | Multi-view video | en_US
dc.subject | action recognition | en_US
dc.subject | feature extraction | en_US
dc.subject | background subtraction | en_US
dc.subject | classification | en_US
dc.subject | machine learning | en_US
dc.title | Robust Feature-Based Automated Multi-View Human Action Recognition System | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/ACCESS.2018.2809552 | en_US
dc.identifier.journal | IEEE ACCESS | en_US
dc.citation.volume | 6 | en_US
dc.citation.spage | 15283 | en_US
dc.citation.epage | 15296 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | 電機工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.contributor.department | Department of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000429258300001 | en_US
Appears in Collections: Articles
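The abstract above compares three classifiers, among them a nearest mean classifier, which assigns a sample to the class whose mean feature vector is closest. A minimal pure-Python sketch of that idea follows; the 2-D feature vectors and action labels are hypothetical stand-ins for the paper's view-invariant holistic features, not the authors' implementation.

```python
# Nearest-mean classifier sketch (illustrative only; not the paper's code).
from collections import defaultdict
from math import dist  # Euclidean distance, Python 3.8+

def class_means(samples):
    """samples: list of (feature_vector, label) pairs -> {label: mean vector}."""
    grouped = defaultdict(list)
    for vec, label in samples:
        grouped[label].append(vec)
    return {
        label: [sum(col) / len(vecs) for col in zip(*vecs)]
        for label, vecs in grouped.items()
    }

def predict(means, vec):
    """Assign vec to the class whose mean vector is nearest."""
    return min(means, key=lambda label: dist(means[label], vec))

# Toy 2-D example with two well-separated action classes.
train = [([0.0, 0.1], "walk"), ([0.2, 0.0], "walk"),
         ([1.0, 1.1], "wave"), ([0.9, 1.0], "wave")]
means = class_means(train)
print(predict(means, [0.1, 0.0]))  # walk
print(predict(means, [1.0, 1.0]))  # wave
```

Because prediction depends only on stored class means, adding data from a new view or background only updates those means; no full retraining pass is required, which matches the abstract's claim about reusing the trained database across environments.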