Full metadata record
DC Field: Value (Language)
dc.contributor.author: Wang, Han-Yang (en_US)
dc.contributor.author: Chang, Ya-Ching (en_US)
dc.contributor.author: Hsieh, Yi-Yu (en_US)
dc.contributor.author: Chen, Hua-Tsung (en_US)
dc.contributor.author: Chuang, Jen-Hui (en_US)
dc.date.accessioned: 2018-08-21T05:57:14Z
dc.date.available: 2018-08-21T05:57:14Z
dc.date.issued: 2017-01-01 (en_US)
dc.identifier.uri: http://hdl.handle.net/11536/147201
dc.description.abstract: Due to their high mobility and ability to fly, drones have inspired more and more applications in recent years. Meanwhile, deep learning-based human activity analysis is an important research topic in security surveillance; however, few research works so far have addressed such analysis on aerial images. Because of perspective projection, people in aerial images appear tilted, which degrades the performance of human activity analysis. To cope with the perspective projection of aerial images, we modify the CNN architecture of a state-of-the-art object detection method, YOLOv2 [12], and build an aerial image dataset with a drone for training the new model. Finally, a post-processing method is proposed to classify the pose of a detected person as normal or abnormal, so that the task of human activity analysis on aerial images can be accomplished. (en_US)
dc.language.iso: en_US (en_US)
dc.subject: Deep learning (en_US)
dc.subject: drone (en_US)
dc.subject: human activity analysis (en_US)
dc.subject: human detection (en_US)
dc.subject: image processing (en_US)
dc.title: DEEP LEARNING-BASED HUMAN ACTIVITY ANALYSIS FOR AERIAL IMAGES (en_US)
dc.type: Proceedings Paper (en_US)
dc.identifier.journal: 2017 INTERNATIONAL SYMPOSIUM ON INTELLIGENT SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ISPACS 2017) (en_US)
dc.citation.spage: 713 (en_US)
dc.citation.epage: 718 (en_US)
dc.contributor.department: Published under the name of National Chiao Tung University (交大名義發表) (zh_TW)
dc.contributor.department: Department of Computer Science (資訊工程學系) (zh_TW)
dc.contributor.department: National Chiao Tung University (en_US)
dc.contributor.department: Department of Computer Science (en_US)
dc.identifier.wosnumber: WOS:000428142000135 (en_US)
Appears in Collections: Conferences Paper
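
For readers who want a concrete sense of the post-processing stage mentioned in the abstract, the following is a minimal Python sketch. The paper does not publish its classification rule, so everything here is an illustrative assumption, not the authors' method: the Detection class, the classify_pose function, the small-angle tilt compensation for perspective projection, and the threshold constants are all hypothetical. The sketch assumes an upright (normal) person yields a tall bounding box while a lying or fallen (abnormal) person yields a wide one, after correcting for the tilt of people far from the image center.

    from dataclasses import dataclass
    import math

    @dataclass
    class Detection:
        x: float  # top-left x of the bounding box (pixels)
        y: float  # top-left y (pixels)
        w: float  # box width (pixels)
        h: float  # box height (pixels)

    def classify_pose(det: Detection,
                      image_center_x: float,
                      tilt_per_pixel: float = 0.0005,
                      upright_ratio: float = 1.2) -> str:
        """Label a detection 'normal' (upright) or 'abnormal' (lying/fallen).

        tilt_per_pixel and upright_ratio are illustrative constants,
        not values from the paper.
        """
        # Under perspective projection, people far from the image center
        # appear tilted; approximate the tilt angle from the horizontal
        # offset of the box center (small-angle assumption, in radians).
        offset = (det.x + det.w / 2.0) - image_center_x
        tilt = offset * tilt_per_pixel

        # "De-rotate" the box extents by the estimated tilt before
        # comparing height to width.
        corrected_h = det.h * math.cos(tilt) + det.w * abs(math.sin(tilt))
        corrected_w = det.w * math.cos(tilt) + det.h * abs(math.sin(tilt))

        ratio = corrected_h / max(corrected_w, 1e-6)
        return "normal" if ratio >= upright_ratio else "abnormal"

    # Example: a wide box near the image edge is flagged as abnormal.
    print(classify_pose(Detection(x=900, y=400, w=120, h=60),
                        image_center_x=640))

In this sketch the tilt correction simply rotates the box extents before the height-to-width comparison; a real implementation would instead use whatever orientation cue the modified YOLOv2 model provides.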