Full metadata record
DC Field | Value | Language
dc.contributor.author | Njoo, Gunarto Sindoro | en_US
dc.contributor.author | Lai, Chien-Hsiang | en_US
dc.contributor.author | Hsu, Kuo-Wei | en_US
dc.date.accessioned | 2018-08-21T05:56:50Z | -
dc.date.available | 2018-08-21T05:56:50Z | -
dc.date.issued | 2016-01-01 | en_US
dc.identifier.issn | 2376-6816 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/146721 | -
dc.description.abstract | Inferring activities on smartphones is a challenging task. Prior work has used sensory data from smartphones' built-in hardware sensors or leveraged location information to understand human activities. In this paper, we explore two types of data on smartphones to conduct activity inference: 1) Spatial-Temporal: daily routines reflected in the combination of spatial and temporal patterns, and 2) Application: specialized apps that assist the user's activities. We employ a multi-view learning model to accommodate both types of data and a weighted linear kernel to aggregate the views. Since smartphone resources are limited, activity inference on the device must respect constraints such as storage, energy consumption, and computational power. Finally, we compare the proposed method with several classification methods on a real dataset to evaluate its effectiveness and performance. The experimental results show that our approach outperforms the other methods in balancing accuracy, running time, and storage efficiency. | en_US
dc.language.iso | en_US | en_US
dc.title | Exploring Multi-View Learning for Activity Inferences on Smartphones | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2016 CONFERENCE ON TECHNOLOGIES AND APPLICATIONS OF ARTIFICIAL INTELLIGENCE (TAAI) | en_US
dc.citation.spage | 212 | en_US
dc.citation.epage | 219 | en_US
dc.contributor.department | Published under the name of National Chiao Tung University (交大名義發表) | zh_TW
dc.contributor.department | National Chiao Tung University | en_US
dc.identifier.wosnumber | WOS:000406594200029 | en_US
Appears in Collections: Conferences Paper
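The abstract above describes aggregating a spatial-temporal view and an application view through a weighted linear kernel before classification. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: the feature matrices X_st and X_app, the view weights, and the use of scikit-learn's precomputed-kernel SVM are all assumptions made for illustration.

```python
# A minimal sketch of a weighted linear kernel over two views (assumed setup,
# not the paper's code). Each view contributes a linear kernel; the kernels
# are combined with weights and fed to an SVM with a precomputed kernel.
import numpy as np
from sklearn.svm import SVC

def weighted_linear_kernel(Xa, Xb, Za, Zb, w_a=0.5, w_b=0.5):
    """Weighted sum of the two views' linear kernels: w_a * Xa Za^T + w_b * Xb Zb^T."""
    return w_a * (Xa @ Za.T) + w_b * (Xb @ Zb.T)

# Toy stand-ins for the two views (rows are labelled activity samples).
rng = np.random.default_rng(0)
X_st = rng.normal(size=(60, 8))    # hypothetical spatial-temporal features
X_app = rng.normal(size=(60, 5))   # hypothetical application-usage features
y = rng.integers(0, 3, size=60)    # e.g. three activity classes

# Train on the combined kernel between training samples.
K_train = weighted_linear_kernel(X_st, X_app, X_st, X_app, w_a=0.6, w_b=0.4)
clf = SVC(kernel="precomputed").fit(K_train, y)

# At inference time, the kernel is computed between new and training samples.
X_st_new = rng.normal(size=(5, 8))
X_app_new = rng.normal(size=(5, 5))
K_test = weighted_linear_kernel(X_st_new, X_app_new, X_st, X_app, w_a=0.6, w_b=0.4)
print(clf.predict(K_test))
```

The weights (0.6 and 0.4 here) are illustrative; in practice they would be tuned, e.g. by cross-validation, to balance how much each view contributes.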