Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 楊華智 | zh_TW |
dc.contributor.author | 曾煜棋 | zh_TW |
dc.contributor.author | Yang, Hua-Chih | en_US |
dc.contributor.author | Tseng, Yu-Chee | en_US |
dc.date.accessioned | 2018-01-24T07:38:09Z | - |
dc.date.available | 2018-01-24T07:38:09Z | - |
dc.date.issued | 2016 | en_US |
dc.identifier.uri | http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070156823 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/139577 | - |
dc.description.abstract | 隨著行動運算及感測技術的進步,穿戴式裝置逐漸改變人類的生活模式。在這樣的潮流下,人類與運算裝置間的界面不斷地演進。手勢、姿態等肢體語言,逐漸變成人機介面的一部份,也因此增添了許多新興應用。目前雖然已有一些研究致力於以電腦自動化的方式辨識人類的頭部姿態,但大多的研究都基於影像處理技術,較少考慮以頭部穿戴式裝置為基礎的研究。有鑑於此,本研究運用機器學習以穿戴式裝置感知頭部姿態,包括: 抬頭、低頭、左轉、右轉、點頭及搖頭。我們提出一套名為 HGR (Head Gestures Recognition using Wearable Devices) 的演算法,其利用頭部穿戴式裝置內的感測器收集使用者的姿態變化資訊,再利用以能量為基礎的切段方法找出有動作發生的資料區段,再對這些資料區段擷取時域特徵,最後將特徵輸入至內嵌於手機的分類器以辨識頭部姿態類型。本研究將所提出的方法雛型實作於Arduino平台上,並以真實資料驗證所提出的方法效率。實驗結果顯示所提出的方法能夠有效率地辨識不同類型的頭部姿態,其平均正確率高達95%。 | zh_TW |
dc.description.abstract | With advances in mobile computing and sensing technologies, wearable devices are gradually changing our lifestyle. In this trend, the interface between humans and computing devices continues to evolve. Body language, such as hand gestures and postures, has become part of the human-computer interface, enabling many new applications. Although some studies have been dedicated to recognizing head poses automatically, most of them are based on image processing techniques, and relatively little work considers head-mounted wearable devices. This study therefore uses a head-mounted wearable device and machine learning to sense head gestures, including looking up, looking down, turning left, turning right, nodding, and shaking the head. We propose an algorithm called HGR (Head Gestures Recognition using Wearable Devices), which collects posture-change information from the sensors of a head-mounted wearable device, applies an energy-based segmentation method to find the data sections in which motion occurs, and then extracts time-domain features from those sections. Finally, the features are fed to a classifier embedded in a phone to recognize the type of head gesture. We implement a prototype of the proposed method on the Arduino platform and validate its efficiency with real data. The experimental results show that the proposed method can recognize different types of head gestures efficiently, with an average accuracy of up to 95%. | en_US |
dc.language.iso | zh_TW | en_US |
dc.subject | 人機介面 | zh_TW |
dc.subject | 姿態辨識 | zh_TW |
dc.subject | 機器學習 | zh_TW |
dc.subject | 穿戴裝置 | zh_TW |
dc.subject | gesture recognition | en_US |
dc.subject | human-computer interface | en_US |
dc.subject | machine learning | en_US |
dc.subject | wearable devices | en_US |
dc.title | 運用機器學習以穿戴式裝置感知頭部姿態 | zh_TW |
dc.title | Classifying Head Gestures by Wearable Devices through Machine Learning | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | 資訊學院資訊學程 | zh_TW |
Appears in Collections: | Thesis |
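
The abstract above outlines a three-stage pipeline: collect motion samples from a head-mounted sensor, cut out segments where motion energy is high, and extract time-domain features for a classifier. The following is a minimal illustrative sketch of that idea, not the thesis's actual HGR implementation; the window size, energy threshold, feature set, and function names are all assumptions made for the example.

```python
# Hypothetical sketch of energy-based segmentation plus simple time-domain
# features for head-gesture data. Parameters and helper names are illustrative
# assumptions, not taken from the thesis.
import numpy as np

def segment_by_energy(samples, window=20, threshold=0.5):
    """Return (start, end) index pairs of spans whose short-term energy
    exceeds `threshold`, i.e. spans where head motion likely occurred."""
    segments, start = [], None
    for i in range(0, len(samples) - window + 1, window):
        frame = samples[i:i + window]
        energy = float(np.mean(np.sum(frame ** 2, axis=1)))  # mean squared magnitude
        if energy > threshold and start is None:
            start = i                      # motion begins
        elif energy <= threshold and start is not None:
            segments.append((start, i))    # motion ends
            start = None
    if start is not None:
        segments.append((start, len(samples)))
    return segments

def time_domain_features(segment):
    """Per-axis mean, standard deviation, min, and max of one motion segment."""
    return np.concatenate([segment.mean(axis=0), segment.std(axis=0),
                           segment.min(axis=0), segment.max(axis=0)])

if __name__ == "__main__":
    # Synthetic (N, 3) 3-axis sensor trace with one injected motion burst.
    samples = np.random.randn(500, 3) * 0.1
    samples[200:260] += 2.0
    for s, e in segment_by_energy(samples):
        print("motion segment", s, e, time_domain_features(samples[s:e]).shape)
```

In a setup like the one the abstract describes, each feature vector produced this way would be passed to a pre-trained classifier (for example, one trained offline and embedded on the phone) that maps it to one of the six head gestures.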