Full metadata record
DC Field | Value | Language
dc.contributor.author | 游以正 | en_US
dc.contributor.author | Yi-Cheng You | en_US
dc.contributor.author | 李錫堅 | en_US
dc.contributor.author | Hsi-Jian Lee | en_US
dc.date.accessioned | 2014-12-12T01:58:52Z | -
dc.date.available | 2014-12-12T01:58:52Z | -
dc.date.issued | 2004 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT009117594 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/50347 | -
dc.description.abstract | 在本論文中,我們提出一個能偵測人類危險動作的監視系統以用於老年人居家照顧。此系統是在彩色錄像上運作,且由前景偵測、人類擷取、姿勢分析與動作辨識等四個階段所組成。 在第一個階段,我們使用一個統計式的背景模組來偵測前景圖素,而這些前景圖素再經由雜訊消除、型態學濾波、連通元件擷取與補洞等四個程序來歸類成前景區域。我們是根據最近鄰居密度估計法(k nearest neighbor (kNN) density estimation)來建立背景模組,而此估計法為一個非參數式的密度估計法。在第二個階段,我們根據一個前景區域所佔有的圖素數與其膚色圖素數佔整個前景區域圖素數的比例來決定該前景區域是否包含單個人。在第三個階段,若某個前景區域被視為是單個人,我們便使用一個以輪廓為基礎的姿勢分析法來估計該單個人的姿勢。首先,我們計算出用來表示其姿勢的輪廓外型特徵,而輪廓外型特徵為輪廓的質心、主軸與正規化輪廓等。之後,我們便利用這些特徵將其姿勢分類到七個主要姿勢中的一個;這七個主要姿勢為站著、彎腰、腳彎曲的坐著、蹲著、跪著、腳伸直的坐著與躺著等姿勢。在第四個階段,我們使用一個以七個主要姿勢所建立的姿勢狀態轉換圖來辨識人類的動作,而姿勢的轉換是依據姿勢分析後的結果。根據姿勢的轉換,我們可以將人類的動作分類成正常動作與危險動作,若兩個連續的姿勢在狀態轉換圖中不能直接轉換意味著有危險的動作發生。 在我們的實驗中,我們用300個人類姿勢來測試姿勢估計的正確率;經過測試,我們得到87%的正確率。對於人類動作的辨識,我們使用四組連續影像來測試;經過測試,我們發現所有的危險動作皆能被偵測出來。 | zh_TW
dc.description.abstract | In this thesis, we propose a surveillance system that detects dangerous human actions in an indoor environment for elders' home care. The system operates on color video and consists of four stages: foreground detection, human extraction, posture analysis, and action recognition. In the first stage, foreground pixels of each image are detected using a statistical background model and grouped into foreground regions through four processes: noise elimination, morphological filtering, connected-component extraction, and hole filling. The background model is built with k-nearest-neighbor (kNN) density estimation, a nonparametric density estimation technique. In the second stage, we decide whether a foreground region contains a single person according to the number of pixels in the region and the proportion of those pixels that are skin-colored. In the third stage, if a foreground region is judged to contain a single person, a silhouette-based posture analysis is applied to it to estimate the person's posture. First, shape features representing the posture are computed from the person's silhouette: its centroid, its major axis, and the normalized silhouette. Using these features, the posture is then classified into one of seven main postures: standing, stooping, sitting with crooked legs, squatting, kneeling, sitting with stretched legs, and lying down/prone. In the fourth stage, human actions are recognized with a posture state transition diagram constructed over the seven main postures, where the state transitions are driven by the results of the posture analysis. Based on these transitions, every human action is classified as either normal or dangerous; a dangerous action is signaled when two successive postures have no direct transition in the diagram. In our experiments, we tested 300 human postures for posture estimation and obtained an accuracy rate of 86.7%. For human action recognition, we tested four image sequences, and all dangerous actions in these sequences were detected. | en_US
dc.language.iso | en_US | en_US
dc.subject | 背景模組 | zh_TW
dc.subject | 人類姿勢估計 | zh_TW
dc.subject | 人類活動辨識 | zh_TW
dc.subject | Background Model | en_US
dc.subject | Human Posture Estimation | en_US
dc.subject | Human Action Recognition | en_US
dc.title | 用於老年人居家照顧的人類活動辨識 | zh_TW
dc.title | Human Action Recognition for Elder's Home Care | en_US
dc.type | Thesis | en_US
dc.contributor.department | 資訊科學與工程研究所 | zh_TW
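
The abstract above (dc.description.abstract) describes the pipeline only in prose. Below is a minimal Python/NumPy sketch of two of its stages: the kNN-density background test used for foreground detection in the first stage, and the posture state transition check used for action recognition in the fourth stage. The history-buffer size, the value of k, the density threshold, and the set of allowed transitions are illustrative assumptions; this record does not give the thesis's actual parameters or enumerate the transition diagram.

import numpy as np

# Stage 1 (sketch): kNN-density background test for a single pixel location.
# The thesis builds a statistical background model with k-nearest-neighbor
# density estimation; k, the buffer size, and the threshold below are
# illustrative assumptions, not the authors' parameters.
def is_background(pixel, history, k=5, density_threshold=1e-6):
    """Return True if the kNN density estimate of the pixel's current color
    among the stored background samples is high enough.

    pixel   : (3,) array, current RGB value
    history : (N, 3) array, past background samples for this pixel location
    """
    dists = np.linalg.norm(history - pixel, axis=1)
    r_k = np.sort(dists)[k - 1]                          # distance to k-th nearest sample
    volume = (4.0 / 3.0) * np.pi * max(r_k, 1e-6) ** 3   # ball volume in RGB space
    density = k / (len(history) * volume)                # kNN density estimate k / (N * V_k)
    return density >= density_threshold

# Stage 4 (sketch): posture state transition diagram.
# The seven main postures come from the abstract; the set of allowed direct
# transitions is an assumed example, since the actual diagram is not
# enumerated in this record.
ALLOWED = {
    ("standing", "stooping"), ("stooping", "standing"),
    ("stooping", "squatting"), ("squatting", "stooping"),
    ("squatting", "kneeling"), ("kneeling", "squatting"),
    ("squatting", "sitting_crooked"), ("sitting_crooked", "squatting"),
    ("sitting_crooked", "sitting_stretched"), ("sitting_stretched", "sitting_crooked"),
    ("sitting_stretched", "lying"), ("lying", "sitting_stretched"),
}

def classify_action(posture_sequence):
    """Flag a dangerous action when two successive postures have no direct
    transition in the diagram."""
    for prev, curr in zip(posture_sequence, posture_sequence[1:]):
        if prev != curr and (prev, curr) not in ALLOWED:
            return "dangerous"
    return "normal"

# Example usage with assumed data.
history = np.random.normal(loc=[120, 90, 60], scale=3.0, size=(50, 3))
print(is_background(np.array([121, 91, 61]), history))          # likely True: matches background
print(is_background(np.array([250, 20, 20]), history))          # likely False: foreground
print(classify_action(["standing", "standing", "lying"]))       # dangerous (abrupt fall-like change)
print(classify_action(["standing", "stooping", "squatting"]))   # normal

In this sketch, a fall shows up as an abrupt standing-to-lying change with no intermediate posture, which is exactly the kind of missing transition the diagram is meant to flag.
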
Appears in Collections: Thesis


Files in This Item:

  1. 759401.pdf
  2. 759402.pdf
  3. 759403.pdf
  4. 759404.pdf
  5. 759405.pdf
  6. 759406.pdf
  7. 759407.pdf
  8. 759408.pdf
  9. 759409.pdf
  10. 759410.pdf
  11. 759411.pdf
