Title: | Fuzzy Rule Inference Based Human Activity Recognition |
Authors: | Chang, Jyh-Yeong; Shyu, Jia-Jye; Cho, Chien-Wen — Institute of Electrical and Control Engineering |
Issue Date: | 2009 |
Abstract: | Human activity recognition plays an essential role in e-health applications, such as automatic nursing home systems, human-machine interfaces, home care systems, and smart home applications. Many human activity recognition systems use only the posture of a single image frame to classify an activity. However, the transitional relationships between postures embedded in the temporal sequence are important information for human activity recognition. In this paper, we combine template posture matching and fuzzy rule reasoning to recognize an action. First, the foreground subject is extracted and converted to a binary image by a statistical background model based on frame ratio, which is robust to illumination changes. For better efficiency and separability, the binary image is then transformed to a new space by eigenspace and canonical space transformations, and recognition is performed in the canonical space. A three-frame image sequence, obtained by 5:1 down-sampling of the video, is converted to a posture sequence by template matching. The posture sequence is then classified into an action by fuzzy rule inference. The fuzzy rule approach can not only incorporate temporal sequence information for recognition but also tolerate variations in the same action performed by different people. In our experiments, the proposed activity recognition method achieved a recognition accuracy of 91.8%, higher than the HMM approach by about 5.4%. |
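The abstract's classification step can be illustrated with a minimal sketch. Note this is only an illustration under assumed details: the paper's actual posture templates, membership values, and rule base are not given in the abstract, so the posture labels, the three example rules, and the min-max inference scheme below are hypothetical placeholders, not the authors' implementation.

```python
def posture_memberships(frame_scores):
    """Normalize template-matching scores (hypothetically obtained in the
    canonical space) into fuzzy memberships that sum to 1."""
    total = sum(frame_scores.values())
    return {posture: score / total for posture, score in frame_scores.items()}

# Hypothetical rule base: each rule maps a temporal pattern of three
# postures (one per down-sampled frame) to an action label.
RULES = [
    (("stand", "bend", "lie"), "falling"),
    (("stand", "stand", "stand"), "standing"),
    (("sit", "sit", "sit"), "sitting"),
]

def infer_action(sequence_memberships):
    """Min-max fuzzy inference: a rule's firing strength is the minimum of
    its antecedent memberships across the three frames; the action with the
    maximum strength over all rules wins."""
    strengths = {}
    for antecedents, action in RULES:
        strength = min(m.get(p, 0.0)
                       for m, p in zip(sequence_memberships, antecedents))
        strengths[action] = max(strengths.get(action, 0.0), strength)
    best = max(strengths, key=strengths.get)
    return best, strengths
```

For example, a sequence whose frames are mostly "stand", then "bend", then "lie" fires the falling rule with the highest strength. Because each frame contributes a graded membership rather than a hard posture label, the inference degrades gracefully when different people perform the same action somewhat differently.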
URI: | http://hdl.handle.net/11536/14843 http://dx.doi.org/10.1109/CCA.2009.5280999 |
ISBN: | 978-1-4244-4601-8 |
ISSN: | 1085-1992 |
DOI: | 10.1109/CCA.2009.5280999 |
Journal: | 2009 IEEE CONTROL APPLICATIONS CCA & INTELLIGENT CONTROL (ISIC), VOLS 1-3 |
Start Page: | 211 |
End Page: | 215 |
Appears in Collections: | Conference Papers |