Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Prasad, Mukesh | en_US |
dc.contributor.author | Zheng, Ding-Rong | en_US |
dc.contributor.author | Mery, Domingo | en_US |
dc.contributor.author | Puthal, Deepak | en_US |
dc.contributor.author | Sundaram, Suresh | en_US |
dc.contributor.author | Lin, Chin-Teng | en_US |
dc.date.accessioned | 2019-08-02T02:24:21Z | - |
dc.date.available | 2019-08-02T02:24:21Z | - |
dc.date.issued | 2018-01-01 | en_US |
dc.identifier.issn | 1877-0509 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1016/j.procs.2018.10.500 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/152489 | - |
dc.description.abstract | This paper proposes a method that allows users to select target object types for detection, generate an initial detection model from a small selected image sample, and then continue training that model automatically as the video plays. The method produces noticeable detection results for several types of objects. The framework of this study is divided into two parts: the initial detection model and the online learning component. In the model initialization phase, Haar-like features scaled to the proportions of the user-selected sample are used to generate a feature pool, from which effective classifiers are trained and selected. Then, as the video plays, the detection model detects new samples using an NN classifier with positive and negative samples, while the similarity model, built on a fused background model, computes each new sample's relative similarity to the target. Based on this conservative, relative-similarity classification, the conserved positive and negative samples gathered during playback are used for automatic online learning and training, continuously updating the classifier. Test results for different types of objects show that the target can be detected after choosing only a small number of samples and performing automatic online learning, effectively reducing both the manpower needed to collect large numbers of image samples and the time required for training. The experimental results also reveal good detection capability. (C) 2018 The Authors. Published by Elsevier Ltd. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Object detection | en_US |
dc.subject | On-line learning | en_US |
dc.subject | learning from video | en_US |
dc.subject | real-time streaming | en_US |
dc.title | A fast and self-adaptive on-line learning detection system | en_US |
dc.type | Proceedings Paper | en_US |
dc.identifier.doi | 10.1016/j.procs.2018.10.500 | en_US |
dc.identifier.journal | INNS CONFERENCE ON BIG DATA AND DEEP LEARNING | en_US |
dc.citation.volume | 144 | en_US |
dc.citation.spage | 13 | en_US |
dc.citation.epage | 22 | en_US |
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW |
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US |
dc.identifier.wosnumber | WOS:000471275300002 | en_US |
dc.citation.woscount | 0 | en_US |
Appears in Collections: | Conference Papers |
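The abstract's online learning loop — comparing each new sample against stored positive and negative templates with an NN classifier, scoring it by relative similarity, and conservatively adding only confidently classified samples back into the model — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `OnlineNNDetector` class, the normalized-correlation similarity, and the `add_threshold` value are all hypothetical choices, and real use would operate on Haar-like feature vectors extracted from video patches.

```python
import numpy as np


class OnlineNNDetector:
    """Illustrative sketch of an online NN classifier with conservative
    updates: samples are scored by relative similarity to positive vs.
    negative template sets, and only confident samples extend the model."""

    def __init__(self, pos_init, neg_init, add_threshold=0.65):
        self.pos = [np.asarray(p, dtype=float) for p in pos_init]
        self.neg = [np.asarray(n, dtype=float) for n in neg_init]
        # Only samples this far from the decision boundary update the model.
        self.add_threshold = add_threshold

    @staticmethod
    def _sim(a, b):
        # Normalized correlation, remapped from [-1, 1] to [0, 1].
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0.0:
            return 0.0
        return 0.5 * (float(np.dot(a, b)) / denom + 1.0)

    def relative_similarity(self, x):
        x = np.asarray(x, dtype=float)
        sp = max(self._sim(x, p) for p in self.pos)  # best match to positives
        sn = max(self._sim(x, n) for n in self.neg)  # best match to negatives
        if sp + sn == 0.0:
            return 0.5  # no evidence either way
        return sp / (sp + sn)  # > 0.5 means closer to the positive set

    def classify_and_update(self, x):
        r = self.relative_similarity(x)
        label = r > 0.5
        # Conservative online update: ambiguous samples are discarded.
        if r >= self.add_threshold:
            self.pos.append(np.asarray(x, dtype=float))
        elif r <= 1.0 - self.add_threshold:
            self.neg.append(np.asarray(x, dtype=float))
        return label, r
```

A feature vector resembling the positive template scores well above 0.5, is labeled positive, and (being confident) is absorbed into the positive template set, so the classifier keeps improving as the video plays.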