Full metadata record
DC Field | Value | Language
dc.contributor.author | 林子甄 | zh_TW
dc.contributor.author | 蔡文錦 | zh_TW
dc.contributor.author | 陳華總 | zh_TW
dc.contributor.author | Lin, Zi-Zhen | en_US
dc.contributor.author | Tsai, Wen-Jiin | en_US
dc.contributor.author | Chen, Hua-Tsung | en_US
dc.date.accessioned | 2018-01-24T07:42:02Z | -
dc.date.available | 2018-01-24T07:42:02Z | -
dc.date.issued | 2017 | en_US
dc.identifier.uri | http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070456636 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/142309 | -
dc.description.abstract | With breakthroughs in deep learning across many fields, autonomous driving has drawn considerable attention in recent years. The most critical requirement of autonomous driving is continuously monitoring the vehicle's surroundings to ensure driving safety, so detecting surrounding objects is essential for preventing fatal accidents. You only look once (YOLO) is among the fastest and most accurate deep-learning-based object detection methods, but it performs poorly on small objects in an image and on moving objects in a video. This thesis therefore proposes image pre-processing so that smaller objects can be detected. In addition, exploiting the continuity of objects across consecutive frames, we propose using optical flow so that more objects can be detected correctly. Experiments on images from the KITTI dataset show that the proposed method improves YOLO's detection results. | zh_TW
dc.description.abstract | With the development of deep learning, the issue of autonomous driving has attracted much attention in recent years. Safe autonomous driving requires detecting surrounding obstacles and moving objects and identifying drivable areas. Thus, detecting surrounding objects to prevent fatal accidents is a very important problem. You only look once (YOLO) is a state-of-the-art, real-time object detection system. However, it may fail to detect small or moving objects in a video; the failure to detect moving objects is probably due to motion blur. To make the detection system more robust, we propose pre-processing techniques such as perspective transform and splitting to handle small-object detection, and we use optical flow to take advantage of the temporal consistency of moving objects across consecutive frames. Experimental results on the KITTI dataset show that, in comparison to the original YOLO, the proposed method works well in terms of both recall and precision. | en_US
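The English abstract describes using optical flow to exploit the temporal consistency of detections across consecutive frames. A minimal sketch of that idea follows; it is not the thesis code, and the helper names, the box format `(x1, y1, x2, y2)`, and the assumption that each previous-frame box comes with a single averaged flow vector `(dx, dy)` (e.g., mean dense flow inside the box) are illustrative choices:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def propagate(prev_boxes, flows, curr_boxes, iou_thresh=0.5):
    """Shift each previous-frame box by its averaged flow vector and
    keep the shifted box only if no current detection already covers
    it (IoU >= iou_thresh), recovering objects the detector missed."""
    merged = list(curr_boxes)
    for (x1, y1, x2, y2), (dx, dy) in zip(prev_boxes, flows):
        shifted = (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
        if all(iou(shifted, c) < iou_thresh for c in curr_boxes):
            merged.append(shifted)
    return merged
```

In practice the flow vectors would come from a dense optical-flow estimator applied to consecutive frames; the merge step above is one simple way such propagated boxes could raise recall without duplicating boxes the detector already found.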
dc.language.iso | en_US | en_US
dc.subject | 物體偵測 (object detection) | zh_TW
dc.subject | 光流法 (optical flow) | zh_TW
dc.subject | object detection | en_US
dc.subject | optical flow | en_US
dc.title | 利用時間訊息進行物體偵測 (Object detection using temporal information) | zh_TW
dc.title | Image object detection with the aid of temporal information | en_US
dc.type | Thesis | en_US
dc.contributor.department | 多媒體工程研究所 (Institute of Multimedia Engineering) | zh_TW
Appears in Collections: Thesis