Title: | In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability |
Authors: | Tang, Hsin-Jun; Tsai, Wen-Hsiang; Institute of Computer Science and Engineering |
Keywords: | outdoor parks; augmented reality; omni-image; automatic learning; in-car tour guidance |
Date of Issue: | 2013 |
Abstract: | In this study, an augmented reality (AR) based in-car tour guidance system with an automatic learning capability for use in outdoor park areas is proposed, built on vehicle and computer vision techniques. With the proposed system, a user can construct a tour guidance map of a park area in a simple way and use it to provide guidance information, such as the names of the buildings around the vehicle along the guidance path, to in-car passengers. The building names are augmented onto the passenger-view image displayed on the mobile device held by each passenger.
To implement this AR guidance system, an environment map is first constructed in the learning phase; it contains the tour path and the positions and names of the nearby buildings. All the data are learned either manually or semi-automatically and saved in a database for use in the guidance phase.
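The learned map data might be organized as records of path points and named buildings; the following is a minimal sketch in Python, with illustrative field names that are assumptions, not the thesis's actual database schema.

```python
# Minimal sketch of environment-map records produced by the learning phase.
# Field names (Building, PathPoint, EnvironmentMap, nearest_buildings) are
# illustrative assumptions, not the system's actual schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Building:
    name: str
    position: Tuple[float, float]        # (x, y) in map coordinates, metres

@dataclass
class PathPoint:
    position: Tuple[float, float]        # GPS-derived point on the guidance path
    landmark_ids: List[int] = field(default_factory=list)  # learned vertical lines seen here

@dataclass
class EnvironmentMap:
    path: List[PathPoint] = field(default_factory=list)
    buildings: List[Building] = field(default_factory=list)

    def nearest_buildings(self, pos, radius=50.0):
        """Buildings within `radius` metres of `pos`, for guidance display."""
        return [b for b in self.buildings
                if ((b.position[0] - pos[0]) ** 2
                    + (b.position[1] - pos[1]) ** 2) ** 0.5 <= radius]

m = EnvironmentMap(buildings=[Building("Library", (10.0, 20.0)),
                              Building("Gym", (200.0, 5.0))])
print([b.name for b in m.nearest_buildings((0.0, 0.0))])  # -> ['Library']
```

In the guidance phase, a query like `nearest_buildings` at the localized vehicle position would yield the candidate names to overlay on the passenger-view image.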
Next, a method for automatic learning of along-path vertical-line features, mainly the edges of light poles, is proposed. In this feature-learning stage, a vehicle equipped with a GPS device and a two-camera omni-imaging device with upward- and downward-looking views is driven along a pre-selected guidance path. At each visited spot, the system analyzes the pair of omni-images taken by the upper and lower cameras to detect the nearby vertical lines, and computes their positions and heights with the aid of the GPS device. The learned features are added to the map as landmarks for vehicle localization in the guidance phase.
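The position-and-height computation from an omni-image pair can be illustrated with a simplified two-view geometry: if the top of a vertical feature is seen at known elevation angles by the upper and lower cameras, which sit at known heights on the vehicle, its horizontal distance and top height follow from intersecting the two view rays. This is a sketch under that simplification; it assumes the elevation angles have already been recovered from the omni-camera model, which the abstract does not detail.

```python
import math

def triangulate_vertical_line(theta_upper, theta_lower, h_upper, h_lower):
    """Triangulate a vertical feature (e.g. a light-pole edge) whose top is
    seen at elevation angles theta_upper / theta_lower (radians) by two
    cameras mounted at heights h_upper > h_lower on the vehicle.

    Simplified geometry, solving for horizontal distance d and top height H:
        H = h_upper + d * tan(theta_upper) = h_lower + d * tan(theta_lower)
    """
    d = (h_upper - h_lower) / (math.tan(theta_lower) - math.tan(theta_upper))
    H = h_lower + d * math.tan(theta_lower)
    return d, H
```

Combined with the feature's azimuth (the direction of its radial line in the omni-image) and the vehicle's GPS position, the distance `d` places the feature at absolute map coordinates for storage as a landmark.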
Furthermore, a method for vehicle localization by detection of the surrounding vertical lines is proposed. Using the previously learned information and the GPS device, the method detects the learned vertical lines in a single upward-looking omni-image and computes the current vehicle position from the detected features.
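One concrete way such a position fix can be obtained, sketched here as an assumption since the abstract does not give the exact formulation, is bearing resection: with absolute bearings to two learned landmarks of known map position, the vehicle must lie at the intersection of the two bearing rays.

```python
import math

def localize_from_two_bearings(l1, l2, beta1, beta2):
    """Locate the vehicle from two learned landmarks l1, l2 (known map
    positions) observed at absolute bearings beta1, beta2 (radians).

    Solves p + r_i * (cos beta_i, sin beta_i) = l_i for the vehicle
    position p by Cramer's rule on the unknown ranges r1, r2.
    """
    c1, s1 = math.cos(beta1), math.sin(beta1)
    c2, s2 = math.cos(beta2), math.sin(beta2)
    det = -c1 * s2 + c2 * s1            # determinant of [[c1, -c2], [s1, -s2]]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; these landmarks give no fix")
    dx, dy = l1[0] - l2[0], l1[1] - l2[1]
    r1 = (-s2 * dx + c2 * dy) / det     # range from vehicle to landmark l1
    return (l1[0] - r1 * c1, l1[1] - r1 * s1)
```

With more than two detected vertical lines, the same ray equations would overdetermine the position and could be solved by least squares, which also dampens detection noise.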
Finally, a method for AR-based guidance is proposed. It first transforms the omni-image acquired by the upper omni-camera into a passenger-view image that simulates the scene a passenger sees out of the vehicle, and displays it on the mobile device. Using this image as a base together with the result of vehicle localization, the system computes the position of each building on the image and overlays the building's name there, achieving the AR guidance effect.
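The omni-to-perspective transformation is typically done as an inverse mapping: for each pixel of the desired passenger view, compute its viewing ray's azimuth and elevation, then look up the corresponding omni-image pixel. The sketch below assumes an idealized equiangular omni-camera with a vertical optical axis (image radius proportional to the angle off the axis), which may well differ from the actual device's projection model.

```python
import math

def perspective_to_omni(u, v, W, H, fov, omni_cx, omni_cy, omni_radius):
    """Map pixel (u, v) of a W x H perspective 'passenger view' with
    horizontal field of view `fov` (radians) to omni-image coordinates.

    Assumes an equiangular omni-camera looking straight up: image radius
    grows linearly with the ray's zenith angle, reaching omni_radius at
    the horizon (90 degrees from the axis).
    """
    f = (W / 2) / math.tan(fov / 2)                 # focal length of virtual view
    # Viewing ray of (u, v): x right, y down, z forward (camera looks sideways)
    x, y, z = u - W / 2, v - H / 2, f
    azimuth = math.atan2(x, z)                      # angle in the horizontal plane
    elevation = math.atan2(-y, math.hypot(x, z))    # positive = above the horizon
    zenith = math.pi / 2 - elevation                # angle from the upward axis
    r = omni_radius * zenith / (math.pi / 2)        # equiangular radial mapping
    return (omni_cx + r * math.sin(azimuth), omni_cy - r * math.cos(azimuth))
```

The same forward relation, run in reverse for a building at a known map position relative to the localized vehicle, gives the on-screen point at which to overlay the building's name.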
Good experimental results are also presented, showing the feasibility of the proposed system and methods for real applications. |
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT070256009 http://hdl.handle.net/11536/74986 |
Appears in Collections: | Thesis |