Title: 利用智慧眼鏡及電腦視覺技術在藝術展覽空間作擴增實境式導覽
An Augmented Reality-based System for Art-exhibition Guidance by Computer Vision Techniques Using Smart Glasses
Authors: Lin, Yu-Hua (林育樺); Tsai, Wen-Hsiang (蔡文祥); Chen, Yong-Sheng (陳永昇)
Department: Institute of Multimedia Engineering (多媒體工程研究所)
Keywords: Computer Vision Techniques; Indoor Guidance; Augmented Reality; Smart Glasses
Issue Date: 2016
Abstract: When people visit an unfamiliar indoor art exhibition, they may fail to find the artworks they wish to see because the exhibition space is large or the visiting routes are complicated. To address this, a real-time augmented reality (AR)-based guidance system built on a pair of smart glasses is proposed in this study for use in indoor art-exhibition spaces. The system uses computer vision and AR techniques to achieve automatic guidance: it localizes the user by image matching, plans the shortest path to a selected artwork, and augments artwork information on the smart-glasses screen.

First, before guidance begins, the system goes through a learning procedure that builds an environment map, consisting of a plan drawing of the exhibition space, the related information of the artworks, and a database of images of the exhibition space. The images are taken at space nodes selected by rules proposed in this study and arranged in a grid-like fashion, with eight images covering 360 degrees at each node; all the learned data are saved into a database for use in the guidance phase.

Next, a user-localization method is proposed. The server side of the system receives the image taken by the client-side camera on the smart glasses and matches it against the image database using speeded-up robust features (SURFs) extracted from the images. From the feature-point correspondences between the user's image and the matched database image, a camera-calibration method based on image vanishing points is applied to derive the camera's position and orientation, which are taken as the user's position and viewing direction. The client side then receives the localization result from the server side and uses it to draw an AR guidance arrow on the smart-glasses screen. To meet the real-time requirement, methods for speeding up image transmission and processing are also proposed for use in both the learning phase and the guidance phase, so that the user localization, the AR guidance arrow, and the artwork information can all be updated on the smart-glasses screen in real time.

Finally, a method for path planning and AR-based guidance is proposed. Based on the derived user position, the system plans a shortest path from the user's location to a pre-selected destination artwork using Dijkstra's algorithm. A 3D AR guidance arrow is augmented on the smart-glasses screen to show the user where to go and which direction to turn, and when the user arrives at the desired artwork, the artwork's name and detailed information are augmented on the screen for appreciation. Good experimental results are presented as well, showing the feasibility of the proposed methods and of the system for real-time art-exhibition guidance applications.
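To make the image-matching step concrete, the following is a minimal sketch (not code from the thesis) of SURF extraction and matching between a query frame from the glasses camera and one database image. It assumes an OpenCV build that includes the non-free xfeatures2d module; the file names are placeholders.

```python
# Minimal sketch: SURF matching of a query frame against one database image.
# Assumes OpenCV built with the contrib "xfeatures2d" module (SURF is non-free);
# the file names below are placeholders, not paths from the thesis.
import cv2

query = cv2.imread("query_frame.jpg", cv2.IMREAD_GRAYSCALE)
db_img = cv2.imread("node_03_view_5.jpg", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_q, des_q = surf.detectAndCompute(query, None)
kp_d, des_d = surf.detectAndCompute(db_img, None)

# FLANN-based matcher with Lowe's ratio test to keep reliable correspondences.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des_q, des_d, k=2)
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])

# In a full system, the database image with the most "good" matches would be
# taken as the user's current node/viewing direction; here we only print a score.
print(f"good SURF matches: {len(good)}")
```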
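The thesis derives the user's position and orientation with a camera-calibration method based on image vanishing points, which is not reproduced here. As a generic stand-in for the pose-from-correspondences step, the sketch below uses a PnP solve instead, assuming (hypothetically) that the database stores 3D map coordinates for its SURF features and that the glasses camera matrix K is known; all numbers are synthetic.

```python
# Generic illustration only: recover camera position/orientation from matched
# feature points via solvePnP. This is NOT the thesis's vanishing-point method;
# K and all points are assumed/synthetic values for demonstration.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],   # assumed focal length / principal point
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthetic stand-in for matched features: 3D map points and their projections
# under a known "true" camera pose.
rng = np.random.default_rng(1)
pts3d = rng.uniform([-2, -1, 3], [2, 1, 8], size=(20, 3)).astype(np.float64)
rvec_true = np.array([0.0, 0.2, 0.0])
tvec_true = np.array([0.5, 0.0, 1.0])
pts2d = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, None)[0].reshape(-1, 2)

ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, None)
R, _ = cv2.Rodrigues(rvec)
user_position = (-R.T @ tvec).ravel()             # camera centre in map coordinates
user_heading = R.T @ np.array([0.0, 0.0, 1.0])    # optical axis in map coordinates

print("estimated user position:", user_position)
print("estimated viewing direction:", user_heading)
```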
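For the path-planning step, the sketch below shows Dijkstra's algorithm over a small hypothetical graph of exhibition-space nodes; the node names and distances are made up for illustration and do not come from the thesis.

```python
# Minimal sketch of shortest-path planning with Dijkstra's algorithm over a
# hypothetical graph of exhibition-space nodes (names/distances are made up).
import heapq

def dijkstra_shortest_path(graph, start, goal):
    """Return (total_distance, node_list) for the shortest path start -> goal."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical node graph: keys are space nodes, values map neighbors to distances (m).
exhibition_graph = {
    "entrance": {"hall_A": 5.0, "hall_B": 9.0},
    "hall_A": {"entrance": 5.0, "artwork_7": 4.0},
    "hall_B": {"entrance": 9.0, "artwork_7": 6.0},
    "artwork_7": {"hall_A": 4.0, "hall_B": 6.0},
}

print(dijkstra_shortest_path(exhibition_graph, "entrance", "artwork_7"))
# -> (9.0, ['entrance', 'hall_A', 'artwork_7'])
```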
URI: http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070456622
http://hdl.handle.net/11536/138398
Appears in Collections: Thesis