
Graduate Student: Yi-Tzu Sung (宋逸慈)
Thesis Title: Navigation Map Positioning via Google Map Image (利用Google Map圖像於地區導覽圖之定位)
Advisor: Chuan-Kai Yang (楊傳凱)
Committee Members: Yuan-Cheng Lai (賴源正), Bor-Shen Lin (林伯慎)
Degree: Master
Department: Department of Information Management, School of Management
Year of Publication: 2022
Academic Year of Graduation: 110 (2021/2022)
Language: Chinese
Pages: 80
Keywords: Navigation Map Positioning, Google Map Image, Image Processing, Feature Matching
Hits: 288; Downloads: 0

When visitors arrive at a park or tourist area, they usually rely on the locally posted navigation map to learn about the area and to find its facilities. Although large navigation maps are placed at several spots, it is hard to tell where one is once one walks away from them. To address this, this thesis aims to mark the user's position directly on the navigation map.

To this end, this thesis proposes a method that combines the navigation map with a Google Map image to locate the user on the navigation map. Because the two images share partially similar shape structures (such as building outlines and roads), image matching is used to obtain an overall correspondence between them, from which the position is derived. The user frames a similar region on each of the two images; the system first preprocesses the images, then detects feature points and matches features within the framed regions, estimates the position from the matching result, moves the estimated point onto a road, and finally marks the resulting point on the navigation map. Over repeated experiments, the average error percentage of the estimated positions is 0.77%.
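To make the preprocessing stage concrete, the following is a minimal sketch assuming an OpenCV implementation; the file names, denoising strength, Canny thresholds, and kernel size are illustrative assumptions rather than the thesis's exact settings, and the text/icon-removal step is omitted.

```python
# A minimal preprocessing sketch, assuming OpenCV; file names and all
# parameter values are illustrative assumptions, not the thesis's settings.
import cv2

nav = cv2.imread("navigation_map.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
gmap = cv2.imread("google_map.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file

def preprocess(img):
    # Suppress texture and compression noise before extracting edges.
    denoised = cv2.fastNlMeansDenoising(img, None, h=10)
    # Edge detection keeps the shape structure (building outlines, road borders).
    edges = cv2.Canny(denoised, 50, 150)
    # Dilation thickens thin edges so both maps end up with comparable line widths.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.dilate(edges, kernel)

nav_edges = preprocess(nav)
gmap_edges = preprocess(gmap)
```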


When people arrive at a park or tourist attraction, they usually use the locally provided navigation map to get information about the area and to find the locations of its facilities. Although large navigation maps are placed at several spots for visitors, it is still difficult to know where we are once we move away from them. To solve this issue, we would like to pinpoint our location directly on the navigation map.

For this purpose, this thesis proposes a method that combines the navigation map and the Google Map image to locate a user's position on the navigation map. Since the shapes and structures in these two images (such as building outlines and road lines) are partially similar, we use image matching techniques to obtain an overall correspondence between them. First, the user selects a pair of similar regions from the two images. The system performs image pre-processing and then feature detection and matching on the selected regions; the matching result is used to estimate the position. The estimated positioning points are then moved onto the roads of the navigation map and finally marked on it. Over several experiments, the average error percentage of the estimated positioning points is 0.77%.
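The matching-and-positioning step can be sketched as below, again assuming an OpenCV implementation with ORB features (one of several usable detectors; the thesis does not commit to it here). The framed regions and the query point are hypothetical values.

```python
# A minimal matching-and-positioning sketch, assuming OpenCV; the framed
# regions and the query point below are hypothetical values.
import cv2
import numpy as np

nav = cv2.imread("navigation_map.png", cv2.IMREAD_GRAYSCALE)
gmap = cv2.imread("google_map.png", cv2.IMREAD_GRAYSCALE)

# User-framed similar regions on each image (rows y0:y1, cols x0:x1).
nav_roi = nav[100:500, 200:700]
gmap_roi = gmap[50:450, 150:650]

# Detect keypoints and binary descriptors on both framed regions.
orb = cv2.ORB_create(nfeatures=2000)
kp_g, des_g = orb.detectAndCompute(gmap_roi, None)
kp_n, des_n = orb.detectAndCompute(nav_roi, None)

# Hamming-distance matching with Lowe's ratio test to keep unambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des_g, des_n, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

# Estimate a homography (needs at least 4 good matches) from the Google Map
# region to the navigation-map region, rejecting outliers with RANSAC.
src = np.float32([kp_g[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_n[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Map the user's pixel position in the Google Map region onto the navigation map.
user_pt = np.float32([[[320.0, 240.0]]])            # hypothetical position
mapped = cv2.perspectiveTransform(user_pt, H)[0, 0]
print("estimated position in navigation-map region:", mapped)
```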

Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1  Introduction
  1.1  Motivation and Objectives
  1.2  Thesis Organization
Chapter 2  Literature Review
  2.1  Image Feature Detection
  2.2  Image Matching
  2.3  Navigation Systems
Chapter 3  Algorithm Design and System Implementation
  3.1  System Workflow
  3.2  System Input
  3.3  Image Preprocessing
    3.3.1  Edge Detection
    3.3.2  Removal of Text and Small Icons
    3.3.3  Line Unification
  3.4  Image Matching
    3.4.1  Region Framing
    3.4.2  Feature Matching
  3.5  Positioning on the Navigation Map
    3.5.1  Initial Positioning
    3.5.2  Corrected Positioning
  3.6  Positioning onto the Road
    3.6.1  Obtaining the Road Color
    3.6.2  Obtaining the Road Lines
    3.6.3  Moving the Positioning Point onto the Road
Chapter 4  Results and Evaluation
  4.1  System Environment
  4.2  Experimental Results
  4.3  Evaluation
    4.3.1  Positioning-Point Evaluation
    4.3.2  Timing Evaluation
  4.4  Limitations
Chapter 5  Conclusions and Future Work
References
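Section 3.6 of the outline above covers obtaining the road color and lines and moving the estimated point onto a road. As a rough illustration of that last step only, the sketch below snaps a point to the nearest road pixel found by color thresholding; the sampled road color, tolerance, and coordinates are all illustrative assumptions, not the thesis's actual procedure.

```python
# A rough road-snapping sketch, assuming OpenCV/NumPy; the road color,
# tolerance, and estimated point are illustrative values.
import cv2
import numpy as np

nav_bgr = cv2.imread("navigation_map.png")          # color navigation map
estimated = (412, 238)                              # hypothetical (x, y) from matching

# Build a road mask by thresholding around a sampled road color (BGR) -- assumption.
road_color = np.array([200, 200, 200], dtype=np.int16)
tolerance = 25
diff = np.abs(nav_bgr.astype(np.int16) - road_color)
road_mask = np.all(diff <= tolerance, axis=2)

# Move the estimated point to the nearest road pixel (brute force over the mask).
ys, xs = np.nonzero(road_mask)
d2 = (xs - estimated[0]) ** 2 + (ys - estimated[1]) ** 2
snapped = (int(xs[d2.argmin()]), int(ys[d2.argmin()]))
print("positioning point moved onto the road:", snapped)
```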


Full text release date: 2025/08/11 (campus network)
Full text release date: 2028/08/11 (off-campus network)
Full text release date: 2028/08/11 (National Central Library: Taiwan thesis and dissertation system)