Author: 宋逸慈 Yi-Tzu Sung
Thesis Title: 利用Google Map圖像於地區導覽圖之定位 (Navigation Map Positioning via Google Map Image)
Advisor: 楊傳凱 Chuan-Kai Yang
Committee: 賴源正 Yuan-Cheng Lai, 林伯慎 Bor-Shen Lin
Degree: 碩士 Master
Department: 管理學院 - 資訊管理系 Department of Information Management
Thesis Publication Year: 2022
Graduation Academic Year: 110
Language: Chinese
Pages: 80
Keywords (in Chinese): 導覽圖定位, Google地圖圖像, 影像處理, 特徵比對
Keywords (in other languages): Navigation Map Positioning, Google Map Image, Image Processing, Feature Matching
When arriving at a park or tourist area, visitors usually rely on the local navigation map to learn about the area and to locate its facilities. Although large navigation maps are placed at several spots, it becomes difficult to know one's own location once away from them. To address this, this thesis aims to mark the user's position directly on the navigation map.
To this end, this thesis proposes a method that combines the navigation map with a Google Map image to determine the user's position on the navigation map. Since the shape structures of the two images (such as building outlines and roads) are partially similar, we use image matching to obtain an overall correspondence and perform positioning. The user first selects similar regions from the two images; the system then pre-processes the images, detects feature points, and matches features within the selected regions, using the matching result to estimate the position. Finally, the estimated point is moved onto a road and marked on the navigation map. Over multiple experiments, the average error percentage of the estimated positioning points is 0.77%.
When people arrive at a park or tourist attraction, they usually use the locally provided navigation map to get information about the area and to find the locations of its facilities. Although large navigation maps are placed at several spots for visitors to use, it is still difficult to know where we are once we move away from them. To solve this issue, we wish to pinpoint our location directly on the navigation map.
For this purpose, this thesis proposes a method that combines the navigation map with a Google Map image to locate a user's position on the navigation map. Since the shapes and structures of these two images (such as building outlines, roads, etc.) are partially similar, we use image matching techniques to obtain an overall correspondence. First, a user selects some similar regions from the two images. The system then performs image pre-processing, followed by feature matching on the selected regions, and the matching result is used to estimate the position. The estimated positioning points are then moved onto the roads of the navigation map, and finally marked on it. Over several experiments, the average error percentage of the estimated positioning points is 0.77%.
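The positioning step above transfers a point from the Google Map image into navigation-map coordinates using the feature-matching result. The thesis's exact implementation is not reproduced here; as a minimal sketch, the transfer can be modeled as fitting a homography to the matched keypoint pairs with the direct linear transform (DLT) and projecting the user's point through it. The function names and the pure-translation correspondences in the example are illustrative assumptions, not the author's code.

```python
import numpy as np

def estimate_homography(src, dst):
    """Fit a 3x3 homography H with dst ~ H @ src via the DLT algorithm.
    src, dst: (N, 2) arrays of matched pixel coordinates, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the right null vector of the constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Map one (x, y) point through H with the perspective divide."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w

# Four matched keypoints related by a pure translation of (10, 5).
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
dst = src + np.array([10.0, 5.0])
H = estimate_homography(src, dst)
print(project(H, (0.5, 0.5)))  # approximately (10.5, 5.5)
```

In practice the correspondences would come from a robust detector-matcher (the thesis surveys SIFT, SURF, ORB, AKAZE, and similar), and an outlier-tolerant fit such as RANSAC would replace the plain least-squares DLT; the thesis additionally snaps the projected point to the nearest road before marking it on the navigation map.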