
Student: Si-Yu Lu (陸思宇)
Title: LiDAR Localization System Based on Multi-scale Point Cloud Features (基於多尺度點雲特徵之光達定位系統)
Advisor: Yung-Yao Chen (陳永耀)
Committee: Yuan-Hsiang Lin (林淵翔), Ching-Shun Lin (林敬舜), Yung-Yao Chen (陳永耀), Kai-Lung Hua (花凱龍)
Degree: Master
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2022
Academic year of graduation: 110
Language: Chinese
Pages: 60
Keywords (Chinese): 點雲配准、深度學習、車輛定位
Keywords (English): Point Cloud Registration, Deep Learning, Vehicle Localization

This thesis proposes a localization network model that uses LiDAR point cloud information to achieve high-precision localization in a learning-based way, with better robustness and a smaller hardware footprint than existing learning-based methods. The model is implemented end to end: keypoint extraction from point cloud frames, matching of point-pair features, and output of the vehicle pose are all constructed as deep neural networks. To enlarge the model's search range for corresponding points while reducing hardware resource usage, the thesis adopts an iterative registration method over different search scales, matching point pairs through the deep network's feature distances and a regularization layer, which significantly improves localization accuracy when only a small number of registration points is used.
Compressing the search space through repeated iterative matching lets the model find the best position at each iteration and approach the optimal matching points, while a specially designed regularization layer normalizes the features of each iteration, so the model can still generate valid registration probabilities across multiple scale iterations. The Odometry sequences of the KITTI dataset are used for training and testing; the proposed model achieves an average angle error of 0.535 and an average translation error of 0.127 meters. Compared with other existing methods, its low localization error and fast runtime demonstrate its feasibility for localization applications.


This thesis proposes a localization network model using LiDAR point cloud information, which achieves high-precision localization in a learning-based way and has better robustness and a smaller hardware footprint than existing learning-based methods. The model is implemented end to end: keypoint extraction from point cloud frames, matching of point-pair features, and output of the vehicle pose are all constructed as deep neural networks. To increase the model's search range for corresponding points and reduce the occupation of hardware resources, this thesis adopts an iterative registration method based on different search scales, matching point pairs through the deep network's feature distances and a regularization layer, which significantly improves the positioning accuracy when a smaller number of registration points is used.
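The matching step described above — comparing keypoints by feature distance and normalizing the distances into registration probabilities — can be sketched in plain NumPy. This is an illustrative toy, not the thesis's learned network: the name `soft_correspondences`, the `temperature` parameter, and the use of precomputed feature vectors in place of a deep feature extractor are all assumptions for the sake of the example.

```python
import numpy as np

def soft_correspondences(src_feat, dst_feat, dst_pts, temperature=0.1):
    """For each source keypoint, build a virtual corresponding point as a
    probability-weighted average of candidate map points, with weights given
    by a softmax over negative feature distances (toy stand-in for a learned
    matching-plus-normalization layer)."""
    # Pairwise feature distances: shape (N_src, N_dst)
    d = np.linalg.norm(src_feat[:, None, :] - dst_feat[None, :, :], axis=-1)
    # Normalize distances into registration probabilities per source point
    logits = -d / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    # Virtual corresponding point = probability-weighted average of candidates
    return w @ dst_pts, w
```

Because the weighted average is differentiable, a correspondence built this way can sit inside an end-to-end network, unlike a hard nearest-neighbor assignment.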
Compressing the search space through repeated iterative matching allows the model to find the best position at each iteration and approach the optimal matching points, while a specially designed regularization layer normalizes the features of each iteration, allowing the model to generate registration probabilities efficiently across multiple scale iterations. The Odometry sequences of the KITTI dataset are used for training and testing. The proposed model achieves an average angle error of 0.535 and an average displacement error of 0.127 meters; compared with other existing methods, its low localization error and high runtime speed prove its feasibility in localization applications.
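The multi-scale idea above — shrinking the search space round by round and re-estimating the pose each time — can be illustrated with a classical stand-in: nearest-neighbor matching within a decreasing radius plus a closed-form SVD (Kabsch) rigid fit, replacing the learned matching and regularization layers. The function names and the scale schedule are assumptions for illustration, not the thesis's actual pipeline.

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form least-squares rigid transform (R, t) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation (det(R) = +1)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cd - R @ cs

def iterative_registration(src, dst, scales=(2.0, 1.0, 0.5), iters_per_scale=5):
    """Refine the pose over progressively smaller search radii: at each scale,
    pair every source point with its nearest neighbor within the radius and
    re-fit the rigid transform from the surviving pairs."""
    R, t = np.eye(3), np.zeros(3)
    for radius in scales:                     # shrink the search space per scale
        for _ in range(iters_per_scale):
            cur = src @ R.T + t               # apply the current pose estimate
            d = np.linalg.norm(cur[:, None] - dst[None, :], axis=-1)
            nn = d.argmin(axis=1)             # nearest candidate per point
            keep = d[np.arange(len(src)), nn] < radius
            if keep.sum() < 3:                # too few pairs to fit a pose
                break
            R, t = kabsch(src[keep], dst[nn[keep]])
    return R, t
```

Starting with a large radius tolerates a coarse initial pose, while the later, tighter radii discard spurious pairs — the same accuracy-versus-search-range trade-off the learned multi-scale registration is designed to resolve.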

Table of Contents
Acknowledgements
Abstract (Chinese)
Abstract
Table of Contents
List of Figures
List of Tables
List of Abbreviations
Chapter 1 Introduction
  1.1 Introduction
  1.2 Research Motivation
  1.3 Thesis Organization
Chapter 2 Related Work
  2.1 Geometry-based Methods
    2.1.1 Global Registration Methods
    2.1.2 Feature-based Methods
  2.2 Deep Learning-based Methods
Chapter 3 Design of the Localization Network Architecture
  3.1 Model Architecture
  3.2 Keypoint Selection from Point Cloud Scan Frames
  3.3 Reference Point Generation from the Pre-built Point Cloud Map
  3.4 Multi-scale Iterative Registration
  3.5 Loss Function
Chapter 4 Experiments
  4.1 Dataset
  4.2 Execution Environment
    4.2.1 Hardware Specifications
    4.2.2 Software Environment
  4.3 Hyperparameters
  4.4 Evaluation Metrics
  4.5 Experimental Results
    4.5.1 Accuracy Comparison
    4.5.2 Runtime
    4.5.3 Ablation Study
    4.5.4 Visualization
Chapter 5 Conclusion and Future Work
  5.1 Conclusion
  5.2 Future Work
References


Full text release date: 2027/08/22 (campus network, off-campus network, and National Central Library: Taiwan NDLTD system)