
Student: 賴以衛
Yi-Wei Lai
Thesis Title: 以3D深度學習及點雲匹配技術進行機械手臂自動化複雜零件分類
Manipulator-based Auto-classification of Complex Parts Using 3D Deep Learning and Points Registration Techniques
Advisor: 林清安
Ching-An Lin
Committee Members: 謝文賓
Win-Bin Shieh
陳羽薰
Yu-Hsun Chen
Degree: Master
Department: College of Engineering - Department of Mechanical Engineering
Year of Publication: 2022
Graduation Academic Year: 110 (2021-2022)
Language: Chinese
Number of Pages: 244
Keywords: 3D CAD, point data processing, deep learning, random bin picking, manipulator
  • Part classification with a robotic manipulator is one of the principal tasks of an automated production line. A structured light scanner, combined with AI deep learning and point cloud registration, can rapidly identify the type of each part on the line and automatically compute the picking information for every part. However, as the types, quantity, and geometric complexity of the parts grow, preparing data for deep learning consumes a great deal of time, and the registration error likewise increases when more complex parts are matched. To overcome these problems, this thesis applies point data processing techniques to the point clouds of parts, reducing both the time cost of data preparation and the error of point cloud registration, and on that basis develops a "random picking/classification system for complex parts" to accomplish automated part classification.
    Through a series of filtering, segmentation, and dataset augmentation operations on the scanned point clouds of parts, this thesis automatically generates a large point cloud dataset from a small number of scans and uses it to train a deep learning model that rapidly identifies part types on the automated shop floor. RANSAC combined with the ICP method is then used to precisely register a part's 3D CAD model with its scanned point cloud, converting the gripping information derived beforehand from the CAD model into gripping information for the part as it actually lies; based on the recognition results and the corresponding coordinate transformation, a robotic manipulator grips and classifies the parts. Besides detailing how point data processing is used to build the deep learning recognition model and to achieve precise point cloud registration, the thesis also outlines how gripping information is extracted from 3D CAD models, and finally verifies the feasibility of the proposed methods and the practicality of the developed system with a variety of complex parts of different geometric characteristics.


    Sorting parts with manipulators is one of the main tasks of an automated production line. Structured light scanners combined with AI deep learning and point cloud registration can not only promptly identify the types of parts on the production line but also automatically calculate the picking information of each part. However, as the type, quantity, and geometric complexity of the parts increase, data preparation for deep learning consumes an enormous amount of time; by the same token, the registration error is amplified when more complex parts are matched. To overcome these problems, this thesis processes the point clouds of parts with point data processing techniques, reducing both the time consumed by data preparation and the error of point cloud registration, and on this basis develops a "random picking/classification system for complex parts" to achieve automatic part classification.
    In this thesis, a series of filtering, segmentation, and augmentation operations on the scanned point clouds of 3D parts automatically generates an abundant point cloud dataset from a small quantity of scanned data. A deep learning model is then trained on this dataset for rapid recognition of the type of each part on the automated production line. Furthermore, the RANSAC and ICP methods are applied to precisely register the 3D CAD model of a complex part with its scanned point cloud, so that the clamping information generated from prior analysis of the CAD model can be converted into clamping information for the actual placement of the part. Finally, a manipulator grips and sorts the complex parts according to the recognized part types and the corresponding coordinate transformations. In addition to detailing how point data processing is employed to construct the deep learning recognition model and to achieve precise registration of point clouds, this thesis also describes the procedure for obtaining clamping information from 3D CAD models. A variety of complex parts with different geometric characteristics are then handled to verify the feasibility of the proposed method and the practicability of the developed system.
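    The data preparation pipeline summarized above (downsampling, removal of lighting-induced noise points, segmentation into per-part clouds, and augmentation of a few scans into a large training set) can be sketched in Python. The sketch below uses the Open3D library for brevity, whereas the thesis builds on the Point Cloud Library; the file name, voxel size, clustering parameters, and copy count are illustrative assumptions.

    # Minimal sketch, assuming Open3D and illustrative parameter values.
    import numpy as np
    import open3d as o3d

    def preprocess_scan(path, voxel=1.0):
        """Downsample a raw scan and drop isolated noise points."""
        cloud = o3d.io.read_point_cloud(path)  # e.g. "scan.ply" (assumed name)
        cloud = cloud.voxel_down_sample(voxel_size=voxel)
        cloud, _ = cloud.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
        return cloud

    def segment_parts(cloud, eps=2.0, min_points=50):
        """Split the bin scan into one cluster per part (Euclidean clustering)."""
        labels = np.asarray(cloud.cluster_dbscan(eps=eps, min_points=min_points))
        points = np.asarray(cloud.points)
        return [points[labels == k] for k in range(labels.max() + 1)]

    def augment(points, n_copies=100):
        """Expand one part cloud into many: random rotation about Z plus jitter."""
        clouds = []
        for _ in range(n_copies):
            theta = np.random.uniform(0.0, 2.0 * np.pi)
            c, s = np.cos(theta), np.sin(theta)
            rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            clouds.append(points @ rot.T + np.random.normal(0.0, 0.01, points.shape))
        return clouds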
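    For the recognition step, deep learning is applied directly to point sets. A PointNet-style classifier is the canonical architecture for this: a shared per-point MLP followed by a symmetric max-pooling that makes the prediction invariant to point order. The PyTorch sketch below illustrates the idea only; the layer sizes and the five-class output are assumptions, not the configuration used in the thesis.

    # Minimal PointNet-style classifier sketch; sizes and class count assumed.
    import torch
    import torch.nn as nn

    class TinyPointNet(nn.Module):
        def __init__(self, num_classes=5):
            super().__init__()
            # Shared MLP applied to every point independently (1x1 convolutions).
            self.mlp = nn.Sequential(
                nn.Conv1d(3, 64, 1), nn.ReLU(),
                nn.Conv1d(64, 128, 1), nn.ReLU(),
                nn.Conv1d(128, 1024, 1), nn.ReLU())
            self.head = nn.Sequential(
                nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, num_classes))

        def forward(self, pts):            # pts: (batch, 3, n_points)
            feat = self.mlp(pts)           # (batch, 1024, n_points)
            feat = feat.max(dim=2).values  # order-invariant global feature
            return self.head(feat)         # class logits

    # Usage: logits = TinyPointNet()(torch.randn(8, 3, 1024))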
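    The coarse-to-fine registration described above (a RANSAC search over local feature correspondences for the initial alignment, refined by ICP) and the subsequent mapping of CAD-frame gripping points onto the part's actual pose might look like the following. Open3D's registration pipeline with FPFH features stands in here for the thesis's PCL-based implementation; the voxel size and convergence parameters are assumptions.

    # Minimal coarse-to-fine registration sketch, assuming Open3D.
    import numpy as np
    import open3d as o3d

    def fpfh(cloud, voxel):
        """Estimate normals, then compute FPFH descriptors for every point."""
        cloud.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        return o3d.pipelines.registration.compute_fpfh_feature(
            cloud, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))

    def register(model_cloud, scan_cloud, voxel=1.0):
        """Return the 4x4 transform taking the CAD-model cloud onto the scan."""
        # Coarse alignment: RANSAC over FPFH feature correspondences.
        coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            model_cloud, scan_cloud, fpfh(model_cloud, voxel), fpfh(scan_cloud, voxel),
            True, voxel * 1.5,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
            3, [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
        # Fine alignment: point-to-point ICP seeded with the RANSAC result.
        fine = o3d.pipelines.registration.registration_icp(
            model_cloud, scan_cloud, voxel * 0.5, coarse.transformation,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return fine.transformation

    def transform_grip_points(grip_points, T):
        """Map (n, 3) gripping points from the CAD frame to the scanned pose."""
        homo = np.hstack([grip_points, np.ones((len(grip_points), 1))])
        return (homo @ T.T)[:, :3]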

    Table of Contents
    Abstract (Chinese) I
    Abstract (English) II
    Acknowledgments IV
    Table of Contents V
    List of Figures X
    List of Tables XIX
    Chapter 1 Introduction 1
    1.1 Research Motivation and Objectives 1
    1.2 Research Methods 3
    1.3 Literature Review 4
    1.4 Thesis Organization 26
    Chapter 2 Overview of the 3D-Point-Cloud-Based Recognition, Gripping, and Classification System for Complex Parts 28
    2.1 System Hardware 28
    2.1.1 Structured Light Scanner 28
    2.1.2 EPSON Manipulator 30
    2.1.3 Schunk Pneumatic Gripper 31
    2.2 System Environment and Software Development Tools 33
    2.2.1 Overview of the Overall System Environment 33
    2.2.2 Creo Parametric Toolkit 34
    2.2.3 HP Pro S3/David SDKs 34
    2.2.4 Point Cloud Library 34
    2.2.5 PyTorch 35
    2.2.6 EPSON Robot API 36
    2.3 System Workflow 36
    2.4 Overview of 3D Point Clouds of Complex Parts 40
    2.5 Identifying Part Types from 3D Point Clouds with Deep Learning 44
    2.5.1 Deep Learning Frameworks for 3D Point Clouds 45
    2.5.2 Preprocessing of Training Data for Deep Learning 46
    2.5.3 Deep Learning Training and Use of the Trained Model 49
    2.5.4 Training Results 50
    2.6 Obtaining Actual Gripping Information and Gripping Parts via Point Cloud Registration 52
    2.6.1 Methods for 3D Point Cloud Registration 54
    2.6.2 Comparison of Registration Results 54
    2.7 Factors Affecting Overall System Operation 57
    Chapter 3 Deep Learning on 3D Part Point Clouds 58
    3.1 Post-processing of 3D Point Cloud Data 58
    3.1.1 Downsampling 59
    3.1.2 Removing Noise Points Caused by Varying Light Sources 63
    3.1.3 Searching Point Data with a K-D Tree 66
    3.1.4 Segmenting the Point Cloud of Each Part 66
    3.1.5 Discussion 71
    3.2 Augmentation of 3D Point Cloud Datasets 72
    3.3 Training on 3D Point Cloud Data 79
    3.4 Training Results 92
    3.5 Part Recognition with the Trained Model 96
    3.6 Effects of Point Data Processing on Model Training 98
    3.6.1 Effect of Point Cloud Post-processing on Training Time and Recognition Rate 98
    3.6.2 Effect of Point Cloud Augmentation on Training Time and Recognition Rate 99
    Chapter 4 Computing Gripping Information via Point Cloud Registration 101
    4.1 Basic Concepts of Part Gripping 104
    4.2 Finding Gripping Point Pairs 107
    4.2.1 Obtaining Gripping Point Pairs with the Ray-Casting Method 107
    4.2.2 Determining the Number of Gripping Point Pairs by Point Distribution Density 110
    4.2.3 Excluding Unsuitable Reference Points 111
    4.3 Analyzing Gripping Point Pairs 112
    4.3.1 Excluding Unsuitable Feature Surfaces 112
    4.3.1.1 Distinguishing Concave from Convex Features with the Loop Method 114
    4.3.1.2 Determining the Final Adjacent Surfaces 116
    4.3.2 Opening Limits of the Gripper 119
    4.3.3 Interference Check in the Gripper Approach Direction 120
    4.3.4 Interference Check in the Gripper Closing Direction 129
    4.3.5 Gripping Stability 133
    4.4 Exporting Gripping Point Information to a File 134
    4.5 Obtaining the Actual Gripping Points of a Target Part via Point Cloud Registration 135
    4.5.1 Generating the Reference and Scanned Point Clouds 136
    4.5.2 Computing the FPFH of Point Clouds 141
    4.5.3 Coarse Registration of Point Clouds 143
    4.5.4 Fine Registration of Point Clouds 154
    4.5.5 Actual Coordinate Transformation after Registration 164
    4.5.6 Reliability of the Transformation Matrix after Registration 164
    4.6 Obtaining the Final Gripping Point Pairs in the Actual Working Environment 168
    4.6.1 Gripping Coordinate System of the Manipulator 168
    4.6.2 Excluding Interference between the Gripper and the Work Surface 173
    4.6.3 Excluding Interference between the Gripper and Parts during Approach 175
    4.6.4 Obtaining the Final Gripping Point Pairs 180
    4.6.5 Determining the Rotation Angles of the Manipulator 181
    Chapter 5 System Verification 187
    5.1 Obtaining Gripping Information of Parts 192
    5.2 Processing 3D Point Cloud Data 196
    5.2.1 Coordinate Transformation between the Manipulator and the Scanner 196
    5.2.2 Reducing the Number of Points in a Point Cloud 196
    5.2.3 Segmenting the Point Cloud and Determining the Gripping Order 199
    5.3 Identifying Part Types with 3D Deep Learning 200
    5.4 Registering the Reference and Scanned Point Clouds 201
    5.4.1 Computing the Actual Gripping Point Pairs 202
    5.5 Gripping and Classifying Parts 204
    5.6 Discussion of Results 208
    5.6.1 Successful Picking Rate and Its Influencing Factors 208
    5.6.2 Time and Accuracy of Part Recognition with Deep Learning 210
    Chapter 6 Conclusions and Future Research Directions 212
    6.1 Conclusions 212
    6.2 Future Research Directions 214
    References 216


    Full text available from 2025/01/25 (campus network)
    Full text not authorized for public access (off-campus network)
    Full text not authorized for public access (National Central Library: Taiwan NDLTD system)