| Author | 賴以衛 (Yi-Wei Lai) |
|---|---|
| Title | 以3D深度學習及點雲匹配技術進行機械手臂自動化複雜零件分類 (Manipulator-based Auto-classification of Complex Parts Using 3D Deep Learning and Points Registration Techniques) |
| Advisor | 林清安 (Ching-An Lin) |
| Committee members | 謝文賓 (Win-Bin Shieh), 陳羽薰 (Yu-Hsun Chen) |
| Degree | Master |
| Department | Department of Mechanical Engineering, College of Engineering |
| Year of publication | 2022 |
| Academic year of graduation | 110 (2021–2022) |
| Language | Chinese |
| Pages | 244 |
| Keywords (Chinese) | 3D CAD、點資料處理、深度學習、隨機取放、機械手臂 |
| Keywords (English) | 3D CAD, point data processing, deep learning, random bin picking, manipulator |
Sorting parts with a manipulator is one of the principal tasks on an automated production line. A structured-light scanner combined with deep learning and point cloud registration can rapidly identify the type of each part on the line and automatically compute the picking information for each part. However, as the variety, quantity, and geometric complexity of the parts increase, preparing training data for deep learning becomes very time-consuming, and registration error grows as more complex parts are matched. To overcome these problems, this thesis applies point data processing techniques to the scanned point clouds of the parts, reducing both the data preparation time and the registration error, and on that basis develops a "random picking/classification system for complex parts" that automates part sorting.

Through a series of filtering, segmentation, and data-augmentation operations on the scanned point clouds, a large point cloud data set is generated automatically from a small number of scans and used to train a deep learning model that rapidly classifies part types on the shop floor. RANSAC combined with ICP then precisely registers each part's 3D CAD model to its scanned point cloud, so that gripping information derived beforehand from the CAD model can be converted into gripping information for the part as actually placed; based on the recognition result and the corresponding coordinate transformation, a manipulator completes the picking and sorting of the parts. In addition to detailing how point data processing is used to build the deep learning recognition model and to achieve precise point cloud registration, the thesis also outlines how gripping information is derived from the 3D CAD model, and finally validates the feasibility of the proposed method and the practicality of the developed system with a variety of complex parts of differing geometric characteristics.
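The augmentation step summarized in the abstract, in which a few scans are expanded into a large training set, can be sketched roughly as follows. This is a minimal NumPy illustration, not the thesis's implementation: the yaw-only rotation (parts resting on a flat surface), the jitter level mimicking sensor noise, and the fixed point count per training sample are all assumed parameters.

```python
import numpy as np

def augment_cloud(points, n_copies=8, jitter_sigma=0.001, n_keep=1024, seed=0):
    """Generate augmented training copies of one scanned cloud (N x 3).

    Each copy receives a random rotation about the vertical axis (only
    yaw varies for parts lying on a table), Gaussian jitter to imitate
    scanner noise, and random downsampling to a fixed point count so
    every training sample has the same size.
    """
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_copies):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        p = points @ R.T                                  # random yaw
        p = p + rng.normal(0.0, jitter_sigma, size=p.shape)  # sensor noise
        idx = rng.choice(len(p), size=min(n_keep, len(p)), replace=False)
        out.append(p[idx])                                # downsample
    return out

clouds = augment_cloud(np.random.rand(4096, 3), n_copies=4)
print(len(clouds), clouds[0].shape)  # 4 (1024, 3)
```

In practice the augmented clouds would be labeled with their part type and fed to a point-cloud classifier such as PointNet.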
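The conversion of pre-analyzed gripping information is, at its core, a rigid-body transform: once RANSAC + ICP yields the 4x4 transform from the CAD-model frame to the scene frame, grasp points map by rotation plus translation, while approach directions map by rotation alone. A hypothetical sketch (the function name and arguments are illustrative, not taken from the thesis):

```python
import numpy as np

def transform_grasp(T_cad_to_scene, grasp_point, approach_dir):
    """Map a grasp pre-computed on the CAD model into the scene frame.

    T_cad_to_scene is the 4x4 homogeneous rigid transform produced by
    registration. Points use rotation + translation; direction vectors
    use rotation only (translation does not apply to directions).
    """
    R = T_cad_to_scene[:3, :3]
    t = T_cad_to_scene[:3, 3]
    return R @ grasp_point + t, R @ approach_dir

# Example: a 90-degree rotation about z plus a 0.1 m lift.
T = np.eye(4)
T[:3, :3] = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
T[:3, 3] = [0.0, 0.0, 0.1]
p, d = transform_grasp(T, np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0]))
print(p, d)  # point rotated and lifted; approach direction only rotated
```

The manipulator controller would then receive `p` and `d` (expressed in the robot base frame after a further hand-eye calibration transform) to execute the pick.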