
Author: Kuan-Cheng Lin (林冠成)
Title: Random Bin Picking of Complex Parts Using 3D CAD Model and Deep Learning of Point Cloud Registration (以3D CAD模型及點雲匹配之深度學習進行複雜零件的隨機拾取)
Advisor: Ching-An Lin (林清安)
Committee Members: Meng-Shiun Tsai (蔡孟勳), Chyi-Yeu Lin (林其禹)
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Publication Year: 2023
Graduation Academic Year: 111 (2022-2023)
Language: Chinese
Pages: 231
Keywords: 3D CAD, point data processing, deep learning, robotic arm, random bin picking


Abstract:
    Robotic arms are often paired with 3D vision to scan parts: the scanned point data is processed with various algorithms to compute each part's orientation, and the robotic arm then picks the part according to that orientation. When parts become complex, however, the scanned point data may be incomplete because of the parts' geometry and the limited scanning angles, and traditional point data processing techniques can no longer compute the orientation correctly. To address this problem, this thesis applies deep learning to the training of, and inference from, incomplete point data, so that complex parts can be picked quickly and accurately despite the scanning-angle constraints.
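    For context, the "traditional" techniques referred to above are typically local registration methods such as ICP (iterative closest point). Below is a minimal sketch of such a baseline, assuming the Open3D library; the file names and threshold are illustrative and not from the thesis:

        import numpy as np
        import open3d as o3d  # assumed library; any ICP implementation is analogous

        # Illustrative inputs, not from the thesis:
        source = o3d.io.read_point_cloud("scan_partial.ply")  # incomplete single-view scan
        target = o3d.io.read_point_cloud("model_points.ply")  # points sampled from the CAD model

        # Point-to-point ICP: repeatedly pair nearest neighbors and solve for the
        # rigid transform that minimizes their distances.
        result = o3d.pipelines.registration.registration_icp(
            source, target,
            max_correspondence_distance=5.0,  # in model units; tune to part size
            init=np.eye(4),                   # ICP needs a rough initial pose
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        print(result.transformation)          # estimated 4x4 rigid transform

    Because ICP pairs nearest neighbors starting from an initial guess, a heavily occluded scan yields many wrong pairs and the estimate drifts or settles in a local minimum; this is the failure mode that motivates the learned registration model.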
    In this thesis, the parts' 3D CAD models are analyzed automatically to obtain the information the robotic arm needs to grip each part. The 3D CAD models of several parts are then used to simulate the many possible positions and orientations in which parts can lie on the worktable, so that a large number of "simulated scanned point clouds" can be generated automatically; these are used to train a deep learning model that quickly and accurately identifies each part's type and orientation. In on-site automated operation, the trained model rapidly identifies the orientation of each part, the whole scene is reconstructed, and the precomputed gripping information is transformed to each part's actual orientation. Collision detection is performed for the parts as they actually lie, and finally the robotic arm picks and places them.
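    As a rough illustration of how a "simulated scanned point cloud" can be produced (a sketch only, again assuming Open3D and an illustrative file name; the thesis's own implementation may differ), one can sample points from the CAD model, place them at a random pose, and keep only the points visible from a virtual scanner using hidden point removal:

        import numpy as np
        import open3d as o3d  # assumed toolkit

        # Illustrative input: a tessellated CAD model of one part.
        mesh = o3d.io.read_triangle_mesh("part.stl")
        full = mesh.sample_points_poisson_disk(number_of_points=4096)

        # Random pose: the part lying on the worktable in an arbitrary orientation.
        R = full.get_rotation_matrix_from_xyz(np.random.uniform(0.0, 2.0 * np.pi, size=3))
        full.rotate(R, center=np.zeros(3))

        # Keep only the points visible from a virtual scanner above the part, so
        # the sample is incomplete the same way a real single-view scan would be.
        diameter = np.linalg.norm(full.get_max_bound() - full.get_min_bound())
        scanner = [0.0, 0.0, 4.0 * diameter]                  # virtual camera position
        _, visible = full.hidden_point_removal(scanner, radius=100.0 * diameter)
        partial = full.select_by_index(visible)

        # The pair (partial, R) becomes one training sample: the network learns to
        # recover the pose of the standard point cloud from the partial view.
        o3d.io.write_point_cloud("sample_0000.ply", partial)

    Once a pose is estimated for a real scan, the gripping points precomputed on the CAD model transform with the same rigid motion (p' = Rp + t), which is what transforming the gripping information to the part's actual orientation amounts to.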
    The results show that generating the point cloud dataset automatically is roughly 20,000 times more efficient than scanning the parts manually one by one. In addition, when the system was validated on stacked parts, the gripping success rate for a single complex part reached up to 90%, and the system kept gripping parts effectively as different complex parts were presented and the number of parts increased.

Table of Contents:
    Abstract (Chinese)
    Abstract
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1: Introduction
      1.1 Research Motivation and Objectives
      1.2 Research Methods
      1.3 Thesis Organization
    Chapter 2: Automated Generation of Candidate Gripping Point Pairs from the Part's 3D CAD Model
      2.1 Basic Concepts of Part Gripping
      2.2 Searching for Candidate Gripping Point Pairs
        2.2.1 Searching for Candidate Gripping Point Pairs with the Ray Method
        2.2.2 Controlling the Number of Ray Origins via Seeding Density
      2.3 Analyzing Gripping Point Pairs
        2.3.1 Eliminating Unsuitable Gripping Point Pairs
          2.3.1.1 Finding Faces with Insufficient Area
          2.3.1.2 Finding Faces on Concave Features
            2.3.1.2.1 Distinguishing Concave and Convex Features with the Loop Method
            2.3.1.2.2 Determining the Final Adjacent Faces
        2.3.2 Gripper Opening and Closing Limits
        2.3.3 Gripping Stability
        2.3.4 Collision Detection for Gripper Descent
        2.3.5 Collision Detection for Gripper Closing
      2.4 Downsampling the Gripping Point Pairs
      2.5 Outputting the Final Gripping Point Pair Information
    Chapter 3: Deep Learning for Point Cloud Registration
      3.1 Automated Generation of Point Cloud Datasets for Deep Learning
        3.1.1 Standard Point Clouds
        3.1.2 Generating Simulated Scanned Point Clouds for Deep Learning
      3.2 Training the Deep Learning Model for Point Cloud Registration
        3.2.1 GCNet
        3.2.2 Normalizing the Point Cloud Data
        3.2.3 Training the Deep Learning Model
      3.3 Training Results
      3.4 Point Cloud Registration on Real Scanned Point Clouds
    Chapter 4: Computing Part Gripping Information Based on Point Cloud Registration
      4.1 Robotic Arm Coordinate Systems
      4.2 Scene Reconstruction
      4.3 Computing Gripping Point Pairs for the Actual Scene
        4.3.1 Transforming the Gripping Point Pair Information
        4.3.2 Eliminating Gripper Interference with the Work Platform
        4.3.3 On-site Collision Detection for Gripper Descent and Closing
      4.4 Computing the Robotic Arm Rotation Angles
    Chapter 5: Experimental Validation
      5.1 Hardware
      5.2 Software Development Tools and System Environment
      5.3 System Workflow
      5.4 Validation Case 1
        5.4.1 Obtaining the Part's Gripping Information
        5.4.2 Scanning and Processing the Point Cloud Data
        5.4.3 Point Cloud Registration with the Deep Learning Model
        5.4.4 Scene Reconstruction Based on Point Cloud Registration
        5.4.5 Transforming Gripping Point Pair Information Based on Point Cloud Registration
        5.4.6 Gripping and Sorting the Parts
      5.5 Validation Case 2
      5.6 Results and Discussion
    Chapter 6: Conclusions and Future Research Directions
      6.1 Conclusions
      6.2 Future Research Directions
    References

