
Student: 徐沛宏 (Pei-Hong Xu)
Thesis title: Study of PVC T-Shaped Pipe Pose Estimation and Robot Grasping based on 3D Vision (基於3D視覺之PVC T型管姿態估測與機器人夾取研究)
Advisor: 蔡明忠 (Ming-Jong Tsai)
Committee members: 郭永麟 (Yong-Lin Kuo), 詹朝基 (Chao-Chi Chan), 楊棧雲 (Chan-Yun Yang)
Degree: Master
Department: Graduate Institute of Automation and Control, College of Engineering
Publication year: 2023
Graduating academic year: 111 (2022-2023)
Language: Chinese
Pages: 88
Keywords (Chinese): 3D視覺, 姿態估測, 機器人夾取, 影像特徵分析, PVC T型管
Keywords (English): 3D Vision, pose estimation, robot gripping, feature analysis, PVC T-shaped joint

Abstract (Chinese, translated): With the vigorous development of Industry 4.0, smart manufacturing has become an indispensable direction for every industry, and integrating robots with 3D vision has become an important research topic that can effectively reduce automation costs and improve production efficiency. Many industries have already adopted this technology, which can replace human labor to complete programmed actions quickly and accurately, but it is mostly applied to objects placed on a flat surface or with simple shapes; accurately detecting object positions and grasp targets in a stacked environment still poses many challenges for robotic grasping. This study proposes a robotic gripping system with a target-pose estimation algorithm for PVC T-shaped pipes, which have curved surfaces and a semi-symmetric shape. The system consists of a computer, a 3D camera, a six-axis robotic arm, and an electrically controlled parallel gripper. First, synchronized color and depth information of the target region is acquired with the 3D camera; object extraction and feature-point detection are then performed on the geometric features of the curved T-shaped pipe to obtain the gripping-point position, surface tilt angle, and principal-axis offset angle. Finally, the computed results are converted into robot control parameters and transmitted to the robot system to execute the target gripping action and fixed-point placement. The results show that the algorithm can accurately estimate the pose of the PVC T-shaped pipe and perform the gripping task. Tests of the target point's height value (Z) and of the horizontal coordinates (X, Y) at different heights show a maximum average error of 0.94 mm, and pose estimation at different angles shows a maximum average error of 1.85 degrees, so the target six-DOF gripping pose and correct placement can be achieved in practice. This type of six-axis pose estimation and gripping algorithm can also be applied to other parts, such as plumbing hardware, by adjusting the algorithm to the target object's features; it can then be deployed on factory production lines to reduce material-arrangement time, space, and human resources, effectively lowering production costs.


With the flourishing development of Industry 4.0 and smart manufacturing, integrating robotics and machine vision has become an essential direction for various industries and a significant research focus, and many industries have already adopted this technology. It can replace manual labor to perform programmed tasks rapidly and accurately. However, accurately detecting object positions and gripping complex objects in a stacked environment still poses numerous challenges for robot grasping. This study presents a robotic gripping system with a target-object pose estimation algorithm for a PVC T-shaped joint, which has a curved surface and a semi-symmetric shape. The system consists of a PC, a 3D camera, and a six-axis robotic arm with an electronically controlled parallel gripper. The synchronized color and depth information of the target region is obtained using the 3D camera. Object extraction and feature-point detection techniques are then applied to obtain the gripping position and robot gripper pose based on the geometric features. Finally, the computational results are converted into robot control parameters, which are transmitted to the robot system to execute the desired gripping action and fixed-point placement. The experimental results indicate that this algorithm can accurately estimate the pose of the PVC T-shaped joint and perform gripping tasks. Tests of the target point's height value (Z) and horizontal coordinates (X, Y) at different heights reveal a maximum average error of 0.94 mm, and the maximum average error in pose estimation at different angles is 1.85 degrees, achieving the target six-DOF gripping pose and correct placement. The algorithm can also be applied to other components, such as plumbing hardware, by adjusting it to the target object's characteristics. This can be employed in factory production lines to reduce material-handling time, space, and labor, effectively lowering production costs.
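The conversion of the detected gripping point from image pixels into camera-frame coordinates, which the abstract describes as part of computing robot control parameters, can be sketched with the standard pinhole model. This is a generic illustration, not the thesis's algorithm; the intrinsic values `FX, FY, CX, CY` and the function name `deproject` are placeholders, not the calibrated parameters of the RealSense D415 used in the study.

```python
import numpy as np

# Hypothetical pinhole intrinsics (placeholder values, not the
# thesis's calibrated RealSense D415 parameters).
FX, FY = 615.0, 615.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point in pixels

def deproject(u, v, depth_mm):
    """Map a pixel (u, v) with measured depth (mm) to a camera-frame
    XYZ point (mm) using the standard pinhole camera model."""
    z = float(depth_mm)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# A pixel at the principal point lies on the optical axis:
p = deproject(320, 240, 500.0)   # x = 0, y = 0, z = 500
```

In a full pipeline such a camera-frame point would still need an extrinsic (hand-eye) transform into the robot's workpiece frame, which the thesis covers separately in its coordinate-transformation sections.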

Acknowledgments
Abstract (Chinese)
Abstract
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Research Background
  1.2 Research Motivation and Objectives
  1.3 Research Methods
  1.4 Thesis Organization
Chapter 2 Literature Review and Related Technologies
  2.1 Applications and Development of Intelligent Robotic Arms
  2.2 3D Vision
  2.3 Image Segmentation
  2.4 Object Pose Estimation and Grasp Planning
Chapter 3 System Architecture
  3.1 System Architecture and Workflow
  3.2 3D Vision System
    3.2.1 Intel RealSense D415 3D Depth Camera
    3.2.2 Camera Calibration
    3.2.3 3D Vision Imaging Techniques
    3.2.4 Depth Field of View and Depth Images
  3.3 Robot System
    3.3.1 Six-Axis Robotic Arm
    3.3.2 Electrically Controlled Two-Finger Gripper
  3.4 Software Development Environment
    3.4.1 Image Acquisition and Feature Detection
    3.4.2 Robotic Arm Development Software
    3.4.3 Gripper Control Commands
Chapter 4 Robot System with Six-Axis Pose Estimation and Gripping Algorithm
  4.1 Image Processing and Feature Detection
  4.2 Vision System Coordinate Transformation
    4.2.1 Pixel Coordinates to Vision Coordinates
    4.2.2 Vision Coordinates to Workpiece Coordinates
  4.3 Six-Axis Pose Estimation of the Target Object
  4.4 Robot Gripping Pose Planning
    4.4.1 Gripping Pose Calculation
    4.4.2 Gripping-Point Coordinate Transformation
Chapter 5 Experimental Results and Discussion
  5.1 Feature-Point Coordinate Analysis
  5.2 Analysis of Six-Axis Pose Estimation Results
    5.2.1 Target Object Extraction
    5.2.2 Pose Estimation
  5.3 Gripping Experiments and Discussion
    5.3.1 Single-Object Gripping Analysis
    5.3.2 Stacked-Object Estimation and Gripping Results
    5.3.3 Research Limitations and Gripping-Failure Discussion
Chapter 6 Conclusions and Future Work
  6.1 Conclusions
  6.2 Future Research Directions
References
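The principal-axis offset angle mentioned in the abstract (Section 4.3 of the outline) is not detailed in this record. One common generic technique for estimating the in-plane principal axis of a segmented object is PCA over the object's pixel coordinates: the eigenvector of the covariance matrix with the largest eigenvalue gives the dominant axis. The sketch below illustrates that generic approach only and does not reproduce the thesis's algorithm; the function name `principal_axis_angle` is hypothetical.

```python
import numpy as np

def principal_axis_angle(points):
    """Estimate the in-plane principal-axis angle (degrees, in [0, 180))
    of a 2D point set via PCA: the eigenvector of the covariance matrix
    with the largest eigenvalue is the dominant direction."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)                      # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]        # dominant direction
    # Fold into [0, 180): an axis has no sign, only an orientation.
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# Synthetic check: points along a line rotated 30 degrees from the x-axis.
t = np.linspace(-1.0, 1.0, 50)
pts = np.stack([t * np.cos(np.radians(30)), t * np.sin(np.radians(30))], axis=1)
angle = principal_axis_angle(pts)                 # close to 30
```

For a real segmentation mask the input would be the (x, y) coordinates of the object's foreground pixels; the same idea extends to 3D point clouds by taking the eigenvectors of the 3x3 covariance matrix.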


Full-text release date: 2026/08/23 (off-campus network)
Full-text release date: 2026/08/23 (National Central Library: Taiwan Thesis and Dissertation System)