
Graduate Student: 林宇宸 (Yu-Chen Lin)
Thesis Title: Study of 3D Vision and Pose Estimation Algorithm for Robotic Random Picking System (基於3D視覺與姿態估測演算法之機器人隨機抓取系統)
Advisor: 蔡明忠 (Ming-Jong Tsai)
Committee Members: 李敏凡 (Min-Fan Lee), 郭永麟 (Yong-Lin Kuo), 李振豪 (Chen-Hao Li)
Degree: Master
Department: Graduate Institute of Automation and Control, College of Engineering
Publication Year: 2021
Graduation Academic Year: 109 (ROC calendar)
Language: Chinese
Number of Pages: 95
Chinese Keywords: 姿態估測 (Pose Estimation), 3D視覺 (3D Vision), 機器人 (Robot), 觸覺感測 (Tactile Sensing), 隨機取放 (Random Bin Picking)
English Keywords: Pose Estimation, 3D Vision, Robot, Tactile Sensing, Random Bin Picking
With the rapid development of Industry 4.0 and smart manufacturing, the combination of machine vision and motion control systems has become a focus of related technologies and applications. Many food businesses and convenience stores have gradually introduced robotic automation to assist store operations and improve consumer convenience. This widespread adoption means that robotic grasping technology faces many challenges involving grasping angles and obstacles.
This study proposes a robotic picking system based on an object grasping pose estimation algorithm. The system estimates the best grasping pose of objects randomly placed in a picking bin and then grasps them, simulating the random pick-and-place shelving workflow of an unmanned convenience store. A 3D vision system is developed with a stereo camera and combined with a six-axis robot equipped with an electric gripper and tactile sensors. First, tactile sensing is performed on products of different stiffness to determine suitable gripper force settings. Next, the depth camera estimates the position and orientation coordinates of the target object, and these vision coordinates are transformed into robot coordinates for use by the robot system. Finally, merchandise models are used to test various orientation angles and repeatability.
The results show that the object localization accuracy is ±0.1 mm, the average depth error of target localization is below 1 mm, the measurement accuracy over ten repetitions at each of three tilt angles reaches 95%, and the overall grasping success rate exceeds 90%. For merchandise of this kind, applying the algorithm delivers the benefits of pose estimation while saving substantial setup time, and the proposed robotic system can effectively complete the merchandise picking process.
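As a rough illustration of the pixel-to-coordinate step summarized above, the following minimal Python sketch deprojects a detected object-center pixel and its depth reading into a 3D point in the camera frame using the standard pinhole model. The intrinsic values (FX, FY, CX, CY) are placeholder assumptions for illustration, not the calibrated parameters of the thesis system.

```python
import numpy as np

# Placeholder intrinsics (assumed 640x480 image); real values would come
# from camera calibration or the RealSense SDK, not from the thesis.
FX, FY = 600.0, 600.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point in pixels (assumed)

def deproject(u, v, depth_mm):
    """Map pixel (u, v) with a depth reading (mm) to a 3D camera-frame point."""
    z = depth_mm
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# Example: object center detected at pixel (350, 260) with 430 mm depth.
point_cam = deproject(350, 260, 430.0)
print(point_cam)  # [x, y, z] in mm, camera frame
```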


With the rapid development of Industry 4.0 and smart manufacturing, the combination of machine vision and motion control systems has become a mainstream technology in smart manufacturing applications. Because of the wide variety of industrial applications, object grasping technology faces challenges such as viewing angles and occlusion.
This study presents a robotic random bin picking system based on an object grasping pose estimation algorithm. The system aims to estimate the best grasping pose of objects in a random bin, and is composed of a 3D vision system and a six-axis robot equipped with an electric gripper and tactile sensors. The integrated system can be divided into several parts. First, tactile sensing is conducted to determine the appropriate gripper force. Second, the position and orientation coordinates of the object are estimated using a depth camera (Intel RealSense D415); the vision coordinates are then mapped into the robot's coordinates and transmitted to the robot system. Finally, several merchandise models are used for pose and repeatability verification.
The experimental results indicate that the object detection accuracy is ±0.1 mm, and the average error of the target depth is less than 1 mm at a maximum depth of 440 mm. Over ten repeated measurements at each of three different angles, the measurement accuracy rate is over 95%, and the grasping success rate is over 90%. With the proposed pose estimation algorithm, the robotic system can precisely perform the grasping process with the best pose.
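The vision-to-robot coordinate mapping described above is commonly expressed as a 4x4 homogeneous transform obtained from hand-eye calibration. The sketch below applies such a transform to a point measured in the camera frame; the matrix values here are hypothetical stand-ins, not the calibration result reported in this thesis.

```python
import numpy as np

# Hypothetical camera-to-robot-base transform from hand-eye calibration.
# The rotation and translation (mm) values are illustrative only.
T_BASE_CAM = np.array([
    [ 0.0, -1.0,  0.0, 400.0],
    [-1.0,  0.0,  0.0,  50.0],
    [ 0.0,  0.0, -1.0, 600.0],
    [ 0.0,  0.0,  0.0,   1.0],
])

def camera_to_robot(p_cam_mm):
    """Transform a 3D point from the camera frame to the robot base frame."""
    p_h = np.append(p_cam_mm, 1.0)   # homogeneous coordinates
    return (T_BASE_CAM @ p_h)[:3]    # back to Cartesian (mm)

# Example: grasp point measured by the vision system (mm, camera frame).
print(camera_to_robot(np.array([30.0, 20.0, 430.0])))
```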

Table of Contents
Acknowledgments I
Abstract (Chinese) II
Abstract (English) III
Table of Contents IV
List of Figures VII
List of Tables X
Chapter 1 Introduction 1
  1.1 Research Background 1
  1.2 Research Motivation and Objectives 2
  1.3 Research Methods 3
  1.4 Thesis Organization 4
Chapter 2 Literature Review 5
  2.1 Object Pose Estimation 6
  2.2 Robotic Bin Picking Systems 9
Chapter 3 System Architecture 12
  3.1 System Architecture and Workflow 12
  3.2 System Hardware 15
    3.2.1 3D Depth Camera 15
    3.2.2 Robot System 16
    3.2.3 Tactile Sensing System 18
  3.3 System Software 19
    3.3.1 Image Processing 19
    3.3.2 Robot System Software 20
    3.3.3 Gripper Control 20
  3.4 Camera Imaging and Depth Technology 21
    3.4.1 Camera Calibration 21
    3.4.2 Camera Imaging 21
    3.4.3 Depth Field of View and Invalid Depth Band 22
    3.4.4 Depth Technology 23
Chapter 4 Robot System Based on an Optimal Grasping Coordinate Algorithm 25
  4.1 Image Processing and Contour Generation 26
  4.2 Target Localization and Detection 32
    4.2.1 Center Point Calculation 32
    4.2.2 Depth Value Calculation (Pixel-to-Vision Coordinate Conversion) 32
  4.3 Object Surface Normal Vector Calculation 36
  4.4 Grasping Pose Estimation 39
  4.5 Intelligent Robotic Grasping System 48
    4.5.1 Preset Gripping Force 48
    4.5.2 Gripper Feedback 48
Chapter 5 Experimental Results and Discussion 49
  5.1 Target Localization and Detection Analysis 49
  5.2 Surface Tilt Angle Algorithm Results 52
  5.3 Tactile Sensing and Gripper Feedback Analysis 58
  5.4 Object Grasping Results Analysis 61
    5.4.1 Single-Object Grasping 61
    5.4.2 Single-Object Analysis at Different Orientations (Tilt Angles) 64
    5.4.3 Recognition and Grasping Results for Simulated Stacked Merchandise 67
    5.4.4 Discussion of Grasping Failures and Research Limitations 71
Chapter 6 Conclusions and Future Research Directions 74
  6.1 Conclusions 74
  6.2 Future Research Directions 75
References 76


Full text release date: 2024/08/27 (campus network)
Full text release date: 2024/08/27 (off-campus network)
Full text release date: 2024/08/27 (National Central Library: Taiwan thesis and dissertation system)