
Author: 梁耕豪 (Keng-Hao Liang)
Title: 基於RGB-D影像之三維物體抓取六軸機器手臂
(RGB-D Image Based 3D Object Grasping for a 6-Axis Robot Arm)
Advisor: 施慶隆 (Ching-Long Shih)
Committee members: 李文猶 (Wen-Yo Lee), 王乃堅 (Nai-Jian Wang), 吳修明 (Hsiu-Ming Wu)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of publication: 2022
Academic year of graduation: 110 (2021-2022)
Language: Chinese
Pages: 71
Chinese keywords: 六軸機器臂, 影像伺服控制, 機器學習, 三維物體定位, 手眼標定法
English keywords: Robot Arm, Image Servo Control, Machine Learning, 3D Object Localization, Hand-Eye Calibration
    This thesis applies a color-space classification method and an infrared depth camera, together with a six-axis robot arm, to realize a robot that evaluates and grasps a three-dimensional target object from an arbitrary viewpoint. First, semantic segmentation is performed on the three-channel color image provided by the color camera to separate the target object from the background. Next, the object contour in the image is analyzed to obtain candidate grasp line segments that determine the grasping direction; the depth camera and the camera intrinsic parameters then convert the image information into a spatial point cloud for 3D grasp pose evaluation. Finally, the forward/inverse kinematics of the six-axis robot arm yields the joint angles that move the end-effector to the grasping position of the target object, and T-curve point-to-point speed control is added so that the arm does not jerk while moving. Another focus of this thesis is arbitrary-viewpoint 3D grasping: the relationship between the target object's grasp pose and the mounting position of the RGB-D camera is expressed as a homogeneous transformation matrix, and the homogeneous transformations between the camera mounting position and the end-effector, and between the end-effector position and the robot arm base, are computed. This yields the relationship between the robot arm base and the target object, completing arbitrary-viewpoint 3D grasping. Finally, the six-axis robot arm is placed at different observation viewpoints to identify the target object's grasp pose, and the end-effector moves to the identified grasp pose to complete the grasp.
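The back-projection step described above (converting an image pixel plus its measured depth into a 3D point using the camera intrinsic parameters) follows the standard pinhole model. A minimal sketch; the intrinsic values below are illustrative placeholders, not the thesis's calibrated parameters:

```python
import numpy as np

def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (metres) into a 3D point
    in the camera frame via the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics for illustration only.
p = pixel_to_point(400, 300, 0.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Applying this to every segmented foreground pixel yields the point cloud used for the 3D grasp pose evaluation.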


    This thesis implements a 3D grasping robot that uses an infrared depth camera and color-space image classification to evaluate target objects from any viewpoint of a six-axis robot arm. First, the three-channel color image provided by an RGB camera is segmented to extract object edges, i.e. the pixels lying on the boundary between the foreground objects and the background. Second, candidate grasping points are obtained from the object contour and transformed into a point cloud using the camera intrinsic parameters and the depth measurements. Third, the point cloud data are used to determine a 3D object grasping position and orientation. Finally, the robot arm motion is planned with forward/inverse kinematics so that the end-effector reaches the object grasping position and orientation; in addition, a T-curve speed-control law is used to avoid jerk while the arm moves. The main focus of this thesis is arbitrary-viewpoint 3D grasping. We represent the relationship between the object grasping configuration and the RGB-D camera mounting position by a homogeneous transformation matrix, and likewise compute the homogeneous transformations between the camera mounting position and the end-effector configuration, and between the end-effector configuration and the robot arm base. Consequently, arbitrary-viewpoint 3D grasping is achieved via the resulting relationship between the robot arm base and the target object's grasping position. Finally, the robot arm is set up at different viewpoints to estimate the 3D object grasp, and the end-effector is commanded to reach the target object grasping position.
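The arbitrary-viewpoint grasping described above chains homogeneous transformations from the robot base, through the end-effector and the camera, to the object. A minimal sketch of that composition; the rotation and translation values are placeholders standing in for the thesis's calibrated transforms:

```python
import numpy as np

def homogeneous(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Chain: base -> end-effector -> camera -> object.
# All numeric values below are illustrative placeholders.
T_base_ee = homogeneous(np.eye(3), [0.3, 0.0, 0.5])    # from forward kinematics
T_ee_cam  = homogeneous(np.eye(3), [0.0, 0.05, 0.02])  # from hand-eye calibration
T_cam_obj = homogeneous(np.eye(3), [0.1, 0.0, 0.4])    # from RGB-D pose estimation

# Object grasp pose expressed in the robot base frame:
T_base_obj = T_base_ee @ T_ee_cam @ T_cam_obj
```

Because `T_base_ee` is recomputed from the joint angles at every viewpoint, the same chain recovers the object pose in the base frame regardless of where the camera observes from.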

    Contents
    摘要 (Chinese Abstract) ... I
    Abstract ... II
    Contents ... III
    List of Figures ... V
    List of Tables ... VIII
    Chapter 1  Introduction ... 1
        Motivation and Objectives ... 1
        Literature Review ... 1
        Thesis Outline ... 3
    Chapter 2  System Architecture and Control Flow ... 4
        System Architecture ... 4
        Hardware ... 5
        System Overview ... 8
    Chapter 3  Six-Axis Robot Arm Control ... 12
        Forward Kinematics ... 14
            3.1.1 Homogeneous Transformation Matrix ... 14
            3.1.2 DH Table ... 15
            3.1.3 Four-Bar Linkage ... 18
        Inverse Kinematics ... 19
        Point-to-Point Motion Control ... 20
        Hand-Eye Calibration ... 23
    Chapter 4  Object Pose and Position Estimation ... 26
        Mask R-CNN ... 28
        2D Grasp Point Detection ... 29
        Grasp Width Measurement ... 32
            4.3.1 Camera Model ... 32
        3D Target Object Grasp Pose and Position Estimation ... 34
    Chapter 5  Experimental Results and Discussion ... 37
        Six-Axis Robot Arm Control ... 38
            5.1.1 Forward Kinematics ... 38
            5.1.2 Inverse Kinematics ... 41
            5.1.3 Point-to-Point Motion Control ... 42
        Robot Arm and Camera Calibration ... 43
            5.2.1 Camera Calibration ... 43
            5.2.2 Hand-Eye Calibration ... 45
        2D Image Grasp Position Evaluation ... 47
            5.3.1 Foreground/Background Segmentation ... 47
            5.3.2 Grasp Point Detection ... 49
        3D Object Position and Pose Evaluation ... 51
            5.4.1 3D Image Reconstruction ... 51
            5.4.2 3D Target Object Grasp Position and Pose Evaluation ... 54
    Chapter 6  Conclusions and Suggestions ... 58
        Conclusions ... 58
        Suggestions ... 59
    References ... 61
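The point-to-point motion control listed above uses a T-curve speed law to avoid jerk at the start and end of a move. Assuming this refers to the common trapezoidal velocity profile (a ramp up, a constant cruise, and a symmetric ramp down), a minimal sketch:

```python
def t_curve_velocity(t, v_max, a, t_total):
    """Trapezoidal (T-curve) speed profile: accelerate at rate a to v_max,
    cruise, then decelerate symmetrically, bounding the velocity slope."""
    t_acc = v_max / a                 # duration of the acceleration ramp
    if t < 0.0 or t > t_total:
        return 0.0                    # outside the move: at rest
    if t < t_acc:
        return a * t                  # acceleration phase
    if t > t_total - t_acc:
        return a * (t_total - t)      # deceleration phase
    return v_max                      # constant-speed cruise phase

# Example: a 3-second move with v_max = 1.0 and acceleration 2.0.
v_mid = t_curve_velocity(1.5, v_max=1.0, a=2.0, t_total=3.0)
```

Sampling this profile at the controller rate and integrating it gives the joint-angle setpoints for each step of the point-to-point move.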

    [1] D. Constantin, M. Lupoae, C. Baciu, and D.-I. Buliga, "Forward Kinematic Analysis of an Industrial Robot," in Proceedings of the International Conference on Mechanical Engineering (ME 2015), 2015, pp. 90-95.
    [2] R. S. Hartenberg and J. Denavit, Kinematic Synthesis of Linkages. New York: McGraw-Hill, 1964.
    [3] G. Antonelli, S. Chiaverini, M. Palladino, G. P. Gerio, and G. Renga, "Joint Space Point-to-Point Motion Planning for Robots: An Industrial Implementation," IFAC Proceedings Volumes, vol. 38, no. 1, pp. 187-192, 2005.
    [4] J. Denavit and R. S. Hartenberg, "A Kinematic Notation for Lower-pair Mechanisms Based on Matrices," ASME Journal of Applied Mechanics, vol. 22, pp. 215-221, 1955.
    [5] G. Du, K. Wang, S. Lian, and K. Zhao, "Vision-based Robotic Grasping from Object Localization, Object Pose Estimation to Grasp Estimation for Parallel Grippers: A Review," Artificial Intelligence Review, vol. 54, no. 3, pp. 1677-1734, 2021.
    [6] H. Shin, H. Hwang, H. Yoon, and S. Lee, "Integration of Deep Learning-Based Object Recognition and Robot Manipulator for Grasping Objects," in 2019 16th International Conference on Ubiquitous Robots (UR), 2019, pp. 174-178.
    [7] L. Chen, P. Huang, Y. Li, and Z. Meng, "Edge-Dependent Efficient Grasp Rectangle Search in Robotic Grasp Detection," IEEE/ASME Transactions on Mechatronics, vol. 26, no. 6, pp. 2922-2931, 2021.
    [8] Z. Zhang, "Flexible Camera Calibration by Viewing a Plane from Unknown Orientations," in Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, vol. 1, pp. 666-673.
    [9] R. Y. Tsai and R. K. Lenz, "A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/eye Calibration," IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 345-358, 1989.
    [10] I. Lenz, H. Lee, and A. Saxena, "Deep Learning for Detecting Robotic Grasps," The International Journal of Robotics Research, vol. 34, no. 4-5, pp. 705-724, 2015.
    [11] J. Redmon and A. Angelova, "Real-time Grasp Detection Using Convolutional Neural Networks," in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 1316-1322.
    [12] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2961-2969.
    [13] A. M. Hafiz and G. M. Bhat, "A Survey on Instance Segmentation: State of the Art," International Journal of Multimedia Information Retrieval, vol. 9, no. 3, pp. 171-189, 2020.
    [14] S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez, "Automatic Generation and Detection of Highly Reliable Fiducial Markers Under Occlusion," Pattern Recognition, vol. 47, no. 6, pp. 2280-2292, 2014.
    [15] T. Tsujibayashi, K. Inoue, and M. Yoshioka, "Normal Estimation of Surface in PointCloud Data for 3D Parts Segmentation," in 2018 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET), 2018, pp. 1-5.
    [16] 施慶隆 (C.-L. Shih) and 李文猶 (W.-Y. Lee), 機電整合控制: 多軸運動設計與應用 (Mechatronics Integrated Control: Multi-Axis Motion Design and Applications). 全華圖書 (Chuan Hwa Book), 2015.
