
Graduate Student: Sergio Omar Chong Lugo
Thesis Title: 3D Vision Based Robot Manipulator Timed Trajectory Generation System
Advisor: Chyi-Yeu Lin
Committee Members: Shih-Hsuan Chiu, Chin-Shyurng Fahn
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Publication Year: 2018
Graduation Academic Year: 106 (ROC calendar)
Language: English
Pages: 70
Keywords: world/robot calibration, inverse kinematics, image processing, point cloud processing, SVD algorithm



This thesis proposes a method for anticipating the motion of a commercial articulated robot arm using multiple 3D cameras, in this case two Kinect v2 devices that sense the robot arm simultaneously. The 3D cameras retrieve color and depth information in the form of organized point clouds. Because each Kinect v2 device sits at an unknown position and orientation relative to the robot arm base, a robot-world calibration method based on QR markers and the Singular Value Decomposition (SVD) algorithm is applied to each device. This calibration yields a single common coordinate system for the robot arm and all 3D cameras.

The robot arm used in this work cannot report its end-effector pose and joint angles to a personal computer during online operation. Instead, an offline approach based on image processing, point cloud processing, and the SVD algorithm solves this problem. In this approach, an attachment holding four small colored balls is mounted on the robot arm flange. While the robot moves, the Kinect v2 devices record time-stamped organized point cloud frames of the robot arm and its environment. Each recorded frame contains the 3D and RGB information of the colored balls; image and point cloud processing locates the center of each ball in 3D space, producing a set of four 3D points. From this set of points, the SVD algorithm recovers the end-effector pose relative to the robot base coordinate system at the time each frame was recorded.

The joint angles for each end-effector pose are computed by inverse kinematics using the V-REP External Inverse Kinematics library. The resulting data form a time history of joint angles, and joint angles between recorded frames are obtained by interpolation. With this time history, the Kinect v2 devices can sense the robot arm's motion online.
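The pose-recovery step described above — finding the rigid transform that maps the tool's known ball-center layout onto the measured ball centers — is commonly solved in closed form with the SVD (the Kabsch method). The sketch below is an illustrative Python/NumPy implementation of that technique, not the thesis code; the point coordinates, frame times, and joint values are made-up examples.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Closed-form least-squares fit of R, t such that dst ≈ R @ src + t.

    src, dst: 3xN arrays of matched 3D points (here, the four ball
    centers in the tool frame and as measured in the world frame).
    """
    src_c = src.mean(axis=1, keepdims=True)   # centroids
    dst_c = dst.mean(axis=1, keepdims=True)
    H = (src - src_c) @ (dst - dst_c).T       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Joint angles between recorded frames can then be filled in by
# per-joint linear interpolation over the frame timestamps:
frame_times = np.array([0.0, 0.5, 1.0])      # seconds (example values)
joint1_angles = np.array([0.0, 0.3, 0.8])    # radians at each frame
angle_at_query = np.interp(0.75, frame_times, joint1_angles)
```

The SVD fit is least-squares optimal over the matched points, so with noisy ball-center measurements it still returns the best rigid pose; placing the four balls non-coplanarly keeps the cross-covariance well-conditioned.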

Master's Thesis Recommendation Form
Qualification Form by Master's Degree Examination Committee
Acknowledgement
Abstract
Abstract (Chinese)
Contents
List of Figures
List of Tables
Chapter 1 - Introduction
  1.1 Problem Formulation
  1.2 Research Purpose
  1.3 Writing Scheme
Chapter 2 - Background
  2.1 Articulated Robotic Arm
    2.1.1 Robot Modelling in V-REP Robot Simulator
    2.1.2 Forward Kinematics
    2.1.3 V-REP External Inverse Kinematics Library
  2.2 3D Vision Cameras
    2.2.1 Passive Stereo Camera
    2.2.2 LiDAR Scanner/Pulsed Time of Flight (ToF) Camera
    2.2.3 Continuous Wave Time of Flight (ToF) Camera
    2.2.4 Structured Light Camera
  2.3 Robot/World Calibration
  2.4 ArUco Markers
  2.5 SVD Algorithm
Chapter 3 - Methodology
  3.1 Multiple Kinect V2/Robot Extrinsic Calibration
    3.1.1 Environment Setup
    3.1.2 ArUco Calibration Board
    3.1.3 QR Markers' Position Estimation in 3D Space
    3.1.4 SVD Algorithm Solution
    3.1.5 Implementation
  3.2 Robot Operation Time Measurement
  3.3 Colored Balls Attachment for End-Effector Pose Estimation
  3.4 Multiple Kinect Time Measured Point Cloud Acquisition
  3.5 Offline End-Effector Pose Estimation
    3.5.1 RGB Information Extraction from Point Cloud
    3.5.2 3D Data Acquisition from Colored Balls
    3.5.3 3D Center Ball Estimation in Point Cloud Space
    3.5.4 Arrangement of Ball Sets
    3.5.5 End-Effector Pose Estimation
  3.6 Time History Generation of Joint Angles
Chapter 4 - Experimental Results
  4.1 V-REP External Inverse Kinematics Library Accuracy Testing
    4.1.1 Setup
    4.1.2 Results
  4.2 Offline End-Effector Pose Estimation Testing
    4.2.1 Setup
    4.2.2 Results
  4.3 Time History Testing
    4.3.1 Setup
    4.3.2 Results
Chapter 5 - Conclusions and Future Work
  5.1 Conclusions
  5.2 Future Work
Bibliography

