
Student: 劉尚欣 (Shang-Hsin Liu)
Thesis Title: 應用三維影像於眼在手架構機械手臂之物件辨識與夾取系統
              Applying 3D Images to Object Recognition and Fetching Systems of Eye-in-hand Robotic Manipulators
Advisor: 徐勝均 (Sheng-Dong Xu)
Committee Members: 陳金聖 (Chin-Sheng Chen), 柯正浩 (Cheng-Hao Ko)
Degree: Master
Department: Graduate Institute of Automation and Control, College of Engineering
Publication Year: 2017
Graduation Academic Year: 105
Language: Chinese
Pages: 97
Keywords (Chinese): Kinect、OpenCV、點雲、物件辨識、機械手臂
Keywords (English): Kinect, OpenCV, Point cloud, Object recognition, Mechanical arm
  • This thesis investigates an object recognition and fetching system that applies 3D images to an eye-in-hand robotic manipulator, so that the manipulator can automatically grasp target objects.
    For the system architecture, a 3D ToF (time-of-flight) depth sensor, the Kinect for Windows V2, is first used to acquire 2D image data and 3D point cloud data. It is paired with a Staubli TX60L six-degree-of-freedom industrial robotic arm; a Robotiq two-finger gripper and the Kinect are mounted on the end of the arm, so that the development platform's hardware follows an eye-in-hand configuration.
    In the experiments on object recognition and pose determination, two scenarios are considered: (1) objects placed separately and (2) objects stacked on top of one another. In the first experiment, the speeded-up robust features (SURF) algorithm first determines whether the target object is in the working area; if it is, the viewpoint feature histogram (VFH) descriptor is used to compute the object's pose, and the result serves as the basis for the manipulator's path and trajectory planning. In the second experiment, the geometric consistency (GC) and Hough voting methods from the correspondence grouping algorithms determine whether the target object is present in the working area. Even when the target object is partially occluded by other objects, it can still be recognized as long as enough keypoints are matched. Once the target object is confirmed to be in the working area, the result again serves as the basis for the manipulator's path and trajectory planning.
    The experimental results verify the feasibility of the proposed system, which applies 3D images to object recognition and grasping with an eye-in-hand robotic manipulator.
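    As a rough illustration of the 2D detection step described in the abstract above (not code from the thesis), the following C++ sketch uses OpenCV's SURF implementation from the xfeatures2d contrib module to decide whether a reference object appears in a scene image. The file names, Hessian threshold, ratio-test value, and match-count criterion are illustrative assumptions.

        // surf_presence_check.cpp -- hedged sketch of a SURF-based presence test (assumed parameters).
        #include <opencv2/imgcodecs.hpp>
        #include <opencv2/features2d.hpp>
        #include <opencv2/xfeatures2d.hpp>   // SURF lives in the contrib module
        #include <iostream>
        #include <vector>

        int main()
        {
            // Reference image of the target object and the current camera view (hypothetical files).
            cv::Mat object = cv::imread("target_object.png", cv::IMREAD_GRAYSCALE);
            cv::Mat scene  = cv::imread("workspace_view.png", cv::IMREAD_GRAYSCALE);
            if (object.empty() || scene.empty()) return -1;

            // Detect SURF keypoints and compute descriptors in both images.
            cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400.0);  // Hessian threshold (assumed)
            std::vector<cv::KeyPoint> kpObject, kpScene;
            cv::Mat descObject, descScene;
            surf->detectAndCompute(object, cv::noArray(), kpObject, descObject);
            surf->detectAndCompute(scene,  cv::noArray(), kpScene,  descScene);

            // Match descriptors with FLANN and keep matches that pass Lowe's ratio test.
            cv::FlannBasedMatcher matcher;
            std::vector<std::vector<cv::DMatch>> knnMatches;
            matcher.knnMatch(descObject, descScene, knnMatches, 2);
            int goodMatches = 0;
            for (const auto& m : knnMatches)
                if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
                    ++goodMatches;

            // Presence decision: enough consistent matches suggests the target is in the working area.
            std::cout << (goodMatches > 20 ? "target present" : "target not found")
                      << " (" << goodMatches << " good matches)\n";
            return 0;
        }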


    In this thesis, we discuss applying 3D images to the object recognition and fetching system of an eye-in-hand robotic manipulator, so that the manipulator can automatically fetch target objects.
    In the developed system, we first adopt a 3D ToF (time-of-flight) depth sensor, the Kinect for Windows V2, to acquire 2D image data and 3D point cloud data. A six-DOF (degrees-of-freedom) industrial robotic arm, with a two-finger gripper and the Kinect mounted at its end in an eye-in-hand configuration, then serves as the hardware of the development platform.
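    The point-cloud side of this setup can be pictured with a short PCL sketch (again an assumption-laden illustration, not the thesis's code): it loads a cloud that is presumed to have been captured from the Kinect V2 and saved as a PCD file, crops it to an assumed working-area depth range with a pass-through filter, and downsamples it with a voxel grid before any feature computation.

        // preprocess_cloud.cpp -- hedged PCL preprocessing sketch (file name and filter limits are assumed).
        #include <pcl/io/pcd_io.h>
        #include <pcl/point_types.h>
        #include <pcl/filters/passthrough.h>
        #include <pcl/filters/voxel_grid.h>
        #include <iostream>

        int main()
        {
            pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
            pcl::PointCloud<pcl::PointXYZ>::Ptr cropped(new pcl::PointCloud<pcl::PointXYZ>);
            pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);

            // Hypothetical file exported from the Kinect for Windows V2 capture step.
            if (pcl::io::loadPCDFile<pcl::PointXYZ>("kinect_scene.pcd", *cloud) < 0)
                return -1;

            // Keep only points within an assumed working-area depth range (metres along the camera z-axis).
            pcl::PassThrough<pcl::PointXYZ> pass;
            pass.setInputCloud(cloud);
            pass.setFilterFieldName("z");
            pass.setFilterLimits(0.5, 1.5);
            pass.filter(*cropped);

            // Voxel-grid downsampling reduces the point count before segmentation and feature description.
            pcl::VoxelGrid<pcl::PointXYZ> voxel;
            voxel.setInputCloud(cropped);
            voxel.setLeafSize(0.005f, 0.005f, 0.005f);   // 5 mm leaf size (assumed)
            voxel.filter(*downsampled);

            std::cout << cloud->size() << " points in, " << downsampled->size() << " points out\n";
            return 0;
        }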
    In the experiments on object identification and pose determination, we consider two cases: i) objects placed separately, and ii) objects stacked. In the first experiment, the SURF (Speeded Up Robust Features) algorithm is used to judge whether the target object is in the working area. If it is, the VFH (Viewpoint Feature Histogram) descriptor is adopted to determine the pose of the object. Finally, the calculation results serve as the basis of trajectory planning for the robot arm. In the second experiment, GC (Geometrical Consistency) and Hough voting, two correspondence grouping methods, are used to judge whether the target object is in the working area. Even when the target object in the working area is partially occluded by other objects, it can still be identified as long as enough keypoints are matched. Once the object is confirmed to be in the working area, the calculation results again serve as the basis of trajectory planning for the robot arm.
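    To make the pose-description step of the first experiment concrete, the following minimal PCL sketch computes a single VFH signature for an object cluster that is assumed to have already been segmented out of the scene; the input file and search radius are illustrative assumptions, not the thesis's settings. Matching the signature against stored reference views, and the GC / Hough-voting grouping used in the second experiment, follow the same PCL pipeline but are omitted here for brevity.

        // vfh_descriptor.cpp -- hedged sketch of VFH computation on a pre-segmented object cluster.
        #include <pcl/io/pcd_io.h>
        #include <pcl/point_types.h>
        #include <pcl/features/normal_3d.h>
        #include <pcl/features/vfh.h>
        #include <pcl/search/kdtree.h>
        #include <iostream>

        int main()
        {
            pcl::PointCloud<pcl::PointXYZ>::Ptr cluster(new pcl::PointCloud<pcl::PointXYZ>);
            // Hypothetical file holding one segmented object cluster.
            if (pcl::io::loadPCDFile<pcl::PointXYZ>("object_cluster.pcd", *cluster) < 0)
                return -1;

            // Surface normals are required before VFH can be estimated.
            pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
            pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
            pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
            ne.setInputCloud(cluster);
            ne.setSearchMethod(tree);
            ne.setRadiusSearch(0.01);   // 1 cm normal-estimation radius (assumed)
            ne.compute(*normals);

            // One global 308-bin VFH signature describes the cluster's shape and viewpoint.
            pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
            vfh.setInputCloud(cluster);
            vfh.setInputNormals(normals);
            vfh.setSearchMethod(tree);
            pcl::PointCloud<pcl::VFHSignature308>::Ptr signature(new pcl::PointCloud<pcl::VFHSignature308>);
            vfh.compute(*signature);

            std::cout << "VFH signatures computed: " << signature->size() << "\n";   // expect exactly 1
            return 0;
        }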
    Experiments show the feasibility of applying 3D images to object recognition and fetching systems of eye-in-hand robotic manipulators.

    Table of Contents
    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1  Preface
      1.2  Research Motivation
      1.3  Literature Review
      1.4  Thesis Organization
    Chapter 2  System Architecture
      2.1  System Concept and Setup
      2.2  Test Target Objects
      2.3  Procedure Description
      2.4  Key Modules
        2.4.1  3D Image Sensor
        2.4.2  Robotic Arm
        2.4.3  Gripper
      2.5  Development Environment
    Chapter 3  Robotic Arm
      3.1  Manipulator Kinematics
      3.2  Forward Kinematics of the Robotic Arm
      3.3  Open-Chain Design of the Robotic Arm
      3.4  Denavit-Hartenberg Convention
      3.5  Inverse Kinematics of the Robotic Arm
    Chapter 4  Object Recognition
      4.1  2D Feature Description
        4.1.1  Overview of the SURF (Speeded Up Robust Features) Algorithm
        4.1.2  Hessian Matrix
        4.1.3  Scale Space
        4.1.4  Keypoint Localization
        4.1.5  Orientation Assignment
        4.1.6  Keypoint Descriptor
      4.2  3D Feature Description
        4.2.1  Filtering
        4.2.2  Point Cloud Segmentation
        4.2.3  Feature Description
        4.2.4  Object Recognition in Cluttered Scenes with Severe Occlusion
        4.2.5  Data Structures
    Chapter 5  Experimental Results
      5.1  Robotic Arm Grasping System
      5.2  Scenario 1 (Separately Placed Objects): 2D Feature Description Results
      5.3  Scenario 1 (Separately Placed Objects): 3D Feature Description Results
      5.4  Scenario 2 (Stacked, Freely Arranged Objects): 3D Feature Description Results
    Chapter 6  Conclusions and Future Work
    References

