
Author: Yung-Fu Sun (孫永富)
Title: 使用低成本立體相機建立工業機器人立體視覺與應用於散裝零件夾取之研究
Study of Stereoscopic Vision of Industrial Robots Using a Low-Cost Stereo Camera for Random Bin Picking
Advisor: Ming-Jong Tsai (蔡明忠)
Committee: Ming-Jong Tsai (蔡明忠), Min-Fan Lee (李敏凡), Chih-Jer Lin (林志哲), Yu-Shiang Tsai (蔡裕祥)
Degree: Master
Department: Graduate Institute of Automation and Control, College of Engineering
Year of Publication: 2019
Academic Year of Graduation: 107 (2018–2019)
Language: Chinese
Number of Pages: 112
Chinese Keywords: 立體相機、散裝零件夾取、機器視覺、影像處理、立體視覺
English Keywords: Stereo camera, Random bin picking, Machine vision, Imaging system, Stereoscopic vision

In response to the growing shift toward high-mix, low-volume production, manufacturing systems have gradually evolved from flexible manufacturing systems into smart manufacturing systems. In the development of intelligent machinery, smart robots often play an indispensable role. The first step toward making today's industrial robots intelligent is to give them perceptual capabilities such as vision, touch, force sensing, and hearing; AI algorithms can then build on these senses to produce truly intelligent robots.
This study uses a low-cost consumer stereo camera, in place of an expensive industrial 3D vision sensor, to provide an industrial robot with stereoscopic vision, and simulates its application to random bin picking on an automated assembly line for die-cast parts. An improved Hough transform locates the pixel coordinates of each bulk part's centre; the stereo camera's depth data then yield the centre position in vision coordinates, which is converted into the user coordinate frame shared with the robot. The part's tilt angles about the X and Y axes are computed, together with the entry-point and arrival-point poses of the gripper's picking trajectory, and sent to the robot, which follows this trajectory to pick parts at random from the bin for the next manufacturing process.
The experiments determined the maximum part tilt angle that the stereo vision can recognize, and a recovery procedure was designed for cases where recognition fails. The robot correctly picks every part from the bin in the optimal order, demonstrating that a low-cost stereo camera can provide industrial robots with stereoscopic vision for random bin picking.


In order to respond to increasingly diverse, low-volume production models, production systems have gradually evolved from flexible manufacturing systems to smart manufacturing systems. In the field of smart machinery, intelligent robots often play an indispensable role. The first stage in making current industrial robots intelligent is to give them a variety of perceptual capabilities, such as vision, touch, force sensing, and hearing; AI algorithms can then turn them into smart robots.
This study uses an Intel RealSense low-cost depth camera in place of an expensive industrial 3D vision sensor to establish stereo vision for industrial robots, which might be used on a die-casting parts automated assembly line. An improved Hough transform finds the centre position of each bulk part. Using the depth data of the stereo camera, the space coordinates of the detected centre are computed and the rotation angles about the X and Y axes are calculated. The stereoscopic coordinates and orientation are then converted into industrial robot coordinates for the gripping operation, and the robot is notified of the entry point and arrival point of the picking trajectory. Random bin picking of parts from a bulk box can therefore be performed autonomously before the next manufacturing process.
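The coordinate pipeline described above (pixel plus depth, to camera-frame point, to robot user frame) amounts to standard pinhole back-projection followed by a homogeneous hand-eye transform. A minimal sketch, assuming known intrinsics (`fx`, `fy`, `cx`, `cy`) and a pre-calibrated 4×4 camera-to-robot matrix `T_cam_to_robot`; all names and values are illustrative, not the thesis's actual calibration:

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at the given depth -> camera-frame XYZ in metres."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def camera_to_robot(p_cam, T_cam_to_robot):
    """Map a camera-frame point into the robot user frame via a 4x4 homogeneous transform."""
    p_h = np.append(p_cam, 1.0)        # homogeneous coordinates [x, y, z, 1]
    return (T_cam_to_robot @ p_h)[:3]  # drop the homogeneous component
```

A pixel at the principal point deprojects onto the optical axis, so its camera-frame X and Y are zero and only the transform's translation and rotation move it into the robot frame.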
From the experimental results, the maximum recognizable part tilt angle of the stereo vision was determined. Remedial measures for the robot were also designed for cases where recognition fails. The robot can correctly pick each part from the bulk box in the best order, showing that low-cost stereo cameras can be used to establish stereoscopic vision for industrial robots performing random bin picking.

Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1  Research Background and Motivation
  1.2  Research Objectives and Methods
  1.3  Contributions
  1.4  Thesis Organization
Chapter 2  Literature Review and Technical Discussion
  2.1  Robotic Bin Picking
    2.1.1  Structured Bin Picking
    2.1.2  Random Bin Picking
  2.2  Stereo Cameras
    2.2.1  Stereoscopic Vision
    2.2.2  Camera Calibration
    2.2.3  Depth Computation
    2.2.4  Depth Resolution
  2.3  Common Mechanical Structures of Industrial Robots
  2.4  Six-Axis Vertically Articulated Robots
    2.4.1  Forward Kinematics
    2.4.2  Inverse Kinematics
    2.4.3  Singularities
Chapter 3  System Development Environment
  3.1  System Architecture and Workflow
  3.2  Bulk Parts and Gripper Design
  3.3  Stereo Camera
    3.3.1  Depth Field-of-View Calculation
    3.3.2  Invalid Depth Band Calculation
    3.3.3  Dynamic Calibration
    3.3.4  Depth RMS Error
    3.3.5  Minimum Depth Distance
    3.3.6  Texture Projector
  3.4  Robot System
    3.4.1  Coordinate Frame Settings
    3.4.2  Robot Gateway
  3.5  Communication System
    3.5.1  Modbus TCP Protocol
    3.5.2  Winsock Overview
    3.5.3  EtherNet/IP Protocol
Chapter 4  Stereo Image Processing
  4.1  Stereo Vision Coordinate Calculation
    4.1.1  Depth Image Alignment
    4.1.2  Depth Post-Processing Filters
    4.1.3  Pixel-to-Vision Coordinate Conversion
  4.2  Vision-to-Robot Coordinate Conversion
    4.2.1  Vision Coordinate Transformation
  4.3  Background Removal
    4.3.1  Building the Background Model
    4.3.2  Image Morphology
    4.3.3  Contour Finding
    4.3.4  Foreground Extraction
  4.4  Bulk Part Recognition
    4.4.1  Hough Circle Transform
    4.4.2  Circular Part Recognition
    4.4.3  Calculation of Robot Picking Path Start and End Points
Chapter 5  Experimental Results and Discussion
  5.1  Background Removal Results
  5.2  Bulk Part Recognition
    5.2.1  Recognition Performance
    5.2.2  Maximum Recognition Angle Test
    5.2.3  Parts Beyond the Maximum Detection Angle
    5.2.4  No-Part Detection
  5.3  Failure Modes and Handling
    5.3.1  Rearranging Parts
    5.3.2  Robot Self-Protection and Retry
Chapter 6  Conclusions and Future Work
  6.1  Conclusions
  6.2  Future Research Directions
References


Full text available (campus network): 2024/08/20
Full text available (off-campus network): 2024/08/20
Full text not authorized for public release (National Central Library: Taiwan NDLTD system)