Graduate Student: 孫永富 Yung-Fu Sun
Thesis Title: 使用低成本立體相機建立工業機器人立體視覺與應用於散裝零件夾取之研究 (Study of Stereoscopic Vision of Industrial Robots Using a Low-Cost Stereo Camera for Random Bin Picking)
Advisor: 蔡明忠 Ming-Jong Tsai
Oral Examination Committee: 蔡明忠 Ming-Jong Tsai, 李敏凡 Min-Fan Lee, 林志哲 Chih-Jer Lin, 蔡裕祥 Yu-Shiang Tsai
Degree: Master
Department: Graduate Institute of Automation and Control, College of Engineering
Year of Publication: 2019
Academic Year: 107 (ROC calendar)
Language: Chinese
Pages: 112
Keywords (Chinese): 立體相機、散裝零件夾取、機器視覺、影像處理、立體視覺
Keywords (English): Stereo camera, Random bin picking, Machine vision, Imaging system, Stereoscopic vision
To respond to the growing trend toward high-mix, low-volume production, production systems have gradually evolved from flexible manufacturing systems into smart manufacturing systems. In the development of intelligent machinery, intelligent robots often play an indispensable role. To make today's industrial robots intelligent, the first step is to give them various sensing capabilities, such as vision, touch, force, and hearing, and then apply AI algorithms to turn them into intelligent robots.
This study uses a low-cost consumer stereo camera, in place of an expensive industrial 3D vision sensor, to build stereoscopic vision for an industrial robot, and simulates its application to random bin picking on an automated assembly line for die-cast parts. An improved Hough transform locates the pixel coordinates of the center of each part in the bin; the stereo camera's depth data are then used to compute the vision coordinates of that center, which are converted into the user coordinate frame shared with the robot. The part's tilt angles about the X and Y axes and the orientations of the entry and arrival points of the gripper trajectory are then computed and sent to the robot, which follows this trajectory to pick parts at random from the bin for the next process step.
The experiments tested the maximum part tilt angle that the stereoscopic vision can recognize, and remedial measures were designed for the robot when recognition fails. The robot was able to pick every part from the bin correctly in an optimal order, demonstrating that a low-cost stereo camera can be used to build stereoscopic vision for an industrial robot and apply it to random bin picking.
To respond to increasingly diverse production models, production systems have gradually evolved from flexible manufacturing systems to smart manufacturing systems. In the development of smart machinery, intelligent robots often play an indispensable role. To make current industrial robots intelligent, the first stage is to give the robot a variety of sensing capabilities, such as vision, touch, force, and hearing; AI algorithms can then turn it into a smart robot.
This study uses an Intel RealSense low-cost depth camera in place of an expensive industrial 3D vision sensor to establish stereo vision for industrial robots, with a target application in assembly automation for die-cast parts. An improved Hough transform finds the center position of each bulk part in the image. Using the depth data of the stereo camera, the spatial coordinates of the detected center are computed, along with the rotation angles about the X and Y axes. The stereoscopic coordinates and orientation are then converted into the industrial robot's coordinate frame for the gripper operation, and the robot is notified of the entry point and arrival point of the picking trajectory. Random bin picking of parts from a bulk box can therefore be performed autonomously for the next manufacturing process.
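The center-detection step can be illustrated with a minimal classical Hough voting sketch in plain NumPy, assuming the part radius is known in advance; the thesis's improved Hough transform is not reproduced here, and all names are illustrative:

```python
import numpy as np

def hough_circle_center(edge_pts, radius, shape, step_deg=5.0):
    """Vote for the center of a circle of known radius from edge points.

    Each edge point votes for every candidate center lying at distance
    `radius` from it; the accumulator peak is the best-supported center.
    """
    acc = np.zeros(shape, dtype=int)                      # (rows, cols) vote accumulator
    thetas = np.deg2rad(np.arange(0.0, 360.0, step_deg))  # discretized directions
    for x, y in edge_pts:
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)               # cast votes inside the image
    iy, ix = np.unravel_index(np.argmax(acc), acc.shape)
    return ix, iy                                         # pixel (column, row) of the peak
```

For example, edge points sampled from a circle of radius 20 centered at (50, 40) in a 100x100 image produce an accumulator peak at that center.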
From the experimental results, the maximum part tilt angle recognizable by the stereo vision was determined, and remedial measures were designed for the robot for cases where recognition fails. The robot can correctly pick each part from the bulk box in the best order. Thus, low-cost stereo cameras can be applied to establish stereoscopic vision for industrial robots for random bin picking.
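The deprojection and coordinate-conversion steps described in the abstract can be sketched as follows, assuming a pinhole camera model with intrinsics (fx, fy, cx, cy) and a known 4x4 hand-eye calibration matrix `T_cam_to_user`; all names are hypothetical, and the thesis's actual calibration procedure is not shown:

```python
import numpy as np

def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with measured depth into camera coordinates."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def to_user_frame(p_cam, T_cam_to_user):
    """Map a camera-frame point into the robot's user coordinate frame."""
    p = T_cam_to_user @ np.append(p_cam, 1.0)   # homogeneous transform
    return p[:3]

def tilt_angles(normal):
    """Part-surface tilt about the X and Y axes, from its surface normal."""
    nx, ny, nz = np.asarray(normal, float) / np.linalg.norm(normal)
    rx = np.degrees(np.arctan2(ny, nz))          # rotation about X (sign convention assumed)
    ry = np.degrees(np.arctan2(nx, nz))          # rotation about Y (sign convention assumed)
    return rx, ry
```

A pixel at the principal point deprojects onto the optical axis, an identity hand-eye matrix leaves the point unchanged, and a normal of (0, 0, 1) gives zero tilt, which makes the sketch easy to sanity-check.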