
Graduate Student: Ting-Yuan Huang (黃鼎元)
Thesis Title: Study of 3D Color Vision and Smart Soft Grasping System Integration of Industrial Robots for Random Bin-Picking (整合工業機器人3D色彩視覺與智慧柔性夾取系統並應用於散裝物體夾取之研究)
Advisor: Ming-Jong Tsai (蔡明忠)
Committee Members: Yong-Lin Kuo (郭永麟), Yu-Hsiang Tsai (蔡裕祥), Cho-Pei Jiang (江卓培)
Degree: Master
Department: Graduate Institute of Automation and Control, College of Engineering
Year of Publication: 2020
Graduation Academic Year: 108 (2019-2020)
Language: Chinese
Pages: 115
Chinese Keywords: Machine Vision, 3D Vision, Image Processing, Tactile Sensing, Random Bin-Picking, Image Color Analysis
English Keywords: Machine Vision, 3D Vision, Image Processing, Tactile Sensing, Random Bin-Picking, HSV Color Analysis
    Industry 4.0 and smart manufacturing have become an inevitable development trend across industries. Because integrating machine vision with motion control systems in automated processes can effectively raise production efficiency, machine vision has become a focus of smart-manufacturing technologies and applications. This research aims to develop a color analysis algorithm that combines a six-axis robot's 3D vision and tactile sensing, applied to gripping objects of different softness, hardness, and shape. The core work is a 3D machine-vision subsystem built with a stereo camera, combined with a six-axis robot equipped with a soft gripper, and verified by picking bulk objects of various softness/hardness and with multiple degrees of freedom. On the software side, the OpenCV C++ open-source library is used for maximum flexibility: HSV color conversion and edge detection first locate the pixel coordinates of each bulk object's center, and the depth and XY data from the 3D camera then yield the spatial coordinates of the object's center and its tilt angles about the X, Y, and Z axes. These coordinates are transformed into robot coordinates, after which information such as object softness from smart tactile sensing performs a second discrimination of the object; the orientation of the gripping trajectory is then computed and sent to the robot to perform the grasp. This research completes the architecture and calibration of a robot combining 3D vision and tactile sensing, capable of recognizing objects of different colors, softness/hardness, and shapes, and integrates a robot-arm gripping algorithm based on 3D vision and tactile sensing so that the robot can correctly pick every object from a bulk bin in the best order, realizing the use of a low-cost 3D camera to establish industrial-robot 3D vision and tactile perception for color recognition and sorted picking of bulk objects. Finally, two scenarios are presented: recognition and sorted picking of different fruits through HSV color analysis, and recognition and picking of objects of different shapes.
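The pixel-level step described above (HSV thresholding followed by locating an object's center) can be sketched roughly as follows. This is an illustrative stdlib-only reimplementation, not the thesis's OpenCV C++ code; the image, hue range, and function name are hypothetical.

```python
import colorsys

def find_center_by_hsv(rgb_image, h_range, s_min=0.3, v_min=0.2):
    """Threshold an RGB image in HSV space and return the centroid (u, v)
    of the matching pixels, or None if nothing matches.
    rgb_image: list of rows of (r, g, b) tuples with 0-255 channels.
    h_range: (lo, hi) hue range in degrees; lo > hi means it wraps past 0."""
    sum_u = sum_v = count = 0
    for v_idx, row in enumerate(rgb_image):
        for u_idx, (r, g, b) in enumerate(row):
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            h_deg = h * 360.0
            lo, hi = h_range
            in_hue = (lo <= h_deg <= hi) if lo <= hi else (h_deg >= lo or h_deg <= hi)
            if in_hue and s >= s_min and v >= v_min:
                sum_u += u_idx
                sum_v += v_idx
                count += 1
    if count == 0:
        return None
    return (sum_u / count, sum_v / count)

# Tiny synthetic 5x5 image: a red 2x2 patch on a gray background.
img = [[(128, 128, 128)] * 5 for _ in range(5)]
for vv in (1, 2):
    for uu in (2, 3):
        img[vv][uu] = (220, 30, 30)

center = find_center_by_hsv(img, h_range=(350, 10))  # red wraps around 0 deg
print(center)  # centroid of the red patch -> (2.5, 1.5)
```

In the thesis the corresponding step would also run edge detection on the mask; here the centroid of the thresholded pixels stands in for that contour-based center.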


    Industry 4.0 and smart manufacturing have become a development trend across industries. Because machine vision can be integrated with motion control systems in automated processes to effectively improve production efficiency, machine vision has become a focus of technologies and applications in smart manufacturing. This research aims to integrate a color analysis algorithm combining 3D vision and tactile sensing on a six-axis robot to grip objects of different softness and shape. A stereo camera is used to develop a 3D vision subsystem that cooperates with the six-axis robot, and soft-gripping verification is performed on bulk objects of various hardness with multiple degrees of freedom. The software uses the OpenCV C++ open-source library to maintain maximum flexibility. HSV color conversion and edge detection locate the pixel coordinates of the detected object's center, and the depth and XY data from the 3D camera yield the object's spatial coordinates and its tilt angles about the X, Y, and Z axes. These coordinates are transformed into robot coordinates; information such as the softness and hardness of the object from intelligent tactile sensing then performs a second recognition of the object, after which the gripper's route is calculated for the robot's gripping task.
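The coordinate chain described above (pixel + depth → camera-frame XYZ → robot frame) follows the standard pinhole back-projection plus a rigid hand-eye transform. A minimal sketch, with made-up intrinsics roughly in the range of a RealSense D400-series camera and an assumed calibration matrix (the thesis's actual calibration values are not reproduced here):

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at `depth` (metres)
    to a 3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def camera_to_robot(point, rotation, translation):
    """Apply a rigid transform p_robot = R * p_cam + t.
    `rotation` is a 3x3 row-major matrix, `translation` a 3-vector."""
    return tuple(
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

# Hypothetical intrinsics (focal lengths and principal point in pixels).
fx = fy = 615.0
cx, cy = 320.0, 240.0

p_cam = deproject(u=400, v=300, depth=0.50, fx=fx, fy=fy, cx=cx, cy=cy)

# Assumed hand-eye calibration: camera axes aligned with the robot base,
# camera mounted 0.6 m above the robot origin, looking straight down.
R = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]
t = [0.0, 0.0, 0.6]
p_robot = camera_to_robot(p_cam, R, t)
print(p_cam, p_robot)  # p_robot z = 0.6 - 0.5 = 0.1 m above the base
```

In practice the rotation and translation would come from the vision-to-robot calibration procedure the thesis describes, not from an assumed camera pose as here.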
    This research completes the architecture and calibration of a robot combining 3D vision and tactile sensing, which can identify objects of different colors, hardness, and shapes. With the 3D vision and tactile-sensing gripping algorithm, the robot can correctly pick objects from a bulk bin. A low-cost 3D camera is thereby used to establish industrial-robot 3D vision and tactile sensing for color recognition and gripping of objects of different softness and shape. Finally, two scenarios are presented: one is the identification and sorting of different fruits by HSV color analysis, and the other is the identification and gripping of objects of different shapes.
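The first scenario (sorting fruits by HSV color analysis) amounts to mapping a measured hue to a class label. A toy illustration with invented hue ranges and labels (the thesis's actual thresholds are not reproduced here):

```python
# Hypothetical hue ranges (degrees) for a fruit-sorting scenario;
# a range that wraps past 0 deg is written as (lo, hi) with lo > hi.
FRUIT_HUES = {
    "apple (red)": (350, 15),
    "orange": (20, 45),
    "banana (yellow)": (46, 70),
    "lime (green)": (80, 160),
}

def classify_by_hue(h_deg):
    """Return the fruit label whose hue range contains h_deg, else None."""
    for label, (lo, hi) in FRUIT_HUES.items():
        if (lo <= h_deg <= hi) if lo <= hi else (h_deg >= lo or h_deg <= hi):
            return label
    return None

print(classify_by_hue(30))   # -> "orange"
print(classify_by_hue(355))  # -> "apple (red)"
```

A real system would take the hue from the mean HSV value inside the detected object's contour and combine it with the tactile softness check before committing to a grasp.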

    Acknowledgments II
    Abstract III
    Contents V
    List of Figures VIII
    List of Tables XII
    Chapter 1 Introduction 1
        1.1 Research Background 1
        1.2 Research Motivation and Objectives 2
        1.3 Research Methods 3
        1.4 Thesis Organization 4
    Chapter 2 Literature Review and Related Technologies 6
        2.1 Evolution of Industrial Robots 7
        2.2 Related Research on Tactile Sensors 12
        2.3 Related Research on Robotic Bin Picking 16
            2.3.1 Structured picking 16
            2.3.2 Semi-structured picking 17
            2.3.3 Random picking 18
    Chapter 3 System Architecture and Research Methods 20
        3.1 System Architecture and Experimental Methods 20
        3.2 System Hardware 24
            3.2.1 Arduino control board 24
            3.2.2 Pressure sensor 25
            3.2.3 3D camera 28
            3.2.4 Robot system 30
        3.3 Software Development Environment 32
            3.3.1 Image processing 32
            3.3.2 Tactile sensing system 33
            3.3.3 Robot software development 34
        3.4 3D Camera and Vision Imaging 36
            3.4.1 3D vision 36
            3.4.2 Camera calibration 37
            3.4.3 Depth computation 42
            3.4.4 Depth resolution 45
            3.4.5 Image morphology 47
            3.4.6 Image filtering 50
    Chapter 4 3D Imaging and Robot Smart Gripping System 52
        4.1 3D Vision Calibration and Depth Coordinate Computation 52
            4.1.1 Camera image alignment 52
            4.1.2 Depth field of view and invalid depth band 53
            4.1.3 Dynamic target calibration 55
        4.2 Image Recognition System and Depth Coordinate Computation 57
            4.2.1 Relationship between depth and horizontal resolution 57
            4.2.2 Pixel-to-vision coordinate transformation 59
            4.2.3 Object visual recognition algorithm 62
        4.3 Robot Gripping System 66
            4.3.1 Coordinate setup 67
            4.3.2 Vision-to-robot coordinate transformation 70
            4.3.3 Robot gripping path computation 72
    Chapter 5 Experimental Results and Discussion 79
        5.1 Image Processing and Tactile Measurement Results 79
            5.1.1 Image pre-processing 79
            5.1.2 Object recognition 79
            5.1.3 Tactile sensing 83
        5.2 Actual Gripping Results 86
            5.2.1 Application scenario: sorting multiple fruits 88
            5.2.2 Application scenario: gripping different tubular workpieces 90
        5.3 Failure Case Discussion 91
            5.3.1 Reshuffling the bin 91
            5.3.2 Robot self-protection and retry actions 91
    Chapter 6 Conclusions and Future Work 92
        6.1 Conclusions 92
        6.2 Future Directions 93
    References 94

    [1] K. Liu, Z. Sun and M. Fujii, "Ellipse Detection Based Bin-Picking Visual Servoing System," 2010 Chinese Conference on Pattern Recognition (CCPR), Chongqing, pp. 1-5, Oct. 2010.
    [2] X. Fan, X. Wang and Y. Xiao, "A combined 2D-3D vision system for automatic robot picking," Proceedings of the 2014 International Conference on Advanced Mechatronic Systems, Kumamoto, pp. 513-516, 2014.
    [3] C. Martinez, H. Chen and R. Boca, "Automated 3D vision guided bin picking process for randomly located industrial parts," 2015 IEEE International Conference on Industrial Technology (ICIT), Seville, pp. 3172-3177, 2015.
    [4] P. Tsarouchi, "A method for detection of randomly placed objects for robotic handling," CIRP Journal of Manufacturing Science and Technology, vol. 14, pp. 20-27, 2016.
    [5] 邱威堯, 謝伯璜, 張津魁, 呂尚杰, 張俊隆, "Introduction to pick-and-place technology for stacked objects" (in Chinese), 機械工業, no. 362, pp. 20-25, May 2013.
    [6] 呂尚杰, 江博通, 張俊隆, "Introduction to vision-guided robot applications for metal-product pick-and-place" (in Chinese), 機械工業, no. 362, pp. 26-33, May 2013.
    [7] 呂尚杰, 邱威堯, 江博通, 張俊隆, "Introduction to pick-and-place technology for stacked faucet hardware" (in Chinese), 機械工業, no. 374, pp. 57-64, May 2014.
    [8] 張俊隆, 呂尚杰, 謝伯璜, 江博通, 黃國唐, "Non-parallel 3D vision-guided robots for electronic product assembly" (in Chinese), 電子月刊, vol. 18, no. 5, pp. 106-114, May 2012.
    [9] 邱威堯, 呂尚杰, 江博通, 張俊隆, "Dual-laser-scanning-based 3D vision-guided robot technology and applications" (in Chinese), 電子月刊, vol. 20, no. 5, pp. 78-87, May 2014.
    [10] 唐培文, "Automatic random object picking system based on structured light" (in Chinese), Master's thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, 2015.
    [11] 孫永富, "Study of establishing industrial-robot 3D vision with a low-cost 3D camera for random bin picking" (in Chinese), Master's thesis, Graduate Institute of Automation and Control, National Taiwan University of Science and Technology, 2019.
    [12] L. Meli, C. Pacchierotti, D. Prattichizzo, "Sensory subtraction in robot-assisted surgery: Fingertip skin deformation feedback to ensure safety and improve transparency in bimanual haptic interaction", IEEE Trans. Biomed. Eng., vol. 61, no. 4, pp. 1318-1327, Apr. 2014.
    [13] C. Pacchierotti, D. Prattichizzo, K. J. Kuchenbecker, "Cutaneous feedback of fingertip deformation and vibration for palpation in robotic surgery", IEEE Trans. Biomed. Eng., vol. 63, no. 2, pp. 278-287, Feb. 2016.
    [14] J. L. Toennies, J. Burgner, T. J. Withrow, "Toward haptic/aural touchscreen display of graphical mathematics for the education of blind students", Proc. IEEE World Haptics Conf., pp. 373-378, 2011.
    [15] M. Pantelios, L. Tsiknas, S. Christodoulou, T. Papatheodorou, "Haptics technology in educational applications a case study", J. Digit. Inf. Manage., vol. 2, pp. 171-178, 2004.
    [16] M. G. Jones, T. Andre, R. Superfine, R. Taylor, "Learning at the nanoscale: The impact of students’ use of remote microscopy on concepts of viruses scale and microscopy", J. Res. Sci. Teaching, vol. 40, no. 3, pp. 303-322, 2003.
    [17] M. Salada, P. Vishton, J. E. Colgate, E. Frankel, "Two experiments on the perception of slip at the fingertip", Proc. IEEE Symp. Haptic Interfaces Virtual Environ. Teleoperator Syst., pp. 146-153, 2004.
    [18] W. R. Provancher, N. D. Sylvester, "Fingerpad skin stretch increases the perception of virtual friction", IEEE Trans. Haptics, vol. 2, no. 4, pp. 212-223, Oct.-Dec. 2009.
    [19] C. Carron, "A little fold-up joystick brings haptics to portable devices," 2018, Available: https://phys.org/news/2018-04-fold-up-joystick-haptics-portable-devices.html
    [20] E. Ackerman, "CES 2018: Tactical Haptics redesigns its magical force feedback controller," 2018, Available: https://spectrum.ieee.org/tech-talk/consumer-electronics/gaming/ces-2018-tactical-haptics-redesigns-its-magical-force-feedback-controller
    [21] S. Charara, "Go Touch VR has made a haptic glove without the glove," 2017, Available: https://www.wareable.com/vr/go-touch-vr-haptic-finger-accessory-7765
    [22] C. Pacchierotti, S. Sinclair, M. Solazzi, A. Frisoli, V. Hayward, D. Prattichizzo, "Wearable haptic systems for the fingertip and the hand: Taxonomy review and perspectives", IEEE Trans. Haptics, vol. 10, no. 4, pp. 580-600, Oct.-Dec. 2017.
    [23] D. Prattichizzo, F. Chinello, C. Pacchierotti, M. Malvezzi, "Towards wearability in fingertip haptics: A 3-DoF wearable device for cutaneous force feedback", IEEE Trans. Haptics, vol. 6, no. 4, pp. 506-516, Oct.-Dec. 2013.
    [24] M. Solazzi, A. Frisoli, M. Bergamasco, "Design of a cutaneous fingertip display for improving haptic exploration of virtual objects", Proc. IEEE Int. Symp. Robot Human Interactive Commun., pp. 1-6, 2010.
    [25] R. J. Webster III, T. E. Murphy, L. N. Verner, A. M. Okamura, "A novel two-dimensional tactile slip display: Design kinematics and perceptual experiments", ACM Trans. Appl. Perceptions, vol. 2, no. 2, pp. 150-165, 2005.
    [26] K. Minamizawa, S. Fukamachi, H. Kajimoto, N. Kawakami, S. Tachi, "Gravity Grabber: Wearable haptic display to present virtual mass sensation", Proc. SIGGRAPH Emerg. Technologies, Aug. 2007.
    [27] D. Tsetserukou, S. Hosokawa, K. Terashima, "LinkTouch: A wearable haptic device with five-bar linkage mechanism for presentation of two-DoF force feedback at the fingerpad", Proc. IEEE Haptics Symp., pp. 307-312, 2014.
    [28] D. Leonardis, M. Solazzi, I. Bortone, A. Frisoli, "A wearable fingertip haptic device with 3 DoF asymmetric 3-RSR kinematics", Proc. IEEE World Haptics Conf., pp. 388-393, 2015.
    [29] Robot history, Available: https://www.booster-machine.com/article_detail_11.htm
    [30] Universal Robots, Available: https://www.universal-robots.com
    [31] US Smart Robot Market Forecast, Available: https://www.grandviewresearch.com
    [32] Twendy One, Available: https://robots.ieee.org/robots/twendyone/
    [33] Shadow Hand, Available: https://www.shadowrobot.com/products/dexterous-hand/
    [34] Touché Solutions, Available: https://www.touche.solutions/solutions/
    [35] J. A. Marvel, K. Saidi and R. Eastman, "Technology readiness levels for randomized bin picking," Proceedings of the Workshop on Performance Metrics for Intelligent Systems (PerMIS '12), College Park, MD, ACM, pp. 109-113, 2012.
    [36] NIST, Agile Robotics for Industrial Automation Competition, 2019, Available: https://www.nist.gov/el/intelligent-systems-division-73500/agile-robotics-industrial-automation-competition
    [37] Y. Yokokohji, Y. Kawai, M. Shibata, Y. Aiyama, S. Kotosaka, W. Uemura, A. Noda, H. Dobashi, T. Sakaguchi and K. Yokoi, "Assembly challenge: a robot competition of the Industrial Robotics category, World Robot Summit – summary of the pre-competition in 2018," Advanced Robotics, pp. 876-899, 2019, DOI: 10.1080/01691864.2019.1663609.
    [38] M. Shibata, H. Dobashi, W. Uemura, S. Kotosaka, Y. Aiyama, T. Sakaguchi, Y. Kawai, A. Noda, K. Yokoi and Y. Yokokohji, "Task-board task for assembling a belt drive unit," Advanced Robotics, pp. 677-689, 2019, DOI: 10.1080/01691864.2020.1717613.
    [39] Random Bin Picking, Available: https://motioncontrolsrobotics.com
    [40] Arduino UNO REV3, Available: https://store.arduino.cc/usa/arduino-uno-rev3
    [41] Force Sensitive Resistor FSR-402, Available: https://www.sparkfun.com
    [42] Force Sensitive Resistor Hookup Guide, Available: https://learn.sparkfun.com/tutorials/force-sensitive-resistor-hookup-guide#example-hardware-hookup
    [43] Intel RealSense Product Family D400 Series Datasheet, Available: https://www.intelrealsense.com/wp-content/uploads/2020/06/Intel-RealSense-D400-Series-Datasheet-June-2020.pdf
    [44] EPSON 6-Axis VT6-A901S, Available: https://www.epson.eu/products/robot/epson-6-axis-vt6-a901s-with-built-in-controller#specifications
    [45] A. Kaehler and G. Bradski, Learning OpenCV 3, O'Reilly Media, 2017.
    [46] D. C. Brown, "Decentering Distortion of Lenses," Photometric Engineering, vol. 32, no. 3, pp. 444-462, 1966.
    [47] P. Monasse, J.-M. Morel and Z. Tang, "Three-step image rectification," British Machine Vision Conference, Aberystwyth, United Kingdom, pp. 89.1-89.10, Aug. 2010.
    [48] Bogusław Cyganek and J. Paul Siebert, An Introduction to 3D Computer Vision Techniques and Algorithms, John Wiley & Sons, Ltd., 2009.
    [49] Image Processing, Available: https://www.cs.auckland.ac.nz/courses/compsci773s1c/lectures/ImageProcessing-html/topic4.htm#erosion
    [50] P. Maragos, in The Essential Guide to Image Processing, 2nd ed., Academic Press, Boston, pp. 293-321, 2009.
    [51] Intel RealSense Product Family D400 Series Datasheet, Available: https://www.intelrealsense.com/wp-content/uploads/2020/06/Intel-RealSense-D400-Series-Datasheet-June-2020.pdf
    [52] Canny Edge Detector, Available: https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html#explanation
    [53] L. Li and W. Jiang, "An Improved Douglas-Peucker Algorithm for Fast Curve Approximation," 2010 3rd International Congress on Image and Signal Processing (CISP), vol. 4, pp. 1797-1802, 2010.

    Full text available from 2023/08/27 (campus network); from 2025/08/27 (off-campus network); from 2025/08/27 (National Central Library: NDLTD Taiwan system).