
Graduate Student: 黃祐音 (Yu-Yin Huang)
Thesis Title: 基於立體視覺之機器人3D物件拿取研究 (Study of Robotic 3D Object Grasping with Stereoscopic Vision)
Advisors: 郭重顯 (Chung-Hsien Kuo), 蘇順豐 (Shun-Feng Su)
Committee Members: 鍾聖倫 (Sheng-Luen Chung), 林惠勇 (Huei-Yung Lin), 蘇順豐 (Shun-Feng Su), 林峻永 (Chun-Yeon Lin)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of Publication: 2022
Graduation Academic Year: 110 (2021-2022)
Language: English
Number of Pages: 89
Keywords: Object Edge Extraction, Grasping Area Identification, Object Grasping Pose Generation, Background Matting, Coordinate Transformation Relationship, Object Type Classification

Abstract:
This thesis presents a study of robotic 3D object grasping based on stereoscopic vision. The system is divided into three parts: an arbitrary object edge extraction system, a grasping area identification system, and a grasping pose generation system. The arbitrary object edge extraction system classifies objects into categories according to object shape as a human visual observer would judge it; combined with background matting, it extracts the object silhouette against an arbitrary background, and morphological image processing then recovers the complete object contour. Grasping area identification takes this complete contour and, according to the class assigned by the object type classifier, identifies a type-specific grasping position and generates planar grasping area coordinates for the object. A coordinate transformation then converts these image coordinates into robot arm coordinates to generate the grasping pose, and the result is finally sent to the robot arm to carry out the actual grasp.
This thesis validates the system with three experiments: a coordinate transformation positioning experiment that quantifies the error between the coordinate transformations, an object touch point experiment that analyzes grasping pose error, and an object grasping position experiment on a real robot arm that confirms the computed pose can actually be transferred to the arm to complete the grasping task. The complete operation with the robot arm supports the thesis, and the experimental results show high reliability.
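As an illustration of the contour recovery step described above, the following is a minimal Python sketch, not code from the thesis: it assumes a background-matting model has already produced an alpha matte for the scene, and the 0.5 threshold and 7x7 elliptical kernel are illustrative choices.

import cv2
import numpy as np

def extract_object_contour(alpha):
    """Recover the complete object contour from an HxW alpha matte in [0, 1]."""
    # Binarize the matte (0.5 is an assumed cut-off).
    mask = (alpha > 0.5).astype(np.uint8) * 255
    # Closing fills small holes and opening removes speckle, so the
    # outline comes out complete, in the spirit of the morphological
    # post-processing step the abstract describes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Keep only the outer boundary of the largest connected region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no object found in matte")
    return max(contours, key=cv2.contourArea)

The returned contour is the kind of input that grasping area identification would consume to locate a type-specific grasp position.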

Table of Contents:
Advisor Recommendation Letter
Oral Examination Committee Approval
Acknowledgements
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Tables
List of Figures
Nomenclature
Chapter 1 Introduction
 1.1 Motivation and Purpose
 1.2 Literature Review
  1.2.1 Related Research
  1.2.2 Robotic Grasp Detection
  1.2.3 Background Matting
  1.2.4 Object Type Classifier
  1.2.5 Grasp Pose Generation
 1.3 Organization of the Thesis
  1.3.1 Chapter Introduction
Chapter 2 System Architecture and Operation
 2.1 System Flowchart
 2.2 Hardware Architecture
 2.3 Experimental Environment
Chapter 3 Methods
 3.1 Object Edge Extraction System
  3.1.1 Arbitrary Object Type Classifier
  3.1.2 Background Matting
  3.1.3 Post-Processing
Chapter 4 Grip Area Identification
 4.1 Long Object
 4.2 Circle Object
 4.3 Columnar Object
 4.4 Blade Object
Chapter 5 Grip Pose Generation System
 5.1 Conversion of the Grip Center Plane Coordinate to the Camera Point Cloud Coordinate System
 5.2 Conversion of the Camera Point Cloud Coordinate System to the World Coordinate System
 5.3 Conversion of the World Coordinate System to the Robot Arm Coordinate System
Chapter 6 Experimental Results
 6.1 Coordinate Conversion and Positioning Experiment
  6.1.1 Place 1
  6.1.2 Place 2
  6.1.3 Place 3
  6.1.4 Place 4
 6.2 Object Touch Point Positioning Experiment
  6.2.1 Long Object
  6.2.2 Circle Object
  6.2.3 Columnar Object
  6.2.4 Blade Object
 6.3 Object Grip Experiment
  6.3.1 Long Object
  6.3.2 Circle Object
  6.3.3 Columnar Object
  6.3.4 Blade Object
 6.4 Comparison of Clamping Positions
  6.4.1 Cornell Grasp Dataset
  6.4.2 Comparison Results
Chapter 7 Conclusions and Future Works
 7.1 Conclusions
 7.2 Future Works
References
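To make the coordinate chain listed under Chapter 5 concrete (grip center plane coordinate to camera point cloud, camera to world, world to robot arm), here is a minimal Python sketch under an assumed pinhole camera model; the intrinsics fx, fy, cx, cy and the 4x4 homogeneous transforms T_world_cam and T_arm_world stand in for calibration results and are not values from the thesis.

import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth, 1.0])  # homogeneous 3D point

def camera_to_arm(p_cam, T_world_cam, T_arm_world):
    """Chain the camera->world and world->arm rigid-body transforms (4x4)."""
    return (T_arm_world @ T_world_cam @ p_cam)[:3]

# Purely illustrative usage with identity extrinsics: a pixel at the
# principal point with 0.5 m depth maps to (0, 0, 0.5) in the arm frame.
p_cam = pixel_to_camera(u=320, v=240, depth=0.5,
                        fx=600.0, fy=600.0, cx=320.0, cy=240.0)
p_arm = camera_to_arm(p_cam, np.eye(4), np.eye(4))
print(p_arm)  # -> [0.  0.  0.5]

In a real setup, T_world_cam would come from extrinsic calibration and T_arm_world from hand-eye calibration; the thesis's coordinate transformation positioning experiment measures exactly the error accumulated along this chain.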


Full Text Release Date: 2027/08/02 (campus network)
Full Text Release Date: not authorized for public release (off-campus network)
Full Text Release Date: not authorized for public release (National Central Library: Taiwan NDLTD system)