
Graduate Student: 賴炫霖 (Hsuan-Lin Lai)
Thesis Title: 結合2D和3D視覺之機器手臂蘭花芽株全自主取放系統
(Robot arm based autonomous orchid bud grasping and placing system based on 2D/3D vision)
Advisor: 林其禹 (Chyi-Yeu Lin)
Committee Members: 李維楨 (Wei-Chen Lee), 劉孟昆 (Meng-Kun Liu)
Degree: Master
Department: College of Engineering - Department of Mechanical Engineering
Year of Publication: 2021
Academic Year of Graduation: 109
Language: Chinese
Pages: 90
Chinese Keywords: 蘭花芽株操作, 全自主物件夾取, 影像處理, 2D物件辨識, Perspective-n-Point, 雙目相機
English Keywords: Orchid Bud Operations, Autonomous Object Grasping, Image Processing, 2D Object Recognition, Perspective-n-Point, Binocular Vision Theory
  • This research develops intelligent automation techniques for orchid bud cultivation: a robot arm autonomously grasps orchid buds out of the culture bottle, and autonomously places the small buds cut from the orchid stems back into the bottle. The system combines 2D and 3D vision techniques with a six-axis serial robot arm for object grasping, building on basic camera theory, binocular vision theory, Perspective-n-Point (PnP) for camera positioning, robot kinematics, and deep learning for training the 2D object recognition model.
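The camera fundamentals referenced above reduce to the standard pinhole projection model, which maps a 3D point in the camera frame to pixel coordinates through the intrinsic parameters. A minimal sketch in Python; all numeric values below are illustrative, not the thesis camera's actual calibration:

```python
# Pinhole projection: map a camera-frame 3D point to pixel coordinates
# using the intrinsic parameters (focal lengths fx, fy in pixels and
# principal point cx, cy). Values are illustrative only.

def project_point(X, Y, Z, fx, fy, cx, cy):
    """Project the camera-frame point (X, Y, Z) onto the image plane."""
    if Z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    u = fx * X / Z + cx   # horizontal pixel coordinate
    v = fy * Y / Z + cy   # vertical pixel coordinate
    return u, v

# A point 0.5 m ahead of the camera and 0.1 m right of the optical axis:
u, v = project_point(0.1, 0.0, 0.5, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```

PnP solvers invert exactly this mapping: given several known 3D points and their observed pixels, they recover the camera pose.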
    For the system that grasps orchid buds out of the bottle, the culture bottle prevents a depth camera from modeling the scene effectively, so this research uses a binocular camera to overcome the problem and accomplish grasping the orchids from the bottle. For the system that places orchid buds into the culture bottle, the bottle wall does not interfere with 3D modeling, so a depth camera mounted on the robot arm performs the image modeling.
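The binocular workaround rests on stereo triangulation: for a rectified camera pair, the depth of a matched point follows directly from its horizontal disparity. A minimal sketch with illustrative numbers (not the thesis's stereo rig parameters):

```python
# Binocular depth from disparity for a rectified stereo pair:
#   Z = f * B / d
# where f is the focal length in pixels, B the baseline between the two
# cameras (meters), and d the horizontal disparity (pixels) between the
# matched points in the left and right images.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Return the depth (meters) of a matched point pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 6 cm, disparity = 14 px.
Z = depth_from_disparity(700.0, 0.06, 14.0)
```

Note the inverse relation: nearby objects produce large disparities, so depth resolution degrades for distant points.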
    To ensure the head and root positions of each bud are identified correctly, deep learning determines the orientation before the grasping action. Because of the gripper design and the robot kinematics, the placing operation is divided into a two-stage procedure: in the first stage, the head and tail of each object are identified, and the buds are grasped and placed onto a fixture rack in sequence; in the second stage, the robot arm's insertion path is planned, and the buds on the rack are placed into the bottle one by one.
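The two-stage flow described above can be sketched as a simple control loop. Every function name and value here is a hypothetical stub standing in for the thesis's detection and motion-planning modules, kept runnable only so the control flow is concrete:

```python
# Sketch of the two-stage placing flow. All functions are hypothetical
# stand-ins for the thesis's deep-learning detector and arm controller.

def detect_buds():
    # Stand-in for head/root detection: each bud is
    # (bud_id, head_position_xy, root_position_xy).
    return [(1, (0.10, 0.20), (0.10, 0.30)),
            (2, (0.15, 0.22), (0.15, 0.32))]

def place_on_rack(bud, slot):
    # Stand-in for grasping a bud and setting it in a fixture-rack slot.
    return ("rack", slot, bud[0])

def place_into_bottle(slot):
    # Stand-in for the planned insertion path from rack slot to bottle.
    return ("bottle", slot)

def two_stage_placement():
    # Stage 1: identify head/tail and place each bud on the rack in order.
    rack = [place_on_rack(bud, slot)
            for slot, bud in enumerate(detect_buds())]
    # Stage 2: move the buds from the rack into the bottle in sequence.
    return [place_into_bottle(slot) for (_, slot, _) in rack]

result = two_stage_placement()
```

Splitting the task this way lets the vision step and the constrained insertion motion each be solved in a fixed, repeatable configuration.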


    This research aims to develop robot arm-based autonomous orchid bud operation techniques, including picking orchid buds out of the bottle and placing the buds back into the bottle after the fixing operation. The techniques implement both 2D and 3D vision systems and use a six-axis robot arm for object grasping. The methods employed include basic camera theory, binocular vision theory, Perspective-n-Point (PnP) for camera positioning, robotics, and 2D object recognition using deep learning.
    For the automatic grasping and picking of orchid buds out of the bottle, general depth camera modeling methods are often based on laser-assisted positioning, so a depth camera cannot effectively model the transparent bottle that contains the orchid buds. This research uses a binocular camera to overcome this problem and achieve the task of grasping the orchid buds from the bottle. In the system that places orchid buds back into the bottle, the transparent bottle wall does not interfere with the 3D modeling; therefore, a depth camera attached to the robot arm performs the image modeling.
    To ensure the correct positioning of the head and roots of the orchid buds, a deep learning algorithm determines the orientation and then provides the information needed for the grasping action. The grasping process is divided into two stages. The first stage identifies the head and tail of each orchid bud, then grasps the buds and places them onto the fixture rack in sequence. The second stage picks the items from the rack and places them back into the bottle in order.
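With the depth camera mounted on the arm (eye-in-hand), a point measured in the camera frame reaches the robot base frame through the chained homogeneous transforms p_base = T_base_ee · T_ee_cam · p_cam, where T_ee_cam comes from hand-eye calibration. A minimal sketch with translation-only transforms; the numeric values are made up, and a real chain also carries the rotations recovered by the AX=ZB calibration:

```python
# Eye-in-hand coordinate chain: p_base = T_base_ee * T_ee_cam * p_cam.
# 4x4 homogeneous transforms in pure Python; the transforms below are
# illustrative (translation only), not calibrated values.

def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

def translation(tx, ty, tz):
    """Build a pure-translation 4x4 homogeneous transform."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

T_base_ee = translation(0.4, 0.0, 0.3)   # end-effector pose in the base frame
T_ee_cam = translation(0.0, 0.05, 0.1)   # camera pose on the flange (hand-eye)

p_cam = (0.0, 0.0, 0.25)                 # point seen 25 cm in front of camera
p_ee = mat_vec(T_ee_cam, p_cam)          # point in the end-effector frame
p_base = mat_vec(T_base_ee, p_ee)        # point in the robot base frame
```

Any error in T_ee_cam propagates directly into the grasp target in the base frame, which is why the hand-eye calibration step precedes all grasping experiments.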

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
      1-1 Preface
      1-2 Research Motivation and Objectives
      1-3 Thesis Structure
    Chapter 2 Fundamental Theory
      2-1 Camera Systems
        2-1-1 Camera Imaging Principles
        2-1-2 Intrinsic Parameters
        2-1-3 Extrinsic Parameters
        2-1-4 Distortion Coefficients
        2-1-5 Depth Cameras
      2-2 Perspective-n-Point (PnP)
      2-3 Image Processing - Corner Detection
      2-4 Hand-Eye Calibration
        2-4-1 AX=ZB Hand-Eye Calibration
      2-5 Robot Arm Kinematics
        2-5-1 Coordinate Transformation and Control of the Arm End-Effector
      2-6 Binocular Vision System
      2-7 Deep Learning - 2D Object Recognition
      2-8 Image Matching
    Chapter 3 Experimental Equipment and Setup
      3-1 Six-Axis Robot Arm
      3-2 Gripper
      3-3 Arm Positioning Aid
      3-4 Fixture Rack
      3-5 Depth Camera
      3-6 Computer Specifications
      3-7 2D Camera
      3-8 Experimental Environment
    Chapter 4 Experimental Procedure
      4-1 System Architecture and Flow
      4-2 Camera Calibration
      4-3 PnP Conversion between Arm and Camera
      4-4 AI Preparation
      4-5 Object Grasping
    Chapter 5 Experimental Results
      5-1 Picking Orchids Out
      5-2 Placing Orchids In
    Chapter 6 Conclusions and Future Work
      6-1 Conclusions
      6-2 Future Work
    References

