
Graduate Student: Chia-Yu Hu (胡家瑜)
Thesis Title: Combining Robot Kinematics and Camera Model for Real-time Image Processing Techniques (結合機器人運動學與相機成像模型之即時影像處理技術)
Advisor: Chung-Hsien Kuo (郭重顯)
Committee Members: Kuo-Ho Su (蘇國和), Chin-Chung Wong (翁慶昌), Meng-Kun Liu (劉孟昆), Shun-Feng Su (蘇順豐)
Degree: Master's
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2016
Graduation Academic Year: 104
Language: Chinese
Pages: 66
Keywords: Image Recognition, Dynamic Region of Interest, Image Registration, Combining Robot Kinematics and Camera Model, Visual Servo
  • This thesis proposes a real-time image processing technique that combines robot kinematics with a camera imaging model; within visual servoing, it belongs to the eye-to-hand image processing architecture. This architecture can be used to analyze the characteristics of an object held by the robot end effector, but its wide field of view slows image processing and introduces color interference from unevenly distributed ambient light. This study therefore combines robot kinematics and the camera imaging model to achieve image registration and predict the object's position in the image. The method has two parts. The first is forward kinematics: a 3D model is constructed for the object held by the end effector, and the object's position in three-dimensional space is computed from encoder feedback. The second is the camera imaging model: a camera calibration procedure is applied, first calibrating the camera's intrinsic parameters, then placing a calibration board while the end effector holds a fixed position and orientation to calibrate the extrinsic parameters; the result is a transformation from three-dimensional space to the 2D camera image space. With this coordinate transformation, the object information computed by kinematics can be mapped into image space in real time during recognition, and a dynamic region of interest can be established, reducing the complexity of image processing while mitigating color interference. To validate the system, a challenging spaghetti stir-frying robot was chosen, and the system was applied to recognize the image features of the noodles during stir-frying; the results can serve as a basis for future parameter tuning of a visually servoed automatic stir-frying robot. The experiments verify the accuracy of the predicted pan position, evaluate the noodle area and center-of-mass trajectory in the image while the robot stir-fries, and show that the method greatly improves image processing efficiency.


    This study proposes a real-time image processing technique that combines robot kinematics with a camera model. The proposed technique operates in an eye-to-hand image processing architecture, which can analyze objects held by the robot end effector. However, the wide field of view (FOV) of this architecture slows image processing and introduces color interference in the images.
    Therefore, to achieve image registration and predict the position of the object in the image, this study combines robot kinematics with the camera model. Given the known 3D CAD model of the object, forward kinematics computes its three-dimensional position from encoder feedback. The camera model, which must first be calibrated with a calibration board, converts the 3D workspace to 2D image coordinates. This yields a transformation from robot coordinates to image coordinates. During image recognition, this transformation maps the kinematic information to the object's image position in real time and produces a dynamic region of interest (ROI), reducing both the complexity of the image processing and the color interference in the image.
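The transformation described above can be sketched as a standard pinhole projection: a 3-D point reported by forward kinematics is mapped through the extrinsics [R | t] and the intrinsic matrix K into pixel coordinates, and an ROI is centered on the result. All numeric values below are illustrative placeholders, and the function names (`project_to_image`, `dynamic_roi`) are hypothetical, not from the thesis.

```python
import numpy as np

# Illustrative intrinsics K and extrinsics [R | t]; in practice these come
# from intrinsic and extrinsic calibration with a calibration board.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # rotation: robot base frame -> camera frame
t = np.array([0.0, 0.0, 1.0])      # translation (metres)

def project_to_image(p_robot):
    """Project a 3-D point in the robot base frame to pixel coordinates."""
    p_cam = R @ p_robot + t        # robot frame -> camera frame
    uvw = K @ p_cam                # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]        # perspective divide

def dynamic_roi(p_robot, half_w=60, half_h=60):
    """Center a rectangular ROI on the predicted image position of the object."""
    u, v = project_to_image(p_robot)
    return (int(u - half_w), int(v - half_h),
            int(u + half_w), int(v + half_h))

# Object position from forward kinematics (illustrative value):
print(dynamic_roi(np.array([0.1, -0.05, 0.5])))   # -> (313, 153, 433, 273)
```

Restricting recognition to this ROI is what reduces the per-frame processing cost: only the pixels near the predicted pan position need to be examined.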
    Moreover, a challenging stir-frying robot is chosen to verify the system. This thesis applies the system to identify image features during the stir-frying process; the results can support visual servoing that automatically adjusts the stir-frying robot's parameters in the future. The experiments first verify the accuracy of the predicted image position of the pan attached to the end effector, then evaluate the area and the center-of-mass trajectory of the spaghetti. Finally, the method is shown to dramatically increase image processing speed.
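The two noodle features evaluated above, area and center of mass, are straightforward to compute from a binary segmentation mask. A minimal sketch (the function name `mask_features` and the toy mask are illustrative, not from the thesis):

```python
import numpy as np

def mask_features(mask):
    """Return (area in pixels, center of mass (cx, cy)) of a binary mask."""
    ys, xs = np.nonzero(mask)      # row/column indices of foreground pixels
    area = xs.size
    if area == 0:
        return 0, None             # nothing segmented in this frame
    return area, (xs.mean(), ys.mean())

# A 5x5 toy mask with a 2x2 foreground blob:
m = np.zeros((5, 5), dtype=bool)
m[1:3, 2:4] = True
area, (cx, cy) = mask_features(m)
print(area, cx, cy)                # -> 4 2.5 1.5
```

Tracking `(cx, cy)` over consecutive frames gives the center-of-mass trajectory used to characterize the noodles' motion during stir-frying.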

    Advisor Recommendation i
    Committee Approval ii
    Acknowledgements iii
    Abstract (Chinese) v
    ABSTRACT vi
    Table of Contents vii
    List of Figures ix
    List of Tables xi
    Nomenclature xii
    Chapter 1 Introduction 1
      1.1 Background and Motivation 1
      1.2 Objectives 3
      1.3 Literature Review 4
        1.3.1 Visual Servoing 4
        1.3.2 Camera Calibration Methods 5
        1.3.3 Dynamic Image Analysis and Image Processing 6
      1.4 Thesis Organization 8
    Chapter 2 Methodology 9
      2.1 Stir-Frying Robot Kinematics 9
        2.1.1 Forward Kinematics 9
        2.1.2 Inverse Kinematics 14
      2.2 Camera Imaging Model 16
        2.2.1 Intrinsic Parameters 16
        2.2.2 Extrinsic Parameters 19
      2.3 Pan Image Registration Method 21
        2.3.1 Coordinate Relationship between Robot and Calibration Board 21
        2.3.2 Camera Calibration Procedure 23
        2.3.3 Pan 3D Model 26
        2.3.4 Pan Image Registration Procedure 28
        2.3.5 Image Frame Delay Compensation 29
    Chapter 3 Image Processing System for Spaghetti 31
      3.1 Robot and Vision System 31
        3.1.1 Image Acquisition Module 32
        3.1.2 Absolute Encoders 33
      3.2 Spaghetti Image Features 34
        3.2.1 Area 35
        3.2.2 Center of Mass 35
      3.3 Image Processing Flow 36
      3.4 Image Processing 38
        3.4.1 Dynamic Region of Interest 38
        3.4.2 Background Subtraction 39
        3.4.3 Color Space Recognition 41
    Chapter 4 Experimental Results and Discussion 42
      4.1 Pan Imaging Results under Different Extrinsic Parameters 42
      4.2 Analysis of Pan Image Registration Results 46
      4.3 Accuracy Analysis of Spaghetti Image Feature Recognition 51
      4.4 Computation Time Analysis of Different Recognition Algorithms for Spaghetti 59
    Chapter 5 Conclusions and Future Work 62
      5.1 Conclusions 62
      5.2 Future Work 63
    References 64


    Full text available from 2021/08/30 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: Taiwan NDLTD system)