| Author: | 胡家瑜 Chia-Yu Hu |
|---|---|
| Thesis title: | 結合機器人運動學與相機成像模型之即時影像處理技術 (Combining Robot Kinematics and Camera Model for Real-time Image Processing Techniques) |
| Advisor: | 郭重顯 Chung-Hsien Kuo |
| Committee members: | 蘇國和 Kuo-Ho Su, 翁慶昌 Chin-Chung Wong, 劉孟昆 Meng-Kun Liu, 蘇順豐 Shun-Feng Su |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science, Department of Electrical Engineering |
| Year of publication: | 2016 |
| Graduation academic year: | 104 (2015–2016) |
| Language: | Chinese |
| Pages: | 66 |
| Chinese keywords (translated): | image recognition, dynamic image region of interest, image registration, combining robot forward kinematics and camera model, visual servoing |
| English keywords: | Visual Servo, Combining Robot Kinematics and Camera Model, Image Registration, Dynamic Region of Interest, Image Recognition |
This thesis proposes a real-time image processing technique that combines robot kinematics with a camera imaging model; in visual servoing terms it belongs to the eye-to-hand image processing architecture. In general, this architecture can analyze the characteristics of an object held by the robot end effector; however, because its field of view is wide, image processing is slow and the images suffer color interference from unevenly distributed ambient lighting. This study therefore combines robot kinematics with a camera imaging model to achieve image registration and to predict the object's position in the image. The approach has two parts. The first is forward (direct) kinematics: a 3D model is built for the object held by the end effector, and its position in 3D space is computed from encoder feedback. The second is the camera model: using a camera calibration method, the intrinsic parameters are calibrated first; then, with the end effector at a fixed position and orientation, a calibration board is placed to calibrate the extrinsic parameters. After calibration, a transformation from 3D space to the 2D camera image space is obtained. Through this coordinate transformation, the object information computed by the kinematics can be mapped into image space in real time during image recognition, and a dynamic region of interest can be established, which reduces image processing complexity and mitigates color interference. To validate the system, this study chooses a challenging spaghetti stir-frying robot and applies the system to recognize image features of the noodles during stir-frying; the results can serve as a basis for future parameter tuning of a visual-servoed automatic stir-frying robot. In the experiments, the accuracy of the predicted pan position is verified; then, while the robot performs the stir-frying motion, the image area and centroid trajectory of the spaghetti are evaluated; finally, the analysis shows that the method greatly improves image processing efficiency.
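The forward-kinematics step described above (computing the held object's 3D position from encoder feedback) can be sketched as a chain of homogeneous transforms. This is a minimal illustration only: the Denavit-Hartenberg parameters, link lengths, and joint angles below are hypothetical, as the abstract does not list the actual robot's parameters.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain per-joint transforms: base frame -> end-effector (held object) pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 2-link planar arm: link lengths 0.3 m and 0.25 m, (d, a, alpha) per joint.
dh_params = [(0.0, 0.30, 0.0), (0.0, 0.25, 0.0)]
pose = forward_kinematics([0.0, np.pi / 2], dh_params)
print(pose[:3, 3])  # held object's position in the robot base frame
```

In the thesis, `joint_angles` would come from the encoder feedback each frame, and the resulting 3D position feeds the camera model described next.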
This study proposes a real-time image processing technique that combines robot kinematics with a camera model. The proposed technique operates in an eye-to-hand image processing architecture, which can be used to analyze objects held by the robot end effector. However, the wide field of view (FOV) of this architecture slows image processing and introduces color interference into the images.
Therefore, to achieve image registration and predict the object's position in the image, this study combines robot kinematics and the camera model. Because the 3D CAD model of the object is known, direct kinematics can compute its three-dimensional position from the encoder feedback. In addition, the camera model converts the 3D workspace into 2D image coordinates; it must first be calibrated using a calibration board. This yields a transformation from robot coordinates to image coordinates. During image recognition, this transformation maps the kinematics information to the object's image position in real time and produces a dynamic region of interest (ROI). In this way, both the complexity of image processing and the color interference in the image are reduced.
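The robot-to-image transformation and the dynamic ROI described above can be sketched with a pinhole camera model. The intrinsic matrix `K`, rotation `R`, translation `t`, ROI half-size, and image dimensions below are illustrative assumptions; in the thesis they would come from the checkerboard calibration of the actual camera.

```python
import numpy as np

# Hypothetical calibration results (illustrative values only).
K = np.array([[800.0,   0.0, 320.0],    # intrinsics: focal lengths, principal point
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # extrinsics: camera aligned with base frame
t = np.array([0.0, 0.0, 2.0])           # camera 2 m from the workspace origin

def project(point_3d):
    """Project a 3D point (robot base frame) to pixel coordinates (u, v)."""
    p_cam = R @ point_3d + t            # robot frame -> camera frame
    uvw = K @ p_cam                     # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]

def dynamic_roi(point_3d, half_size, img_w=640, img_h=480):
    """Center an ROI on the predicted image position, clipped to the frame."""
    u, v = project(point_3d)
    x0 = int(max(0, u - half_size)); y0 = int(max(0, v - half_size))
    x1 = int(min(img_w, u + half_size)); y1 = int(min(img_h, v + half_size))
    return x0, y0, x1, y1

roi = dynamic_roi(np.array([0.1, -0.05, 0.0]), half_size=60)
print(roi)  # -> (300, 160, 420, 280): only this window is processed each frame
```

Restricting recognition to this window is what lowers the processing load and excludes most of the unevenly lit background.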
Moreover, a challenging stir-frying robot is chosen to verify the system. This thesis applies the system to identify image features during the stir-frying process; the results can serve as a basis for visual servoing that automatically adjusts the stir-frying robot's parameters in the future. In the experiments, the study first verifies the accuracy of the predicted position of the pan attached to the end effector, then evaluates the image area and the center-of-mass trajectory of the spaghetti. Finally, the method is shown to raise image processing speed dramatically.
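The two per-frame measurements used in the evaluation (image area and center of mass of the segmented noodles) reduce to simple statistics over a binary mask. The toy mask below is a stand-in assumption; in the thesis the mask would come from color segmentation inside the dynamic ROI.

```python
import numpy as np

# Toy binary mask standing in for the segmented spaghetti pixels inside the ROI
# (illustrative only; the real mask comes from color segmentation).
mask = np.zeros((6, 8), dtype=bool)
mask[2:5, 3:7] = True                  # a 3x4 blob of "noodle" pixels

area = int(mask.sum())                 # pixel area of the blob
ys, xs = np.nonzero(mask)
centroid = (xs.mean(), ys.mean())      # (u, v) center of mass in image coordinates

print(area, centroid)                  # -> 12 (4.5, 3.0)
```

Logging `area` and `centroid` for every frame yields the area curve and centroid trajectory analyzed in the experiments.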