
Author: 謝承哲 (Cheng-Che Hsieh)
Title: 移動機器人立體視覺測距及追蹤移動目標 / Stereo Vision Range Estimation and Tracking a Moving Target for a Mobile Robot
Advisor: 施慶隆 (Ching-Long Shih)
Committee: 劉昌煥 (Chang-Huan Liu), 陳志明 (Chih-Ming Chen), 黃志良 (Chih-Lyang Hwang)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2010
Academic year: 98 (ROC calendar; 2009-2010)
Language: Chinese
Pages: 84
Keywords: FPGA, image tracking, stereo vision, mobile robot

This thesis uses an FPGA development board together with two CMOS image sensor modules to give a mobile robot both three-dimensional distance estimation and real-time image tracking. The robot tracks a specific target as follows. First, a pair of images is processed to locate a target that exhibits the same feature points in both views. The triangulation principle of stereo vision is then used to compute the distance between the target and the robot. Given the target's position and distance, the system automatically steers the robot so that it follows the target while maintaining a fixed distance from it. The operator can also guide the robot in manual control mode over a Bluetooth wireless link.
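The triangulation step described above can be sketched as follows. This is a minimal software model, not the thesis's hardware implementation; the focal length and baseline in the usage example are illustrative assumptions, not the parameters of the actual camera rig.

```python
def stereo_depth(x_left, x_right, focal_px, baseline_cm):
    """Depth from the horizontal disparity of a point matched in both views.

    x_left, x_right: horizontal pixel coordinates of the same target point
    in the left and right images (rectified, parallel cameras assumed).
    focal_px: focal length in pixels; baseline_cm: camera separation in cm.
    """
    disparity = x_left - x_right  # in pixels; shrinks as the target recedes
    if disparity <= 0:
        raise ValueError("non-positive disparity: target not in front of both cameras")
    # Similar triangles give Z = f * B / d.
    return focal_px * baseline_cm / disparity

# Hypothetical parameters: 400 px focal length, 10 cm baseline.
print(stereo_depth(340, 320, 400, 10))  # 20 px disparity -> 200.0 cm
```

Because depth varies inversely with disparity, range resolution degrades quadratically with distance, which is why stereo rigs like this one work best at close range.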

Table of Contents

Abstract (Chinese)
Abstract
Acknowledgments
Contents
List of Figures and Tables
Chapter 1  Introduction
  1.1 Preface
  1.2 Literature Review
  1.3 Motivation and Objectives
  1.4 Thesis Organization
Chapter 2  Fundamentals of Image Processing
  2.1 Overview of Color Spaces
    2.1.1 RGB Color Space
    2.1.2 YUV Color Space
  2.2 Linear Smoothing Filter
  2.3 Color Filtering and Binarization
  2.4 Center Determination
  2.5 Stereo Vision Distance Measurement
Chapter 3  System Hardware Architecture
  3.1 Mobile Robot Chassis
  3.2 The Altera DE2 Board
  3.3 Motor Driver Module
  3.4 CMOS Image Sensor
  3.5 Bluetooth Wireless Module
Chapter 4  Mobile Robot System Implementation
  4.1 FPGA System Architecture and Planning
  4.2 Basic Module Design
    4.2.1 Multiplier Module
    4.2.2 Divider Module
    4.2.3 Clock Divider
  4.3 System Control Block Design
    4.3.1 System Controller Module
    4.3.2 LCD Controller Module
    4.3.3 Motor Controller Module
  4.4 Image Tracking Block Design
    4.4.1 CMOS Sensor Controller Module
    4.4.2 Image Processing Module
    4.4.3 SDRAM Controller Module
    4.4.4 Target Center and Distance Calculation Module
  4.5 Manual Control Block Design
    4.5.1 UART Controller Module
    4.5.2 ASCII Command Decoder Module
Chapter 5  Experimental Results
  5.1 Manual Mode Operation
  5.2 Image Processing Verification
  5.3 Target Position Ordering and Distance Display
  5.4 Verification of Stereo Vision Distance Measurement
  5.5 Autonomous Image Tracking Mode Operation
    5.5.1 Selection of the Specific Target
    5.5.2 Ball Tracking
    5.5.3 Human Tracking
  5.6 FPGA Resource Utilization
Chapter 6  Conclusions and Suggestions
  6.1 Conclusions
  6.2 Suggestions
References
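The color-filter binarization and center-determination steps outlined in Sections 2.3 and 2.4 (implemented in FPGA hardware in the thesis) can be approximated in software as follows; the YUV threshold ranges in the usage example are hypothetical, not the thesis's actual values.

```python
def binarize_and_center(pixels, lo, hi):
    """Threshold a YUV image and locate the target's centroid.

    pixels: rows of (Y, U, V) tuples. lo, hi: per-channel threshold bounds.
    Returns (mask, centroid) where mask marks pixels whose channels all lie
    in [lo, hi], and centroid is their mean (column, row), or None if empty.
    """
    mask = [[1 if all(lo[i] <= p[i] <= hi[i] for i in range(3)) else 0
             for p in row] for row in pixels]
    hits = [(c, r) for r, row in enumerate(mask)
            for c, v in enumerate(row) if v]
    if not hits:
        return mask, None  # no pixel matched the target color
    cx = sum(c for c, _ in hits) / len(hits)
    cy = sum(r for _, r in hits) / len(hits)
    return mask, (cx, cy)

# Hypothetical 2x2 image and threshold window around (100, 100, 100).
img = [[(0, 0, 0), (100, 100, 100)],
       [(100, 100, 100), (100, 100, 100)]]
mask, center = binarize_and_center(img, (90, 90, 90), (110, 110, 110))
```

Running this centroid computation on each of the two binarized views yields the matched target positions whose horizontal difference is the disparity used for range estimation.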

