
Author: Wei-Han Lin (林威漢)
Thesis Title: Around View for Driver Assistance System (環繞影像駕駛輔助系統)
Advisor: Nai-Jian Wang (王乃堅)
Committee Members: Jing-Ming Guo (郭景明), Shun-Ping Chung (鍾順平), 呂學坤, Shao-Yun Fang (方劭云)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2016
Graduation Academic Year: 104
Language: Chinese
Pages: 65
Keywords: Around View for Driver Assistance System, Image Stitching, Bird's Eye Transformation, Image Matching
The main objective of this thesis is to build an around-view driver assistance system. Images captured around the vehicle are converted into bird's-eye views and stitched together, producing a composite bird's-eye image of the vehicle's surroundings. From this image the driver can tell whether obstacles or pedestrians are present around the vehicle, which improves driving safety.
This thesis consists of three parts. The first part is the bird's-eye image transformation: because a view from directly above gives the widest and clearest view of obstacles around the vehicle, each camera image is transformed into a bird's-eye view. The second part is image matching. We first extract feature points from the images; these may be corners or other salient points. In this thesis the Harris corner detector is used to find corners as feature points. The gradient magnitudes and orientations around each feature point are collected into a gradient vector that describes it, these gradient vectors are used to match the images, and the RANSAC algorithm removes the remaining mismatches. The third part is bird's-eye image stitching: the matching results from the second part are used to compute the stitching transform, and the front, rear, left, and right bird's-eye views are combined into a single surround bird's-eye image.
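To make the first part concrete, the following is a minimal sketch of a bird's-eye (top-view) transformation using OpenCV in Python; the ground reference points, file name, and output size are illustrative placeholders, not the calibration values used in the thesis.

```python
import cv2
import numpy as np

def birds_eye_transform(frame, src_pts, dst_pts, out_size=(400, 400)):
    """Warp one camera frame to a top-down (bird's-eye) view.

    src_pts: four image points of a known rectangle on the ground.
    dst_pts: where those four points should land in the top-view image.
    Both are placeholders here; real values come from offline calibration.
    """
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(frame, H, out_size)

# Made-up calibration points for a 640x480 front camera (illustrative only).
frame = cv2.imread("front.png")                         # hypothetical input image
src = [(180, 300), (460, 300), (620, 470), (20, 470)]   # ground rectangle in the image
dst = [(100, 0), (300, 0), (300, 400), (100, 400)]      # the same rectangle seen from above
top_view = birds_eye_transform(frame, src, dst)
```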
In our experiments, four cameras with 640×480 resolution are used to realize the around-view driver assistance system. The surround image is produced in real time, with an average output rate of 28 FPS.
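The real-time figure above can be read as the throughput of a capture, warp, and stitch loop. Below is a rough sketch of such a loop with an FPS measurement, assuming OpenCV; the camera indices and the process_to_surround function are hypothetical placeholders for the thesis's actual processing pipeline.

```python
import time
import cv2

# Four cameras at 640x480; device indices 0-3 are an assumption.
caps = [cv2.VideoCapture(i) for i in range(4)]
for cap in caps:
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

frames_done, start = 0, time.time()
while True:
    frames = [cap.read()[1] for cap in caps]     # grab one frame from each camera
    surround = process_to_surround(frames)       # hypothetical: bird's-eye warp + stitching
    cv2.imshow("around view", surround)
    frames_done += 1
    if cv2.waitKey(1) & 0xFF == ord('q'):        # press 'q' to stop
        break

print("average FPS:", frames_done / (time.time() - start))
for cap in caps:
    cap.release()
cv2.destroyAllWindows()
```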


The main objective of this thesis is to provide an around view for drivers in a driver assistance system. We transform the images from four cameras around the vehicle into bird's-eye views and stitch them together, obtaining a stitched bird's-eye image that surrounds the vehicle. This image informs the driver where obstacles or pedestrians are, and thus improves driving safety.
This thesis is divided into three parts. The first part is the bird's-eye image transformation: we transform the camera images into bird's-eye views because the top view is the widest and clearest view for observing obstacles around the vehicle. The second part is image matching. In this part we find feature points, which may be corners or other salient points in the images, using the Harris corner detector. Each feature point is then described by a gradient vector built from the gradients around it; these vectors are used to match the images, and the RANSAC algorithm filters out the mismatches. The third part is bird's-eye image stitching: the matching results from the second part are used to compute the stitching transform, and the front, rear, left, and right bird's-eye views are stitched into the complete surround bird's-eye image.
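As an illustration of the second part, the sketch below detects Harris corners, describes them, matches two overlapping views, and filters mismatches with RANSAC using OpenCV. The thesis builds its own gradient-vector descriptor; here SIFT descriptors computed at the Harris corners are used as a stand-in, so this is an approximation of the described pipeline rather than the author's exact method.

```python
import cv2
import numpy as np

def harris_keypoints(gray, block=2, ksize=3, k=0.04, rel_thresh=0.01):
    """Detect Harris corners and wrap them as cv2.KeyPoint objects."""
    resp = cv2.cornerHarris(np.float32(gray), block, ksize, k)
    ys, xs = np.where(resp > rel_thresh * resp.max())
    return [cv2.KeyPoint(float(x), float(y), 7) for x, y in zip(xs, ys)]

def match_with_ransac(img_a, img_b):
    """Match two overlapping bird's-eye views and reject mismatches with RANSAC."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()                     # gradient-based descriptor stand-in
    kp_a, des_a = sift.compute(gray_a, harris_keypoints(gray_a))
    kp_b, des_b = sift.compute(gray_b, harris_keypoints(gray_b))
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # drop outlier matches
    return H, inlier_mask
```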
To realize the around view for the driver assistance system, we use four cameras with 640 × 480 resolution and display the stitched bird's-eye image on the screen. Experimental results show that our system reaches an average of 28 FPS.
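For the stitching step that produces the on-screen surround image, a minimal sketch of compositing four warped bird's-eye views onto a single canvas is shown below; the canvas size and the simple overwrite of overlapping pixels are assumptions, and each view's homography is taken as already estimated from the matching step.

```python
import cv2
import numpy as np

def compose_surround(views, homographies, canvas_size=(800, 800)):
    """Paste four bird's-eye views (front, rear, left, right) onto one canvas.

    views: the four bird's-eye images; homographies: 3x3 matrices mapping each
    view into canvas coordinates (obtained from the feature-matching results).
    Overlaps are resolved by simple overwrite, a placeholder for real blending.
    """
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), np.uint8)
    for view, H in zip(views, homographies):
        warped = cv2.warpPerspective(view, H, canvas_size)
        covered = warped.any(axis=2)             # pixels this view actually fills
        canvas[covered] = warped[covered]
    return canvas
```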

Abstract (Chinese)
Abstract
Acknowledgements
Table of Contents
List of Figures
Chapter 1  Introduction
  1.1  Research Background and Motivation
  1.2  Literature Review
  1.3  Thesis Objectives
  1.4  Thesis Organization
Chapter 2  System Architecture and Development Environment
  2.1  System Architecture
  2.2  Development Environment
Chapter 3  Offline Derivation of Preliminary Parameters
  3.1  Bird's Eye Transformation
    3.1.1  Locating Bird's Eye Reference Points
    3.1.2  Deriving Bird's Eye Parameters
  3.2  Image Matching
    3.2.1  Feature Point Extraction
    3.2.2  Feature Point Principal Orientation
    3.2.3  Feature Descriptor Vectors
    3.2.4  Feature Point Matching
    3.2.5  RANSAC
  3.3  Image Geometric Transformation and Composition
    3.3.1  Image Geometric Transformation
    3.3.2  Image Composition
Chapter 4  Real-Time Around View Synthesis
  4.1  Around View Synthesis
Chapter 5  Experimental Results and Analysis
Chapter 6  Conclusion and Future Work
  6.1  Conclusion
  6.2  Future Work
References

