
Graduate Student: Jia-Ying Xu
Thesis Title: A Real-Time Forward Vehicle Detection and Signal Recognition System in All-Weather Situations
Advisor: Nai-Jian Wang
Committee Members: Shun-Feng Su, Shun-Ping Chung, Jing-Ming Guo, Shao-Yun Fang, Nai-Jian Wang
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2019
Academic Year of Graduation: 107 (ROC calendar, 2018–2019)
Language: Chinese
Number of Pages: 101
Keywords: Real-time, Forward vehicle detection, Vehicle tracking, Vehicle taillight detection, Vehicle signal recognition
Views: 367; Downloads: 0

The purpose of advanced driver assistance systems is to provide the driver with information about the vehicle's surroundings and the driving situation in order to reduce traffic accidents. Common hazards include the vehicle ahead braking suddenly or changing lanes, which closes the following distance and can cause a rear-end collision. To prevent such collisions, this thesis proposes an all-weather real-time forward vehicle detection and signal recognition system. It processes dashboard-camera video to extract the vehicle ahead, recognizes its signals, such as braking and turning events, and displays them on the dashboard-camera screen to alert the driver and improve driving safety.

The proposed system consists of six main parts: (1) lighting condition recognition, (2) vehicle candidate extraction, (3) vehicle verification, (4) vehicle tracking, (5) taillight detection, and (6) signal recognition. Vehicle candidates are detected using vehicle features appropriate to each lighting condition. For vehicle verification, we use local binary patterns as features and train a classifier with the AdaBoost algorithm to verify whether each candidate region is a vehicle. An object-tracking algorithm then follows each verified vehicle, making detection more stable and signal determination more reliable. For signal recognition, the history of each vehicle's lamps is converted into frequency features to train a classifier that detects whether a lamp is flickering, identifying turning events; the vehicle's third brake light is detected to identify braking events.
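The candidate-extraction stage depends on binarization; the thesis applies Otsu's method when extracting nighttime taillight candidates. A minimal pure-Python sketch of Otsu thresholding on a grayscale histogram (the function name and background/foreground bookkeeping are illustrative, not the thesis's implementation):

```python
def otsu_threshold(hist):
    """Return the gray level that maximizes between-class variance.

    hist: list of 256 pixel counts (a grayscale histogram).
    Pixels <= threshold form one class, pixels > threshold the other.
    """
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = 0          # cumulative pixel count of the "background" class
    sum_bg = 0.0      # cumulative intensity sum of the "background" class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a bimodal histogram (dark road on one side, bright taillights on the other), the returned threshold separates the two modes without any hand-tuned constant.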

The experimental results show that the system runs in real time and adapts to various weather conditions, achieving a vehicle detection rate of 93.1% and a signal recognition rate of 89.4%, demonstrating its robustness.


An advanced driver assistance system is a technology that provides drivers with essential information while driving and helps them avoid traffic accidents. A forward vehicle braking or changing lanes suddenly is among the most common dangers that lead to collisions. To prevent collision accidents, we propose a real-time forward vehicle detection and signal recognition system for all-weather situations. We extract the forward vehicle from dashboard-camera images, identify its signals, such as braking and turning, and display them on the dashboard-camera screen to notify the driver for safety.

In this thesis, our system consists of six main parts: (1) lighting condition recognition, (2) vehicle candidate extraction, (3) vehicle verification, (4) vehicle tracking, (5) taillight detection, and (6) signal recognition. Vehicle candidates are detected using vehicle characteristics suited to each illumination environment. In vehicle verification, we use local binary patterns as features and train a classifier with the AdaBoost algorithm to verify each candidate. To make signal recognition stable, we use an object-tracking algorithm to support vehicle detection. In signal recognition, we record the history of each vehicle's lights and use their flicker frequency as a feature to train a classifier; we identify turning by checking whether a light is flickering, and detect the vehicle's third brake light to confirm braking.
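The verification stage builds texture features from local binary patterns before AdaBoost training. A minimal sketch of the basic 8-neighbour LBP descriptor in pure Python (the function names and the raw 256-bin histogram layout are assumptions for illustration; the thesis may use a different LBP variant):

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern code for pixel (y, x).

    img is a 2-D list of gray values; each neighbour contributes one bit,
    set when the neighbour is >= the centre pixel.
    """
    c = img[y][x]
    # neighbours sampled clockwise starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over interior pixels.

    The histogram serves as a texture feature vector that a boosted
    classifier (e.g. AdaBoost) can consume for vehicle verification.
    """
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

Because each bit is a simple comparison against the centre pixel, the descriptor is cheap to compute per candidate window, which matters for the real-time constraint.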

The experimental results show that our system runs in real time and works in various weather conditions. The vehicle detection and signal recognition rates reach 93.1% and 89.4%, respectively, demonstrating the robustness of our system.
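The turn-signal check described in the abstract reduces to detecting a dominant blink frequency in a lamp's brightness history. A small pure-Python sketch under assumed parameters (the 1.0–2.5 Hz blink band, the function names, and the DFT scan are illustrative, not taken from the thesis):

```python
import math

def dominant_frequency(samples, fps):
    """Frequency (Hz) of the strongest non-DC DFT component; 0.0 if flat.

    samples: per-frame brightness of a taillight region.
    fps: frame rate of the dashboard camera.
    """
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]   # remove DC so a steady lamp scores 0
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2 + 1):    # scan DFT bins up to Nyquist
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_mag, best_k = mag, k
    return best_k * fps / n

def is_turn_signal(brightness, fps, lo=1.0, hi=2.5):
    """Hypothetical blink band: turn indicators typically flash at ~1-2 Hz."""
    return lo <= dominant_frequency(brightness, fps) <= hi
```

A 1.5 Hz on/off pattern sampled at 30 fps lands squarely in the band, while a constantly lit brake lamp has no dominant non-DC component and is rejected.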

Chinese Abstract I
Abstract II
Acknowledgements III
Table of Contents IV
List of Figures VII
List of Tables XI
Chapter 1  Introduction 1
  1.1  Research Background and Motivation 1
  1.2  Literature Review 2
  1.3  Objectives 3
  1.4  Contributions 4
  1.5  Organization of the Thesis 4
Chapter 2  System Architecture and Development Environment 6
  2.1  System Architecture 6
  2.2  Development Environment 7
Chapter 3  Vehicle Candidate Extraction 9
  3.1  Detection Region Definition 9
  3.2  Lighting Condition Recognition 12
  3.3  Daytime Vehicle Candidate Extraction 15
    3.3.1  Underneath-Shadow Threshold Calculation 15
    3.3.2  Morphology 16
    3.3.3  Fast Connected-Component Labeling 18
    3.3.4  Underneath-Shadow Filtering 22
    3.3.5  Vertical Edge Detection 23
  3.4  Nighttime Vehicle Candidate Extraction 25
    3.4.1  Otsu's Method 25
    3.4.2  Nighttime Taillight Candidate Extraction 26
    3.4.3  Taillight Symmetry 29
Chapter 4  Vehicle Verification and Tracking 31
  4.1  Vehicle Verification 31
    4.1.1  Local Binary Pattern (LBP) 31
    4.1.2  Adaptive Boosting (AdaBoost) 33
    4.1.3  Cascade Classifier 38
    4.1.4  Temporal and Spatial Continuity of Nighttime Vehicles 41
  4.2  Vehicle Tracking 42
    4.2.1  Minimum Output Sum of Squared Error (MOSSE) 42
    4.2.2  Discriminative Scale Space Tracker (DSST) 45
    4.2.3  Conditions for Terminating Tracking 47
Chapter 5  Taillight Detection and Signal Recognition 49
  5.1  Taillight Detection 49
    5.1.1  Daytime Taillight Candidate Extraction 49
    5.1.2  Taillight Verification 51
  5.2  Signal Recognition 53
    5.2.1  Brake Event Detection 53
    5.2.2  Turn Event Detection 54
Chapter 6  Experimental Results and Analysis 58
  6.1  Forward Vehicle Detection and Signal Recognition on Image Sequences 60
    6.1.1  Sample-1 Sequence Analysis 61
    6.1.2  Sample-2 Sequence Analysis 63
    6.1.3  Sample-3 Sequence Analysis 65
    6.1.4  Sample-4 Sequence Analysis 67
    6.1.5  Sample-5 Sequence Analysis 69
    6.1.6  Sample-6 Sequence Analysis 71
    6.1.7  Sample-7 Sequence Analysis 73
    6.1.8  Sample-8 Sequence Analysis 75
    6.1.9  Sample-9 Sequence Analysis 77
  6.2  Overall Results and Performance Analysis 79
  6.3  Comparison with Other Methods 80
Chapter 7  Conclusions and Future Research Directions 81
  7.1  Conclusions 81
  7.2  Future Research Directions 82
References 83

[1] A. Kanitkar, B. Bharti, and U. N. Hivarkar, “Vision based preceding vehicle detection using self shadows and structural edge features,” in 2011 International Conference on Image Information Processing, 2011, pp. 1–6.
[2] S. Han, Y. Han, and H. Hahn, “Vehicle detection method using Haar-like feature on real time system,” World Academy of Science, Engineering and Technology, vol. 59, pp. 455–459, 2009.
[3] S. A. Nur, M. Ibrahim, N. Ali, and F. I. Y. Nur, “Vehicle detection based on underneath vehicle shadow using edge features,” in 2016 6th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), 2016, pp. 407–412.
[4] J. Cui, F. Liu, Z. Li, and Z. Jia, “Vehicle localisation using a single camera,” in 2010 IEEE Intelligent Vehicles Symposium, 2010, pp. 871–876.
[5] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 2001, pp. 511–518.
[6] N. Boonsim and S. Prakoonwit, “An algorithm for accurate taillight detection at night,” International Journal of Computer Applications, vol. 12, pp. 31–35, 2014.
[7] J.-W. Park and B. C. Song, “Night-time vehicle detection using low exposure video enhancement and lamp detection,” in 2016 International Conference on Electronics, Information, and Communications (ICEIC), 2016, pp. 1–2.
[8] K. M. Jeong and B. C. Song, “Night time vehicle detection using rear-lamp intensity,” in 2016 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), 2016, pp. 1–3.
[9] C. S. Pradeep and R. Ramanathan, “An improved technique for nighttime vehicle detection,” in 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2018, pp. 508–513.
[10] Z. Di and D. He, “Forward collision warning system based on vehicle detection and tracking,” in 2016 International Conference on Optoelectronics and Image Processing (ICOIP), 2016, pp. 10–14.
[11] A. Soetedjo and I. K. Somawirata, “Improving on-road vehicle detection performance by combining detection and tracking techniques,” in 2018 3rd Asia-Pacific Conference on Intelligent Robot Systems (ACIRS), 2018, pp. 23–27.
[12] C.-C. Wu and K.-W. Weng, “The detecting and tracking system for vehicles,” in 2017 10th International Conference on Ubi-media Computing and Workshops (Ubi-Media), 2017, pp. 1–5.
[13] X. Li and X. Guo, “A HOG feature and SVM based method for forward vehicle detection with single camera,” in 2013 5th International Conference on Intelligent Human-Machine Systems and Cybernetics, vol. 1, 2013, pp. 263–266.
[14] D. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui, “Visual object tracking using adaptive correlation filters,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010, pp. 2544–2550.
[15] M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Discriminative scale space tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 8, pp. 1561–1575, 2017.
[16] Z. Cui, S.-W. Yang, and H.-M. Tsai, “A vision-based hierarchical framework for autonomous front-vehicle taillights detection and signal recognition,” in 2015 IEEE 18th International Conference on Intelligent Transportation Systems, 2015, pp. 931–937.
[17] C.-L. Jen, Y.-L. Chen, and H.-Y. Hsiao, “Robust detection and tracking of vehicle taillight signals using frequency domain feature based adaboost learning,” in 2017 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), 2017, pp. 423–424.
[18] H.-T. Chen, Y.-C. Wu, and C.-C. Hsu, “Daytime preceding vehicle brake light detection using monocular vision,” IEEE Sensors Journal, vol. 16, no. 1, pp. 120–131, 2016.
[19] B. Fröhlich, M. Enzweiler, and U. Franke, “Will this car change the lane? Turn signal recognition in the frequency domain,” in 2014 IEEE Intelligent Vehicles Symposium Proceedings, 2014, pp. 37–42.
[20] W. Liu, H. Bao, J. Zhang, and C. Xu, “Vision-based method for forward vehicle brake lights recognition,” International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 8, no. 6, pp. 167–180, 2015.
[21] M. Taha, H. H. Zayed, T. Nazmy, and M. Khalifa, “Day/night detector for vehicle tracking in traffic monitoring systems,” International Journal of Computer, Electrical, Automation, Control and Information Engineering, vol. 10, no. 1, p. 101, 2016.
[22] L. He, Y. Chao, K. Suzuki, and K. Wu, “Fast connected-component labeling,” Pattern Recognition, vol. 42, no. 9, pp. 1977–1987, 2009.
[23] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
[24] T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recognition, vol. 29, no. 1, pp. 51–59, 1996.
[25] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The kitti dataset,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
[26] J. Arróspide, L. Salgado, and M. Nieto, “Video analysis-based vehicle detection and tracking using an MCMC sampling framework,” EURASIP Journal on Advances in Signal Processing, vol. 2012, no. 1, p. 2, 2012.
[27] H. Kuang, “Hong kong nighttime vehicle dataset,” Unpublished, 2016 [Online]. Available: http://rgdoi.net/10.13140/RG.2.2.20798.69447
[28] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
[29] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in European Conference on Computer Vision, 2014, pp. 740–755.

Full text release date: 2024/07/25 (campus network)
Full text release date: 2024/07/25 (off-campus network)
Full text release date: 2024/07/25 (National Central Library: Taiwan NDLTD system)