
Graduate Student: Chien-yu Yeh (葉倩妤)
Thesis Title: Enhancing Driver's Visual Angle Through V2R Video Streaming (透過V2R串流影像網路增強駕駛人行車視角)
Advisor: Mon-Chau Shie (許孟超)
Committee Members: Wei-Mei Chen (陳維美), Chang Hong Lin (林昌鴻), Shanq-Jang Ruan (阮聖彰), Yuan-Hsiang Lin (林淵翔)
Degree: Master
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science (電資學院 - 電子工程系)
Publication Year: 2013
Graduation Academic Year: 101
Language: Chinese
Number of Pages: 73
Chinese Keywords: V2R, Codebook, 車輛偵測, 輔助駕駛
English Keywords: V2R, Codebook, Vehicle Detection, Driver Assistance

Because of Taiwan's natural terrain, dangerous roads lie hidden in many places. The Taipei-Yilan Highway (北宜公路), for instance, is nicknamed the "Nine Turns and Eighteen Bends" (九彎十八拐): built along the slopes of Xueshan (雪山), its winding alignment offers poor visibility and is the scene of frequent serious traffic accidents. In addition, the lack of sound urban planning in Taiwan's early development left many areas with tangled road networks and narrow, complicated lanes and alleys; where these connect to main roads, or at forks, drivers easily face blind spots, which in turn lead to traffic accidents.
In today's society, people pursue a safer driving environment and pay increasing attention to their rights as road users, routinely retrieving surveillance footage after an accident to protect their interests. Ordinary roadside surveillance systems, however, merely record continuously; when an accident occurs, the recording serves only as evidence for settling the dispute and cannot provide any reminder or warning before the accident happens. Applying computer vision techniques to surveillance systems has therefore become a popular field. The advanced vehicle control and safety systems of Intelligent Transportation Systems (ITS), for example, can actively provide real-time information for drivers before an accident occurs, achieving the goal of alerting and warning them and thereby reducing traffic accidents.
The proposed system is assumed to be mounted above a traffic signal and takes the video from two cameras as its input, mimicking human binocular vision to produce three-dimensional coordinates. The architecture is built on a Codebook background model, which extracts all vehicles on the road as detection targets; contour tracking then segments each vehicle and records the center coordinate of its front. The vehicle coordinates recorded by the two cameras are matched point by point and substituted into the three-dimensional projection matrices to compute the three-dimensional coordinates. Finally, these coordinates are transmitted over the V2R network to connected receiving vehicles for display, achieving the goal of providing information about surrounding vehicles.
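To make the two-camera reconstruction step concrete, here is a minimal Python/NumPy sketch of linear (DLT) triangulation, not code from the thesis: given the calibrated 3x4 projection matrices of the two roadside cameras and one matched pair of vehicle-front pixel coordinates, it recovers the corresponding three-dimensional world coordinate. The function name, projection matrices, and pixel values below are illustrative assumptions.

import numpy as np

def triangulate_point(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one matched vehicle-front point pair.

    P1, P2 : 3x4 projection matrices of the two roadside cameras (from calibration)
    pt1, pt2 : (u, v) pixel coordinates of the same vehicle front in each view
    Returns the estimated 3D point in world coordinates.
    """
    u1, v1 = pt1
    u2, v2 = pt2
    # Each view contributes two linear constraints on the homogeneous 3D point X.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

if __name__ == "__main__":
    # Hypothetical calibrated cameras: the second one is shifted 0.5 m along X.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
    # Made-up matched pixel coordinates of one vehicle front in the two views.
    print(triangulate_point(P1, P2, (0.25, 0.1), (0.125, 0.1)))  # ~[1.0, 0.4, 4.0]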
Simulation results show that the implemented system can detect vehicles traveling on the road, reconstruct their three-dimensional world coordinates from the two cameras, and transmit them over the V2R network architecture for display at the vehicle end. The average processing time is 35 ms per frame, with a vehicle detection success rate of 94.8% and a vehicle tracking success rate of 81%. The simulation results indicate that the system can broaden the driver's visual angle; integrating it with the surveillance systems already deployed on the road could effectively improve driving safety and reduce the probability of collisions.


Taiwan's territory is full of naturally rugged terrain, which results in many dangerous roads. Highway No. 9 (the Taipei-Yilan Highway), for example, runs along Xueshan (雪山); on such winding roads the driver's visual angle is severely limited by the cars ahead, so traffic accidents occur frequently. Taiwan also has many complicated road layouts because of the lack of planning when its early city roads were designed. When vehicles reach an intersection or a fork, these layouts easily create blind spots in the driver's view, again leading to frequent traffic accidents.
In modern society, people demand safer driving conditions and pay more attention to their rights on the road. Traditional passive video surveillance systems, however, can only record events and are typically used as evidence after an accident has happened. Advanced vehicle control and safety systems are therefore a popular field of study: such systems provide drivers with real-time information, such as the relative positions and speeds of nearby cars, which can help prevent accidents.
We implemented a vehicle information system based on the V2R network architecture; the proposed system enhances the driver's visual angle through V2R video streaming. The system takes traffic images from two cameras as input and uses a Codebook background model to detect the two-dimensional position of each vehicle. The two-dimensional coordinates from the two cameras are then combined to recover the three-dimensional coordinates of each vehicle, and relaying this information to nearby drivers could help prevent accidents. Our simulation experiments show a vehicle detection rate of 94.8% and a vehicle tracking rate of 81%, and confirm that the system can enhance the driver's visual angle. If this system were integrated with existing road surveillance systems, it could improve traffic safety and reduce the probability of accidents.
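As a rough illustration of the detection stage, the following Python sketch performs per-frame foreground extraction and per-vehicle centroid recording for a single roadside camera. It uses OpenCV's MOG2 background subtractor as a simple stand-in for the Codebook model described in the thesis; the video file name, area threshold, and OpenCV 4 API are assumptions rather than details from the thesis.

import cv2

# Stand-in for the thesis's Codebook background model: OpenCV's MOG2
# subtractor likewise produces a per-frame foreground mask.
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)

cap = cv2.VideoCapture("roadside_camera.avi")  # hypothetical recording from one roadside camera
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Drop shadow pixels (marked 127 by MOG2) and clean up small noise.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Contour step: each sufficiently large blob is treated as one vehicle and
    # the center of its bounding box stands in for the recorded "vehicle front".
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) < 500:  # assumed minimum vehicle size in pixels
            continue
        x, y, w, h = cv2.boundingRect(c)
        centers.append((x + w // 2, y + h // 2))

    # The per-camera center lists would then be matched across the two views,
    # triangulated to 3D, and streamed to nearby vehicles over the V2R link.
    print(centers)

cap.release()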

Chinese Abstract I
Abstract III
Acknowledgements V
Table of Contents VI
List of Figures VIII
List of Tables X
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Background and Literature Review 1
1.3 Research Methods 3
1.4 Thesis Organization 4
Chapter 2 Related Knowledge 5
2.1 Image Processing 5
2.2 Foreground Object Detection 12
2.3 Template Matching 21
2.4 Camera Models 23
2.5 Traffic Collision Problems 28
Chapter 3 System Architecture 31
3.1 Moving Vehicle Detection Module 32
3.2 Vehicle Path Tracking Module 35
3.3 Vehicle Information Update Module 37
3.4 3D Coordinate Reconstruction Module 39
3.5 V2R Network 45
Chapter 4 Experimental Results and Analysis 49
4.1 Experimental Platform and Environment 49
4.2 Vehicle Detection and Tracking 50
4.3 3D Vehicle Localization Results 52
4.4 System Tests for Enhancing the Driver's Visual Angle 55
4.5 System Detection Rate and Execution Speed 56
Chapter 5 Conclusions and Future Work 58
5.1 Conclusions 58
5.2 Future Work 59
References 60

[1] K. Kim, T. H. Chalidabhongse, D. Harwood, and L. S. Davis, “Background modeling and subtraction by codebook construction,” International Conference on Image Processing, vol. 5, pp. 3061-3064, 2004.
[2] S. Pumrin, “A framework for dynamically measuring mean vehicle speed using un-calibrated cameras,” Technical report UWEETR-2002-0005, Univ. Washington Dept. of Elect. Engr., 2002.
[3] Z. Yi and F. Liangzhong, “Moving object detection based on running average background and temporal difference,” Intelligent Systems and Knowledge Engineering (ISKE), pp. 270-272, 2010.
[4] B. Shoushtarian and H. E. Bez, “A practical adaptive approach for dynamic background subtraction using an invariant color model and object tracking,” Pattern Recognition Letters, vol. 26, no. 1, pp. 5-26, 2005.
[5] T. Kohonen, “Improved versions of learning vector quantization,” International Joint Conference on Neural Networks (IJCNN '90), vol. 1, pp. 545-550, 1990.
[6] Y. Deng, B. S. Manjunath, and H. Shin, “Color image segmentation,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '99), vol. 2, pp. 446-451, 1999.
[7] F. Meyer, “Color image segmentation,” International Conference on Image Processing and its Applications, pp. 303-306, 1992.
[8] 鐘國亮, 影像處理與電腦視覺 (Image Processing and Computer Vision), 東華書局 (Tung Hua Book Co.), Taipei, 2006.
[9] J. R. Parker, “Algorithms for Image Processing and Computer Vision,” 2nd Edition, Wiley Publishing, Inc., New York, 2010.
[10] E. R. Davies, “Machine Vision: Theory, Algorithms, Practicalities,” 3rd Edition, Morgan Kaufmann, San Francisco, 2005.
[11] L. Susman, “Calibration of a six-port reflectometer using projective geometry concepts,” Electronics Letters, vol. 20, p. 9, 1984.
[12] 呂傑棋, “3D視覺校正軟體之研製 (Development of 3D Vision Calibration Software),” Master's thesis, Chung Hua University (中華大學), Hsinchu, 1997.
[13] 龔雨軒, “互動式投影遊戲之視覺平台發展 (Development of a Vision Platform for Interactive Projection Games),” Master's thesis, National Taiwan University of Science and Technology (國立台灣科技大學), Taipei, 2008.
[14] 林奇叡, “先進安全車輛的前方與盲點視覺偵測 (Forward and Blind-Spot Vision Detection for Advanced Safety Vehicles),” Master's thesis, National Central University (國立中央大學), Taoyuan, 2008.
[15] R. D. Sampson, A. E. Peterson, and E. P. Lozowski, “Photogrammetric calibration of a consumer grade flat-bed scanner,” IEEE Canadian Conference on Electrical and Computer Engineering, vol. 2, pp. 622-626, 1999.
[16] R. I. Hartley, “An algorithm for self calibration from several views,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 908-912, 1994.
[17] Q. T. Luong and O. Faugeras, “Self-calibration of a moving camera from point correspondences and fundamental matrices,” Journal of Computer Vision, vol. 4, pp. 880-883, 1997.
[18] W. Guanghui, Q. M. Wu, and W. Zhang, “Camera Self-Calibration and Three Dimensional Reconstruction under Quasi-Perspective Projection,” IEEE Canadian Conference on Computer and Robot Vision, pp. 129-136, 2008.
[19] P. Shrestha, M. Barbieri, H. Weda, and D. Sekulovski, “Synchronization of Multiple Camera Videos Using Audio-Visual Features,” IEEE Transactions on Multimedia, vol. 12, pp. 79-92, 2010.
[20] D. N. Brito, “Synchronizing Video Cameras with Non-Overlapping Fields of View,” Journal of Computer Graphics and Image Processing, pp. 37-44, 2008.
[21] “A1類道路交通事故概況 (Overview of Class A1 Road Traffic Accidents),” Department of Statistics, Ministry of Transportation and Communications (交通部統計處), Taipei, 2009.
[22] 劉家賢, “以立體視覺為基礎的前車輛偵測與防撞警示系統 (A Stereo-Vision-Based Front Vehicle Detection and Collision Warning System),” Master's thesis, National Dong Hwa University (國立東華大學), Hualien, 2009.
[23] B. Leibe, N. Cornelis, K. Cornelis, and L. Van Gool, “Dynamic 3D Scene Analysis from a Moving Vehicle,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR '07), 2007.
[24] A. Levin, P. Viola, and Y. Freund, “Unsupervised Improvement of Visual Detectors using Co-Training,” Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV '03), 2003.
[25] Y.-L. Hou and G. K. H. Pang, “Human Detection in Crowded Scenes,” IEEE 17th International Conference on Image Processing, 2010.
[26] P. Gomes, C. Olaverri-Monreal, and M. Ferreira, “Making Vehicles Transparent Through V2V Video Streaming,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, pp. 930-938, 2012.
[27] C. Campolo and A. Molinaro, “Improving V2R connectivity to provide ITS applications in IEEE 802.11p/WAVE VANETs,” International Conference on Telecommunications, 2011.
[28] Wikipedia, “RGB Colorcube,” http://upload.wikimedia.org/wikipedia/commons/0/03/RGB_Colorcube_Corner_White.png.
[29] Q. Chen, L. Zhao, J. Lu, G. Kuang, N. Wang, and Y. Jiang, “Modified two-dimensional Otsu image segmentation,” IET Image Processing, vol. 6, 2012.
[30] J. Wu, H. J. Yue, Y. Y. Cao, and Z. M. Cui, “Video Object Tracking Method Based on Normalized Cross-correlation Matching,” Ninth International Symposium on Distributed Computing and Applications to Business Engineering and Science, pp. 523-527, 2010.
[31] “台北市列管交通事故肇事原因 (Causes of Registered Traffic Accidents in Taipei City),” Traffic Division, Taipei City Police Department (台北市政府警察局交通警察大隊), Taipei, 2012.
[32] Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 1330-1334, 2000.
[33] E. Hamilton, “JPEG File Interchange Format,” C-Cube Microsystems, Version 1.02, 1992.
[34] M. F. Jhang and W. Liao, “On Cooperative and Opportunistic Channel Access for Vehicle to Roadside (V2R) Communications,” IEEE Global Telecommunications Conference, 2008.
[35] V. Jacobson, R. Braden, and D. Borman, “TCP Extensions for High Performance,” Network Working Group, Request for Comments 1323, 1992.
[36] W. B. Pennebaker and J. L. Mitchell, “JPEG Still Image Data Compression Standard,” Van Nostrand Reinhold, New York, 1992.
