
Author: 葉正傑 (Cheng-chieh Yeh)
Thesis Title: 基於影像之嵌入式交通號誌動態調控系統 (Image based Dynamic Traffic Light Control on an Embedded System)
Advisor: 林昌鴻 (Chang-hong Lin)
Committee Members: 吳晉賢 (Chin-hsien Wu), 陳維美 (Wei-mei Chen), 阮聖彰 (Shanq-jang Ruan)
Degree: Master
Department: 電資學院 - 電子工程系 (College of Electrical Engineering and Computer Science, Department of Electronic and Computer Engineering)
Publication Year: 2014
Graduation Academic Year: 102 (ROC calendar)
Language: Chinese
Number of Pages: 106
Chinese Keywords: 動態調控 (dynamic control), 嵌入式系統 (embedded system), 交通號誌 (traffic light), 影像處理 (image processing)
Keywords: Traffic light, Dynamic control, Embedded system, Image processing
Chinese Abstract:
With rapid economic growth and dense urban development, the severity of traffic congestion has become a primary target for improvement, both for quality of life and for environmental concerns. Efficiently shortening commute times and raising road utilization is the stated purpose of Intelligent Transportation Systems (ITS), and the part that most directly affects road users is the cycle control of traffic signals. Current control methods rely heavily on human resources and cannot manage every intersection; even time-of-day cycle schedules are limited to major arterials and cannot adapt the cycle in real time to on-site conditions.
    The objective of this thesis is a system that, taking historical traffic-volume statistics as its basis, intelligently judges traffic density and dynamically adjusts the traffic signal cycle. Although vehicle recognition from images has matured in accuracy and reliability, applying it to real road scenes still faces difficulties with overlapping viewpoints and adaptability. Moreover, when traffic is directed manually, the criterion is not an exact vehicle count but the visually perceived degree of spatial congestion, so the representation of vehicle density in this work takes that observation as its starting point.
    In congested urban areas of Taiwan, intersections are numerous and closely spaced, so traffic flow is interrupted frequently. Under these practical constraints, this study uses image analysis to determine the proportion of road occupied in target stop zones and takes it as the criterion for adjusting a two-phase signal cycle, thereby raising road utilization and reducing unnecessary waiting time. The system is developed on an embedded platform, which leaves room for porting and upgrading and keeps construction costs low when deployed at a large number of intersections. Experimental results show that, at intersections with sharply varying traffic volumes, the system judges the degree of congestion and adjusts the two-phase signal reasonably, reducing the long waits that asymmetric traffic flows tend to cause, while right-of-way priority is adjusted dynamically to match actual conditions. This reduces long waits in off-peak hours and lowers manpower requirements in peak hours. Over the long term, improved traffic efficiency not only raises quality of life but also yields substantial benefits for economic development, environmental protection, and energy saving.


Abstract:
With rapid economic growth and high-density development in cities, the severity of traffic congestion has become an urgent problem for both quality of life and environmental concerns. One goal of Intelligent Transportation Systems (ITS) is to shorten commute times efficiently and enhance road utilization. Among the topics related to this goal, the one most directly affecting road users is the control of traffic signal cycles. The current approach to cycle control relies too heavily on human resources, and it is impossible to dispatch enough personnel to cover every intersection. Even for the main roads that have existing traffic control systems, the cycle lengths are predetermined from historical data and cannot change instantaneously with real-time conditions.
    The objective of this thesis is to evaluate traffic density intelligently and regulate traffic signal cycles dynamically, using historical traffic statistics as a foundation. Although vehicle identification by image recognition has matured in accuracy and reliability, obstacles such as overlapping viewpoints and adaptability remain when it is applied to real road scenes. Furthermore, when police direct traffic, their judgment of traffic density is based not on an exact vehicle count but on the visually perceived severity of congestion; this observation is the starting point for how traffic density is represented in this work.
    Urban areas in Taiwan contain numerous intersections spaced very close together, so vehicles stop frequently while traveling. Under these realistic conditions, this study uses image analysis to determine the proportion of road occupied inside target stop zones and takes it as the criterion for controlling a two-phase traffic signal. The proposed system thereby improves road utilization and saves unnecessary waiting time. The device is designed to run on an embedded system, which not only leaves room for upgrades and porting but also keeps costs low when deployed at a large number of intersections. The experimental results show that the proposed system reasonably regulates the signal cycle by evaluating the degree of congestion at an intersection with dramatically varying traffic, reducing the avoidable waiting that often occurs in asymmetric traffic scenarios, and the right-of-way priority can be adjusted dynamically to match actual road conditions. Moreover, the system avoids lengthy waits in off-peak hours while reducing the manpower required during rush hours. Improving traffic efficiency not only enhances quality of life but also benefits economic development and environmental protection in the long run.
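The paragraph above describes the core of the method: estimate how much of a target stop zone is occupied, then use that ratio to regulate a two-phase signal cycle. The short Python/OpenCV sketch below illustrates one way such a pipeline could be wired together. It is only a minimal illustration under stated assumptions: the video file name, the stop-zone polygon, the shadow threshold, and the green-time rule are placeholders for demonstration, not the modules or parameters actually used in the thesis.

```python
# Minimal sketch: estimate the occupancy ratio of a stop-zone ROI with OpenCV
# background subtraction and map it to a green-time suggestion.
# All concrete values (video name, ROI polygon, thresholds, timing rule) are
# illustrative assumptions, not the thesis' actual implementation.
import cv2
import numpy as np

cap = cv2.VideoCapture("intersection.mp4")  # hypothetical camera feed
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

# Hypothetical stop-zone polygon (pixel coordinates) marking the waiting area.
roi_polygon = np.array([[100, 300], [540, 300], [600, 470], [60, 470]], dtype=np.int32)

def occupancy_ratio(frame):
    """Fraction of the stop-zone ROI covered by moving foreground pixels."""
    fg = subtractor.apply(frame)
    # MOG2 marks shadows as 127; keep only confident foreground (255).
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]
    roi_mask = np.zeros(fg.shape, dtype=np.uint8)
    cv2.fillPoly(roi_mask, [roi_polygon], 255)
    occupied = cv2.countNonZero(cv2.bitwise_and(fg, roi_mask))
    return occupied / max(cv2.countNonZero(roi_mask), 1)

def suggest_green_time(ratio, base=30, min_green=15, max_green=60):
    """Toy rule: lengthen green when the approach looks congested, shorten when empty."""
    return int(np.clip(base * (0.5 + ratio), min_green, max_green))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    r = occupancy_ratio(frame)
    print(f"occupancy={r:.2f}  suggested green={suggest_green_time(r)}s")
```

A full system along the lines of the thesis would also apply the color space conversion, sunlight detection, shadow recognition, image enhancement, and block-based lateral compensation modules listed in the table of contents before computing the occupancy ratio; this sketch omits those stages.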

    Chinese Abstract
    Abstract
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
      1.1 Motivation and Objectives
      1.2 Background and Literature Review
      1.3 Research Methods
      1.4 Thesis Organization
    Chapter 2 Background Knowledge
      2.1 Image Processing
      2.2 Moving Object Detection
      2.3 Shadow Recognition
      2.4 Color Sensors
    Chapter 3 System Architecture
      3.1 Dynamic Traffic Signal Cycle Control Module
      3.2 Moving Target Detection Module
      3.3 Color Space Conversion Module
      3.4 Sunlight Detection Module
      3.5 Shadow Recognition Module
      3.6 Image Enhancement Module
      3.7 Lane Occupancy Ratio Calculation Module
      3.8 Block-Based Lateral Compensation Module
    Chapter 4 System Testing and Experimental Results
      4.1 Dynamic Traffic Signal Cycle Control Module
      4.2 Moving Target Detection Module
      4.3 Color Space Conversion Module
      4.4 Sunlight Detection Module
      4.5 Shadow Recognition Module
      4.6 Image Enhancement Module
      4.7 Lane Occupancy Ratio Calculation Module
    Chapter 5 Conclusion and Future Work
      5.1 Conclusion
      5.2 Future Work
    References

    Full-text release date: 2019/07/08 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: Taiwan NDLTD system)