
Graduate Student: Zheng-Yi Shen (沈政一)
Thesis Title: Real-time Vehicle Attribute Recognition and Violation Detection in All-weather Based on Deep Learning (基於深度學習之全天候即時車輛資訊及違規行為辨識)
Advisor: Wen-Kai Tai (戴文凱)
Committee Members: Kai-Lung Hua (花凱龍), Kun-Hua Tsai (蔡昆樺), Chia-Hung Yeh (葉家宏)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2020
Graduation Academic Year: 108 (2019-2020)
Language: Chinese
Pages: 65
Keywords: license plate recognition, deep learning, vehicle classification, vehicle color recognition, license plate and license plate number color recognition, vehicle violation detection, motorcyclist without wearing helmet detection
Access count: 413 views, 0 downloads
  • Automated license plate recognition plays an increasingly important role in modern transportation systems: it enables 24-hour monitoring of vehicle violations, tracking of offending vehicles, and deployment in parking lots to save labor and management costs. However, most license plate recognition systems currently installed in Taiwan require site-specific tuning, or additional lighting to make the environment meet the system's requirements, before effective recognition is possible. In recent years many machine-learning-based systems have also been proposed, but most do not account for the recognition speed a product requires or its limited hardware budget, and they have not been validated on a large, environmentally diverse dataset. In this thesis, we therefore propose an effective, fast license plate recognition system that adapts to a wide range of environments. Recognition proceeds in three steps: a road-surveillance image is fed to the Object Detection Module, which outputs the vehicles and license plates in the image; each detected plate is passed to the Affine Transformation Module, which corrects the plate's tilt and warps every plate back to a frontal view; the rectified plates are then passed to the License Plate Recognition Module, which recognizes the plate number without character segmentation and outputs it. On hardware with an i9-9900K CPU and an RTX 2080 GPU, our system reaches a recognition speed of 64 FPS, and after optimization for recognizing multiple video streams simultaneously it still reaches 47 FPS when processing four streams at once.
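The thesis does not include code here; as an illustration of what the Affine Transformation Module's rectification step amounts to, the sketch below estimates a 2x3 affine matrix from three corner correspondences with plain NumPy. All coordinates are made up, and the function names are hypothetical, not the thesis's own API.

```python
import numpy as np

def estimate_affine(src, dst):
    """Solve for the 2x3 affine matrix mapping three source points
    (e.g. detected plate corners) onto three destination points
    (corners of an upright, frontal plate rectangle)."""
    src = np.asarray(src, dtype=float)            # shape (3, 2)
    dst = np.asarray(dst, dtype=float)            # shape (3, 2)
    # For each point: [x, y, 1] . M = [x', y'], a square linear system.
    A = np.hstack([src, np.ones((3, 1))])         # (3, 3)
    M = np.linalg.solve(A, dst)                   # (3, 2)
    return M.T                                    # 2x3 affine matrix

def apply_affine(M, pts):
    """Apply the 2x3 affine matrix to an (n, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]

# A tilted plate's top-left, top-right, and bottom-left corners ...
tilted = [(12.0, 40.0), (150.0, 20.0), (20.0, 85.0)]
# ... warped onto a frontal 140x45 rectangle.
frontal = [(0.0, 0.0), (140.0, 0.0), (0.0, 45.0)]
M = estimate_affine(tilted, frontal)
print(apply_affine(M, tilted))  # recovers the frontal corners (up to rounding)
```

In practice the same matrix would be handed to an image-warping routine (e.g. OpenCV's warpAffine) to resample the plate crop before recognition.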

    Beyond license plate recognition, this thesis also covers vehicle color recognition, license plate and plate number color recognition, and vehicle type classification, and proposes detection methods for violations defined by Taiwan's traffic regulations, such as interval (average) speed measurement, illegal lane-line crossing, failure to keep a safe following distance, and motorcyclists riding without helmets. We evaluate our license plate recognition system on the AOLP Dataset and achieve an average recognition rate of 99.29%; when only the Affine Transformation Module and the License Plate Recognition Module are needed, the recognition speed reaches 188 FPS.
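The lane-line crossing check mentioned above can be reduced to a point-versus-line side test on a tracked vehicle's reference point. The thesis does not give its exact formulation; the sketch below is one minimal way to do it, with a hypothetical lane line and tracks in image coordinates.

```python
def side_of_line(p, a, b):
    """Sign of the 2D cross product (b-a) x (p-a): positive on one side
    of the lane line a->b, negative on the other, zero on the line."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed_line(track, a, b):
    """A track (sequence of vehicle reference points, e.g. bounding-box
    bottom centers) crosses the line when the side test changes sign."""
    signs = [side_of_line(p, a, b) for p in track]
    return any(s1 * s2 < 0 for s1, s2 in zip(signs, signs[1:]))

# Hypothetical lane line and two vehicle tracks in pixel coordinates.
lane_a, lane_b = (100, 0), (120, 720)
staying  = [(60, 300), (65, 350), (70, 400)]
crossing = [(90, 300), (105, 350), (130, 400)]
print(crossed_line(staying, lane_a, lane_b))   # False
print(crossed_line(crossing, lane_a, lane_b))  # True
```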

    We also build and release a vehicle and license plate attribute dataset, the Taiwan Vehicle Attribute in All-Weather (TVAAW) Dataset, containing 4,856 images with 9,802 vehicles and 8,589 license plates. The images were captured from several road cameras in different regions and at different angles, and are divided into six subsets by weather condition, each containing a variety of vehicle types and plates. Although most images show the rear of the vehicle, many tilt angles are included, so the dataset can effectively evaluate a recognition system's accuracy on Taiwanese plates. Our system achieves an average recognition rate of 93.43% on the full TVAAW test and 97.65% on the cropped-plate test. Given sufficient training data, it can surpass existing commercial license plate recognition systems in both throughput and recognition rate, effectively supporting the Taiwanese market's demand for interval speed enforcement.



    Automated license plate recognition (ALPR) plays an increasingly important role in modern transportation systems, with applications ranging from monitoring vehicle violations to tracking illegal vehicles. Most license plate recognition systems currently deployed in Taiwan need their settings adjusted for the particular environment of each installation. Recently, many license plate recognition systems based on machine learning have been proposed, but most come at the expense of recognition speed and computing cost, and have not been validated on a large and diverse dataset. In this thesis, we propose an effective, fast, and adaptable license plate recognition system that is robust across various environments. Given input road-monitoring images, our Object Detection Module detects vehicles and license plates. The Affine Transformation Module then rectifies each detected plate, warping it into a rectangular, frontal view, so that the License Plate Recognition Module can recognize and output the plate number without segmenting it into individual characters.
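The abstract does not name the decoding scheme behind segmentation-free recognition; such recognizers are commonly trained with CTC, whose greedy decoding collapses repeated per-frame labels and drops blanks. The sketch below illustrates that decoding step under that assumption, with a made-up frame sequence; the blank symbol and function name are illustrative only.

```python
BLANK = "-"  # CTC blank symbol (an assumption; the thesis does not specify its decoder)

def ctc_greedy_decode(frame_labels):
    """Collapse consecutive repeats, then drop blanks: the standard
    greedy decoding of per-frame CTC outputs into a label string."""
    out = []
    prev = None
    for c in frame_labels:
        if c != prev and c != BLANK:
            out.append(c)
        prev = c
    return "".join(out)

# Per-frame best labels for a hypothetical plate reading "ABC1234".
print(ctc_greedy_decode(list("AA-B-CC-1-22-34")))  # ABC1234
```

The blank between the two 2's is what lets a genuine double character survive the repeat-collapsing step.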

    In addition, our system can classify vehicle categories, recognize vehicle color, and identify the colors of the license plate and of the plate number itself. Our methods can therefore be applied in practice to violations defined by Taiwan's traffic laws, such as interval speed measurement, illegal lane-line crossing, failure to keep a safe following distance, and detection of motorcyclists riding without helmets.
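Interval speed measurement, as mentioned above, reduces to matching the same plate at two cameras a known distance apart and dividing distance by elapsed time. A minimal sketch with made-up numbers (the function name and speed limit are illustrative, not from the thesis):

```python
def interval_speed_kmh(distance_m, t_entry_s, t_exit_s):
    """Average speed over a measured road segment: distance divided by
    the time between the entry-camera and exit-camera plate matches."""
    dt = t_exit_s - t_entry_s
    if dt <= 0:
        raise ValueError("exit time must follow entry time")
    return distance_m / dt * 3.6  # m/s -> km/h

# The same plate seen at the entry camera at t = 10.0 s and at the
# exit camera, 2000 m down the road, at t = 100.0 s.
speed = interval_speed_kmh(2000, 10.0, 100.0)
print(round(speed, 1), "km/h")                # 80.0 km/h
print(speed > 70, "-> over a 70 km/h limit")  # True
```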

    The AOLP Dataset is selected to evaluate our license plate recognition system; we achieve an average accuracy of 99.29%, and when only the Affine Transformation Module and the License Plate Recognition Module are required, the recognition speed reaches 188 FPS.

    Furthermore, we have built and released a public vehicle and license plate attribute dataset, the Taiwan Vehicle Attribute in All-Weather (TVAAW) Dataset, which contains 4,856 images with 9,802 vehicles and 8,589 license plates, collected under six different weather conditions and from cameras in different regions and at different angles, for evaluating license plate recognition systems on Taiwanese plates. On the full TVAAW test our system achieves an average accuracy of 93.43%, and on the cropped-plate test an average accuracy of 97.65%.

    Keywords: license plate recognition, deep learning, vehicle classification, vehicle color recognition, license plate and license plate number color recognition, vehicle violation detection, motorcyclist without wearing helmet detection

    Table of Contents

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    1 Introduction
      1.1 Motivation and Goals
      1.2 Method Overview
      1.3 Building a New Dataset
      1.4 Contributions
      1.5 Thesis Organization
    2 Related Work
      2.1 Automatic License Plate Recognition
        2.1.1 Vehicle Detection
        2.1.2 License Plate Detection
        2.1.3 License Plate Image Restoration
        2.1.4 License Plate Tilt Correction
        2.1.5 License Plate Character Segmentation
        2.1.6 License Plate Number Recognition
      2.2 Vehicle and License Plate Color Recognition
      2.3 Motorcyclist-Without-Helmet Detection
      2.4 Datasets
        2.4.1 Application-Oriented License Plate (AOLP) Dataset
        2.4.2 Vehicle Color Dataset
    3 Methodology
      3.1 Overview
      3.2 Object Detection Module
      3.3 Affine Transformation Module
      3.4 License Plate Recognition Module
      3.5 Color Recognition Module
      3.6 Helmet Detection Module
      3.7 Violation Detection Module
        3.7.1 Interval Speed Measurement
        3.7.2 Failure to Keep a Safe Distance
        3.7.3 Lane-Line Crossing Detection
    4 Experimental Design
      4.1 AOLP Dataset
      4.2 TVAAW Dataset
      4.3 Vehicle Color Dataset
      4.4 Vehicle and License Plate Color Recognition
      4.5 Motorcyclist-Without-Helmet Detection
      4.6 Deep Learning Model Recognition Speed
    5 Experimental Results and Analysis
      5.1 AOLP Dataset
      5.2 TVAAW Dataset
      5.3 Vehicle Color Dataset
      5.4 Vehicle and License Plate Color Recognition
        5.4.1 Vehicle Color Recognition
        5.4.2 License Plate and Plate Number Color Recognition
      5.5 Helmet Detection
      5.6 Deep Learning Model Recognition Speed
    6 Conclusion and Future Work
      6.1 Contributions and Conclusion
      6.2 Future Work
    References
    Authorization Letter


    Full-text availability: 2025/06/09 (campus network)
    Full-text availability: not authorized for public release (off-campus network)
    Full-text availability: not authorized for public release (National Central Library: Taiwan NDLTD system)