
Author: Edwin Indarto
Thesis Title: Ego-Vehicle Speed Prediction & Traffic Light Classification Via Deep Learning Techniques
Advisor: Chuan-Kai Yang (楊傳凱)
Committee: Bor-Shen Lin (林伯慎), Yuan-Cheng Lai (賴源正)
Degree: Master
Department: Department of Information Management, School of Management
Thesis Publication Year: 2023
Graduation Academic Year: 112
Language: English
Pages: 91
Keywords: Ego-Vehicle Speed Prediction, Traffic Light Classification, CBAM, CNN, Optical Flow, Histogram Equalization, YOLO, YOLOv5, YOLOv6, KITTI, LISA, Comma.AI, Udacity

  • The rising incidence of traffic accidents can be attributed to multiple factors, one notable contributor being human behavior. Every driver is required to obtain vehicle insurance to mitigate potential damages resulting from accidents, and when an accident occurs, insurers rely on comprehensive evidence of the damage. They commonly request video footage, meaning recordings from the vehicle's dashboard camera, to assess the extent of the damage and calculate appropriate insurance rates for the driver. Insurers can also use this footage to ascertain whether the driver has committed traffic violations such as speeding, disregarding traffic lights, tailgating, or unsafe lane changes, and penalties may be imposed on drivers found to have committed violations based on that evidence. Ego-vehicle speed prediction determines a driver's vehicle speed from the video footage the driver provides, so car insurance companies can use dashboard camera video to assess whether a driver exceeds the speed limit. Traffic light classification, in turn, detects the presence of a traffic light and ascertains its current state, whether red, yellow, or green; it can also be employed to identify instances where a driver has violated traffic rules by running a red light.

    In this study, Ego-Vehicle Speed Prediction employs Convolutional Neural Networks (CNNs) for both training and testing, with several techniques integrated to enhance prediction accuracy. Specifically, Histogram Equalization is employed to refine the Optical Flow estimation, and a Convolutional Block Attention Module (CBAM) is incorporated into the CNN architecture to amplify object visibility and recognition within the images. The primary datasets are the KITTI dataset and the Comma.AI dataset, both of which provide speed measurements in meters per second (m/s). The prediction model is evaluated with the sMAPE, MAE, and RMSE metrics.

    Beyond Ego-Vehicle Speed Prediction, the study also addresses Traffic Light Classification, using the LISA Traffic Light and Udacity Traffic Light datasets. To simplify the classification process, the classes within these datasets are grouped into three primary labels: "go", "warning", and "stop". YOLOv5 and YOLOv6 architectures perform the actual detection and classification, and their accuracy is compared using the mAP@0.5 and mAP@0.5:0.95 metrics, which measure the models' ability to detect and classify objects at varying intersection-over-union (IoU) thresholds.
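    To make the preprocessing step concrete, the following is a minimal sketch of global histogram equalization on an 8-bit grayscale frame in plain NumPy. It illustrates the general technique only, not the thesis's exact pipeline (which applies equalization before optical flow estimation); the function name and the synthetic low-contrast frame are illustrative.

    ```python
    import numpy as np

    def equalize_histogram(img: np.ndarray) -> np.ndarray:
        """Global histogram equalization for an 8-bit grayscale image."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf_min = cdf[cdf > 0][0]            # first non-zero CDF value
        if img.size == cdf_min:              # constant image: nothing to equalize
            return img.copy()
        # Map each intensity v to round((cdf(v) - cdf_min) / (N - cdf_min) * 255)
        lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                      0, 255).astype(np.uint8)
        return lut[img]

    # A low-contrast frame: intensities squeezed into [100, 150]
    frame = np.random.default_rng(0).integers(100, 151, size=(64, 64), dtype=np.uint8)
    enhanced = equalize_histogram(frame)     # spreads intensities across [0, 255]
    ```

    Stretching the cumulative distribution this way raises contrast in dark or washed-out dashcam frames, which is the motivation for applying it before estimating optical flow.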
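    The three speed-prediction metrics named above are each a one-liner in NumPy. The sketch below assumes one common formulation of sMAPE (mean absolute difference over the average magnitude, in percent); the thesis may use a slightly different variant, and the sample arrays are invented for illustration.

    ```python
    import numpy as np

    def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        """Mean absolute error."""
        return float(np.mean(np.abs(y_true - y_pred)))

    def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        """Root mean squared error."""
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

    def smape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        """Symmetric mean absolute percentage error, in percent."""
        denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
        return float(100.0 * np.mean(np.abs(y_true - y_pred) / denom))

    y_true = np.array([10.0, 12.5, 15.0])   # ground-truth speeds (m/s)
    y_pred = np.array([9.0, 13.0, 15.0])    # hypothetical model outputs
    ```

    MAE and RMSE share the dataset's unit (m/s here), while sMAPE is scale-free, which makes it convenient when comparing datasets with different speed ranges.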
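    Both mAP@0.5 and mAP@0.5:0.95 rest on the intersection-over-union overlap between a predicted box and a ground-truth box. A minimal IoU function, assuming the common (x1, y1, x2, y2) corner convention for boxes, is:

    ```python
    def iou(box_a: tuple, box_b: tuple) -> float:
        """Intersection over union for axis-aligned boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0
    ```

    At mAP@0.5 a detection counts as a true positive only if its IoU with a same-class ground-truth box reaches 0.5; mAP@0.5:0.95 averages the same computation over IoU thresholds from 0.5 to 0.95 in steps of 0.05, so it rewards tighter localization.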

    Recommendation Letter
    Approval Letter
    Abstract in English
    Contents
    List of Figures
    List of Tables
    1 Introduction
      1.1 Background
      1.2 Contribution
      1.3 Research Outline
    2 Related Works
      2.1 Histogram Equalization
      2.2 Optical Flow
      2.3 Vehicle Speed Prediction
      2.4 EfficientNetV2
      2.5 YOLO
      2.6 Traffic Light Classification
      2.7 Ghost-Net
      2.8 Convolutional Block Attention Module
    3 Proposed System
      3.1 System Overview
        3.1.1 Convolutional Neural Network
        3.1.2 YOLO
      3.2 Dataset
        3.2.1 KITTI
        3.2.2 Comma.AI
        3.2.3 LISA Traffic Light Dataset
        3.2.4 Udacity
      3.3 Dataset Processing
        3.3.1 Optical Flow
        3.3.2 Histogram Equalization
    4 Experiments & Results
      4.1 Training
        4.1.1 Ego-Vehicle Speed Prediction
        4.1.2 Traffic Light Classification
      4.2 Experimental Results
        4.2.1 Ego-Vehicle Speed Prediction
        4.2.2 Traffic Light Classification
      4.3 Discussion
      4.4 Limitations
    5 Conclusion and Future Work
      5.1 Conclusion
      5.2 Future Works
    Reference

