Author: Hsien-Tao Chang (張先道)
Title: Vision-based Embedded System for Moving Vehicle Speed Detection
Advisor: Shun-Feng Su (蘇順豐)
Committee: Shun-Feng Su (蘇順豐), Mei-Yung Chen (陳美勇), Ching-Chih Tsai (蔡清池), Chen-Chia Chuang (莊鎮嘉), Nai-Jian Wang (王乃堅)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Academic Year: 110
Language: English
Pages: 128
Keywords: Vehicle Detection, Vehicle Tracking, Optical Flow, Image Processing, Embedded System, Real-time Processing
    This study implements a vision-based embedded system that detects the moving speed of vehicles approaching from behind a motorcycle. The system measures the relative speed and position of vehicles coming from behind so that the motorcyclist can know whether any dangerous vehicle is approaching. It uses a convolutional neural network to detect vehicles behind the motorcyclist, and a car-front capture method is proposed to extract the front end of each vehicle as the reference region for tracking. For vehicle tracking, optical flow is employed. For speed measurement, the relationship between the camera focal length and the vehicle distance is used to estimate the relative speed between a vehicle and the motorcyclist. Since optical flow is easily affected by lighting and noise, the centroid of the optical flow points is used to stabilize them, which makes the speed measurement more robust. For outlier removal, because the image pyramid scales linearly, the distance between each optical flow point and the centroid of the optical flow points should also be enlarged linearly; a point whose distance to the centroid does not scale linearly is treated as an outlier. To achieve real-time image processing on an embedded system, a neural compute stick is used to accelerate neural network inference, and an asynchronous mode combined with optical flow is proposed to speed up inference further. In the optical flow computation, task priority assignment and multithreading are used to accelerate the calculation so that the overall system runs in real time. Tested on daytime dashcam videos, the relative error between the measured and actual vehicle speeds on flat roads is 14.72%, and after porting to a Raspberry Pi, the system measures the speed of approaching rear vehicles in real time at an average of 30 frames per second.
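The focal-length based speed measurement described in the abstract can be sketched with the standard pinhole relation: if a vehicle of real width W appears w pixels wide under focal length f (in pixels), its distance is Z = f·W/w, and the relative speed follows from the change in Z between two frames. The focal length, vehicle width, and timing values below are illustrative assumptions, not the thesis's actual calibration.

```python
from dataclasses import dataclass

@dataclass
class PinholeCamera:
    focal_px: float  # focal length in pixels (obtained from camera calibration)

def distance_m(cam: PinholeCamera, real_width_m: float, pixel_width: float) -> float:
    # Pinhole model: pixel_width / focal_px = real_width_m / distance
    return cam.focal_px * real_width_m / pixel_width

def relative_speed_mps(cam: PinholeCamera, real_width_m: float,
                       w_prev: float, w_curr: float, dt_s: float) -> float:
    # Positive value: the rear vehicle is closing in on the motorcycle.
    z_prev = distance_m(cam, real_width_m, w_prev)
    z_curr = distance_m(cam, real_width_m, w_curr)
    return (z_prev - z_curr) / dt_s

cam = PinholeCamera(focal_px=800.0)   # assumed calibration value
CAR_WIDTH_M = 1.8                     # assumed average car width
# Bounding-box width grew from 90 px to 100 px over 0.5 s,
# i.e. the distance shrank from 16.0 m to 14.4 m:
speed = relative_speed_mps(cam, CAR_WIDTH_M, 90.0, 100.0, 0.5)
print(f"{speed:.2f} m/s")  # → 3.20 m/s
```

The same relation is what makes an assumed real-world width necessary: with a single camera, scale and distance are only recoverable up to that assumption.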
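The centroid-based outlier test can be illustrated with a small pure-Python sketch on synthetic points: when a tracked vehicle approaches, its optical-flow points spread out around their centroid by a common scale factor, so a point whose distance to the centroid does not grow by roughly that factor is flagged as an outlier. The median-based scale estimate and the tolerance value here are assumptions for illustration, not the thesis's exact procedure.

```python
import math

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def flag_outliers(prev_pts, curr_pts, tol=0.2):
    """Flag points whose distance to the centroid did not scale by roughly
    the same factor as the rest (tol is an assumed threshold)."""
    cp, cc = centroid(prev_pts), centroid(curr_pts)
    d_prev = [math.dist(p, cp) for p in prev_pts]
    d_curr = [math.dist(p, cc) for p in curr_pts]
    ratios = sorted(dc / dp for dp, dc in zip(d_prev, d_curr) if dp > 0)
    s = ratios[len(ratios) // 2]  # robust (median-like) common scale factor
    return [dp > 0 and abs(dc / dp - s) > tol * s
            for dp, dc in zip(d_prev, d_curr)]

# Three points scale by 1.5x about (20, 20); the last one failed to track.
prev = [(0, 0), (40, 0), (0, 40), (40, 40)]
curr = [(-10, -10), (50, -10), (-10, 50), (40, 40)]
print(flag_outliers(prev, curr))  # → [False, False, False, True]
```

Note that the outlier itself perturbs the current centroid, so the per-point ratios are only approximately equal to the true scale factor; the tolerance absorbs that error as long as outliers are a minority of the points.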
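The asynchronous mode combined with tracking can be sketched with standard threading: slow CNN inference runs in a background worker while the per-frame loop keeps tracking with the most recent detection result. The detector function, timings, and single-slot queue policy below are placeholder assumptions, not the thesis's actual CNN or Neural Compute Stick pipeline.

```python
import queue
import threading
import time

class AsyncDetector:
    """Run a (slow) detector in a worker thread so the per-frame
    tracking loop never blocks on inference."""
    def __init__(self, detect_fn):
        self.detect_fn = detect_fn
        self.in_q = queue.Queue(maxsize=1)  # only the newest frame matters
        self.result = None                  # last completed detection
        self.lock = threading.Lock()
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, frame):
        try:
            self.in_q.put_nowait(frame)     # drop the frame if the worker is busy
        except queue.Full:
            pass

    def latest(self):
        with self.lock:
            return self.result

    def _worker(self):
        while True:
            frame = self.in_q.get()
            boxes = self.detect_fn(frame)   # slow CNN inference stands in here
            with self.lock:
                self.result = boxes

def slow_detect(frame):                     # placeholder for CNN inference
    time.sleep(0.05)
    return [("car", frame)]

det = AsyncDetector(slow_detect)
for frame_id in range(10):                  # simulated 10-frame video loop
    det.submit(frame_id)
    # ... per-frame optical-flow tracking would run here, unblocked ...
    time.sleep(0.01)
time.sleep(0.1)                             # let the worker finish
print(det.latest())
```

The design choice is that detection results are allowed to be a few frames stale: optical flow bridges the gap between detections, which is what lets the whole pipeline keep up with the camera frame rate on embedded hardware.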

    Abstract (Chinese)
    Abstract
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
      1.1 Background and Motivation
      1.2 System Architecture
      1.3 Thesis Contributions
      1.4 Thesis Organization
    Chapter 2 Related Work
      2.1 Image Acquisition
      2.2 Image Preprocessing
        2.2.1 Grayscale Image Conversion
        2.2.2 Image Filter
      2.3 Camera Calibration
        2.3.1 Image Distortion
        2.3.2 Camera Calibration Method
      2.4 Vehicle Distance Measurement
        2.4.1 Coordinate Conversion
        2.4.2 Vehicle Information Measurement
      2.5 Vehicle Detection
        2.5.1 Haar-like Feature
        2.5.2 Background Subtraction
        2.5.3 Convolutional Neural Network
      2.6 Corner Detection
        2.6.1 Harris Corner Detection
        2.6.2 Shi–Tomasi Corner Detection
        2.6.3 FAST Corner Detection
      2.7 Object Tracking
        2.7.1 Optical Flow
        2.7.2 Feature Matching
    Chapter 3 Vehicle Detection and Tracking
      3.1 Vehicle Detection
        3.1.1 Comparison of Vehicle Detection Methods
        3.1.2 Car Front Capture Algorithm
        3.1.3 Edge Detection of Vehicle
      3.2 Vehicle Tracking
        3.2.1 Comparison of Vehicle Tracking Methods
        3.2.2 Corner Detection
        3.2.3 Optical Flow Tracking
        3.2.4 Light and Shadow
      3.3 Objects of Optical Flow
        3.3.1 Survival of Optical Flow Objects
        3.3.2 Optical Flow Outliers
      3.4 Vehicle Speed Measurement
        3.4.1 Vehicle Coordinate System Speed Measurement
        3.4.2 Focal Length Speed Measurement
        3.4.3 Speed Measurement Problem
        3.4.4 Centroid-based Speed Measurement
        3.4.5 Vehicle Route Analysis
    Chapter 4 Real-time Image Processing
      4.1 Convolutional Neural Network Optimization
        4.1.1 Intel Neural Compute Stick 2
        4.1.2 Asynchronous Mode with Optical Flow
      4.2 Vehicle Object Optimization
        4.2.1 Task Priority Assignment
        4.2.2 Multithreading
        4.2.3 Optical Flow Processing with Multithreading
    Chapter 5 Experiments
      5.1 Experimental Equipment
      5.2 Comparison of Speed Measurement Methods
        5.2.1 Ideal Road
        5.2.2 Non-ideal Road
        5.2.3 Night Road
        5.2.4 Summary of Speed Measurement Methods
      5.3 Experimental Results
        5.3.1 Vehicle Warning Functions
        5.3.2 Optical Flow Outliers Deletion
        5.3.3 Performance Evaluation on Raspberry Pi
    Chapter 6 Conclusions and Future Work
      6.1 Conclusions
      6.2 Future Work
    References

