
Graduate Student: 張豐東 (Feng-Dong Zhang)
Thesis Title: 結合攝影機與超音波感測技術之車輛跟隨控制系統
(Vehicle Following Control System Combining Camera and Sonar Sensing Techniques)
Advisor: 郭重顯 (Chung-Hsien Kuo)
Committee Members: 吳世琳 (Shih-Lin Wu), 蘇國和 (Kuo-Ho Su), 梁書豪 (Shu-Hao Liang)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Publication Year: 2018
Graduation Academic Year: 106
Language: Chinese
Pages: 72
Chinese Keywords: 卡爾曼濾波, 車輛跟隨, 深度學習
English Keywords: Kalman filter, vehicle following, deep learning

Abstract (translated from the Chinese): This thesis proposes a vehicle automatic-following control system that combines image-based and ultrasonic positioning techniques. The system perceives the environment around an unmanned vehicle through an on-board camera and ultrasonic sensors, identifies and localizes the target object in front of the vehicle, and then applies a Kalman filter (KF) and a proportional-derivative (PD) controller to automatically follow the target vehicle ahead. For detection of the target vehicle, this thesis adopts a deep-learning convolutional neural network architecture: the YOLO (You Only Look Once) V3 network is trained by offline supervised learning to recognize and bound vehicle objects in complex scenes, and a monocular ranging formula is then used to compute the relative image position of the vehicle ahead. When multiple vehicles appear in the road image, the first detected vehicle is taken as the target; the Kalman filter predicts the target vehicle's image coordinates, rejecting interference from other vehicles and recovering the target's image coordinates. Finally, the distance between the unmanned vehicle and the target vehicle, measured by the ultrasonic sensors, is incorporated to achieve automatic following of the target vehicle in a simulated road environment. For the mechanical design, a small unmanned vehicle 700 mm long and 455 mm wide was developed as the experimental platform; it uses rear-wheel drive and front-wheel steering to emulate the motion of a real car. As the tracking target, photographs of a car's rear, scaled to the same proportion as the small unmanned vehicle, were printed and attached to the backs of other wheeled robots, creating a simulated experimental environment to verify the feasibility of the system.
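The abstract refers to a monocular (single-camera) ranging formula for locating the vehicle ahead. A minimal sketch of the standard pinhole-camera relation is given below; the focal length and bounding-box width are illustrative values, not parameters taken from the thesis.

```python
def monocular_distance(focal_px: float, real_width_m: float,
                       bbox_width_px: float) -> float:
    """Pinhole-model range estimate: Z = f * W / w, where f is the
    focal length in pixels, W the real target width in metres, and
    w the detected bounding-box width in pixels."""
    return focal_px * real_width_m / bbox_width_px

# Illustrative numbers: an 800 px focal length and a 0.455 m wide target
# (the platform's width) seen as a 91 px wide bounding box give 4.0 m.
print(monocular_distance(800.0, 0.455, 91.0))  # 4.0
```

The same relation inverts to predict how wide the target should appear at a given range, which is useful for sanity-checking detections.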


This thesis presents a vehicle following control system combining image and ultrasonic sensors. In this system, the front vehicle's information was acquired in real time through image-based vehicle detection and the ultrasonic sensor's distance measurement. Meanwhile, a Kalman filter (KF) and a proportional-derivative (PD) controller were developed to control the steering and speed of the vehicle for automatic tracking. The detection of front vehicles was performed with a deep-learning convolutional neural network (CNN), specifically the YOLO (You Only Look Once) V3 network structure. Hence, front vehicles could be detected in a complex environment, and the detected vehicle bounding boxes were further used to calculate the relative coordinates of the targeted vehicles. When more than one vehicle appeared in the image, this work took the first detected vehicle as the target, and the Kalman filter was applied to predict the target vehicle's bounding-box area in the next image, so that the correct target bounding box could be identified among the vehicle bounding boxes produced by the CNN detector. Moreover, the ultrasonic sensor was used to measure the distance to the front vehicle so that a desired following distance could be maintained. As a consequence, the image system controlled the steering, and the ultrasonic sensor maintained the desired distance to the front vehicle. To evaluate the performance of the vehicle following system, a small-size front-steering, rear-drive car-like vehicle, 700 mm long and 455 mm wide, was built in our laboratory as the experimental platform. In addition, several scaled pictures of vehicle tails, sized according to the ratio between a real vehicle and our small-size vehicle, were printed and pasted on the tails of other mobile robots and wheelchairs to form a scaled simulation environment.
Finally, the experiments validated the feasibility of the proposed vehicle following system.
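As a rough illustration of the target-association step described in the abstract, the sketch below runs a constant-velocity Kalman filter on one image axis of the target's bounding-box centre and picks, among the current detections, the one closest to the prediction. All noise values and the plain-Python matrix algebra are illustrative; the thesis's actual filter (an extended Kalman filter, per the table of contents) is not reproduced here.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one image axis.
    State: [position, velocity]; q, r are illustrative noise values."""
    def __init__(self, p0: float, q: float = 1.0, r: float = 4.0):
        self.x = [p0, 0.0]                       # state estimate
        self.P = [[10.0, 0.0], [0.0, 10.0]]      # state covariance
        self.q, self.r = q, r

    def predict(self, dt: float = 1.0) -> float:
        p, v = self.x
        self.x = [p + v * dt, v]                 # x' = F x, F = [[1, dt], [0, 1]]
        P = self.P                               # P' = F P F^T + Q
        self.P = [
            [P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q,
             P[0][1] + dt * P[1][1]],
            [P[1][0] + dt * P[1][1], P[1][1] + self.q],
        ]
        return self.x[0]

    def update(self, z: float) -> None:
        y = z - self.x[0]                        # innovation, H = [1, 0]
        s = self.P[0][0] + self.r                # innovation variance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0][0], self.P[0][1]    # P' = (I - K H) P
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01]]

def pick_target(pred_x: float, pred_y: float, detections):
    """Among (cx, cy) detection centres, return the one nearest the prediction."""
    return min(detections,
               key=lambda d: (d[0] - pred_x) ** 2 + (d[1] - pred_y) ** 2)
```

In use, one such filter per axis would be predicted once per frame, the nearest YOLO detection selected as the target, and the filters updated with that detection's centre.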

Advisor's Recommendation Letter / Oral Defense Committee Approval / Acknowledgements / Abstract (Chinese) / Abstract / Table of Contents / List of Figures / List of Tables
Chapter 1 Introduction
  1.1 Background and Motivation
  1.2 Research Objectives
  1.3 Thesis Organization
  1.4 Literature Review
    1.4.1 Deep Learning Networks for Object Recognition
    1.4.2 Driverless Vehicles
    1.4.3 Ultrasonic Sensing Techniques
    1.4.4 Vehicle Following Control Systems
    1.4.5 Kalman Filters
Chapter 2 System Architecture and Implementation
  2.1 System Architecture
  2.2 Environment Sensing Modules
    2.2.1 Ultrasonic Sensors
    2.2.2 Image Sensing Equipment
  2.3 Small Unmanned Vehicle Hardware Platform
    2.3.1 Vehicle Body Design
    2.3.2 Front-Wheel Steering Mechanism Design
    2.3.3 Rear-Wheel Drive Mechanism Design
Chapter 3 Image-Based Vehicle Recognition
  3.1 Introduction to Convolutional Neural Networks
    3.1.1 You Only Look Once Network Architecture
  3.2 Vehicle Detection Pipeline
    3.2.1 Data Collection
    3.2.2 Unified Detection
    3.2.3 Training Performance Tests
Chapter 4 Vehicle Tracking Control System
  4.1 Image-Based Target Localization
  4.2 PD Controller
  4.3 Kalman Filter
    4.3.1 Extended Kalman Filter
    4.3.2 Extended Kalman Filter Applied to Vehicle Tracking
  4.4 Ultrasonic Sensing Techniques
Chapter 5 Experimental Results and Analysis
  5.1 Experimental Environment Setup
  5.2 Vehicle Object Recognition Tests
  5.3 Kalman Filter Tracking Analysis
  5.4 PD Controller Analysis
Chapter 6 Conclusions and Future Work
  6.1 Conclusions
  6.2 Future Work
References
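Chapter 4 of the table of contents pairs the image-based target localization with a PD controller. A generic discrete PD law is sketched below; the gain values are placeholders for illustration, not the values tuned in the thesis.

```python
def pd_control(error: float, prev_error: float, dt: float,
               kp: float = 0.05, kd: float = 0.01) -> float:
    """Discrete PD law: u = Kp * e + Kd * (e - e_prev) / dt.
    For steering, e could be the horizontal pixel offset of the
    target's bounding-box centre from the image centre."""
    return kp * error + kd * (error - prev_error) / dt
```

The proportional term drives the offset toward zero while the derivative term damps overshoot as the target re-centres.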


Full text available from 2023/08/28 (campus network)
Full text not authorized for public release (off-campus network)
Full text not authorized for public release (National Central Library: Taiwan dissertation system)