
Graduate student: Yu-Ru Fang (方御儒)
Thesis title: Automatic Parking of Ackerman Steering Car by Image Deep Learning (阿克曼汽車之影像深度學習自動停車)
Advisor: I-Tsyuen Chang (張以全)
Committee members: Meng-Kun Liu (劉孟昆), Chih-Jer Lin (林志哲)
Degree: Master
Department: College of Engineering, Department of Mechanical Engineering
Year of publication: 2022
Graduation academic year: 110 (2021–2022)
Language: Chinese
Pages: 80
Keywords: object detection, image processing, deep learning, convolutional neural networks, Ackerman cars, automatic parking
Abstract (translated from the Chinese):
    Under the premise of a limited number of sensors, this study proposes a physical Ackerman automatic parking system that uses a vision sensor together with the YOLO v3 object detection technique. It is a preliminary investigation of the feasibility of using a Raspberry Pi 4B single-board microcomputer to integrate the several subsystems (YOLO v3 inference, motor control, a rotary encoder, and line-tracking control) that let the self-driving car park itself. The experiments realize the recognition of empty parking spaces; once the car has identified an empty space, a YOLO v3 based method determines the relative coordinates of the vehicle and the parking grid, and the reversing path is then computed on the theoretical basis of Dubins curves to complete the automatic parking. This thesis explains in detail the equipment used in the experiments, the experimental procedure, the research methods, and the details of the physical experiments, and finally proposes several directions in which future experiments could improve the whole automatic parking system and raise its practical value.


Abstract (English):
    Under the constraint of using only a limited number of sensors, this research proposes an automatic parking system for an Ackerman steering vehicle. Using a simple video camera and YOLO v3 object detection, the thesis presents a preliminary study of the feasibility of using a Raspberry Pi 4B single-board computer to integrate the other subsystems that let the vehicle complete the automatic parking function. The Ackerman vehicle is composed of several subsystems, for example YOLO v3, motor control, a rotary encoder, and tracking control. The experiments realize the identification of empty parking spaces: after the vehicle identifies an empty space, an automated process determines the relative coordinates of the vehicle and the parking space, and the reversing path is then calculated from the Dubins curve to complete the automatic parking. The thesis describes the equipment used in the experiment, the experimental procedure, the research method, and the details of the physical experiment, and finally proposes several directions in which future work could make the whole automatic parking system more complete and of greater practical use.
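    As a reading aid only, the following is a minimal sketch of the detection stage the abstracts describe: finding a parking space that no detected car overlaps, then mapping its bounding box to a ground-plane offset. Every name and coefficient here (Detection, find_empty_space, box_to_pose, the placeholder regression values) is an illustrative assumption, not the thesis code.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str   # class name, e.g. "car" or "space"
        x: float     # bounding-box centre x, pixels
        y: float     # bounding-box centre y, pixels
        w: float     # bounding-box width, pixels
        h: float     # bounding-box height, pixels

    def iou(a, b):
        """Intersection over union of two axis-aligned boxes."""
        ix = max(0.0, min(a.x + a.w / 2, b.x + b.w / 2) - max(a.x - a.w / 2, b.x - b.w / 2))
        iy = max(0.0, min(a.y + a.h / 2, b.y + b.h / 2) - max(a.y - a.h / 2, b.y - b.h / 2))
        inter = ix * iy
        union = a.w * a.h + b.w * b.h - inter
        return inter / union if union > 0.0 else 0.0

    def find_empty_space(dets, thresh=0.1):
        """Return the first detected parking space that no detected car overlaps."""
        cars = [d for d in dets if d.label == "car"]
        for s in (d for d in dets if d.label == "space"):
            if all(iou(s, c) < thresh for c in cars):
                return s
        return None

    def box_to_pose(det, img_w=640):
        """Map box geometry to a (forward, lateral) offset in metres.
        The thesis fits such a mapping with linear regression; the
        coefficients below are placeholders, not the fitted values."""
        forward = 1.7 - 0.01 * det.w            # farther spaces give narrower boxes
        lateral = 0.002 * (det.x - img_w / 2)   # sign encodes left/right of image centre
        return forward, lateral

    # Example: an empty space on the right, a parked car on the left.
    dets = [Detection("space", 420.0, 300.0, 120.0, 80.0),
            Detection("car", 150.0, 310.0, 110.0, 90.0)]
    target = find_empty_space(dets)
    if target is not None:
        print(box_to_pose(target))              # approximately (0.5, 0.2)

    A real loop would feed each frame's YOLO v3 detections into find_empty_space and hand the resulting offset to the Dubins path planner; the thesis obtains the forward/lateral mapping by fitting a linear regression to measured box coordinates, which the placeholder coefficients above only imitate.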

Table of Contents:
    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    Nomenclature
    1 Introduction
      1.1 Preface
      1.2 Literature Review and Motivation
        1.2.1 Research Motivation
        1.2.2 Ackerman Steering Structure of Vehicles
        1.2.3 Deep Learning
      1.3 Contributions and Organization of This Thesis
    2 Introduction to Deep Learning Neural Networks
      2.1 Convolutional Neural Network
      2.2 CNN Architectures and Their History
        2.2.1 LeNet
        2.2.2 AlexNet
        2.2.3 VGGNet
        2.2.4 ResNet (see [1])
      2.3 Object Detection Systems and Their History
        2.3.1 Faster R-CNN
        2.3.2 YOLO
        2.3.3 SSD
        2.3.4 Comparison and Choice of Architectures
    3 Research Methods
      3.1 YOLO v3 Supervised Training Method
        3.1.1 Image Training
        3.1.2 Training Labels
        3.1.3 Training Results
      3.2 Ackerman Steering Kinematics
      3.3 Identifying the Coordinates of the Self-Driving Car
        3.3.1 Linear Regression
        3.3.2 Estimating Real Coordinates with Linear Regression
      3.4 Rotary Encoder
      3.5 Path Planning with Dubins Curves
        3.5.1 Dubins Curves
        3.5.2 Coordinate Transformation
        3.5.3 The LSR Case of the Dubins Curve (sketched after this outline)
    4 Experimental Setup and Results
      4.1 Experimental Equipment
      4.2 Experimental Procedure
        4.2.1 Recognition Procedure
        4.2.2 Parking Procedure
      4.3 Physical Site Setup
      4.4 Physical Experiments
        4.4.1 Physical Experiment Procedure
        4.4.2 When a Car Is Detected on the Right
        4.4.3 When a Parking Space Is Found
        4.4.4 When Near the Parking Start Point
        4.4.5 Arriving at the Parking Start Point
    5 Conclusions and Future Outlook
      5.1 Conclusions
        5.1.1 Ackerman Vehicle Path Planning
        5.1.2 Image Recognition Accuracy
        5.1.3 Integrating Multiple Systems and Validating Them in a Physical Environment
      5.2 Future Outlook
        5.2.1 Equipment Upgrades
        5.2.2 Choosing a More Suitable Object Detection System
        5.2.3 Using More Accurate Ranging Sensors
        5.2.4 Adding a Feedback System
        5.2.5 Designing More Experimental Scenarios
        5.2.6 Quantifying the Experimental Results
        5.2.7 Optimizing the Labels
    References
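    As a reading aid for Section 3.5.3 above, here is a minimal sketch of the LSR geometry of the Dubins set (cf. [15]), assuming both arcs use the same minimum turning radius r, a start pose (x_s, y_s, \theta_s), and a goal pose (x_g, y_g, \theta_g):

    \begin{align}
      C_L  &= \bigl(x_s - r\sin\theta_s,\; y_s + r\cos\theta_s\bigr) && \text{left-turn circle at the start} \\
      C_R  &= \bigl(x_g + r\sin\theta_g,\; y_g - r\cos\theta_g\bigr) && \text{right-turn circle at the goal} \\
      D    &= \lVert C_R - C_L \rVert, \qquad S = \sqrt{D^2 - 4r^2} && \text{straight-segment length, needs } D \ge 2r \\
      \psi &= \operatorname{atan2}\bigl(C_{R,y} - C_{L,y},\, C_{R,x} - C_{L,x}\bigr) + \arcsin\frac{2r}{D} && \text{heading of the straight segment}
    \end{align}

    The first arc turns counter-clockwise from \theta_s to \psi and the last arc clockwise from \psi to \theta_g; when D < 2r the LSR word is infeasible and another Dubins word must be chosen.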

    [1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
    [2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (F. Pereira, C. Burges, L. Bottou, and K. Weinberger, eds.), vol. 25, Curran Associates, Inc., 2012.
    [3] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems (C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, eds.), vol. 28, Curran Associates, Inc., 2015.
    [4] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788, 2016.
    [5] C. Ning, H. Zhou, Y. Song, and J. Tang, “Inception single shot multibox detector for object detection,” in 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pp. 549–554, 2017.
    [6] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv:1804.02767, 2018.
    [7] K. Yao, Y. Wang, Z. Hou, and X. Zhao, “Optimum design and calculation of Ackerman steering trapezium,” in 2008 International Conference on Intelligent Computation Technology and Automation (ICICTA), vol. 1, pp. 1248–1252, 2008.
    [8] A. Tourani, A. Shahbahrami, S. Soroori, S. Khazaee, and C. Y. Suen, “A robust deep learning approach for automatic Iranian vehicle license plate detection and recognition for surveillance systems,” IEEE Access, vol. 8, pp. 201317–201330, 2020.
    [9] O. I. Abiodun, A. Jantan, A. E. Omolara, K. V. Dada, N. A. Mohamed, and H. Arshad, “State-of-the-art in artificial neural network applications: A survey,” Heliyon, vol. 4, no. 11, p. e00938, 2018.
    [10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
    [11] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014.
    [12] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 580–587, 2014.
    [13] R. Girshick, “Fast R-CNN,” in 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1440–1448, 2015.
    [14] S. Zhang, C. Chi, Y. Yao, Z. Lei, and S. Li, “Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9756–9765, 2020.
    [15] A. M. Shkel and V. J. Lumelsky, “Classification of the Dubins set,” Robotics and Autonomous Systems, vol. 34, pp. 179–202, 2001.

    Full-text release date: 2025/08/08 (campus network)
    Full-text release date: 2025/08/08 (off-campus network)
    Full-text release date: 2025/08/08 (National Central Library: Taiwan NDLTD system)