
Author: Pu-Chang Yu (于普丞)
Thesis Title: Automatic Parking of Ackerman Steering Car by Image Deep Learning (阿克曼汽車之影像深度學習自動停車)
Advisor: Meng-Kun Liu (劉孟昆)
Committee Members: Chen-Yang Lan (藍振洋), Yao-Hsien Liu (劉耀先), Meng-Kun Liu (劉孟昆)
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of Publication: 2020
Academic Year of Graduation: 108 (ROC calendar, 2019–2020)
Language: Chinese
Number of Pages: 82
Keywords: image processing, deep learning, convolutional neural networks, Ackerman cars, automatic parking (影像處理、深度學習、卷積神經網路、阿克曼汽車、自動停車)
Abstract:
    With the rapid development of technology in recent years, automated machines have been widely used in different settings, such as robotic arms on production lines and robotic cleaners in the home. Depending on the application, different sensors must be installed on these machines. Among the many kinds of sensors, visual sensors provide the richest information, so the integration of machine vision and machine control has become a widely pursued direction. Automatic parking is a form of machine control that combines machine vision with other sensors, and in recent years it has become standard equipment on high-end vehicles. However, when recognition capability is limited and sensors are expensive, conventional image processing alone can no longer extract the information in a scene effectively. Deep learning, which has advanced rapidly in recent years, has achieved good results in image recognition, and combining it with image processing makes the visual information more complete. This study uses a convolutional neural network to recognize different types of parking spaces and vehicles, and image processing to extract parking-space information, in order to achieve automatic parking. The feasibility of the integrated system is verified in a virtual environment, in which the vehicle can circle the lot, find a parking space, and park; the experiments are also designed so that the parking-space location, parking-space type, and the vehicle's starting point can be changed. The final results confirm the good performance of the integrated system.
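    The abstract states that image processing is used to extract parking-space information, and the reference list cites Canny edge detection [22] and the Hough transform [23], which Section 3.3 applies to parking-line positions. The following is only a minimal illustrative sketch of that kind of step using OpenCV; the input file name and all thresholds are assumed placeholder values, not the pipeline or parameters used in the thesis.

    # Illustrative sketch: extracting parking-space line segments from a camera
    # frame with Canny edge detection [22] and the probabilistic Hough transform [23].
    # The image path and every threshold below are assumptions for demonstration only.
    import cv2
    import numpy as np

    frame = cv2.imread("parking_frame.png")          # hypothetical input frame
    assert frame is not None, "replace with a real image path"

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress noise before edge detection
    edges = cv2.Canny(blurred, 50, 150)              # Canny edge map

    # Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)

    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)   # draw detected lines

    cv2.imwrite("parking_lines.png", frame)

    In practice the detected segments would still need to be grouped and filtered (for example by angle and spacing) before they describe a usable parking slot; the thesis's own procedure for that is covered in Chapter 3.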

    Table of Contents:
    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1  Preface
      1.2  Literature Review and Research Motivation
      1.3  Contributions and Organization of This Thesis
    Chapter 2  Introduction to Deep Learning Neural Networks
      2.1  Convolutional Neural Networks
      2.2  CNN Architectures and Their History
      2.3  Object Detection and Its History
      2.4  Dataset Construction and Architecture Selection
    Chapter 3  Ackermann Steering Kinematics and Path Planning
      3.1  Ackermann Steering Kinematics
      3.2  Minimum Parking-Space Calculation and Collision Avoidance
      3.3  Parking-Line Position
      3.4  Target-Point and Starting-Point Coordinates
      3.5  Path Planning
    Chapter 4  Virtual-Environment Simulation and Experimental Results
      4.1  Virtual-Environment Experiment Setup and Procedure
      4.2  Training Images
      4.3  Parallel-Parking Test in the Virtual Environment
      4.4  Reverse-Parking (Garage Parking) Test in the Virtual Environment
      4.5  Conclusions from the Virtual-Environment Experiments
    Chapter 5  Conclusions and Future Work
      5.1  Conclusions
      5.2  Future Work
    References
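    Chapter 3.1 in the outline above derives the Ackermann steering kinematics that feed the path planning. For orientation only, the sketch below implements the standard kinematic bicycle approximation commonly used for Ackermann-steered vehicles; it is not the thesis's own derivation, and the wheelbase, speed, steering angle, and time step are assumed values.

    # Minimal sketch of the kinematic bicycle model often used to approximate
    # an Ackermann-steered vehicle. All numeric values are illustrative
    # assumptions, not parameters taken from the thesis.
    import math

    def step(x, y, theta, v, delta, L, dt):
        """Advance the vehicle pose by one time step.

        x, y   : rear-axle position [m]
        theta  : heading angle [rad]
        v      : longitudinal speed [m/s] (negative when reversing)
        delta  : front-wheel steering angle [rad]
        L      : wheelbase [m]
        dt     : integration step [s]
        """
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += v / L * math.tan(delta) * dt
        return x, y, theta

    # Example: reversing at constant steering angle, as in a parallel-parking arc.
    x, y, theta = 0.0, 0.0, 0.0
    for _ in range(100):
        x, y, theta = step(x, y, theta, v=-1.0, delta=math.radians(25), L=2.6, dt=0.05)
    print(f"pose after 5 s: x={x:.2f} m, y={y:.2f} m, heading={math.degrees(theta):.1f} deg")

    The same update, integrated along piecewise-constant steering commands, is the usual basis for checking the minimum parking-space length and for generating the reverse-in arcs that a parking path planner strings together.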

    [1]T. Kanade, C. Thorpe, and W. Whittaker, "Autonomous land vehicle project at CMU," Proc. 1986 ACM 14th Annu. Conf. Comput. Sci. (CSC 1986), pp. 71–80, 1986.
    [2]M. O. Hasan, M. M. Islam and Y. Alsaawy, "Smart Parking Model based on Internet of Things (IoT) and TensorFlow," 2019 7th International Conference on Smart Computing & Communications (ICSCC), Sarawak, Malaysia, pp. 1-5, 2019.
    [3]H. T. Vu and C. Huang, "Parking Space Status Inference Upon a Deep CNN and Multi-Task Contrastive Network With Spatial Transform," in IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 4, pp. 1194-1208, April 2019.
    [4]I. Kilic and G. Aydin, "Turkish Vehicle License Plate Recognition Using Deep Learning," 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), Malatya, Turkey, pp. 1-5, 2018.
    [5]P. S. Chandran, N. B. Byju, R. U. Deepak, K. N. Nishakumari, P. Devanand and P. M. Sasi, "Missing Child Identification System Using Deep Learning and Multiclass SVM," 2018 IEEE Recent Advances in Intelligent Computational Systems (RAICS), Thiruvananthapuram, India, pp. 113-116, 2018.
    [6]D. King-Hele, "Erasmus Darwin’s improved design for steering carriages - and cars," Notes Rec. R. Soc., vol. 56, no. 1, pp. 41–62, 2002.
    [7]D. H. Hubel and T. N. Wiesel, "Receptive fields of single neurones in the cat's striate cortex," The Journal of Physiology, vol. 148, no. 3, pp. 574-591, 1959.
    [8]Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition." in Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
    [9]A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
    [10]K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 3rd Int. Conf. Learn. Represent. (ICLR 2015) - Conf. Track Proc., pp. 1–14, 2015.
    [11]C. Szegedy et al., "Going deeper with convolutions." 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, pp. 1-9, 2015.
    [12]K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
    [13]R. Girshick, J. Donahue, T. Darrell and J. Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation." 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, pp. 580-587, 2014.
    [14]R. Girshick, "Fast R-CNN," Proceedings of the IEEE International Conference on Computer Vision, pp. 1440-1448, 2015.
    [15]S. Ren, K. He, R. Girshick and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, 1 June 2017.
    [16]J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2016-December, pp. 779–788, 2016.
    [17]W. Liu et al., "SSD: Single shot multibox detector," Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 9905 LNCS, pp. 21–37, 2016.
    [18]J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
    [19]L. Wang, L. Guo and Y. He, "Path Planning Algorithm for Automatic Parallel Parking from Arbitrary Initial Angle." 2017 10th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, pp. 55-58, 2017.
    [20]C. C. Lin and M. S. Wang, "A vision based top-view transformation model for a vehicle parking assistant," Sensors, vol. 12, no. 4, pp. 4431–4446, 2012.
    [21]S. Kim, J. Kim and W. Kim, "A method of detecting parking slot in hough space and pose estimation using rear view image for autonomous parking system," 2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC), Beijing, pp. 452-454, 2016.
    [22]J. Canny, "A Computational Approach to Edge Detection," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, Nov. 1986.
    [23]R. O. Duda and P. E. Hart, "Use of the Hough Transformation to Detect Lines and Curves in Pictures," Commun. ACM, vol. 15, no. 1, pp. 11–15, 1972.
    [24]A. Scheuer and T. Fraichard, "Continuous-curvature path planning for car-like vehicles," Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97, Grenoble, France, pp. 997-1003 vol.2, 1997.
