
Author: Anjana Kumar
Title: Development of Single Camera Based Advanced Driver Assistance System for Navigation in Highway
Advisor: Chung-Hsien Kuo
Committee: Hansjoerg (Jacky) Baltes, Shun-Feng Su, Allen Jong-Woei Whang
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2018
Graduation academic year: 106
Language: English
Pages: 70
Keywords: Advanced Driver Assistance System (ADAS), deep learning, Random Sample Consensus (RANSAC)
Abstract:
Recent developments in deep learning and GPU computing have enabled researchers to extract information-rich features for a variety of computer vision and localization problems, and have aided the development of smart assistance systems, such as driver assistance and surveillance systems, that improve safety. This thesis presents a single-camera Advanced Driver Assistance System (ADAS) for highway navigation. The main objective of this project is to develop a robust lane detection and vehicle identification system. The Random Sample Consensus (RANSAC) algorithm is used to detect road lane boundaries. To detect vehicles in the scene, a deep-learning object detector, the Single Shot MultiBox Detector (SSD), is used. This detector is a feed-forward network that produces a set of candidate bounding boxes together with scores for the presence of an object within each box. For this project, the network is trained on the KITTI benchmark dataset, and the detector can process up to 59 FPS. With known parameters, namely the height at which the camera is mounted, the tilt angle of the camera, and the bounding box obtained from the deep-learning vehicle detection system, the distance between the camera and each vehicle detected in the scene can be computed. This is done using a series of trigonometric relationships between the image formed in the camera and the real-world scene. A camera calibration method is also proposed to calculate a few parameters essential for distance estimation.
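The RANSAC lane-boundary fitting mentioned in the abstract can be sketched as generic RANSAC line fitting. This is a minimal illustration under stated assumptions, not the thesis's exact implementation: the function name is hypothetical, and in practice the input points would be lane-marking pixels extracted from the camera image.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=2.0, seed=0):
    """Fit a line y = m*x + b to 2D points with RANSAC.

    Repeatedly samples two points, fits the line through them, and keeps
    the model with the most inliers (points within inlier_tol perpendicular
    distance of the line); finally refits a least-squares line to the
    consensus set.
    """
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:  # skip vertical sample pairs in this sketch
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # perpendicular distance of every point to the line m*x - y + b = 0
        d = np.abs(m * points[:, 0] - points[:, 1] + b) / np.hypot(m, 1.0)
        inliers = d < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit on the consensus set
    m, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return m, b, best_inliers
```

The two-point minimal sample makes each hypothesis cheap, and the final refit averages out the sensor noise that the consensus step tolerates; this is what makes RANSAC robust to the outlier pixels (shadows, other markings) that defeat a plain least-squares fit.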



Table of Contents:
Chapter 1 Introduction
Chapter 2 Literature Review
Chapter 3 Overview and Data Collection
Chapter 4 Road Lane Detection System
Chapter 5 Vehicle Detection System
Chapter 6 Distance Calculation and Camera Calibration
Chapter 7 Conclusion and Future Work
References
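The distance calculation summarized in the abstract (Chapter 6) uses the camera's mounting height, its tilt angle, and the detected bounding box. One common flat-road, pinhole-camera formulation of that trigonometry is sketched below; the function name, the use of the bounding box's bottom edge as the road-contact point, and the image-centre principal point are assumptions for illustration, not necessarily the thesis's exact derivation.

```python
import math

def ground_distance(y_bottom, img_height, focal_px, cam_height, tilt_deg):
    """Estimate the ground distance to a vehicle from the bottom edge of
    its bounding box, assuming a flat road and a pinhole camera.

    y_bottom    pixel row of the box's bottom edge (0 = top of image)
    img_height  image height in pixels
    focal_px    focal length in pixels (from calibration)
    cam_height  camera mounting height above the road, in metres
    tilt_deg    downward tilt of the optical axis, in degrees

    The bottom edge is taken as the point where the vehicle meets the
    road; the ray through that pixel makes an angle (tilt + phi) with
    the horizontal, and intersecting it with the road plane gives the
    distance.
    """
    cy = img_height / 2.0                        # assume principal point at centre
    phi = math.atan((y_bottom - cy) / focal_px)  # ray angle below the optical axis
    angle = math.radians(tilt_deg) + phi         # total angle below horizontal
    if angle <= 0:
        raise ValueError("ray at or above the horizon: no road intersection")
    return cam_height / math.tan(angle)
```

Note how the result depends on calibrated quantities (focal length, tilt, mounting height), which is why the abstract pairs the distance estimator with a camera calibration procedure.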


Full text release date: 2023/02/08 (campus network)
Full text release date: not authorized for public release (off-campus network)
Full text release date: not authorized for public release (National Central Library: Taiwan NDLTD system)