
Author: 林子捷 (Tzu-Chieh Lin)
Title: 基於人工和學習特徵之三維點雲中行人與車輛偵測
Detecting Pedestrian and Vehicle in 3D Point Cloud based on Hand-crafted and Deep-learned Features
Advisor: 花凱龍 (Kai-Lung Hua)
Committee: 花凱龍 (Kai-Lung Hua), 鄭文皇 (Wen-Huang Cheng), 陳建中 (Jiann-Jone Chen), 陳永耀 (Yung-Yao Chen), 郭彥甫 (Yan-Fu Kuo)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of publication: 2018
Graduation academic year: 106
Language: English
Pages: 41
Chinese keywords: 光學雷達 (LiDAR), 行人 (pedestrian), 車輛 (vehicle), 物件偵測 (object detection), 深度學習 (deep learning)
Keywords: LiDAR, pedestrian, vehicle, object detection, deep learning
  • In recent years, self-driving cars have become an important goal of global development. An autonomous driving system must detect pedestrians and vehicles accurately whether it is day or night. This means we cannot rely on an ordinary RGB camera to sense the surroundings, because it is highly sensitive to lighting conditions: images captured at night or on rainy days are unclear. We therefore use LiDAR, a sensor that generates a three-dimensional point cloud in which each point represents the distance to an object. In this thesis, we propose a method that detects pedestrians and vehicles using only the three-dimensional point cloud generated by the LiDAR. Our method first projects the three-dimensional point cloud, including its point information, onto a two-dimensional plane. We then extract features from the three-dimensional point cloud and the projected two-dimensional images, together with features from a convolutional neural network, to train a support vector machine (SVM) that detects pedestrians and vehicles. Our proposed method achieves a significant improvement in F1-measure over existing state-of-the-art methods.


    In recent years, self-driving cars have become an important goal of global development. Autopilot systems need to detect pedestrians and vehicles with high precision and recall regardless of whether it is day or night. This means that we cannot rely on ordinary cameras to sense the surroundings, due to their sensitivity to lighting conditions. An alternative to images is light detection and ranging (LiDAR) sensors, which produce three-dimensional point clouds in which each point represents the distance to an object. However, most pedestrian and vehicle detection systems are designed for image inputs rather than distance point clouds. In this paper, we propose a method for detecting pedestrians and vehicles using only the three-dimensional point clouds generated by the LiDAR. Our approach first projects the three-dimensional point cloud, together with its point information, onto a two-dimensional plane. We then extract both hand-crafted features and learned features from a convolutional neural network in order to train a support vector machine (SVM) to detect pedestrians and vehicles. Our proposed method achieves significant improvements in terms of F1-measure over prior state-of-the-art methods.
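    The pipeline described above (project the point cloud onto a 2D plane, extract hand-crafted descriptors, train an SVM) can be sketched as follows. This is a minimal illustration on synthetic data, not the thesis's implementation: the function names, the grid-based depth projection, and the toy descriptor are all hypothetical stand-ins (the thesis uses richer features such as point-cloud statistics and CNN activations, and the CNN branch is omitted here to keep the sketch self-contained).

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def project_to_depth_image(points, grid=(32, 32)):
        """Project a point cloud (N x 3, columns x, y, z) onto a 2D image:
        bin x/y into a grid and keep the highest z seen in each cell.
        Hypothetical stand-in for the thesis's 2D projection step."""
        h, w = grid
        img = np.zeros(grid)
        xs = np.clip(((points[:, 0] - points[:, 0].min())
                      / (np.ptp(points[:, 0]) + 1e-9) * (w - 1)).astype(int), 0, w - 1)
        ys = np.clip(((points[:, 1] - points[:, 1].min())
                      / (np.ptp(points[:, 1]) + 1e-9) * (h - 1)).astype(int), 0, h - 1)
        for x, y, z in zip(xs, ys, points[:, 2]):
            img[y, x] = max(img[y, x], z)
        return img

    def hand_crafted_features(img):
        """Toy hand-crafted descriptor: a height histogram plus simple
        gradient and intensity statistics of the projected image."""
        hist, _ = np.histogram(img, bins=8, range=(0, img.max() + 1e-9))
        gy, gx = np.gradient(img)
        return np.concatenate([hist / (hist.sum() + 1e-9),
                               [gx.std(), gy.std(), img.mean()]])

    # Synthetic demo: "pedestrian-like" clouds (narrow, tall) vs
    # "vehicle-like" clouds (wide, flat).
    rng = np.random.default_rng(0)
    def make_cloud(tall, n=200):
        if tall:
            return rng.normal([0, 0, 1.2], [0.2, 0.2, 0.8], (n, 3))
        return rng.normal([0, 0, 0.5], [1.0, 2.0, 0.1], (n, 3))

    X = np.array([hand_crafted_features(project_to_depth_image(make_cloud(t)))
                  for t in ([True] * 20 + [False] * 20)])
    y = np.array([1] * 20 + [0] * 20)  # 1 = pedestrian-like, 0 = vehicle-like

    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```

    In a real system the per-class features would come from labeled segments of LiDAR scans, and the classifier would be evaluated on held-out frames rather than on its own training data.
    
    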

    Abstract (Chinese)
    Abstract
    Acknowledgement
    Contents
    List of Figures
    List of Tables
    1 Introduction
    2 Related Works
    3 Method
      3.1 Pre-processing
      3.2 Feature Extraction
        3.2.1 Hand-crafted Features
        3.2.2 Learned Features from Convolutional Neural Networks (CNN)
      3.3 Pedestrian and Vehicle Detection
    4 Experimental Design
    5 Experimental Result
      5.1 Example of Failures
    6 Conclusions
      6.1 Future Work
    References

