
Graduate Student: Ching-Hsiang Ko (柯景翔)
Thesis Title: Lane Detection System Based on Efficient Layer Aggregation Network and Cyclical Recurrent Feature-Shift Aggregator (基於高效層聚合網路及循環特徵位移聚合器之車道線偵測系統)
Advisor: Yung-Yao Chen (陳永耀)
Committee Members: Cheng-Ming Huang (黃正民), Chang-Hong Lin (林昌鴻), Jenq-Shiou Leu (呂政修), Chung-An Shen (沈中安), Yung-Yao Chen (陳永耀)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electronic and Computer Engineering
Publication Year: 2023
Academic Year: 111
Language: Chinese
Pages: 52
Keywords (Chinese): 車道線偵測、深度學習、先進駕駛輔助系統
Keywords (English): lane detection, deep learning, advanced driver assistance system


As the importance of autonomous driving technology continues to grow, the functionality of lane assistance has become an indispensable part of it. In particular, high accuracy and efficiency are crucial considerations in a lane assistance system. In this context, designing a lightweight and accurate lane recognition system has become particularly important.
This research proposes a deep-learning-based lane detection model architecture. To address the convergence deterioration commonly seen during the initial feature extraction of deep models, we utilize an efficient feature-extraction module and restructure the network. Considering the thin, elongated shape of lane markings, as well as the recognition challenges posed by extreme weather and variable road environments, we introduce a Recurrent Feature-Shift Aggregator and multi-branch decoders to propagate lane features more effectively, enhancing the model's ability to generalize in highly challenging environments. The final model achieves a good balance between accuracy and speed.
The method is validated across different scenarios on the TuSimple lane detection dataset, as well as on self-collected data and an experimental vehicle. Compared with other state-of-the-art lane detection models, it yields excellent results.
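The Recurrent Feature-Shift Aggregator referenced above propagates information across a feature map by repeatedly rolling it with halving strides and merging each shifted copy back in. The sketch below is an illustrative simplification, not the thesis's implementation: the function name `recurrent_shift_aggregate`, the toy 4x4 map, and the ReLU-plus-add merge (standing in for the learned 1D convolutions and four shift directions used in RESA) are all assumptions for demonstration.

```python
import numpy as np

def recurrent_shift_aggregate(feat, axis=0, iters=3):
    """Simplified cyclical feature-shift pass (RESA-style sketch).

    At iteration k the map is cyclically rolled by stride = size // 2**(k+1)
    along `axis`, and the shifted copy is merged back into the map. The
    real aggregator uses learned convolutions; ReLU + add is a stand-in."""
    size = feat.shape[axis]
    out = feat.astype(float)
    for k in range(iters):
        stride = size // 2 ** (k + 1)
        if stride == 0:
            break
        shifted = np.roll(out, stride, axis=axis)  # cyclic shift
        out = out + np.maximum(shifted, 0.0)       # merge shifted features
    return out

# Toy 4x4 map with a single activated "lane pixel" at row 0, column 1:
# after two shift-and-merge rounds it has spread down the whole column,
# which is why this scheme suits thin, elongated lane structures.
fmap = np.zeros((4, 4))
fmap[0, 1] = 1.0
agg = recurrent_shift_aggregate(fmap, axis=0, iters=2)
print(agg[:, 1])  # → [1. 1. 1. 1.]
```

With log2(size) iterations every position can receive information from every other position along the chosen axis, which is the property the aggregator exploits to keep lane features connected under occlusion or poor visibility.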

Acknowledgements I
Abstract (Chinese) II
Abstract III
Table of Contents IV
List of Figures VI
List of Tables VIII
Chapter 1 Introduction 1
  1.1 Preface 1
  1.2 Research Motivation 2
  1.3 Contributions 3
Chapter 2 Related Work 5
  2.1 Traditional Lane Detection Techniques 5
    2.1.1 Edge-Detection-Based Methods 5
    2.1.2 Color-Threshold-Based Methods 6
  2.2 Deep-Learning Lane Detection Techniques 8
    2.2.1 Segmentation-Based Lane Detection Methods 8
    2.2.2 Anchor-Based Lane Detection Methods 11
    2.2.3 Multi-Task-Based Lane Detection Methods 13
Chapter 3 Method 15
  3.1 Lane Detection Model Architecture 15
  3.2 Efficient Backbone and Neck 15
  3.3 Recurrent Feature-Shift Aggregators 20
  3.4 Decoder 22
  3.5 Loss Function 23
  3.6 Optimizer 24
Chapter 4 Experimental Results and Analysis 25
  4.1 Experimental Environment 25
  4.2 Datasets 27
    4.2.1 TuSimple Lane Detection Dataset 27
    4.2.2 NTUST-Lane Lane Detection Dataset 28
  4.3 Model Parameter Settings 30
  4.4 Performance Evaluation 30
    4.4.1 Validation on the TuSimple Dataset 31
    4.4.2 Validation on the NTUST-Lane Dataset 31
    4.4.3 Speed Validation 33
  4.5 Ablation Study 34
  4.6 Experimental Results 34
Chapter 5 Conclusion and Future Work 37
References 38


Full-text release date: 2025/08/21 (campus network)
Full-text release date: not authorized for public access (off-campus network)
Full-text release date: 2028/08/21 (National Central Library: Taiwan NDLTD system)