
Graduate Student: Yuan-Sheng Lin (林原生)
Thesis Title: 一個應用深度神經網路與核相關濾波器於動態多角度低空影像船舶偵測及軌跡追蹤方法
A Dynamic Multi-angle Ship Detection and Tracking Method for Low Aerial Images Using Deep Neural Networks and Kernelized Correlation Filters
Advisor: Chin-Shyurng Fahn (范欽雄)
Committee Members: Bin-Sheng Jong (鍾斌賢), Yi-Ling Chen (陳怡伶), Yen-Lin Chen (陳彥霖)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2020
Academic Year of Graduation: 108
Language: English
Number of Pages: 65
Chinese Keywords: 海面影像處理、低海面空照圖、船隻偵測、船隻追蹤、核相關濾波器、深度神經網路
English Keywords: image processing of sea surface, low sea aerial image, ship detection, ship tracking, kernelized correlation filter, deep neural network
    In recent years, activities at sea have become increasingly frequent, and the monitoring of ships and warships has gradually drawn public attention. Monitoring increasingly complex shipping routes by manpower not only raises personnel costs but may also reduce efficiency and fatigue the staff. We therefore hope to replace manpower with scientific methods to improve efficiency and reduce costs. In image processing, however, ship detection is a difficult problem: real-world conditions at sea are complicated, and factors such as ship color, the docking angles of moored ships, and sailing directions all affect detection.
    In view of these difficulties, this thesis designs a computer vision method that detects ships in low sea aerial images and subsequently tracks them, and improves the kernelized correlation filter used in the tracking algorithm so that ships of multiple scales on the sea receive properly fitted bounding boxes. The system has two main parts. First, a deep neural network recognizes ships in low sea aerial images and outputs bounding boxes on the system screen to show the user where each ship is located; the system then keeps tracking those ships. Experimental results show that our ship detection model achieves a recall of 85%, a precision of 93.2%, and a mean average precision of 81.2%. For our improved kernelized correlation filter tracking algorithm, we tested two videos, one static and one dynamic; their multiple object tracking accuracy (MOTA) scores are 0.899 and 0.681, respectively.


    Sea activities have become more and more frequent in recent years, so the monitoring of ships and warships is gradually receiving attention from the public. However, monitoring increasingly complex routes by manpower not only increases personnel costs but also leads to lower efficiency and personnel fatigue. We therefore hope to replace manpower with scientific methods to improve efficiency and reduce costs, but ship detection is a difficult issue in image processing. In the real world, conditions on the sea are complicated, including the color of the ship, the ship's docking angle, and the ship's sailing direction.
    Accordingly, we design a method to detect and track ships in low aerial images through computer vision techniques, and improve the tracking algorithm called the kernelized correlation filter (KCF), so that multi-scale ships on the sea have well-fitted bounding boxes. There are two stages in our system. First, we detect ships on the sea in low aerial images using a deep learning method, and output the bounding boxes on our system screen to inform users of the location of each ship. Afterward, our system keeps tracking the ships that were detected. According to the experimental results, the recall, precision, and mean average precision (mAP) of our ship detection model reach 85%, 93.2%, and 81.2%, respectively. For the improved kernelized correlation filter acting as the ship tracking algorithm, we test two videos, one static and the other dynamic. The multiple object tracking accuracy (MOTA) of the static video is 0.899, while that of the dynamic video is 0.681.

    Contents
    中文摘要 (Chinese Abstract) i
    Abstract ii
    誌謝 (Acknowledgments) iii
    Contents iv
    List of Figures vi
    List of Tables ix
    Chapter 1 Introduction 1
      1.1 Overview 1
      1.2 Motivation 2
      1.3 System Descriptions 3
      1.4 Thesis Organization 4
    Chapter 2 Related Works 5
      2.1 Ship Detection Using Synthetic Aperture Radar 5
      2.2 Ship Detection Using Visual Images 8
    Chapter 3 Ship Detection Method 14
      3.1 Introduction to One Stage Object Detection 14
      3.2 Anchor Boxes and Bounding Box Prediction 16
      3.3 Backbone Model and Feature Extraction 20
      3.4 Residual Blocks of Residual Network 22
      3.5 Loss Function and Output Activation 23
    Chapter 4 Ship Tracking Method 26
      4.1 Circulant Matrix 26
      4.2 Ridge Regression and Kernel Trick 28
      4.3 Kernelized Correlation Filters 30
      4.4 Ship Tracking Map Establishment 34
    Chapter 5 Experimental Results and Discussions 36
      5.1 Experimental Setup 36
      5.2 Results of Ship Detection 41
      5.3 Results of Ship Tracking 44
    Chapter 6 Conclusions and Future Works 49
      6.1 Contribution and Conclusions 49
      6.2 Future Works 50
    References 51
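The building blocks listed in Chapter 4 (circulant shifts, kernel ridge regression solved in the Fourier domain) can be illustrated with a minimal single-channel KCF sketch. This follows the standard formulation of Henriques et al. [9], not the thesis's improved multi-scale variant, and the patch size, kernel width, and regularization value below are illustrative assumptions:

```python
import numpy as np

def gaussian_correlation(x, z, sigma=0.5):
    # Gaussian kernel between z and every cyclic shift of x, evaluated
    # all at once in the Fourier domain (the circulant-matrix trick).
    c = np.fft.ifft2(np.fft.fft2(z) * np.conj(np.fft.fft2(x))).real
    d = (np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * c) / x.size
    return np.exp(-np.maximum(d, 0.0) / (sigma ** 2))

def kcf_train(x, y, lam=1e-4):
    # Kernel ridge regression in closed form:
    # alpha_hat = fft(y) / (fft(k_xx) + lambda)
    k = gaussian_correlation(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def kcf_detect(alpha_hat, x_model, z):
    # Response over all cyclic shifts; argmax gives the displacement.
    k = gaussian_correlation(x_model, z)
    return np.fft.ifft2(alpha_hat * np.fft.fft2(k)).real

# Train on a random 32x32 patch with a Gaussian label peaked at (0, 0).
rng = np.random.RandomState(0)
x = rng.randn(32, 32)
g = np.arange(32)
wrap = np.minimum(g, 32 - g)  # wrapped distance from index 0
y = np.exp(-(wrap[:, None] ** 2 + wrap[None, :] ** 2) / (2 * 2.0 ** 2))

alpha_hat = kcf_train(x, y)
z = np.roll(x, shift=(5, 3), axis=(0, 1))  # target moved 5 rows, 3 cols
resp = kcf_detect(alpha_hat, x, z)
print(np.unravel_index(np.argmax(resp), resp.shape))  # (5, 3)
```

Because circulant matrices are diagonalized by the discrete Fourier transform, both training and detection cost only a few FFTs per frame, which is what makes KCF fast enough for real-time ship tracking.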

    References
    [1] J. Jiao et al., “A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection,” IEEE Access, vol. 6, pp. 20881-20892, 2018. doi: 10.1109/ACCESS.2018.2825376
    [2] F. Mazzarella et al., “SAR ship detection and self-reporting data fusion based on traffic knowledge,” IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 8, pp. 1685-1689, 2015. doi: 10.1109/LGRS.2015.2419371
    [3] M. Kang et al., “A modified Faster R-CNN based on CFAR algorithm for SAR ship detection,” in Proceedings of the International Workshop on Remote Sensing with Intelligent Processing, Shanghai, China, pp. 1-4, 2017.
    [4] X. Bao et al., “Context modeling combined with motion analysis for moving ship detection in port surveillance,” Journal of Electronic Imaging, vol. 22, no. 4, p. 041114, 2013. doi: 10.1117/1.JEI.22.4.041114
    [5] Y. Zhang, Q. Z. Li, and F. Zang, “Ship detection for visual maritime surveillance from non-stationary platforms,” Ocean Engineering, vol. 141, pp. 53-63, 2017. doi: 10.1016/j.oceaneng.2017.06.022
    [6] G. K. Høye et al., “Space-based AIS for global maritime traffic monitoring,” Acta Astronautica, vol. 62, nos. 2-3, pp. 240-245, 2008. doi: 10.1016/j.actaastro.2007.07.001
    [7] T. Eriksen et al., “Maritime traffic monitoring using a space-based AIS receiver,” Acta Astronautica, vol. 58, no. 10, pp. 537-549, 2006. doi: 10.1016/j.actaastro.2005.12.016
    [8] International Maritime Organization, “The International Convention for the Safety of Life at Sea (SOLAS),” 2004. [Online]. Available: http://www.imo.org/en/About/Pages/ContactUs.aspx [Accessed: Jun. 5, 2020]
    [9] J. F. Henriques et al., “High-speed tracking with kernelized correlation filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 583-596, 2014. doi: 10.1109/TPAMI.2014.2345390
    [10] Y. Wang et al., “Automatic ship detection based on RetinaNet using multi-resolution Gaofen-3 imagery,” Remote Sensing, vol. 11, no. 5, pp. 531-544, 2019. doi: 10.3390/rs11050531
    [11] T. Y. Lin et al., “Focal loss for dense object detection,” in Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, pp. 2980-2988, 2017.
    [12] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 [cs.CV], 2014.
    [13] K. He et al., “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Nevada, pp. 770-778, 2016.
    [14] G. Huang et al., “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Hawaii, pp. 4700-4708, 2017.
    [15] R. Wang et al., “An improved Faster R-CNN based on MSER decision criterion for SAR image ship detection in harbor,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, pp. 1322-1325, 2019. doi: 10.1109/IGARSS.2019.8898078
    [16] S. Ren et al., “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Proceedings of the Advances in Neural Information Processing Systems, Quebec, Canada, pp. 91-99, 2015. doi: 10.1109/TPAMI.2016.2577031
    [17] M. Donoser and H. Bischof, “Efficient maximally stable extremal region (MSER) tracking,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, New York, vol. 1, pp. 553-560, 2006.
    [18] Z. Shao et al., “Saliency-aware convolution neural network for ship detection in surveillance video,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 3, pp. 781-794, 2019. doi: 10.1109/TCSVT.2019.2897980
    [19] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Hawaii, pp. 7263-7271, 2017.
    [20] Y. Wang et al., “Ship detection based on deep learning,” in Proceedings of the IEEE International Conference on Mechatronics and Automation, Beijing, China, pp. 275-279, 2019.
    [21] Z. Chen et al., “Automatic detection and tracking of ship based on mean shift in corrected video sequences,” in Proceedings of the 2nd International Conference on Image, Vision and Computing, Chengdu, China, pp. 449-453, 2017.
    [22] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv:1804.02767 [cs.CV], 2018.
    [23] T. Kanungo et al., “An efficient k-means clustering algorithm: Analysis and implementation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 881-892, 2002. doi: 10.1109/TPAMI.2002.1017616

    Full text available from 2025/08/10 (campus network)
    Full text available from 2030/08/10 (off-campus network)
    Full text available from 2030/08/10 (National Central Library: Taiwan NDLTD system)