
Author: 周煒霖 (Wei-Lin Chou)
Thesis Title: Visual Recognition, Searching, and Navigation of a Remote UAV Using Adjusted-Attention-Based YOLOX
Advisor: 黃志良 (Chih-Lyang Hwang)
Committee Members: 陳博現 (Bor-Sen Chen), 蘇順豐 (Shun-Feng Su), 莊家峰 (Chia-Feng Juang)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Academic Year of Graduation: 110
Language: Chinese
Number of Pages: 66
Chinese Keywords (translated): UAV, Smart city, Visual searching, Visual recognition, Visual guidance, Deep convolutional neural network
English Keywords: UAV, Visual searching, Visual recognition, Visual guidance, Deep CNN, YOLOX
Views: 268; Downloads: 0
Abstract (translated from Chinese): UAVs can provide efficient solutions for the development of smart cities. Besides the remote control of multiple UAVs, near-field and long-range visual recognition and guidance to a possibly static or dynamic landing platform, for wireless charging or collaboration with ground vehicles, is of critical importance. To satisfy the recognition-accuracy requirement at long range, reduce the computation time of edge computing, and lower power consumption to extend flight range, an Adjusted-Attention-based YOLOX (AA-YOLOX) is first trained for a specific landing platform (SLP) and then tested. To land on the SLP successfully and precisely, a marker is attached to assist the landing operation. Accordingly, this study also implements few-shot training for the SLP with a marker to improve the robustness of the recognition rate. In addition, two sets of learned weights are tested for the SLP with and without a marker, respectively. Finally, real-time outdoor verifications of near- and long-range visual guidance of a large UAV in various scenarios, including different backgrounds and weather conditions as well as partial occlusion of the SLP, confirm the feasibility, robustness, and practicality of our approach.


    Unmanned aerial vehicles (UAVs) can provide efficient and effective solutions for the development of smart cities. Besides the remote control of multiple UAVs, near-field visual recognition and guidance to a possible dynamic landing platform for wireless charging or collaboration with a ground vehicle is paramount. To satisfy the required recognition accuracy from a far distance, the reduction of computation time for edge computing, and the attenuation of power consumption for increasing flight range, an Adjusted-Attention-based YOLOX (AA-YOLOX) is first trained and tested for a specific landing platform (SLP). To successfully land on an SLP, a marker is attached to help the landing operation. In this situation, few-shot training for the SLP with a marker is also carried out to increase the robustness of the recognition rate. Furthermore, two sets of learned weights, for the SLP without and with a marker respectively, are tested in the corresponding situations. Ultimately, online outdoor verifications in various scenarios, including different backgrounds and weather conditions as well as partial occlusion, confirm the feasibility, robustness, and practicality of our approach.
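The two-weight scheme described in the abstract can be illustrated with a minimal sketch: depending on whether the landing scenario involves a marked SLP, the corresponding set of learned weights is selected before inference. The file names and the selection interface below are hypothetical illustrations, not the thesis's actual code.

```python
# Hypothetical sketch of selecting between two learned weight sets.
# Both file names are assumptions for illustration only.
WEIGHTS_SLP_PLAIN = "aa_yolox_slp.pth"          # trained on SLP without marker
WEIGHTS_SLP_MARKER = "aa_yolox_slp_marker.pth"  # few-shot fine-tuned on SLP with marker

def select_weights(marker_expected: bool) -> str:
    """Return the weight file matching the landing scenario."""
    return WEIGHTS_SLP_MARKER if marker_expected else WEIGHTS_SLP_PLAIN

# Example: a marked SLP is expected during the final landing phase.
print(select_weights(True))   # aa_yolox_slp_marker.pth
print(select_weights(False))  # aa_yolox_slp.pth
```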

    Abstract (Chinese) i
    Abstract (English) ii
    List of Figures vi
    List of Tables viii
    Chapter 1 Introduction and Literature Review 1
    Chapter 2 System Construction and Task Statement 4
      2.1 System Construction 4
        2.1.1 Frame 5
        2.1.2 Flight Controller 6
        2.1.3 Brushless DC Motor 7
        2.1.4 Propeller 7
        2.1.5 Electronic Speed Controller 8
        2.1.6 GPS 9
        2.1.7 Telemetry 9
        2.1.8 Battery 10
        2.1.9 Three-Axis Gimbal and Gimbal Damper 10
        2.1.10 ZED2 Stereo Camera and Insta360 GO2 11
        2.1.11 Jetson Xavier NX and 4G Module 12
        2.1.12 Remote Controller 13
      2.2 Task Statement 14
    Chapter 3 Adjusted-Attention YOLOX Visual Recognition 15
      3.1 Adjusted-Attention Method 15
      3.2 Dataset 18
      3.3 Cost Function and Training Law 20
      3.4 Adjusted-Attention Method for High Resolution 21
      3.5 Handling Objects on Sub-Image Boundaries 22
      3.6 Training and Validation of YOLOX-Nano 24
    Chapter 4 Experimental Results and Discussion 28
      4.1 Visual Recognition on the Jetson Xavier NX 28
      4.2 Visual Guidance to the SLP 33
      4.3 Five-Direction Search Strategy 36
      4.4 Distance Search Strategy 38
    Chapter 5 Conclusions and Future Research 42
    References 43
    Appendix I 47
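Sections 3.4 and 3.5 of the outline concern applying detection to high-resolution frames by splitting them into sub-images and handling objects that fall on sub-image boundaries. A common way to keep a boundary-straddling object whole in at least one sub-image is to tile the frame with overlap; the sketch below illustrates that idea under assumed tile and overlap sizes, and is not taken from the thesis.

```python
# Illustrative sketch: overlapping tiling of a high-resolution frame.
# Tile size (640) and overlap (128) are assumptions for illustration.
def tile_coords(width, height, tile=640, overlap=128):
    """Yield (x0, y0, x1, y1) windows covering the frame with overlap,
    so an object cut by one tile edge appears whole in a neighbor."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - tile, 0) + 1, step)) or [0]
    # Make sure the right and bottom edges are fully covered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y, min(x + tile, width), min(y + tile, height))
            for y in ys for x in xs]

# A 1920x1080 frame yields a 4x2 grid of overlapping 640x640 sub-images.
tiles = tile_coords(1920, 1080)
print(len(tiles))  # 8
```

Detections from the sub-images would then be mapped back to frame coordinates and de-duplicated (e.g., by non-maximum suppression) before guidance uses them.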

    [1] H. Menouar, I. Guvenc, K. Akkaya, A. S. Uluagac, A. Kadri, and A. Tuncer, “UAV-enabled intelligent transportation systems for the smart city: applications and challenges,” IEEE Communications Magazine, pp. 22-28, Mar. 2017.
    [2] S. Wan, J. Lu, P. Fan, and K. B. Letaief, “To smart city: public safety network design for emergency,” IEEE Access, vol. 6, pp. 1451-1465, 2018.
    [3] L. Hu and Q. Ni, “IoT-driven automated object detection algorithm for urban surveillance systems in smart cities,” IEEE Internet Things J., vol. 5, no. 2, pp. 747–754, Apr. 2018.
    [4] Y. Yao, Y. Sun, C. Phillips, and Y. Cao, “Movement-aware relay selection for delay-tolerant information dissemination in wildlife tracking and monitoring applications,” IEEE Internet Things J., vol. 5, no. 4, pp. 3079–3090, Aug. 2018.
    [5] S. Shakoor, Z. Kaleem, M. I. Baig, O. Chughat, T. Q. Duong, and L. D. Nguyen, “Role of UAVs in public safety communications: energy efficiency perspective,” IEEE Access, vol. 8, pp. 140665-140679, 2019.
    [6] M. Wan, G. Gu, W. Qian, K. Ren, X. Maldague, and Q. Chen, “Unmanned aerial vehicle video-based target tracking algorithm using sparse representation,” IEEE Internet Things J., vol. 6, no. 6, pp. 9689-9706, Dec. 2019.
    [7] Y. Jin, Z. Qian, and W. Yang, “UAV cluster-based video surveillance system optimization in heterogeneous communication of smart cities,” IEEE Access, vol. 8, pp. 55654-55666, 2020.
    [8] S. Aggarwal, N. Kumar, and S. Tanwar, “Blockchain-envisioned UAV communication using 6G networks: open issues, use cases, and future directions,” IEEE Internet Things J., vol. 8, no. 7, pp. 5416–5441, Apr. 2021.
    [9] W. Wang, Y. Peng, G. Cao, X. Guo, and N. Kwok, “Low-illumination image enhancement for night-time UAV pedestrian detection,” IEEE Trans. Ind. Inform., vol. 17, no. 8, pp. 5208-5220, Aug. 2021.
    [10] Y. Cao, G. Wang, D. Yan, and Z. Zhao, “Two algorithms for the detection and tracking of moving vehicle targets in aerial infrared image sequences,” Remote Sens., vol. 8, no. 1, p. 28, 2015.
    [11] X. Lu, J. Ji, Z. Xing, and Q. Miao, “Attention and feature fusion SSD for remote sensing object detection,” IEEE Trans. Instrum. Meas., DOI: 10.1109/TIM.2021.3052575.
    [12] S. Wu, C. Cai, X. Liu, W. Chai, and S. Yang, “Compact and free-positioning omnidirectional wireless power transfer system for unmanned aerial vehicle charging applications,” IEEE Trans. Power Electron., DOI: 10.1109/TPEL.2022.3158610.
    [13] J. Chen, Y. Zhang, J. Li, W. Du, Z. Chen, Z. Liu, H. Wang, and V. C. M. Leung, “Integrated air-ground vehicles for UAV emergency landing based on graph convolution network,” IEEE Internet Things J., DOI: 10.1109/JIOT.2021.3058192.
    [14] X. Li, J. Deng, and Y. Fang, “Few-shot object detection on remote sensing images,” IEEE Trans. Geosci. Remote Sens., vol. 60, Article no. 5601614, 2022.
    [15] L. Xie, T. Ahmad, L. Jin, Y. Liu, and S. Zhang, “A new CNN-based method for multi-directional car license plate detection,” IEEE Trans. Intell. Transport., vol. 19, no. 2, pp. 507-517, Feb. 2018.
    [16] S. Liu, D. Huang, and Y. Wang, “Pay Attention to Them: deep reinforcement learning-based cascade object detection,” IEEE Trans. Neural Netw. Learn. Syst., vol. 31, no. 7, pp. 2544-2556, Jul. 2020.
    [17] J.-M. Li, C.-W. Chen, and T.-H. Cheng, “Motion prediction and robust tracking of a dynamic and temporarily-occluded target by an unmanned aerial vehicle,” IEEE Trans. Control Syst. Technol., vol. 29, no. 4, pp. 1623-1635, Jul. 2021.
    [18] P. Sikora, L. Malina, M. Kiac, Z. Martinasek, K. Riha, J. Prinosil, L. Jirik, and G. Srivastava, “Artificial intelligence-based surveillance system for railway crossing traffic,” IEEE Sensor Journal, vol. 21, no.14, pp. 15515-15526, Jul. 2021.
    [19] X. Zhou, X. Xu, W. Liang, Z. Zeng, and Z. Yan, “Deep-learning-enhanced multitarget detection for end–edge–cloud surveillance in smart IoT,” IEEE Internet Things J., vol. 8, no. 16, pp. 12588-12596, Aug. 2021.
    [20] S.-J. Horng, J. Supardi, W. Zhou, C.-T. Lin, and B. Jiang, “Recognizing very small face images using convolution neural networks,” IEEE Trans. Intell. Transp. Syst., DOI: 10.1109/TITS.2020.3032396.
    [21] Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, “YOLOX: Exceeding YOLO series in 2021,” arXiv:2107.08430, pp. 1-7, 2021.
    [22] G. Wang, H. Zheng, and X. Zhang, “A robust checkerboard corner detection method for camera calibration based on improved YOLOX,” Frontiers in Physics, vol. 9, Article No. 819019, Feb. 2022.
    [23] Z. Cui, W. Yang, L. Chen, and H. Li, “MKN: Metakernel networks for few shot remote sensing scene classification,” IEEE Trans. Geosci. Remote Sens., vol. 60, Article no. 4705611, 2022.
    [24] L. Li, X. Yao, G. Cheng, and J. Han, “AIFS-DATASET for few-shot aerial image scene classification,” IEEE Trans. Geosci. Remote Sens., vol. 60, Article No. 5618211, 2022.
    [25] H.-T. Zhang, B.-B. Hu, Z. Xu, Z. Cai, B. Liu, X. Wang, T. Geng, S. Zhong, and J. Zhao, “Visual navigation and landing control of an unmanned aerial vehicle on a moving autonomous surface vehicle via adaptive learning,” IEEE Trans. Neural Netw. Learning Syst., vol. 32, no. 12, pp. 5345-5355, Dec. 2021.
    [26] J. Lin, Y. Wang, Z. Miao, H. Zhong, and R. Fierro, “Low-complexity control for vision-based landing of quadrotor UAV on unknown moving platform,” IEEE Trans. Ind. Inform., DOI: 10.1109/TII.2021.3129486.
    [27] J. González-Trejo, D. Mercado-Ravell, I. Becerra, and R. Murrieta-Cid, “On the visual-based safe landing of UAVs in populated areas: a crucial aspect for urban deployment,” IEEE Robot. Autom. Lett., vol. 6, no. 4, pp. 7901-7908, Oct. 2021.
    [28] J. Dong, X. Ren, S. Han, and S. Luo, “UAV vision aided INS/Odometer integration for land vehicle autonomous navigation,” IEEE Trans. Veh. Technol., DOI: 10.1109/TVT.2022.3151729.
    [29] Y. Pang, J. Cao, Y. Li, J. Xie, H. Sun, and J. Gong, “TJU-DHD: A diverse high-resolution dataset for object detection,” IEEE Trans. Image Process., vol. 30, pp. 207-219, 2021.
    [30] G. Chen, H. Wang, K. Chen, Z. Li, Z. Song, Y. Liu, W. Chen, and A. Knoll, “A survey of the four pillars for small object detection: multiscale representation, contextual information, super-resolution, and region proposal,” IEEE Trans. Syst. Man Cybern., Syst., vol. 52, no. 2, pp. 936-953, Feb. 2022.
    [31] C.-L. Hwang, H. M. Wu, and J. Y. Lai, “On-line obstacle detection, avoidance and mapping of an outdoor quadrotor using EKF based fuzzy tracking incremental control,” IEEE Access, vol. 7, pp. 160203-160216, 2019.
    [32] C.-L. Hwang, J. Y. Lai, and Z. S. Lin, “Sensor-fused fuzzy variable structure incremental control for partially known nonlinear dynamic systems and application to an outdoor quadrotor,” IEEE/ASME Trans. Mechatronics, vol. 25, no. 2, pp. 716-727, Apr. 2020.
    [33] C.-L. Hwang and G. H. Liao, “Real-time pose imitation by mid-size humanoid robot with servo-cradle-head RGB-D vision system,” IEEE Trans. Syst. Man Cybern., Syst., vol. 49, no. 1, pp. 181-191, Jan. 2019.

    Full text available from 2025/07/29 (campus network)
    Full text available from 2025/07/29 (off-campus network)
    Full text available from 2025/07/29 (National Central Library: Taiwan thesis system)