
Graduate Student: Wei-Chen Hsu (徐瑋辰)
Thesis Title: Edge Computing for U-Net Cloud Segmentation Accelerator on Satellite Payload
Advisor: Pei-Jun Lee (李佩君)
Committee Members: 莊智清, 劉小菁, 張陽郎, 李佩君 (Pei-Jun Lee)
Degree: Master
Department: Department of Electronic and Computer Engineering
Publication Year: 2024
Academic Year of Graduation: 112 (2023–2024)
Language: English
Pages: 78
Keywords: Cloud Segmentation, CNN Accelerator, Field Programmable Gate Array, H.265 Video Codec, Satellite Payload, Edge Computing



This thesis proposes a lightweight, quantized U-Net segmentation architecture that performs cloud segmentation as an edge-computing task on satellite payloads. It accurately identifies cloud features in high-resolution visible-spectrum satellite images. The thesis then implements a highly parallel dataflow strategy built on an efficient memory access method, yielding a U-Net inference architecture with high utilization, low hardware resource consumption, low latency, and low power consumption; the result is a cloud segmentation accelerator suited to remote sensing satellite payloads. Within this architecture, a "Flexible Convolution" module and a "Scalable Upsampling Interpolation" module are proposed, both of which can also be applied to other CNN-based segmentation architectures.
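The "quantized" aspect of the architecture can be illustrated with a minimal sketch of per-tensor symmetric int8 post-training quantization, a common scheme for CNN accelerators. The function names and the specific scheme here are assumptions for illustration, not taken from the thesis:

```python
import numpy as np

def quantize_int8(weights, eps=1e-8):
    """Per-tensor symmetric int8 quantization: map float weights onto
    [-127, 127] with a single scale factor (a common accelerator-friendly
    scheme; the thesis's exact quantization method may differ)."""
    scale = np.max(np.abs(weights)) / 127.0 + eps
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Toy weight tensor: the largest magnitude (1.27) maps to the int8 extreme.
w = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

On hardware, only the int8 values and one scale per tensor are stored, so multiply-accumulate units can run in narrow integer arithmetic and memory traffic drops to a quarter of float32.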
Furthermore, this thesis proposes a post-processing method for cloud detection on satellite payloads, integrating Xilinx's FPGA-based H.265 processor into the edge computing device. Satellite images with low cloud coverage are compressed with the H.265 algorithm, producing data with high image information content at a high compression ratio and reducing the downlink data volume for remote sensing satellites.
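The gating logic described above, deciding which images are worth encoding and downlinking based on the segmentation result, can be sketched as follows. The 30% threshold and the function names are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def cloud_fraction(mask):
    """Fraction of pixels labeled as cloud in a binary segmentation mask
    (nonzero = cloud), as produced by the U-Net accelerator."""
    return float(np.mean(mask > 0))

def select_for_downlink(masks, threshold=0.3):
    """Indices of images whose cloud coverage is below `threshold`; only
    these would be handed to the H.265 encoder for compression and
    downlink. The 30% cutoff is a hypothetical example."""
    return [i for i, m in enumerate(masks) if cloud_fraction(m) < threshold]

# Toy 4x4 masks: the first is mostly clear, the second mostly cloudy.
clear = np.zeros((4, 4), dtype=np.uint8)
clear[0, 0] = 1                       # 1 of 16 pixels is cloud
cloudy = np.ones((4, 4), dtype=np.uint8)
cloudy[0, :2] = 0                     # 14 of 16 pixels are cloud
selected = select_for_downlink([clear, cloudy])
```

Discarding heavily clouded frames before encoding means the limited downlink budget is spent only on images with usable ground information.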

Abstract (Chinese) II
ABSTRACT III
Acknowledgements IV
LIST OF CONTENTS V
LIST OF FIGURES VII
LIST OF TABLES IX
CHAPTER 1 INTRODUCTION 1
1.1 Introduction 1
1.2 Motivation 3
1.3 Organization 6
CHAPTER 2 RELATED WORKS 7
2.1 Review of Cloud Detection 7
2.2 Convolutional Neural Network for Cloud Segmentation 9
2.3 Lightweight CNN for Hardware Implementation 13
CHAPTER 3 PROPOSED FPGA-BASED LIGHTWEIGHT U-NET CLOUD SEGMENTATION 16
3.1 Lightweight CNN Segmentation Model 16
3.1.1 CNN Development Method 16
3.1.2 Lightweight U-Net Architecture 17
3.1.3 Model Quantization 19
3.2 H.265 Video Compression After Cloud Segmentation 22
CHAPTER 4 IMPLEMENTATION OF CNN SEGMENTATION ACCELERATOR ON FPGA 25
4.1 FPGA Platform and Environment 25
4.2 Overall Hardware Design for Functionality Integration 26
4.3 High Efficiency Direct Memory Access Method 30
4.4 Flexible Convolution for Hardware Design 34
4.4.1 Convolution Dataflow Optimization Design 34
4.4.2 Flexible Convolution Design 37
4.4.3 Channel Zero Padding Design 41
4.4.4 Dual Data Synchronizer Design 44
4.5 Scalable Upsampling Interpolation for Hardware Design 46
4.5.1 Upsampling Dataflow Design for Interpolation Method 46
4.5.2 Upsampling Interpolation Design 49
CHAPTER 5 EXPERIMENT RESULTS 52
5.1 Dataset 52
5.2 Cloud Segmentation Evaluation 53
5.2.1 Evaluation Metrics 53
5.2.2 Software Inference Result 54
5.3 Hardware Performance Evaluation 60
5.3.1 Hardware Validation Result 60
5.3.2 Hardware Implementation Result 67
5.3.3 H.265 Video Codec Implementation Result 69
5.4 Hardware Performance Comparison with Other Papers 71
CHAPTER 6 CONCLUSION AND FUTURE WORK 73
6.1 Conclusion 73
6.2 Future Work 74
REFERENCES 75

