Student: 李秉和 (Bing-He Li)
Thesis Title: 具自我訓練強化與三維點級混合場景理解之多任務網路
(Self-training Enhanced Multi-task Network for 3D Point-level Hybrid Scene Understanding)
Advisor: 陸敬互 (Ching-Hu Lu)
Committee Members: 蘇順豐, 鍾聖倫, 黃正民, 李俊賢
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2023
Graduation Academic Year: 112 (ROC calendar)
Language: Chinese
Pages: 72
Chinese Keywords: 深度學習, 點雲, 多任務學習, 三維語意分割, 場景流估計, 自我訓練
English Keywords: deep learning, point cloud, multi-task learning, 3D semantic segmentation, scene flow estimation, self-training

Recently, as artificial intelligence has advanced rapidly, its applications have been integrated into a wide range of vehicles to improve driving safety and the overall driving experience. To keep autonomous vehicles and other intelligent transportation systems operating reliably in complex, changing traffic environments, fine-grained and accurate environmental perception has become indispensable. Although some studies have begun to process multiple point-cloud perception tasks in parallel, they typically achieve only "box-level" hybrid scene understanding: the environment is interpreted at the coarse level of object detection boxes, and finer "point-level" variations cannot be recognized. This study therefore proposes a Multi-task Network with Hierarchical Channel-Attention Gating, which flexibly handles 3D semantic segmentation and scene flow estimation simultaneously within a single architecture, thereby providing point-level hybrid scene understanding. In addition, we identified an "underutilization of training data" problem in existing work: because point-cloud annotation is complex and laborious, existing datasets label only a subset of points (e.g., every few points) to reduce annotation cost, leaving the unlabeled points unused. To incorporate unlabeled point-cloud data into the learning process and further improve perception accuracy, this study proposes Point-level Self-training for the Multi-task Network, which uses flow information to generate reliable pseudo-labels for unlabeled points and then folds the pseudo-labeled data back into training to enrich the model's learning. Experimental results on the Waymo Open Dataset demonstrate the advantages of the proposed network: compared with the latest related work, it improves mIoU by 1.94% on 3D semantic segmentation and EPE3D by 71.18% on scene flow estimation. Under the same evaluation protocol, it also saves 9.68% of training time compared with existing work that learns 3D semantic segmentation and scene flow synchronously with single-task networks. When Point-level Self-training is further integrated into the multi-task network, performance rises again: the mIoU improvement grows to 2.35% and the EPE3D improvement to 72.27%. These results not only validate the effectiveness of the proposed methods but also lay a solid foundation for future research.
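The abstract's first contribution centers on channel-attention gates that let one shared encoder serve two task decoders. The record does not spell out the gate design, so the following is a minimal sketch assuming a squeeze-and-excitation style gate applied per task at each encoder level; the class names, feature shapes, and reduction factor are illustrative assumptions, not the thesis's exact architecture.

```python
# Sketch (assumed design): per-task channel-attention gates over a shared
# point-cloud encoder, one gate per (task, encoder level).
import torch
import torch.nn as nn

class ChannelAttentionGate(nn.Module):
    """Squeeze-and-excitation style gate: re-weights encoder channels
    per task before the features reach that task's decoder."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C) point features from one encoder level
        squeeze = feats.mean(dim=1)           # (B, C) global channel context
        gate = self.fc(squeeze).unsqueeze(1)  # (B, 1, C) channel weights
        return feats * gate                   # task-specific re-weighted features

class HierarchicalGating(nn.Module):
    """Hypothetical layout: one gate per task at each encoder level."""
    def __init__(self, level_channels=(64, 128, 256), tasks=("seg", "flow")):
        super().__init__()
        self.gates = nn.ModuleDict({
            task: nn.ModuleList(ChannelAttentionGate(c) for c in level_channels)
            for task in tasks
        })

    def forward(self, encoder_feats, task: str):
        # encoder_feats: list of (B, N_l, C_l) tensors, one per level
        return [g(f) for g, f in zip(self.gates[task], encoder_feats)]
```

Under these assumptions, each task reads its own re-weighted copy of the shared features, e.g. `seg_feats = gating(encoder_feats, task="seg")`, which is one common way to reconcile two tasks that need different channels from the same backbone.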
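The second contribution, point-level self-training, is described as using flow information to produce pseudo-labels for points the dataset leaves unlabeled. A plausible reading is that labels from a sparsely annotated frame are carried along the predicted scene flow and assigned to nearby points in the next frame, kept only when the geometric match is close. The sketch below follows that reading; the function name, tensor interfaces, and the 0.1 m threshold are assumptions, not the thesis's exact procedure.

```python
# Sketch (assumed procedure): flow-guided pseudo-label generation.
import torch

def generate_pseudo_labels(points_t, labels_t, flow_t, points_t1,
                           max_dist: float = 0.1, ignore_index: int = -1):
    """points_t, flow_t: (N, 3); labels_t: (N,) with ignore_index marking
    unlabeled points; points_t1: (M, 3). Returns (M,) pseudo-labels."""
    labeled = labels_t != ignore_index
    if labeled.sum() == 0:  # nothing to propagate from this frame
        return torch.full((points_t1.shape[0],), ignore_index,
                          dtype=labels_t.dtype, device=labels_t.device)
    # Carry labeled points along the predicted flow into frame t+1.
    warped = points_t[labeled] + flow_t[labeled]          # (K, 3)
    dists = torch.cdist(points_t1, warped)                # (M, K) pairwise distances
    min_dist, idx = dists.min(dim=1)
    pseudo = labels_t[labeled][idx]                       # nearest-neighbor label transfer
    pseudo[min_dist > max_dist] = ignore_index            # keep only confident matches
    return pseudo
```

The distance cutoff is what makes the pseudo-labels "reliable" in the abstract's sense: points with no close warped counterpart stay unlabeled rather than receiving a guessed class.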
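For context on the reported numbers, the two metrics are conventionally defined as below (standard definitions, not quoted from the thesis). mIoU averages per-class intersection-over-union, where higher is better; EPE3D is the mean Euclidean error of the predicted flow vectors, where lower is better, so the 71.18% and 72.27% figures read as error reductions.

```latex
\mathrm{mIoU} = \frac{1}{C} \sum_{c=1}^{C}
    \frac{\mathrm{TP}_c}{\mathrm{TP}_c + \mathrm{FP}_c + \mathrm{FN}_c},
\qquad
\mathrm{EPE3D} = \frac{1}{N} \sum_{i=1}^{N}
    \left\lVert \hat{\mathbf{f}}_i - \mathbf{f}_i \right\rVert_2
```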

Table of Contents:
Chinese Abstract
Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Motivation
1.2 Literature Review
1.2.1 The Issue of "Box-level-only Hybrid Scene Understanding"
- 3D Point Cloud Semantic Segmentation
- Point Cloud Scene Flow Estimation
- 3D Point Cloud Multi-task Learning
1.2.2 The Issue of "Point-level Multi-task Learning Not Utilizing Unlabeled Data"
- 3D Self-training
1.3 Contributions and Thesis Organization
Chapter 2 System Design Philosophy and Architecture Overview
2.1 System Architecture Overview
2.2 System Application Scenarios
Chapter 3 Multi-task Network with Hierarchical Channel-Attention Gating
3.1 Overview of the Multi-task Network Architecture
3.1.1 Analysis of Single-task Learning
3.1.2 Design Philosophy and Architecture of the Multi-task Network
3.2 3D Point Cloud Convolution
3.3 Encoder Network
3.4 Hierarchical Channel-Attention Gating Network
3.5 Decoder Networks
3.5.1 3D Semantic Segmentation Decoder
3.5.2 Scene Flow Estimation Decoder
3.6 Joint Training
Chapter 4 Point-level Self-training for the Multi-task Network
4.1 Overview and Observations
4.2 Supervised Learning on Labeled Data
4.3 Pseudo-label Generation
4.4 Retraining
Chapter 5 Experimental Results and Discussion
5.1 Experimental Platform
5.2 Datasets and Evaluation Metrics
5.2.1 Datasets
5.2.2 Evaluation Metrics
5.3 Multi-task Network with Hierarchical Channel-Attention Gating
5.3.1 Training Parameter Settings
5.3.2 Experiments on Learning Strategies
5.3.3 Experiments on the Gating Network
5.3.4 Experiments on Joint Training
5.4 Point-level Self-training for the Multi-task Network
5.4.1 Training Parameter Settings and Procedure
5.4.2 Experiments on Annotation Settings
5.5 Comparison with Related Work
Chapter 6 Conclusions and Future Research Directions
References
Committee Comments and Responses
