
Graduate Student: Muhamad Amirul Haq
Thesis Title: Leveraging Large Window Attention and Hybrid Convolution-Transformer Encoder for Unsupervised Domain Adaptation
Advisor: Shanq-Jang Ruan
Committee Members: Jenq-Shiou Leu, Wen-Hsien Fang, Yie-Tarng Chen, Greg Chung-Mou Lee, Wen-Chih Peng, Shanq-Jang Ruan
Degree: Doctor
Department: College of Electrical Engineering and Computer Science - Department of Electronic and Computer Engineering
Publication Year: 2023
Graduation Academic Year: 111
Language: English
Number of Pages: 114
Keywords (English): convolution-transformer network


Annotating a semantic segmentation dataset is a time-consuming and labor-intensive process. Nevertheless, this process is required to deploy a model effectively in the domain of interest. Using a model trained in one domain and deploying it in a different domain can lead to a significant drop in performance due to the domain shift. Unsupervised domain adaptation attempts to address this problem by training a model on an unlabeled target dataset using labeled data from a source domain. In this dissertation, we propose a novel semantic segmentation network specifically designed for unsupervised domain adaptation. The proposed network utilizes an efficient Convolution-Transformer hybrid encoder that can generalize features across domains using a minimal amount of data. Moreover, its self-attention-based large window attention decoder captures global and local context information effectively. Meanwhile, the adaptation strategy builds on the tried-and-tested self-training approach, which employs class-mix augmentation, domain color adjustment, and masked image consistency to close the domain gap and produce high-quality pseudo-labels. In experiments and benchmarks, the proposed network outperforms the state-of-the-art method in the GTA→Cityscapes, Synthia→Cityscapes, and Cityscapes→Dark Zurich scenarios by 1.4, 1.8, and 0.2 points, respectively. This improvement in segmentation quality is achieved by a faster network with lower GPU VRAM consumption.
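To make the self-training strategy described above more concrete, the following is a minimal, simplified PyTorch-style sketch of one class-mix pseudo-labeling step. It is not the author's implementation: the student and teacher networks, the confidence weighting, and the helper names are illustrative assumptions; domain color adjustment and masked image consistency are omitted for brevity.

    # Minimal sketch of a class-mix self-training step (assumed setup, not the thesis code).
    # `student` and `teacher` are hypothetical segmentation networks mapping
    # (B, 3, H, W) images to (B, C, H, W) logits.
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def classmix_mask(source_label: torch.Tensor) -> torch.Tensor:
        """Pick half of the classes present in each source label map.
        Returns a binary mask (B, 1, H, W): 1 where source pixels are pasted."""
        masks = []
        for lbl in source_label:                                    # lbl: (H, W)
            classes = lbl.unique()
            perm = torch.randperm(len(classes), device=lbl.device)
            picked = classes[perm[: len(classes) // 2]]
            masks.append(torch.isin(lbl, picked).unsqueeze(0))
        return torch.stack(masks).float()                           # (B, 1, H, W)

    def self_training_step(student, teacher, src_img, src_lbl, tgt_img, ignore_index=255):
        # 1. The teacher produces pseudo-labels for the unlabeled target images.
        with torch.no_grad():
            tgt_logits = teacher(tgt_img)
            tgt_conf, tgt_pseudo = torch.softmax(tgt_logits, dim=1).max(dim=1)

        # 2. Class-mix augmentation: paste half of the source classes (with their
        #    ground-truth labels) onto the target image and pseudo-label map.
        m = classmix_mask(src_lbl)                                  # (B, 1, H, W)
        mixed_img = m * src_img + (1 - m) * tgt_img
        mixed_lbl = torch.where(m.squeeze(1).bool(), src_lbl, tgt_pseudo)

        # 3. Weight target pixels by pseudo-label confidence so noisy pseudo-labels
        #    contribute less (one common weighting choice, used here for illustration).
        weight = torch.where(m.squeeze(1).bool(), torch.ones_like(tgt_conf), tgt_conf)

        # 4. Supervised loss on the source batch plus weighted loss on the mixed batch.
        loss_src = F.cross_entropy(student(src_img), src_lbl, ignore_index=ignore_index)
        ce_mix = F.cross_entropy(student(mixed_img), mixed_lbl,
                                 ignore_index=ignore_index, reduction="none")
        loss_mix = (weight * ce_mix).mean()
        return loss_src + loss_mix

In a typical self-training loop, the teacher would be an exponential moving average of the student, so pseudo-label quality improves as adaptation progresses.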

Recommendation Letter
Approval Letter
Abstract in English
Acknowledgements
Contents
Symbol
Abbreviation and Acronym
List of Figures
List of Tables
1 Introduction
  1.1 Background
  1.2 Identifying Problems in UDA
  1.3 Organization of this Dissertation
2 Related Works
  2.1 Semantic Segmentation
  2.2 Unsupervised Domain Adaptation (UDA)
    2.2.1 Adversarial Methods
    2.2.2 Self-Training Methods
3 Method
  3.1 Single-Resolution Architecture
    3.1.1 Convolution-Transformer Encoder
    3.1.2 Large Window Attention Decoder
  3.2 Multi-Resolution Architecture
  3.3 Unsupervised Domain Adaptation Strategy
    3.3.1 Self-Training
    3.3.2 Closing the Domain Gap
    3.3.3 Class-Imbalance and Overfitting Prevention
  3.4 Objective and Loss Functions
4 Experimental Results
  4.1 Results
  4.2 Datasets and Domain Configurations
    4.2.1 Datasets
    4.2.2 Domain Configurations
  4.3 Evaluation Metrics
  4.4 Adaptation and Generalization Capability
  4.5 Performance in Model-Agnostic Strat.
  4.6 Comparison with Prior Strategies
    4.6.1 Comparison in GTA→Cityscapes
    4.6.2 Comparison in Synthia→Cityscapes
    4.6.3 Comparison in Cityscapes→Dark Zurich
  4.7 Runtime and Complexity Measure
  4.8 Discussion
  4.9 Failure Case
  4.10 Limitations
5 Conclusions
  5.1 UDA Problems and the Solution
  5.2 Assessment of Objectives
  5.3 Future Work
References
Biography
Letter of Authority


Full-Text Release Date: 2025/07/31 (campus network)
Full-Text Release Date: 2025/07/31 (off-campus network)
Full-Text Release Date: 2025/07/31 (National Central Library: Taiwan NDLTD)