
Author: Syahidah Izza Rufaida
Title: Enhancing Robustness and Efficiency of Deep Neural Networks through Adversarial Approaches
Advisor: 呂政修 (Jenq-Shiou Leu)
Committee: 呂政修 (Jenq-Shiou Leu), 陳俊良 (Jiann-Liang Chen), 鄭瑞光 (Ray-Guang Cheng), 阮聖彰 (Shanq-Jang Ruan), 吳晉賢 (Chin-Hsien Wu), 陳維美 (Wei-Mei Chen), 周承復 (Cheng-Fu Chou), 曾建超 (Chien-Chao Tseng), 衛信文 (Hsin-Wen Wei)
Degree: Doctor
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2023
Graduation Academic Year: 112
Language: English
Pages: 94
Keywords: machine learning, adversarial attack, noisy dataset, adversarial reprogramming

  • Neural networks have demonstrated exceptional performance across various tasks; however, their vulnerability to adversarial images has raised concerns, as adversarial attacks can significantly degrade network performance. Moreover, deep networks depend heavily on large, accurately labeled datasets, which are costly and time-consuming to acquire. To tackle these challenges, this dissertation explores three directions: defensive mechanisms, sanitization approaches for noisy datasets, and a way to turn adversarial methods into an advantage.
    Firstly, a novel defensive training procedure called "democracy learning" is introduced to enhance network robustness. Democracy learning generates target labels from the network's predictions and the previous labels, increasing robustness against adversarial attacks and outperforming existing paradigms. Secondly, democracy learning is extended to iteratively correct noisy labels during training. The framework is demonstrated on various datasets, where it achieves superior results compared to other methods and effectively identifies mislabeled data.
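    As a concrete illustration, below is a minimal sketch of the kind of label-update rule the abstract describes: each iteration blends the previous target labels with the network's current predictions ("votes"). The mixing weight alpha and the exact update form are assumptions for illustration; the dissertation's actual formulation may differ.

        import numpy as np

        def democracy_update(prev_labels, predictions, alpha=0.7):
            """Hypothetical democracy-learning style label update.

            prev_labels : (N, C) soft target labels from the previous iteration
            predictions : (N, C) current softmax outputs of the network
            alpha       : hypothetical mixing weight between old labels and votes
            """
            # Blend the previous labels with the network's current "vote".
            new_labels = alpha * prev_labels + (1.0 - alpha) * predictions
            # Renormalize each row so it remains a probability distribution.
            return new_labels / new_labels.sum(axis=1, keepdims=True)

    Under such a rule, samples whose blended labels drift far from their original annotations are natural candidates to flag as mislabeled, which is how an update of this kind could double as a sanitization signal.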
    Thirdly, the potential of adversarial reprogramming is explored, demonstrating that a single machine-learning model can be repurposed for multiple tasks without altering its parameters. This technique proves effective even for cross-domain tasks, achieving high prediction accuracy in domains such as sentiment analysis and low-resource language classification.
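    To make the reprogramming idea concrete, here is a minimal PyTorch sketch under a standard formulation of adversarial reprogramming (assumed here, not spelled out in the abstract): the target input is embedded inside a learned padding "program" fed to a frozen pretrained classifier, and source classes are mapped many-to-one onto target classes. All names and sizes are illustrative.

        import torch
        import torch.nn as nn

        class ReprogrammingWrapper(nn.Module):
            """Repurpose a frozen model by training only an input program."""

            def __init__(self, frozen_model, small=28, big=224, n_src=1000, n_tgt=10):
                super().__init__()
                self.model = frozen_model.eval()
                for p in self.model.parameters():
                    p.requires_grad_(False)      # model parameters stay untouched
                self.program = nn.Parameter(torch.zeros(3, big, big))
                self.small, self.big, self.n_tgt = small, big, n_tgt
                # Fixed many-to-one mapping from source labels to target labels.
                self.register_buffer("label_map", torch.arange(n_src) % n_tgt)

            def forward(self, x):                # x: (B, 3, small, small)
                pad = (self.big - self.small) // 2
                # Paste the small target input into the center of the program.
                frame = torch.tanh(self.program).expand(x.size(0), -1, -1, -1).clone()
                frame[:, :, pad:pad + self.small, pad:pad + self.small] = x
                logits_src = self.model(frame)   # (B, n_src) source-domain logits
                out = torch.zeros(x.size(0), self.n_tgt, device=x.device)
                # Sum the source logits assigned to each target class.
                return out.index_add(1, self.label_map, logits_src)

    Training then optimizes only the program, e.g. torch.optim.Adam([wrapper.program], lr=0.05), which is what makes the repurposing possible "without altering model parameters".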
    In conclusion, this dissertation presents innovative adversarial approaches to enhance the robustness and efficiency of neural networks. The proposed methods offer solutions to combat adversarial attacks, address noisy-data challenges, and enable multi-purpose models, thereby advancing the capabilities of deep learning models for diverse applications.

    Table of Contents
    Abstract
    Related Publications
    Acknowledgment
    Table of Contents
    List of Figures
    List of Tables
    Abbreviations
    1 Introduction
    2 Literature Review
      2.1 Deep Learning Architecture
        2.1.1 Residual Network
        2.1.2 EfficientNet
      2.2 Adversarial Attack
        2.2.1 Fast Gradient Sign Method
        2.2.2 Jacobian-based Saliency Map Attack
        2.2.3 DeepFool
        2.2.4 Carlini-Wagner
      2.3 Defensive Mechanism
        2.3.1 Knowledge Distillation
        2.3.2 Stochastic Activation Pruning
      2.4 Sanitization Approach
      2.5 Dataset
        2.5.1 MNIST
        2.5.2 Fashion-MNIST
        2.5.3 Kuzushiji-MNIST
        2.5.4 CIFAR10
        2.5.5 ImageNet
        2.5.6 Stanford Sentiment Treebank
        2.5.7 WReTE
    3 Democracy Learning
      3.1 Democracy Learning
      3.2 Democracy Learning Approach to Increase Network Robustness
      3.3 Democracy Learning for Iterative Sanitization
      3.4 Hyperparameter Setting
    4 Adversarial Reprogramming
      4.1 Image to Image Domain
      4.2 Text to Image Domain
    5 Simulation Result
      5.1 Defensive Capability
      5.2 Sanitization Capability
        5.2.1 MNIST
        5.2.2 Fashion-MNIST
        5.2.3 CIFAR10
        5.2.4 ImageNet1K
      5.3 Adversarial Reprogramming
        5.3.1 Image to Image Dataset
        5.3.2 Image to Text Dataset
    6 Conclusion and Future Works
      6.1 Conclusion
      6.2 Future Works

