Graduate Student: 林政諺 (Jheng-Yan Lin)
Thesis Title: 針對聯盟式學習入侵檢測的兩階段防禦中毒攻擊 (Two-phase Defense Against Poisoning Attacks on Federated Learning-based Intrusion Detection)
Advisor: 賴源正 (Yuan-Cheng Lai)
Committee Members: 賴源正 (Yuan-Cheng Lai), 查士朝 (Shi-cho Cha), 黃仁竑 (Ren-Hung Hwang)
Degree: Master
Department: Department of Information Management, School of Management
Publication Year: 2022
Graduation Academic Year: 110 (2021-2022)
Language: Chinese
Pages: 37
Chinese Keywords: 聯盟式學習 (federated learning), 入侵偵測 (intrusion detection), 中毒攻擊 (poisoning attacks)
English Keywords: federated learning, intrusion detection, poisoning attack
Usage: 205 views, 0 downloads
    Traditional intrusion detection systems (IDSs) rely on manual updates, but in today's broad and diverse network environment their update speed falls far behind the pace at which threats evolve. Machine learning (ML) IDSs, in turn, struggle to collect training data because of rising privacy awareness. Federated learning (FL) was therefore proposed to resolve the data-privacy problem that arises when ML training data must first be gathered at a central location. In an FL-based IDS, each local participant trains its own local model and sends the model parameters to a global modeler; the global modeler learns a global model and sends it back to the participants, and this communication is repeated over many rounds to achieve joint training. Most of the existing literature takes improving performance or protecting privacy as the goal of FL; only a few studies discuss defenses against poisoning attacks on FL, and these mostly address the image domain and only label-flipping attacks, without considering backdoor attacks. To reduce the damage that poisoning attacks inflict on an FL-based IDS, this thesis proposes a two-phase defense at the global modeler, Defending Poisoning Attacks in Federated Learning (DPA-FL). The first phase uses a relative notion to quickly compare the update parameters across participants and exclude those that appear anomalous; the second phase exploits the fact that malicious participants degrade the global model's accuracy, using an absolute notion to test the models against a selected dataset and exclude participants whose accuracy is too low. This two-phase design balances detection efficiency with defense effectiveness. Experimental results show that DPA-FL ultimately achieves an average accuracy of 96.5% under both label-flipping and backdoor attacks; compared with no defense and other defense methods, the final model's test accuracy is on average 3~5% higher under label-flipping attacks and rises by 20~40% under backdoor attacks. In terms of detection efficiency, when attackers are a minority, DPA-FL excludes them within six communication rounds.
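
    As a minimal sketch of the round structure just described (a FedAvg-style average; `local_train` and the weight shapes below are hypothetical stand-ins, not the thesis's implementation):

        import numpy as np

        rng = np.random.default_rng(0)

        def local_train(global_weights, participant):
            # Hypothetical stand-in for one participant training a local
            # model on its private data, starting from the global weights.
            return global_weights + 0.01 * rng.standard_normal(global_weights.shape)

        def fl_round(global_weights, participants):
            # One communication round: every participant uploads its locally
            # trained weights; the global modeler averages them and sends the
            # aggregated global model back.
            local_weights = [local_train(global_weights, p) for p in participants]
            return np.mean(local_weights, axis=0)

        weights = np.zeros(10)
        for _ in range(6):  # FL repeats such rounds until convergence
            weights = fl_round(weights, participants=range(5))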


    Traditional intrusion detection systems (IDSs) require manual updates, which usually cannot keep up with the proliferation of threats in increasingly diverse and complicated networks. Machine learning (ML) IDSs, on the other hand, encounter difficulty in collecting training data because of privacy concerns. Federated learning (FL) was therefore proposed to solve this data-privacy issue. In FL, each participant first trains its local model and sends the model's weights to the global server, which aggregates the received weights and distributes the aggregated global model back to the participants. This procedure is called one round, and FL repeats many rounds until convergence. Most current studies focus on improving performance or preserving privacy; only a few focus on defending against poisoning attacks, and they discuss only label-flipping attacks in the image domain and do not handle backdoor attacks. To prevent damage to FL-based IDSs from poisoning attacks, we propose a two-phase defense mechanism called Defending Poisoning Attacks in Federated Learning (DPA-FL). The first phase uses relative differences to quickly compare weights across participants and remove likely attackers, while the second phase conducts absolute testing with a selected dataset, since attackers' local models degrade the global model. Experimental results show that DPA-FL reaches 96.5% accuracy when defending against poisoning attacks, including label-flipping and backdoor attacks. Compared with no defense and other mechanisms, DPA-FL improves accuracy by 3~5% under label-flipping attacks and by 20~40% under backdoor attacks. Moreover, DPA-FL can remove the attackers within six rounds when the attackers are few.
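
    The abstracts do not pin down the exact scoring, so the following is only a rough sketch of how the two phases could compose, assuming the Local Outlier Factor (LOF, reviewed in Section 2.4.1) drives the relative phase and accuracy on a held-out clean dataset drives the absolute phase; the threshold, function names, and `evaluate` callable are illustrative assumptions, not the author's exact method:

        import numpy as np
        from sklearn.neighbors import LocalOutlierFactor

        def relative_phase(updates, n_neighbors=3):
            # Phase 1 (relative): compare participants' flattened weight
            # updates against one another; LOF marks vectors whose local
            # density deviates from their peers' (-1 outlier, 1 inlier).
            X = np.stack([u.ravel() for u in updates])
            flags = LocalOutlierFactor(n_neighbors=n_neighbors).fit_predict(X)
            return [i for i, f in enumerate(flags) if f == 1]

        def absolute_phase(updates, survivors, evaluate, threshold=0.8):
            # Phase 2 (absolute): test each surviving local model on a
            # selected clean dataset; drop participants whose accuracy is
            # too low, since poisoned models degrade the global model.
            return [i for i in survivors if evaluate(updates[i]) >= threshold]

        def dpa_fl_aggregate(updates, evaluate):
            keep = absolute_phase(updates, relative_phase(updates), evaluate)
            # Aggregate only the updates that pass both phases.
            return np.mean([updates[i] for i in keep], axis=0)

        # Toy usage: six benign updates plus one obvious outlier. A real
        # `evaluate` would load the weights and return test accuracy; the
        # constant here is only a placeholder.
        updates = [0.01 * np.random.default_rng(i).normal(size=16) for i in range(6)]
        updates.append(np.full(16, 5.0))  # poisoned update
        aggregated = dpa_fl_aggregate(updates, evaluate=lambda w: 0.95)

    Running the cheap relative comparison first keeps the per-round cost low, while the absolute test-set pass catches attackers whose updates blend in statistically, which matches the efficiency/effectiveness trade-off the abstract claims.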

    Abstract (Chinese)
    Abstract (English)
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1: Introduction
    Chapter 2: Background
      2.1 Federated Learning
      2.2 Poisoning Attacks
      2.3 Related Work
      2.4 Related Technical Background
        2.4.1 Local Outlier Factor (LOF)
        2.4.2 Reinforcement Learning (RL)
    Chapter 3: System Architecture and Problem Statement
      3.1 System Architecture
      3.2 Environment Assumptions
      3.3 Problem Statement
    Chapter 4: Defending Poisoning Attacks in Federated Learning
      4.1 Concept of the DPA-FL Method
      4.2 Relative-phase
      4.3 Absolute-phase
      4.4 Example
    Chapter 5: Experiments and Analysis
      5.1 Datasets and Experimental Parameters
        5.1.1 Datasets
        5.1.2 FL-IDS Model Setup
        5.1.3 Attack Setup
        5.1.4 Evaluation Metrics and Comparison Targets
      5.2 Effectiveness of DPA-FL versus Other Poisoning-Attack Defenses
      5.3 Effect of Different Amounts of Poisoned Data
      5.4 Effect of Different Proportions of Malicious Participants
      5.5 Effectiveness and Efficiency of RP and AP
    Chapter 6: Conclusions and Future Work
    References


    Full-text release date: 2025/09/20 (campus network)
    Full-text release date: not authorized for public release (off-campus network)
    Full-text release date: not authorized for public release (National Central Library: Taiwan NDLTD system)