
Author: Chun-Ting Wu (吳俊霆)
Thesis Title: The Design of 5G Polar Code Belief Propagation Sequential Neural Network Decoder (5G置信度傳播循序神經網路解碼器)
Advisors: Huan-Chun Wang (王煥宗), Ching-Shun Lin (林敬舜)
Committee Members: Jui-Tang Wang (王瑞堂), Ching-Shun Lin (林敬舜), Jian-Cheng Liu (劉建成)
Degree: Master
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2023
Academic Year of Graduation: 111 (ROC calendar)
Language: Chinese
Number of Pages: 58
Keywords (Chinese): Polar Code (極化碼), Belief Propagation Decoding (置信度傳播解碼), Neural Network (神經網路)
Keywords (English): Polar Code, Deep Neural Networks, Belief Propagation
Abstract: This thesis studies how to train a neural network decoder for the belief propagation (BP) decoding architecture of polar codes. By training with deep learning on input data sets that mix multiple SNR points, the neural network learns the polar decoding structure more completely and, above 3 dB, delivers better decoding performance than a network trained on a single-SNR data set. In addition, by exploiting partitioned decoding, sub-blocks with the same code rate and the same distribution of information and frozen bits can share a single neural network decoder, which reduces the number of neural network models while maintaining the original decoding accuracy.
    Python is used as the software simulation platform for algorithm development. Circuit synthesis and performance evaluation are carried out with a Xilinx Virtex-7 VC707 FPGA board and Design Compiler, and the chip layout is implemented in a TSMC 40 nm CMOS process.
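    To make the multi-SNR training idea above concrete, the sketch below shows one way such a training set could be generated. This is a minimal, hypothetical Python example, not the thesis implementation: the helper names (polar_encode, make_multi_snr_dataset), the frozen-bit mask, and all code parameters are assumptions chosen only for illustration. Each sample pairs the channel LLRs of a polar-encoded, BPSK-modulated codeword observed through AWGN at one of several Eb/N0 points with the transmitted information bits as the label.

        import numpy as np

        def polar_encode(u, n):
            # Polar transform x = u @ F^{⊗n} over GF(2), F = [[1, 0], [1, 1]], computed as a butterfly.
            x = u.copy()
            step = 1
            for _ in range(n):
                for i in range(0, len(x), 2 * step):
                    x[i:i + step] ^= x[i + step:i + 2 * step]
                step *= 2
            return x

        def make_multi_snr_dataset(N, frozen_mask, snrs_db, samples_per_snr, rate, rng):
            # Collect (channel LLR, information bit) pairs drawn from several Eb/N0 points.
            n = int(np.log2(N))
            llrs, labels = [], []
            for snr_db in snrs_db:
                sigma2 = 1.0 / (2.0 * rate * 10.0 ** (snr_db / 10.0))  # BPSK/AWGN noise variance
                for _ in range(samples_per_snr):
                    u = np.zeros(N, dtype=np.int64)
                    info = rng.integers(0, 2, size=N - int(frozen_mask.sum()))
                    u[~frozen_mask] = info                    # frozen positions stay 0
                    x = polar_encode(u, n)
                    tx = 1.0 - 2.0 * x                        # BPSK mapping: 0 -> +1, 1 -> -1
                    rx = tx + rng.normal(0.0, np.sqrt(sigma2), N)
                    llrs.append(2.0 * rx / sigma2)            # channel LLRs fed to the NN/BP decoder
                    labels.append(info)
            return np.asarray(llrs), np.asarray(labels)

        # Hypothetical usage: a toy (N, K) = (16, 5) code with a made-up frozen set,
        # mixing five Eb/N0 points into one training set.
        rng = np.random.default_rng(0)
        frozen = np.ones(16, dtype=bool)
        frozen[[7, 11, 13, 14, 15]] = False
        X, y = make_multi_snr_dataset(16, frozen, snrs_db=[0, 1, 2, 3, 4],
                                      samples_per_snr=2000, rate=5 / 16, rng=rng)

        # Sub-blocks whose length and frozen-bit pattern coincide could then share one
        # trained decoder, e.g. by caching models under the key (N_sub, frozen_mask.tobytes()).

    Mixing several SNR points into a single training set exposes the network to both low-noise and high-noise LLR statistics in one training run, which is the property the abstract credits for the gain over single-SNR training above 3 dB.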

    Table of Contents:
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1  Research Background
      1.2  Research Objectives
      1.3  Thesis Organization
    Chapter 2  Polar Codes
      2.1  Introduction to Polar Codes
        2.1.1  Channel Polarization
        2.1.2  Channel Combining
        2.1.3  Channel Splitting
        2.1.4  Channel Parameters
        2.1.5  Determining Channel Polarization
        2.1.6  Polar Code Encoding
      2.2  Belief Propagation Decoder
        2.3.1  Belief Propagation (BP) Decoding
        2.3.2  Iteration Process
        2.3.3  Partitionable Codes
    Chapter 3  Deep Neural Networks
      3.1  Introduction to Neural Networks
      3.2  Multilayer Perceptron (MLP)
        3.2.1  Input Layer
        3.2.2  Hidden Layers
        3.2.3  Output Layer
      3.3  Neural Network Training
        3.3.1  Backpropagation
        3.3.2  Activation Functions
        3.3.3  Loss Functions
        3.3.4  Optimizers
    Chapter 4  Neural Network Decoder Training and Parameter Selection
      4.1  Noise Selection for the Training Input Data Set
      4.2  Neural Network Training Accuracy
    Chapter 5  Improved Belief Propagation Neural Network Decoding Algorithm and Architecture Design
      5.1  Original BP Neural Network Decoding Algorithm Architecture
      5.2  Training of the Original BP Neural Network Decoder
      5.3  Improved BP Neural Network Decoder Architecture
      5.4  Environment Settings
      5.5  Training the Neural Network Decoder with Multi-SNR Inputs
      5.6  Decoding Performance Comparison of the Original Algorithm and the BP Sequential Neural Network Decoder
      5.7  Shared Neural Network Architecture for the Sequential BP-NN Decoder
    Chapter 6  Decoder Hardware Architecture Design and Comparison
      6.1  Block Diagram of the BP Sequential Neural Network Decoder
      6.2  Processing Element (PE)
      6.3  Neural Network Decoder (NND)
      6.4  Circuit Synthesis Comparison
      6.5  Comparison with Prior Work
      6.6  Chip Design
        6.6.1  Chip Design Flow
        6.6.2  Chip Layout Results
    Chapter 7  Conclusion and Future Work
    References
    Appendix I  Chinese-English Terminology Table


    Full-text release date: 2033/08/23 (campus network)
    Full-text release: not authorized for public access (off-campus network)
    Full-text release: not authorized for public access (National Central Library: Taiwan NDLTD system)