Title: Bit-Flipping Decoding of LDPC Codes: Dynamic Checksum Weighting, Flipping Decisions, and Performance Analysis
Authors: Chang, Chih-Yuan; Su, Yu Ted
Department: Institute of Communications Engineering
Keywords: LDPC Codes; Belief Propagation; Bit-Flipping Decoding; Flipped Bit Selection; Density Evolution Analysis
Issue Date: 2015
Abstract: Bit-flipping (BF) decoding of low-density parity-check (LDPC) codes is of low complexity but, in most cases, performs worse than the sum-product algorithm. To improve performance and provide new BF decoder options for complexity-performance tradeoffs, we propose new designs for (i) the flipping function (FF), based on dynamic checksum weighting and a new bit-reliability metric, (ii) the checksum weight-updating schedule, and (iii) the flipped-bit selection (FBS) rule, which can flip multiple bits at once. The new FF adjusts the checksum weights in every iteration, while the new FBS rules take into account information that earlier designs ignored; both modifications track more closely the evolving reliabilities of the check and variable nodes. Two selective weight-update schedules are proposed to reduce the extra complexity the new reliability metrics incur and to offer further performance-complexity tradeoffs. Combining the new FBS rule with known FFs yields new BF decoders with improved performance at a modest complexity increase. Combining the new FF with the new FBS rule gives a decoder whose error-rate performance is comparable to that of the normalized min-sum algorithm; because this combination is the most complex, we also provide a much simpler FBS rule that reduces complexity with only a slight performance loss. We present a simple decision-theoretic argument to justify the new checksum weight formula and a time-expanded factor graph model to explain the proposed selective weight-updating schedules. Furthermore, we develop a density-evolution-based analysis to explain the behavior and performance of the proposed algorithms when a regular LDPC code is used.
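The record itself contains no equations or code, so, as a rough point of reference only, the following is a minimal Python sketch of the classical weighted bit-flipping loop that work of this kind starts from. The function name wbf_decode, the static min-reliability checksum weights, and the single-bit flip rule are illustrative assumptions, not the thesis's algorithm: the thesis's FF instead re-weights the checksums dynamically in every iteration, and its FBS rule may flip several bits chosen on richer information.

```python
import numpy as np

def wbf_decode(H, y_llr, max_iters=50):
    """Classical weighted bit-flipping (WBF) decoding sketch.

    H      : (m, n) parity-check matrix over GF(2) as a 0/1 int array
    y_llr  : length-n array of received soft values; the sign gives the
             hard decision and the magnitude the bit's reliability.
    """
    m, n = H.shape
    x = (y_llr < 0).astype(int)  # hard decisions (BPSK: negative -> bit 1)
    # Static checksum weights: each check is trusted only as much as its
    # least reliable participating bit (the thesis updates these weights
    # dynamically instead of fixing them once here).
    w = np.array([np.min(np.abs(y_llr[H[j] == 1])) for j in range(m)])

    for _ in range(max_iters):
        s = (H @ x) % 2                  # syndrome (checksums)
        if not s.any():
            break                        # all parity checks satisfied
        # Flipping function: for each bit, sum the weighted bipolar
        # checksums (+w for an unsatisfied check, -w for a satisfied one).
        E = np.array([np.sum(w[H[:, i] == 1] * (2 * s[H[:, i] == 1] - 1))
                      for i in range(n)])
        # FBS rule: flip the single least reliable bit (largest metric).
        x[np.argmax(E)] ^= 1
    return x
```

Relative to this baseline, the abstract's three contributions slot in at marked points: the weight vector w is recomputed each iteration (dynamic weighting), only a subset of weights may be refreshed per iteration (selective update schedules), and the argmax line is replaced by a multi-bit selection rule.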
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT079613512
http://hdl.handle.net/11536/127798
Appears in Collections: Thesis