  • Controlling the Error Floors of the Low-Density Parity-Check Codes
  • Zhang, Shuai
  • English
  • LDPC
    Error Floor
  • Dec 7, 2012 10:22 AM
  • Thesis
  • English
  • Adobe PDF
  • 6016101 bytes
  • The goal of error control coding is to encode information in such a way that, in the event that errors occur during transmission over a noisy communication channel or during storage in an unreliable memory, the receiver can correct the errors and recover the original transmitted information. Low-density parity-check (LDPC) codes are a class of high-performance linear block codes that are already used in many recent communication systems. The information rates guaranteed by these codes approach the theoretical Shannon capacity limit. In addition, LDPC codes are now widely used in practice and included in many communication standards. Therefore, the study of these codes is important, and practical approaches to their design and decoding are of great interest. Very good decoding algorithms exist for LDPC codes under different channel models. However, LDPC codes suffer from the infamous error floor problem at high signal-to-noise ratios, a problem that Richardson attributed to trapping sets. It is shown that the dominant trapping sets of regular LDPC codes, the so-called absorption sets, undergo a two-phase dynamic behavior under the iterative message-passing decoding algorithm. We present a linear dynamic model for the iteration behavior of these sets: they undergo an initial geometric growth phase that stabilizes in a final bit-flipping behavior in which the algorithm reaches a fixed point. Our analysis is shown to lead to very accurate numerical calculations of the error floor bit error rates, down to error rates that are inaccessible by simulation. The topologies of the dominant absorption sets of two example codes, the IEEE 802.3an [2048,1723] regular (6,32) LDPC code and the Tanner [155,64,20] regular (3,5) LDPC code, are identified and tabulated using topological features in combination with search algorithms.
To make our analysis more complete, we provide further evidence that trapping sets are equivalent to absorption sets; in other words, we argue that absorption sets characterize all failure mechanisms. Some insights from our linear analysis can be borrowed to approach this problem. The formula also provides a means to reduce the error floor, another important goal that motivated its development in the first place. By allowing the log-likelihood ratios (LLRs) used by the message-passing decoder to grow to larger magnitudes, the absorption sets, as trouble-causing structures, can be successfully corrected. Therefore, the absorption sets, generically born with the code design, will no longer threaten the error-correcting performance as long as the messages have enough room and time to grow. This translates into larger LLR clipping thresholds and more decoding iterations, both of which are preset at the decoder. However, the appropriate settings depend on how closely the subgraph of the dominant absorption set resembles that of a non-zero minimum-weight codeword, so the settings are empirical. In general, the more alike the two topologies are, the more effort it takes to correct the absorption set.
  • Doctoral
  • Doctor of Philosophy
  • Department of Electrical and Computer Engineering
  • Communications
  • Spring 2013
  • Schlegel, Christian (Computing Science)
    Cockburn, Bruce (Electrical and Computer Engineering)
  • Tavakoli, Mahdi (Electrical and Computer Engineering)
    Schlegel, Christian (Computing Science)
    Cockburn, Bruce (Electrical and Computer Engineering)
    Perez, Lance (Electrical Engineering, University of Nebraska, USA)
    Fair, Ivan (Electrical and Computer Engineering)
    Ardakani, Masoud (Electrical and Computer Engineering)
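The role of the LLR clipping threshold mentioned in the abstract can be illustrated with a minimal sketch. This is not the decoder studied in the thesis: it is a toy min-sum message-passing decoder for the small [7,4] Hamming code (the thesis works with much larger codes such as the IEEE 802.3an LDPC code), and the `clip` parameter is a hypothetical stand-in for the preset LLR clipping threshold that caps how far messages can grow.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (illustration only).
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def min_sum_decode(llr, H, clip=15.0, max_iter=20):
    """Min-sum message passing with LLR clipping.

    `clip` caps message magnitudes; per the abstract, larger clipping
    thresholds (and more iterations) give messages room to grow, which
    helps the decoder escape absorption sets.
    """
    m, n = H.shape
    c2v = np.zeros((m, n))  # check-to-variable messages
    for _ in range(max_iter):
        # Variable-to-check: total LLR minus the incoming message, clipped.
        total = llr + c2v.sum(axis=0)
        v2c = np.where(H, total - c2v, 0.0)
        v2c = np.clip(v2c, -clip, clip)
        # Check-to-variable: min-sum update (extrinsic sign and min magnitude).
        for i in range(m):
            idx = np.flatnonzero(H[i])
            msgs = v2c[i, idx]
            signs = np.sign(msgs)
            signs[signs == 0] = 1.0
            prod = np.prod(signs)
            mags = np.abs(msgs)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            for k, j in enumerate(idx):
                mag = min2 if j == idx[order[0]] else min1
                c2v[i, j] = prod * signs[k] * mag
        c2v = np.clip(c2v, -clip, clip)
        # Hard decision and syndrome check.
        hard = (llr + c2v.sum(axis=0) < 0).astype(int)
        if not (H @ hard % 2).any():
            return hard, True
    return hard, False

# Toy usage: all-zero codeword sent, bit 3 received in error (negative LLR).
llr = np.array([4., 4., 4., -2., 4., 4., 4.])
decoded, ok = min_sum_decode(llr, H, clip=15.0)
```

With a generous clipping threshold the single channel error is corrected; shrinking `clip` toward zero starves the messages and decoding can fail, which loosely mirrors the trade-off the abstract describes between clipping thresholds, iteration count, and error-floor performance.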

Apr 30, 2014 10:02 PM

