This file is in the following communities:

Faculty of Graduate Studies and Research



Controlling the Error Floors of the Low-Density Parity-Check Codes
Open Access


Other title
Error Floor
Type of item
Degree grantor
University of Alberta
Author or creator
Zhang, Shuai
Supervisor and department
Schlegel, Christian (Computing Science)
Cockburn, Bruce (Electrical and Computer Engineering)
Examining committee member and department
Cockburn, Bruce (Electrical and Computer Engineering)
Perez, Lance (Electrical Engineering, University of Nebraska, USA)
Tavakoli, Mahdi (Electrical and Computer Engineering)
Fair, Ivan (Electrical and Computer Engineering)
Ardakani, Masoud (Electrical and Computer Engineering)
Schlegel, Christian (Computing Science)
Department
Department of Electrical and Computer Engineering
Date accepted
Graduation date
Degree level
Doctor of Philosophy
Abstract
The goal of error control coding is to encode information in such a way that, if errors occur during transmission over a noisy communication channel or during storage in an unreliable memory, the receiver can correct the errors and recover the original transmitted information. Low-density parity-check (LDPC) codes are a class of high-performance linear block codes whose information rates approach the theoretical Shannon capacity limit. They are widely used in practice and are included in many recent communication standards, so the study of these codes, and of practical approaches to their design and decoding, is of great interest. Very good decoding algorithms exist for LDPC codes under different channel models; nevertheless, LDPC codes suffer from the infamous error floor problem at high signal-to-noise ratios, a problem that Richardson attributed to trapping sets.

We show that the dominant trapping sets of regular LDPC codes, the so-called absorption sets, undergo a two-phase dynamic behavior under iterative message-passing decoding, and we present a linear dynamic model for the iteration behavior of these sets. They pass through an initial geometric growth phase that stabilizes into a final bit-flipping behavior in which the algorithm reaches a fixed point. This analysis leads to very accurate numerical calculations of the error floor bit error rate, down to error rates that are inaccessible by simulation. The topologies of the dominant absorption sets of two example codes, the IEEE 802.3an [2048,1723] regular (6,32) LDPC code and the Tanner [155,64,20] regular (3,5) LDPC code, are identified and tabulated using topological features in combination with search algorithms. To make the analysis more complete, we provide further evidence that trapping sets are equivalent to absorption sets; in other words, we argue that absorption sets characterize all failure mechanisms, and some insights from the linear analysis can be brought to bear on this question.

The linear analysis also provides a means to reduce the error floor, which was another important goal in developing it. If the log-likelihood ratios (LLRs) used by message-passing decoding are allowed to grow to larger magnitudes, the absorption sets, as the trouble-causing structures, can be corrected. The absorption sets, generically born with the code design, then no longer threaten the error-correcting performance as long as the messages have enough room and time to grow. This translates into larger LLR clipping thresholds and more decoding iterations, both of which are preset at the decoder. The actual settings are empirical, however, because they depend on how closely the subgraph of the dominant absorption set resembles that of a non-zero minimum-weight codeword: in general, the more alike the two topologies, the more effort it takes to correct the absorption set.
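The trade-off described above, that breaking out of an absorption set requires both a large enough LLR clipping threshold and enough iterations, can be caricatured with a small numerical sketch. The following Python snippet is not the linear dynamic model developed in the thesis; it is a hypothetical two-sequence toy in which the erroneous messages reinforced inside an absorption set and the correct messages arriving from the rest of the graph both grow geometrically and both saturate at the clipping threshold. The gains g_wrong and g_correct, the initial values, and the rule "corrected once the correct messages overtake the wrong ones" are all illustrative assumptions.

def first_corrected_iteration(llr_clip, max_iters,
                              g_wrong=1.6, g_correct=2.0,
                              w0=1.0, c0=0.1):
    """Toy model: return the first iteration at which the correct messages
    overtake the erroneous ones, or None if that never happens.

    w -- magnitude of the wrong messages reinforced inside the absorption set
    c -- magnitude of the correct messages arriving from outside the set
    Both grow geometrically and are clipped at llr_clip each iteration.
    """
    w, c = w0, c0
    for t in range(1, max_iters + 1):
        w = min(g_wrong * w, llr_clip)    # internal reinforcement saturates
        c = min(g_correct * c, llr_clip)  # external correct evidence saturates
        if c > w:                         # correct evidence finally dominates
            return t
    return None

for clip in (64, 128, 256):
    for iters in (8, 30):
        result = first_corrected_iteration(clip, iters)
        print(f"clip={clip:3d}, max_iters={iters:2d} -> {result}")

With these made-up parameters, clipping at 64 or 128 lets the wrong messages saturate before the faster-growing correct messages can overtake them, so the toy set is never corrected; clipping at 256 allows correction, but only if the decoder runs for more than a handful of iterations. This mirrors, very loosely, the qualitative dependence on clipping threshold and iteration count described in the abstract.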
License
Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. Where the thesis is converted to, or otherwise made available in digital form, the University of Alberta will advise potential users of the thesis of these terms. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.
Citation for previous publication

File Details

Date Uploaded
Date Modified
File format: pdf (Portable Document Format)
Mime type: application/pdf
File size: 6016101 bytes
Last modified: 2015-10-12 20:10:09-06:00
Filename: Zhang_Shuai_Spring 2013.pdf
Original checksum: 8ac30301b5e6559fd2fa7c9f73a6dd59
Well formed: false
Valid: false
Status message: No document catalog dictionary offset=0