Papers by Amir Banihashemi
IEEE Transactions on Information Theory, 1998

arXiv (Cornell University), Nov 28, 2017
Cages, defined as regular graphs with the minimum number of nodes for a given girth, are well studied in graph theory. Trapping sets are graphical structures responsible for the error floor of low-density parity-check (LDPC) codes, and are well investigated in coding theory. In this paper, we make connections between cages and trapping sets. In particular, starting from a cage (or a modified cage), we construct a trapping set in multiple steps. Based on the connection between cages and trapping sets, we then use the available results on cages in graph theory to derive tight upper bounds on the size of the smallest trapping sets of variable-regular LDPC codes with a given variable degree and girth. The derived upper bounds in many cases meet the best known lower bounds and thus give the actual size of the smallest trapping sets. Considering that non-zero codewords are a special case of trapping sets, we also derive tight upper bounds on the minimum weight of such codewords, i.e., on the minimum distance of variable-regular LDPC codes, as a function of variable degree and girth.
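As a concrete illustration of the cage definition above (a hypothetical example, not code from the paper): the Petersen graph is the unique (3,5)-cage, i.e., the smallest 3-regular graph of girth 5. A plain BFS suffices to verify its girth:

```python
from collections import deque

def girth(adj):
    """Length of a shortest cycle, via BFS from every vertex (O(V*E))."""
    best = float("inf")
    for s in adj:
        dist = {s: 0}
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    q.append(v)
                elif parent[u] != v:  # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# Petersen graph: outer 5-cycle, inner pentagram, and 5 spokes
petersen = {i: set() for i in range(10)}
edges = ([(i, (i + 1) % 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)])
for u, v in edges:
    petersen[u].add(v)
    petersen[v].add(u)
print(girth(petersen))  # -> 5
```

The minimum over all BFS roots is exact, since a root on a shortest cycle reports that cycle's length.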

We propose a deterministic method to design irregular Low-Density Parity-Check (LDPC) codes for binary erasure channels (BEC). Compared to the existing methods, which are based on the application of asymptotic analysis tools such as density evolution or Extrinsic Information Transfer (EXIT) charts in an optimization process, the proposed method is much simpler and faster. Through a number of examples, we demonstrate that the codes designed by the proposed method perform very close to the best codes designed by optimization. An important property of the proposed designs is the flexibility to select the number of constituent variable node degrees P. The proposed designs include existing deterministic designs as a special case with P = N − 1, where N is the maximum variable node degree. Compared to the existing deterministic designs, for a given rate and a given δ > 0, the designed ensembles can have a threshold in a δ-neighborhood of the capacity upper bound with smaller values of P and N. They can also achieve the capacity of the BEC as N, and correspondingly P and the maximum check node degree, tend to infinity. Index Terms: channel coding, low-density parity-check (LDPC) codes, binary erasure channel (BEC), deterministic design. Low-Density Parity-Check (LDPC) codes have received much attention in the past decade due to their attractive performance/complexity tradeoff on a variety of communication channels. In particular, on the Binary Erasure Channel (BEC), they achieve the channel capacity asymptotically. In [1], [5], [6], a complete mathematical analysis of the performance of LDPC codes over the BEC, both asymptotically and for finite block lengths, has been developed. For other types of channels, such as the Binary Symmetric Channel (BSC) and the Binary Input Additive White Gaussian Noise (BIAWGN) channel, only asymptotic analysis is available.
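The density-evolution thresholds mentioned above can be computed numerically for the BEC. A minimal sketch of the standard recursion (not the paper's deterministic design procedure; the (3,6)-regular example and helper names are mine): with edge-perspective degree polynomials λ and ρ, the threshold is the largest erasure rate ε for which x ← ε·λ(1 − ρ(1 − x)) is driven to zero.

```python
def bec_threshold(lam, rho, tol=1e-6):
    """Bisect for the largest erasure probability eps at which the
    BEC density-evolution recursion converges to zero."""
    def converges(eps):
        x = eps
        for _ in range(20000):
            x = eps * lam(1 - rho(1 - x))
            if x < 1e-10:
                return True
        return False

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

# (3,6)-regular ensemble: lam(x) = x^2, rho(x) = x^5.
# Known threshold ~0.4294; the capacity bound for rate 1/2 is 0.5.
eps_star = bec_threshold(lambda x: x ** 2, lambda x: x ** 5)
print(round(eps_star, 2))  # -> 0.43
```

The gap between 0.4294 and the capacity bound 0.5 is what degree-distribution design, deterministic or optimized, tries to close.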

arXiv (Cornell University), Nov 28, 2017
Leafless elementary trapping sets (LETSs) are known to be the problematic structures in the error floor region of low-density parity-check (LDPC) codes over the additive white Gaussian noise (AWGN) channel under iterative decoding algorithms. While problems involving the general category of trapping sets, and the subcategory of elementary trapping sets (ETSs), have been shown to be NP-hard, similar results for LETSs, which are a subset of ETSs, are not available. In this paper, we prove that, for a general LDPC code, finding an LETS of a given size a with the minimum number of odd-degree check nodes b is NP-hard to approximate within any approximation factor. We also prove that finding the minimum size a of an LETS with a given b is NP-hard to approximate within any approximation factor. Similar results are proved for elementary absorbing sets, a popular subcategory of LETSs.

arXiv (Cornell University), Mar 19, 2019
Counting short cycles in bipartite graphs is a fundamental problem of interest in many fields, including the analysis and design of low-density parity-check (LDPC) codes. There are two computational approaches to count short cycles (with length smaller than 2g, where g is the girth of the graph) in bipartite graphs. The first approach is applicable to a general (irregular) bipartite graph, and uses the spectrum {η_i} of the directed edge matrix of the graph to compute the multiplicity N_k of cycles of length k. This approach has computational complexity O(|E|^3), where |E| is the number of edges in the graph. The second approach is only applicable to bi-regular bipartite graphs, and uses the spectrum {λ_i} of the adjacency matrix (graph spectrum) and the degree sequences of the graph to compute N_k. The complexity of this approach is O(|V|^3), where |V| is the number of nodes in the graph. This complexity is less than that of the first approach, but the equations involved in the computations of the second approach are very tedious, particularly for k ≥ g + 6. In this paper, we establish an analytical relationship between the two spectra {η_i} and {λ_i} for bi-regular bipartite graphs. Through this relationship, the former spectrum can be derived from the latter through simple equations. This allows the computation of N_k using N_k = Σ_i η_i^k/(2k), but with a complexity of O(|V|^3) rather than O(|E|^3).
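Since Σ_i η_i^k equals tr(B^k), the identity N_k = Σ_i η_i^k/(2k) can be checked on small graphs with exact integer arithmetic and no eigen-decomposition at all. A toy sketch (the example graph K_{2,3} and all function names are mine, not from the paper):

```python
from itertools import product

def nb_matrix(edges):
    """Directed edge (non-backtracking) matrix B: rows/columns index the
    2|E| directed edges; B[(u,v)][(v,w)] = 1 whenever w != u."""
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    idx = {d: i for i, d in enumerate(darts)}
    B = [[0] * len(darts) for _ in darts]
    for u, v in darts:
        for x, y in darts:
            if x == v and y != u:  # head-to-tail, no immediate reversal
                B[idx[(u, v)]][idx[(x, y)]] = 1
    return B

def count_cycles(edges, k):
    """N_k = tr(B^k)/(2k), valid for g <= k < 2g (g = girth),
    because sum_i eta_i^k = tr(B^k)."""
    B = nb_matrix(edges)
    n = len(B)
    P = B
    for _ in range(k - 1):
        P = [[sum(P[i][m] * B[m][j] for m in range(n)) for j in range(n)]
             for i in range(n)]
    return sum(P[i][i] for i in range(n)) // (2 * k)

# K_{2,3} has girth 4, exactly C(2,2)*C(3,2) = 3 four-cycles, and no
# 6-cycles (a 6-cycle needs 3 distinct nodes on each side).
edges = list(product(["a", "b"], [0, 1, 2]))
print(count_cycles(edges, 4), count_cycles(edges, 6))  # -> 3 0
```

The O(|E|^3) cost of this direct route is exactly what the paper's spectral relationship avoids.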
IEEE Transactions on Information Theory, Oct 1, 2019

IEEE Transactions on Information Theory, Sep 1, 2014
In this paper, we study the graphical structure of elementary trapping sets (ETSs) of variable-regular low-density parity-check (LDPC) codes. ETSs are known to be the main cause of error floor in LDPC coding schemes. For the set of LDPC codes with a given variable node degree d_l and girth g, we identify all the non-isomorphic structures of an arbitrary class of (a, b) ETSs, where a is the number of variable nodes and b is the number of odd-degree check nodes in the induced subgraph of the ETS. Our study leads to a simple characterization of dominant classes of ETSs (those with relatively small values of a and b) based on short cycles in the Tanner graph of the code. For such classes of ETSs, we prove that any set S in the class is a layered superset (LSS) of a short cycle, where the term "layered" is used to indicate that there is a nested sequence of ETSs that starts from the cycle and grows, one variable node at a time, to generate S. This characterization corresponds to a simple search algorithm that starts from the short cycles of the graph and finds all the ETSs with the LSS property in a guaranteed fashion. Specific results on the structure of ETSs are presented for d_l = 3, 4, 5, 6, g = 6, 8, and a, b ≤ 10 in this paper. The results of this paper can be used for error floor analysis and for the design of LDPC codes with low error floors. I. INTRODUCTION: The performance of low-density parity-check (LDPC) codes under iterative decoding algorithms in the error floor region is closely related to the problematic structures of the code's Tanner graph [11], [25], [27], [32], [16], [26], [35], [36], [8], [5], [9], [15], [38]. Following the nomenclature of [27], here, we collectively refer to such structures as trapping sets.
The most common approach for classifying the trapping sets is by a pair (a, b), where a is the size of the trapping set and b is the number of odd-degree (unsatisfied) check nodes in the subgraph induced by the set in the Tanner graph of the code. Among the trapping sets, the so-called elementary trapping sets (ETS) are known to be the main culprits [27], [16], [5], [26], [15], [38]. These are trapping sets whose induced subgraph only contains check nodes of degree one or two. For a given LDPC code, the knowledge of dominant trapping sets, i.e., those that are most harmful, is important. Such knowledge can be used to estimate the error floor [5], to modify the decoder to lower the error floor [4], [12], [22], or to design codes with low error floor [14], [1].
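The (a, b) classification described above is straightforward to compute from a parity-check matrix. A minimal sketch, with a hypothetical 4x4 matrix H (mine, for illustration) chosen so that the variable subset S = {0, 1, 2} forms an elementary (3, 1) trapping set:

```python
def trapping_set_class(H, S):
    """(a, b) class of variable-node subset S in the Tanner graph of H:
    a = |S|; b = number of check nodes with odd degree in the induced
    subgraph. The set is 'elementary' if every induced check node has
    degree 1 or 2."""
    degs = [sum(row[j] for j in S) for row in H]  # induced check degrees
    a = len(S)
    b = sum(1 for d in degs if d % 2 == 1)        # unsatisfied checks
    elementary = all(d <= 2 for d in degs if d > 0)
    return a, b, elementary

# Hypothetical parity-check matrix: S = {0, 1, 2} touches three
# degree-2 (satisfied) checks and one degree-1 (unsatisfied) check.
H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 0, 0, 1]]
print(trapping_set_class(H, {0, 1, 2}))  # -> (3, 1, True)
```

Finding the dominant sets is the hard part; classifying a candidate set, as here, is cheap.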

arXiv (Cornell University), Jun 4, 2018
Counting short cycles in bipartite graphs is a fundamental problem of interest in the analysis and design of low-density parity-check (LDPC) codes. The vast majority of research in this area is focused on algorithmic techniques. Most recently, Blake and Lin proposed a computational technique to count the number of cycles of length g in a bi-regular bipartite graph, where g is the girth of the graph. The information required for the computation is the node degree and the multiplicity of the nodes on both sides of the partition, as well as the eigenvalues of the adjacency matrix of the graph (graph spectrum). In this paper, the result of Blake and Lin is extended to compute the number of cycles of length g + 2, ..., 2g − 2, for bi-regular bipartite graphs, as well as the number of 4-cycles and 6-cycles in irregular and half-regular bipartite graphs, with g ≥ 4 and g ≥ 6, respectively.

arXiv (Cornell University), May 30, 2019
Finding the multiplicity of cycles in bipartite graphs is a fundamental problem of interest in many fields, including the analysis and design of low-density parity-check (LDPC) codes. Recently, Blake and Lin computed the number of shortest cycles (g-cycles, where g is the girth of the graph) in a bi-regular bipartite graph, in terms of the degree sequences and the spectrum (eigenvalues of the adjacency matrix) of the graph [IEEE Trans. Inform. Theory 64(10):6526-6535, 2018]. This result was subsequently extended in [IEEE Trans. Inform. Theory, accepted for publication, Dec. 2018] to cycles of length g + 2, ..., 2g − 2, in bi-regular bipartite graphs, as well as to 4-cycles and 6-cycles in irregular and half-regular bipartite graphs, with g ≥ 4 and g ≥ 6, respectively. In this paper, we complement these positive results with negative results demonstrating that the information of the degree sequences and the spectrum of a bipartite graph is, in general, insufficient to count (a) the i-cycles, i ≥ 2g, in bi-regular graphs, (b) the i-cycles for any i > g, regardless of the value of g, and the g-cycles for g ≥ 6, in irregular graphs, and (c) the i-cycles for any i > g, regardless of the value of g, and the g-cycles for g ≥ 8, in half-regular graphs. To obtain these results, we construct counterexamples using Godsil-McKay switching.

arXiv (Cornell University), Jul 3, 2015
Wideband spectrum sensing is a significant challenge in cognitive radios (CRs) because it requires very high-speed analog-to-digital converters (ADCs), operating at or above the Nyquist rate. Here, we propose a very low-complexity zero-block detection scheme that can detect a large fraction of spectrum holes from sub-Nyquist samples, even when the undersampling ratio is very small. The scheme is based on a block sparse sensing matrix, which is implemented through the design of a novel analog-to-information converter (AIC). The proposed scheme identifies some measurements as being zero and then verifies the sub-channels associated with them as being vacant. Analytical and simulation results are presented that demonstrate the effectiveness of the proposed method in reliably detecting spectrum holes with complexity much lower than that of existing schemes. This work also introduces a new paradigm in compressed sensing, where one is interested in the reliable detection of (some of the) zero blocks rather than the recovery of the whole block sparse signal.
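A toy version of the zero-block idea (a block-wise random-sensing sketch under simplifying assumptions, not the paper's AIC design; all names and parameters are mine): each signal block gets a few random linear measurements, and an all-zero measurement block certifies, almost surely, that the corresponding sub-channel is vacant.

```python
import random

def vacant_blocks(x, block_len, meas_per_block, seed=1):
    """Declare a sub-channel vacant iff all of its random linear
    measurements are exactly zero. A zero signal block always yields
    zero measurements; a nonzero block yields a nonzero measurement
    almost surely under random weights."""
    rng = random.Random(seed)
    vacant = []
    for b in range(0, len(x), block_len):
        block = x[b:b + block_len]
        meas = [sum(rng.uniform(-1, 1) * v for v in block)
                for _ in range(meas_per_block)]
        if all(m == 0.0 for m in meas):
            vacant.append(b // block_len)
    return vacant

# 3 sub-channels of width 4; only the middle one is occupied
x = [0, 0, 0, 0, 1.5, -2.0, 0, 0, 0, 0, 0, 0]
print(vacant_blocks(x, 4, 2))  # -> [0, 2]
```

Note the one-sided nature of the guarantee, which mirrors the paradigm in the abstract: vacancy is certified, while occupancy is only inferred.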

arXiv (Cornell University), Apr 1, 2011
In this paper, we present a new approach for the analysis of iterative node-based verification-based (NB-VB) recovery algorithms in the context of compressive sensing. These algorithms are particularly interesting due to their low complexity (linear in the signal dimension n). The asymptotic analysis predicts the fraction of unverified signal elements at each iteration in the asymptotic regime where n → ∞. The analysis is similar in nature to the well-known density evolution technique commonly used to analyze iterative decoding algorithms. To perform the analysis, a message-passing interpretation of NB-VB algorithms is provided. This interpretation lacks the extrinsic nature of standard message-passing algorithms to which density evolution is usually applied, which requires a number of non-trivial modifications in the analysis. The analysis tracks the average performance of the recovery algorithms over the ensembles of input signals and sensing matrices as a function of the iteration number. Concentration results are devised to demonstrate that the performance of the recovery algorithms applied to any choice of the input signal over any realization of the sensing matrix closely follows the deterministic results of the analysis. Simulation results are also provided which demonstrate that the proposed asymptotic analysis matches the performance of the recovery algorithms for large but finite values of n. Compared to the existing technique for the analysis of NB-VB algorithms, which is based on numerically solving a large system of coupled differential equations, the proposed method is much simpler and more accurate.
In this paper, we propose a characterization for non-elementary trapping sets (NETSs) of low-density parity-check (LDPC) codes. The characterization is based on viewing a NETS as a hierarchy of embedded graphs starting from an ETS. The characterization corresponds to an efficient search algorithm that, under certain conditions, is exhaustive. As an application of the proposed characterization/search, we obtain lower and upper bounds on the stopping distance s_min of LDPC codes. We examine a large number of regular and irregular LDPC codes, and demonstrate the efficiency and versatility of our technique in finding lower and upper bounds on, and in many cases the exact value of, s_min. Finding s_min, or establishing search-based lower or upper bounds, for many of the examined codes is out of the reach of any existing algorithm.


IEEE Transactions on Vehicular Technology, Sep 1, 2012
In this paper, we investigate the transmission range assignment for N wireless nodes located on a line (a linear wireless network) for broadcasting data from one specific node to all the nodes in the network with minimum energy. Our goal is to find a solution that has low complexity and yet performs close to optimal. We propose an algorithm for finding the optimal assignment (which results in the minimum energy consumption) with complexity O(N^2). An approximation algorithm with complexity O(N) is also proposed. It is shown that, for networks with uniformly distributed nodes, the linear-time approximate solution obtained by this algorithm on average performs practically identically to the optimal assignment. Both the optimal and the suboptimal algorithms require full knowledge of the network topology and are thus centralized. We also propose a distributed algorithm of negligible complexity, i.e., with complexity O(1), which only requires the knowledge of the adjacent neighbors at each wireless node. Our simulations demonstrate that the distributed solution on average performs almost as well as the optimal one for networks with uniformly distributed nodes.

In this paper, we propose a characterization of elementary trapping sets (ETSs) for irregular low-density parity-check (LDPC) codes. These sets are known to be the main culprits in the error floor region of such codes. The characterization of ETSs for irregular codes has been known to be a challenging problem due to the large variety of non-isomorphic ETS structures that can exist within the Tanner graph of these codes. This is a direct consequence of the variety of the degrees of the variable nodes that can participate in such structures. The proposed characterization is based on a hierarchical graphical representation of ETSs, starting from simple cycles of the graph, or from single variable nodes, and involves three simple expansion techniques: degree-one tree (dot), path, and lollipop, hence the terminology dpl characterization. A similar dpl characterization was proposed in an earlier work by the authors for the leafless ETSs (LETSs) of variable-regular LDPC codes. The present paper generalizes the prior work to codes with a variety of variable node degrees and to ETSs that are not leafless. The proposed dpl characterization corresponds to an efficient search algorithm that, for a given irregular LDPC code, can exhaustively find all the instances of (a, b) ETSs with size a and number of unsatisfied check nodes b within any range of interest a ≤ a_max and b ≤ b_max. Although (brute-force) exhaustive search algorithms for ETSs of irregular LDPC codes exist, to the best of our knowledge the proposed search algorithm is the first of its kind, in that it is devised based on a characterization of ETSs that makes the search process efficient. Extensive simulation results are presented to show the versatility of the search algorithm, and to demonstrate that, compared to the literature, significant improvement in search speed can be obtained.
arXiv (Cornell University), Jan 13, 2010
In this paper, we propose a general framework for the asymptotic analysis of node-based verification-based algorithms. In our analysis, we let the signal length n tend to infinity. We also let the number of non-zero elements of the signal k scale linearly with n. Using the proposed framework, we study the asymptotic behavior of the recovery algorithms over random sparse matrices (graphs) in the context of compressive sensing. Our analysis shows that there exists a success threshold on the density ratio k/n, below which the recovery algorithms are successful, and above which they fail. This threshold is a function of both the graph and the recovery algorithm. We also demonstrate that there is good agreement between the asymptotic behavior of the recovery algorithms and finite-length simulations for moderately large values of n.

arXiv (Cornell University), Apr 21, 2015
In this paper, we propose solutions for energy-efficient broadcasting over cross networks, where N nodes are located on two perpendicular lines. Our solutions consist of an algorithm which finds the optimal range assignment in polynomial time (O(N^12)), a near-optimal algorithm with linear complexity (O(N)), and a distributed algorithm with complexity O(1). To the best of our knowledge, this is the first study presenting an optimal solution for the minimum-energy broadcasting problem for a 2-D network (with cross configuration). We compare our algorithms with the broadcast incremental power (BIP) algorithm, one of the most commonly used methods for solving this problem, which has complexity O(N^2). We demonstrate that our near-optimal algorithm outperforms BIP, and that the distributed algorithm performs close to it. Moreover, the proposed distributed algorithm can be used for more general two-dimensional networks, where the nodes are located on a grid consisting of perpendicular line segments. The performance of the proposed near-optimal and distributed algorithms tends to be closer to the optimal solution for larger networks.

arXiv (Cornell University), Oct 16, 2015
In this paper, we propose a new characterization for elementary trapping sets (ETSs) of variable-regular low-density parity-check (LDPC) codes. Recently, Karimi and Banihashemi proposed a characterization of ETSs, which was based on viewing an ETS as a layered superset (LSS) of a short cycle in the code's Tanner graph. A notable advantage of the LSS characterization is that it corresponds to a simple LSS-based search algorithm (expansion technique) that starts from the short cycles of the graph and finds the ETSs with LSS structure efficiently. Compared to the LSS-based characterization of Karimi and Banihashemi, which is based on a single LSS expansion technique, the new characterization involves two additional expansion techniques. The introduction of the new techniques mitigates two problems that the LSS-based characterization/search suffers from: (1) exhaustiveness: not every ETS structure is an LSS of a cycle; (2) search efficiency: the LSS-based search algorithm often requires the enumeration of cycles with length much larger than the girth of the graph, where the multiplicity of such cycles increases rapidly with their length. We prove that, using the three expansion techniques, any ETS structure can be obtained starting from a simple cycle, no matter how large the size a of the structure or the number b of its unsatisfied check nodes, i.e., the characterization is exhaustive. We also demonstrate that, for the proposed characterization/search to exhaustively cover all the ETS structures within the (a, b) classes with a ≤ a_max and b ≤ b_max, for any value of a_max and b_max, the length of the short cycles required to be enumerated is less than that of the LSS-based characterization/search. We, in fact, show that such a length for the proposed search algorithm is minimal.
We also prove that the three expansion techniques proposed here are the only expansions needed for the characterization of ETS structures starting from simple cycles in the graph, if one requires each and every intermediate sub-structure to be an ETS as well. Extensive simulation results are provided to show that, compared to the LSS-based search, significant improvements in search speed and memory requirements can be achieved.

arXiv (Cornell University), Apr 14, 2021
In this paper, we analyze the error floor of quasi-cyclic (QC) low-density parity-check (LDPC) codes decoded by the sum-product algorithm (SPA) with row layered message-passing scheduling. For this, we develop a linear state-space model of trapping sets (TSs) which incorporates the layered nature of the scheduling. We demonstrate that the contribution of each TS to the error floor is not only a function of the topology of the TS, but also depends on the row layers in which the different check nodes of the TS are located. This information, referred to as the TS layer profile (TSLP), plays an important role in the harmfulness of a TS. As a result, the harmfulness of a TS in particular, and the error floor of the code in general, can change significantly with the order in which the information of different layers, corresponding to different row blocks of the parity-check matrix, is updated. We also study the problem of finding a layer ordering that minimizes the error floor, and obtain row layered decoders with error floors significantly lower than those of their flooding counterparts. As part of our analysis, we make connections between the parameters of the state-space model for a row layered schedule and those of the flooding schedule. Simulation results are presented to show the accuracy of the analytical error floor estimates.
IEEE Transactions on Information Theory, Mar 1, 2015
In the above paper, [1], there are some erroneous entries in Tables I, III, IV, VII, and X, which are corrected here. Moreover, for the proper application of the definition of the layered superset (LSS) property to all the results of Tables I-VII in the above-mentioned paper, the LSS definition needs to be extended as described here.