
Channel Coding Rate in the Finite Blocklength Regime



Abstract:

This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels, new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ε is closely approximated by C − √(V/n) Q⁻¹(ε), where C is the capacity, V is a characteristic of the channel referred to as channel dispersion, and Q is the complementary Gaussian cumulative distribution function.
Published in: IEEE Transactions on Information Theory ( Volume: 56, Issue: 5, May 2010)
Page(s): 2307 - 2359
Date of Publication: 19 April 2010


I. Introduction

The proof of the channel coding theorem involves three stages:

Converse: an upper bound on the size of any code with given arbitrary blocklength and error probability.

Achievability: a lower bound on the size of a code that can be guaranteed to exist with given arbitrary blocklength and error probability.

Asymptotics: the bounds on the log size of the code normalized by blocklength asymptotically coincide as a result of the law of large numbers (memoryless channels) or another ergodic theorem (for channels with memory).
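The normal approximation C − √(V/n) Q⁻¹(ε) from the abstract can be evaluated directly once the capacity and dispersion are known in closed form. The following sketch does this for a binary symmetric channel with crossover probability δ, for which C = 1 − h(δ) and V = δ(1 − δ) log₂²((1 − δ)/δ); the function name and the parameter values (δ = 0.11, ε = 10⁻³) are illustrative choices, not taken from the paper.

```python
import math
from statistics import NormalDist

def bsc_normal_approx(delta: float, n: int, eps: float) -> float:
    """Normal approximation C - sqrt(V/n) * Q^{-1}(eps) for a BSC(delta).

    Closed forms for the binary symmetric channel (logs in bits):
      C = 1 - h(delta)
      V = delta * (1 - delta) * (log2((1 - delta) / delta))**2
    """
    h = -delta * math.log2(delta) - (1 - delta) * math.log2(1 - delta)
    capacity = 1 - h
    dispersion = delta * (1 - delta) * math.log2((1 - delta) / delta) ** 2
    # Q^{-1}(eps) = Phi^{-1}(1 - eps), via the standard normal inverse CDF
    q_inv = NormalDist().inv_cdf(1 - eps)
    return capacity - math.sqrt(dispersion / n) * q_inv

# At blocklength n = 100 the back-off from capacity is already substantial,
# and it shrinks like 1/sqrt(n) as the blocklength grows.
rate_100 = bsc_normal_approx(delta=0.11, n=100, eps=1e-3)
rate_1000 = bsc_normal_approx(delta=0.11, n=1000, eps=1e-3)
```

Since Q⁻¹(0.5) = 0, setting ε = 0.5 recovers the capacity itself, which is a convenient sanity check on the implementation.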

