Abstract:
Given a discrete memoryless source X, it is well known that the expected codeword length per symbol L_n(X) of an optimal prefix code for the extended source X^n converges to the source entropy as n approaches infinity. However, the sequence L_n(X) need not be monotonic in n, which implies that the coding efficiency cannot be increased by simply encoding a larger block of source symbols (unless the block length is appropriately chosen). As the encoding and decoding complexity increases exponentially with the block length, from a practical perspective it is useful to know when an increase in the block length guarantees a decrease in the expected codeword length per symbol. While this paper does not provide a complete answer to that question, we give some properties of L_n(X) and obtain, for each n ≥ 1 and nondyadic p_1^n (where p_1 is the probability of the most likely source symbol), an integer k* for which L_{kn}(X) < L_n(X) for all k ≥ k*, implying that the coding efficiency of encoding blocks of length kn is higher than that of encoding blocks of length n for all k ≥ k*. This question is simpler in part because L_{kn}(X) ≤ L_n(X) is guaranteed for all n ≥ 1 and k ≥ 1, but our results distinguish the scenarios in which increasing the multiplicative factor guarantees a strict improvement. These results extend and generalize those of Montgomery and Kumar.
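To make the quantity L_n(X) concrete, the following minimal Python sketch computes it by building an optimal (Huffman) prefix code for the extended source X^n and normalizing by the block length. The helper names `huffman_expected_length` and `L_per_symbol`, and the example distribution p = (0.7, 0.3), are illustrative choices, not taken from the paper; the sketch only demonstrates the definition of L_n(X), not the paper's monotonicity results.

```python
import heapq
import itertools
import math

def huffman_expected_length(probs):
    """Expected codeword length of an optimal prefix (Huffman) code for `probs`.

    Uses the fact that the expected length equals the sum of the weights of
    all internal nodes created while merging the two smallest probabilities.
    """
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b          # weight of the new internal node
        heapq.heappush(heap, a + b)
    return total

def L_per_symbol(p, n):
    """L_n(X)/n viewed per source symbol: optimally encode blocks of length n.

    The extended source X^n has one block-probability per n-tuple of symbols.
    """
    block_probs = [math.prod(t) for t in itertools.product(p, repeat=n)]
    return huffman_expected_length(block_probs) / n

# Example (illustrative source, not from the paper): p = (0.7, 0.3).
p = [0.7, 0.3]
for n in (1, 2, 3):
    print(n, L_per_symbol(p, n))
```

For this source, L_1(X) = 1.0 while L_2(X)/2 = 0.905, and every value stays at or above the entropy H(X) ≈ 0.8813 bits, consistent with the convergence stated in the abstract.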
Published in: IEEE Transactions on Information Theory (Volume: 55, Issue: 4, April 2009)