1. Introduction

Throughout the paper $\mathbb{N}$ denotes the set of natural numbers, and for $n\in\mathbb{N}$ we set
$$
\Delta_n=\Bigl\{A=(a_1,a_2,\ldots,a_n);\ 0<a_k\le 1,\ 0<\sum_{k=1}^{n}a_k\le 1\Bigr\},\qquad
\Delta_n^{*}=\Bigl\{A=(a_1,a_2,\ldots,a_n);\ 0<a_k\le 1,\ \sum_{k=1}^{n}a_k=1\Bigr\},\tag{1}
$$
where $n=2,3,4,\ldots$; these are the sets of all $n$-component incomplete and complete discrete probability distributions, respectively.

For $A=(a_1,a_2,\ldots,a_n)\in\Delta_n$ and $B=(b_1,b_2,\ldots,b_n)\in\Delta_n$, we define a nonadditive measure of inaccuracy, denoted by $H(A,B;\xi)$, as
$$
H(A,B;\xi)=\frac{1}{1-\xi}\left[\left(\frac{\sum_{k=1}^{n}a_k^{(\xi^2-\xi+1)/\xi}\,b_k^{(\xi-1)/\xi}}{\bigl(\sum_{k=1}^{n}a_k\bigr)^{1/\xi}}\right)^{\!\xi}-1\right],\quad\xi>1;\qquad
H(A,B;\xi)=-\frac{\sum_{k=1}^{n}a_k\log_2 b_k}{\sum_{k=1}^{n}a_k},\quad\xi\to 1.\tag{2}
$$
If $b_k=a_k/\sum_{k=1}^{n}a_k^{\xi}$, then $H(A,B;\xi)$ reduces to the nonadditive entropy
$$
H(A;\xi)=\frac{1}{1-\xi}\left(\frac{\sum_{k=1}^{n}a_k^{\xi}}{\sum_{k=1}^{n}a_k}-1\right),\quad\xi>0,\ \xi\neq 1;\qquad
H(A;\xi)=-\frac{\sum_{k=1}^{n}a_k\log_2 a_k}{\sum_{k=1}^{n}a_k},\quad\xi\to 1.\tag{3}
$$
Entropy (3) was first characterized by Havrda and Charvát [1]. Later, Daróczy [2] and Behara and Nath [3] studied this entropy, and Vajda [4] characterized it for finite discrete generalized probability distributions. Sharma and Mittal [5] generalized this measure to what is known as the entropy of order $\alpha$ and type $\beta$. Pereira and Gur Dial [6] and Gur Dial [7] studied the Sharma–Mittal entropy in connection with a generalization of the Shannon inequality and gave applications in coding theory; Kumar and Choudhary [8] also gave applications in coding theory. Recently, Wondie and Kumar [9] gave a joint representation of Rényi's and Tsallis entropy, and Tsallis [10] gave applications in physics. For $A\in\Delta_n^{*}$ and $\xi\to 1$, $H(A;\xi)$ reduces to the Shannon [11] entropy
$$
H(A)=-\sum_{k=1}^{n}a_k\log_D a_k.\tag{4}
$$
Inequality (6) below has been generalized in the case of Rényi's entropy.
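As a quick numerical sanity check of this reduction, the identity can be verified directly; the distribution below is made up for illustration, and note that this choice of $B$ need not itself lie in $\Delta_n$, so the check is purely algebraic.

```python
def H_entropy(A, xi):
    # Nonadditive entropy (3): ((sum a^xi / sum a) - 1) / (1 - xi), xi > 0, xi != 1
    return (sum(a**xi for a in A) / sum(A) - 1.0) / (1.0 - xi)

def H_inaccuracy(A, B, xi):
    # Nonadditive inaccuracy (2) for xi > 1
    t = sum(a**((xi*xi - xi + 1.0)/xi) * b**((xi - 1.0)/xi) for a, b in zip(A, B))
    return ((t / sum(A)**(1.0/xi))**xi - 1.0) / (1.0 - xi)

A = [0.5, 0.3, 0.2]            # illustrative complete distribution
xi = 2.0
sx = sum(a**xi for a in A)     # sum a_k^xi = 0.38
B = [a / sx for a in A]        # b_k = a_k / sum a_k^xi
print(H_entropy(A, xi))                                          # ~0.62
print(abs(H_inaccuracy(A, B, xi) - H_entropy(A, xi)) < 1e-12)    # True
```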

2. Formulation of the Problem

For $\xi\to 1$ and $A,B\in\Delta_n^{*}$, an important property of Kerridge's inaccuracy [12] is that
$$
H(A)\le H(A,B).\tag{5}
$$
Equality holds if and only if $A=B$. In other words, Shannon's entropy is the minimum value of Kerridge's inaccuracy. If $A\in\Delta_n$, $B\in\Delta_n$, then (5) is no longer necessarily true. Also, the corresponding inequality
$$
H(A;\xi)\le H(A,B;\xi)\tag{6}
$$
is not necessarily true even for generalized probability distributions. Hence it is natural to ask the following question: for generalized probability distributions, what is the quantity whose minimum value is $H(A;\xi)$? We answer this question below, dividing the discussion into two cases, (i) $\xi\to 1$ and (ii) $\xi\neq 1$. We shall also assume that $n\ge 2$, because the problem is trivial for $n=1$.

Case 1. Let $\xi\to 1$. If $A,B\in\Delta_n^{*}$, then, as remarked earlier, (5) is true. For $A\in\Delta_n$, $B\in\Delta_n$, it is easily seen by Jensen's inequality that (5) holds if $\sum_{k=1}^{n}a_k\ge\sum_{k=1}^{n}b_k$, with equality in (5) if and only if
$$
\frac{a_1}{b_1}=\frac{a_2}{b_2}=\cdots=\frac{a_n}{b_n}=\frac{\sum_{k=1}^{n}a_k}{\sum_{k=1}^{n}b_k}.\tag{7}
$$
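A minimal numerical illustration of Case 1, using the $\xi\to 1$ forms of (2) and (3) and made-up incomplete distributions with $\sum a_k\ge\sum b_k$:

```python
import math

def H1(A):
    # xi -> 1 entropy: -sum a_k log2 a_k / sum a_k
    return -sum(a * math.log2(a) for a in A) / sum(A)

def H1_inacc(A, B):
    # xi -> 1 inaccuracy: -sum a_k log2 b_k / sum a_k
    return -sum(a * math.log2(b) for a, b in zip(A, B)) / sum(A)

A = [0.4, 0.3, 0.2]    # incomplete: sum = 0.9
B = [0.3, 0.3, 0.2]    # incomplete: sum = 0.8 <= sum(A)
print(H1(A) <= H1_inacc(A, B))   # inequality (5): True
```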

Case 2. Let $\xi\neq 1$. Since (6) is not necessarily true, we need the condition
$$
\sum_{k=1}^{n}a_k^{\xi-1}b_k\le 1,\quad\xi>1,\tag{8}
$$
under which $H(A;\xi)\le H(A,B;\xi)$ holds, with equality if and only if $b_k=a_k/\sum_{k=1}^{n}a_k^{\xi}$.

Since $\xi>1$, we apply the reverse Hölder inequality: if $n=2,3,\ldots$, $\gamma>1$, and $x_1,\ldots,x_n$, $y_1,\ldots,y_n$ are positive real numbers, then
$$
\left(\sum_{k=1}^{n}x_k^{1/\gamma}\right)^{\!\gamma}\cdot\left(\sum_{k=1}^{n}y_k^{-1/(\gamma-1)}\right)^{\!-(\gamma-1)}\le\sum_{k=1}^{n}x_k y_k.\tag{9}
$$
Let $\gamma=\xi/(\xi-1)$, $x_k=a_k^{(\xi^2-\xi+1)/(\xi-1)}b_k$, and $y_k=a_k^{\xi/(1-\xi)}$ $(k=1,2,\ldots,n)$.

Putting these values into (9), we get
$$
\left(\sum_{k=1}^{n}a_k^{(\xi^2-\xi+1)/\xi}\,b_k^{(\xi-1)/\xi}\right)^{\!\xi/(\xi-1)}\left(\sum_{k=1}^{n}a_k^{\xi}\right)^{\!1/(1-\xi)}\le\sum_{k=1}^{n}a_k^{\xi-1}b_k\le 1,\tag{10}
$$
where we have also used (8). This implies
$$
\left(\sum_{k=1}^{n}a_k^{(\xi^2-\xi+1)/\xi}\,b_k^{(\xi-1)/\xi}\right)^{\!\xi}\le\sum_{k=1}^{n}a_k^{\xi},\tag{11}
$$
or, dividing both sides by $\sum_{k=1}^{n}a_k$,
$$
\left(\frac{\sum_{k=1}^{n}a_k^{(\xi^2-\xi+1)/\xi}\,b_k^{(\xi-1)/\xi}}{\bigl(\sum_{k=1}^{n}a_k\bigr)^{1/\xi}}\right)^{\!\xi}\le\frac{\sum_{k=1}^{n}a_k^{\xi}}{\sum_{k=1}^{n}a_k}.\tag{12}
$$
Using (12) and the fact that $1/(1-\xi)<0$ for $\xi>1$, we get (6).
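The chain leading to (6) can be spot-checked numerically; the distributions below are made up, with $B$ chosen so that condition (8) holds:

```python
xi = 2.0
A = [0.5, 0.3, 0.2]
B = [0.6, 0.25, 0.15]

# condition (8): sum a^(xi-1) b <= 1
assert sum(a**(xi - 1.0) * b for a, b in zip(A, B)) <= 1.0

H = (sum(a**xi for a in A) / sum(A) - 1.0) / (1.0 - xi)            # entropy (3)
t = sum(a**((xi*xi - xi + 1.0)/xi) * b**((xi - 1.0)/xi) for a, b in zip(A, B))
Hab = ((t / sum(A)**(1.0/xi))**xi - 1.0) / (1.0 - xi)              # inaccuracy (2)
print(H <= Hab)    # inequality (6): True
```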

Particular Case. As $\xi\to 1$, (6) becomes
$$
H(A)\le H(A,B),\tag{13}
$$
which is inequality (5) for Kerridge's inaccuracy [12].

3. Mean Codeword Length and Its Bounds

We now give an application of inequality (6) in coding theory for
$$
\Delta_n^{*}=\Bigl\{A=(a_1,a_2,\ldots,a_n);\ 0<a_k\le 1,\ \sum_{k=1}^{n}a_k=1\Bigr\}.\tag{14}
$$
Let a finite set of $n$ input symbols
$$
X=\{x_1,x_2,\ldots,x_n\}\tag{15}
$$
be encoded using an alphabet of $D$ symbols. It has been shown by Feinstein [13] that there is a uniquely decipherable code with lengths $N_1,N_2,\ldots,N_n$ if and only if the Kraft inequality holds, that is,
$$
\sum_{k=1}^{n}D^{-N_k}\le 1,\tag{16}
$$
where $D$ is the size of the code alphabet.
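For example, the binary code $\{0, 10, 110, 111\}$ is uniquely decipherable and satisfies (16) with equality:

```python
D = 2
lengths = [1, 2, 3, 3]                 # lengths of codewords 0, 10, 110, 111
kraft = sum(D**(-N) for N in lengths)  # 1/2 + 1/4 + 1/8 + 1/8
print(kraft)                           # 1.0, so (16) holds
```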

Furthermore, if
$$
L=\sum_{k=1}^{n}N_k a_k\tag{17}
$$
is the average codeword length, then for a code satisfying (16) the inequality
$$
L\ge H(A)\tag{18}
$$
is also fulfilled, with equality if and only if
$$
N_k=-\log_D a_k,\quad k=1,\ldots,n;\tag{19}
$$
moreover, by suitably encoding long sequences of symbols, the average length can be made arbitrarily close to $H(A)$ (see Feinstein [13]). This is Shannon's noiseless coding theorem.
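A small sketch of the theorem for $D=2$, using the Shannon lengths $N_k=\lceil -\log_2 a_k\rceil$; the dyadic distribution below is chosen so that the bound is attained exactly:

```python
import math

A = [0.5, 0.25, 0.125, 0.125]                 # dyadic probabilities
N = [math.ceil(-math.log2(a)) for a in A]     # Shannon lengths satisfy (16)
L = sum(n * a for n, a in zip(N, A))          # average length (17)
H = -sum(a * math.log2(a) for a in A)         # Shannon entropy (4), D = 2
assert H <= L < H + 1                         # (18) plus the classical upper bound
print(N, L, H)                                # [1, 2, 3, 3] 1.75 1.75
```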

By considering Rényi's entropy (see, e.g., [14]), a coding theorem analogous to the noiseless coding theorem above was established by Campbell [15], who obtained bounds for his mean length in terms of
$$
H_{\xi}(A)=\frac{1}{1-\xi}\log_D\sum_{k=1}^{n}a_k^{\xi},\quad\xi>0,\ \xi\neq 1.\tag{20}
$$
Kieffer [16] defined a class of rules and showed that $H_{\xi}(A)$ gives the best decision rule for deciding which of two sources can be coded with smaller expected cost for sequences of length $N$ as $N\to\infty$, where the cost of encoding a sequence is assumed to be a function of its length only. Further, Jelinek [17] showed that coding with respect to Campbell's mean length is useful in minimizing the problem of buffer overflow, which occurs when source symbols are produced at a fixed rate and the codewords are stored temporarily in a finite buffer. Concerning Campbell's mean length the reader may consult [15].
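Rényi's entropy (20) is straightforward to compute; for the uniform distribution on four symbols it equals $\log_2 4=2$ for every order $\xi$, which gives a convenient check:

```python
import math

def renyi(A, xi, D=2):
    # Renyi entropy of order xi, base D, as in (20)
    return math.log(sum(a**xi for a in A), D) / (1.0 - xi)

U = [0.25, 0.25, 0.25, 0.25]     # uniform distribution on 4 symbols
print(renyi(U, 2.0))             # ~2.0
print(renyi(U, 0.5))             # ~2.0
```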

It may be seen that the mean codeword length (17) was generalized parametrically by Campbell [15], and its bounds were studied in terms of generalized measures of entropy. Here we give another generalization of (17) and study its bounds in terms of the generalized entropy of order $\xi$.

Generalized coding theorems obtained by considering different information measures under the condition of unique decipherability have been investigated by several authors; see, for instance, the papers [6, 13, 18].

An investigation is carried out concerning discrete memoryless sources possessing an additional parameter $\xi$, which seems to be significant in problems of storage and transmission (see [9, 16–18]).

In this section we study a coding theorem for a new information measure depending on a parameter. Our motivation is, among others, that this quantity generalizes some information measures already existing in the literature, such as the Arndt [19] entropy, which is used in physics.

Definition 1. Let $n\in\mathbb{N}$ and $\xi>0$ $(\xi\neq 1)$ be arbitrarily fixed. Then the mean length $L_{\xi}$ corresponding to the generalized information measure $H(A;\xi)$ is given by the formula
$$
L_{\xi}=\frac{1}{1-\xi}\left[\left(\sum_{k=1}^{n}a_k^{(\xi^2-\xi+1)/\xi}\,D^{N_k(1-\xi)/\xi}\right)^{\!\xi}-1\right],\tag{21}
$$
where $A=(a_1,\ldots,a_n)\in\Delta_n^{*}$ and $D,N_1,N_2,\ldots,N_n$ are positive integers such that
$$
\sum_{k=1}^{n}a_k^{\xi-1}D^{-N_k}\le 1.\tag{22}
$$
Since (22) reduces to the Kraft inequality when $\xi=1$, it is called the generalized Kraft inequality, and codes obtained under this generalized inequality are called personal codes.
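A direct transcription of Definition 1 (the distribution and lengths below are made-up values chosen to satisfy (22)):

```python
def L_xi(A, N, xi, D=2):
    # generalized mean codeword length (21)
    t = sum(a**((xi*xi - xi + 1.0)/xi) * float(D)**(n*(1.0 - xi)/xi)
            for a, n in zip(A, N))
    return (t**xi - 1.0) / (1.0 - xi)

A = [0.5, 0.25, 0.25]
N = [1, 2, 2]
xi, D = 2.0, 2

# generalized Kraft inequality (22)
assert sum(a**(xi - 1.0) * D**(-n) for a, n in zip(A, N)) <= 1.0
print(L_xi(A, N, xi, D))    # ~0.859375
```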

Theorem 2. Let $n\in\mathbb{N}$ and $\xi>1$ be arbitrarily fixed. Then there exist codeword lengths $N_1,\ldots,N_n$ such that
$$
H(A;\xi)\le L_{\xi}<D^{1-\xi}H(A;\xi)+\frac{D^{1-\xi}-1}{1-\xi}\tag{23}
$$
holds under condition (22), with equality on the left if and only if
$$
N_k=-\log_D\frac{a_k}{\sum_{k=1}^{n}a_k^{\xi}},\quad k=1,2,\ldots,n,\tag{24}
$$
where $H(A;\xi)$ and $L_{\xi}$ are given by (3) and (21), respectively.

Proof. We first prove the lower bound for $L_{\xi}$.

By the reverse Hölder inequality, if $n=2,3,\ldots$, $\gamma>1$, and $x_1,\ldots,x_n$, $y_1,\ldots,y_n$ are positive real numbers, then
$$
\left(\sum_{k=1}^{n}x_k^{1/\gamma}\right)^{\!\gamma}\cdot\left(\sum_{k=1}^{n}y_k^{-1/(\gamma-1)}\right)^{\!-(\gamma-1)}\le\sum_{k=1}^{n}x_k y_k.\tag{25}
$$
Let $\gamma=\xi/(\xi-1)$, $x_k=a_k^{(\xi^2-\xi+1)/(\xi-1)}D^{-N_k}$, and $y_k=a_k^{\xi/(1-\xi)}$ $(k=1,2,\ldots,n)$.

Putting these values into (25), we get
$$
\left(\sum_{k=1}^{n}a_k^{(\xi^2-\xi+1)/\xi}\,D^{-N_k(\xi-1)/\xi}\right)^{\!\xi/(\xi-1)}\left(\sum_{k=1}^{n}a_k^{\xi}\right)^{\!1/(1-\xi)}\le\sum_{k=1}^{n}a_k^{\xi-1}D^{-N_k}\le 1,\quad\xi>1,\tag{26}
$$
where we have also used (22). This implies
$$
\left(\sum_{k=1}^{n}a_k^{(\xi^2-\xi+1)/\xi}\,D^{-N_k(\xi-1)/\xi}\right)^{\!\xi/(\xi-1)}\le\left(\sum_{k=1}^{n}a_k^{\xi}\right)^{\!1/(\xi-1)}.\tag{27}
$$
For $\xi>1$, raising both sides of (27) to the power $\xi-1>0$ gives
$$
\left(\sum_{k=1}^{n}a_k^{(\xi^2-\xi+1)/\xi}\,D^{-N_k(\xi-1)/\xi}\right)^{\!\xi}\le\sum_{k=1}^{n}a_k^{\xi};\tag{28}
$$
using (28) and the fact that $1/(1-\xi)<0$ for $\xi>1$, we get
$$
H(A;\xi)\le L_{\xi}.\tag{29}
$$
From (24), after simplification, we get
$$
a_k^{(\xi^2-\xi+1)/\xi}\,D^{-N_k(\xi-1)/\xi}=\frac{a_k^{\xi}}{\bigl(\sum_{k=1}^{n}a_k^{\xi}\bigr)^{(\xi-1)/\xi}}.\tag{30}
$$
Summing over $k$ and raising to the power $\xi$, this implies
$$
\left(\sum_{k=1}^{n}a_k^{(\xi^2-\xi+1)/\xi}\,D^{-N_k(\xi-1)/\xi}\right)^{\!\xi}=\sum_{k=1}^{n}a_k^{\xi},\tag{31}
$$
which gives $L_{\xi}=H(A;\xi)$; hence the equality sign holds in (29).

We now prove the upper bound for $L_{\xi}$ in (23).

We choose the codeword lengths $N_k$, $k=1,\ldots,n$, in such a way that
$$
-\log_D\frac{a_k}{\sum_{k=1}^{n}a_k^{\xi}}\le N_k<-\log_D\frac{a_k}{\sum_{k=1}^{n}a_k^{\xi}}+1\tag{32}
$$
is fulfilled for all $k=1,\ldots,n$.

From the left inequality of (32), we have
$$
D^{-N_k}\le\frac{a_k}{\sum_{k=1}^{n}a_k^{\xi}};\tag{33}
$$
multiplying both sides by $a_k^{\xi-1}$ and then summing over $k$, we get the generalized Kraft inequality (22). So there exists a generalized code with codeword lengths $N_k$, $k=1,\ldots,n$.

Since $\xi>1$, (32) can be written as
$$
\left(\frac{a_k}{\sum_{k=1}^{n}a_k^{\xi}}\right)^{\!(\xi-1)/\xi}\ge D^{-N_k(\xi-1)/\xi}>\left(\frac{a_k}{\sum_{k=1}^{n}a_k^{\xi}}\right)^{\!(\xi-1)/\xi}D^{-(\xi-1)/\xi}.\tag{34}
$$
Multiplying (34) throughout by $a_k^{(\xi^2-\xi+1)/\xi}$, summing from $k=1$ to $n$, and raising each member to the power $\xi>1$, we obtain the inequality
$$
\sum_{k=1}^{n}a_k^{\xi}\ge\left(\sum_{k=1}^{n}a_k^{(\xi^2-\xi+1)/\xi}\,D^{N_k(1-\xi)/\xi}\right)^{\!\xi}>D^{1-\xi}\sum_{k=1}^{n}a_k^{\xi}.\tag{35}
$$

Since $1-\xi<0$ for $\xi>1$, subtracting $1$ from each member of (35) and dividing by $1-\xi$ reverses the inequalities, and we obtain (23).
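The two bounds of Theorem 2 can be spot-checked numerically; the distribution and the order $\xi=1.5$ below are made up, with $\xi$ chosen close enough to $1$ that the lengths prescribed by (32) stay positive:

```python
import math

xi, D = 1.5, 2
A = [0.5, 0.3, 0.2]
sx = sum(a**xi for a in A)

# codeword lengths chosen as in (32): N_k = ceil(-log_D(a_k / sum a^xi))
N = [math.ceil(-math.log(a / sx, D)) for a in A]
assert all(n >= 1 for n in N)
assert sum(a**(xi - 1.0) * D**(-n) for a, n in zip(A, N)) <= 1.0   # (22)

H = (sx / sum(A) - 1.0) / (1.0 - xi)                               # entropy (3)
t = sum(a**((xi*xi - xi + 1.0)/xi) * D**(n*(1.0 - xi)/xi) for a, n in zip(A, N))
L = (t**xi - 1.0) / (1.0 - xi)                                     # mean length (21)
upper = D**(1.0 - xi) * H + (D**(1.0 - xi) - 1.0) / (1.0 - xi)
print(H <= L < upper)    # inequality (23): True
```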

Particular Case. For $\xi\to 1$, (23) becomes
$$
\frac{H(A)}{\log D}\le L<\frac{H(A)}{\log D}+1,\tag{36}
$$
which is the classical Shannon [11] noiseless coding theorem, with equality on the left if and only if
$$
N_k=-\log_D a_k,\quad k=1,2,\ldots,n,\tag{37}
$$
where $H(A)$ and $L$ are given by (4) and (17), respectively.