Complex Stochastic Boolean Systems: Comparing Bitstrings with the Same Hamming Weight

A complex stochastic Boolean system (CSBS) is a complex system depending on an arbitrarily large number n of random Boolean variables. CSBSs arise in many different areas of science and engineering. A proper mathematical model for the analysis of such systems is based on the intrinsic order: a partial order relation defined on the set {0, 1}^n of all binary n-tuples of 0s and 1s. The intrinsic order enables one to compare the occurrence probabilities of two given binary n-tuples with no need to compute them, simply by looking at the relative positions of their 0s and 1s. Regarding the analysis of CSBSs, the intrinsic order reduces the complexity of the problem from exponential (2^n binary n-tuples) to linear (n Boolean variables). In this paper, using the intrinsic ordering, we compare the occurrence probabilities of any two binary n-tuples having the same number of 1-bits (i.e., the same Hamming weight). Our results can be applied to any CSBS with mutually independent Boolean variables.


Introduction
This paper deals with the mathematical modeling of a special kind of complex systems, namely, those depending on an arbitrary number n of random Boolean variables. That is, the n basic variables x_1, ..., x_n of the system are assumed to be stochastic and they only take two possible values, 0 or 1, with probabilities

Pr{x_i = 1} = p_i, Pr{x_i = 0} = 1 - p_i (1 <= i <= n), (1)

where the values {p_i}_{i=1}^{n} will be referred to as the basic probabilities or parameters of the system.
We call such a system a complex stochastic Boolean system (CSBS). These systems can be found in many different scientific or engineering areas like mechanical engineering, meteorology and climatology, nuclear physics, complex systems analysis, operations research, and so forth. CSBSs also arise very often when analyzing system safety in reliability engineering and risk analysis; see, for example, [1-3].
Each one of the 2^n possible outcomes associated with a CSBS is given by a binary n-tuple (or bitstring of length n) u = (u_1, ..., u_n) ∈ {0, 1}^n, and it has its own occurrence probability Pr{u}.
Throughout this paper, the n Boolean variables x_i of the CSBS are assumed to be mutually independent, so that the occurrence probability of a given binary string u of length n can be easily computed as

Pr{u} = ∏_{i=1}^{n} Pr{x_i = u_i}; (2)

that is, Pr{u} is the product of n factors, namely p_i if u_i = 1 and 1 - p_i if u_i = 0. As an example of a CSBS, we can consider a technical system like the accumulator system of a pressurized water reactor in a nuclear power plant, taken from [4]. This technical system depends on n = 83 mutually independent basic components x_1, ..., x_83. Assuming that x_i = 1 if component i fails and x_i = 0 otherwise, the failure and working probabilities of component i will be Pr{x_i = 1} = p_i and Pr{x_i = 0} = 1 - p_i, respectively. The probability p_i of failure of each basic component i (1 <= i <= 83) is given, so that all basic probabilities p_i are known.
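Formula (2) can be sketched in a few lines of code. The function name below is ours, not from the paper; it simply multiplies the factors p_i or 1 - p_i described above.

```python
from itertools import product

def occurrence_probability(u, p):
    """Pr{u} via formula (2): the product of p_i (if u_i = 1)
    or 1 - p_i (if u_i = 0), for mutually independent variables."""
    assert len(u) == len(p)
    prob = 1.0
    for bit, p_i in zip(u, p):
        prob *= p_i if bit == 1 else 1.0 - p_i
    return prob

# Sanity check: the 2**n probabilities of all outcomes sum to 1.
p = [0.1, 0.2, 0.3]  # illustrative parameter values
total = sum(occurrence_probability(u, p) for u in product((0, 1), repeat=3))
```

For instance, with the illustrative parameters above, Pr{(0, 1, 1)} = 0.9 * 0.2 * 0.3 = 0.054.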
Thus, this accumulator system can be considered as a CSBS where each one of its 2^83 binary system states (i.e., binary 83-tuples (u_1, ..., u_83) ∈ {0, 1}^83) describes the current state of the system. For instance, the binary 83-tuple u whose first 23 bits are 1 and whose last 60 bits are 0 represents the system state for which the first 23 components fail, while the last 60 components work. Moreover, the occurrence probability of the binary state u can be immediately computed using (2) as follows:

Pr{u} = ∏_{i=1}^{23} p_i ∏_{i=24}^{83} (1 - p_i).

Real-world CSBSs arising in many different engineering areas, like the above-mentioned accumulator system, are typically analyzed in works dealing with system safety and reliability theory. In this context, let us mention that formula (2) for computing the binary n-tuple probabilities associated with a CSBS on n independent Boolean variables (basic components) can be seen as a particular case of Theorem 1 in [5]. In that theorem, the author considers a coherent or noncoherent n-component system with dependent or independent failures and proves that the reliability/unreliability function h(p) of the system is a multilinear function of the vector p of all conditional component (success/failure) probabilities. Let us recall that a system is said to be coherent if its structure (Boolean) function φ satisfies [5] φ(0, ..., 0) = 0, φ(1, ..., 1) = 1, and if u, v ∈ {0, 1}^n are such that u_i <= v_i for all i = 1, 2, ..., n, and u_i < v_i for some i, then φ(u) <= φ(v). Moreover, we must highlight that the assumption of independent failures (an essential hypothesis for formula (2) and for the results presented in our paper), while classical and advantageous from a computational point of view, is somewhat restrictive for realistic applications. In this respect, [6] provides an illustration of the issues that arise when realistic systems are addressed.
The behavior of a CSBS is determined by the ordering between the current values of the 2^n associated binary n-tuple probabilities Pr{u}. Due to the exponential complexity of the problem, computing all these 2^n probabilities by using (2) and ordering them in decreasing or increasing order of their values is only possible in practice for small values of the number n of basic variables. However, for large values of n, it is necessary to use alternative procedures to compare the binary string probabilities. For this purpose, a simple, positional criterion to order binary n-tuple probabilities is used. The so-called intrinsic order criterion (IOC) enables one to compare (to order) the occurrence probabilities Pr{u}, Pr{v} of two given binary n-tuples u, v without the need to compute them, simply by looking at the relative positions of their 0s and 1s. IOC was first described in [7], and it naturally leads to a partial order relation on the set {0, 1}^n of all the binary strings of length n. The so-called intrinsic order provides a unified approach for the analysis and modeling of CSBSs.
The most useful representation of a CSBS is the intrinsic order graph: a symmetric directed graph on 2^n vertices, displaying all the 2^n binary n-tuples from top to bottom in decreasing order of their occurrence probabilities.
In this context, the main goal of this paper is to compare the occurrence probabilities of two binary strings with the same length n and containing the same number of 1-bits. Our results will be derived from IOC, as well as from other properties of the intrinsic ordering, and they will be illustrated with the intrinsic order graph.
For this purpose, this paper has been organized as follows. Section 2 contains all the background about the intrinsic order relation required to make this paper self-contained. Section 3 is devoted to presenting our new results concerning the comparison between the occurrence probabilities of two binary strings with the same number of 1-bits. Finally, conclusions are presented in Section 4.

Preliminaries
Let us start this section with some basic concepts and notations which will be used in the rest of the paper.

Definition 1. Let n >= 1 and let u = (u_1, ..., u_n) ∈ {0, 1}^n be a binary n-tuple. Then we have the following:

(i) the decimal equivalent of u is the natural number ∑_{i=1}^{n} 2^{n-i} u_i (we use the same symbol for a binary n-tuple and for its decimal equivalent);

(ii) the Hamming weight of u, denoted by w_H(u), is the number of its 1-bits;

(iii) the complementary n-tuple of u is the binary n-tuple obtained by changing all its 0s to 1s and all its 1s to 0s.
2.1. The Intrinsic Order Criterion. According to (2), the ordering between two given binary string probabilities Pr{u} and Pr{v} depends, in general, on the parameters p_i, as the following simple example shows.
Example 3. Let n = 3, u = (0, 1, 1), and v = (1, 0, 0). Using (2), we have

Pr{u} = (1 - p_1) p_2 p_3, Pr{v} = p_1 (1 - p_2)(1 - p_3),

so that the ordering between Pr{u} and Pr{v} depends on the values of the parameters p_i.

However, assuming a simple (but not restrictive in practice) hypothesis on the parameters p_i, we can assure that for some pairs of binary n-tuples, the ordering between their occurrence probabilities is independent of the basic probabilities p_i. More precisely, as mentioned in Section 1, to overcome the exponential complexity in the problem of computing and sorting the 2^n binary string probabilities, the following simple positional criterion has been introduced in [7].
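Example 3 can be checked numerically. The two parameter sets below are illustrative choices of ours (both nondecreasing and bounded by 1/2, as required later by condition (13)); they yield opposite orderings of Pr{u} and Pr{v}.

```python
def pr(u, p):
    """Pr{u} via formula (2) for mutually independent variables."""
    prob = 1.0
    for bit, p_i in zip(u, p):
        prob *= p_i if bit else 1.0 - p_i
    return prob

u, v = (0, 1, 1), (1, 0, 0)  # the two 3-tuples of Example 3

p_a = [0.1, 0.2, 0.3]  # here Pr{u} = 0.054 < Pr{v} = 0.056
p_b = [0.2, 0.3, 0.4]  # here Pr{u} = 0.096 > Pr{v} = 0.084
```

So u and v are incomparable: no parameter-free ordering of their probabilities exists, which is exactly why a positional criterion can only apply to some pairs of n-tuples.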
Theorem 4 (intrinsic order criterion, IOC). Let x_1, ..., x_n be n mutually independent Boolean variables whose parameters satisfy

p_1 <= p_2 <= ... <= p_n <= 1/2. (13)

Then, for all u, v ∈ {0, 1}^n, Pr{u} >= Pr{v} if and only if the matrix M_u^v, whose first row is u and whose second row is v, either has no (1 0) columns, or for each (1 0) column in M_u^v there exists (at least) one corresponding preceding (0 1) column.

Remark 5. In the following, we assume that the parameters p_i always satisfy condition (13). Note that this hypothesis is not restrictive for practical applications because, for any index i such that p_i > 1/2, we only need to consider the variable y_i = 1 - x_i instead of x_i; next, we order the n (new) Boolean variables in increasing order of their probabilities. This reduces the complexity of the sorting task from exponential (ordering the 2^n binary n-tuple probabilities) to linear (ordering the n Boolean variable probabilities).
Remark 6. The (0 1) column preceding each (1 0) column is not required to be placed at the immediately previous position, but just at some previous position.

Remark 7. The term "corresponding," used in Theorem 4, has the following meaning: for each two (1 0) columns in matrix M_u^v, there must exist (at least) two different (0 1) columns preceding each of them. Formally, there must exist (at least) one injective precedence map from the set of (1 0) columns of M_u^v into the set of its (0 1) columns, assigning to each (1 0) column a (0 1) column preceding it. In other words, for each (1 0) column in matrix M_u^v, the number of preceding (0 1) columns must be strictly greater than the number of preceding (1 0) columns.
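The counting reformulation in Remark 7 yields a one-pass test for the intrinsic order. The following is a minimal sketch (the function name is ours): it scans the columns of M_u^v from left to right and, at every (1 0) column, verifies that the (0 1) columns seen so far strictly outnumber the (1 0) columns seen so far.

```python
def intrinsically_geq(u, v):
    """IOC check: True iff Pr{u} >= Pr{v} for ALL parameter values
    satisfying p_1 <= ... <= p_n <= 1/2 (condition (13))."""
    count_01 = count_10 = 0  # (0 1) and (1 0) columns seen so far
    for u_i, v_i in zip(u, v):
        if (u_i, v_i) == (1, 0):
            # Remark 7: preceding (0 1) columns must strictly outnumber
            # preceding (1 0) columns.
            if count_01 <= count_10:
                return False
            count_10 += 1
        elif (u_i, v_i) == (0, 1):
            count_01 += 1
    return True
```

For instance, intrinsically_geq((0, 0, 1, 1), (1, 1, 0, 0)) holds (3 ⪰ 12 for n = 4), while neither (0, 1, 1) nor (1, 0, 0) dominates the other, matching Example 3.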
The matrix condition IOC, stated by Theorem 4, is called the intrinsic order criterion because it is independent of the basic probabilities p_i: it only depends on the relative positions of the 0s and 1s in the binary n-tuples u and v to be compared. Theorem 4 naturally leads to the following partial order relation on the set {0, 1}^n; see [7].

Definition 8. For all u, v ∈ {0, 1}^n, v ⪯ u (equivalently, u ⪰ v) if and only if matrix M_u^v satisfies IOC. The so-called intrinsic order will be denoted by "⪯", and when v ⪯ u we will say that v is intrinsically less than or equal to u.
Note that if u ⪰ v, then there must exist (at least) one injective precedence map from the set of (1 0) columns of M_u^v into the set of its (0 1) columns (Remark 7). Then, in matrix M_u^v, the number of (1 0) columns must be less than or equal to the number of (0 1) columns. But this is equivalent to saying that the number of 1-bits in u must be less than or equal to the number of 1-bits in v. That is, we have the following.

Corollary 13. The intrinsic order respects the Hamming weight. More precisely, for all u, v ∈ {0, 1}^n,

u ⪰ v ⟹ w_H(u) <= w_H(v).

The converse of Corollary 13 is false, as simple counterexamples show: for instance, for n = 3 we have w_H(4) = 1 <= 2 = w_H(3), and yet 4 ⋡ 3 (Example 3).

The Intrinsic Order Graph.
To finish this section, we present the graphical representation of the poset I_n. As is well known, the usual representation for a poset is its Hasse diagram (see [8] for more details about posets and their diagrams). Let us recall that, for a poset (P, <=) and for x, y ∈ P, we say that y covers x if x < y with no other elements between them. The Hasse diagram of a finite poset (P, <=) is the graph whose vertices are the elements of P and whose edges are the cover relations (with the convention that if y covers x, then y is drawn above x). The Hasse diagram of our poset I_n will also be called the intrinsic order graph for n variables, denoted as well by I_n. So, this is a directed graph (digraph, for short) whose vertices are the 2^n binary n-tuples of 0s and 1s, and whose edges go downward from u to v whenever u covers v (denoted by u ⊳ v); that is, whenever u ≻ v and there is no w ∈ {0, 1}^n such that u ≻ w ≻ v.

For small values of n, the intrinsic order graph I_n can be directly constructed by using Theorem 4. For instance, for n = 1, the Hasse diagram of I_1 = ({0, 1}, ⪯) is shown in Figure 1. Indeed, using Theorem 4, we have that 0 ≻ 1, since the matrix M_0^1, whose single column is (0 1), has no (1 0) columns! So, I_1 has a downward edge from 0 to 1, and this is in accordance with the fact that Pr{0} = 1 - p_1 >= p_1 = Pr{1}, since p_1 <= 1/2 due to (13). However, for large values of n, a more efficient method is needed. For this purpose, in [9] the following algorithm for iteratively building up I_n (for all n >= 2) from I_1 (depicted in Figure 1) has been developed.

Theorem 14 (building up I_n from I_1). Let n >= 2. The graph of the poset I_n = {0, ..., 2^n - 1} (on 2^n nodes) can be drawn simply by adding to the graph of the poset I_{n-1} = {0, ..., 2^{n-1} - 1} (on 2^{n-1} nodes) its isomorphic copy 2^{n-1} + I_{n-1} = {2^{n-1}, ..., 2^n - 1} (on 2^{n-1} nodes). This addition must be performed by placing the powers of 2 at consecutive levels in the Hasse diagram of I_n. Finally, the edges connecting one vertex u of I_{n-1} with the other vertex v of 2^{n-1} + I_{n-1} are given by the set of vertex pairs

{(u, v) | v = u + 2^{n-2}, 2^{n-2} <= u <= 2^{n-1} - 1}.

In Figure 2, the iterative process described in Theorem 14 is illustrated by showing the intrinsic order graphs for n = 1, 2, 3, 4 from left to right. Basically, to draw I_n, we first add to I_{n-1} its isomorphic copy 2^{n-1} + I_{n-1}, and then we connect one-to-one the nodes of "the second half of the first half" to the nodes of "the first half of the second half." Hence, the intrinsic order graph I_n is a fractal or self-similar graph, in the sense that it can be recursively constructed from the previous graphs (I_1, I_2, ..., I_{n-1}) by operations which preserve self-similarity; that is, it appears similar at different scales (orders). In Figure 3, the intrinsic order graphs for n = 3, 4 are depicted using the binary representation instead of the decimal representation of their nodes. Note that I_n has a downward path from u to v if and only if u ≻ v. For instance, looking at the digraph I_4 (the rightmost one in Figure 2), we confirm that 3 ≻ 12, as shown in Example 10.
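The recursion of Theorem 14 can be sketched directly in code, with nodes named by their decimal equivalents. Encoding the connecting edges as the pairs (u, u + 2^(n-2)) for 2^(n-2) <= u <= 2^(n-1) - 1 is our reading of "the second half of the first half" joined to "the first half of the second half"; the function name is ours.

```python
def intrinsic_order_edges(n):
    """Downward edges of the intrinsic order graph I_n, built
    recursively from I_1 following Theorem 14."""
    if n == 1:
        return [(0, 1)]  # I_1: the single edge 0 -> 1
    prev = intrinsic_order_edges(n - 1)
    half = 2 ** (n - 1)
    # Isomorphic copy 2^(n-1) + I_(n-1): shift every node by 2^(n-1).
    copy = [(a + half, b + half) for a, b in prev]
    # Connect "second half of the first half" to "first half of the
    # second half": node u is joined downward to u + 2^(n-2).
    join = [(u, u + half // 2) for u in range(half // 2, half)]
    return prev + copy + join
```

For n = 3 this produces the eight edges of the digraph I_3 (the chain 0-1-2, the split to 3 and 4, both rejoining at 5, then 5-6-7), in accordance with Figure 2.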
On the contrary, each pair (u, v) of vertices not connected in the digraph of I_n, either by one edge or by a longer downward path, corresponds to binary n-tuples u and v that are incomparable by intrinsic order; that is, u ⋡ v and v ⋡ u. For instance, looking at the digraph I_3 (the third one from the left in Figure 2), we confirm that 3 ⋡ 4 and 4 ⋡ 3, as shown in Example 3 or in Example 9.
Also, looking at any of the digraphs in Figure 2, we can confirm Corollary 13, as well as the fact that 0 and 2^n - 1 are the maximum and minimum elements, respectively, in the poset I_n, as shown in Example 12. Recall that two binary n-tuples are complementary if and only if their decimal equivalents sum up to 2^n - 1 (see Definition 1-(iii)). Hence, one can observe that any two complementary n-tuples are placed at symmetric positions (with respect to the central point) in the intrinsic order graph I_n; for instance, this is the case for the pairs of complementary binary 4-tuples in the graph I_4 (see the rightmost graphs in Figures 2 and 3). The edgeless graph associated with a given graph is obtained by removing all its edges, keeping its (isolated) nodes at the same positions. In Figures 4 and 5, the edgeless intrinsic order graphs for I_5 and I_6, respectively, are depicted.
For further theoretical properties and practical applications of the intrinsic order and the intrinsic order graph, we refer the reader to [7, 9-13] and to the references therein.

Occurrence Probabilities of Bitstrings with the Same Weight
In this section, we present our results about the comparison, by intrinsic order, between the occurrence probabilities of two binary n-tuples having the same Hamming weight.
For the special case in which the bitstrings u, v ∈ {0, 1}^n have the same weight, the intrinsic order can be characterized as stated by the following two lemmas.
Lemma 15. Let n >= 1 and let u, v ∈ {0, 1}^n with w_H(u) = w_H(v). Then u ⪰ v if and only if the matrix M_u^v has neither (1 0) nor (0 1) columns, or for each (1 0) column in M_u^v there exists exactly one corresponding preceding (0 1) column.
Proof. Using Definition 8 and Theorem 4, we have that u ⪰ v if and only if matrix M_u^v satisfies IOC, that is, if and only if either matrix M_u^v has no (1 0) columns or for each (1 0) column in matrix M_u^v there exists at least one corresponding preceding (0 1) column (IOC). But, under the assumption that u and v have the same number of 1-bits, the numbers of (1 0) and (0 1) columns in M_u^v coincide, so the injective precedence map of Remark 7 must be a bijection. Hence, IOC is actually equivalent to saying that matrix M_u^v has neither (1 0) nor (0 1) columns (in this case, obviously, u = v), or for each (1 0) column in matrix M_u^v there exists exactly one corresponding preceding (0 1) column.
Lemma 16. Let n >= 1 and let u, v ∈ {0, 1}^n with w_H(u) = w_H(v). Then u ⪰ v if and only if for each 1-bit in u there exists exactly one corresponding 1-bit in v placed at the same or at a previous position.
Proof. Using Lemma 15, we have that u ⪰ v if and only if matrix M_u^v has neither (1 0) nor (0 1) columns, or each (1 0) column in M_u^v is preceded by exactly one corresponding (0 1) column. On one hand, each column (u_i, v_i) = (1, 1) corresponds to a 1-bit placed at the same position in both binary n-tuples u and v (u_i = v_i = 1). On the other hand, each column (u_i, v_i) = (1, 0) in M_u^v preceded by its corresponding column (u_j, v_j) = (0, 1) (j < i) corresponds to a 1-bit v_j = 1 in v placed at a previous position than the 1-bit u_i = 1 in u.

Now, we introduce the following notation for binary n-tuples.
Definition 17. Let n >= 1 and let u ∈ {0, 1}^n with Hamming weight w_H(u) = m. Then

(i) the vector of positions of 1s of u is the vector of the positions of its m 1-bits, displayed in increasing order from the left-most position to the right-most position, and it will be denoted by [i_1, ..., i_m];

(ii) the vector of positions of 0s of u is the vector of the positions of its (n - m) 0-bits, displayed in increasing order from the left-most position to the right-most position, and it will be denoted by [c_1, ..., c_{n-m}].

We also use the symbol "≡" to denote the conversion between any of these vector notations and the binary or decimal representation of the bitstrings.
Example 18. Let n = 7 and u = 43 ≡ (0, 1, 0, 1, 0, 1, 1). Then we have m = w_H(u) = 4 and n - m = 3; the vector of positions of 1s of u is [2, 4, 6, 7], and the vector of positions of 0s of u is [1, 3, 5].

The following theorem characterizes the intrinsic order between binary n-tuples of the same weight, using the vectors of positions of their 1-bits, introduced in Definition 17-(i).

Theorem 19. Let n >= 1 and let u, v ∈ {0, 1}^n with w_H(u) = w_H(v) = m. Let [i_1, ..., i_m] and [j_1, ..., j_m] be the vectors of positions of 1-bits of u and v, respectively. Then

u ⪰ v ⟺ i_k >= j_k for all k = 1, ..., m.

Proof. Using Lemma 16, we have that u ⪰ v if and only if for each 1-bit in u there exists exactly one corresponding 1-bit in v placed at the same or at a previous position. Now, according to Definition 17-(i), sweeping the 1-bits of u from left to right, the last assertion is clearly equivalent to saying that i_k >= j_k for all k = 1, ..., m, and the proof is concluded.
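Theorem 19 reduces the same-weight comparison to a componentwise test on the position vectors. A minimal sketch (helper names are ours):

```python
def ones_positions(u):
    """Vector of positions of 1s of u: 1-based positions of the
    1-bits, from left to right (Definition 17-(i))."""
    return [k + 1 for k, bit in enumerate(u) if bit == 1]

def same_weight_geq(u, v):
    """Theorem 19: u >= v intrinsically iff i_k >= j_k for all k,
    where [i_1..i_m], [j_1..j_m] are the 1-bit positions of u, v."""
    i, j = ones_positions(u), ones_positions(v)
    assert len(i) == len(j), "u and v must have the same Hamming weight"
    return all(i_k >= j_k for i_k, j_k in zip(i, j))
```

For instance, (0, 0, 1, 1) ⪰ (1, 1, 0, 0) since [3, 4] dominates [1, 2] componentwise, whereas (1, 0, 0, 1) and (0, 1, 1, 0) are incomparable: neither [1, 4] nor [2, 3] dominates the other.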
Example 20. Let n = 9 and let u = (0, 0, 0, 1, 1, 0, 1, 1, 1) ≡ [4, 5, 7, 8, 9].

Given a fixed binary n-tuple u ∈ {0, 1}^n with weight m, the following theorem provides us with the set and with the number of all the binary n-tuples v ∈ {0, 1}^n with weight m that are intrinsically less than or equal to u (i.e., v ⪯ u). That is, it provides the set and the number of all the binary n-tuples v with the same weight as u and whose occurrence probabilities are always (i.e., intrinsically) less than or equal to the occurrence probability of u.
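By the characterization of Theorem 19, this set can also be enumerated by brute force: it consists of the increasing position vectors [j_1, ..., j_m] with j_k <= i_k for all k. The sketch below (function name ours, and not the closed-form count of the theorem itself) does this for the 9-tuple of Example 20.

```python
from itertools import combinations

def dominated_same_weight(i_positions, n):
    """All 1-bit position vectors [j_1 < ... < j_m] with j_k <= i_k
    for all k, i.e., all same-weight v with v <= u intrinsically
    (Theorem 19)."""
    m = len(i_positions)
    return [j for j in combinations(range(1, n + 1), m)
            if all(j_k <= i_k for j_k, i_k in zip(j, i_positions))]

# Example 20's tuple u = (0,0,0,1,1,0,1,1,1), i.e., u ≡ [4, 5, 7, 8, 9]:
below = dominated_same_weight([4, 5, 7, 8, 9], 9)
```

Note that u itself (j = i) always belongs to the enumerated set, and that a tuple such as 0011 (positions [3, 4], the latest possible 1-bits for weight 2 and n = 4) dominates every weight-2 4-tuple.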