A Divide-and-Conquer Approach for Solving Fuzzy Max-Archimedean t-Norm Relational Equations

A system of fuzzy relational equations with the max-Archimedean t-norm composition was considered. The relevant literature indicated that this problem can be reduced to the problem of finding all the irredundant coverings of a binary matrix. A divide-and-conquer approach is proposed to solve this problem and, subsequently, to solve the original problem. This approach was used to analyze the binary matrix and then decompose the matrix into several submatrices such that the irredundant coverings of the original matrix could be constructed using the irredundant coverings of each of these submatrices. This step was performed recursively for each of these submatrices to obtain the irredundant coverings. Finally, once all the irredundant coverings of the original matrix were found, they were easily converted into the minimal solutions of the fuzzy relational equations. Experiments on binary matrices, with the number of irredundant coverings ranging from 24 to 9680, were also performed. The results indicated that, for test matrices that could initially be partitioned into more than one submatrix, this approach reduced the execution time by more than three orders of magnitude. For the other test matrices, this approach was still useful because certain submatrices could be partitioned into more than one submatrix.


Introduction
Solving a system of fuzzy relational equations is a subject of great scientific interest [1, 2]. This work considers a system of fuzzy relational equations of the form

max{T(x_1, a_11), T(x_2, a_21), . . ., T(x_m, a_m1)} = b_1,
max{T(x_1, a_12), T(x_2, a_22), . . ., T(x_m, a_m2)} = b_2,
. . .,
max{T(x_1, a_1n), T(x_2, a_2n), . . ., T(x_m, a_mn)} = b_n, (1)

where a_ij, b_j, x_i ∈ [0, 1] for each i, 1 ⩽ i ⩽ m, and for each j, 1 ⩽ j ⩽ n, and T represents a continuous Archimedean t-norm function. System (1) can be succinctly written in the following equivalent matrix form:

x ∘ A = b, (2)

where x = (x_i)_{1×m} is the matrix of unknowns, A = (a_ij)_{m×n} is the matrix of coefficients, b = (b_j)_{1×n} is the right-hand side of the system, and the symbol "∘" represents a max-Archimedean t-norm composition. Di Nola et al. [3] indicated that, given a continuous t-norm for T in system (1) and assuming the existence of solutions, the solution set of system (1) can be fully determined by the greatest solution and a finite number of minimal solutions. It is well known that the greatest solution can be easily computed, but finding all minimal solutions is difficult. Li and Fang [4] demonstrated that systems of max-t-norm equations can be divided into two categories, depending on the function T in the system. When T is continuous and Archimedean, the minimal solutions correspond one-to-one to the irredundant coverings of a set covering problem. When T is continuous and non-Archimedean, the minimal solutions correspond to a subset of constrained irredundant coverings of a set covering problem. Li and Fang [5] discussed the necessary and sufficient conditions for solving max-t-norm equations, and solving such equations is known to be NP-hard [5, 6]. Wu and Guu [7] demonstrated that the number of minimal solutions of such a system can grow exponentially as the numbers of variables and equations (i.e., m and n in system (1)) increase. Therefore, a system (1) that has hundreds or thousands of minimal solutions is not uncommon and can be a challenge to solve.
The concept of partitioning involves grouping related variables and equations and separating unrelated variables and equations. A variable x_i and the jth equation (i.e., max{T(x_1, a_1j), . . ., T(x_m, a_mj)} = b_j) in system (1) are related if the value of x_i can affect whether the jth equation holds. Furthermore, two variables (or two equations, or one variable and one equation) in system (1) are related if they are related to a common variable or equation. In a system (1) with numerous variables and equations (i.e., when m and n are high), it is likely that not all variables and equations are related to one another. Consequently, system (1) may be partitioned into several subsystems, each containing only related variables and equations. Thus, the original problem is decomposed into several subproblems. Because solving system (1) is an NP-hard problem [5, 6], solving several smaller subproblems is considerably faster than solving the original problem directly. Therefore, partitioning can expedite the process of solving system (1). Notably, even if all variables and equations of system (1) are related, partitioning can still be applied to subsets of the variables and equations. For example, many approaches involve reducing system (1) by fixing the value of a certain variable x_i (Rule 5 in Section 5 provides an example). Subsequently, the remaining variables and equations may be partitioned into more than one group such that each group contains only related variables and equations. The concept of partitioning is discussed further in Section 4.
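As an illustration of this grouping, the related variables and equations can be computed as the connected components of a bipartite graph. The following Python sketch is our own (the paper gives no code); it takes a_ij > 0 as the "related" condition between variable i and equation j and uses a union-find structure:

```python
from collections import defaultdict

def partition_system(A):
    """Group the variables (rows of A) and equations (columns of A) into
    related components.  Variable i and equation j are treated as related
    when A[i][j] > 0; components are the connected components of the
    resulting bipartite graph."""
    m, n = len(A), len(A[0])
    # Union-find over m variable nodes followed by n equation nodes.
    parent = list(range(m + n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    def union(u, v):
        parent[find(u)] = find(v)

    for i in range(m):
        for j in range(n):
            if A[i][j] > 0:
                union(i, m + j)

    # Collect (variable indices, equation indices) per component.
    groups = defaultdict(lambda: ([], []))
    for i in range(m):
        groups[find(i)][0].append(i)
    for j in range(n):
        groups[find(m + j)][1].append(j)
    return list(groups.values())
```

For example, a 3-variable, 3-equation system in which variable 0 appears only in equation 0, while variables 1 and 2 appear only in equations 1 and 2, splits into two independent subsystems.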
Based on the concept of partitioning, the first objective of this study was to develop a divide-and-conquer approach for finding all of the minimal solutions of system (1). In this approach, system (1) is first transformed into a binary binding matrix. We propose an algorithm, called the PA, in which the concept of partitioning is applied to decompose the binary binding matrix into several submatrices; the irredundant coverings of each submatrix are constructed recursively and, finally, are used to form the irredundant coverings of the binary binding matrix. Once all of the irredundant coverings of the binary binding matrix are found, they can be easily converted into the minimal solutions of system (1).
Numerous studies on solving system (1) have been conducted [7-9], but few have provided a performance study of the methods used for solving a system (1) in which hundreds or thousands of minimal solutions are involved. Wu and Guu [7] used their method to solve test problems for which the number of minimal solutions ranged from 6 to 100, and the results (Figure 1) indicated that all test problems can be solved in less than 300 ms by using an ordinary PC. However, test problems at such a scale do not fully reflect the difficulty of solving system (1). One problem that hinders the process of performing tests on a large scale is that generating test cases in which system (1) has a high number of minimal solutions is complex.
To bypass the need to generate complex system (1) test cases, large binary matrices can be generated instead, because any system (1) case can be reduced to a binary matrix, called a binary binding matrix, and the problem of solving system (1) can be reduced to the problem of finding all of the irredundant coverings of the binary binding matrix. According to Lin [8], both the reduction process and the conversion of an irredundant covering to a minimal solution can be conducted in polynomial time; therefore, finding all irredundant coverings is the core process executed in solving system (1). In other words, the time required to find all of the irredundant coverings of the binary binding matrix accounts for most of the time required to solve system (1). This is especially true for complex system (1) cases, because the time used for both the reduction of system (1) and the conversion of irredundant coverings to minimal solutions is insubstantial compared with the time used for finding all irredundant coverings. Therefore, the time required to find all irredundant coverings when adopting the proposed approach can be used to represent the performance of this approach. The second objective of this study was to develop an algorithm for generating binary matrices with various characteristics on a considerably large scale to enable the evaluation of the performance of various approaches in finding all irredundant coverings on test matrices that are difficult to solve.
To evaluate the impact of partitioning on the execution time, the proposed algorithm (PA) was compared with another approach (denoted as the non-PA), proposed by Markovskii [6], for finding all irredundant coverings. The only difference between the PA and the non-PA is that the PA contains an additional step incorporating the concept of partitioning. Several test matrices were generated for this performance study. For test matrices that can initially be partitioned into more than one submatrix, the PA reduces the execution time required by the non-PA by more than three orders of magnitude. Even for test matrices that cannot initially be partitioned into more than one submatrix, the PA still offers more favorable performance than the non-PA. This is attributed to the partitioning of the submatrices of the binary binding matrix.
The rest of this paper is organized as follows. In Section 2, the preliminaries of Archimedean t-norms are given. In Section 3, the procedure for constructing the binary binding matrix and minimal solutions is presented. In Section 4, the concept of partitioning is discussed, and in Section 5 an algorithm for finding all irredundant coverings is proposed. In Section 6, the procedure for generating test matrices is described and the performance results are presented. Finally, the conclusion is given in Section 7.

Preliminaries
This section describes the basic concepts of t-norms, Archimedean t-norms, and the greatest solution and minimal solutions of fuzzy relational equations. Please also refer to [3-5, 9, 10].
A t-norm is a binary operation T on [0, 1] that is commutative, associative, nondecreasing in each argument, and satisfies T(x, 1) = x. A well-known fact is that T(x, y) ⩽ min{x, y} for any t-norm T. The commonly seen "min" and "product" operations are both t-norm functions.
The greatest solution of system (1) with a t-norm for T can be computed explicitly using the "a → b" operator defined as

a → b = sup{x ∈ [0, 1] | T(a, x) ⩽ b}, (3)

where T(x, y) is a t-norm and a, b ∈ [0, 1]. If the solution set of system (1) is not empty, then the greatest solution x̂ = (x̂_i)_{i∈I} can be calculated as follows:

x̂_i = min_{j∈J} (a_ij → b_j). (4)

Mostert and Shields [11] subdivided continuous t-norms into three categories, namely, the "min" operation, Archimedean t-norms, and ordinal sums of a family of properly defined Archimedean t-norms. An Archimedean t-norm T is a t-norm with T(x, x) < x for all 0 < x < 1 [3]. Notably, the well-known "min" operation is not an Archimedean t-norm. Wu and Guu [7] collected six Archimedean t-norm functions, among them (1) the algebraic product, T(x, y) = xy, and (2) the Łukasiewicz t-norm, T(x, y) = max{0, x + y − 1}. A simple formula for calculating the greatest solution of system (1) that uses any of these six Archimedean t-norm functions for T is also available in Wu and Guu [7].
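As an illustration (not the paper's formula), the following sketch computes the candidate greatest solution for one member of this family, the algebraic product, whose residuum a → b evaluates to 1 when a ⩽ b and to b/a otherwise; all function names are ours:

```python
def residuum_product(a, b):
    # a -> b = sup{x in [0, 1] | T(a, x) <= b}; for the algebraic product
    # T(a, x) = a * x, this is 1 when a <= b and b / a otherwise.
    return 1.0 if a <= b else b / a

def greatest_solution(A, b):
    # x_hat_i = min over j of (a_ij -> b_j); rows of A index variables,
    # columns index equations, as in system (1).
    m, n = len(A), len(A[0])
    return [min(residuum_product(A[i][j], b[j]) for j in range(n))
            for i in range(m)]

def solves(A, b, x, tol=1e-9):
    # Check that max_i T(x_i, a_ij) = b_j holds for every equation j.
    m, n = len(A), len(A[0])
    return all(abs(max(x[i] * A[i][j] for i in range(m)) - b[j]) < tol
               for j in range(n))
```

If the system is solvable at all, the vector returned by `greatest_solution` solves it; otherwise `solves` reports failure, which is exactly the standard solvability test.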

Reduction of Fuzzy Relational Equations to Covering Problem
This section describes the procedure to reduce the problem of finding all minimal solutions of system (1) to the problem of finding all irredundant coverings of the binary binding matrix of system (1). The description follows the results of Lin [8]. Subsequently, in Section 4, we apply the concept of partitioning to the binary binding matrix to expedite finding all irredundant coverings. Let C = (c_ij)_{|I|×|J|} denote a matrix with i ∈ I, j ∈ J, and c_ij ∈ {0, 1, 2}, where I and J denote the index sets for the rows and the columns of C, respectively. Notably, both I and J are sets of positive integers. Let J_i(C) = {j ∈ J | c_ij ≠ 0} denote an index set for each i ∈ I. A subset P ⊆ I is a covering of C if ∪_{i∈P} J_i(C) = J, and a covering is irredundant if no proper subset of it is also a covering. Let Φ(C) denote the set of all irredundant coverings of C.
Example 1. Consider the matrix C below, where the index sets I and J are indicated on the left and on the top of the matrix, respectively. The set of all irredundant coverings of C is Φ(C) = {{1, 2}, {1, 3}, {2, 3}}. The binding matrix of system (1) is denoted by M = (m_ij)_{m×n} and is given by

m_ij = 1 if T(x̂_i, a_ij) = b_j and b_j ≠ 0; m_ij = 2 if b_j = 0; and m_ij = 0 otherwise, (6)

for i ∈ I and j ∈ J.
According to Expression (6), if b_j = 0, then m_ij = 2 for each i ∈ I; that is, all elements in the jth column of M are 2. Such columns are referred to as all-2-columns in a binding matrix. Let M* denote a binding matrix M with all of its all-2-columns removed. It is obvious that if b is not a zero vector, then M* is a binary matrix containing one or more columns, and the set of irredundant coverings of M equals that of M*. The matrix M* is referred to as the binary binding matrix of system (1).
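The defining property of an irredundant covering can be checked directly by brute force. The matrix of Example 1 is not reproduced legibly in this copy, so the sketch below uses a hypothetical 3-row binary matrix (0-based row indices) whose covering set matches Example 1's pattern Φ = {{0, 1}, {0, 2}, {1, 2}}:

```python
from itertools import combinations

def coverings_bruteforce(M):
    """Enumerate all irredundant coverings of a binary matrix M: subsets
    of row indices that hit every column, with no removable row."""
    m, n = len(M), len(M[0])
    # For each column j, the set of rows that can cover it.
    cols = [{i for i in range(m) if M[i][j]} for j in range(n)]

    def covers(S):
        return all(S & c for c in cols)

    result = set()
    for r in range(1, m + 1):
        for S in combinations(range(m), r):
            S = set(S)
            # Irredundant: S covers, and no row of S can be dropped.
            if covers(S) and all(not covers(S - {i}) for i in S):
                result.add(frozenset(S))
    return result
```

This exhaustive check is exponential in the number of rows, which is precisely why the divide-and-conquer approach of the following sections is needed.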

The binding matrix M is the same as the matrix C in Example 1, while the binary binding matrix M* is formed by the first three columns of M.
The mapping vector of an irredundant covering P ∈ Φ(M*) is denoted by x^P = (x^P_i)_{i∈I} and is given by

x^P_i = x̂_i if i ∈ P, and x^P_i = 0 otherwise. (7)

Let S(A, b) denote the set of all minimal solutions of system (1). Lin [8] proved that if b is not a zero vector, then S(A, b) equals the set of mapping vectors of all irredundant coverings of M*; that is, S(A, b) = {x^P | P ∈ Φ(M*)}. If b is a zero vector, namely, b_j = 0 for every j ∈ J, then it is obvious that S(A, b) = {0⃗}. Therefore, S(A, b) can be determined with the following procedure.
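The conversion from a covering to a minimal solution in Expression (7) is a one-line mapping; a sketch (with the greatest solution x̂ given as a list):

```python
def mapping_vector(P, x_hat):
    # Expression (7): keep x_hat[i] for rows i in the covering P,
    # and set every other component to 0.
    return [x_hat[i] if i in P else 0.0 for i in range(len(x_hat))]
```

For instance, the covering {0, 2} over a 3-variable system keeps the first and third components of x̂ and zeroes the second.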
(5) Let M* be obtained from M with all of its all-2-columns removed.

Example 3. Consider the fuzzy relational equations in Example 2. Since b is not a zero vector, Step (1) is skipped.
Step (4) obtains the binding matrix M, which is the same as the matrix C in Example 1.
Step (5) obtains the binary binding matrix M*, which is the same as M but without its fourth column.
It is obvious that Steps (1)-(5) can be done in polynomial time. Li and Fang [4] and Lin [8] proved the bijective (i.e., both one-to-one and onto) mapping between a minimal solution and an irredundant covering, and thus Step (7) can be done in O(|S(A, b)|) time, where |S(A, b)| is the number of minimal solutions of (2). Step (6), finding all irredundant coverings, is the most time-consuming step. Therefore, the task of finding all minimal solutions of (2) is reduced to the task of finding all irredundant coverings of a binary matrix, which is the focus of the next two sections.

Partitioning
This section describes the concept of partitioning a binary matrix into one or more submatrices such that the irredundant coverings of the binary matrix can be derived from the irredundant coverings of these submatrices. First, to facilitate the discussion, let Ψ(C) denote the partitioning of a binary matrix C, that is, the collection of submatrices of C obtained by grouping the rows and columns that are related, as defined below.
In Section 1, we described the concept of partitioning, which involves grouping the related variables and equations of system (1) to reduce the problem size. Because the variables and equations of system (1), respectively, correspond to the rows and columns of its binding matrix, we applied the same concept to the binary binding matrix by grouping the rows and columns that are related. Given a binary matrix C = (c_ij), row i and column j are related if c_ij ≠ 0. Furthermore, two rows (or two columns, or one row and one column) are related if they are related to a common row or column.
The advantage of using partitioning is twofold. First, partitioning decomposes a matrix into several submatrices with fewer rows and columns, whose irredundant coverings can be found more efficiently than those of the original matrix. Second, if a covering of a submatrix is redundant, then it is immediately discarded and is not combined with the coverings of the other submatrices to form new coverings. This substantially reduces the number of redundant coverings generated. Furthermore, once the set of irredundant coverings of each submatrix is found, Theorem 8 can be applied to obtain the set of irredundant coverings of the original matrix without generating any redundant coverings of the original matrix. Because an irredundant covering of the binary binding matrix of system (1) corresponds to a minimal solution, only minimal solutions are generated. The disadvantage of using partitioning is that if the partitioning of the binary binding matrix contains only one component (i.e., the matrix itself), then no benefit is obtained from partitioning, and the time used to perform the partitioning is wasted. The impact of partitioning on performance is discussed further in Section 6.
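The combination step described here can be sketched as follows. Because the submatrices of a partitioning share no rows, taking one irredundant covering per submatrix and forming the union yields an irredundant covering of the whole matrix; this is a sketch of the construction behind Theorem 8, not the theorem's exact statement:

```python
from itertools import product

def combine_coverings(covering_sets):
    # covering_sets[k] is the set of irredundant coverings of the kth
    # submatrix.  Because the submatrices of a partitioning share no
    # rows, the union of one irredundant covering per submatrix is an
    # irredundant covering of the whole matrix, and no redundant
    # covering is ever produced by this combination.
    return {frozenset().union(*choice) for choice in product(*covering_sets)}
```

Note that the number of combined coverings is the product of the component counts, which is why finding the component coverings first is so much cheaper than working on the whole matrix.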

The Divide-and-Conquer Algorithm
We propose a divide-and-conquer algorithm for constructing the set of irredundant coverings of a binary matrix C. The algorithm follows a set of rules specified by Lin [8] either to obtain the irredundant coverings directly or to decompose a binary matrix into one or more submatrices. Rules 1-3 consider trivial cases of C whose irredundant coverings can be derived directly and are adopted as the termination conditions in the proposed algorithm. If none of these rules is applicable, then the algorithm calculates the partitioning Ψ(C). Subsequently, if |Ψ(C)| > 1, then the irredundant coverings of C can be derived from the irredundant coverings of the submatrices in Ψ(C), according to Theorem 8. However, if |Ψ(C)| = 1, then Rule 5 [6] is applied to decompose C into three submatrices.

Rule 5 (forced binding). Let 𝑀
Algorithm 1 shows the proposed algorithm (denoted as the PA). Notably, this algorithm does not find any covering that is redundant. If a matrix can be partitioned into more than one submatrix, then lines 10 and 11 of the algorithm are applied to expedite the execution of the algorithm by using the concept of partitioning. When lines 10 and 11 are excluded, the resulting algorithm, denoted as the non-PA, is identical to the algorithm proposed by Markovskii [6] or Lin [8]. The PA is demonstrated in the following examples.
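Algorithm 1 itself is not reproduced here, so the following is only a simplified sketch of the overall recursion. It keeps the partitioning step (lines 10 and 11) faithful in spirit but, because Rules 1-3 and the precise statement of Rule 5 are not legible in this copy, it substitutes a trivial termination case, standard branching on a single column, and an explicit irredundancy filter; all names are ours:

```python
from itertools import product

def irredundant_coverings(M):
    """Simplified divide-and-conquer enumeration of the irredundant
    coverings of a binary matrix M (a list of 0/1 rows)."""
    return _solve(M, frozenset(range(len(M))), frozenset(range(len(M[0]))))

def _covers(M, S, cols):
    return all(any(M[i][j] for i in S) for j in cols)

def _irredundant(M, S, cols):
    return _covers(M, S, cols) and all(not _covers(M, S - {i}, cols) for i in S)

def _partition(M, rows, cols):
    # Connected components of the "related" graph on rows and columns.
    comps, todo = [], set(cols)
    while todo:
        cc, cr = {todo.pop()}, set()
        frontier = set(cc)
        while frontier:
            new_rows = {i for i in rows - cr if any(M[i][j] for j in frontier)}
            cr |= new_rows
            frontier = {j for j in cols - cc if any(M[i][j] for i in new_rows)}
            cc |= frontier
        todo -= cc
        comps.append((frozenset(cr), frozenset(cc)))
    return comps

def _solve(M, rows, cols):
    if not cols:                      # termination: nothing left to cover
        return {frozenset()}
    comps = _partition(M, rows, cols)
    if len(comps) > 1:
        # Combine one irredundant covering per submatrix (Theorem 8's idea).
        parts = [_solve(M, r, c) for r, c in comps]
        return {frozenset().union(*choice) for choice in product(*parts)}
    rows, cols = comps[0]
    j = min(cols)                     # branch on one column (stand-in for Rule 5)
    out = set()
    for i in (i for i in rows if M[i][j]):
        covered = frozenset(jj for jj in cols if M[i][jj])
        for S in _solve(M, rows - {i}, cols - covered):
            out.add(S | {i})
    # Discard candidates that turn out to be redundant.
    return {S for S in out if _irredundant(M, S, cols)}
```

On a block-diagonal matrix the partitioning branch fires immediately, so each block is solved independently, which is the source of the speedup measured in Section 6.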

Performance Study
This performance study focused on evaluating the impact of applying the concept of partitioning. This was achieved by comparing the performance of the PA with that of the non-PA, because the two algorithms differ only in whether the concept of partitioning is used. As discussed in Section 3, finding all of the irredundant coverings of the binary binding matrix M* is the most time-consuming step in solving system (1). Therefore, we measured only the time required to derive Φ(M*) from M* to evaluate the speed at which system (1) can be solved using a given approach.
6.1. Test Matrices. We used binary binding matrices rather than system (1) for this performance study. This offered three advantages. First, a binary binding matrix is simply a binary matrix and is independent of any specific Archimedean t-norm function. Second, using binary binding matrices prevents the imprecision caused by floating-point truncation when solving system (1). Third, generating a large binary matrix is easier than generating a system (1) with a high number of equations and variables.
Algorithm 2 shows the procedure for generating the test matrices. This procedure involves four parameters, m, n, k, and d, and generates an m × n binary matrix C with density d such that Ψ(C) contains k submatrices of C with approximately the same numbers of rows and columns. The density of C is defined as the number of nonzero elements in C divided by m × n. In this procedure, an irredundant covering is first injected into the matrix to avoid Φ(C) = ∅ (lines 5-9). Then, the elements in the regions of C that correspond to these k submatrices are repeatedly and randomly chosen to assume the value of 1 until the number of elements with a value of 1 reaches ⌈d · m · n⌉ (lines 11-15).
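Algorithm 2 is likewise not reproduced here; the following sketch implements the generation scheme as described (block structure, covering injection, then random filling up to density d), with all naming our own:

```python
import math
import random

def generate_test_matrix(m, n, k, d, rng=None):
    """Build an m x n binary matrix whose partitioning has k roughly
    equal diagonal blocks: inject one nonzero per column inside its
    block (so the covering set is nonempty), then fill randomly inside
    the blocks until the density reaches d (requires d <= 1/k)."""
    rng = rng or random.Random(0)
    A = [[0] * n for _ in range(m)]
    row_blocks = [range(b * m // k, (b + 1) * m // k) for b in range(k)]
    col_blocks = [range(b * n // k, (b + 1) * n // k) for b in range(k)]
    # Injection step: every column gets one nonzero inside its block.
    for rb, cb in zip(row_blocks, col_blocks):
        for j in cb:
            A[rng.choice(list(rb))][j] = 1
    # Filling step: random in-block cells until ceil(d*m*n) ones exist.
    target = math.ceil(d * m * n)
    ones = sum(map(sum, A))
    cells = [(i, j) for rb, cb in zip(row_blocks, col_blocks)
             for i in rb for j in cb if A[i][j] == 0]
    rng.shuffle(cells)
    while ones < target and cells:
        i, j = cells.pop()
        A[i][j] = 1
        ones += 1
    return A
```

Because no cell outside the k diagonal blocks is ever set, the generated matrix is guaranteed to partition into at least k components.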
In this study, test matrices were generated with m = n = 24, k ranging from 1 to 8, and d ranging from 2/n to 1/k with an increment of 1/n. Notably, if d = 1/n, then the matrix had at most one irredundant covering. Therefore, only test matrices with a density of no less than 2/n were generated in this study. In addition, when the partitioning of a matrix contains k submatrices of equal size, the density of the matrix cannot be greater than 1/k and, thus, the greatest density was set to 1/k.
Because this procedure is random, it does not always generate a binary matrix with numerous irredundant coverings. This is especially true when m and n are low or d is near zero or one. Therefore, for each setting of k and d, this procedure was repeated five times to generate five test matrices. For example, when k = 1, we varied d from 2/24 to 1 with an increment of 1/24 and consequently generated 115 (= 23 × 5) test matrices. In general, given a fixed k value, (24/k − 1) × 5 test matrices were generated in this study. That is, 55, 35, 25, 15, and 10 test matrices were generated for k = 2, 3, 4, 6, and 8, respectively. For clarity, Figure 2 shows only the number of irredundant coverings of the test matrices with the lowest or the highest number of irredundant coverings among the respective five test matrices with the same k and d values. Among all the test matrices generated, the matrix with the highest number of irredundant coverings (|Φ(C)| = 9680) was generated using d = 6/24 and k = 3.
To understand how d affects the number of irredundant coverings, we performed a preliminary check on the number of irredundant coverings for the test matrices with k = 1. Specifically, we first grouped the generated matrices with k = 1 by their densities. Then, for every two groups whose densities differed by 1/24, a Mann-Whitney U test was performed to compare the difference in the number of irredundant coverings between the two groups. The results show that when both groups' densities are less than or equal to 5/24, the number of irredundant coverings is smaller in the group with the smaller density; when both groups' densities are greater than or equal to 8/24, the number of irredundant coverings is larger in the group with the smaller density; and when both groups' densities are between 5/24 and 8/24, there is no significant difference between the two groups in the number of irredundant coverings. Thus, although we did not include test matrices with a density of less than 2/24 in this study, the generated test matrices covered a wide range of densities and provide a meaningful analysis.

The experiments were performed on a PC with 1 gigabyte of main memory, running Windows XP. To evaluate the impact of using partitioning, each test matrix was subjected to two tests, a test in which the PA was used and a test in which the non-PA was used. The results are shown in Figures 3-5. First, consider the group of test matrices with k = 1. Because k = 1, the 115 test matrices in this group were generated without intentionally making them capable of being partitioned into more than one submatrix. Prior to comparing the execution times of the PA and non-PA on this group of test matrices, we used the Kolmogorov-Smirnov test to check the normality of the execution times, and the results show that the assumption of normality failed for both the PA and non-PA. Because the assumption of normality of the distribution was questionable, the Wilcoxon signed-rank test was used as a substitute for a paired t-test to compare the difference between the
execution times of the PA and non-PA. The results were in the expected direction and were significant (z = −7.741 and p < 0.05). Thus, it is statistically significant to say that the execution time of the PA is smaller than that of the non-PA for this group of test matrices. This may be due to the fact that, although the test matrices in this group could not be partitioned into more than one submatrix (i.e., |Ψ(C)| = 1), several of their submatrices could be, and, therefore, the concept of partitioning was still helpful. Figure 3 shows that the execution times of the PA and non-PA exhibited a similar pattern: both were linearly proportional to the square of the number of irredundant coverings of the test matrix. The correlation coefficient between the execution time and the square of the number of irredundant coverings was 0.99502 for the PA and 0.990939 for the non-PA.
For test matrices in which k > 1, Figures 4 and 5 reveal that the PA outperformed the non-PA by more than three orders of magnitude. Figure 4 shows that the execution time of the non-PA was still linearly proportional to the square of the number of irredundant coverings and was not affected by the value of k. Figure 5 shows that, for any test matrix in which k > 1, the PA required less than 120 ms to determine Φ(C). If a test matrix can be partitioned into more than one submatrix (i.e., |Ψ(C)| > 1), then the PA first identifies the irredundant coverings of these submatrices. Because the number of irredundant coverings of the test matrix is the product of the numbers of irredundant coverings of these submatrices, finding the irredundant coverings of the submatrices is much faster than finding those of the test matrix directly. Consequently, the time required by the PA to find all the irredundant coverings can be reduced substantially. We also used the Wilcoxon signed-rank test to compare the execution times of the PA and non-PA on each group of test matrices with the same number of partitions. The results were all in the expected direction and were significant (z = −6.452, −5.16, −4.372, −3.408, and −2.803 for the groups of test matrices with 2, 3, 4, 6, and 8 partitions, respectively, and all p < 0.05).

Conclusion
In a system of fuzzy relational equations with numerous variables and equations, several variables and equations are likely to be unrelated; when such a situation occurs, partitioning can substantially reduce the time required to determine all the minimal solutions. Therefore, considering partitioning when solving fuzzy relational equations is crucial.
Partitioning is not useful when all of the rows and columns of a binary binding matrix C are related and when, after applying forced binding (Rule 5 in Section 5) to derive two submatrices, all of the rows and columns of both submatrices are still related. Intuitively, this situation occurs

Figure 1 :
Figure 1: Number of minimal solutions versus execution time (in ms) reported in [7].