Decomposition and Reduction: General Problem-Solving Paradigms

In this article we discuss the utilization of decomposition and reduction for the development of algorithms. We assume that a given problem instance can somehow be broken up into two smaller instances that can be solved separately. As a special case of decomposition we define a reduction, in which one of the new instances is trivial.


INTRODUCTION
The notion of decomposition is known to have several different interpretations:
- The problem itself represents a decomposition of an entity such as a graph, network, set, or FSM into two or more entities of the same kind. Example: the problem of partitioning a hypergraph into a specified number of disconnected parts by removing hyperedges.
- Decomposition is a solution technique based on the transformation of the (original) problem to be solved into several subproblems of different kinds that are then solved sequentially. Example: the problem of VLSI layout is usually decomposed into partitioning, floorplanning, and routing.
- Decomposition is a solution technique based on the transformation of an instance of a particular problem into several instances of the same kind that are then solved independently. Example: the dynamic programming approach to the minimum Steiner tree problem [9].

Throughout this article we will deal with the last of the above-mentioned interpretations, particularly with respect to the following class of problems: a finite set and a system of constraints are given, and a minimal subset satisfying the constraints is to be found. (This work has been supported by the Czech Technical University under grant No. 8095.)
Many theoretical problems can be formulated in this manner, e.g. the shortest path problem, the Steiner tree problem, the covering problem, the knapsack problem, and others. These problems have a large number of applications in various areas, particularly in VLSI design.
Let us take the shortest path problem as an example: given a weighted graph G and two nodes vs and vt, construct a shortest path connecting vs and vt. This problem can be formulated in a different manner: given a set of weighted edges, construct a subset of edges so that a connected graph containing the nodes vs and vt is created and the total weight of the edges in the subset is minimized (Fig. 1).
Throughout this article this class of problems will be denoted Set Minimization Problems. The common feature of all set minimization problems is their discrete and finite nature. The finiteness of these problems guarantees that an optimal solution can be found by a brute-force method: consecutively generate all subsets, check whether the constraints are satisfied, and always store the best solution found so far. Such methods, unfortunately, can succeed in a reasonable time for small instances only. This is why in this article we will focus our attention on the design and evaluation of efficient algorithms for the solution of large set minimization problems based on decomposition and reduction of a problem instance. Informally, to decompose a problem instance means to change it into two (or more) instances that can be solved independently. The solution of the original instance is then obtained from the solutions of the new instances. In cases in which one of the resulting instances is trivial, the decomposition is called a reduction.
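The brute-force method just described can be sketched as follows. The encoding of an instance (a feasibility predicate standing in for the constraint system T) is an illustrative choice, not taken from the article:

```python
from itertools import combinations

def brute_force_minimize(elements, is_feasible, weight):
    """Enumerate all subsets, keep the cheapest one satisfying the constraints.

    is_feasible(subset) stands for the system of constraints T;
    weight maps each element to its positive cost.
    Checks all 2^n subsets, so it succeeds in reasonable time only
    for small instances, as noted above.
    """
    best, best_cost = None, float("inf")
    for r in range(len(elements) + 1):
        for subset in combinations(elements, r):
            cost = sum(weight[e] for e in subset)
            if cost < best_cost and is_feasible(set(subset)):
                best, best_cost = set(subset), cost
    return best, best_cost

# A tiny covering-style instance: each constraint must share a member with S.
constraints = [{"a1", "a2"}, {"a2", "a3"}]
weights = {"a1": 1, "a2": 2, "a3": 1}
feasible = lambda s: all(c & s for c in constraints)
result = brute_force_minimize(["a1", "a2", "a3"], feasible, weights)
print(result)
```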
Let us demonstrate these notions with the above-mentioned shortest path problem (see Fig. 1). On the assumption that the edge e is known to be a part of a shortest path, the problem can be decomposed into finding a shortest path from vs to this edge, and finding a shortest path from vt to this edge. Similarly, on the assumption that the edge e2 is a part of a shortest path, the original problem can be reduced to the problem of finding a shortest path from vs to vr.
Apparently, decomposition makes the size of the instance to be solved smaller and in this way it may reduce computational requirements of solution algorithms. Moreover, recursive decomposition may lead directly to a minimal solution. Generally, however, there is no guarantee that such an approach will be successful. This is why the process of (exact) decomposition is often combined with exhaustive or heuristic search techniques.
The notions of decomposition and reduction seem quite simple and straightforward. The main problem, however, is to find a way to decompose and/or reduce an instance so that the solution of the generated subinstances quickly leads to a solution of the original instance. The theory of dynamic programming introduces the notion of a decomposable problem [9] and conditions that must be fulfilled so that a certain type of decomposition can be used. In this article we extend this approach by introducing a hierarchy of types of decomposition and reduction, and we propose a general schema of an algorithm utilizing the proposed notions.
The rest of this article is organized as follows: In the first part we give the formal framework. Several types of decomposition and reduction of an instance of a set minimization problem are then introduced, and a general algorithm utilizing the hierarchy is proposed. The second part of this article provides a case study demonstrating the general notions with the covering problem. First, the formal definition of the covering problem is given and its applications in the field of VLSI design are mentioned. Several rules of decomposition and reduction are then derived, and the general algorithm proposed in the first part of this article is adapted for the covering problem. A complexity analysis and an empirical testing of this algorithm are described. The last part of this article is devoted to conclusions.

SET MINIMIZATION PROBLEMS
In this section we will formally define the class of problems that will be considered further. We will be dealing with a special type of optimization problem denoted set minimization problems.

Definition 2.1 -- Set Minimization Problem [9]
A Set Minimization Problem is a minimization problem corresponding to the following description:
Instance: a triple Π = (A, T, W), where A = {a1, a2, ..., an} is a finite set, T is a system of constraints, and W is a mapping W: A → R+. A solution is a subset S ⊆ A satisfying the constraints in T; a minimal solution is a solution whose total weight W(S) = Σ_{a ∈ S} W(a) is minimum.

As a result of a pure decomposition we obtain two instances Π' and Π'' that can be solved separately. At least one minimal solution of the original instance can be obtained from minimal solutions of the new instances. In addition, infeasibility of one of the new instances implies the infeasibility of the original instance. However, there is no guarantee that all minimal solutions of the original instance can be derived from minimal solutions of the new instances. This is why we introduce a stronger condition preserving all minimal solutions.

DECOMPOSITION AND REDUCTION
In this section we will introduce a hierarchy of types of decomposition and reduction. Apparently, a particular instance of the set minimization problem can have more than one minimal solution. Thus, various types of decomposition can be classified by their ability to keep none, at least one, or all minimal solutions.
The following definition corresponds to the conditions of the decomposable problem introduced in [9]:

Definition -- Approximate Decomposition
Let Π = (A, T, W) be an instance of the set minimization problem. A pair of instances (Π', Π''), with Π' = (A', T', W') and Π'' = (A'', T'', W''), is an Approximate Decomposition of Π iff, whenever S' and S'' are solutions of Π' and Π'' respectively, S = S' ∪ S'' is a solution of Π.
As a result of an approximate decomposition we obtain two instances Π' and Π'' that can be solved separately. If minimal solutions of the new instances are found, a solution of the original instance can be constructed as their union. However, there is no guarantee that this solution will be optimal. Moreover, infeasibility of one of the new instances does not necessarily imply the infeasibility of the original instance. This is why we introduce a stronger condition preserving at least one minimal solution. A Proper Decomposition additionally requires that for each minimal solution S of Π, minimal solutions S' and S'' of Π' and Π'' respectively exist such that S = S' ∪ S''.
As a result of a proper decomposition we obtain two instances Π' and Π'' that can be solved separately. All minimal solutions of the original instance can be obtained from minimal solutions of the new instances. Naturally, all these types of decomposition can be defined for more than two new instances. Doing so, however, brings nothing substantially new, because a decomposition into more than two instances can be obtained by repeated decomposition into two instances.
The size of an instance (see Definition 2.3) is a measure of the time required to solve the instance by the brute-force method. The total time to solve the new instances is proportional to 2^card(A') + 2^card(A''), while the original instance is solved in time proportional to 2^card(A). Assuming that card(A) = card(A') + card(A''), the "best" decomposition is such that card(A') = card(A'') = card(A)/2.
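The savings can be made concrete with a small calculation; the figures below merely illustrate the exponential growth stated above:

```python
# Brute force checks about 2^card(A) subsets. An ideal balanced decomposition
# replaces one instance of size n by two independent instances of size n/2.
n = 40
whole = 2 ** n                 # subsets checked without decomposition
halves = 2 * 2 ** (n // 2)     # subsets checked for the two halves
speedup = whole // halves
print(whole, halves, speedup)  # the speedup factor is 2**(n//2 - 1)
```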
Generally, finding a "true" decomposition consisting of two non-trivial instances is a complicated task. Quite often, however, a trivial part of an instance can be found and extracted. As a result of such a "special" decomposition we obtain, in fact, just one instance of smaller size. This is why it is called a reduction. As in the case of decomposition, we will define a hierarchy of different types of reduction. Let Π = (A, T, W) be an instance of the set minimization problem and let (Π', Π'') be a proper decomposition of Π such that Π'' = (A'', T'', W'') is a trivial instance the minimal solution of which is a set R. Then the pair (Π', R) is called a Proper Reduction of Π.
Naturally, if a decomposition of an instance has been found, the resulting instances can be recursively decomposed again until trivial and/or infeasible instances are encountered. Results of such recursive decompositions are then to be treated according to the type of decompositions used in the recursion.
Thus, the above-mentioned hierarchy of decompositions and reductions can be used for the classification of algorithms:
1. Algorithms based on the utilization of all three types of decomposition and/or reduction. Generally, if a solution is found by such an algorithm, it is a correct solution of the original instance. However, this class of algorithms does not guarantee that a minimal solution will be found. Moreover, if a solution is not found by such an algorithm, no conclusion can be made about the feasibility of the original instance.
2. Algorithms based on the utilization of pure and proper decomposition and/or reduction. This class of algorithms guarantees finding at least one minimal solution of the instance to be solved. If an infeasible (sub)instance is encountered, the original instance is infeasible.
3. Algorithms based on the utilization of only proper decomposition and/or proper reduction. This class of algorithms guarantees finding all existing minimal solutions of the instance to be solved. If an infeasible (sub)instance is encountered, the original instance is infeasible.

UTILIZATION OF DECOMPOSITION AND REDUCTION
In this section we will present a strategy of application of the above-mentioned notions. First, we will present a skeleton of an algorithm and a discussion of various possibilities of search strategies. Thereafter, we will discuss the evaluation of effectiveness of such an algorithm.

The Algorithm
In the previous section we have introduced a hierarchy of types of decomposition and reduction according to their ability to keep none, at least one, or all minimal solutions. Based on this hierarchy we have introduced a hierarchy of algorithms. However, only algorithms of the first type represent the most general approach applicable in all cases. The reason is simple: the recursive utilization of pure and proper decomposition need not lead to a set of trivial subinstances. In other words, a non-trivial subinstance can be encountered that is not further decomposable using only proper and/or pure decomposition and reduction rules. To solve such a subinstance, an approximate decomposition or reduction must be used. This is why the proposed algorithm utilizes all three types of decomposition. The question now is in which order the particular types of decomposition should be applied. The following strategy seems appropriate: the possibilities of proper decomposition and/or reduction are to be checked first. If the instance has not been solved, we can admit that some minimal solutions will be lost using pure decomposition and/or reduction. If the instance still has not been solved, approximate decomposition and/or reduction can be applied.
If at least one approximate decomposition and/or reduction has been applied, there is no guarantee that a minimal solution will be found. Moreover, an infeasible subinstance can be encountered even though the original instance is feasible. This is why some techniques should be used to increase the probability of finding a "good" solution. We propose to use a combination of two well-known techniques: branch and bound and heuristic search [12].
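The strategy just described, reduction rules first, then branching with bounding, can be sketched for unate covering instances (constraints as sets of variables). The concrete rules are illustrative stand-ins, not the authors' implementation: the essential-variable rule plays the role of proper reduction, and branching on a variable plays the role of approximate reduction.

```python
def solve(constraints, weight):
    """Branch-and-bound cover search following the order described above."""
    best = [None, float("inf")]          # best cover found so far, its cost

    def search(cons, chosen):
        cons = [c for c in cons if not (c & chosen)]        # drop covered
        forced = {next(iter(c)) for c in cons if len(c) == 1}
        if forced:                       # "proper reduction": forced variables
            search(cons, chosen | forced)
            return
        cost = sum(weight[v] for v in chosen)
        if cost >= best[1]:              # bounding step
            return
        if any(not c for c in cons):     # infeasible subinstance: backtrack
            return
        if not cons:                     # trivial instance: all covered
            best[0], best[1] = set(chosen), cost
            return
        # "approximate reduction": branch on a cheap variable of one constraint
        v = min(cons[0], key=weight.__getitem__)
        search(cons, chosen | {v})                   # assume v is in the cover
        search([c - {v} for c in cons], chosen)      # assume v is not

    search(list(constraints), frozenset())
    return best[0], best[1]

cons = [{"a1", "a2"}, {"a2", "a3", "a4"}, {"a3", "a4"}]
w = {"a1": 1, "a2": 2, "a3": 2, "a4": 1}
result = solve(cons, w)
print(result)
```

The analysis step of Fig. 2 corresponds here to the fixed depth-first backtracking order; a heuristic search would instead prune or reorder the two branches.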
A schema of an algorithm based on this idea is shown in Fig. 2. Each decomposition can end in one of three ways:
- at least one of the resulting instances is non-trivial and all the remaining ones are trivial and feasible,
- at least one of the resulting instances is trivial and infeasible,
- all the resulting instances are trivial and feasible.
In the first case the process of decomposition is repeated. Otherwise an analysis is to be performed and according to its result either the process is interrupted (and the best solution found so far claimed to be the result), or the status before the last approximate decomposition is restored and another possibility of decomposition is attempted.
As a result, the proposed algorithm is able to perform a systematic exhaustive search, i.e. to examine all relevant possibilities of approximate decomposition. On the other hand, the algorithm may perform a heuristic search, i.e. to examine only several possibilities of approximate decomposition. In addition, bounding (not shown in Fig. 2) can be added in both cases. The analysis step of the algorithm controls in fact the degree of branching, or, in other words, the number of returns.
Possibilities of proper and pure decompositions seem to be very promising, at least at first sight, because they keep all or at least one minimal solution. On the other hand, to evaluate whether such a decomposition can be applied, takes some time. Moreover, in some cases no such decomposition exists. Thus, pure and proper decompositions are to be treated with care and must be well tested before being incorporated into an algorithm.
Since pure and/or proper decompositions need not always be found, the approximate decomposition (together with the analysis) is the key point of this algorithm. Basically, there are two extreme strategies:
- a "clever" search for a good approximate decomposition guided by a heuristic function (the evaluation of which, however, will take some time) and a limited number of returns, or
- a "silly" (but fast) approximate decomposition with a higher number of returns.

Fig. 3 shows an example of the convergence curves of both strategies. Notice that the analysis step should take into consideration the shape of the convergence curve as well as the number of approximate decompositions when deciding whether to stop the search. Commonly known search strategies, such as greedy search, simulated annealing, tabu search, etc. (for an overview see [15]) are in fact special cases of the general analysis step.

Evaluation of the Algorithm
In the previous subsection we have discussed the utilization of various possibilities of decomposition in the proposed algorithm. The question now is how to find out whether a particular decomposition rule is useful or not. In addition, the usefulness of a decomposition rule may be affected by the search strategy (the analysis step) employed in the algorithm. Since the behaviour of the algorithm depends heavily on the particular circumstances, there is no general answer to the above-mentioned question. In some cases the behaviour of a particular decomposition rule can be derived analytically. Similarly, an analysis of the structure of an instance may give the probability that a decomposition rule will be applied successfully. Obviously, only decomposition rules with good run-time complexity and high enough efficiency should be incorporated into the algorithm.

364 M. SERVÍT AND J. ZAMAZAL

However, in most cases we are forced to resort to empirical testing on a set of benchmarks [15]. Sometimes these test instances will have arisen in a real situation, but real data are scarce, and may represent only a small fraction of the possible population of instances. Thus, benchmarks are often generated by randomly sampling from what is assumed to be the population of instances of the appropriate type.

COVERING PROBLEM
In this section we will demonstrate the utilization of the notions introduced in the first part of this article with the covering problem. First, the covering problem and several related notions will be formally defined. Thereafter, several rules will be proposed to find a decomposition and/or reduction of the covering problem, and finally, the adaptation of the general algorithm with respect to these rules and its evaluation will be described.

Definition and Complexity
In this subsection we will introduce a formal definition of the covering problem and discuss its complexity. Each constraint Tj is given by literals tji ∈ {ai, ¬ai, 0}, where tji = 0 means that the variable ai does not occur in Tj. We say that the literal tji is true if ai ∈ S and tji = ai, or if ai ∉ S and tji = ¬ai. If there is any literal true in a constraint Tj, we say that Tj is covered by the set S; otherwise we say that Tj is an uncovered constraint. In the following text we will use a shortened notation; 0's are obviously not necessary in T and can be omitted: T1 = (a1 ∨ a2), T2 = (a2 ∨ a3 ∨ ¬a4), T3 = (a2 ∨ a3 ∨ a4), T4 = (a3 ∨ a4).
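With constraints stored explicitly as lists of literals, the coverage test is direct. The pair encoding (variable, polarity) and the sample system, which follows the reconstruction of the shortened notation given here, are illustrative assumptions:

```python
# A literal is a pair (variable, polarity): (v, True) stands for v,
# (v, False) for its inversion ¬v. A constraint is a list of literals;
# omitted variables (the 0 entries) simply do not appear.

def literal_true(lit, S):
    v, positive = lit
    return (v in S) == positive

def covered(constraint, S):
    """A constraint is covered by S iff at least one of its literals is true."""
    return any(literal_true(lit, S) for lit in constraint)

T = [
    [("a1", True), ("a2", True)],                  # T1 = (a1 ∨ a2)
    [("a2", True), ("a3", True), ("a4", False)],   # T2 = (a2 ∨ a3 ∨ ¬a4)
    [("a2", True), ("a3", True), ("a4", True)],    # T3 = (a2 ∨ a3 ∨ a4)
    [("a3", True), ("a4", True)],                  # T4 = (a3 ∨ a4)
]
S = {"a2", "a3"}
print([covered(Tj, S) for Tj in T])   # all four constraints are covered by S
```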

History and Applications
The covering problem has a wide range of applications in VLSI design. Probably the best known example is the Quine-McCluskey method of minimization of boolean functions [10]: given a set of prime implicants, a subset of minimum total cost is sought such that all 1-minterms are covered. This condition can be expressed as a boolean formula in conjunctive normal form [13] that maps directly to the system of constraints T. In this case T contains no inverted literals. This is why this type of covering problem has been called the Unate Covering Problem. Other applications of the unate covering problem in the field of circuit design are the following (for an overview see [19]):
- design of a minimum-cost two-level combinational network,
- design of a minimum-cost TANT network,
- finding a minimum test set, and
- minimizing the bit dimension of the control memory.
The Binate Covering Problem is a generalization of the unate covering problem where T is not restricted to be unate and therefore can express more complex constraints.
Probably the best known application of the binate covering problem is the problem of state reduction in incompletely specified finite state machines [6], [7]: given a set of maximal compatibles, a cover of minimal total cost is sought such that closure constraints are satisfied. In the field of electronic circuit design the problem has more applications:
- minimization of boolean relations [2],
- synthesis of sequential logic circuits [4],
- library-based technology mapping [17], and
- minimum test pattern generation [8].

Decomposition Rules
In this subsection we will deal with a "true" decomposition, i.e. such a decomposition where an instance of the covering problem is broken up into two non-trivial smaller instances that are solved separately. The solution of the original instance is obtained as a union of solutions of the new instances.
We will propose three theorems exemplifying different types of decomposition. First, we will propose a simple observation that will be found useful for proving the subsequent theorems.

Theorem 5.1
Let Π = (A, T, W), Π' = (A', T', W'), and Π'' = (A'', T'', W'') be instances of the covering problem, and let S' and S'' be covers of Π' and Π'' respectively such that S' ∩ A'' = S'' ∩ A'. Then the set S = S' ∪ S'' is a cover of Π.

Proof: Apparently, the set A is divided into three disjoint parts: A1 = A \ A'', A2 = A \ A', and A3 = A' ∩ A''. Variables in A1 have no effect on constraints in T'', and variables in A2 have no effect on constraints in T'. Variables in A3 are contained in both T' and T''; this is why each variable from A3 either must be contained in both S' and S'', or must not be contained in either of them. Then the set S = S' ∪ S'' is a cover of Π. □

Example 5.1
Let us consider an instance Π with A = {a1, a2, a3, a4} decomposed into instances Π' and Π'' with the weights W''(a2) = 1, W''(a3) = 1, W''(a4) = 1. Apparently, S' = {a1, a2} is a minimal cover of Π' and S'' = {a2, a4} is a minimal cover of Π''. Since it holds that S' ∩ A'' = S'' ∩ A' = {a2}, the set S = S' ∪ S'' = {a1, a2, a4} is a cover of Π. However, a minimal cover of Π is the set S1 = {a2, a3}: an approximate decomposition need not preserve any minimal solution.

Example 5.2
The same can happen even when the minimal covers of the new instances are unique. Let us consider the previous instances with the weights W''(a2) = 2, W''(a3) = 3, W''(a4) = 2. Apparently, S' = {a1, a2} is the only minimal cover of Π' and S'' = {a2, a4} is the only minimal cover of Π''. Since it holds that S' ∩ A'' = S'' ∩ A' = {a2}, the set S = S' ∪ S'' = {a1, a2, a4} is a cover of Π. However, the only minimal cover of Π is the set S1 = {a2, a3}.
The following theorem introduces a condition similar to that proposed by Pipponzi and Somenzi in [14]:

Theorem 5.3
Let Π = (A, T, W), Π' = (A', T', W'), and Π'' = (A'', T'', W'') be instances of the covering problem, and let S' and S'' be the only minimal covers of Π' and Π'' respectively. The pair (Π', Π'') is a proper decomposition of Π if the set of conditions (1) is fulfilled and if S' ∩ S'' = A' ∩ A''.

Proof: First, let us consider that Π is a feasible instance. Then it is sufficient to prove that S = S' ∪ S'' is the only minimal cover of Π. Apparently, S = S' ∪ S'' is a cover of Π, because the conditions of Theorem 5.1 are fulfilled. Since W(S) = W(S') + W(S'') − W(S' ∩ S''), W(S) can be minimized only by increasing the cost of S' ∩ S''. Obviously, the cost of this set is maximal in cases such that S' ∩ S'' = A' ∩ A''. Since S' and S'' are the only minimal covers of Π' and Π'', S = S' ∪ S'' is the only minimal cover of Π. Now, let us consider that Π is an infeasible instance. By contradiction: let S', S'' be minimal covers of Π', Π'' respectively such that S' ∩ S'' = A' ∩ A''. Then, obviously, S' ∩ A'' = S'' ∩ A', and the set S = S' ∪ S'' is a cover of Π (Theorem 5.1). This is why no such decomposition exists for an infeasible instance. □

Example 5.4
Let us consider the following instances:
Π: A = {a1, a2, a3, a4, a5}
T: T1 = (a1 ∨ a3), T2 = (a2 ∨ a4), T3 = (a4 ∨ a5)
W: W(a1) = 1, W(a2) = 2, W(a3) = 2, W(a4) = 1, W(a5) = 2
Π': A' = {a1, a2, a3, a4}
T': T'1 = (a1 ∨ a3), T'2 = (a2 ∨ a4)
W': W'(a1) = 1, W'(a2) = 2, W'(a3) = 2, W'(a4) = 1
Π'': A'' = {a4, a5}
T'': T''1 = (a4 ∨ a5)
W'': W''(a4) = 1, W''(a5) = 2
Apparently, S' = {a1, a4} is the only minimal cover of Π' and S'' = {a4} is the only minimal cover of Π''. S = S' ∪ S'' = {a1, a4} is the only minimal cover of Π, because it holds that A' ∩ A'' = {a1, a2, a3, a4} ∩ {a4, a5} = {a4} = S' ∩ S''.
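The claims of Example 5.4 (in the unate form reconstructed here) can be checked by brute force; the helper below, which enumerates all minimum-cost covers, is an illustrative sketch and not the authors' program:

```python
from itertools import combinations

def minimal_covers(A, T, W):
    """All minimum-cost subsets of A intersecting every constraint in T."""
    best_cost, best = float("inf"), []
    for r in range(len(A) + 1):
        for sub in combinations(sorted(A), r):
            S = set(sub)
            if all(c & S for c in T):              # every constraint covered
                cost = sum(W[v] for v in S)
                if cost < best_cost:
                    best_cost, best = cost, [S]
                elif cost == best_cost:
                    best.append(S)
    return best

W = {"a1": 1, "a2": 2, "a3": 2, "a4": 1, "a5": 2}
T_full = [{"a1", "a3"}, {"a2", "a4"}, {"a4", "a5"}]   # constraints of Π
T_p    = [{"a1", "a3"}, {"a2", "a4"}]                 # constraints of Π'
T_pp   = [{"a4", "a5"}]                               # constraints of Π''
print(minimal_covers({"a1", "a2", "a3", "a4"}, T_p, W))           # only {a1, a4}
print(minimal_covers({"a4", "a5"}, T_pp, W))                      # only {a4}
print(minimal_covers({"a1", "a2", "a3", "a4", "a5"}, T_full, W))  # only {a1, a4}
```

The condition S' ∩ S'' = A' ∩ A'' = {a4} then guarantees, per Theorem 5.3, that S' ∪ S'' = {a1, a4} is the only minimal cover of Π.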
The following theorem presents a special case of Theorem 5.3 in which the new instances share no variables, i.e. A' ∩ A'' = ∅, so that the original instance splits into independent parts.

The application of Theorem 5.4 is quite straightforward. A system of m (i.e. the number of constraints in T) subsets of A is created so that each subset Aj contains the elements related to literals in constraint Tj. All subsets that are not disjoint are then united. As a result we obtain either a system of disjoint subsets of A, or the set A itself. Each subset corresponds to an "independent" part of the original instance that can be solved separately from the remaining ones. Apparently, such an evaluation of the intersections of a system of subsets can be done in polynomial time.
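The uniting of overlapping constraint supports described above can be sketched as follows (a simple quadratic merge; a union-find structure would compute the same groups faster):

```python
def independent_parts(constraint_supports):
    """Unite overlapping variable sets; each resulting group corresponds to
    an independent subinstance that can be solved separately."""
    groups = []                       # disjoint variable groups built so far
    for support in constraint_supports:
        merged = set(support)
        rest = []
        for g in groups:
            if g & merged:
                merged |= g           # unite groups sharing a variable
            else:
                rest.append(g)
        groups = rest + [merged]
    return groups

supports = [{"a1", "a3"}, {"a2", "a4"}, {"a4", "a5"}]
parts = independent_parts(supports)
print(parts)      # {a1, a3} is independent of {a2, a4, a5}
```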
On the other hand, we have not found any polynomial-time algorithm that would determine whether there exists a decomposition of an instance according to Theorem 5.2 and/or 5.3.
An algorithm based on a theorem similar to Theorem 5.3 has been published by Pipponzi and Somenzi in [13]. However, this algorithm does not run in polynomial time, and the results presented do not show a substantial difference in comparison with a branch and bound algorithm.

Reduction Rules
Unlike "true" decompositions, possibilities of reduction of an instance of the covering problem have been investigated by other authors (see [2], [13]). Other reduction rules are known for the closely related satisfiability problem [5], [16] and can be easily adopted for the covering problem. This is why we will limit ourselves to an overview of the most important reduction rules. Those who are interested in a complete survey are referred to [18].
Generally, the reduction rules are based on forms of occurrences of a variable in the system of constraints, and/or on relations between two variables, or two constraints. The following observation presents a simple rule of approximate reduction.

Observation 5.1
Let Π = (A, T, W) be an instance of the covering problem and ax ∈ A. Then an approximate reduction (Π', R) can be derived as follows:
1. A' = A \ {ax}
2. Create T' from T in the following way:
(a) Copy from T into T' all constraints Tj such that tjx ≠ ax
(b) Remove all literals ¬ax from T'
3. R = {ax}
Let us note that a complementary rule can be derived by exchanging the literals ax and ¬ax and setting R = ∅.
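Observation 5.1 can be sketched directly; the dict encoding of a constraint (variable mapped to True for ax, False for ¬ax) is an illustrative choice:

```python
# Assuming a_x belongs to the cover, every constraint containing the literal
# a_x is satisfied and disappears (step 2(a)), and every literal ¬a_x becomes
# false and is removed (step 2(b)).

def approximate_reduce(T, ax):
    T_prime = []
    for Tj in T:
        if Tj.get(ax) is True:      # Tj contains the literal a_x: covered
            continue                # do not copy it into T'
        Tj2 = {v: p for v, p in Tj.items() if v != ax}   # drop ¬a_x literals
        T_prime.append(Tj2)
    return T_prime, {ax}            # the pair (Π', R) with R = {a_x}

T = [{"a1": True, "a2": True},
     {"a2": True, "a3": True, "a4": False},
     {"a3": True, "a4": True}]
T_prime, R = approximate_reduce(T, "a4")
print(T_prime)   # (a3 ∨ a4) disappears; ¬a4 is removed from the middle constraint
print(R)
```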
A rule of pure reduction based on occurrences of a pair of variables (denoted Dominant Variables) is described in [18]. A similar notion is presented in [13] and is referred to as column dominance.
All the subsequent reduction rules present possibilities of proper reduction.

Definition 5.2 -- Essentially Accepted Variable
Let Π = (A, T, W) be an instance and ax ∈ A. Variable ax is Essentially Accepted iff there exists a constraint Tj in T such that Tj = (ax).

Definition 5.3 -- Essentially Rejected Variable
Let Π = (A, T, W) be an instance and ax ∈ A. Variable ax is Essentially Rejected iff there exists a constraint Tj in T such that Tj = (¬ax).

Observation 5.2
Let Π = (A, T, W) be an instance and ax ∈ A an essentially accepted variable. Then a proper reduction (Π', R) can be derived as follows:
1. A' = A \ {ax}
2. Create T' from T in the following way:
(a) Copy from T into T' all constraints Tj such that tjx ≠ ax
(b) Remove all literals ¬ax from T'
3. R = {ax}

A complementary observation can be derived for an essentially rejected variable. Let us note that the notions of essentially accepted and essentially rejected variables are known for both covering and satisfiability problems and are referred to as essential rows [13], or unit clauses [16]. In the following text both types will be dealt with together and referred to as Essential Variables.

Definition 5.4 -- Reducible Variable
Let Π = (A, T, W) be an instance and ax ∈ A. Variable ax is Reducible iff it occurs in T only in the form of the inverted literal ¬ax.

Observation 5.3
Let Π = (A, T, W) be an instance and ax ∈ A a reducible variable. Then a proper reduction (Π', R) can be derived as follows:
1. A' = A \ {ax}
2. Copy from T into T' all constraints Tj such that tjx = 0
3. R = ∅

Let us note that a notion analogous to the reducible variable is known for the satisfiability problem and is called a pure literal [16]. The adaptation of this notion for the covering problem is simple: only pure inverted literals can be applied.

Definition 5.5 -- Dominant Constraints
Let Π = (A, T, W) be an instance and Tp and Tq any two distinct constraints in T. Constraint Tp is dominated by Tq iff for each pair of literals tpi and tqi it holds that (tqi = tpi) ∨ (tpi = 0).

Observation 5.4
Let Π = (A, T, W) be an instance and Tp, Tq any two distinct constraints in T such that Tp is dominated by Tq. Then a proper reduction (Π', R) can be derived as follows:
1. A' = A
2. Copy from T into T' all constraints except Tq
3. R = ∅

A similar notion is known and is referred to as dominant rows [13].

Definition 5.6 -- Resolvable Constraints
Let Π = (A, T, W) be an instance and Tp and Tq any two distinct constraints in T. Constraints Tp and Tq are resolvable iff there exists exactly one x such that tpx = ax and tqx = ¬ax, and for all the remaining values i = 1, 2, ..., x − 1, x + 1, ..., n it holds that tpi = tqi. Variable ax is a resolving variable of Tp and Tq.

Observation 5.5
Let Π = (A, T, W) be an instance, and let Tp, Tq be any two resolvable constraints in T with a resolving variable ax. Then a proper reduction (Π', R) can be derived as follows:
1. A' = A
2. Create T' from T in the following way:
(a) Remove the literal ax from Tp
(b) Copy from T into T' all constraints except Tq
3. R = ∅

Let us note that this observation is in fact a special case of the well-known resolution principle which is used for solving the satisfiability problem. However, the principle is used here in the opposite direction: particular constraints are expanded using this rule until either a tautology or a contradiction is found.
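Two of the proper reduction rules above can be sketched in the same dict encoding (variable mapped to True for a positive literal, False for an inverted one); the selection of rules and all names are illustrative:

```python
def essential_variables(T):
    """Variables forced by single-literal constraints (Definitions 5.2/5.3)."""
    accepted = {next(iter(Tj)) for Tj in T
                if len(Tj) == 1 and next(iter(Tj.values())) is True}
    rejected = {next(iter(Tj)) for Tj in T
                if len(Tj) == 1 and next(iter(Tj.values())) is False}
    return accepted, rejected

def remove_dominated(T):
    """Observation 5.4: drop Tq whenever some other Tp has all of its
    literals contained in Tq (covering Tp then always covers Tq)."""
    keep = []
    for q, Tq in enumerate(T):
        dominated = any(p != q and Tp.items() < Tq.items()
                        for p, Tp in enumerate(T))
        if not dominated:
            keep.append(Tq)
    return keep

T = [{"a1": True},                            # a1 is essentially accepted
     {"a2": True, "a3": True},
     {"a2": True, "a3": True, "a4": True}]    # dominated via the previous one
accepted, rejected = essential_variables(T)
print(accepted, rejected)
print(remove_dominated(T))
```

The strict-subset test `Tp.items() < Tq.items()` deliberately leaves duplicate constraints alone; removing duplicates is a separate, trivial step.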
In [18] we present one more rule of proper decomposition based on occurrences of a pair of variables denoted Mutually Reducible Variables.

Complexity Analysis
In this subsection we will estimate the upper-bound time complexity of evaluating the above-mentioned reduction rules. We will consider an instance containing m constraints and n variables.
We have not found any way to evaluate analytically the probability of successful application of the reduction rules from the characteristics of a particular instance.

Empirical Testing
The empirical testing has been performed in two steps. In the first step our aim was to identify those decomposition and reduction rules which can be successfully used for construction of an effective algorithm solving the covering problem.
In the second step we have removed those rules that in the first step proved to be inefficient and/or too time-consuming. Simultaneously, we have added an approximate reduction step and thus implemented the complete algorithm from Fig. 2. In this series of experiments we wanted to examine the influence of various ways of performing the approximate reduction. At the same time we wanted to know which rules are worth evaluating with respect to the execution time and the quality of the result.

Experimental Environment
We have used the random instance generator described in [18] for testing the algorithm: we have generated square instances (number of constraints m equal to the number of variables n) of the following sizes: 10, 20, 40, and 80. To get an idea of the influence of instance characteristics, we have generated instances with 0% (unate instances) and 10% of inverted literals. Both types of instances were further divided according to the costs of variables and according to the quantity of literals in particular constraints. The costs of all variables were either equal to 1 (unweighted instances), or randomly generated with equal probability in the interval 1 to 5 (weighted instances).
All the experiments were repeated for various numbers of literals in each constraint. These numbers were randomly generated with equal probability as follows:
- sparse instances (number of literals in each constraint 10 to 25% of n),
- medium instances (25 to 75% of n), and
- dense instances (75 to 90% of n).
Thus, for each size 12 sets of examples were generated, each of them consisting of 20 experiments. The total number of experiments was 960.
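A generator in the spirit just described can be sketched as follows; the exact generator of [18] is not reproduced here, and all parameter names and distributions are illustrative assumptions:

```python
import random

def random_instance(n, inverted_frac=0.1, lit_lo=0.25, lit_hi=0.75,
                    weighted=True, seed=0):
    """Square random covering instance: n constraints over n variables."""
    rng = random.Random(seed)
    variables = [f"a{i}" for i in range(1, n + 1)]
    W = {v: rng.randint(1, 5) if weighted else 1 for v in variables}
    T = []
    for _ in range(n):
        # number of literals drawn uniformly from the chosen density band
        k = max(1, rng.randint(int(lit_lo * n), int(lit_hi * n)))
        chosen = rng.sample(variables, k)
        # each chosen variable appears inverted with probability inverted_frac
        T.append({v: rng.random() >= inverted_frac for v in chosen})
    return variables, T, W

A, T, W = random_instance(10)
print(len(A), len(T))     # 10 variables, 10 constraints
```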
All the programs were written in C and the tests run on Sun SparcStation 2.

First Experiment
In this step our aim was to identify those pure and proper decomposition/reduction rules which can be successfully used for construction of an effective algorithm.
We have developed a program depicted in Fig. 4 for this purpose. Notice that this program represents in fact the first two steps of the algorithm shown in Fig. 2.
Various combinations of the reduction rules presented earlier and the decomposition according to Theorem 5.4 have been incorporated into the program and tested. For each combination of rules the relative decrease in the size of the instance was measured. The execution time of evaluating the remaining reduction rules tested was considerably high with respect to the average decrease of the size of an instance (for detailed results see [18]).
On the other hand, the computational time of evaluating the decomposition rule tested was short, but only four instances (out of the set of 960 examples) were successfully decomposed [18].
Let us note that the usefulness of the application of Essential Variables is in accordance with results reported by other authors (see e.g. [13], [16]).

Second Experiment
In this step the algorithm shown in Fig. 5 has been implemented to determine:
- the effectiveness of the remaining reduction rules, and
- the influence of the approximate reduction (Observation 5.1) with respect to the search strategy.
Notice the similarity of Fig. 5 to the general scheme in Fig. 2.
To evaluate the effectiveness of the tested proper reduction rules we have compared the results of several modifications of the algorithm (see Table 1). The modifications tested are not orthogonal, because the selected heuristic function (described in full detail in [19] and [18]) detects the essential and reducible variables automatically. Since the usefulness of the application of Essential Variables has been reported by other authors, we have not evaluated any algorithm without this rule.

FIGURE 5 The optimized algorithm for the covering problem.

Results in Table 1 show the average quality ratio (AQR) and the runtime (Time) of the first solution found. The application of neither Reducible Variables nor Dominant and Resolvable Constraints leads to an improvement of the average quality ratio, but it increases the computational time.

Thus, we cannot recommend the application of these rules. In further experiments we will deal with the R and H versions only.
The best average quality ratio is provided by the H algorithm; however, its computational time is higher than that of the R algorithm. This is why we have evaluated the convergence curves of each of these algorithms. The smoothed complete convergence curves for the 40 × 40 examples are shown in Fig. 6. Table 2 shows one important point of the convergence curve: the average quality of the result delivered by the R algorithm at the time when the H algorithm delivers its first solution.
We may conclude that the H algorithm provides a better average quality ratio than the R algorithm even when the computational time is taken into consideration.

CONCLUSIONS
In this article decomposition and reduction have been discussed as general paradigms for solving set minimization problems. A hierarchy of types of decomposition and reduction has been proposed according to the ability to keep none, at least one, or all minimal solutions of the original instance. A general algorithm schema utilizing all kinds of decomposition and reduction has been presented and discussed.
The second part of this article contains a case study demonstrating the above-mentioned general approach with the covering problem. Several reduction/decomposition rules and search strategies have been presented and tested. The empirical testing of the efficiency of particular rules and strategies has allowed us to propose an optimized version of the general algorithm for the covering problem [19], [18].