Path-Wise Test Data Generation Based on Heuristic Look-Ahead Methods

Path-wise test data generation is generally considered an important problem in the automation of software testing. In essence, it is a constraint optimization problem, which is often solved by search methods such as backtracking algorithms. In this paper, the backtracking algorithm branch and bound and state space search from artificial intelligence are introduced to tackle the problem of path-wise test data generation. The former is utilized to explore the space of potential solutions, and the latter is adopted to construct the search tree dynamically. Heuristics are employed in the look-ahead stage of the search: dynamic variable ordering is presented with a heuristic rule to break ties, the values of a variable are determined by monotonicity analysis on branching conditions, and path consistency is maintained through analysis of the results of interval arithmetic. An optimization method is also proposed to reduce the search space. The results of empirical experiments show that the search is conducted in a basically backtrack-free manner, which ensures both promising performance in test data generation and better coverage than some existing static and dynamic methods. The results also demonstrate that the proposed method is applicable in engineering.


Introduction
Software testing plays an irreplaceable role in the process of software development, as it is an important stage to guarantee software reliability [1], which is a significant software quality feature [2]. It is estimated that testing accounts for almost 50 percent of the entire development cost [3], if not more, but manual testing is time-consuming, error-prone, and inefficient, and it is even impracticable for large-scale programs such as a Windows project with millions of lines of code (LOC) [4]. Therefore, the automation of testing is an urgent issue. Furthermore, as a basic problem in software testing, path-wise test data generation (denoted as Q) is of particular importance, because path-wise testing can detect almost 65 percent of the faults in the program under test (PUT) [5] and many problems in software testing can be transformed into Q.
The methods of solving Q can be categorized as static and dynamic. The static methods utilize techniques including symbolic execution [6, 7] and interval arithmetic [8, 9] to analyze the PUT without executing it, so the process of generating test data is deterministic and relatively cheap. They abstract the constraints to be satisfied, then propagate and solve these constraints to obtain the test data. Due to their precision in generating test data and their ability to prove that some paths are infeasible, the static methods have been widely studied. DeMillo and Offutt [10] proposed a fault-based technique that used algebraic constraints to describe test data designed to find particular types of faults. Gotlieb et al. [11] introduced "static single assignment" into a constraint system and solved the system. Cadar et al. from Stanford University proposed a symbolic execution tool named KLEE [12] and employed a variety of constraint solving optimizations; they represented program states compactly and used search heuristics to reach high code coverage. In 2013, Yawen et al. [13] proposed an interval analysis algorithm using forward data-flow analysis. But no matter what techniques are adopted, the static methods require a strong constraint solver.
The dynamic methods, including metaheuristic search (MHS) algorithms [18] such as genetic algorithms [19], ant colony optimization [15], and simulated annealing [20], all require the actual execution of the PUT. They select a group of test data (usually at random) in advance and execute it to observe whether the goal is reached, that is, whether the coverage criteria are satisfied or faults are detected; if not, they spot the problem and alter the values of some input variables to make the PUT execute in the expected way. They are flexible methods, as they can easily change the input during the testing process, but they are sensitive to the size of the search space, the diversity of the initial population, the effectiveness of the evolution operators, and the quality of the fitness function [21]. The repeated exploration requires a large number of iterations, sometimes even causing iteration exceptions. The randomness of the initial values is also a big problem, because it brings uncertainty to the search result [22].
In this paper, considering the drawbacks of the dynamic methods mentioned above and the demands placed on static methods, we propose a new static test data generation method based on Code Test System (CTS) (http://ctstesting.cn/), a practical tool to test code written in the C programming language. Our contribution is threefold. First, path-wise test data generation is defined as a constraint optimization problem (COP), and two techniques from artificial intelligence (state space search and branch and bound) are integrated to tackle the COP. Second, heuristics are adopted in the look-ahead stage of the search to improve search efficiency. Third, an optimization method is proposed to reduce the search space. Through the experimental results, we also try to evaluate the performance of our method and the relationship between look-ahead and look-back techniques.
The rest of this paper is organized as follows. Section 2 provides the background underlying our research. The problem Q is reformulated as a COP and the solution is presented in Section 3. Section 4 illustrates the proposed search strategies in detail and describes the optimization method used to reduce the search space. The heuristic look-ahead techniques with a case study are given in Section 5. Section 6 focuses on experimental analyses and empirical evaluations of the proposed approach as well as coverage comparisons with some existing test data generation methods. Section 7 concludes this paper and highlights directions for future research.

Background
State space search [23, 24] is a process in which successive states of an instance are considered, with the goal of finding a final state with a desired property. Problems are normally modeled as a state space, a set of states that a problem can be in. The set of states forms a graph in which two states are connected if there is an operation that can transform the first state into the second. State space search characterizes problem solving as the process of finding a solution path from an initial state to a final state. In state space search, the nodes of the search tree correspond to partial problem solutions, and the arcs correspond to steps in the problem-solving process. State space search differs from traditional search methods because the state space is implicit: the typical state space is too large to generate and store in memory. Instead, nodes are generated as they are explored and typically discarded thereafter.
Branch and bound (BB) [25, 26] is an efficient backtracking algorithm for searching the solution space of a problem, as well as a common search technique for solving optimization problems. The advantage of the BB strategy lies in alternating branching and bounding operations on the set of active and extensive nodes of a search tree. Branching refers to partitioning the solution space (generating the child nodes); bounding refers to computing bounds that establish feasibility without exhaustive search (evaluating the cost of new child nodes). The techniques for improving BB are categorized as look-ahead and look-back methods. Look-ahead methods [27] are invoked whenever the search is preparing to extend the current partial solution, and they concern the following problems: (1) how to select the next variable to be instantiated or assigned a value; (2) how to select a value to instantiate a variable; (3) how to reduce the search space by maintaining a certain level of consistency. Look-back methods are invoked whenever the search encounters a dead end and is preparing for the backtracking step, and they can be classified into chronological backtracking and backjumping.
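The alternation of branching and bounding can be illustrated with a small, self-contained example. The Python sketch below (with hypothetical names; it is not the paper's algorithm) maximizes the number of items picked under a cost budget, pruning any branch whose best conceivable outcome cannot beat the incumbent solution:

```python
def branch_and_bound(costs, budget):
    """Pick as many items as possible under a cost budget, pruning
    branches whose best conceivable outcome cannot beat the incumbent."""
    best = {"count": 0}

    def search(i, spent, count):
        # Bounding: even taking every remaining item cannot beat the best.
        if count + (len(costs) - i) <= best["count"]:
            return
        if i == len(costs):
            best["count"] = count
            return
        if spent + costs[i] <= budget:        # branch 1: take item i
            search(i + 1, spent + costs[i], count + 1)
        search(i + 1, spent, count)           # branch 2: skip item i

    search(0, 0, 0)
    return best["count"]

print(branch_and_bound([4, 2, 3, 1], budget=6))   # items 2, 3, 1 fit: 3
```

The two recursive calls are the branching step; the first `if` in `search` is the bounding step that prunes the subtree without exhaustive enumeration.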
An important static testing technique adopted in this paper is interval arithmetic [8, 9, 28–30], which represents each value as a range of possibilities. An interval is a continuous range in the form [min, max], while a domain is a set of intervals. For example, if an integer variable x ranges from −3 to 6 but cannot be equal to 0, then its domain is represented as [−3, −1] ∪ [1, 6], which is composed of two intervals. Interval arithmetic has a set of arithmetic rules defined on intervals. It analyzes and calculates the ranges of variables starting from the entrance of the program and provides precise information for further program analysis efficiently and reliably.
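As a rough illustration of such rules, the following Python sketch (hypothetical helper names `iv_add`, `iv_mul`, and `dom_add`; real interval libraries cover far more operators and rounding concerns) applies interval addition and multiplication to a domain like the one in the example above:

```python
# An Interval is a (min, max) pair; a Domain is a list of disjoint Intervals.

def iv_add(a, b):
    """[a0, a1] + [b0, b1] = [a0 + b0, a1 + b1]."""
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    """Multiplication takes the extremes of the four corner products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def dom_add(domain, iv):
    """Lift interval addition to a domain (a union of intervals)."""
    return [iv_add(part, iv) for part in domain]

# The example domain [-3, -1] ∪ [1, 6], shifted by the point interval [2, 2].
print(dom_add([(-3, -1), (1, 6)], (2, 2)))   # [(-1, 1), (3, 8)]
```

The same lifting scheme extends to the other arithmetic and relational operators used when propagating branching conditions.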

Reformulation of Path-Wise Test Data Generation
This section addresses the reformulation of path-wise test data generation. The problem definition and its solution are presented in Sections 3.1 and 3.2, respectively. In particular, each constraint defined by the PUT along a path p should be met to make p feasible. An example program test and its corresponding CFG are shown in Figure 1, where if out 7, if out 8, if out 9, and exit 10 are dummy nodes. Adopting branch coverage, there are four paths to be traversed (Path1 to Path4); the numbers along the paths denote nodes rather than edges of the CFG. Assuming that Path3 is the path to be traversed, as shown in bold, our work is to select values {x1, x2, x3} from the domains {D1, D2, D3} of the three input variables, so that when executing test with {x1, x2, x3} as an input, the path traversed is Path3. There are three branching nodes, if head 1, if head 3, and if head 5, along Path3 and three corresponding branches, F 2, F 4, and T 5, that contain the constraints to be met.

Solution to the Problem.
A COP is generally solved by search strategies, among which backtracking algorithms [34] are widely used. In this paper, state space search and the backtracking algorithm BB are introduced to solve the COP mentioned above. The process of exploring the solution space is represented as state space search, and this representation facilitates the implementation of BB. In classical BB search, nodes are always fully expanded; that is, for a given leaf node, all child nodes are immediately added to the so-called open list. However, considering that one solution is enough for path-wise test data generation, best-first search is our first choice. To find the best, an ordering of the variables is required for branching, so that the branches stretching out from unneeded variables can be pruned. In addition, as the domain of a variable is a finite set of possible values that may be quite large, bounding is necessary to cut off unneeded or infeasible solutions. In the BB framework, bisection [35] is often used to help prune unneeded parts of the solution space. Employing bisection, this paper proposes best-first-search branch and bound (BFS-BB) to automatically generate the test data.
It has been observed empirically that the enhancement of look-ahead methods is sometimes counterproductive to the effects of look-back methods [36]. In BFS-BB, heuristics are adopted in the look-ahead search. In particular, they are used in the dynamic ordering of variables, the selection of the values to assign to a variable, the judgment of the feasibility of the path after an assignment to a variable, and the reduction of the search space. Chronological backtracking is used for look-back. From the results of the experiments, we try to seek out the relationship between look-ahead and look-back methods.
During the search process, variables are divided into three sets: past variables (PV for short, already instantiated), the current variable (now being instantiated), and future variables (FV for short, not yet instantiated). All the variables involved in this paper are symbolic variables. In the interest of simplicity, the transformation from input variables to symbolic variables and the inverse transformation are beyond the scope of this paper. In addition, although the experiments were carried out on benchmarks from the literature and industrial programs with different variable types, integer variables are used for brevity in the following algorithms.

The Proposed Search Strategies
This section proposes the framework of the search strategies. In particular, the representation of state space search is described in detail in Section 4.1, followed by the search algorithm BFS-BB in Section 4.2. An optimization method in BFS-BB is explained in Section 4.3. A state space is a quadruple (S, O, I, F), where S is a set of states, O is a set of connections between the states in accordance with the search operations, I is a nonempty subset of S denoting the initial state of the problem, and F is a nonempty subset of S denoting the final state of the problem.

The Representation of State Space
State space search is all about finding a final state in a state space (which may be extremely large). "Final" means that every variable has been instantiated with a definite value successfully. At the start of the search, Precursor is null, and when Queue is null the search ends. The path made up of all the extensive nodes in the search tree forms the solution path. The state space needs to be searched to find a solution path from an initial state to a final state.

The Search Algorithm BFS-BB.
The idea of the search algorithm BFS-BB is to extend partial solutions. At each step, a variable in FV is selected and assigned a value from its domain to extend the current partial solution. It is checked whether such an extension may lead to a possible solution of the COP, and the subtrees containing no solutions based on the current partial solution are pruned. Some concepts in BFS-BB are explained as follows.
Irrelevant variable removal (IVR) identifies the variables relevant to the path to be traversed and removes those that are irrelevant. Dynamic variable ordering (DVO) permutates FV and returns a queue. Path tendency calculation (PTC) calculates the path tendencies of all relevant variables along the path, which are then used to calculate the domains from which their initial values are selected.
Initial domain calculation (IDC) calculates the domain of a variable from which its initial value is selected, according to the path tendency calculated by PTC. Bisection reduces the domain of the current variable when the value just assigned fails to satisfy a constraint on the path. Maintaining path consistency (MPC) utilizes interval arithmetic to determine whether the domains of all variables satisfy the constraints along the path.
The overview of our approach can be seen in Figure 2. The path to be traversed is shown in the left part, where the red circles represent nodes and the arrows represent edges of the CFG. The path contains the constraints to be met, the set of input variables, and the domains corresponding to the variables. The first stage performs the initialization operations. First, IVR (see Section 4.3) is called to reduce the search space by removing irrelevant variables, leaving only those relevant to the path. Then four heuristic look-ahead methods take effect. MPC (see Section 5.3) is used to partially reduce the input domains of all variables and occasionally to detect infeasible paths. All the relevant variables in FV are permutated by DVO (see Section 5.1) to form a queue Q1, whose head x1 is determined to be the best, that is, the first, variable to be instantiated. Next, PTC (see Section 5.2) calculates the path tendency of each variable, and IDC (see Section 5.2) calculates the reduced domain D11 from which the initial value v11 is selected for x1. With all these, the initial state is constructed as (Precursor = null, Variable = x1, Value = v11, Domain = D11, Type = active, Queue = Q1), which is also the current state Scur, shown as the red ring.
The second stage implements state space search, in which the four heuristic look-ahead methods work. For each active state, MPC is carried out to determine the direction of the next search step. If MPC succeeds, Type becomes extensive, the variables in FV are permutated by DVO to get a new queue Qi, Scur becomes Precursor, and the head xi of Qi becomes the Variable of the next state. Then IDC is used to calculate the domain Di1 from which the initial value vi1 is selected for xi. With all these, a new state (Precursor = Scur, Variable = xi, Value = vi1, Domain = Di1, Type = active, Queue = Qi) is constructed, for which the MPC check continues. If after a successful MPC check no variable remains to be permutated, then all the relevant variables have been assigned the right values to make the path feasible, and the final state is reached, shown as the red double ring. Finally, giving the irrelevant variables random values completes the generation of the test data, which is the output of BFS-BB as shown in the right part of Figure 2. If an MPC check fails, Type remains active, bisection (see Section 5.2) is conducted to reduce the domain of Variable using the information from the failed MPC check, and Value is reselected from the reduced domain; all of this means that the search expands to a state with a different value for the same variable. If all the values within the domain of the current variable are tried out, or the number of MPC checks reaches the upper bound U, then the variable is moved out of PV and Type becomes inactive. In this case, the search has to backtrack to Precursor at the higher level of the search tree, as shown by the bidirectional arrow between backtracking and the heuristic look-ahead methods. The above-mentioned search process is described in pseudocode as Algorithm 1.
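A heavily simplified Python sketch of this loop may help fix the ideas. The `consistent` predicate stands in for the MPC check, ordering by smallest remaining domain stands in for DVO, and bisection plus the upper bound U are collapsed into a plain scan of the domain; the names `bfs_bb`, `doms`, and `ok` are hypothetical and not part of the paper's algorithms:

```python
def bfs_bb(variables, domains, consistent):
    """Simplified BFS-BB skeleton: order future variables by remaining
    domain size, try values from the current variable's domain, and
    backtrack chronologically when a variable's domain is exhausted."""
    assignment = {}

    def extend(free):
        if not free:
            return True                        # final state reached
        var = sorted(free, key=lambda v: len(domains[v]))[0]   # DVO stand-in
        for value in list(domains[var]):       # bisection collapsed to a scan
            assignment[var] = value
            if consistent(assignment):         # MPC stand-in
                if extend([v for v in free if v != var]):
                    return True
        assignment.pop(var, None)              # dead end: backtrack
        return False

    return assignment if extend(list(variables)) else None

# Toy path constraints: x - y > 0 and y + z > 0 (cf. branching conditions).
def ok(a):
    # Each condition is checked only once both of its operands are assigned.
    cond1 = "x" not in a or "y" not in a or a["x"] - a["y"] > 0
    cond2 = "y" not in a or "z" not in a or a["y"] + a["z"] > 0
    return cond1 and cond2

doms = {"x": range(-2, 3), "y": range(-2, 3), "z": range(-2, 3)}
print(bfs_bb(["x", "y", "z"], doms, ok))
```

Unlike this sketch, the real algorithm narrows domains with interval arithmetic after every assignment, so most runs never enter the backtracking branch at all.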

Irrelevant Variable Removal.
As mentioned above, X = {x1, x2, . . ., xn} is the set of input variables for a program. The search space needs to involve every xi (i = 1, 2, . . ., n) in X. However, not every variable is necessarily responsible for determining whether a given path will be traversed. Therefore, when attempting to generate test data for a particular path p, the search effort spent on the value of a variable that is not relevant to p is wasted, since it cannot influence the traversal of p. Thus, removing irrelevant variables from the search space and concentrating only on the variables relevant to the path of interest may improve the performance of the search. Hence we propose an optimization method, irrelevant variable removal (IVR). Relevant variable and irrelevant variable are defined as follows.
Definition 1. A relevant variable is an input variable that can influence whether a particular path p will be traversed or not. To put it more precisely, for the input variables {xi | xi ∈ X, i = 1, 2, . . ., n}, there exists a corresponding set of values {vi | vi ∈ Di, i = 1, 2, . . ., n} with which p is not traversed; but when the value of a particular variable xj is changed into v′j, p is traversed with the input {v1, v2, . . ., v′j, . . ., vn}. Then xj is a relevant variable to path p.

Definition 2. An irrelevant variable is an input variable that is not capable of influencing whether a particular path p will be traversed or not. To put it more precisely, for all the value sets {vi | vi ∈ Di, i = 1, 2, . . ., n} of the search space of path p with which p is not traversed, p is still not traversed no matter what value v′j the variable xj takes in the input {v1, v2, . . ., v′j, . . ., vn}. Then xj is an irrelevant variable to path p.

Generally, for a particular path, whether an input variable is relevant or irrelevant cannot be completely decided, due to the complex structure of programs, but we can make a conservative estimate of irrelevancy with static control flow techniques. We give the most common condition in PUTs. Assume that there are k branches along a path; each branch (ni, ni+1) (i ∈ [1, k]) needs to be traversed to find the set of relevant variables. The removal of irrelevant variables involves judging whether a variable appears on each branch, so we give the definition below, which is utilized by Algorithm 2. Considering the relation between the complexity of BFS-BB and the number of variables, we also give Proposition 4 on the effectiveness of IVR.

Definition 3. The branching condition Br(ni, ni+1) is the constraint on the branch (ni, ni+1) (i ∈ [1, k]), and it can be represented as formula (1), where R is a relational operator.

We conduct IVR for all the paths in Figure 1, and the process is shown in Table 1. The position where a variable is judged relevant to the path of interest is highlighted in bold.
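A conservative estimate of relevancy of this kind can be sketched as follows (Python; `relevant_variables` is a hypothetical name, and matching variable names textually against branching conditions is a simplification of the static analysis performed by Algorithm 2):

```python
import re

def relevant_variables(branching_conditions, input_vars):
    """Conservative IVR stand-in: a variable is kept as relevant if its
    name appears in any branching condition along the path; the rest
    are removed from the search space and given arbitrary values later."""
    relevant = set()
    for cond in branching_conditions:
        tokens = set(re.findall(r"[A-Za-z_]\w*", cond))   # identifiers in cond
        relevant |= tokens & set(input_vars)
    return sorted(relevant)

# Path3-style example: the conditions mention x1, x2, x3 but never x4.
conds = ["x1 - x2 > 0", "x2 + x3 <= 1", "x3 > 0"]
print(relevant_variables(conds, ["x1", "x2", "x3", "x4"]))   # ['x1', 'x2', 'x3']
```

The estimate is conservative in the sense that it may retain a variable that in fact cannot affect the path, but it never discards a truly relevant one.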

The Heuristic Look-Ahead Methods
In this section, the heuristic look-ahead methods in BFS-BB are explained in detail in Sections 5.1, 5.2, and 5.3, respectively. Section 5.4 provides a case study to illustrate these methods.

Heuristics in Variable Ordering.
In practice, the chief goal in designing variable ordering heuristics is to reduce the size of the overall search tree. In our method, the next variable to be instantiated is the one with the minimal remaining domain size (the size of the domain after removing the values judged to be infeasible), because this can minimize the size of the overall search tree. The technique used to break ties is important, as there are often variables with the same domain size. We use variables' ranks to break ties: in case of a tie, the variable with the higher rank is selected. This method gives substantially better performance than picking one of the tying variables at random. Rank is defined as follows.
The rank of the first branch is 1, the rank of the second is 2, and the ranks of those following can be obtained analogously. The variables appearing on a branch enjoy the same rank as the branch; the rank of a variable on a branch where it does not appear is taken to be infinity. As a variable may appear on more than one branch, it may have different ranks. The rule to break ties according to the ranks of variables is based on the heuristic from interval arithmetic that the earlier a variable appears on a path, the greater influence it has on the result of interval arithmetic along the path. Therefore, if the ordering by rank is taken between a variable that appears on the branch (ni, ni+1) and a variable that does not, then the former has the higher rank: on that branch, the former has rank i while the latter has rank infinity, and the comparison between i and infinity determines the ordering. The algorithm is described in pseudocode as Algorithm 3.

Table 1: IVR process for each path of test in Figure 1.
Quicksort is utilized when permutating variables according to remaining domain size and returns Qi as a result. If no variables have the same domain size, then DVO finishes. But if there are variables whose domain sizes are the same as that of the head of Qi, then the ordering by rank is under way, which terminates as soon as different ranks appear.
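Collapsing quicksort and the rank comparison into a single two-part sort key, the ordering rule can be sketched as below (Python, hypothetical names; note that "higher rank" means a smaller rank number, since the first branch has rank 1 and absent variables have rank infinity):

```python
def dvo(free_vars, domain_size, rank):
    """DVO stand-in: order future variables by smallest remaining
    domain size, breaking ties by rank (the earliest branch a variable
    appears on; variables absent from every branch get infinity)."""
    return sorted(free_vars, key=lambda v: (domain_size[v], rank[v]))

# Hypothetical figures: x1 and x2 tie on domain size, so rank decides.
sizes = {"x1": 4, "x2": 4, "x3": 6}
ranks = {"x1": 1, "x2": 2, "x3": 3}     # x1 appears on the first branch
print(dvo(["x3", "x2", "x1"], sizes, ranks))   # ['x1', 'x2', 'x3']
```

The paper's implementation sorts by domain size first and only compares ranks among tying variables; the composite key above yields the same order in one pass.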

Heuristics in Value Selection.
DVO determines the next variable to be instantiated, and then the value selection strategies are employed. Considering the difference between the variable in question (e.g., xi) and the other variables, the branching condition defined by formula (1) can be rewritten as formula (2). After decomposing a branching condition into its basic functions, its monotonicity can be utilized in the selection of the initial value as well as other values of the variable in question.

Initial Value Selection.
Initial values of variables are of great importance to a search algorithm. On the one hand, in a backtrack-free search, the initial value of a variable is almost part of the solution. On the other hand, the selection of initial values affects whether the search will be backtrack-free. Initial values are often selected at random in MHS methods, which return different test data each time and thus allow diversity; but randomness without any heuristics is a kind of blind search and causes too many iterations, sometimes even exceptions. Meanwhile, midvalues are selected in methods using bisection, so the same result may be returned repeatedly, since the same initial value is always selected. In our method, the above two approaches are combined, and the initial value of a variable is determined based on its path tendency, which is defined and calculated as follows.
Definition 7. Path tendency ∈ {positive, negative} is an attribute of a variable on a path that is in favor of the satisfaction of all the branching conditions along the path, and it provides the information about where to select the variable's initial value. Positive implies that a larger initial value will work better, while negative implies that a smaller initial value is better.
The calculation of the path tendency of a variable xj involves the calculation of its weight on each branch (ni, ni+1) (i ∈ [1, k]) and its path weight, denoted as wj(ni, ni+1) and Wj, respectively, which are calculated as formula (3). Path tendency calculation (PTC) gleans the path tendency of each variable from Wj. Subsequently, initial domain calculation (IDC) works on the result of PTC. In this way, the initial value selection allows for both diversity and heuristics. The algorithms are expressed in pseudocode as Algorithms 4 and 5.
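The combination of heuristics and diversity in initial value selection can be sketched as follows (Python; `initial_value` is a hypothetical name, and the real IDC works on the domain computed by interval arithmetic rather than a single [min, max] pair):

```python
import random

def initial_value(domain_min, domain_max, path_tendency):
    """IDC stand-in: pick the initial value from the half of the domain
    favored by the variable's path tendency. Choosing at random inside
    that half preserves the diversity of MHS-style initialization while
    still honoring the heuristic."""
    mid = (domain_min + domain_max) // 2
    if path_tendency == "positive":
        return random.randint(mid, domain_max)   # larger half
    return random.randint(domain_min, mid)       # smaller half

# For x2 in the case study (domain [-2, 1], tendency negative),
# this returns a value in [-2, -1].
print(initial_value(-2, 1, "negative"))
```

Because the half-domain is chosen by the tendency but the value within it is random, repeated runs can return different test data, unlike plain midvalue bisection.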

Bisection by Tendency.
Bisection functions only when a value (including the initial value) assigned to the current variable is judged to be infeasible and the conflicted branch (ni, ni+1) with the false branching condition is located. Then the tendency of the current variable is used by bisection, defined as follows.
Definition 8. Tendency ∈ {positive, negative} is an attribute of a variable at a branch (ni, ni+1) (i ∈ [1, k]) determined by analysis of the monotonicity of the corresponding branching condition, and it provides the information about where to select a value to better satisfy the branching condition. Positive implies that a larger value will work better, while negative implies that a smaller value is better. It is calculated according to formula (4). Each branch holds a tendency map ⟨variable, tendency⟩, which includes the variables appearing on the branch and their corresponding tendencies. With the tendency map, bisection can be applied to reduce the domain of the current variable, leading the branching condition to be true, as presented in pseudocode in Algorithm 6.
For example, if the conflicted branch is the first branch of Path3 in Figure 1, then the corresponding branching condition is x1 − x2 > 0, which has different monotonic relations with x1 and x2, respectively. Table 2 shows how to use bisection to reduce the domains of the variables. If the current variable is x1, then retrieval of the tendency map returns positive, indicating that a larger value will help satisfy the branching condition, so we reduce its domain to the larger part. But if the current variable is x2, bisection functions in the opposite way, due to the opposite monotonic relation.
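The reduction illustrated in Table 2 can be sketched as below (Python; `bisect_domain` is a hypothetical name, and integer domains are assumed):

```python
def bisect_domain(low, high, tendency):
    """Bisection after a failed MPC check: keep the half of the current
    variable's domain that its tendency at the conflicted branch says
    is more likely to satisfy the branching condition."""
    mid = (low + high) // 2
    return (mid + 1, high) if tendency == "positive" else (low, mid)

# Conflicted condition x1 - x2 > 0: increasing in x1, decreasing in x2.
print(bisect_domain(-2, 2, "positive"))   # x1's domain shrinks to (1, 2)
print(bisect_domain(-2, 2, "negative"))   # x2's domain shrinks to (-2, 0)
```

Each failed check halves the remaining domain, so at most log2 of the domain size values are ever tried for one variable at one branch.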

Heuristics in Maintaining Path Consistency.
As mentioned in Section 4.2, MPC can be used in both stages of BFS-BB; in this part, the focus is on the state space search stage. A value assigned to the current variable, no matter whether it is the initial value or another value selected after bisection, should be examined by interval arithmetic to see whether it is part of the solution. Path consistency is a prerequisite for the success of interval arithmetic. In the implementation of BFS-BB, interval arithmetic is enhanced to provide more precise interval information. The enhancement makes clear how the value of the branching condition defined by formula (2) is calculated, as shown in formula (5), where Di denotes the domain of all variables before calculating the ith branching condition. Besides, a library of inverse functions is added in case of occurrences of library functions in the PUT. Hence, for k branching nodes along path p, all the k branching conditions should be true to maintain path consistency. MPC receives the value assigned to the current variable, which collapses that variable's domain to a point interval within the domain of all variables, denoted as D1, and evaluates the branching condition corresponding to the branch (n1, n1+1), where n1 is the first branching node. The branching condition Br(n1, n1+1) is generally not satisfied by all the values in D1 but by the values in a certain subset D2 ⊆ D1 ensuring the traversal of the branch (n1, n1+1); that is, D1 is narrowed to D2 under Br(n1, n1+1). Next, the branching condition Br(n2, n2+1) is evaluated given that the domain of all variables is D2. Again, generally, Br(n2, n2+1) is only satisfied by a subset D3 ⊆ D2. This procedure continues along p until all the branching conditions are satisfied to maintain path consistency, and Dk+1 is returned as the domain of all variables. The process of maintaining path consistency is thus the propagation of the branching conditions along p in the form D1 → D2 → D3 → ⋅⋅⋅ → Dk+1 under Br(n1, n1+1), Br(n2, n2+1), . . ., Br(nk, nk+1), where D1 ⊇ D2 ⊇ D3 ⊇ ⋅⋅⋅ ⊇ Dk ⊇ Dk+1. But if in this procedure Br(nh, nh+1) = false (1 ≤ h ≤ k), which means a conflict is detected, then MPC is terminated and bisection functions according to the result of MPC at the conflicted branch (nh, nh+1). The process of checking whether path consistency is maintained is shown in pseudocode as Algorithm 7.
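The propagation D1 ⊇ D2 ⊇ ⋅⋅⋅ ⊇ Dk+1 can be sketched for the very restricted case of single-variable conditions (Python; `mpc` is a hypothetical name, and a full implementation evaluates arbitrary branching conditions via interval arithmetic and inverse functions):

```python
def mpc(domains, conditions):
    """MPC stand-in: propagate each branching condition in path order,
    narrowing interval domains D1 ⊇ D2 ⊇ ... ⊇ Dk+1. Conditions are
    (var, op, const) triples over integer variables."""
    doms = dict(domains)                      # D1
    for var, op, const in conditions:
        lo, hi = doms[var]
        if op == ">":
            lo = max(lo, const + 1)           # integer variables assumed
        elif op == "<=":
            hi = min(hi, const)
        if lo > hi:
            return None                       # Br(...) false: conflict detected
        doms[var] = (lo, hi)                  # D(i+1)
    return doms

doms = {"x1": (-2, 2), "x2": (-2, 2)}
print(mpc(doms, [("x1", ">", 0), ("x2", "<=", 0)]))   # narrowed domains
print(mpc(doms, [("x1", ">", 3)]))                    # conflict: None
```

Returning `None` here corresponds to the case Br(nh, nh+1) = false, at which point the real algorithm hands the conflicted branch over to bisection.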

Case Study.
In this part, the problem mentioned in Section 3.1 is used as an example to explain how BFS-BB works, especially the heuristic look-ahead methods proposed above. The input is Path3, shown in bold in Figure 3, where each branching condition is decomposed into its basic functions on the right. The IVR process has been illustrated in detail in Table 1, and all three variables are determined to be relevant to Path3. For simplicity, the input domains of all variables are set to [−2, 2], with size 5. In the initialization stage, the MPC check reduces their domains to x1: [−1, 2], x2: [−2, 1], and x3: [−1, 2]. The path tendency of each variable is calculated by PTC as shown in Table 3. DVO serves to determine the first variable to be instantiated, as shown in Table 4, with the head of the queue (x2) highlighted in bold. On determining x2 to be the current variable, an initial value needs to be selected from [−2, 1]. The retrieval of the path tendency map by IDC returns negative for x2, indicating that a smaller value will perform better, and −1 is selected.
Then 1 is selected for x1 after IDC. MPC checks whether x1: [1, 1], x2: [−1, −1], and x3: [0, 2] works. It succeeds, and in the same manner x3 is assigned 1. Finally, {⟨x1, 1⟩, ⟨x2, −1⟩, ⟨x3, 1⟩} is checked by MPC to be suitable for Path3. No variable needs to be permutated, and BFS-BB succeeds with the test data {⟨x1, 1⟩, ⟨x2, −1⟩, ⟨x3, 1⟩}. Table 6 shows how the domains of the variables change during the search process, with the changed domains highlighted in bold. The changes listed in the fourth column are owing to variable assignments according to the results of IDC, and the changes listed in the fifth column are owing to domain reduction by MPC checks. The process of generating the test data {⟨x1, 1⟩, ⟨x2, −1⟩, ⟨x3, 1⟩} is presented as a search tree in Figure 4. It is a backtrack-free search, which accounts for an extremely large proportion of runs in the implementation of BFS-BB. Each variable consumes one MPC check in the state space search stage, and the initial values of the variables make up the solution. The solution path is shown by the bold arrows.

Experimental Results and Discussion
To observe the effectiveness of BFS-BB, we carried out a large number of experiments in CTS. Within the CTS framework, the PUT is automatically analyzed, and its basic information is abstracted to generate its CFG. According to the specified coverage criteria, the paths to be traversed are generated and provided to BFS-BB as input. The generated test data are used for mutation testing, which requires high coverage, ideally 100% [37]. This is a challenge for test data generation.
The experiments were performed on 32-bit MS Windows 7, on a 2.8 GHz Pentium 4 with 2 GB of memory. The algorithms were implemented in Java and run on the Eclipse platform. The experiments include two parts: Section 6.1 presents the performance evaluation of BFS-BB, and Section 6.2 tests the capability of BFS-BB to generate test data in terms of coverage and makes comparisons with some existing static and dynamic methods.
Figure 4: The search tree of generating the test data for Path3 using BFS-BB.

Performance Evaluation.
The number of relevant variables is an important factor that affects the performance of BFS-BB, so in this part experiments were carried out to evaluate the performance of BFS-BB for varying numbers of input variables. To be specific, our major concerns are (1) the relationship between the number of MPC checks (exclusive of the one taken in the initialization stage) and the number of relevant variables and (2) the relationship between generation time and the number of relevant variables. This was accomplished by repeatedly running BFS-BB on generated test programs having input variables x1, x2, ..., xn, where n varied from 1 to 50. Adopting statement coverage, in each test the program contained five if statements (equivalent to five branching conditions along the path for MPC check), and there was only one path to be traversed, of fixed length, which was the one consisting of entirely true branches (TTTTT); that is, all the branching conditions are the same as the corresponding predicates. Considering the relationship between variables, experiments were conducted for two situations: (1) the variables are all independent of each other and (2) the variables are linearly related in the tightest manner. Generation time varied greatly in these two cases, so the generation-time axes of both cases are normalized for simplicity.

Variables Are All Independent of Each Other.
The predicate of each if statement is an expression in the form of (6), where a1, a2, ..., an are randomly generated numbers, either positive or negative, rel opj (j = 1, 2, ..., 5) ∈ {>, ≥, <, ≤, =, ≠}, and const[] is an array of randomly generated constants. The randomly generated ai and const[] should be selected to make the path feasible. This arrangement constructs a relationship in which all the variables are independent of each other but all of them are relevant to the path. The programs for the various values of n ranging from 1 to 50 were each tested 50 times, and the number of MPC checks and the time required to generate the data for each test were recorded. The results can be seen in Figures 5 and 6.
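To make the construction concrete, a hypothetical generator for such test programs is sketched below. Since formula (6) is not reproduced in this excerpt, the predicate shape (one variable per condition, a*x <= const) is an assumption chosen so that the variables remain mutually independent; the class and method names are our own.

```java
import java.util.Random;

public class IndependentPredicateGen {
    // Emits one if-statement per branching condition; each condition involves
    // only a single input variable, so all variables stay mutually independent.
    // The concrete predicate shape (a * x <= const) is an illustrative assumption.
    static String generate(int n, long seed) {
        Random rnd = new Random(seed);
        StringBuilder sb = new StringBuilder();
        for (int j = 0; j < 5; j++) {      // five if statements per program
            int i = rnd.nextInt(n);        // the single variable involved
            int a = rnd.nextInt(9) + 1;    // random positive coefficient
            int c = rnd.nextInt(100);      // random constant from const[]
            sb.append("if (").append(a).append(" * x").append(i + 1)
              .append(" <= ").append(c).append(") { /* true branch */ }\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(generate(3, 42)); // prints five generated if statements
    }
}
```

A real generator would additionally filter the random coefficients and constants so that the all-true path (TTTTT) stays feasible, as the text requires.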
Figure 5 shows the relationship between the number of MPC checks and the number of variables (n) for variables that are all independent of each other; (a) to (d) represent four different situations, marked by the ordinates. It can be seen that, since the relation in formula (6) is the simplest one between variables, the number of MPC checks increases linearly with the number of variables in every situation from (a) to (d). The fitted line y = n means that, for this kind of constraint, one relevant variable requires only one MPC check. It can also be seen that R² = 1 in all four situations, so the number of MPC checks increases completely linearly with the number of variables.
Figure 6 shows the relationship between generation time and the number of variables (n) for variables that are all independent of each other; (a) to (d) represent four different situations marked by the ordinates. It can be seen that generation time increases approximately linearly with the number of variables, and the linear correlation is significant at the 95% confidence level, with a p value far less than 0.05. As the number of variables increases, generation time increases at an even speed. The minimum value can be commendably represented as a straight line, showing that it is the most ideal of the four situations, with a larger value of R². Variations between tests with the same value of n were attributed to the randomness in the selection of the initial values.
Variables Are Linearly Related in the Tightest Manner.
The predicate of each if statement is an expression in the form of (7), where a1, a2, ..., an are randomly generated numbers, either positive or negative, rel op ∈ {>, ≥, <, ≤, =, ≠}, and const[j] (j ∈ {1, 2, 3, 4, 5}) is an array of randomly generated constants. The randomly generated ai and const[] should be selected to make the path feasible. This arrangement constructs the tightest linear relation between the variables, all of which are relevant to the path. The programs for the various values of n ranging from 1 to 50 were each tested 50 times, and the number of MPC checks and the time required to generate the data for each test were recorded. The results can be seen in Figures 7 and 8.
Figure 7 shows the relationship between the number of MPC checks and the number of variables (n) for variables that are linearly related in the tightest manner; (a) to (d) represent four different situations marked by the ordinates. It can be seen that the number of MPC checks increases approximately linearly with the number of variables, and the fitted lines are all near y = n. The linear correlation is significant at the 95% confidence level, with a p value far less than 0.05. The general, average, and maximum numbers of MPC checks are all larger than those in the experiment for variables that are all independent of each other, because the relation in formula (7) is the tightest linear one between variables. The minimum number of MPC checks can be completely represented as y = n with R² = 1, which means that the minimum number is the most ideal of the four situations.
Figure 8 shows the relationship between generation time and the number of variables (n) for variables that are linearly related in the tightest manner; (a) to (d) represent four different situations marked by the ordinates. It is clear that the relation between generation time and the number of variables can be commendably represented as a quadratic curve, and the quadratic correlation is significant at the 95% confidence level, with a p value far less than 0.05. The better fitting curves of the average and minimum generation times show that average generation time is perfectly stable and minimum generation time is still the most ideal. Variations between tests with the same value of n were attributed to the randomness in (1) the difference in the selection of the initial values and (2) the difference in the expressions along the path (an equality relational operator generally requires more calculation than an inequality relational operator). Besides, generation time increases at a uniformly accelerating speed as the number of variables increases. Take (b) as an example: differentiating the fitted curve for average generation time indicates that its increase rate grows as y = 9.994n − 77.34 with the number of variables. We can roughly draw the conclusion that generation time is very close for n ranging from 1 to 8, while it begins to increase when n is larger than 8.
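The crossover near n = 8 follows directly from the fitted derivative quoted above: the increase rate 9.994n − 77.34 stays negative until n ≈ 7.7. A quick arithmetic check (coefficients taken from the text; the class and method names are our own):

```java
public class GrowthRate {
    // Derivative of the fitted quadratic for average generation time in (b):
    // the rate at which generation time grows as one more variable is added.
    static double rate(int n) { return 9.994 * n - 77.34; }

    public static void main(String[] args) {
        System.out.println(rate(7) < 0);   // prints true: still flat at n = 7
        System.out.println(rate(8) > 0);   // prints true: growth begins near n = 8
        System.out.println(77.34 / 9.994); // root of the rate, about 7.74
    }
}
```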
The above cases are both completely backtrack-free searches, owing to the linear correlation between the number of MPC checks and the number of relevant variables. They certainly cannot cover all the relations between variables in engineering, so the analyses in this part are only from the theoretical perspective; real-world PUTs are much more complex. Moreover, 50 tests were conducted for each value of n ranging from 1 to 50, so the results from the samples can only approximate the actual situation. But it can be concluded that BFS-BB functions stably given a PUT of regular structure, which lays a solid foundation for its application in engineering.

Coverage Evaluation.
To evaluate the capability of BFS-BB to generate test data in terms of coverage, four experiments were carried out. The first involves testing with a benchmark used in CTS, the second aims at generating test data for a project in engineering, the third compares BFS-BB with a static method, and the last compares it with dynamic methods.

Testing a Benchmark in CTS.
In this part, test data were automatically generated to meet three coverage criteria: statement, branch, and MC/DC. The test bed was branch bound.c, a benchmark in CTS with 402 LOC, 29 input variables, and a complex structure that tries to include more of the content that might appear in engineering. The upper bound of the number of MPC checks was set to 10 for each variable, so it can be estimated that even the simplest backtracking would consume at least 11 MPC checks for the variable in question.
The result is shown in Table 7. The numbers of paths were different owing to the different coverage criteria adopted. BFS-BB was able to generate test data for all the feasible paths, no matter which coverage criterion was taken. MC/DC coverage did not reach 100%, because MC/DC is relatively strict and difficult to meet and subsumes statement and branch coverage [38]. But tolerable coverage was achieved within tolerable time; there exists a trade-off between efficiency and success rate. IVR had no significant influence on coverage, but it did on generation time: generation time with IVR was much less than that without IVR. Note that the amount of generation time reduced by IVR is determined by the structure of the PUT; the reductions shown in Table 7 relate only to the program branch bound.c. Our following analyses all concern BFS-BB with IVR. The average number of MPC checks per relevant variable adopting statement coverage was larger than the results in Section 6.1, because there are some library functions as well as nonlinear constraints in the PUT, which require more MPC checks. But from all the average values of the number of MPC checks, it can be concluded that the tests for all three coverage criteria were basically backtrack-free, and not all the tests included the bisection operation.
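The bisection operation referred to above can be sketched as follows: when a tried value fails an MPC check, the half of the domain judged unable to contain a solution is discarded and the midpoint of the remainder is tried next, with a cap on checks per variable as in the text. This is a simplified sketch; it assumes the check behaves monotonically over the domain (failure at the midpoint rules out the lower half), and the interface and names are hypothetical.

```java
import java.util.function.LongPredicate;

public class BisectionSelect {
    // Searches [lo, hi] for a value accepted by an MPC-style check,
    // bisecting toward the half that may still contain a solution.
    // Assumption: if mid fails, any solution lies above mid.
    // At most maxChecks checks are spent per variable.
    static long select(long lo, long hi, LongPredicate mpcCheck, int maxChecks) {
        for (int k = 0; k < maxChecks && lo <= hi; k++) {
            long mid = lo + (hi - lo) / 2;
            if (mpcCheck.test(mid)) return mid;  // consistent value found
            lo = mid + 1;                        // discard the failing half
        }
        return Long.MIN_VALUE;                   // no value found within the cap
    }

    public static void main(String[] args) {
        // Hypothetical condition: the remaining path constraints need x >= 37.
        long v = select(0, 100, x -> x >= 37, 10);
        System.out.println(v); // prints 50: the first midpoint already satisfies it
    }
}
```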

Testing Programs from a Project in Engineering.
In this part, seven programs were selected from the project de118i-2 at http://www.moshier.net/ as the test beds, each of which contains several functions or loops. Three coverage criteria were adopted: statement, branch, and MC/DC. The information on the programs and the coverage results are shown in Table 8.
From Table 8, it can be seen that BFS-BB performed encouragingly on programs with complex structure in engineering. Some of the tests adopting MC/DC coverage did not reach 100%, because MC/DC is a relatively strict criterion. But all the tests were basically backtrack-free with high coverage, proving the effectiveness of BFS-BB in engineering.

Comparison with a Static Method.
This part presents the results from an empirical comparison of BFS-BB with the static method [13] (denoted as "method 1" to avoid verbose description), which was implemented in CTS prior to BFS-BB. The details of the test beds are shown in Table 9. The comparison adopted three coverage criteria: statement, branch, and MC/DC, and the result is shown in Table 10. Since interval arithmetic has been improved in CTS, for the sake of validity, test data were generated for all the test beds on the same foundation of interval arithmetic. It can be seen that BFS-BB reached 100% coverage for all the test beds under the three coverage criteria, while method 1 did not; that is largely due to the heuristic methods utilized in BFS-BB. There are modulus operations in the program days (which calculates the number of days between two given days in the same year [17]); these had been difficult for the interval arithmetic in method 1 to handle. But since the functionality of interval arithmetic has been improved by adding a library of inverse functions, they were not so difficult even for method 1.

Comparison with Dynamic Methods.
This part presents results from an empirical comparison of BFS-BB with two dynamic methods, genetic algorithm (GA) and simulated annealing (SA), on four different benchmark programs using branch coverage. The details of the benchmark programs are shown in Table 11. In order to obtain unbiased experimental results, we ran a number of experiments with different parameter settings and chose the settings with which GA and SA performed best, as shown in Table 12. Test data were automatically generated for each program using GA, SA, and BFS-BB, with each tested 100 times. The average coverage was recorded to make the comparison, and the result is presented in Table 13.
It can be seen that BFS-BB reached 100% coverage on all four benchmark programs, which are rather simple for BFS-BB, and it outperformed the algorithms in comparison. The better performance of BFS-BB is due to three factors. The first is that the initial values of variables are selected by heuristics on the path, so BFS-BB reaches a relatively high coverage in the first round of the search. The second is that MHS crashes on several occasions due to iteration exceptions, while the probability of aborting is quite low for BFS-BB because it has no demand for iteration. The third is that MPC is checked not only in the state space search stage but also in the initialization stage, which reduces the domains of the variables to ensure a relatively small search space in what follows.

Conclusion
This paper presents an intelligent algorithm, best-first-search branch and bound (BFS-BB), for path-wise test data generation (Q). The problem Q is reformulated as a constraint optimization problem (COP), and two techniques from artificial intelligence are introduced to tackle the COP: state space search and branch and bound (BB). The former is used to construct the search tree dynamically and the latter is used as the search method. Heuristics are adopted in the look-ahead stage. Dynamic variable ordering (DVO) is presented with a heuristic rule to break ties. Maintaining path consistency (MPC) is achieved through analysis on the result of interval arithmetic. The monotonicity analysis on branching conditions is applied both in the selection of initial values by path tendency calculation (PTC) and initial domain calculation (IDC) and in the selection of other values by bisection when MPC encounters a conflict. An optimization method, irrelevant variable removal (IVR), is also proposed to reduce the search space. Empirical experiments were conducted to evaluate the performance of BFS-BB. The results show that it searches in a basically backtrack-free manner, generates test data for programs of complex structure with promising performance, and outperforms some currently existing static and dynamic methods in terms of coverage. The application of BFS-BB in engineering proves its effectiveness. From the perspective of BFS-BB, the heuristics used in the look-ahead search greatly reduce the need for look-back search.
Our future research will involve how to generate test data to reach high coverage with more types of constraints, so as to give scalability to BFS-BB. We will also study how coverage criteria, generation approach, and system structure jointly influence test effectiveness. The effectiveness of the generation approach remains our primary concern; in particular, the MC/DC coverage criterion will be given more emphasis.

Figure 2: Program test and its counterpart with branching conditions decomposed into basic functions.

Figure 3: Overview of our approach for searching the test data.

Figure 5: Relationship between the number of MPC checks and the number of variables for variables that are all independent of each other.

Figure 6: Relationship between generation time and the number of variables for variables that are all independent of each other.

Figure 7: Relationship between the number of MPC checks and the number of variables for variables that are linearly related in the tightest manner.

Figure 8: Relationship between generation time and the number of variables for variables that are linearly related in the tightest manner.
is the branching factor, or the threshold used to control the breadth of the search tree); Domain, in the form of [min, max], is the set of possible values to be selected to instantiate Variable; value = v ∈ Domain is a value selected from Domain; Type marks the type of state: active, extensive, or inactive; Queue is a sequence of variables corresponding to the state in question.
{x1, x2, ..., xi', ..., xn} when the value of a certain variable xi is changed into xi'; then xi is an irrelevant variable to the path.
Let Vrel denote the set of relevant variables to the path, and let Virrel be the set of irrelevant variables; one more element in Vrel involves more MPC checks on an exponential basis. If all the irrelevant variables are removed from the search space, the complexity is reduced by a factor exponential in |Virrel|, the cardinality of the set of irrelevant variables.
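The exponential saving claimed above is easy to illustrate: if every variable ranges over a domain of size d (an assumption made here purely for illustration), removing the irrelevant variables shrinks the assignment space from d^(|Vrel| + |Virrel|) to d^|Vrel|, that is, by a factor of d^|Virrel|. A toy calculation with arbitrarily chosen sizes:

```java
public class IvrSaving {
    // Size of the assignment space for k variables, each with a domain of size d.
    static double space(double d, int k) { return Math.pow(d, k); }

    public static void main(String[] args) {
        double d = 10;          // assumed domain size per variable
        int rel = 3, irrel = 2; // |Vrel| = 3 relevant, |Virrel| = 2 irrelevant

        double before = space(d, rel + irrel); // search space with all variables
        double after  = space(d, rel);         // after irrelevant variable removal
        System.out.println(before / after);    // prints 100.0 = d^|Virrel|
    }
}
```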
Input: FV, the set of future variables; Di, the domain of xi (xi ∈ FV); Br(pi, pi+1) (i ∈ [1, m]), the m branches along the path. Output: Queue, a queue of FV. Begin: (1) Queue ← quicksort(FV, ...); then, for each branch Br(pi, pi+1) (i ∈ [1, m]), if rank(pi, pi+1)(xj) = rank(pi, pi+1)(xk), continue to the next branch; else permutate xj, xk by rank(pi, pi+1).

A branching condition can be further represented as a function of xi:

Br(pi, pi+1)(xi): Di → B = (ai xi + Σj≠i aj xj) R const, (2)

where Di is the domain of xi and B is a set of Boolean values {true, false}. Σj≠i aj xj is the linear combination of the variables except xi and is regarded as a constant. Then we can design the value selection strategies, starting from the monotonic relation between the branching condition and xi. Monotonicity describes the behavior of a function in relation to the change of its input: it indicates whether the output of the function moves in the same direction as the input or in the reverse direction. If a branching condition is a function whose monotonicity is known, the direction in which the input needs to be moved to make the function true can be determined. The following proposition gives an attribute of a function composed of piecewise monotonic functions.

Case fk+1 = gk+1 ∘ fk: the composed function fk is piecewise monotonic by the induction assumption. Let A be a subset of its domain's partition, and let x and x' be two arbitrary elements in A with x ≤ x'; then one of the monotonicity conditions holds; that is, either fk(x) ≤ fk(x') or fk(x) ≥ fk(x'). For simplicity, we denote it as fk(x) R fk(x'), where R ∈ {≤, ≥}. Function gk+1 is piecewise monotonic by assumption. The monotonicity condition is satisfied by fk(x) and fk(x') if both lie in the same subset A' of its domain's partition. Then fk+1(x) R fk+1(x') holds, and fk+1 is also monotonic on A.
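The use of monotonicity described above can be made concrete: if a branching condition is a monotonic function of xi, the sign of its slope tells us which way to move xi to make the condition true. The following is a minimal sketch under the assumption of a linear condition a*x + b <= 0 (monotonically increasing for a > 0, decreasing for a < 0); it is an illustration of the idea, not the paper's value selection strategy, and the names are our own.

```java
public class MonotonicityDirection {
    // For the branching condition a*x + b <= 0, returns the direction in which
    // x must move from x0 to (eventually) satisfy the condition: -1 to decrease,
    // +1 to increase, 0 if x0 already satisfies it.
    static int direction(double a, double b, double x0) {
        if (a * x0 + b <= 0) return 0;   // already true, no move needed
        // The condition value grows with x when a > 0, so move against the slope.
        return a > 0 ? -1 : +1;
    }

    public static void main(String[] args) {
        System.out.println(direction(2, -4, 5));  // 2*5-4 > 0, a > 0: prints -1
        System.out.println(direction(-1, 3, 1));  // -1+3 > 0, a < 0: prints 1
        System.out.println(direction(2, -4, 1));  // 2-4 <= 0: prints 0
    }
}
```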

Table 6: Domain changes in the search process.

Table 7: Coverage achieved by BFS-BB on branch bound.c.

Table 8: Test result achieved by BFS-BB on programs from de118i-2.

Table 9: Programs used for comparison with method 1.

Table 10: Comparison result with method 1 using three coverage criteria.

Table 11: Programs used for comparison with GA and SA.

Table 12: Parameter setting for GA and SA.

Table 13: Comparison result with GA and SA using branch coverage.