Mutation-Based Harmony Search Algorithm for Hybrid Testing of Web Service Composition

Web service composition is a method of developing a new web service from existing ones based on business goals. Web services selected for composition should provide accurate operational results and reliable applications. However, most alternative service providers have not yet fulfilled users' needs in terms of services and processes. Service providers have, in fact, focused on enhancing nonfunctional attributes, such as time, cost, and availability, which still face limitations. Furthermore, it remains advantageous to compose services and plan them suitably around business plans. Thus, this study introduces hybrid testing, a combination of the functional and nonfunctional testing approaches. The former is used to design test cases through the equivalence class partitioning technique, and the latter is used to select suitable services based on the test results. We find defects and appropriate solutions for composing services based on business requirements. The mutation-based harmony search (MBHS) algorithm is proposed to select web services and compose them with minimum defects. The results of this study reveal that MBHS supports the composition of various services more efficiently than other metaheuristic methodologies. Additionally, it helps find appropriate solutions for composing services based on business plans.


Introduction
The service-oriented architecture (SOA) concept is used to design organisational information systems as evolving service-oriented systems. The use of web services implies an SOA development methodology [1]. SOA has been rapidly adopted for developing information systems because it encourages interoperability, accommodating changes in information, public relations, purchasing, and services. Thus, web services can dramatically enhance business activities. Currently, various types of web services can be developed via the simple object access protocol or representational state transfer [2]. With increasing system complexity, a single web service cannot support full system operations, because data processing requires web services from other resources to be composed under the same conditions to achieve the desired results.
Web service composition ensures that a web service concurs suitably with business goals. However, user needs and web services have grown more complex. Thus, web services will perform according to users' needs if they interoperate suitably with other web services. This is a primary advantage of web service composition [3]. Currently, the web services business process execution language is the standard for describing web service interoperations [4]. Furthermore, web service composition requires efficiency and quality. Various relevant studies have performed web service composition focusing on quality of service (QoS) [5][6][7].
Web services have been widely used both inside and outside organisations. The accuracy of service operational results is important. Generally, web service selection considers only the QoS and efficiency of nonfunctional properties. Furthermore, a web service composition may never be reviewed to verify that it provides operational results matching the planned business service requirements, and any problems will appear as data defects. Functional testing is thus used to detect the nature of defects. In [8,9], researchers implemented functional testing to detect minimum defects, and the results were as expected.
Currently, metaheuristic methodologies are used to estimate the fitness value of a web service composition, overcoming the complexity problem and reducing execution time. The primary purpose of metaheuristic development is optimisation. A web service developer anticipates the quality of the optimisation, and metaheuristic algorithms are usually compared against the web service process and time constraints to identify the most efficient solution. In [10], the researcher categorised metaheuristic algorithms into groups for achieving optimisation.
Several familiar evolutionary algorithms are the genetic algorithm (GA) [11], genetic programming (GP), and differential evolution (DE). Swarm intelligence algorithms include ant colony optimisation (ACO) and particle swarm optimisation (PSO) [12,13]. Physics-related algorithms include simulated annealing (SA) [14] and harmony search (HS) [15,16]. The HS algorithm is an interesting evolutionary algorithm developed in analogy with the music improvisation process, in which musicians improve the pitches of their instruments to obtain better harmony. Although the HS algorithm explores the search space well compared with other evolutionary algorithms [17], it still has a drawback: the pitch adjusting rate (PAR) is a fixed parameter when the algorithm is applied to web service composition [18]. Adjusting the PAR is a very important factor in the efficiency of the HS algorithm and is useful for optimal search [19][20][21][22].
This study contributes to the current understanding of the innovation process in two main ways: (1) a hybrid testing approach for web service composition that combines the functional and nonfunctional testing approaches. The hybrid testing method is selected because it can estimate the minimum defect, the efficiency of the web service, and its fitness value and time cost when more web service providers are selected. (2) A new metaheuristic algorithm, mutation-based harmony search (MBHS). Its optimisation is compared with GA (an evolutionary algorithm), SA (physics-related), and PSO (swarm intelligence). Moreover, business models are also considered for the optimisation of web service compositions. The proposed algorithm provides an efficient search and estimates the minimum defects per business plans and user needs. The remainder of this study is organised as follows: Section 2 presents related works. Section 3 presents the proposed framework. Experiments and evaluations are presented in Section 4. Finally, the conclusion and future works are presented in Section 5.

Related Work
The objective of testing a web service is to provide a system that operates well per the business goals. In [23], researchers designed criteria for evaluating an SOA to facilitate service providers and users and to develop a system based on commissioning, efficiency, and QoS criteria. The study in [6] designed QoS characteristics for SOAs to investigate the extent to which each characteristic affects the service and to adjust accordingly to the business requirements. Moreover, the technique used to check the validity of a web service has changed from document analysis to test case study. In [24], Bai et al. created test cases from the web services description language (WSDL), considering data types. Additionally, Hanna and Munro and Ma et al. [25,26] applied boundary value analysis to WSDL to create test cases for testing services. In [27], Bhat and Quadri compared this with the equivalence class partitioning technique. However, this method retained limitations on efficiency checking.
As for the problem of enhancing the efficiency of selecting web services with different types and components [4], most studies have focused on QoS. For example, in [28], Sun and Zhao solved the problem of selecting and sorting web service QoSs in terms of cost, time, and reliability by using global and local QoS. Moreover, in [29], researchers used GA to find the best web service from global and local optimisations. Liu et al. [30] proposed a web service composition that sorted global and local optimisations through a cultural genetic algorithm per the QoS of each selected service, including execution time. Additionally, Wang et al. [31] proposed a web service composition evaluated by QoS attributes to suggest the best composition. Decision makers generally examine a web service based on its fitness for responding to a service delivery requirement. The study of Upadhyaya et al. [7] proposed a hybrid web service selection method that found a compromise between QoS and user perceptions to maximise business requirements. Moreover, in [32], Lin et al. classified human and web services of different operations for efficiency.
Considering web service composition via metaheuristic fitness algorithms, the studies of Mardukhi et al. and Liu et al. [29,30] applied genetic algorithms to find the optimum. Additionally, Fan et al. [33] implemented a stochastic particle swarm optimisation and simulated annealing to handle the problem of selecting a web service composition based on QoS. The study of Parejo et al. [34] used GRASP and the path relinking hybrid algorithm to evaluate web service selection based on execution time. In [35], Yu et al. proposed service components for selecting efficient services from groups by using the greedy algorithm and ACO. These algorithms helped efficiently locate each group of complex services by time and quality. The study of Mao et al. [36] predicted the priority of QoSs for service providers and users using PSO, helping achieve optimisation and adjust to users' need for equivalence. In [37], the researchers implemented GP to estimate and analyse QoS. Moreover, Liu et al. [38] implemented social learning optimisation to enhance the efficiency of problem solving for selecting web services based on optimisation.
Currently, enhancing the efficiency of web service composition is extremely important for business organisations that provide services. However, most web service testing focuses only on efficiency.
This lacks the hybrid flexibility represented in Table 1. Furthermore, it does not focus on data accuracy or service defects. Simply focusing on efficiency is inadequate. In this study, functional and nonfunctional testing are combined in a hybrid fashion to locate web services with minimum defects. With regard to the optimal solution, the harmony search (HS) algorithm [15] has many advantages compared with other metaheuristic algorithms [17]: it imposes fewer mathematical requirements and does not require initial value settings for the decision variables. In particular, the HS algorithm uses stochastic random searches, making derivative information unnecessary, and generates a new vector after considering all existing solutions. Many studies have modified the HS algorithm by dynamically updating its parameters and generating a new harmony search. For example, the improved harmony search algorithm (IHSA) [21] modifies two parameters, the pitch adjusting rate (PAR) and the bandwidth (BW), using the latter randomly. In addition, dynamic selection of the BW and PAR parameters has been proposed [39], in which maximum and minimum values replaced BW in the harmony memory (HM) process and the PAR value was linearly decreased. In [20], a novel HS modification that dynamically adjusts both HMCR and PAR in the improvisation process was proposed. Al-Betar et al. [19] proposed a multipitch adjusting rate strategy to modify PAR. Sarvari and Zamanifar [22] proposed an improvement to HS through statistical analysis, in which the new harmony and BW are modified. Therefore, the proposed model improves the HS algorithm, focusing on the pitch adjustment rate (PAR) stage.
To date, the HS algorithm has used fixed values in the PAR stage, and many studies have modified HS for applications [18,19,[40][41][42][43][44]. PAR is a very important parameter that can increase the variation of the generated solutions by including more solutions in the search space of the optimal solution. This motivated the current research to develop our proposed model with a new PAR stage, called the mutation-based HS approach.

Proposed Model
This section presents a hybrid testing approach for the web service composition framework, as shown in Figure 1. Web service testing begins with the creation of a business process. Each process employs web services that are tested to provide services with minimum defects. Furthermore, QoS represents web service quality. For testing, a business process is created for a goods ordering service [34], as shown in Figure 2. It uses business process modelling notation, consisting of seven types of services and three representatives for each type. For example, one delivery company provides three types of services. The process begins with the ordering of goods, which can be paid for in cash or by credit card. For credit card payment, the card is verified and the payment later accepted. Furthermore, the stock of goods is checked, and the ordered goods are subtracted from the stock. If the ordered goods are out of stock, delivery will be delayed and annotated. When the goods are ready, they are delivered to the customer with a digitally signed invoice. Finally, the customer's satisfaction is monitored.
We present three processes for testing web service composition. First, we analyse the WSDL and XML schema definition (XSD) from the created business process. Next, test cases are designed using the equivalence class partitioning technique of functional testing. Finally, the developed test cases are verified, and the efficiency of the QoS is measured for nonfunctional testing. After testing the web services, the one with minimum defects is selected for composition.

Data Analysis.
The WSDL and XSD schema are analysed to find the operators, parameters, the data type of each parameter, and the conditions for each data type. This is performed to set the conditions for testing (e.g., check credit card, Figure 3). Additionally, XSD data types are partitioned by the equivalence class partitioning method.

Test-Case Generator.
Test cases are then generated from the schema analysis stage. Creating the data for the test cases can be divided into two stages, as follows.

Creating Equivalence Class Partitioning.
To create the equivalence classes from the WSDL and XSD of check credit card, the data are categorised into two groups: valid and invalid equivalence partitions. This reduces the complexity of the data for testing, as shown in Figure 4.
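As an illustration, equivalence partitioning for a single card-number field might look like the sketch below. The field name and its 16-digit rule are assumptions for illustration only; the paper's actual partitions come from the WSDL/XSD analysis (Figure 4, Table 2).

```python
# Illustrative sketch of equivalence class partitioning for one input
# field of the check-credit-card service. The 16-digit rule is an
# assumption, not the paper's actual constraint.

def partition_card_number(value: str) -> str:
    """Classify a card-number input into a valid or invalid partition."""
    if not value.isdigit():
        return "invalid: non-numeric"
    if len(value) != 16:
        return "invalid: wrong length"
    return "valid: 16-digit number"

# One representative input per partition is enough, which is how
# equivalence partitioning reduces the size of the test suite.
test_inputs = ["4111111111111111", "abcd", "1234"]
partitions = [partition_card_number(v) for v in test_inputs]
```

Each partition then contributes one test case (TCn) to the suite, rather than one test case per possible input value.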

Creating Test Cases.
The data, after partitioning through the equivalence classes, are shown in Table 2. The test case (TCn) designs are stored as XML documents.

Test Case Execution.
This is the process of evaluating defects found by the test cases. The calculation can be divided into three sections, as follows.
(1) Rate-of-Defect Detection (RDT_i). The rate of defects detected by the test cases for web service i, with defect severity levels defined in Table 3.
(2) Defect Impact (DI_i). The severity value is calculated per the following equation:

S_i = Σ_{j=1}^{t} SV_j,

where SV_j is the severity value of defect j, and t is the number of defects identified for web service i, whose summed severity is S_i. The defect severity impact for each test case is defined as follows:

DI_i = S_i / max(S),

where max(S) is the highest severity value of all test cases.
(3) Test Case Weight (TCW_i). The TCW is the total sum of the two factors of web service i (i.e., RDT and DI).
Mathematically, TCW_i can be computed by the following equation [45]:

TCW_i = RDT_i + DI_i.

[Table 1: comparison of related approaches and their algorithms — GA [29,30], PSO + SA [33], GRASP + PR [34], GRASP + ACO [35], PSO [36], GP [37], SLO [38], and the proposed MBHS.]

The test results of the goods ordering case study are represented in Table 4. There are three service tasks, each having three web services available (i.e., WS1, WS2, and WS3). The bold value is the total sum of defects, which is calculated as TCW_i.

Adjusting Defect in Tasks with QoS.
This process refines the ranking of services whose tasks have an equal number of defects as calculated via functional testing (Table 5). It is calculated by nonfunctional testing as QoS, which is divided into two parts for calculating the performance of each web service.
(1) QoS Attribute. The QoS describes the nonfunctional properties of the service. The QoS attributes of the candidate services defined in the related studies [28,46,47] are considered in this study. (2) QoS-Based Evaluation. For some QoS attributes higher values are better, whereas for others lower values are better, so each type is treated separately.
To adjust the values via normalisation, the attributes are divided into three types: positive values (e.g., reputation (R)); negative values (e.g., response time (T) and execution cost (C)); and percentage values (e.g., availability (A)). The following equations are used to normalise the positive, negative, and percentage values, respectively:

QoS_reputation = (x - q_min) / (q_max - q_min),

where QoS_reputation represents the value obtained from the conditional comparison, x is the quality value of the data to be compared, q_min is the least quality value of the data, and q_max is the maximum quality value of the data;

QoS_response time/cost = (q_max - x) / (q_max - q_min),

where QoS_response time/cost represents the value obtained via conditional comparison, with x, q_min, and q_max defined as above; and

QoS_availability = x / 100,

where QoS_availability represents the value of the conditioned comparison and x is the quality value of the data required for comparison.
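These normalisations can be sketched as follows. The exact min-max forms are reconstructions consistent with the variable definitions in the text, not formulas quoted verbatim from the paper.

```python
# Reconstructed normalisation for the three QoS attribute types:
# positive attributes (reputation) scale up with the raw value,
# negative attributes (response time, cost) scale down, and
# percentage attributes (availability) are divided by 100.

def normalise_positive(x, q_min, q_max):
    # Higher raw value -> higher normalised score.
    return (x - q_min) / (q_max - q_min) if q_max != q_min else 1.0

def normalise_negative(x, q_min, q_max):
    # Lower raw value (faster, cheaper) -> higher normalised score.
    return (q_max - x) / (q_max - q_min) if q_max != q_min else 1.0

def normalise_percentage(x):
    # Availability is already on a 0-100 scale.
    return x / 100.0
```

The guard for q_max == q_min is an added safety choice for the degenerate case where all candidates share the same value.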

After the score of each attribute is calculated and adjusted, the total score of the web service is calculated using Equation (8) to represent it as the weight of the test results, instead of zero defects, via the QoS weight score, where TCW_QoS_i represents the total QoS weight score of web service i; T_i is the response time; C_i is the monetary cost of the service; A_i is the availability; and R_i is the reputation of web service i.
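Equation (8) itself is not recoverable from the extracted text; the sketch below assumes an equally weighted combination of the four normalised attributes, which is a common QoS aggregation but only an assumption here.

```python
# Hedged sketch of Equation (8): TCW_QoS_i is defined in terms of the
# normalised response time T_i, cost C_i, availability A_i, and
# reputation R_i. The equal weighting below is an assumption.

def tcw_qos(t, c, a, r, weights=(0.25, 0.25, 0.25, 0.25)):
    """Aggregate the four normalised QoS scores of web service i."""
    return weights[0] * t + weights[1] * c + weights[2] * a + weights[3] * r

# Hypothetical normalised scores for one service.
score = tcw_qos(t=0.8, c=0.6, a=0.9, r=0.7)  # equal-weight mean = 0.75
```

Different weight vectors would let a business plan prioritise, say, cost over reputation without changing the rest of the pipeline.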

Service Selection for Composition by Mutation-Based Harmony Search.
This process provides a repository for the increasing tasks and various candidate services. For example, the candidate services from the web service test results for each task are composed considering the business plan, as shown in Figure 2 and detailed in Figure 5. T_1, T_2, ..., T_n are the tasks of the web service composition process, and CS_1, CS_2, ..., CS_n are the candidate service sets for the tasks, where the candidate service number of each set is m_i, i ∈ {1, 2, ..., t}. The mathematical model of the web service composition can be described as follows:

fitness value: f(X_i) = Σ_{j=1}^{n} (TCW_j + TCW_QoS_j),

where X_i represents a service selection scheme; f(X_i) represents the fitness value; TCW_j is the weight of the test results by test case in functional testing; TCW_QoS_j is the weight of the test results by QoS in nonfunctional testing; and n is the number of tasks.
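A minimal sketch of this model, assuming the fitness of a selection scheme is the sum, over tasks, of the functional-test weight and the QoS weight of the chosen candidate. The weight tables are made-up illustration data.

```python
# Fitness of a selection scheme: selection[j] is the index of the
# candidate service chosen for task j. Lower fitness means fewer
# weighted defects, so the best scheme minimises f.

def fitness(selection, tcw, tcw_qos):
    return sum(tcw[j][s] + tcw_qos[j][s] for j, s in enumerate(selection))

# Two tasks, two candidate services each (hypothetical weights).
tcw = [[0.2, 0.5], [0.1, 0.3]]
tcw_qos = [[0.3, 0.1], [0.2, 0.4]]

# Exhaustive search over the 4 possible schemes for this tiny example.
best = min(([a, b] for a in range(2) for b in range(2)),
           key=lambda sel: fitness(sel, tcw, tcw_qos))
```

For n tasks with m candidates each there are m^n schemes, which is why the paper resorts to a metaheuristic rather than exhaustive enumeration.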
To solve the web service composition problem, the HS algorithm is used to determine the fitness value without enumerating all possible compositions. However, the basic HS algorithm is continuous and cannot directly generate a candidate service for this problem. Therefore, we present a discrete variant of the HS algorithm and enhance the generation of a new harmony in the pitch adjusting rate (PAR) stage by using mutation operations instead of random adjustment. All the operators are summarised in Figure 6.

Initialisation Parameters and Harmony Memory.
First, we set the default parameters: the harmony memory size (HMS), harmony memory consideration rate (HMCR), PAR, and the termination criterion (e.g., maximum number of iterations). Furthermore, we randomly select from the candidate services of each task of the business plan. Those selections are initialised (i.e., HMS of them) and stored in the harmony memory (HM). Subsequently, we evaluate the fitness value using Equation (9). The population model is defined by Equation (10), where X_i is a candidate solution and 1 ≤ i ≤ HMS:

HM = [X_1, X_2, ..., X_HMS]^T.

Generating a New Harmony.
At this stage, another harmony position is created, X_new = [X_new_1, X_new_2, ..., X_new_n], considering the HMCR. First, we draw a random number between 0 and 1. If the value is less than or equal to the HMCR, the component is taken from memory; otherwise, it is randomised within the set range. Furthermore, after obtaining every component by memory consideration, we check whether the pitch should be adjusted: a second random number between 0 and 1 is drawn, and if it is less than or equal to PAR, a mutation operation [48,49] (swap, insert, or reverse) is applied to the nearest position. An example of these three operators is detailed in Figure 7.
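The swap, insert, and reverse operators can be sketched as follows. The exact index conventions are assumptions following the usual permutation-mutation definitions, since the paper specifies them only in Figure 7.

```python
# The three pitch-adjustment mutation operators applied to a selection
# vector. Each returns a new list and leaves the input unchanged.

def swap(x, i, j):
    """Exchange the elements at positions i and j."""
    y = x[:]
    y[i], y[j] = y[j], y[i]
    return y

def insert(x, i, j):
    """Remove the element at position i and reinsert it at position j."""
    y = x[:]
    y.insert(j, y.pop(i))
    return y

def reverse(x, i, j):
    """Reverse the segment between positions i and j (inclusive)."""
    y = x[:]
    y[i:j + 1] = reversed(y[i:j + 1])
    return y

x = [1, 2, 3, 4, 5]
swap(x, 0, 4)     # -> [5, 2, 3, 4, 1]
insert(x, 0, 3)   # -> [2, 3, 4, 1, 5]
reverse(x, 1, 3)  # -> [1, 4, 3, 2, 5]
```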

Updating the HM.
At this stage, X_new is compared with X_worst in the HM. If the new harmony has a better fitness value than X_worst, then X_worst is replaced with X_new.
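Putting the three stages together, a compact sketch of the discrete MBHS loop follows. A toy fitness function and a vector-level mutation stand in for the paper's Equation (9) and per-position pitch adjustment, so this is an illustration under those assumptions, not the authors' exact procedure.

```python
import random

def mbhs(n_tasks, n_candidates, fitness, hms=10, hmcr=0.9, par=0.1,
         iterations=100, seed=0):
    rng = random.Random(seed)
    # Initialise the harmony memory with random selection vectors.
    hm = [[rng.randrange(n_candidates) for _ in range(n_tasks)]
          for _ in range(hms)]
    for _ in range(iterations):
        new = []
        for j in range(n_tasks):
            if rng.random() <= hmcr:                 # memory consideration
                new.append(hm[rng.randrange(hms)][j])
            else:                                    # random selection
                new.append(rng.randrange(n_candidates))
        if rng.random() <= par and n_tasks >= 2:     # mutation-based PAR
            i, j = sorted(rng.sample(range(n_tasks), 2))
            op = rng.choice(["swap", "insert", "reverse"])
            if op == "swap":
                new[i], new[j] = new[j], new[i]
            elif op == "insert":
                new.insert(j, new.pop(i))
            else:
                new[i:j + 1] = reversed(new[i:j + 1])
        worst = max(hm, key=fitness)
        if fitness(new) < fitness(worst):            # update the HM
            hm[hm.index(worst)] = new
    return min(hm, key=fitness)

# Toy objective: minimise the sum of chosen candidate indices.
best = mbhs(n_tasks=5, n_candidates=4, fitness=sum)
```

Swapping `sum` for the composition fitness f(X) of the previous subsection turns this toy into the service-selection search the paper describes.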

Experiment and Evaluation
This section describes the experiment and the results of a case study of web service composition. The proposed experimental framework is described as follows. First, the case study covers the testing activities performed on a sample business process model for servicing all composition plans of the candidate web services. Furthermore, the experimental settings and algorithm parameters are described.
The aim of the experiment is to compare the performance of hybrid testing for service composition with that of other metaheuristic algorithms for solving the problem. Finally, the results are obtained from the case study of the framework for testing processes and composition recommendations.

Experimental Design.
This section describes the methodology for the efficiency check to find web service composition defects via MBHS for various business processes. For the first business model, shown in Figure 2 [34], a goods ordering service consists of seven services: check credit card; pay by credit card; check stock; reserve for shipment; ship goods; invoice; and evaluate satisfaction. Each service includes representative service providers. In this experiment, each service has three tasks that show the defect results of service groups calculated from Table 6. The second business model concerns ticket reservation [50]. Considering the set of QoS attributes (i.e., cost, response time, availability, and reputation) and TCW, the QoS value ranges are randomly generated among 0 ≤ cost ≤ 100, 0 < response time ≤ 20 s, 60 ≤ availability ≤ 100, 0 ≤ reputation ≤ 10, and 0 ≤ TCW ≤ 10. Each experiment runs 20 times, and the results are averaged. The parameters used for each algorithm are described in the experiment. The service selection algorithm is realised using MATLAB 8.3.0.532 and is run on a computer equipped with an Intel Core i7 CPU @ 3.4 GHz, running Windows 10 Pro 64 bit with 8 GB of memory.

Feasibility and Scalability Analysis.
To analyse the feasibility of MBHS, task counts of 7, 9, 20, 40, and 80 are used. Each task consists of 10 candidate services, and MBHS is compared with discrete PSO (DPSO) [51], GA, and population-based simulated annealing (PBSA) [52]. The response times for the different task counts are evaluated in the same environment for estimating the fitness value. The experimental results are presented in Figure 8, where the y axis indicates the execution time and the x axis indicates the number of tasks. The parameters used for each technique are described in Table 7.
According to Figure 8, as the number of tasks increases, MBHS uses less time than the other three algorithms. The experimental results show that MBHS estimates the fitness value with little impact on execution time, even as the number of tasks increases.

Case Study I.
To estimate the efficiency of MBHS for solving the service selection problem with minimum defects, the defect results of the first business model (Table 6) are used. For this experiment, the MBHS, DPSO, GA, and PBSA algorithms are applied to the service selection problem with minimum defects. The four algorithms are executed in the same environment, using the following parameters. The common parameters are initial population = 10 and iterations = 100. The MBHS settings are N_h = 10, HMCR = 0.9, and PAR = 0.1. The GA settings are Pc = 0.8, Pm = 0.1, and a selector with elitism.
The PBSA settings are initial population = 2, T0 = 100, alpha = 0.9, and nMove = 3. The four algorithms are executed, and the running times and fitness values obtained by each algorithm are recorded. The experimental results are depicted in Figure 9, where the y axis indicates the execution time of the algorithm and the x axis indicates the iteration. In Figure 10, the y axis indicates the fitness value of the algorithm, and the x axis indicates the iteration. According to Figure 10, the MBHS algorithm estimates the best fitness value within the same iterations, whereas the other three algorithms require more than 30 iterations. It also uses the least time for estimating the fitness value, as shown in Figure 9. The mean values obtained are presented in Table 8. The fitness value obtained by MBHS is 0.279 at the 80th iteration. The PBSA algorithm has the worst fitness value. Thus, we conclude that MBHS is more efficient than the other algorithms.

[Figure 6: Mutation-based harmony search algorithm — pseudocode covering memory consideration with HMCR, mutation-based pitch adjustment (a random swap, insert, or reverse operator) with PAR, and replacement of the worst harmony when the new harmony has a lower fitness value.]

Case Study II.
This experiment solves the service selection problem with minimum defects. The experiment is divided into tasks ∈ {20, 40} and candidate services ∈ {20, 60, 100} for service composition. To check the efficiency of MBHS, the DPSO, GA, and PBSA algorithms are also used to solve the service composition defect problem. All algorithms are processed in the same environment for estimating the fitness value, and for all the algorithms the iteration count is limited to 1,000. The experimental results for 20 tasks are presented in Figure 11, where the y axis indicates the fitness value of the four algorithms and the x axis indicates the algorithm. The execution times and fitness values estimated for the 20 tasks using the four algorithms are presented in Table 9, which shows that MBHS is the quickest in estimating the fitness value at the 1,000th iteration, whereas PBSA is the slowest. The experimental results for 40 tasks are presented in Figure 12, where the y axis indicates the fitness value of the four algorithms and the x axis indicates the algorithm. The execution times and fitness values estimated for the 40 tasks are also presented in Table 9. MBHS is again the quickest in estimating the fitness value at the 1,000th iteration, whereas PBSA is the slowest. Furthermore, MBHS, with 20 candidate services, had a fitness value of 2.034, processed in 7.346 s. With 60 candidate services, it had a fitness value of 0.731, processed in 12.367 s. With 100 candidate services, it had a fitness value of 0.566, processed in 15.010 s. Compared with the other three algorithms, MBHS searched for the best fitness value in the least time. According to this experiment, MBHS is more efficient than the other algorithms.

Conclusion
This study proposed hybrid testing to overcome the problem of web service selection validity and reliability, using MBHS to search for the minimum defects and to provide availability and accurate performance of web services in terms of functional and nonfunctional requirements. Enhancing the service efficiency of a web service is complicated, and doing so generally focuses on solving the problem without considering data accuracy. Thus, the proposed framework can be efficiently applied to detect defects. Additionally, it is efficient for selecting web services for composition. To compare web service selection using different algorithms, a test case for testing the QoS based on the functional requirements was required, as were nonfunctional requirements (e.g., response time, cost, availability, and reputation) for testing web services. The MBHS algorithm combined hybrid testing and helped select various web services per business requirements.
Data Availability
The 7 tasks of service data used to support the findings of this study are available from the cited paper [34]. The 9 tasks of service data used to support the findings of this study are available from the corresponding author upon request. The 20, 40, and 80 tasks of service data used to support the findings of this study are available from the simulation in the experiment upon request.