Solving Ontology Metamatching Problem through Improved Multiobjective Particle Swarm Optimization Algorithm

In recent years, knowledge representation in the Artificial Intelligence (AI) domain has helped people understand the semantics of data and improved the interoperability between diverse knowledge-based applications. The Semantic Web (SW), as one method of knowledge representation, is the new generation of the World Wide Web (WWW), which integrates AI with web techniques and is dedicated to implementing automatic cooperation among different intelligent applications. Ontology, as an information exchange model that defines concepts and formally describes the relationships between concepts, is the core technique of SW, implementing semantic information sharing and data interoperability in the Internet of Things (IoT) domain. However, the heterogeneity issue hampers the communication among different ontologies and prevents the cooperation among ontology-based intelligent applications. To solve this problem, it is vital to establish semantic relationships between heterogeneous ontologies, which is the so-called ontology matching. The ontology metamatching problem is commonly a complex optimization problem with many local optima. To this end, the ontology metamatching problem is defined as a multiobjective optimization model in this work, and a multiobjective particle swarm optimization (MOPSO) with a diversity enhancing (DE) strategy (MOPSO-DE) is proposed to better trade off the convergence and diversity of the population. The well-known benchmark of the Ontology Alignment Evaluation Initiative (OAEI) is used in the experiment to test MOPSO-DE's performance. Experimental results show that MOPSO-DE can obtain a high-quality alignment and reduce MOPSO's memory consumption.


Introduction
In recent years, knowledge representation in the Artificial Intelligence (AI) domain has helped people understand the semantics of data and improved the interoperability between diverse knowledge-based applications. The Semantic Web (SW) [1][2][3][4], as one method of knowledge representation, is the new generation of the World Wide Web (WWW), which integrates AI with web techniques and is dedicated to implementing automatic cooperation among different intelligent applications. Ontology, as an information exchange model that defines concepts and formally describes the relationships between concepts, is the core technique of SW, implementing semantic information sharing and data interoperability in the Internet of Things (IoT) domain [5]. However, one concept may be described with different terminologies in various specific fields, yielding the heterogeneity problem among different ontologies [6]. The heterogeneity issue hampers the communication among different ontologies and prevents the cooperation among ontology-based intelligent applications. If knowledge cannot be shared, machines cannot cooperate with each other, which defeats the final goal of SW. Therefore, the ontology heterogeneity issue directly affects the development of SW. To solve this problem, it is vital to establish semantic relationships, such as correspondences between the classes and properties of heterogeneous ontologies, so as to find the identical entity mappings, which is the so-called ontology matching. Similarity measures, which represent the core technology of ontology matching [7], are used to compute the similarity value between two entities. Selection, combination, and tuning are the three components of similarity integration. Among them, selection is of particular importance, because some contradictory results cannot be integrated.
Since a single similarity measure cannot ensure confidence in all heterogeneous scenarios, various similarity measures are integrated to obtain a satisfactory alignment [8]. The ontology metamatching problem is concerned with how to choose decent similarity measures, how to assign appropriate weights to them, and how to verify the alignment by removing incorrect correspondences to enhance the quality of the matching results; it is commonly a complex optimization problem with many local optima [9].
Although there have been many studies on solving the ontology matching problem with single-objective optimization strategies [10], they optimize only one of two conflicting objectives, namely recall or precision. Optimizing recall (or precision) will decrease precision (or recall), resulting in a biased improvement. Since the alignment's f-measure comprehensively considers these two objectives, it is necessary to trade off the two conflicting objectives at the same time to achieve better results. However, research on multiobjective ontology matching technology is still in its infancy. To this end, MOPSO is used in this work to improve the alignment's quality. Most MOPSO algorithms use an external archive to save the local and global best particles, which greatly increases memory consumption when there are multiple local optimal solutions on the Pareto front, and sometimes these solutions cannot fully converge to the real Pareto fronts. For MOPSO, how to enable the solutions to converge quickly to the Pareto fronts (PF) and how to make the solutions evenly distributed on the true PF are two fundamental issues to be addressed. The search space is greatly cut down by the local and global best particles that guide the update of the solutions at the current generation, and MOPSO tends to be trapped in local optima due to its fast convergence and uneven distribution. Therefore, enhancing population diversity to reduce the probability of premature convergence is vital to the performance of MOPSO. In recent years, the decomposition-based archiving approach for MOPSO has become a popular method to balance the convergence and diversity of the population [11], but it still faces great challenges, and it increases memory consumption when dealing with complex optimization problems with multiple local optima, such as the ontology metamatching problem.
Multiobjective particle swarm optimization for feature selection with fuzzy cost [12] also performs well in improving population diversity. This method defines a fuzzy crowding distance measure to save candidate solutions and determine the global best particles in the archive. To better balance the convergence and diversity of the population and to save memory, an improved MOPSO algorithm that enhances population diversity is proposed in this work to better guide the update of the solutions in the swarm. Since particle swarm optimization (PSO) is a popular strategy for solving the ontology matching problem [13], this paper proposes a multiobjective PSO based on a diversity enhancing strategy (MOPSO-DE) to solve it. In particular, MOPSO-DE uses the diversity enhancing strategy to efficiently improve the alignment's quality. To be specific, the contributions of this paper are as follows: (1) a multiobjective optimization model is constructed for the ontology metamatching problem; (2) a general multiobjective optimization framework is presented for solving the ontology metamatching problem and evaluating the alignment's quality; and (3) MOPSO-DE is proposed to solve the ontology metamatching problem efficiently.
The rest of the paper is organized as follows: Section 2 presents the development of the existing swarm intelligence algorithm-based ontology matching techniques. Section 3 describes the related concepts of the ontology matching problem and the mathematical optimization model. Section 4 elaborates the implementation details of MOPSO-DE, Section 5 presents the experimental results and analysis, and Section 6 summarizes the work of this paper and determines the direction of following work.

Swarm Intelligence Algorithm-Based Ontology Matching Technique
Due to the complex optimization problems in the ontology matching domain, Swarm Intelligence (SI) algorithms, such as the Brain Storm Optimization (BSO) algorithm [14], the Parallel Compact Differential Evolution (PCDE) algorithm [15], the Compact Genetic Algorithm (CGA) [16], the Artificial Bee Colony (ABC) algorithm [17], and the Evolutionary Algorithm (EA) [18], have become popular methods to integrate heterogeneous ontologies. Martinez-Gil and Aldana-Montes are the first to propose Genetics for Ontology Alignments (GOAL) [19], which uses the Evolutionary Algorithm (EA) to determine suitable aggregating weights for the results generated by each similarity measure. Alexandru-Lucian and Iftene [20] optimize both the parameters and the threshold in the matching process to further filter unauthentic results. Acampora et al. [21] improve the quality of the solution and the convergence speed of EA through a local search process. Xue and Wang [22] utilize a new metric, which determines the weights required by several pairs of ontologies at a time, to approximately measure the alignment's f-measure and guide the search direction of the algorithm. He et al. [23] propose an Artificial Bee Colony (ABC) algorithm to integrate diverse similarity measures in the matching process to improve the alignment's quality. Xue et al. [24] use NSGA-III [25] to integrate diverse similarity measures in the matching process; however, as the number of similarity measures increases, the quality decreases. These proposals need to allocate memory space to store the similarity matrices determined by the similarity measures, which increases the space complexity of the algorithm and thus prevents them from obtaining high-quality alignments. To address this, the Genetic Algorithm based Ontology Matching (GAOM) [26] uses EA to find the optimal entity matching pairs to achieve high-quality matching results. Alves et al.
[27] propose a hybrid GA, which combines the EA with a local search strategy to match instance information and determine the optimal concept mapping. Recently, Chu et al. [28] propose to model ontology in vector space and present a Compact Evolutionary Algorithm (CEA) to determine the optimal alignments. Although the above SI-based ontology matching methods can ensure the quality of the alignment, their convergence speed is slow. Compared with the SI algorithms mentioned above in the ontology matching domain, PSO has the advantage that there is only a one-way information flow; that is, all particles can converge quickly. MapPSO [29] addresses the ontology matching problem using PSO and introduces a new metric based on statistical results to approximately evaluate the alignment's quality. Huang et al. [30] propose a compact PSO (cPSO) for sensor ontology matching in the Artificial Internet of Things (AIoT). However, these proposals all suffer from premature convergence, so the global optimal solution cannot be found to obtain a high-quality alignment. To overcome this drawback, multiobjective PSO (MOPSO) is introduced to the ontology matching domain. Xue et al. [31] propose a compact MOPSO applied in the large-scale biomedical ontology domain.
There exist a number of local optimal solutions in the ontology metamatching problem, and infinite candidate solutions can be found in the search space [9]. One of the main concerns in the existing MOPSO in terms of the ontology metamatching domain is how to enhance population diversity to improve the alignment's quality. To this end, this work puts forward an improved MOPSO to get rid of an external archive with a high computational cost, and it utilizes a diversity enhancing strategy to strike a balance between convergence and diversity.

Ontology and Ontology Alignment
Definition 1. An ontology is defined as a 3-tuple O = (C, P, I) [32],
where C represents a collection of objects targeted at a certain domain, e.g., "book" can be interpreted as the class of all book objects in a library; P represents the set of relationships between two objects, e.g., "has author" is a relationship between the "book" object and the "author" object; and I represents the collection of specific individual instances of the classes, e.g., "Computer Book" is an instance of the class "book." Figure 1 presents an example of two ontologies under alignment. The classes, properties, and instances of an ontology are called entities, and the alignment of two ontologies is the process of finding correspondences between entities. In the figure, rounded rectangles stand for classes, e.g., "Vehicle," and one-way arrows are properties between two objects, e.g., "is a." To ensure semantic interoperation between different systems, ontology is needed to describe semantic relations. Due to ontology designers' subjectivity, the concepts of an ontology may have different descriptions, which gives rise to the ontology heterogeneity problem. To solve this problem, it is necessary to find the mappings between the concepts of the ontologies, i.e., the so-called ontology alignment.
Definition 2. An ontology alignment is a set of correspondences between entities, each of which is defined as a 4-tuple c = (e, e′, sim, rel),
where e and e′ are, respectively, entities of the source and target ontologies, sim is their similarity in the range [0,1], and rel is the relation between e and e′. In Figure 1, the double-sided arrows establish correspondences between two entities. For example, a connection between "Wheeled" in ontology 1 and "Wheeled Vehicle" in ontology 2 is established with a similarity of 0.82. The value of the similarity represents the confidence; for example, 0 means the nonequivalence relation and 1.0 the equivalence relation.
Definition 3. The process of ontology alignment is usually defined as a function A = f(O1, O2, RA, p, r),
where A is the final matching result; O1 and O2 are the source ontology and target ontology to be matched; RA is the reference alignment; p is the set of parameters such as weights and thresholds; and r is an external resource, such as an external dictionary, e.g., WordNet. Figure 2 shows the process of ontology alignment: in the matching process, only the matching results whose confidence value is greater than the threshold t ∈ [0, 1] are considered authentic, so how to filter out the unauthentic results is the key to the ontology alignment process, and the filtering is driven by the similarity measures.
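As an illustration, the threshold-based filtering step of Definition 3 can be sketched as follows. The correspondence tuples and the threshold value below are illustrative examples, not data from the paper.

```python
# A minimal sketch of threshold-based alignment filtering.
# Each correspondence is a 4-tuple (e, e', sim, rel) as in Definition 2.
def filter_alignment(correspondences, t):
    """Keep only correspondences whose confidence (sim) is at least t, t in [0, 1]."""
    return [c for c in correspondences if c[2] >= t]

correspondences = [
    ("Wheeled", "Wheeled Vehicle", 0.82, "="),   # example from Figure 1
    ("Vehicle", "Car", 0.35, "="),               # illustrative low-confidence pair
]
print(filter_alignment(correspondences, 0.5))
# keeps only the ("Wheeled", "Wheeled Vehicle", 0.82, "=") correspondence
```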

Similarity Measures

Generally, there are three kinds of similarity measures in the ontology matching field, i.e., linguistic-based, string-based, and structure-based. A linguistic-based similarity measure uses an external digital dictionary, such as WordNet [33], to calculate the similarity value of two words by comparing their hypernymy relation or whether they are synonymous. Wu and Palmer [34] is a classic similarity measure used with WordNet, which calculates the correlation of two words by considering the depths of the two synonym sets and the depth of their Least Common Subsumer (LCS). The Wu and Palmer measure is defined as follows:

sim(s1, s2) = 2 * dep(LCS(s1, s2)) / (dep(s1) + dep(s2)),

where s1 and s2 are the strings to be matched; LCS(s1, s2) represents the closest common parent concept of s1 and s2; and dep(s1), dep(s2), and dep(LCS(s1, s2)) are the depths of s1, s2, and LCS(s1, s2), respectively.
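The Wu and Palmer measure can be sketched as below. The depth values are illustrative inputs that a WordNet lookup would normally supply; they are not taken from the paper.

```python
# A sketch of the Wu and Palmer linguistic similarity measure.
# dep_s1, dep_s2: depths of the two synsets; dep_lcs: depth of their LCS.
def wu_palmer(dep_s1, dep_s2, dep_lcs):
    """sim(s1, s2) = 2 * dep(LCS(s1, s2)) / (dep(s1) + dep(s2))."""
    return 2.0 * dep_lcs / (dep_s1 + dep_s2)

# e.g., two synsets at depths 4 and 6 whose LCS sits at depth 3:
print(wu_palmer(4, 6, 3))  # 0.6
```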

Wireless Communications and Mobile Computing
A string-based similarity measure utilizes different distance calculation algorithms, such as N-gram [35], SMOA [20], Levenshtein [36], and Jaro-Winkler [37] distance. Because of the superior performance of N-gram in comparing the similarity of two strings [35], this paper uses N-gram to calculate the similarity value of strings, and its formula is as follows:

sim(s1, s2) = 2 * com(s1, s2) / (N_s1 + N_s2),

where com(s1, s2) is the number of common substrings of s1 and s2, and N_s1 and N_s2 are the numbers of substrings of s1 and s2, respectively. According to [38], the best performance is achieved when the substring length is 3. A structure-based similarity measure mainly calculates the similarity value of two entities through the relationships between superclasses and subclasses; matching entities are considered structurally similar if they have the same numbers of superclasses and subclasses. In this paper, the structural similarity is calculated based on the numbers of superclasses and subclasses of the entities in the two ontologies.

Ontology Metamatching Problem

Since one similarity measure cannot guarantee the matching quality in all circumstances, various similarity measures are usually integrated to enhance the quality of the matching results and obtain the final similarity value. In this work, the weighted average strategy is chosen to integrate the different similarity measures:

sim = w_1 * sim_1 + w_2 * sim_2 + w_3 * sim_3,

where w_i ∈ [0, 1], i = 1, 2, 3, and ∑ w_i = 1.
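The N-gram measure described above can be sketched as follows. Counting common trigrams as a set (rather than a multiset) is an assumption; the paper does not specify which variant it uses.

```python
# A sketch of N-gram string similarity with trigrams (n = 3).
def ngram_similarity(s1, s2, n=3):
    """sim = 2 * com(s1, s2) / (N_s1 + N_s2), using substrings of length n."""
    def grams(s):
        return {s[i:i + n] for i in range(len(s) - n + 1)}
    g1, g2 = grams(s1), grams(s2)
    if not g1 or not g2:               # a string shorter than n has no n-grams
        return 1.0 if s1 == s2 else 0.0
    return 2.0 * len(g1 & g2) / (len(g1) + len(g2))

print(ngram_similarity("author", "authors"))  # ≈ 0.889
```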
How to assign appropriate weights to the similarity measures and a filtering threshold to the matching result is referred to as the ontology metamatching problem [19]. In addition, the weight tuning process requires a trade-off between two conflicting objectives, i.e., the alignment's recall and precision, which are defined as follows:

recall = |R ∩ A| / |R|,
precision = |R ∩ A| / |A|,

where R is the reference alignment, formulated by experts in specific fields, and A is the final alignment. When recall equals 1, all correct matching pairs are found, while when precision equals 1, all matching pairs found are correct. To be specific, the multiobjective optimization model of the ontology metamatching problem in this paper is defined as follows:

max F(W, T) = (recall(W, T), precision(W, T))^T,
s.t. W = (w_1, w_2, ..., w_N)^T, w_i ∈ [0, 1], ∑_{i=1}^{N} w_i = 1, T ∈ [0, 1],

where W and T represent the weight set and the filtering threshold, respectively; w_i is the aggregating weight of the i-th similarity measure; N is the number of similarity measures; and recall(W, T) and precision(W, T) calculate the recall and precision of the ontology metamatching results under the parameters W and T.
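The two objectives can be computed directly from the sets R and A; a sketch follows, with correspondences represented as hashable tuples (an illustrative assumption).

```python
# A sketch of the recall / precision / f-measure evaluation.
def evaluate(A, R):
    """recall = |R ∩ A| / |R|, precision = |R ∩ A| / |A|, plus the f-measure."""
    correct = len(A & R)
    recall = correct / len(R)
    precision = correct / len(A) if A else 0.0
    f = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return recall, precision, f

A = {("a", "x"), ("b", "y"), ("e", "z")}               # found correspondences
R = {("a", "x"), ("b", "y"), ("c", "w"), ("d", "v")}   # reference alignment
print(evaluate(A, R))  # recall 0.5, precision ≈ 0.667, f ≈ 0.571
```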

Multiobjective Particle Swarm Optimization Algorithm Based on Diversity Enhancing Strategy
To effectively solve the ontology metamatching problem, a framework of the multiobjective particle swarm optimization algorithm based on the diversity enhancing strategy is proposed. In order to simplify the structure of the algorithm and reduce the influence of the external archive on the performance when solving complex optimization problems such as the ontology metamatching problem, a diversity enhancing strategy is utilized in this framework to search for the optimal solutions and obtain a perfect alignment. The framework of MOPSO-DE for ontology metamatching is given in Figure 3, where s1, s2, and s3 represent the similarity measures; M1, M2, and M3 represent the corresponding similarity matrices; and w1, w2, and w3 are the aggregating weights for the similarity matrices, respectively. M is the matrix aggregated from M1, M2, and M3; Threshold is the filtering threshold used to exclude unauthentic results; and the multiobjective processing module, the core of the framework, utilizes the MOPSO-DE algorithm to trade off the two conflicting objectives, i.e., recall and precision, and to evaluate the alignment's quality. As shown in Figure 3, the final goal of MOPSO-DE for ontology metamatching is to obtain the corresponding similarity matrix for each similarity measure, assign the right weights, and find a suitable threshold to filter the unauthentic alignment, so as to trade off the two conflicting evaluation metrics, namely recall and precision, and then, with the help of the diversity enhancing strategy, to ensure a perfect alignment.
4.1. Encoding Mechanism. In this paper, a decimal coding scheme [13] is used to encode each solution, where each particle consists of the weight set and a threshold. Figure 4 shows an example of encoding the aggregating weights. r1′ and r2′ are the cut points used to get the weights: two cut points divide one interval into three subintervals, so n − 1 cut points represent the aggregating weights of n similarity measures, and the sum of the weights is equal to 1, which ensures the efficiency of the algorithm. This coding mechanism ensures a valid allocation of the weight sets. In particular, the encoding process is as follows: (1) randomly generate n real numbers in [0,1], marked as r1, r2, ..., r(n-1), and rn, respectively; (2) sort the first n − 1 cut points r1, r2, ..., and r(n-1) in ascending order to get r1′, r2′, ..., and r(n-1)′, while rn is the threshold used to filter invalid results; and (3) calculate the aggregating weights according to the following equation:

w_1 = r_1′, w_i = r_i′ − r_(i−1)′ (i = 2, ..., n − 1), w_n = 1 − r_(n−1)′.

Figure 3: The framework of the multiobjective particle swarm optimization algorithm based on the diversity enhancing strategy for solving the ontology metamatching problem.

4.2. Diversity Enhancing Strategy. To improve the algorithm's efficiency, in this work we use the diversity enhancing strategy to execute the evolutionary process. The diversity enhancing strategy is introduced to trade off convergence and diversity. In the proposed MOPSO-DE, pairwise particles at each generation guide the update of the particles instead of the local and global best particles. In particular, two particles are first randomly selected from the elite set to compete in pairs, and then the particle with the better recall and precision values is marked as the winner, which guides the update of the loser particle at each generation.
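The cut-point decoding of a particle into aggregating weights and a threshold can be sketched as follows; the particle values are illustrative.

```python
# A sketch of decoding a decimal-coded particle.
def decode(particle):
    """particle = [r_1, ..., r_n]: the first n-1 values, sorted, act as cut
    points on [0, 1] that yield n aggregating weights summing to 1; the last
    value r_n is the filtering threshold."""
    *cuts, threshold = particle
    bounds = [0.0] + sorted(cuts) + [1.0]
    weights = [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]
    return weights, threshold

w, t = decode([0.7, 0.2, 0.5])
print(w, t)  # weights close to [0.2, 0.5, 0.3] and summing to 1; threshold 0.5
```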
To better illustrate the principle of the diversity enhancing strategy, Figure 5 is given to present its framework.
MOPSO cannot solve the ontology metamatching problem well because there are many local optimal solutions that affect the performance of the algorithm. Therefore, the diversity enhancing strategy guides the update of the particles, which no longer relies on the best solutions and an external archive, simplifying the structure of the algorithm, greatly reducing memory consumption, and ensuring the quality of the final alignment. Assume that X_w,k(t) is the position of the winner particle and that V_l,k(t) and X_l,k(t) are the velocity and position of the loser particle, respectively. For generation t, the update formulas for the loser particle in the k-th round of competition are as follows:

V_l,k(t + 1) = φ1 * V_l,k(t) + φ2 * (X_w,k(t) − X_l,k(t)),
X_l,k(t + 1) = X_l,k(t) + V_l,k(t + 1),

where φ1 and φ2 are two weight vectors. As can be seen from the update formulas, the position update adopts that of the classic PSO, while the diversity enhancing strategy affects the velocity update of the loser particle in the competition. The first part of the velocity update formula, φ1 * V_l,k(t), is consistent with the inertia term of the classic PSO algorithm. The second part, φ2 * (X_w,k(t) − X_l,k(t)), indicates that loser particles learn from winner particles through DE instead of being guided by the best solutions in the population.
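A sketch of the loser-particle update follows. Drawing φ1 and φ2 uniformly from [0, 1] per dimension and clipping the new position to [0, 1] are assumptions for illustration; the text does not fix these details.

```python
import random

# A sketch of the DE update: the loser learns from the winner
# instead of from the personal/global best particles.
def update_loser(x_l, v_l, x_w):
    """V_l(t+1) = phi1 * V_l(t) + phi2 * (X_w(t) - X_l(t));
       X_l(t+1) = X_l(t) + V_l(t+1)."""
    v_new, x_new = [], []
    for k in range(len(x_l)):
        phi1, phi2 = random.random(), random.random()  # assumed uniform weights
        v = phi1 * v_l[k] + phi2 * (x_w[k] - x_l[k])
        v_new.append(v)
        x_new.append(min(1.0, max(0.0, x_l[k] + v)))   # keep genes in [0, 1]
    return x_new, v_new
```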
In addition, because DE can guide the loser particles to update, it can further ensure the performance of the algorithm and get better ontology metamatching results.

The Pseudocode of the Multiobjective Particle Swarm Optimization Algorithm Based on Diversity Enhancing Strategy

Given the source and target ontologies O1 and O2, the number of iterations N, the population size n, the particle's current position X and velocity V, the elite particle set L, the current generation t, the particle's fitness values P_recall and P_precision, the problem dimension dim, and the winner particle P_win and the loser particle P_loser, the pseudocode of MOPSO-DE is presented in Algorithm 1. MOPSO-DE first initializes the velocities and positions of the particles and calculates the two objective function values of each particle, i.e., P_recall and P_precision, as its fitness value. Since the selected elite particles should have good convergence and distribution, this paper combines nondominated sorting with the calculation of the crowding distance over the fronts' solution sets to obtain the elite particle set L. The calculation of the crowding distance requires sorting all solutions in the population in descending order. Specifically, the crowding distances of the first and last solutions are set to a maximum, and the crowding distance of the i-th solution is computed from the absolute differences between the objective function values of the (i − 1)-th and (i + 1)-th solutions. Then, the recall and precision values of two particles a and b are compared according to the pairwise competition strategy: if both the recall and the precision of particle a are greater than those of particle b, then a is the winner particle, defined as P_win, and it guides the update of the loser particle P_loser.
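The crowding-distance step can be sketched as follows. This is the standard computation in which boundary solutions receive a maximum (infinite) value; it is a sketch consistent with the description above, not the paper's exact formula.

```python
# A sketch of crowding-distance computation over a two-objective front.
def crowding_distance(front):
    """front: list of (recall, precision) tuples; boundary solutions get inf."""
    n = len(front)
    dist = [0.0] * n
    for m in range(2):                                   # each objective
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary solutions
        span = front[order[-1]][m] - front[order[0]][m]
        if span == 0:
            continue
        for j in range(1, n - 1):
            i = order[j]
            if dist[i] != float("inf"):
                # normalized gap between the two neighbors of solution i
                dist[i] += (front[order[j + 1]][m] - front[order[j - 1]][m]) / span
    return dist

front = [(0.2, 0.9), (0.5, 0.6), (0.8, 0.3)]
print(crowding_distance(front))  # [inf, 2.0, inf]
```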

Experimental Configurations

In order to verify the effectiveness of MOPSO-DE, the well-known benchmark provided by the Ontology Alignment Evaluation Initiative (OAEI) is used in the experiment. The testing cases of the benchmark are briefly described below.

ID       Brief description
101-104  The matched source and target ontologies have the same lexical, linguistic, and structural characters.
201-210  The matched source and target ontologies have the same structural characters but different lexical and linguistic characters.
221-247  The matched source and target ontologies have the same lexical and linguistic features but different structural characters.
248-266  The matched source and target ontologies have different lexical, linguistic, and structural characters.

Algorithm 1: MOPSO-DE.
Input: two ontologies O1 and O2, number of iterations N, population size n, particle's current position X, particle's current velocity V, elite particle set L;
Output: winner particle P_win;
Initialization:
1  initialize generation t = 0;
2  calculate particle's fitness values P_recall and P_precision;
3  for (i = 0; i < dim; i++)
4      V[i] = random(0, 1);
5      X[i] = random(0, 1);
6  end for
7  NonDominatedSort();
8  calculateCrowdingDistance();
9  sortFronts();
10 get elite particle set L;
Evolution:
11 get elite particle set L;
12 while t < N do
13     randomly select two particles a and b from the elite set L;

Table 2 shows the alignment quality of the recall-driven and precision-driven PSO, MOPSO, and the proposed MOPSO-DE, each run independently 30 times. R and P represent the recall-driven and precision-driven versions of the classic PSO, respectively. Tables 3 and 4 show the comparison of the mean and standard deviation of recall and precision among the recall-driven and precision-driven PSO, MOPSO, and MOPSO-DE, and Tables 5 and 6 show the statistical t-test analysis based on Tables 3 and 4, respectively. In terms of memory consumption, MOPSO-DE is compared with MOPSO in Figure 6. In addition, recall and precision are averaged over the 30 independent runs, and the f-measure is calculated from them. Table 7 shows the comparison of MOPSO-DE with OAEI's participants in terms of the average quality of the matching results. We choose the f-measure as the alignment's metric since it comprehensively takes both recall and precision into account. Since the ontology metamatching problem is a relatively small-scale problem, the population size is set to 20. In particular, the configurations of PSO and MOPSO are determined according to the corresponding literature [13, 29] to guarantee the alignment's quality.
As can be seen from Table 2, since the ontology metamatching based on MOPSO-DE optimizes two conflicting objectives simultaneously, i.e., recall and precision, a better f-measure can be obtained compared with the classic PSO driven by only one objective. The single-objective PSO optimizes the quality of the alignment by improving recall or precision, which sacrifices the other objective; MOPSO can better balance the two objectives and achieves better results. In addition, thanks to the diversity enhancing strategy, the solutions converge quickly to the Pareto fronts and are evenly distributed on them, which helps the algorithm escape from local optimal solutions. Therefore, the proposed MOPSO-DE has better convergence and distribution than MOPSO, and thus better solutions can be found. It can be seen that the f-measure obtained by MOPSO-DE is superior to those of PSO and MOPSO.

This work uses the t-test to measure the differences between the matching systems. The t value is taken as an absolute value: the larger the t value, the more significant the performance difference between the two systems.
Since the experiments are run 30 times independently on each testing case, the t values are compared with 2.045, the critical value at the 0.05 significance level. Tables 3 and 4 present the mean and standard deviation of recall and precision among the recall-driven and precision-driven PSO, MOPSO, and MOPSO-DE on the benchmark track; the standard deviation values are given in round parentheses. As can be seen from Tables 3 and 4, the means of recall and precision of MOPSO-DE are higher and the standard deviations are lower, which indicates the efficiency and stability of MOPSO-DE. In addition, as can be seen from Tables 5 and 6, all t values are greater than 2.045, which indicates that MOPSO-DE is significantly different from the classic PSO and MOPSO in terms of performance. As can be seen from Table 7, MOPSO-DE achieves the best f-measure in the testing cases 1XX and 3XX, which shows that the multiobjective evolution mechanism can find better alignments than the other ontology matching systems. In addition, the introduction of the diversity enhancing strategy better balances diversity and convergence, significantly improving the alignment's quality. MOPSO-DE does not need a subjective threshold given by experts in advance, which gives it strong robustness across various matching problems. For the testing cases 202, 209, 248, 249, and 250, our method does not require additional reasoning and repairing processes, and fewer similarity measures are taken into consideration, which is a reasonable approach for small-scale matching problems and guarantees the alignment's quality. On these cases, MOPSO-DE achieves a good, though not optimal, f-measure compared with ontology matching systems such as AgrMaker and ASMOV, while the results of other systems such as edna, AROMA, CODI, Falcon, MapPSO, and TaxoMap drop to or close to 0. The reason is that these testing cases are highly heterogeneous, making them complex problems with many local optimal solutions.
For the testing cases 202 and 209, the two ontologies under alignment have different lexical and linguistic characters and share only the same structural characters, which makes the matching process difficult. For the testing cases 248, 249, and 250, the two ontologies under alignment are highly heterogeneous, which makes the alignment's quality relatively low. MOPSO-DE can achieve satisfactory results compared with SI algorithms such as MapPSO in these cases due to the multiobjective evolution and the diversity enhancing strategy. MapPSO is a PSO-based approach whose weights are manually set, making it easier to be trapped in local optima, whereas MOPSO-DE combines multiobjective PSO with DE to avoid local optima and achieve high-quality alignments. Figure 6 compares MOPSO-DE with MOPSO on memory consumption. With the diversity enhancing strategy, the particles are updated by pairwise particles at each generation instead of by the local and global best particles in an archive. It can be seen that, without an external archive to store the particles' information, MOPSO-DE can significantly reduce memory consumption, which indicates that the diversity enhancing strategy can effectively reduce the computational cost. To sum up, MOPSO-DE is capable of efficiently solving the ontology metamatching problem.


Conclusions
This paper is devoted to solving the ontology metamatching problem. The cooperation between intelligent application systems requires the sharing of the semantic information of data, and the differing term descriptions across fields lead to semantic heterogeneity. The heterogeneity issue hampers the communication among different ontologies and prevents the cooperation among ontology-based intelligent applications. In order to build a semantic bridge between different application systems, ontology is needed to describe semantic relations. To solve this problem, it is necessary to find the mappings between ontology entities, that is, to perform the ontology metamatching process. The ontology metamatching problem is first defined as a multiobjective optimization problem in this work, and MOPSO-DE is proposed to solve it. MOPSO-DE uses a diversity enhancing strategy to efficiently improve the alignment's quality: the strategy enables the solutions to converge quickly to the Pareto fronts and to be uniformly distributed on the true Pareto front. However, according to the experimental results, the alignment's quality of MOPSO-DE is not good when dealing with relatively complex datasets, such as 202, 209, 248, 249, and 250. Therefore, we will further improve the diversity enhancing strategy in future work. The performance of MOPSO-DE in this paper is good when dealing with problems of small dimensions, so applying MOPSO-DE to large-scale problems can be considered in future work.

Data Availability
The data used to support this study can be found at http:// oaei.ontologymatching.org.

Conflicts of Interest
The author declares that there are no conflicts of interest.