Dual hesitant fuzzy sets (DHFSs) were proposed by Zhu et al. (2012); they encompass fuzzy sets, intuitionistic fuzzy sets, hesitant fuzzy sets, and fuzzy multisets as special cases. The analysis of correlation measures is an important research topic. In this paper, we define correlation measures for dual hesitant fuzzy information and discuss their properties in detail. A numerical example is provided to illustrate these correlation measures. We then present a direct transfer algorithm that avoids the complex matrix synthesis operations required to reconstruct an equivalent correlation matrix when clustering DHFSs. Furthermore, we prove that the direct transfer algorithm is equivalent to the transfer closure algorithm, while its asymptotic time and space complexities are superior to the latter's. Another real-world example, diamond evaluation and classification, is employed to show the effectiveness of the association coefficient and the algorithm for clustering DHFSs.
1. Introduction
Correlation indicates how well two variables move together in a linear fashion. In other words, correlation reflects a linear relationship between two variables. It is an important measure in data analysis, in particular in decision making, medical diagnosis, pattern recognition, and other real world problems [1–7]. Zadeh [8] introduced the concept of fuzzy sets (FSs) whose basic component is only a membership function with the nonmembership function being one minus the membership function. In fuzzy environments, Hung and Wu [9] used the concept of “expected value” to define the correlation coefficient of fuzzy numbers, which lies in [-1,1]. Hong [10] considered the computational aspect of the Tω-based extension principle when the principle is applied to a correlation coefficient of L-R fuzzy numbers and gave the exact solution of a fuzzy correlation coefficient without programming or the aid of computer resources. Atanassov [11, 12] gave a generalized form of fuzzy set, called intuitionistic fuzzy set (IFS), which is characterized by a membership function and a non-membership function. In intuitionistic fuzzy environments, Gerstenkorn and Mańko [13] defined a function measuring the correlation of IFSs and introduced a coefficient of such a correlation. Bustince and Burillo [14] introduced the concepts of correlation and correlation coefficient of interval-valued intuitionistic fuzzy sets (IVIFSs) [12]. Hung [15] and Mitchell [16] derived the correlation coefficient of intuitionistic fuzzy sets from a statistical viewpoint by interpreting an intuitionistic fuzzy set as an ensemble of ordinary fuzzy sets. 
Hung and Wu [17] proposed a method to calculate the correlation coefficient of intuitionistic fuzzy sets by means of “centroid.” Xu [18] gave a detailed survey on association analysis of intuitionistic fuzzy sets and pointed out that most existing methods deriving association coefficients cannot guarantee that the association coefficient of any two intuitionistic fuzzy sets equals one if and only if these two intuitionistic fuzzy sets are the same. Szmidt and Kacprzyk [5] discussed a concept of correlation for data represented as intuitionistic fuzzy set adopting the concepts from statistics and proposed a formula for measuring the correlation coefficient (lying in [-1,1]) of intuitionistic fuzzy sets. Robinson and Amirtharaj [19] defined the correlation coefficient of interval vague sets lying in the interval [0,1] and proposed a new method for computing the correlation coefficient of interval vague sets lying in the interval [-1,1] using a-cuts over the vague degrees through statistical confidence intervals which is presented by an example. Instead of using point-based membership as in fuzzy sets, interval-based membership is used in a vague set. In [20], Robinson and Amirtharaj presented a detailed comparison between vague sets and intuitionistic fuzzy sets and defined the correlation coefficient of vague sets through simple examples. Hesitant fuzzy sets (HFSs) were originally introduced by Torra [21, 22]. In hesitant fuzzy environments, Chen et al. [23] derived some correlation coefficient formulas for HFSs and applied them to two real world examples by using clustering analysis under hesitant fuzzy environments. Xu and Xia [24] defined the correlation measures for hesitant fuzzy information and then discussed their properties in detail.
Recently, Zhu et al. [26] introduced the definition of the dual hesitant fuzzy set. Dual hesitant fuzzy sets can reflect human hesitance more objectively than the other classical extensions of the fuzzy set (intuitionistic fuzzy set, type-2 fuzzy set (T-2FS) [25], hesitant fuzzy set, etc.). The motivation for proposing DHFSs is that when people make a decision, they are usually hesitant and irresolute for one reason or another, which makes it difficult to reach a final agreement. Zhu et al. further indicated that DHFSs can better deal with situations that permit both the membership and the nonmembership of an element to a given set to have several different values, which can arise in a group decision making problem. For example, suppose some decision makers in an organization assign the membership degree 0.6 and the non-membership degree 0.3 to an alternative A with respect to a criterion x, some assign (0.8, 0.2), while the others assign (0.7, 0.2), and no consensus is reached among these decision makers. Accordingly, the difficulty of establishing a common membership degree and a common non-membership degree is not because we have a margin of error (intuitionistic fuzzy set) or some possibility distribution values (type-2 fuzzy set), but because we have a set of possible values (hesitant fuzzy set). For such a case, the satisfaction degrees can be represented by a dual hesitant fuzzy element {(0.6, 0.8, 0.7), (0.3, 0.2)}, which is obviously different from the intuitionistic fuzzy numbers (0.8, 0.2) and (0.7, 0.2) and from the hesitant fuzzy number {0.6, 0.8, 0.7}. The aforementioned measures, however, cannot be used to deal with the correlation of dual hesitant fuzzy information. Thus, it is necessary to develop some theories for dual hesitant fuzzy sets, yet little has been done on this issue. In this paper, we mainly discuss the correlation measures of dual hesitant fuzzy information. The remainder of the paper is organized as follows.
Section 2 presents some basic concepts related to DHFSs, HFSs, and IFSs. In Section 3, we propose some correlation measures of dual hesitant fuzzy elements, obtain several important conclusions, and give an example to illustrate the correlation measures. In Section 4, we propose a direct transfer clustering algorithm based on DHFSs and then use a numerical example to illustrate it. Finally, Section 5 concludes the paper with some remarks and presents future challenges.
2. Preliminaries

2.1. DHFSs, HFSs, and IFSs

Definition 1 (see [26]).
Let X be a fixed set; then a dual hesitant fuzzy set (DHFS) D on X is described as
(1)D={〈x,h(x),g(x)〉∣x∈X}
in which h(x) and g(x) are two sets of some values in [0,1], denoting the possible membership degrees and non-membership degrees of the element x∈X to the set D, respectively, with the conditions
(2)0≤γ,η≤1,0≤γ++η+≤1,
where γ∈h(x), η∈g(x), γ+∈h+(x)=∪γ∈h(x)max{γ}, and η+∈g+(x)=∪η∈g(x)max{η} for all x∈X. For convenience, the pair dE(x)=(hE(x),gE(x)) is called a dual hesitant fuzzy element (DHFE), denoted by d=(h,g), with the conditions γ∈h(x), η∈g(x), γ+∈h+(x)=∪γ∈h(x)max{γ}, and η+∈g+(x)=∪η∈g(x)max{η}, 0≤γ, η≤1 and 0≤γ++η+≤1.
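To make Definition 1 concrete, the admissibility conditions on a DHFE can be checked mechanically. The following Python sketch (the function name and data layout are ours, not the paper's) represents a DHFE as a pair of lists and verifies that all values lie in [0,1] and that γ⁺ + η⁺ ≤ 1:

```python
def is_valid_dhfe(h, g, eps=1e-9):
    """Check the DHFE conditions of Definition 1: every possible membership
    degree in h and non-membership degree in g lies in [0, 1], and the
    maxima satisfy gamma+ + eta+ <= 1."""
    if not h or not g:
        return False
    if any(not 0.0 <= v <= 1.0 for v in h + g):
        return False
    return max(h) + max(g) <= 1.0 + eps

# The "viral fever / temperature" entry of Table 1 below:
print(is_valid_dhfe([0.6, 0.4, 0.3], [0.2, 0.0]))  # True: 0.6 + 0.2 <= 1
print(is_valid_dhfe([0.9, 0.8], [0.3]))            # False: 0.9 + 0.3 > 1
```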
Definition 2 (see [21, 22]).
Let X be a fixed set; a hesitant fuzzy set (HFS) A on X is in terms of a function that when applied to X returns a subset of [0,1], which can be represented as the following mathematical symbol:
(3)A={〈x,hA(x)〉∣x∈X},
where hA(x) is a set of values in [0,1], denoting the possible membership degrees of the element x∈X to the set A. For convenience, we call hA(x) a hesitant fuzzy element (HFE). We use 〈x,hA〉 for all x∈X to represent HFSs.
Definition 3 (see [11, 12]).
Let X be a fixed set, an intuitionistic fuzzy set (IFS) A on X is an object having the form
(4)A={〈x,μA(x),νA(x)〉∣x∈X}
which is characterized by a membership function μA and a non-membership function νA, where μA:X→[0,1] and νA:X→[0,1], with the condition 0≤μA(x)+νA(x)≤1, for all x∈X. We use 〈x,μA,νA〉 for all x∈X to represent IFSs considered in the rest of the paper without explicitly mentioning it. Furthermore, πA(x)=1-μA(x)-νA(x) is called a hesitancy degree or an intuitionistic index of x in A. In the special case π(x)=0, that is, μA(x)+νA(x)=1, the IFS A reduces to an FS.
2.2. Correlation Coefficients of HFSs and IFSs
Many approaches [4, 13, 17, 20, 21] have been introduced to compute the correlation coefficients of IFSs. Let X={x1,x2,…,xn} be a discrete universe of discourse and let A and B be two IFSs on X.
The correlation of the IFSs A and B is defined as [13]
(5) CIFS1(A,B)=∑i=1n(μA(xi)μB(xi)+νA(xi)νB(xi)).
Then, the correlation coefficient of the IFSs A and B is defined as
(6) ρIFS1(A,B) = CIFS1(A,B)/(CIFS1(A,A)·CIFS1(B,B))^1/2 = ∑i=1n(μA(xi)μB(xi)+νA(xi)νB(xi)) / (∑i=1n(μA2(xi)+νA2(xi)) · ∑i=1n(μB2(xi)+νB2(xi)))^1/2.
In [23], Chen et al. defined the correlation and correlation coefficient for HFSs as follows, respectively:
(7) CHFS1(A,B)=∑i=1n((1/li)∑j=1li hAσ(j)(xi)hBσ(j)(xi)),

ρHFS1(A,B) = CHFS1(A,B)/(CHFS1(A,A)·CHFS1(B,B))^1/2 = ∑i=1n((1/li)∑j=1li hAσ(j)(xi)hBσ(j)(xi)) / (∑i=1n((1/li)∑j=1li hAσ(j)2(xi)) · ∑i=1n((1/li)∑j=1li hBσ(j)2(xi)))^1/2,
where li=max{l(hA(xi)),l(hB(xi))} for each xi in X, and l(hA(xi)) and l(hB(xi)) represent the number of values in hA(xi) and hB(xi), respectively. We will talk about li in detail in the next section.
3. Correlation Measures of DHFEs
In this section, we first introduce the concept of correlation and correlation coefficient for DHFSs and then propose several correlation coefficient formulas and discuss their properties.
We arrange the elements in dE(x)=(hE(x),gE(x)) in decreasing order and let γEσ(i)(x) be the ith largest value in hE(x) and ηEσ(j)(x) the jth largest value in gE(x). Let lh(dE(xi)) be the number of values in hE(xi) and lg(dE(xi)) the number of values in gE(xi). For convenience, l(d(xi))=(lh(d(xi)),lg(d(xi))). In most cases, for two DHFSs A and B, l(dA(xi))≠l(dB(xi)); that is, lh(dA(xi))≠lh(dB(xi)) or lg(dA(xi))≠lg(dB(xi)). To operate correctly, before comparing two DHFEs we should extend the shorter one until both have the same length. In [24, 27], Xu and Xia extended the shorter one by adding different values in hesitant fuzzy environments, and Chen et al. [23] applied the same idea to derive correlation coefficient formulas for HFSs. In fact, we can extend the shorter one by adding any value in it. The selection of this value mainly depends on the decision makers' risk preferences: optimists anticipate desirable outcomes and may add the maximum value, while pessimists expect unfavorable outcomes and may add the minimum value. The same treatment can also be found in many existing references [13, 14].
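The length-extension rule described above can be sketched as follows (a minimal illustration, assuming lists of degrees are kept sorted in decreasing order as in this section):

```python
def extend(values, target_len, optimistic=True):
    """Pad a list of degrees to target_len: an optimist repeats the maximum
    value, a pessimist the minimum; the result is re-sorted descending."""
    pad = max(values) if optimistic else min(values)
    return sorted(values + [pad] * (target_len - len(values)), reverse=True)

print(extend([0.5, 0.3], 4, optimistic=True))   # [0.5, 0.5, 0.5, 0.3]
print(extend([0.5, 0.3], 4, optimistic=False))  # [0.5, 0.3, 0.3, 0.3]
```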
We define several correlation coefficients for DHFEs.
Definition 4.
For two DHFSs A and B on X, the correlation of A and B, denoted as CDHFS1(A,B), is defined by
(8) CDHFS1(A,B)=∑i=1n((1/lh(i))∑j=1lh(i)γAσ(j)(xi)γBσ(j)(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)(xi)ηBσ(k)(xi)).
Definition 5.
For two DHFSs A and B on X, the correlation coefficient of A and B, denoted as ρDHFS1(A,B), is defined by:
(9) ρDHFS1(A,B) = CDHFS1(A,B)/(CDHFS1(A,A)·CDHFS1(B,B))^1/2 = ∑i=1n((1/lh(i))∑j=1lh(i)γAσ(j)(xi)γBσ(j)(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)(xi)ηBσ(k)(xi)) / (∑i=1n((1/lh(i))∑j=1lh(i)γAσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)2(xi)) · ∑i=1n((1/lh(i))∑j=1lh(i)γBσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηBσ(k)2(xi)))^1/2.
Definition 6.
For two DHFSs A and B on X, the correlation coefficient of A and B, denoted as ρDHFS2(A,B), is defined by
(10) ρDHFS2(A,B) = CDHFS1(A,B)/max{CDHFS1(A,A),CDHFS1(B,B)} = ∑i=1n((1/lh(i))∑j=1lh(i)γAσ(j)(xi)γBσ(j)(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)(xi)ηBσ(k)(xi)) / max{∑i=1n((1/lh(i))∑j=1lh(i)γAσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)2(xi)), ∑i=1n((1/lh(i))∑j=1lh(i)γBσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηBσ(k)2(xi))}.
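Definitions 4-6 translate directly into code. The sketch below uses our own naming: a DHFS is a list of (h, g) pairs whose value lists are assumed already sorted in decreasing order and extended to equal lengths as described above.

```python
import math

def c_dhfs1(A, B):
    """Correlation C_DHFS1(A, B) of Definition 4."""
    total = 0.0
    for (hA, gA), (hB, gB) in zip(A, B):
        total += sum(a * b for a, b in zip(hA, hB)) / len(hA)
        total += sum(a * b for a, b in zip(gA, gB)) / len(gA)
    return total

def rho_dhfs1(A, B):
    """Correlation coefficient of Definition 5."""
    return c_dhfs1(A, B) / math.sqrt(c_dhfs1(A, A) * c_dhfs1(B, B))

def rho_dhfs2(A, B):
    """Correlation coefficient of Definition 6."""
    return c_dhfs1(A, B) / max(c_dhfs1(A, A), c_dhfs1(B, B))
```

By the Cauchy-Schwarz argument of Theorem 7 both coefficients stay in [0, 1], and Theorem 8 guarantees rho_dhfs1(A, B) ≥ rho_dhfs2(A, B).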
Theorem 7.
For two DHFSs A and B, the correlation coefficient ρDHFSi(A,B) (i=1,2) satisfies the following properties:
0≤ρDHFSi(A,B)≤1;
A=B⇒ρDHFSi(A,B)=1;
ρDHFSi(A,B)=ρDHFSi(B,A); i=1,2.
Proof.
(1) The inequalities 0≤ρDHFS1(A,B) and 0≤ρDHFS2(A,B) are obvious. Below we prove that ρDHFS1(A,B)≤1 and ρDHFS2(A,B)≤1:
(11) CDHFS1(A,B) = ∑i=1n((1/lh(i))∑j=1lh(i)γAσ(j)(xi)γBσ(j)(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)(xi)ηBσ(k)(xi)) = (∑j=1lh(1)(γAσ(j)(x1)/√lh(1))·(γBσ(j)(x1)/√lh(1)) + ∑j=1lh(2)(γAσ(j)(x2)/√lh(2))·(γBσ(j)(x2)/√lh(2)) + ⋯ + ∑j=1lh(n)(γAσ(j)(xn)/√lh(n))·(γBσ(j)(xn)/√lh(n))) + (∑k=1lg(1)(ηAσ(k)(x1)/√lg(1))·(ηBσ(k)(x1)/√lg(1)) + ∑k=1lg(2)(ηAσ(k)(x2)/√lg(2))·(ηBσ(k)(x2)/√lg(2)) + ⋯ + ∑k=1lg(n)(ηAσ(k)(xn)/√lg(n))·(ηBσ(k)(xn)/√lg(n))).
Using the Cauchy-Schwarz inequality
(12)(x1y1+x2y2+⋯+xnyn)2≤(x12+x22+⋯+xn2)·(y12+y22+⋯+yn2),
where (x1,x2,…,xn)∈Rn,(y1,y2,…,yn)∈Rn, we obtain
(13) CDHFS1(A,B)2 ≤ (∑j=1lh(1)γAσ(j)2(x1)/lh(1) + ∑j=1lh(2)γAσ(j)2(x2)/lh(2) + ⋯ + ∑j=1lh(n)γAσ(j)2(xn)/lh(n) + ∑k=1lg(1)ηAσ(k)2(x1)/lg(1) + ∑k=1lg(2)ηAσ(k)2(x2)/lg(2) + ⋯ + ∑k=1lg(n)ηAσ(k)2(xn)/lg(n)) · (∑j=1lh(1)γBσ(j)2(x1)/lh(1) + ∑j=1lh(2)γBσ(j)2(x2)/lh(2) + ⋯ + ∑j=1lh(n)γBσ(j)2(xn)/lh(n) + ∑k=1lg(1)ηBσ(k)2(x1)/lg(1) + ∑k=1lg(2)ηBσ(k)2(x2)/lg(2) + ⋯ + ∑k=1lg(n)ηBσ(k)2(xn)/lg(n)) = ∑i=1n((1/lh(i))∑j=1lh(i)γAσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)2(xi)) · ∑i=1n((1/lh(i))∑j=1lh(i)γBσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηBσ(k)2(xi)) = CDHFS1(A,A)·CDHFS1(B,B).
Therefore,
(14)CDHFS1(A,B)≤(CDHFS1(A,A))1/2·(CDHFS1(B,B))1/2.
So, 0≤ρDHFS1(A,B)≤1.
In fact, we have
(15) (x12+x22+⋯+xn2)·(y12+y22+⋯+yn2) ≤ (max{(x12+x22+⋯+xn2),(y12+y22+⋯+yn2)})2, so that ∑i=1n((1/lh(i))∑j=1lh(i)γAσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)2(xi)) · ∑i=1n((1/lh(i))∑j=1lh(i)γBσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηBσ(k)2(xi)) ≤ (max{∑i=1n((1/lh(i))∑j=1lh(i)γAσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)2(xi)), ∑i=1n((1/lh(i))∑j=1lh(i)γBσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηBσ(k)2(xi))})2.
Then
(16)(CDHFS1(A,A)·CDHFS1(B,B))1/2≤max{CDHFS1(A,A),CDHFS1(B,B)}.
We also obtain 0≤ρDHFS2(A,B)≤1.
(2) and (3) are straightforward.
Moreover, from the proof of Theorem 7, Theorem 8 follows easily.
Theorem 8.
For two DHFSs A and B on X, ρDHFS1(A,B)≥ρDHFS2(A,B).
However, from Theorem 7, we notice that all the above correlation coefficients cannot guarantee that the correlation coefficient of any two DHFSs equals one if and only if these two DHFSs are the same. Thus, how to derive the correlation coefficients of the DHFSs satisfying this desirable property is an interesting research topic. To solve this issue, in what follows, we develop a new method to calculate the correlation coefficient of the DHFSs A and B.
Definition 9.
For two DHFSs A and B on X, the correlation coefficient of A and B, denoted as ρDHFS3(A,B), is defined by
(17) ρDHFS3(A,B) = ((1/2n)∑i=1n((Δγminλ+Δγmaxλ)/(Δγiλ+Δγmaxλ) + (Δηminλ+Δηmaxλ)/(Δηiλ+Δηmaxλ)))^1/λ,
where
(18) Δγiλ=(1/lh(i))∑j=1lh(i)|γAσ(j)(xi)−γBσ(j)(xi)|λ, Δγminλ=mini{Δγiλ}, Δγmaxλ=maxi{Δγiλ}, Δηiλ=(1/lg(i))∑k=1lg(i)|ηAσ(k)(xi)−ηBσ(k)(xi)|λ, Δηminλ=mini{Δηiλ}, Δηmaxλ=maxi{Δηiλ}, λ>0.
Equation (17) is motivated by the generalized idea provided by Xu [18]. Obviously, the greater the value of ρDHFS3(A,B), the closer A is to B. By Definition 9, we have Theorem 10.
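A sketch of ρDHFS3 in Python (our own naming; one guard is added for the degenerate case where all distances vanish, i.e. A and B coincide, in which the coefficient is taken as 1):

```python
def rho_dhfs3(A, B, lam=2.0):
    """Correlation coefficient of Definition 9. A and B are lists of DHFEs
    (h, g), sorted descending and extended to equal lengths; lam > 0."""
    d_gamma, d_eta = [], []
    for (hA, gA), (hB, gB) in zip(A, B):
        d_gamma.append(sum(abs(a - b) ** lam for a, b in zip(hA, hB)) / len(hA))
        d_eta.append(sum(abs(a - b) ** lam for a, b in zip(gA, gB)) / len(gA))

    def ratio(dmin, dmax, d):
        # If every distance vanishes, the ratio is taken to be 1.
        return 1.0 if dmax == 0 else (dmin + dmax) / (d + dmax)

    gmin, gmax = min(d_gamma), max(d_gamma)
    emin, emax = min(d_eta), max(d_eta)
    s = sum(ratio(gmin, gmax, dg) + ratio(emin, emax, de)
            for dg, de in zip(d_gamma, d_eta))
    return (s / (2 * len(A))) ** (1 / lam)
```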
Theorem 10.
The correlation coefficient ρDHFS3(A,B) satisfies the following properties:
0≤ρDHFS3(A,B)≤1;
A=B⇔ρDHFS3(A,B)=1;
ρDHFS3(A,B)=ρDHFS3(B,A).
Proof.
(1) The inequality 0≤ρDHFS3(A,B) is obvious. Below we prove that ρDHFS3(A,B)≤1:

(19) (Δγminλ+Δγmaxλ)/(Δγiλ+Δγmaxλ) + (Δηminλ+Δηmaxλ)/(Δηiλ+Δηmaxλ) = ((Δγminλ/Δγmaxλ)+1)/((Δγiλ/Δγmaxλ)+1) + ((Δηminλ/Δηmaxλ)+1)/((Δηiλ/Δηmaxλ)+1) ≤ 2, for i=1,2,…,n.
We obtain
(20) ρDHFS3(A,B) = ((1/2n)∑i=1n((Δγminλ+Δγmaxλ)/(Δγiλ+Δγmaxλ)+(Δηminλ+Δηmaxλ)/(Δηiλ+Δηmaxλ)))^1/λ ≤ ((1/2n)·2n)^1/λ = 1.
(2) and (3) are obvious.
Usually, in practical applications, the weight of each element xi∈X should be taken into account, so we present the following weighted correlation coefficients. Assume that the weight of the element xi∈X is wi (i=1,2,…,n) with wi∈[0,1] and ∑i=1n wi=1; then we extend the correlation coefficient formulas given above:
(21) ρDHFS-w1(A,B) = ∑i=1n wi((1/lh(i))∑j=1lh(i)γAσ(j)(xi)γBσ(j)(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)(xi)ηBσ(k)(xi)) / (∑i=1n wi((1/lh(i))∑j=1lh(i)γAσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)2(xi)) · ∑i=1n wi((1/lh(i))∑j=1lh(i)γBσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηBσ(k)2(xi)))^1/2,

(22) ρDHFS-w2(A,B) = ∑i=1n wi((1/lh(i))∑j=1lh(i)γAσ(j)(xi)γBσ(j)(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)(xi)ηBσ(k)(xi)) / max{∑i=1n wi((1/lh(i))∑j=1lh(i)γAσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηAσ(k)2(xi)), ∑i=1n wi((1/lh(i))∑j=1lh(i)γBσ(j)2(xi)+(1/lg(i))∑k=1lg(i)ηBσ(k)2(xi))},

(23) ρDHFS-w3(A,B) = ((1/2)∑i=1n wi((Δγminλ+Δγmaxλ)/(Δγiλ+Δγmaxλ)+(Δηminλ+Δηmaxλ)/(Δηiλ+Δηmaxλ)))^1/λ,
where
(24) Δγiλ=(1/lh(i))∑j=1lh(i)|γAσ(j)(xi)−γBσ(j)(xi)|λ, Δγminλ=mini{Δγiλ}, Δγmaxλ=maxi{Δγiλ}, Δηiλ=(1/lg(i))∑k=1lg(i)|ηAσ(k)(xi)−ηBσ(k)(xi)|λ, Δηminλ=mini{Δηiλ}, Δηmaxλ=maxi{Δηiλ}, λ>0.
Note that all these formulas satisfy the properties in Theorem 7.
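As an illustration of the weighted variant in (21) (a sketch under our own naming and data layout), note that with equal weights wi = 1/n it reduces to ρDHFS1:

```python
import math

def rho_dhfs_w1(A, B, w):
    """Weighted correlation coefficient of Eq. (21); w has entries in [0, 1]
    summing to 1, one weight per element x_i."""
    def c(X, Y):
        return sum(wi * (sum(a * b for a, b in zip(hX, hY)) / len(hX)
                         + sum(a * b for a, b in zip(gX, gY)) / len(gX))
                   for wi, ((hX, gX), (hY, gY)) in zip(w, zip(X, Y)))
    return c(A, B) / math.sqrt(c(A, A) * c(B, B))
```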
In what follows, we use a medical diagnosis problem in [28, 29] to illustrate the developed correlation coefficient formulas. Actually, this is also a pattern recognition problem.
Example 11.
To make a proper diagnosis Q={Q1 (viral fever), Q2 (malaria), Q3 (typhoid), Q4 (stomach problem), and Q5 (chest problem)} for a patient with the given values of the symptoms S={S1 (temperature), S2 (headache), S3 (cough), S4 (stomach pain), and S5 (chest pain)}, Xu [18] considered all possible diagnoses and symptoms as HFEs. Utilizing DHFSs allows much more information to be taken into account: the more values we obtain from patients, the greater epistemic certainty we have. So, in this paper, we use DHFEs to deal with such cases: each symptom is described by a DHFE, given by two sets (γij) and (ηij), where (γij) indicates the degree to which the symptom Si satisfies the considered diagnosis Qj and (ηij) the degree to which it does not. The data are given in Table 1. The set of patients is P={Al, Bob, Joe, Ted}; their symptoms, also described by DHFEs, are given in Table 2. We need to seek a diagnosis for each patient.
Table 1: Symptoms characteristic of the considered diagnoses.

Viral fever: Temperature {(0.6, 0.4, 0.3), (0.2, 0.0)}; Headache {(0.7, 0.5, 0.3, 0.2), (0.3, 0.1)}; Cough {(0.5, 0.3), (0.5, 0.4, 0.2)}; Stomach pain {(0.5, 0.4, 0.3, 0.2, 0.1), (0.5, 0.3)}; Chest pain {(0.5, 0.4, 0.2, 0.1), (0.5, 0.4, 0.3)}.
Malaria: Temperature {(0.9, 0.8, 0.7), (0.1, 0.0)}; Headache {(0.5, 0.3, 0.2, 0.1), (0.4, 0.3)}; Cough {(0.2, 0.1), (0.7, 0.6, 0.5)}; Stomach pain {(0.6, 0.5, 0.3, 0.2, 0.1), (0.3, 0.2)}; Chest pain {(0.4, 0.3, 0.2, 0.1), (0.6, 0.5, 0.4)}.
Typhoid: Temperature {(0.6, 0.3, 0.1), (0.3, 0.2)}; Headache {(0.9, 0.8, 0.7, 0.6), (0.1, 0.0)}; Cough {(0.5, 0.3), (0.5, 0.4, 0.3)}; Stomach pain {(0.5, 0.4, 0.3, 0.2, 0.1), (0.5, 0.4)}; Chest pain {(0.6, 0.4, 0.3, 0.2), (0.4, 0.3, 0.2)}.
Stomach problem: Temperature {(0.5, 0.4, 0.2), (0.5, 0.3)}; Headache {(0.4, 0.3, 0.2, 0.1), (0.4, 0.3)}; Cough {(0.4, 0.3), (0.6, 0.5, 0.4)}; Stomach pain {(0.9, 0.8, 0.7, 0.6, 0.5), (0.1, 0.0)}; Chest pain {(0.5, 0.4, 0.2, 0.1), (0.5, 0.4, 0.3)}.
Chest problem: Temperature {(0.3, 0.2, 0.1), (0.7, 0.6)}; Headache {(0.5, 0.3, 0.2, 0.1), (0.5, 0.3)}; Cough {(0.3, 0.2), (0.6, 0.4, 0.3)}; Stomach pain {(0.7, 0.6, 0.5, 0.3, 0.2), (0.2, 0.1)}; Chest pain {(0.8, 0.7, 0.6, 0.5), (0.2, 0.1, 0.0)}.
Table 2: Symptoms characteristic of the considered patients.

Al: Temperature {(0.9, 0.7, 0.5), (0.1, 0.0)}; Headache {(0.4, 0.3, 0.2, 0.1), (0.5, 0.4)}; Cough {(0.4, 0.3), (0.5, 0.4, 0.2)}; Stomach pain {(0.6, 0.5, 0.4, 0.2, 0.1), (0.3, 0.2)}; Chest pain {(0.4, 0.3, 0.2, 0.1), (0.5, 0.4, 0.3)}.
Bob: Temperature {(0.5, 0.4, 0.2), (0.5, 0.3)}; Headache {(0.5, 0.4, 0.3, 0.1), (0.4, 0.3)}; Cough {(0.2, 0.1), (0.7, 0.6, 0.5)}; Stomach pain {(0.9, 0.8, 0.6, 0.5, 0.4), (0.1, 0.0)}; Chest pain {(0.5, 0.4, 0.3, 0.2), (0.5, 0.4, 0.3)}.
Joe: Temperature {(0.9, 0.7, 0.6), (0.1, 0.0)}; Headache {(0.7, 0.4, 0.3, 0.1), (0.2, 0.1)}; Cough {(0.3, 0.2), (0.5, 0.4, 0.3)}; Stomach pain {(0.6, 0.4, 0.3, 0.2, 0.1), (0.4, 0.3)}; Chest pain {(0.6, 0.3, 0.2, 0.1), (0.4, 0.3, 0.2)}.
Ted: Temperature {(0.8, 0.7, 0.5), (0.2, 0.1)}; Headache {(0.6, 0.5, 0.4, 0.2), (0.4, 0.3)}; Cough {(0.5, 0.3), (0.5, 0.4, 0.3)}; Stomach pain {(0.6, 0.4, 0.3, 0.2, 0.1), (0.4, 0.3)}; Chest pain {(0.5, 0.4, 0.2, 0.1), (0.5, 0.4, 0.3)}.
We utilize the correlation coefficient ρDHFS1 to derive a diagnosis for each patient. All results for the considered patients are listed in Table 3. From the data in Table 3, we find that Ted suffers from viral fever, Al and Joe from malaria, and Bob from a stomach problem.
Table 3: Values of ρDHFS1 for each patient to the considered set of possible diagnoses.

Al: viral fever 0.9257, malaria 0.9620, typhoid 0.7957, stomach problem 0.8680, chest problem 0.7110.
Bob: viral fever 0.8380, malaria 0.8791, typhoid 0.8041, stomach problem 0.9922, chest problem 0.9035.
Joe: viral fever 0.9427, malaria 0.9521, typhoid 0.8757, stomach problem 0.9329, chest problem 0.7026.
Ted: viral fever 0.9718, malaria 0.9472, typhoid 0.8890, stomach problem 0.8790, chest problem 0.7644.
If we utilize the correlation coefficient formulas ρDHFS2 and ρDHFS3 to derive a diagnosis, then the results are listed in Tables 4 and 5, respectively.
Table 4: Values of ρDHFS2 for each patient to the considered set of possible diagnoses.

Al: viral fever 0.8718, malaria 0.8888, typhoid 0.7571, stomach problem 0.8286, chest problem 0.7591.
Bob: viral fever 0.7586, malaria 0.8451, typhoid 0.7960, stomach problem 0.9852, chest problem 0.8889.
Joe: viral fever 0.9075, malaria 0.8607, typhoid 0.8152, stomach problem 0.8712, chest problem 0.6500.
Ted: viral fever 0.8764, malaria 0.9140, typhoid 0.8835, stomach problem 0.8678, chest problem 0.7550.
Table 5: Values of ρDHFS3 for each patient to the considered set of possible diagnoses.

Al: viral fever 0.8085, malaria 0.7739, typhoid 0.7900, stomach problem 0.7515, chest problem 0.8026.
Bob: viral fever 0.7480, malaria 0.7714, typhoid 0.6824, stomach problem 0.7547, chest problem 0.8006.
Joe: viral fever 0.7925, malaria 0.7887, typhoid 0.7548, stomach problem 0.6878, chest problem 0.8100.
Ted: viral fever 0.8063, malaria 0.7244, typhoid 0.7516, stomach problem 0.7386, chest problem 0.8230.
From Tables 3–5 we see that different correlation coefficient formulas can yield different results, because these formulas are based on different linear relationships.
4. Clustering Method Based on the Direct Transfer Algorithm for DHFSs
Based on clustering algorithms for IFSs [30, 31] and HFSs [23] and the correlation coefficient formulas developed above for DHFSs, we propose in what follows a direct transfer algorithm for clustering analysis, which avoids the complex matrix synthesis operations required when reconstructing an equivalence relation from a similarity relation under dual hesitant fuzzy environments. Before doing so, we first introduce some concepts.
Definition 12.
Let Aj (j=1,2,…,m) be m DHFSs; then C=(ρij)m×m is called an association matrix, where ρij=ρ(Ai,Aj) is the association coefficient of Ai and Aj, which has the following properties:
0≤ρij≤1, for all i,j=1,2,…,m;
ρij=1 if and only if Ai=Aj;
ρij=ρji, for all i,j=1,2,…,m.
Definition 13 (see [23, 30]).
Let C=(ρij)m×m be an association matrix; if C2=C∘C=(ρ̄ij)m×m, then C2 is called a composition matrix of C, where ρ̄ij=maxk{min{ρik,ρkj}}, for all i,j=1,2,…,m.
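The max-min composition of Definition 13 can be sketched as follows (matrices as nested Python lists; the function name is ours):

```python
def compose(C):
    """Max-min composition C2 = C o C of an association matrix
    (Definition 13): entry (i, j) is max_k min(C[i][k], C[k][j])."""
    m = len(C)
    return [[max(min(C[i][k], C[k][j]) for k in range(m))
             for j in range(m)] for i in range(m)]

C = [[1.0, 0.8, 0.2],
     [0.8, 1.0, 0.5],
     [0.2, 0.5, 1.0]]
print(compose(C))  # [[1.0, 0.8, 0.5], [0.8, 1.0, 0.5], [0.5, 0.5, 1.0]]
```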
Based on Definition 13, we have the following theorem.
Theorem 14 (see [23, 30]).
Let C=(ρij)m×m be an association matrix; then the composition matrix C2 is also an association matrix.
Theorem 15 (see [23, 30]).
Let C=(ρij)m×m be an association matrix; then, for any nonnegative integer k, the composition matrix C2k+1 derived from C2k+1=C2k∘C2k is also an association matrix.
Definition 16 (see [23, 30]).
Let C=(ρij)m×m be an association matrix, if C2⊆C, that is,
(25)maxk{min{ρik,ρkj}}≤ρij,∀i,j=1,2,…,m,
then C is called an equivalent association matrix.
By the transitivity principle of equivalent matrix, we can easily prove the following theorem.
Theorem 17 (see [23, 30, 32]).
Let C=(ρij)m×m be an association matrix; then, after a finite number of compositions C→C2→C4→⋯→C2k→⋯, there must exist a positive integer k such that C2k=C2k+1, and C2k is also an equivalent association matrix.
Definition 18 (see [23, 30, 31]).
Let C=(ρij)m×m be an equivalent correlation matrix. Then we call Cλ=(ρλij)m×m the λ-cutting matrix of C, where
(26)ρλij={0ifρij<λ,1ifρij≥λ,i,j=1,2,…,m
and λ is the confidence level with λ∈[0,1].
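Definition 18 amounts to thresholding the equivalent correlation matrix; a minimal sketch (our naming):

```python
def lambda_cut(C, lam):
    """Lambda-cutting matrix of Definition 18: entry 1 where rho_ij >= lam,
    else 0, for a confidence level lam in [0, 1]."""
    return [[1 if v >= lam else 0 for v in row] for row in C]

print(lambda_cut([[1.0, 0.4], [0.4, 1.0]], 0.5))  # [[1, 0], [0, 1]]
```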
Next, a traditional transfer closure algorithm is given as follows.
Step 1.
Let {A1,A2,…,Am} be a set of DHFSs in X={x1,x2,…,xn}. We can calculate the correlation coefficients of the DHFSs and then construct a correlation matrix C=(ρij)m×m, where ρij=ρ(Ai,Aj).
Step 2.
Check whether C=(ρij)m×m is an equivalent correlation matrix; that is, check whether it satisfies C2⊆C, where
(27) C2=C∘C=(ρ̄ij)m×m, ρ̄ij=maxk{min{ρik,ρkj}}, i,j=1,2,…,m.
If it does not hold, we construct the equivalent correlation matrix C2k: C→C2→C4→⋯→C2k→⋯, until C2k=C2k+1.
Step 3.
For a confidence level λ, we construct a λ-cutting matrix Cλ=(ρλij)m×m through Definition 18 in order to classify the DHFSs Aj (j=1,2,…,m). If all elements of the ith line (column) are the same as the corresponding elements of the jth line (column) in Cλ, then the DHFSs Ai and Aj are of the same type. By this principle, we can classify all m DHFSs Aj (j=1,2,…,m).
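Steps 1-3 above can be sketched end to end as follows, starting from an already computed association matrix (the function name and data layout are ours):

```python
def transfer_closure_clusters(C, lam):
    """Transfer closure clustering (Steps 2 and 3): square the association
    matrix by max-min composition until it stabilizes, threshold it at the
    confidence level lam, and group objects whose cut rows coincide."""
    m = len(C)

    def compose(M):
        return [[max(min(M[i][k], M[k][j]) for k in range(m))
                 for j in range(m)] for i in range(m)]

    while True:                      # C -> C^2 -> C^4 -> ... until stable
        C2 = compose(C)
        if C2 == C:
            break
        C = C2
    cut = [[1 if v >= lam else 0 for v in row] for row in C]
    groups = {}
    for i, row in enumerate(cut):    # identical rows => same class
        groups.setdefault(tuple(row), []).append(i)
    return sorted(groups.values())
```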
Analyzing the aforementioned transfer closure algorithm, we find that it has a drawback: the complex matrix synthesis operations needed to reconstruct the equivalent correlation matrix. In this paper, we have the following theorem for correlation coefficients in dual hesitant fuzzy environments.
Theorem 19.
For all x,y∈A and a confidence level λ, if there exist x1,x2,…,xp such that ρ(x,x1)≥λ, ρ(x1,x2)≥λ, ρ(x2,x3)≥λ,…,ρ(xp,y)≥λ, then x,x1,x2,…,xp, and y are of the same type.
Proof.
We are motivated by the transitivity principle of an ordinary equivalence relation R: for all x,y∈A (here, A is an ordinary set, not a fuzzy set), if there exist x1,x2,…,xp such that (x,x1)∈R, (x1,x2)∈R,…,(xk,xk+1)∈R,…,(xp,y)∈R, then (x,y)∈R.
And from Definition 18, we can see that the λ-cutting matrix of C is an ordinary correlation matrix, which completes the proof of Theorem 19.
From the above theoretical analysis, we propose a direct transfer algorithm for clustering DHFSs as follows.
Step 1.
Let A={A1,A2,…,Am} be a set of DHFSs in X={x1,x2,…,xn}. We can calculate the correlation coefficients of the DHFSs and then construct a correlation matrix C=(ρij)m×m, where ρij=ρ(Ai,Aj).
Step 2.
By setting the threshold to the confidence level λ, we construct a λ-cutting matrix Cλ=(ρλij)m×m. If ρλij=1, then the DHFSs Ai and Aj are of the same type. By this principle and Theorem 19, we can classify all m DHFSs Aj (j=1,2,…,m).
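By Theorem 19, the two steps amount to extracting the connected components of the graph whose edges join objects with ρij ≥ λ; a union-find sketch (our naming):

```python
def direct_transfer_clusters(C, lam):
    """Direct transfer clustering: objects i and j are of the same type when
    a chain i, x1, ..., xp, j exists whose consecutive association
    coefficients are all >= lam (Theorem 19), i.e. when they lie in the
    same connected component of the lam-cut graph."""
    m = len(C)
    parent = list(range(m))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(m):
        for j in range(i + 1, m):
            if C[i][j] >= lam:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(m):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

On any association matrix this yields the same classification as the transfer closure algorithm at the same confidence level (Theorem 20), while only scanning the matrix once.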
We can see that the transfer closure algorithm must construct the equivalent correlation matrix C→C2→C4→⋯→C2k→⋯, until C2k=C2k+1, and then construct a λ-cutting matrix Cλ=(ρλij)m×m through Definition 18 in order to classify the DHFSs Aj (j=1,2,…,m). In contrast, the direct transfer algorithm only constructs a λ-cutting matrix Cλ=(ρλij)m×m by setting the threshold to the confidence level λ and then classifies the DHFSs Aj (j=1,2,…,m) directly. In what follows, we discuss the relationship between the transfer closure algorithm and the direct transfer algorithm.
Theorem 20.
The clustering results are the same by the transfer closure algorithm and the direct transfer algorithm, at the same confidence level.
Proof.
(1) For a confidence level λ, for all Ai,Aj∈A, suppose there exist Ax1,Ax2,…,Axp such that ρ(Ai,Ax1)≥λ, ρ(Ax1,Ax2)≥λ, ρ(Ax2,Ax3)≥λ,…,ρ(Axp,Aj)≥λ; then Ai and Aj are of the same type by the direct transfer algorithm.
Assume that the transfer closure algorithm constructs the equivalent correlation matrix C2k. We must prove that ρC2k(Ai,Aj)≥λ. Consider

(28) ρC2(Ai,Aj) = maxAq∈A{min{ρC(Ai,Aq),ρC(Aq,Aj)}} ≥ min{ρC(Ai,Aj),ρC(Aj,Aj)} = ρC(Ai,Aj), i,j=1,2,…,m.

So C2⊇C and, for the same reason, C2k⊇C2k−1⊇C2k−2⊇⋯⊇C2⊇C. Since C2k is an equivalent correlation matrix, max-min transitivity applied along the chain Ax1,Ax2,…,Axp gives

(29) ρC2k(Ai,Aj) ≥ min{ρC2k(Ai,Ax1),ρC2k(Ax1,Ax2),…,ρC2k(Axp−1,Axp),ρC2k(Axp,Aj)} ≥ min{ρC(Ai,Ax1),ρC(Ax1,Ax2),…,ρC(Axp−1,Axp),ρC(Axp,Aj)} ≥ λ.
For a confidence level λ, when we get that Ai and Aj are of the same type using the direct transfer algorithm, we can also have the same clustering results by the transfer closure algorithm.
(2) For a confidence level λ, for all Ai,Aj∈A, suppose the equivalent correlation matrix C2k satisfies ρC2k(Ai,Aj)≥λ; then Ai and Aj are of the same type by the transfer closure algorithm.
Let
(30)ρC2k(Ai,Aj)=maxAq∈A{min{ρC2k-1(Ai,Aq),ρC2k-1(Aq,Aj)}}.
Then there exists Ax1 with ρC2k(Ai,Aj)=min{ρC2k−1(Ai,Ax1),ρC2k−1(Ax1,Aj)}≥λ, so ρC2k−1(Ai,Ax1)≥λ and ρC2k−1(Ax1,Aj)≥λ.
Thus Ai and Aj are of the same type in C2k−1 by the direct transfer algorithm.
For the same reason, there exist Ax2 with ρC2k−2(Ai,Ax2)≥λ and ρC2k−2(Ax2,Ax1)≥λ, and Ax3 with ρC2k−2(Ax1,Ax3)≥λ and ρC2k−2(Ax3,Aj)≥λ.
Hence Ai and Aj are of the same type in C2k−2 by the direct transfer algorithm.
Repeating this argument, there exist Ax1,Ax2,…,Ax2k such that ρC(Ai,Ax1)≥λ, ρC(Ax1,Ax2)≥λ, ρC(Ax2,Ax3)≥λ,…,ρC(Ax2k,Aj)≥λ.
Therefore Ai and Aj are of the same type in C by the direct transfer algorithm.
For a confidence level λ, when Ai and Aj are of the same type under the transfer closure algorithm, the direct transfer algorithm yields the same clustering results, which completes the proof.
We assume A={A1,A2,…,Am} to be a set of DHFSs. The transfer closure algorithm constructs the equivalent correlation matrix C2k: C→C2→C4→⋯→C2k→⋯, until C2k=C2k+1, and then constructs a λ-cutting matrix Cλ=(ρλij)m×m. Consequently, its running time is Ttca=O(km3+km2); by the same arguments, the direct transfer algorithm requires Tdta=O(m2) time on the same example. The transfer closure algorithm needs at least Stca=O(m2) space for the step of constructing the equivalent correlation matrix, while the direct transfer algorithm only constructs a λ-cutting matrix Cλ=(ρλij)m×m by setting the threshold to the confidence level λ and needs Sdta=O(m) space. The computational complexity of both algorithms depends on m, and the direct transfer algorithm exhibits better behavior.
Below, we conduct experiments in order to demonstrate the effectiveness of the proposed clustering algorithm for DHFSs.
Example 21.
Every diamond is a miracle of time and place and chance. Like snowflakes, no two are exactly alike. Every consumer shopping for diamonds is faced with endless diamond combinations. In addition, prices are influenced by market supply and demand conditions, fashion trends, and so forth. While consumers' tastes and budgets change, most seek a fair price for the diamond they choose. Until the middle of the twentieth century, there was no agreed-upon standard by which diamonds could be judged. No matter how beautiful a diamond may look, you simply cannot see its true quality. GIA created the first and now globally accepted standard for describing diamonds: color, clarity, cut, and carat weight. Concerning color, the less color in the stone, the more desirable and valuable it is; grades run from "D" to "Z." Clarity measures the amount, size, and placement of internal "inclusions" and external "blemishes"; grades run from "Flawless" to "Included." Cut does not refer to a diamond's shape but to the proportion and arrangement of its facets and the quality of workmanship; grades range from "Excellent" to "Poor." Carat refers to a diamond's weight; generally speaking, the higher the carat weight, the more expensive the stone. Two diamonds of equal carat weight, however, can have very different quality and price when the other three Cs are considered. We choose a "perfect" diamond whose 4Cs are "D" color, "FL" clarity, "3 excellent" cut, and "1 carat" weight. For convenience of analysis, the weight vector of these attributes is w=(0.25,0.25,0.25,0.25). Here, there are ten diamonds, and several evaluation organizations are requested to make the assessment. The normalized evaluation data, represented by DHFSs, are displayed in Table 6.
Table 6: Diamond data set.

A1: "D" color {(0.5, 0.4, 0.3); (0.4, 0.2)}; "FL" clarity {(0.6, 0.5); (0.3, 0.2, 0.1)}; "3 excellent" cut {(0.6, 0.4, 0.3); (0.4, 0.2, 0.1)}; "1 carat" weight {(0.6); (0.4)}.
A2: "D" color {(0.8, 0.7, 0.6); (0.2, 0.1)}; "FL" clarity {(0.7, 0.6); (0.3, 0.2, 0.1)}; "3 excellent" cut {(0.7, 0.6, 0.5); (0.3, 0.2, 0.1)}; "1 carat" weight {(0.7); (0.2)}.
A3: "D" color {(0.9, 0.8, 0.7); (0.1, 0.0)}; "FL" clarity {(0.8, 0.7); (0.2, 0.1, 0.0)}; "3 excellent" cut {(0.8, 0.7, 0.6); (0.2, 0.1, 0.0)}; "1 carat" weight {(0.9); (0.1)}.
A4: "D" color {(0.4, 0.3, 0.1); (0.6, 0.5)}; "FL" clarity {(0.6, 0.5); (0.4, 0.2, 0.1)}; "3 excellent" cut {(0.6, 0.5, 0.4); (0.3, 0.2, 0.1)}; "1 carat" weight {(0.3); (0.6)}.
A5: "D" color {(0.6, 0.5, 0.4); (0.3, 0.2)}; "FL" clarity {(0.3, 0.2); (0.6, 0.5, 0.4)}; "3 excellent" cut {(0.3, 0.2, 0.1); (0.6, 0.5, 0.4)}; "1 carat" weight {(0.1); (0.8)}.
A6: "D" color {(0.6, 0.5, 0.4); (0.4, 0.2)}; "FL" clarity {(0.7, 0.6); (0.3, 0.2, 0.1)}; "3 excellent" cut {(0.2, 0.1, 0.0); (0.7, 0.2, 0.1)}; "1 carat" weight {(0.8); (0.1)}.
A7: "D" color {(0.8, 0.6, 0.5); (0.2, 0.1)}; "FL" clarity {(0.6, 0.5); (0.3, 0.2, 0.1)}; "3 excellent" cut {(0.4, 0.3, 0.2); (0.5, 0.4, 0.3)}; "1 carat" weight {(0.5); (0.4)}.
A8: "D" color {(0.7, 0.6, 0.5); (0.2, 0.0)}; "FL" clarity {(0.4, 0.3); (0.6, 0.5, 0.4)}; "3 excellent" cut {(0.6, 0.5, 0.4); (0.4, 0.3, 0.2)}; "1 carat" weight {(0.8); (0.2)}.
A9: "D" color {(0.4, 0.3, 0.2); (0.6, 0.5)}; "FL" clarity {(0.4, 0.3); (0.6, 0.5, 0.4)}; "3 excellent" cut {(0.2, 0.1, 0.0); (0.8, 0.6, 0.5)}; "1 carat" weight {(0.2); (0.6)}.
A10: "D" color {(0.2, 0.1, 0.0); (0.7, 0.6)}; "FL" clarity {(0.8, 0.6); (0.2, 0.1, 0.0)}; "3 excellent" cut {(0.6, 0.5, 0.3); (0.4, 0.2, 0.1)}; "1 carat" weight {(0.7); (0.3)}.
Now we utilize the direct transfer algorithm to cluster the ten diamonds, which involves the following steps.
Step 1.
Utilize (21) to calculate the association coefficients among the ten diamonds and construct the association matrix.
We give a detailed sensitivity analysis with respect to the confidence level, and, by (26), we get all the possible classifications of the ten diamonds; see Table 7 and Figure 1.
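The calculation in Step 1 can be sketched in code. Since (21) is defined in an earlier section and not reproduced here, the sketch below assumes a common informational-energy style correlation coefficient for DHFSs, ρ(A, B) = C(A, B)/√(C(A, A)·C(B, B)); the function names and the exact formula are illustrative and may differ in detail from (21). The data are the normalized evaluations of A1 and A2 from Table 6.

```python
from math import sqrt

# A DHFS evaluation of one alternative: for each attribute, a pair
# (memberships, nonmemberships), each sorted in decreasing order and
# already normalized so that corresponding hesitant sets have equal length.

def inner(a, b):
    # averaged elementwise product of two equal-length hesitant sets
    return sum(x * y for x, y in zip(a, b)) / len(a)

def correlation(A, B):
    # informational-energy style correlation coefficient (assumed form)
    c_ab = sum(inner(h1, h2) + inner(g1, g2)
               for (h1, g1), (h2, g2) in zip(A, B))
    c_aa = sum(inner(h, h) + inner(g, g) for h, g in A)
    c_bb = sum(inner(h, h) + inner(g, g) for h, g in B)
    return c_ab / sqrt(c_aa * c_bb)

# Alternatives A1 and A2 from Table 6
A1 = [((0.5, 0.4, 0.3), (0.4, 0.2)), ((0.6, 0.5), (0.3, 0.2, 0.1)),
      ((0.6, 0.4, 0.3), (0.4, 0.2, 0.1)), ((0.6,), (0.4,))]
A2 = [((0.8, 0.7, 0.6), (0.2, 0.1)), ((0.7, 0.6), (0.3, 0.2, 0.1)),
      ((0.7, 0.6, 0.5), (0.3, 0.2, 0.1)), ((0.7,), (0.2,))]

rho = correlation(A1, A2)   # one entry of the association matrix
assert 0.0 < rho <= 1.0     # bounded by the Cauchy-Schwarz inequality
```

Repeating this over all pairs of alternatives fills the 10 × 10 association matrix used in the subsequent clustering steps.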
The above numerical analysis shows that, in a group setting, the experts' evaluations of the objects to be classified usually do not reach full agreement. Example 21 clearly shows that the clustering algorithm based on DHFSs provides a proper way to handle this issue.
In the following, Table 8 compares the method proposed in this paper with Chen et al.'s method [23] and Zhao et al.'s method [31].
Table 8: Comparisons of the derived results. Columns: Classes | results derived by the direct transfer algorithm | results derived by Chen et al.'s transfer algorithm.
Table 8 shows that the clustering results of the direct transfer clustering method proposed in this paper are exactly the same as those of Chen et al.'s transfer clustering method and Zhao et al.'s Boolean method. However, our method does not need the transitive closure technique to calculate the equivalent matrix of the association matrix and thus requires much less computational effort. The relatively high computational complexity of Chen et al.'s and Zhao et al.'s methods is precisely what motivates the clustering method proposed in this paper. Furthermore, Example 21 shows that the clustering results have much to do with the threshold: the smaller the confidence level is, the more detailed the clustering will be.
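One way to view the computational saving is in graph terms: cutting the equivalent (transitively closed) matrix at a confidence level λ yields exactly the connected components of the λ-cut of the original association matrix, so the classes can be read off with a single union-find pass per level instead of repeatedly squaring the matrix to reach the transitive closure. The sketch below illustrates this idea on a hypothetical 4 × 4 association matrix; it is not the paper's exact pseudocode.

```python
# Cluster at confidence level lam without building the transitive closure:
# lambda-cut the association matrix and take connected components
# via union-find (one O(n^2) pass over the matrix).

def clusters(assoc, lam):
    n = len(assoc)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if assoc[i][j] >= lam:           # edge survives the cut
                parent[find(i)] = find(j)    # union the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# hypothetical symmetric association matrix for four objects
M = [[1.0, 0.9, 0.2, 0.1],
     [0.9, 1.0, 0.3, 0.1],
     [0.2, 0.3, 1.0, 0.8],
     [0.1, 0.1, 0.8, 1.0]]

print(clusters(M, 0.7))   # → [[0, 1], [2, 3]]
print(clusters(M, 0.95))  # → [[0], [1], [2], [3]]
```

Raising λ removes edges and splits the partition into finer classes, which is exactly the sensitivity-to-threshold behavior observed in Example 21.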
5. Conclusions
The dual hesitant fuzzy set, as an extension of the fuzzy set, can describe situations in which people hesitate when making a decision more objectively than other extensions of the fuzzy set (the interval-valued fuzzy set, the intuitionistic fuzzy set, the type-2 fuzzy set, and the fuzzy multiset). In this paper, the correlation coefficients for DHFSs have been studied, their properties have been discussed, and the differences and connections among them have been investigated in detail. We have carried out clustering analysis under dual hesitant fuzzy environments with one typical real-world example. To further extend the application range of the present clustering algorithm, in particular to cases that require assigning weights to different experts, it will be necessary to generalize the original definition of DHFSs.
Given that DHFSs are a suitable technique for denoting the uncertain information widely encountered in daily life, potential applications of our algorithm in fields such as data mining, information retrieval, and pattern recognition may be directions for future research.
Acknowledgments
The authors are very grateful to the anonymous reviewers for their insightful and constructive comments and suggestions that have led to an improved version of this paper. This work is supported by the National Natural Science Foundation of China (no. 70971136).
References
[1] Z. Xu, “Choquet integrals of weighted intuitionistic fuzzy information,” Information Sciences, vol. 180, no. 5, pp. 726–736, 2010.
[2] P. Bonizzoni, G. D. Vedova, R. Dondi, and T. Jiang, “Correlation clustering and consensus clustering,” Lecture Notes in Computer Science, vol. 3827, pp. 226–235, 2005.
[3] H. P. Kriegel, P. Kröger, E. Schubert, and A. Zimek, “A general framework for increasing the robustness of PCA-based correlation clustering algorithms,” Lecture Notes in Computer Science, vol. 5069, pp. 418–435, 2008.
[4] D. G. Park, Y. C. Kwun, J. H. Park, and I. Y. Park, “Correlation coefficient of interval-valued intuitionistic fuzzy sets and its application to multiple attribute group decision making problems,” Mathematical and Computer Modelling, vol. 50, no. 9-10, pp. 1279–1293, 2009.
[5] E. Szmidt and J. Kacprzyk, “Correlation of intuitionistic fuzzy sets,” Lecture Notes in Computer Science, vol. 6178, pp. 169–177, 2010.
[6] G. W. Wei, H. J. Wang, and R. Lin, “Application of correlation coefficient to interval-valued intuitionistic fuzzy multiple attribute decision-making with incomplete weight information,” Knowledge and Information Systems, vol. 26, no. 2, pp. 337–349, 2011.
[7] J. Ye, “Multicriteria fuzzy decision-making method using entropy weights-based correlation coefficients of interval-valued intuitionistic fuzzy sets,” Applied Mathematical Modelling, vol. 34, no. 12, pp. 3864–3870, 2010.
[8] L. A. Zadeh, “Fuzzy sets,” Information and Control, vol. 8, pp. 338–353, 1965.
[9] W. L. Hung and J. W. Wu, “A note on the correlation of fuzzy numbers by expected interval,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 9, no. 4, pp. 517–523, 2001.
[10] D. H. Hong, “Fuzzy measures for a correlation coefficient of fuzzy numbers under TW (the weakest t-norm)-based fuzzy arithmetic operations,” Information Sciences, vol. 176, no. 2, pp. 150–160, 2006.
[11] K. T. Atanassov, “Intuitionistic fuzzy sets,” Fuzzy Sets and Systems, vol. 20, no. 1, pp. 87–96, 1986.
[12] K. Atanassov and G. Gargov, “Interval valued intuitionistic fuzzy sets,” Fuzzy Sets and Systems, vol. 31, no. 3, pp. 343–349, 1989.
[13] T. Gerstenkorn and J. Mańko, “Correlation of intuitionistic fuzzy sets,” Fuzzy Sets and Systems, vol. 44, no. 1, pp. 39–43, 1991.
[14] H. Bustince and P. Burillo, “Correlation of interval-valued intuitionistic fuzzy sets,” Fuzzy Sets and Systems, vol. 74, no. 2, pp. 237–244, 1995.
[15] W. L. Hung, “Using statistical viewpoint in developing correlation of intuitionistic fuzzy sets,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 9, no. 4, pp. 509–516, 2001.
[16] H. B. Mitchell, “A correlation coefficient for intuitionistic fuzzy sets,” International Journal of Intelligent Systems, vol. 19, no. 5, pp. 483–490, 2004.
[17] W. L. Hung and J. W. Wu, “Correlation of intuitionistic fuzzy sets by centroid method,” Information Sciences, vol. 144, no. 1–4, pp. 219–225, 2002.
[18] Z. Xu, “On correlation measures of intuitionistic fuzzy sets,” Lecture Notes in Computer Science, vol. 4224, pp. 16–24, 2006.
[19] P. J. Robinson and E. C. H. Amirtharaj, “Vague correlation coefficient of interval vague sets,” International Journal of Fuzzy System Applications, vol. 2, no. 1, pp. 18–34, 2012.
[20] P. J. Robinson and E. C. H. Amirtharaj, “A short primer on the correlation coefficient of vague sets,” International Journal of Fuzzy System Applications, vol. 1, no. 2, pp. 55–69, 2011.
[21] V. Torra, “Hesitant fuzzy sets,” International Journal of Intelligent Systems, vol. 25, no. 6, pp. 529–539, 2010.
[22] V. Torra and Y. Narukawa, “On hesitant fuzzy sets and decision,” in Proceedings of the 18th IEEE International Conference on Fuzzy Systems, Jeju Island, Korea, August 2009, pp. 1378–1382.
[23] N. Chen, Z. Xu, and M. Xia, “Correlation coefficients of hesitant fuzzy sets and their applications to clustering analysis,” Applied Mathematical Modelling, vol. 37, no. 4, pp. 2197–2211, 2013.
[24] Z. Xu and M. Xia, “On distance and correlation measures of hesitant fuzzy information,” International Journal of Intelligent Systems, vol. 26, no. 5, pp. 410–425, 2011.
[25] D. Dubois and H. Prade, Fuzzy Sets and Systems: Theory and Applications, Academic Press, New York, NY, USA, 1980.
[26] B. Zhu, Z. Xu, and M. Xia, “Dual hesitant fuzzy sets,” Journal of Applied Mathematics, vol. 2012, Article ID 879629, 2012.
[27] Z. Xu and M. Xia, “Distance and similarity measures for hesitant fuzzy sets,” Information Sciences, vol. 181, no. 11, pp. 2128–2138, 2011.
[28] E. Szmidt and J. Kacprzyk, “A similarity measure for intuitionistic fuzzy sets and its application in supporting medical diagnostic reasoning,” Lecture Notes in Computer Science, vol. 3070, pp. 388–393, 2004.
[29] I. K. Vlachos and G. D. Sergiadis, “Intuitionistic fuzzy information—applications to pattern recognition,” Pattern Recognition Letters, vol. 28, no. 2, pp. 197–206, 2007.
[30] Z. Xu, J. Chen, and J. Wu, “Clustering algorithm for intuitionistic fuzzy sets,” Information Sciences, vol. 178, no. 19, pp. 3775–3790, 2008.
[31] H. Zhao, Z. Xu, and Z. Wang, “Intuitionistic fuzzy clustering algorithm based on Boolean matrix and association measure,” International Journal of Information Technology & Decision Making, vol. 12, no. 1, pp. 95–118, 2013.
[32] P. Z. Wang, Fuzzy Set Theory and Its Applications, Shanghai Scientific and Technical Publishers, Shanghai, China, 1983.