Research on Selection Method of Privacy Parameter ε

Introduction
In recent years, with the rapid development of information technology, user data have experienced explosive growth. Personal information extracted by data mining and information collection has become a valuable resource for the research and decision-making of research institutions, organizations, and government departments [1]. The analysis and use of massive user data not only bring convenience to people's lives but also pose a great threat to user privacy [2].
More and more people pay attention to protecting data privacy while using data. On the one hand, for published data, k-anonymity, l-diversity, and t-closeness protect sensitive information from attacks such as link attacks, skew attacks, and background knowledge attacks [3][4][5][6][7]. However, lacking a strong attack model, these schemes are weak against background knowledge attacks. Existing privacy protection models also lack effective and rigorous methods to prove and quantify the level of privacy protection: once the model parameters change, the quality of privacy protection is no longer guaranteed. Differential privacy, in contrast, resists the above attacks and provides strong privacy guarantees, so it has been widely adopted by scholars [8,9].

Motivation. Privacy protection theory and technology need to be able to resist different attack means. Moreover, with the rapid development of data analysis techniques such as data mining in recent years, attackers can extract information related to user privacy from massive data. Therefore, how to protect the privacy of user data while providing highly available data in the process of data query, publishing, and sharing has become a research hotspot in privacy protection [10,11].
At present, most of the proposed privacy protection schemes use anonymization, obfuscation, or data distortion (such as adding random noise) and rely on regression analysis, data distortion adjustment, and noise scale parameter tuning to reduce the error caused by noise and thereby improve data availability [12][13][14]. However, these schemes share a shortcoming: returning the same query results to users with different permissions and reputation levels can disclose private information when sensitive data are queried.
The differential privacy method has become a hot research topic in many practical applications in recent years. Compared with traditional privacy protection models, differential privacy has unique advantages. Firstly, the model assumes that the adversary has maximal background knowledge. Secondly, differential privacy has a solid mathematical foundation, a strict definition of privacy protection, and a reliable quantitative evaluation method. By using output perturbation to add random noise to the query output, whether a single record is in the dataset or not has little impact on the computed results. Even an adversary with maximal background knowledge cannot obtain accurate individual information by observing the computation results.
Research on differential privacy protection mainly focuses on improving privacy protection and data utility in differentially private data release, but only a small amount of rigorous mathematical reasoning addresses how to configure the privacy protection parameter in practice. In practice, the dataset size, the query function sensitivity, and the privacy protection probability threshold should all be considered when configuring the privacy parameter.
Differential privacy rests on a sound mathematical basis and can quantitatively describe the risk of privacy disclosure [1]. These two features distinguish it from other methods. Even in the worst case, when an adversary knows all the sensitive data except one record, it ensures that the sensitive information will not be disclosed, because the adversary cannot judge from the query output whether that record is in the dataset [2].
Research on differential privacy protection technology mainly considers three problems: (1) how to ensure that the designed algorithm satisfies differential privacy so that data privacy is not leaked; (2) how to reduce the error probability to improve the availability of data; (3) facing different environments and attack modes, how to determine the value range of the parameter ε and give a credible and rigorous proof.
1.2. Contributions. Aiming at the above problems, in order to ensure the privacy and availability of sensitive data during data query, prevent the leakage of real data, and reduce the probability that attackers obtain true results through differential attacks and probabilistic reasoning attacks, we study differential privacy parameter selection methods in various situations. The specific contributions are as follows: (i) We propose a differential privacy parameter configuration method based on a fault tolerance interval, analyze the adversary's fault tolerance under different noise distribution location parameters and scale parameters, and study the influence of the user's query permission on privacy parameter configuration.
(ii) We study the location and scale parameters in detail and propose a differential privacy mechanism for multi-user query scenarios. (iii) For a single attack, we propose a differential privacy attack algorithm and calculate the upper bound of the parameter ε based on the sensitivity Δq, the length of the fault tolerance interval L, and the success probability p. Furthermore, we propose an attack model for the security of differential privacy under repeated attacks, analyze the results of repeated attacks and the characteristics of the noise distribution function to obtain the probability of noise falling into the fault-tolerant interval, deduce the probability of a successful attack by permutation and combination, and then obtain the selection range of the parameter ε. (iv) We design several experiments, analyze the relationship between the adversary's fault tolerance and the privacy parameters, derive the configuration formula of the privacy parameter ε, and configure appropriate parameters without violating the privacy probability threshold.
This paper studies the selection of the parameter ε in three cases of differential privacy. The structure of this paper is as follows. In Section 2, we review the research progress on differential privacy parameters. In Section 3, we introduce the concepts and theory of differential privacy. In Section 4, we propose a privacy parameter selection method based on fault tolerance and analyze the case of multiple scale parameters. In Section 5, we propose a differential privacy algorithm for multi-user queries. In Section 6, we introduce query attack models in differential privacy. In Section 7, we design relevant experiments and show the characteristics of the method through analysis and comparison. In Section 8, we summarize and propose future work.

Related Work
Recently, many achievements have been made in differential privacy research. At present, research on differential privacy protection technology combines database theory, clustering algorithms, statistics, and modern cryptography [1,2]. It defines a very strict mathematical model and provides a rigorous, quantitative representation and proof of privacy leakage risk [3][4][5][6][7][15]. Based on content relevance, this paper divides the research work on differential privacy protection into two parts.

Research on the Basic Theory of Differential Privacy

How to reduce the noise added to a dataset under differential privacy: Yi and Zhabin [16] proposed a data publishing algorithm based on wavelet transform, which can effectively reduce the size of the parameter ε and improve the accuracy of histogram counting queries. Park and Hon [10] studied the parameter ε for differential privacy and introduced a new attack index to capture the relationship between attack probability and privacy assurance. Yao [12] introduced the concept of α-mutual information security and showed that statistical security implies mutual information security. Du and Wang [13] proposed a query model and implemented differential privacy with Laplace noise. Tsou and Chen [17] quantified the disclosure risk and linked differential privacy with k-anonymity. Zhang and Liu [18] proposed a privacy-preserving decision tree classification model based on a differential privacy mechanism, which, through the Laplace mechanism and the exponential mechanism, provided users with a secure data access interface and optimized the search scheme to reduce the error rate. Lin et al. [19] proposed an optimized differentially private online transaction scheme for online banking, which set a consumption boundary with additional noise and selected different boundaries while satisfying the definition of differential privacy; they also provided a theoretical analysis to prove that the scheme meets the differential privacy restriction.
The choice of a privacy mechanism usually does not have a significant impact on performance but is critical to maintaining the usability of the result. Goryczka and Xiong [20] described and compared distributed data aggregation methods with security and confidentiality, studied a secure multiparty addition protocol, and proposed a new efficient Laplace mechanism, ensuring secure computation, minimal communication traffic, and high system reliability. Kang and Li [21] proposed a new framework based on differential privacy that purposefully adds noise to locally perturb training parameters, achieving a compromise between convergence performance and privacy protection level.
Li et al. [22] focused on linear query functions based on the Laplace mechanism and proposed a method to determine the upper bound of the number of linear queries from the perspective of information theory. Huang and Zhou [23] proposed a differential privacy mechanism to optimize the number of queries in multi-user scenarios and analyzed the distortion of the data distribution and the absolute value of the noise in terms of utility. Ye and Alexander [15] studied the minimax estimation problem for discrete distributions under an ε-privacy constraint, characterizing the privatization scheme that minimizes the expected estimation loss at a given privacy level.

Application of Differential Privacy
Differential privacy has a wide range of applications. Cheng et al. [11] realized the private publishing of high-dimensional data and determined the optimal parameters by non-overlapping coverage. The studies in [14,24] introduced differential privacy to protect data privacy and prevent the adversary from inferring important sensitive information. Due to the high complexity and dimensionality of the data, [25] proposed a data partition technique and further used an interactive differential privacy strategy to resist privacy leakage.
Based on noise estimation and Laplace mechanism, the work in [26] studied the trade-off relationship between privacy and utility, derived the optimal differential privacy mechanism, and effectively adapted to the needs of personalized privacy protection.
Zhang et al. [27] formally studied privacy-preserving set-valued data publishing on a hybrid cloud, provided a complete system framework, designed a new data partition mechanism, and further built query analysis tools that automatically switch query structure to optimize hybrid cloud data queries while ensuring data confidentiality. In a voting system, users can report their desired parameter values to the selector mechanism. Without limiting user preferences, [28] struck a balance between protecting personal privacy and returning accurate results through control of the parameter ε.
Sun and Tay [29] constructed an optimization framework combining local variance privacy and inferential privacy measures and proposed a two-stage local privacy mapping model that achieves information privacy and local variance privacy within a predetermined budget. Cao and Yoshikawa [30] studied the potential privacy loss of a traditional differential privacy mechanism under temporal correlations, analyzed the privacy loss against adversaries exploiting time dependence, and designed a fast algorithm to quantify the temporal privacy leakage. Based on the differential privacy model, the study in [31] constructed a privacy protection method based on clustering and noise and proposed a privacy measurement algorithm based on adjacency degree, which can objectively evaluate the privacy protection strength of various schemes and prevent graph structure and degree attacks.
In cloud services, the study in [32] proposed a priority-ranking query information retrieval scheme to reduce query overhead on the cloud. A higher-ranking query can retrieve a higher percentage of matching files, so users can retrieve files on demand by selecting different query levels. Sun and Wang [33] proposed a weight calculation system based on the classification and regression tree method, which combined differential privacy and decision trees and used a differentially private mini-batch gradient descent algorithm to track privacy loss and prevent an adversary from invading personal privacy. Chamikara et al. [34] proposed a recognition protocol that uses differential privacy to perturb facial features and stores the data on a third-party server, which can effectively prevent attacks such as membership inference and model memorization attacks.
To determine a reasonable release time for dynamic positioning data, the study in [35] designed an adaptive sampling method based on a proportional-integral-derivative controller and proposed a heuristic quadtree partition method and a privacy budget allocation strategy to protect the differential privacy of published data, which improved the accuracy of statistical queries and the availability of published data. There is often a trade-off between privacy and mining results. Xu and Jiang [36] described the interaction between users in a distributed classification scenario, constructed a Bayes classifier, and proposed an algorithm that allows users to change their privacy budget, so users can add noise to meet different privacy standards. Yin and Xi [37] combined practicability with privacy to establish a multi-level location information tree model and used the exponential mechanism of differential privacy to add noise to the access frequency of selected data.

Basic Concepts
This section introduces some concepts of differential privacy and related theories.
Definition 1 (adjacent datasets [1]). Given datasets D and D′ with the same attribute structure, if they differ in exactly one record, D and D′ are called adjacent datasets.
Definition 2 (differential privacy [1]). A random algorithm A satisfies ε-differential privacy if and only if, for any two adjacent datasets D and D′ differing in at most one tuple and any output set S, the following condition is met:

Pr[A(D) ∈ S] ≤ e^ε × Pr[A(D′) ∈ S],

where ε is a constant chosen by the user and e is the base of the natural logarithm.
When the parameter ε is small enough, it is difficult for an adversary to distinguish whether the query function acts on D or on D ′ for the same output S.
Definition 3 (global sensitivity [1]). For a function q: D → R^d, the global sensitivity Δq of q is

Δq = max_{D,D′} ‖q(D) − q(D′)‖_1,

where D and D′ are adjacent datasets, d is the output dimension of q, and ‖q(D) − q(D′)‖_1 is the 1-norm distance between q(D) and q(D′).
Definition 4 (Laplace mechanism [1]). The Laplace mechanism adds independent noise to the true answer, where Lap(b) denotes noise drawn from the Laplace distribution with scale parameter b. For a function q: D → R over a dataset D, the mechanism

A(D) = q(D) + Lap(Δq/ε)

provides ε-differential privacy: for a query q on the database D, the random algorithm A returns q(D) + x to the user, where the noise x follows the Laplace distribution. In probability and statistics, the probability density function of the variable x is

f(x | μ, b) = (1/(2b)) exp(−|x − μ|/b).

This is the Laplace distribution, where μ is the location parameter and b > 0 is the scale parameter; x is a sample from the f(μ, b) Laplace distribution, x ~ f(μ, b), with b = Δq/ε; notice that the larger the ε, the smaller the b. For convenience of discussion, μ = 0; the expectation and variance are μ and 2b², respectively. The implementation of an ε-differential privacy algorithm is relatively simple. In the Laplace distribution f(μ, b), the location parameter μ does not affect the adversary, while the parameter b = Δq/ε directly affects the vulnerability to attack. When b is smaller, the sample x lies closer to the location parameter μ; conversely, when b is large enough, the sample x spreads almost uniformly over (−∞, +∞), which makes the adversary's task very difficult.
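As a concrete illustration, the Laplace mechanism of Definition 4 can be sketched in a few lines of Python (a minimal sketch of the standard mechanism, not the paper's implementation; the function names are our own):

```python
import math
import random

def laplace_noise(mu=0.0, b=1.0):
    """Sample from Laplace(mu, b) by inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    return mu - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_answer, sensitivity, epsilon, mu=0.0):
    """Return q(D) + x with x ~ Lap(mu, b) and b = sensitivity / epsilon."""
    b = sensitivity / epsilon  # b = Δq/ε: larger ε gives smaller b, less noise
    return true_answer + laplace_noise(mu, b)

# Example: a counting query has Δq = 1; answer it under ε = 0.5
noisy_count = laplace_mechanism(true_answer=1024, sensitivity=1.0, epsilon=0.5)
```

With μ = 0 the noise has expectation 0 and variance 2b², so averaging many draws of the same query would approach the true answer, which is exactly why the number of repeated identical queries must be limited (Section 6.2).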
Definition 5 ((α, β)-useful) (see [1,38]). A mechanism A is (α, β)-useful if, for every query q,

Pr(|A(q, D) − q(D)| ≤ α) ≥ 1 − β,

where α and β are accuracy parameters.
Theory 1 (sequential composition theory [2]). Let A_1, A_2, ..., A_k satisfy ε_1-differential privacy, ε_2-differential privacy, ..., ε_k-differential privacy, respectively. When they are applied to the same dataset, publishing all their results satisfies (Σ_i ε_i)-differential privacy.

Theory 3 (convexity theory [38]). Given two algorithms A_1 and A_2 that both satisfy ε-differential privacy and any probability p ∈ [0, 1], let A_p be the mechanism that runs A_1 with probability p and A_2 with probability 1 − p; then A_p satisfies ε-differential privacy.

Privacy Parameter Selection Based on Fault Tolerance

The query value obtained by the adversary is generated from the real value, so the distribution of the noise directly affects the probability of the adversary obtaining the real information.
4.1. Privacy Fault Tolerance. For some query functions, if the noise x is distributed in [−L, L] (L > 0), the adversary can infer the true value q(D) with a large probability and then determine whether a specific record is in the dataset. In this paper, [−L, L] is called the fault tolerance interval, and the corresponding fault tolerance is fatl(x).
According to the definition of the Laplace distribution with cumulative distribution function F(x), the probability that the random noise x lies in the fault tolerance interval gives the mathematical expression of the adversary's fault tolerance fatl(x):

fatl(x) = Pr(−L ≤ x ≤ L) = F(L) − F(−L).

Through this mathematical analysis, we can select an appropriate privacy parameter ε and add noise that meets the requirements of differential privacy protection, so as to prevent the adversary's probabilistic reasoning attack.
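The fault tolerance can be evaluated directly from the Laplace cumulative distribution function; the following sketch (our own helper names, assuming b = Δq/ε) computes fatl(x) = F(L) − F(−L):

```python
import math

def laplace_cdf(t, mu=0.0, b=1.0):
    """CDF of the Laplace(mu, b) distribution."""
    if t < mu:
        return 0.5 * math.exp((t - mu) / b)
    return 1.0 - 0.5 * math.exp(-(t - mu) / b)

def fault_tolerance(L, epsilon, sensitivity=1.0, mu=0.0):
    """fatl(x) = Pr(-L <= x <= L) for x ~ Lap(mu, b), with b = Δq/ε."""
    b = sensitivity / epsilon
    return laplace_cdf(L, mu, b) - laplace_cdf(-L, mu, b)
```

For μ = 0 this reduces to 1 − e^(−Lε/Δq), and moving μ outside [−L, L] drives fatl(x) down, matching the discussion of Figure 1 in Section 7.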

4.2. Analysis of the Privacy Parameter.
When the adversary's fault tolerance satisfies the privacy probability threshold, an appropriate scale parameter value can be obtained. In this method, the privacy probability threshold PT_pr ∈ (0, 1) is determined by the privacy attribute, meaning that the success probability of the adversary's probabilistic inference attack must not exceed the privacy protection threshold.
To meet the requirements of privacy protection, the scale parameter b must satisfy

fatl(x) ≤ PT_pr. (8)

The mathematical expression of the fault tolerance fatl(x) takes different forms according to the location parameter μ:

(1) When μ ≤ −L ≤ L, we obtain fatl(x) = (1/2)(e^((μ+L)/b) − e^((μ−L)/b)).
(2) When b > 0, solving formula (8) in this case gives the corresponding lower bound on b.
(3) When −L ≤ μ ≤ L, we obtain fatl(x) = 1 − (1/2)(e^(−(L−μ)/b) + e^(−(L+μ)/b)).
(4) When b > 0, solving the above inequality gives the bound on b; in particular, for μ = 0 it reduces to b ≥ L/ln(1/(1 − PT_pr)).
(5) When −L ≤ L ≤ μ, formula (8) can be rewritten with fatl(x) = (1/2)(e^(−(μ−L)/b) − e^(−(μ+L)/b)).

The budget parameter ε = Δq/b can then be configured accordingly; for example, with μ = 0, ε ≤ (Δq/L)ln(1/(1 − PT_pr)). From the above analysis, we can deduce the selection range of the privacy parameter ε under different location parameters, scale parameters, and privacy probability thresholds.
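Since fatl(x) is monotonically increasing in ε whenever the location parameter satisfies |μ| ≤ L, the largest ε that respects the threshold PT_pr can be found numerically; a sketch under that assumption (bisection, our own naming, with the μ = 0 closed form as a check):

```python
import math

def fatl(epsilon, L, sensitivity=1.0, mu=0.0):
    """fatl(x) = F(L) - F(-L) under Lap(mu, b), b = Δq/ε."""
    b = sensitivity / epsilon
    cdf = lambda t: (0.5 * math.exp((t - mu) / b) if t < mu
                     else 1.0 - 0.5 * math.exp(-(t - mu) / b))
    return cdf(L) - cdf(-L)

def max_epsilon(PT_pr, L, sensitivity=1.0, mu=0.0, hi=100.0):
    """Largest ε with fatl(ε) <= PT_pr; assumes |mu| <= L so that
    fatl is monotone increasing in ε and bisection is valid."""
    assert abs(mu) <= L
    lo_eps, hi_eps = 1e-9, hi
    for _ in range(200):
        mid = 0.5 * (lo_eps + hi_eps)
        if fatl(mid, L, sensitivity, mu) <= PT_pr:
            lo_eps = mid
        else:
            hi_eps = mid
    return lo_eps

# For mu = 0 the closed form is ε = (Δq/L) * ln(1/(1 - PT_pr))
```

For μ outside [−L, L] the fault tolerance is low for every ε (the noise concentrates outside the interval), which is the regime where the adversary cannot recover the real value anyway.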
In this paper, the value range of the query permission P_b is set as [0, 1]. To configure a smaller privacy budget for users with low query permission, the privacy budget parameter is set as ε′ = ε · P_b. Based on this, the configuration of the privacy parameter ε′ under different query permissions follows from the formulas for ε derived above. Through this privacy parameter configuration method, the privacy protection probability threshold can be set, and the appropriate privacy parameter ε can be selected according to the query function and the fault tolerance, so as to achieve privacy protection while maximizing data utility.

Differential Privacy of Multi-User Query
In this section, we continue to study the location and scale parameters and propose a differential privacy mechanism for the multi-user query setting.
Assume that the number of users is m, and the query number of each user is k.
The query set is Q = {q_ij}, where q_ij denotes the j-th query of the i-th user. The answers for the i-th user are perturbed with scale parameter b = kΔq/ε and location parameter μ = μ_i, where μ_i is randomly chosen from the interval [μ − L, μ + L]. According to Definition 3, the global sensitivity is Δq. The value r̃_ij = r_ij + x_ij is the noisy answer to the query q_ij on the database D, where r_ij is the real value of q_ij and x_ij is noise with b = kΔq/ε and μ = μ_i. Similarly, r̃′_ij = r′_ij + x′_ij is the noisy answer to q_ij on the database D′, where x′_ij is the noise and r′_ij is the real value of q_ij on D′.

Theory 4. For the database D and query set Q, the mechanism A is ε-differentially private.
Proof. For D, D′ and the i-th user's query q_ij, the location parameter is μ_i, and x_ij follows the Laplace distribution with scale b = kΔq/ε, so for any output value r̃,

f(r̃ − r_ij | μ_i, b) / f(r̃ − r′_ij | μ_i, b) ≤ exp(|r_ij − r′_ij| / b) ≤ exp(Δq · ε/(kΔq)) = e^(ε/k),

since |r_ij − r′_ij| ≤ Δq for adjacent databases. Hence each query q_ij of the i-th user is answered with (ε/k)-differential privacy. In Algorithm 1, the database is denoted by D and its global sensitivity is Δq; for each query q_ij of the i-th user, the privacy budget is ε/k. According to Theory 1 (sequential composition) applied over the query set Q, this mechanism is ε-differentially private. □
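A runnable sketch of this multi-user mechanism (function names and the toy data are ours, not the paper's) looks as follows:

```python
import math
import random

def lap(b, mu=0.0):
    """One draw from Laplace(mu, b) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return mu - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def multi_user_query(D, queries_per_user, sensitivity, epsilon, mu=0.0, L=0.5):
    """Each user i gets a private location parameter mu_i from [mu-L, mu+L];
    each of that user's k queries is answered with scale b = k*Δq/ε, so the
    user's whole answer set consumes privacy budget ε (sequential composition)."""
    answers = []
    for user_queries in queries_per_user:
        k = len(user_queries)
        mu_i = random.uniform(mu - L, mu + L)   # per-user location parameter
        b = k * sensitivity / epsilon           # budget ε/k per query
        answers.append([q(D) + lap(b, mu_i) for q in user_queries])
    return answers

# Toy example: count queries (Δq = 1) over a small list of ages
D = [18, 25, 31, 40, 52, 67]
queries = [
    [lambda d: sum(1 for x in d if x > 30)],                    # user 1: 1 query
    [lambda d: len(d), lambda d: sum(1 for x in d if x < 50)],  # user 2: 2 queries
]
noisy = multi_user_query(D, queries, sensitivity=1.0, epsilon=1.0)
```

Note that, because E[μ_i] = μ and the Laplace noise is centered at μ_i, averaging a user's answer over many independent runs still converges to the true value, which is why the budget must be divided by the number of queries k.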

Research on the Attack Model
In actual application scenarios, users often face different privacy attack problems. This section is divided into two parts: the single attack and the repeated attack.
6.1. Single Attack. Assume that, in the worst case, there are only two potential input sets; this section discusses how the adversary guesses the real value q(D) from q(D) + x. An adversary issues a query q against the attack object. The database owner computes the result q(D) for the query and returns it to the adversary after adding the noise x. The adversary must judge from the result q(D) + x whether the attack object is in the dataset. Each noise value x follows the Laplace distribution, so the adversary cannot guess x exactly. Considering the characteristics of query functions, the adversary can only guess that x falls in a certain range. The probability of x lying in the interval [μ − L, μ + L] decreases as b increases, which reflects the difficulty of the adversary's task.

Security and Communication Networks
Lemma 1. If Laplace noise x is added to q(D), then the probability of q(D) + x falling in the interval (−∞, q(D) + μ + L) is 1 − (1/2)e^(−Lε/Δq).

Proof. The probability of q(D) + x falling in the interval (−∞, q(D) + μ + L) is equal to the probability of x falling in (−∞, μ + L). Therefore, from the Laplace distribution function with b = Δq/ε, the probability of x in (−∞, μ + L) is

F(μ + L) = 1 − (1/2)e^(−L/b) = 1 − (1/2)e^(−Lε/Δq).

The probability of the adversary's success in Algorithm 2 is 1 − (e^(−ε/2)/2).
With Lemma 2 and Algorithm 2, requiring the adversary's success probability 1 − (1/2)e^(−Lε/Δq) to be at most p and solving for ε yields the upper bound

ε ≤ (Δq/L) ln(1/(2(1 − p))). (20)

The upper bound of the parameter ε in formula (20) is independent of the dataset; it depends only on the query function (through Δq and L) and the adversary's success probability p. □

6.2. Repeated Attack. Although differential privacy is the state of the art in protecting personal privacy, the Laplace mechanism has an obvious weakness: if the adversary can issue the same query infinitely often, he can infer the real query result by observing where the query results concentrate. Therefore, it is necessary to study the limit on the number of query times.
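The bound in formula (20) is easy to evaluate; the following sketch (our own helper name) computes the largest ε that keeps the single-attack success probability at most p and checks it against the success formula of Lemma 2:

```python
import math

def epsilon_upper_bound(p, L, sensitivity=1.0):
    """Largest ε such that 1 - (1/2)exp(-L*ε/Δq) <= p; needs p > 1/2,
    since even ε -> 0 leaves the adversary a success probability of 1/2."""
    assert 0.5 < p < 1.0
    return (sensitivity / L) * math.log(1.0 / (2.0 * (1.0 - p)))

# Δq = 1, L = 0.5, p = 0.9  =>  ε <= 2*ln(5) ≈ 3.22
eps_max = epsilon_upper_bound(0.9, 0.5)
```

A stricter cap on the adversary's success probability forces a smaller ε, i.e., more noise, which is the expected privacy/utility trade-off.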
According to the above sections, an adversary obtains the results q(D) + x_1, q(D) + x_2, ..., q(D) + x_N after N attacks.

Lemma 3. If the adversary attacks N times and adds noise x_1, x_2, ..., x_N to q(D) by the Laplace distribution, the probability that exactly n of the values q(D) + x_i fall in (−∞, q(D) + μ + L) is

C_N^n (1 − (1/2)e^(−Lε/Δq))^n ((1/2)e^(−Lε/Δq))^(N−n).

Proof. According to Lemma 1, in a single query the probability of q(D) + x falling in (−∞, q(D) + μ + L) is 1 − (1/2)e^(−Lε/Δq). If there are n such results in the interval, then by the binomial distribution the probability of n among N repeated attacks is C_N^n (1 − (1/2)e^(−Lε/Δq))^n ((1/2)e^(−Lε/Δq))^(N−n).

In Algorithm 3, Δq = 1 is a normal counting query, μ = 0, and the half-length of the fault-tolerant interval is L = 0.5.

ALGORITHM 1: Multi-user query.
Require: the number of users m; the number of queries k for each user; the query set Q; the interval [μ − L, μ + L]; the database D with its global sensitivity Δq; the privacy budget ε.
Ensure: the set of answers {r̃_ij} for the queries.
(1) For each user i ∈ [m] do
(2) Choose μ_i from [μ − L, μ + L] for the i-th user
(3) Set the i-th user's noise distribution Lap(kΔq/ε, μ_i)
(4) For each query q_ij ∈ Q do
(5) The answer r̃_ij = q_ij(D) + Lap(kΔq/ε, μ_i)
(6) End

ALGORITHM 2: Single attack. Input: A(q(D)) = x + q(D). Output: present or absent. /* Laplace distribution f(μ, b), and q(D) ∈ {y, y + 1} */

Proof. Let N = n + 1 and assume that q(D) = y or q(D) = y + 1. Consider the two intervals (−∞, y + 0.5] and (y + 0.5, +∞): after N attacks, a results fall into (y + 0.5, +∞) and b results fall into (−∞, y + 0.5]. According to Lemma 4, if q(D) = y + 1, the attack succeeds when a > b; summing the binomial probabilities over all outcomes with a > b gives the probability of a successful attack. Here q(D) = y indicates that the attack object is not in the original dataset, and q(D) = y + 1 indicates that the attack object is in the original dataset.
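To make the repeated-attack analysis concrete, the following Monte Carlo sketch (our own construction, using the Algorithm 3 settings Δq = 1, μ = 0, L = 0.5) estimates the success rate of a majority-vote adversary that must decide whether q(D) = y or q(D) = y + 1:

```python
import math
import random

def lap(b, mu=0.0):
    """One draw from Laplace(mu, b) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return mu - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def attack_once(true_q, y, N, epsilon):
    """Repeat the query N times; guess y+1 if most noisy answers exceed y+0.5."""
    b = 1.0 / epsilon  # Δq = 1, μ = 0
    above = sum(1 for _ in range(N) if true_q + lap(b) > y + 0.5)
    guess = y + 1 if above > N - above else y
    return guess == true_q

def success_rate(N, epsilon, trials=4000):
    """Empirical success probability of the repeated attack."""
    y = 10
    wins = 0
    for _ in range(trials):
        true_q = random.choice([y, y + 1])  # object absent or present
        wins += attack_once(true_q, y, N, epsilon)
    return wins / trials
```

For a single attack (N = 1) with ε = 1 the success probability is 1 − (1/2)e^(−0.5) ≈ 0.70, and it climbs toward 1 as N grows, which is exactly the leakage that motivates limiting the number of repeated queries.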
The experiment uses the UCI machine learning Adult dataset, which contains 48,842 records of US census data with 14 attributes. Here, we select the five attributes shown in Table 1: education, marital status, occupation, native country, and work class.

7.1. Fault-Tolerance Experiment.
To present the problem more intuitively, the parameter ε is analyzed qualitatively and quantitatively according to the configuration method of the privacy parameter in Section 4.
In Figure 1, PT_pr = 0.7. When the location parameter μ lies outside the fault tolerance interval (μ ≤ −L ≤ L or −L ≤ L ≤ μ), the adversary's fault tolerance on the interval [−L, L] is low, and the adversary cannot effectively obtain the real information in the dataset. This is because the location parameter is large, the data distortion is serious, and the data availability is low.
When the location parameter μ is in − L ≤ μ ≤ L, the adversaryʼs fault tolerance is higher, which has reference significance for privacy protection analysis.According to Figure 1, this paper analyzes the impact of different interval lengths on the adversaryʼs fault tolerance when the location parameter μ is within the fault tolerance interval.
In Figure 2, the configuration of ε is related to the location parameter μ and the fault tolerance interval [−L, L] of the noise distribution. For the same fault tolerance interval, the adversary's fault tolerance is largest when the location parameter μ is 0. For the same location parameter, the larger the fault tolerance interval, the greater the fault tolerance level. The maximum privacy parameter value can then be obtained without violating the privacy protection probability threshold PT_pr.
In Figure 3, the smaller the query authority is, the smaller the upper limit of privacy protection budget parameters is.By limiting the upper limit of privacy protection budget parameters, different values can be configured for query users with different query permission ranges.
7.2. Query Success Rate Experiment. ε is an important factor in measuring the strength of privacy protection, and different allocation schemes have a great impact on the error of the privacy protection algorithm. Next, we give the query probability under different conditions to verify the role of our parameters.
In Figure 4, for the interval (−∞, μ + L], as the value of ε increases, the probability of q(D) + x falling in the given interval also increases.
Figure 5 shows the probability curve for different values of the privacy parameter ε over the interval [μ − L, μ + L]. It can be seen from the figure that the probability of q(D) + x falling into the interval [μ − L, μ + L] changes monotonically as the value of ε increases.
Figure 6 shows the probability curve of the noise value falling in the interval (−∞, μ − L] for different privacy parameters. As can be seen from Figure 6, as the privacy parameter ε increases, the probability of q(D) + x falling into the given interval becomes smaller.
In Figure 7, under the same privacy budget ε, the probability of attack success increases with the number of attacks; as ε increases, the success rate approaches 1. Furthermore, the selection range of the parameters can be deduced by formula (23).

Conclusion
This paper studies the selection of the parameter ε in several cases of differential privacy. Firstly, this paper proposes a differential privacy parameter configuration method based on the fault tolerance interval and analyzes the adversary's fault tolerance under different noise distribution location and scale parameters. Secondly, this paper proposes an algorithm to optimize multi-query application scenarios and a differential privacy mechanism for multi-user query scenarios. Thirdly, this paper proposes differential privacy parameter selection methods for several attack models and calculates the upper bound of the parameter ε based on the sensitivity Δq, the length of the fault tolerance interval L, and the success probability p. Finally, we have carried out a variety of simulation experiments to verify our research scheme and given the corresponding analysis results.
Future research on ε is not limited to choosing a proper privacy parameter value in the Laplace mechanism; it also includes choosing a reasonable ε in the exponential mechanism and calculating an ideal parameter value by the methods of probability and statistics.

Figure 1: Fault tolerance values of the adversary under different parameters.

Table 1: Attributes of the dataset.