Research on the Privacy Security of Face Recognition Technology

To address the problem that private data are easily leaked when face recognition technology is applied in apps, a privacy protection method based on differential privacy is proposed. First, a Bayesian GAN is used to generate training data with the same distribution as the private data, and a differential privacy algorithm is applied to this training data to obtain labels with privacy protection. Then, based on the proposed lightface lightweight face recognition model, labels with noise are generated, and gradient descent is performed on the face feature vectors recovered from an attack. Finally, through privacy loss analysis, an accurate privacy protection boundary is provided. The experimental results show that the proposed privacy protection method can effectively protect the parameter information of the face recognition model and reduce the recognition accuracy of images recovered by an attacker. Compared with privacy protection methods such as DPSGD and PATE, it has stronger privacy protection ability and can be applied to the privacy protection of practical apps.


Introduction
Since the turn of the century, the rapid development of the Internet industry and computer technology has not only brought great convenience to data sharing but also increased the risk of privacy leakage. In recent years, with the diversification of network attack means and the frequent occurrence of security incidents such as privacy leaks, privacy protection is no longer simply a matter of hiding sensitive attributes in data; it must also maximize the accuracy of data queries while minimizing the possibility of identifying individual records. This poses a new challenge to privacy protection, especially now that face recognition technology is becoming increasingly mature.
In addition, researchers have done a great deal of work on face recognition and privacy protection. In the field of face recognition, Shoba et al. proposed that the accuracy of face recognition can be improved through efficient feature extraction [1]. Singha et al. fused AOS and VGG to extract and combine face features before recognition; the characteristic of this method is that recognition accuracy can be greatly improved through deep fusion [2]. Sang et al. proposed an image denoising method based on a hybrid-norm constrained regression model for the noise problem in face recognition images; experiments on five commonly used face recognition databases show that this method is superior to traditional regression models [3]. Padmanabhan et al. applied machine learning algorithms to face recognition and verified their efficiency [4][5][6][7][8]. Liu et al. proposed a face image recognition method based on singular value processing to broaden the data sample set [9]. From this research, it can be seen that current face recognition technology mainly addresses three aspects: feature extraction from face images, classification of image features, and image sample processing. However, the above studies have one thing in common: their goal is to improve the accuracy of face recognition.
To address privacy security, Kumar et al. proposed introducing biometric technology, such as face recognition, into cloud computing to improve the security of the cloud computing environment [10]. For the privacy protection of smart home systems, Yassine et al. applied a recognition method based on an improved k-nearest neighbor classifier; the results show that security is greatly improved by this method. In general, however, there are relatively few studies on the privacy protection of application programs [11].
Building on the studies above, deep learning has achieved good results in privacy protection under face recognition technology. Therefore, this study uses Bayesian generative adversarial networks (GANs) to obtain training data with the same distribution as the private data and proposes a privacy protection method based on differential privacy. A lightface lightweight face recognition model is constructed to generate labels with noise, gradient descent is performed on the face feature vectors recovered from an attack, and privacy loss analysis is used to obtain an accurate privacy protection boundary, thereby realizing privacy protection under face recognition technology.

Introduction to Bayesian GAN.
The Bayesian GAN is a variant of the GAN. The Bayesian formula is used to train the GAN to realize unsupervised and semisupervised learning, which can alleviate the problems of gradient vanishing and training instability in the GAN training process. In the Bayesian GAN, the weight vectors θ_g and θ_d of the generator and discriminator are obtained by iteratively sampling from their conditional posterior probabilities [12][13][14], where K is the number of classes; N_s is the number of labeled samples; z stands for white noise; n_g and n_d are the mini-batch sizes for the generator and discriminator, respectively; α_g and α_d are hyperparameters; p(θ_g|α_g) and p(θ_d|α_d) are the prior probabilities of the generator and discriminator parameters under α_g and α_d; sample s carries label y_s^(i); and class label 0 is assigned to samples produced by the generator.
Assuming that the input of the Bayesian GAN is x*, the predictive distribution over the output y* can be expressed as [15]

p(y*|x*, D) = ∫ p(y*|x*, θ) p(θ|D) dθ,

that is, the prediction is averaged over the posterior distribution of the network weights θ given the observed data D.
In the Bayesian GAN, Hamiltonian Monte Carlo (HMC) with stochastic gradients is used to marginalize the weights of the generator and discriminator networks, and this method achieves good performance without any standard intervention.

Differential Privacy Introduction.
Differential privacy is a privacy protection technology whose purpose is to maximize the accuracy of data queries while minimizing the possibility of identifying individual records. Its basic requirement is that each single element in the dataset has only a limited impact on the overall output of the dataset. Therefore, after querying the output of the dataset, an attacker still cannot infer which element influenced the output, and hence cannot infer individual privacy information in the dataset, thereby realizing privacy protection. The formal definition of differential privacy is as follows. Let PM be the set of all possible outputs of the random algorithm M, and let SM be any subset of PM. Then, for any pair of adjacent datasets D and D′, M satisfies ε-differential privacy if, as shown in equation (4) [16],

Pr[M(D) ∈ SM] ≤ e^ε · Pr[M(D′) ∈ SM],

where the parameter ε stands for the privacy protection budget; the smaller the value of ε, the higher the level of privacy protection. According to Definition 1, differential privacy limits the influence of any element in the dataset on the output of the algorithm, and theoretically ensures that the algorithm satisfies ε-differential privacy, which in practice is realized by adding noise.
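As a concrete illustration of this definition, the sketch below implements the classical Laplace mechanism for a counting query; the helper names are hypothetical and not part of this paper's method. A count has sensitivity 1, so adding Lap(1/ε) noise to the true count satisfies ε-differential privacy in the sense of equation (4).

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from the Laplace distribution Lap(0, scale)
    via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_count(data, predicate, epsilon: float) -> float:
    """epsilon-differentially private counting query: a count has
    sensitivity 1, so Lap(1/epsilon) noise suffices."""
    true_count = sum(1 for x in data if predicate(x))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller ε means a larger noise scale 1/ε, i.e., stronger protection but a less accurate query result, matching the trade-off described above.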

Introduction to Lightface Model.
The lightface model is a lightweight network model built on the depthwise separable convolution model MobileNets by adding a weight calibration module after the convolution layer. The first layer is a standard convolution layer, the last layer is a fully connected layer, and the remaining network layers use the ReLU activation function and batch normalization. The weight calibration module is located behind the convolution layer, and its concrete structure is shown in Table 1 [17].
In Table 1, D_F × D_F stands for the size of the input feature map, and M stands for the number of feature map channels. FC1 and FC2 are fully connected layers, similar to those in a BP neural network. In FC1, the dimension is reduced by a reduction factor c to decrease the number of parameters; in FC2, an appropriate c restores the dimension to its level before reduction. In this paper, c is set to 8.
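A minimal plain-Python sketch of such a weight calibration module (a squeeze-and-excitation-style gate) follows; the helper names and the tiny dense layers are illustrative assumptions, not the paper's implementation.

```python
import math

def avg_pool_channels(fmap):
    # "Squeeze": global average pool each of the M channels to one scalar
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in fmap]

def dense(x, W):
    # Fully connected layer without bias: y = W @ x
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def calibrate(fmap, W1, W2):
    s = avg_pool_channels(fmap)                              # 1 x 1 x M descriptor
    h = [max(0.0, v) for v in dense(s, W1)]                  # FC1: M -> M/c, ReLU
    g = [1.0 / (1.0 + math.exp(-v)) for v in dense(h, W2)]   # FC2: M/c -> M, sigmoid
    # "Excite": re-weight every channel of the feature map by its gate g[m]
    return [[[g[m] * v for v in row] for row in ch] for m, ch in enumerate(fmap)]
```

With M = 8 and c = 8, FC1 compresses the 8-channel descriptor to a single unit and FC2 expands it back, so each channel receives a learned weight in (0, 1).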
Through the weight calibration module, the weight of each channel can be obtained, and a new feature map is produced by learning these weights and reweighting each channel. Finally, the resulting feature map is reduced to 1 × 1 by the last average pooling layer of the lightface model, the result is output through the fully connected layer, and L2 feature normalization is applied at the same time to obtain the feature representation. If the L2 norm is set to 1, all image features are mapped onto a hypersphere. This representation is used to compute the triplet loss, the features are optimized according to the loss result, and whether two images belong to the same class can be judged by their distance in the feature space.
The triplet loss is calculated as follows [18]:

L = Σ_{(A,p,N)∈T} max(0, ‖f(A) − f(p)‖² − ‖f(A) − f(N)‖² + a),

where f(A), f(p), and f(N) represent the feature expressions of samples A, p, and N, respectively; p is a positive sample of the same class as A, and N is a negative sample of a different class from A. T represents the set of possible triplet combinations of the training set, and the number of elements of T is M. The corresponding objective function therefore requires, for each triplet, that ‖f(A) − f(N)‖² ≥ ‖f(A) − f(p)‖² + a. According to the objective function, when the distance between A and N is greater than or equal to the sum of a and the distance between A and p, the loss is 0; otherwise, a loss is incurred.
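The triplet loss above can be sketched directly; `triplet_loss` and `total_triplet_loss` are hypothetical helper names for illustration.

```python
def triplet_loss(f_a, f_p, f_n, alpha):
    """Loss for one triplet: max(0, ||f(A)-f(p)||^2 - ||f(A)-f(N)||^2 + alpha)."""
    d_ap = sum((a - p) ** 2 for a, p in zip(f_a, f_p))  # anchor-positive distance
    d_an = sum((a - n) ** 2 for a, n in zip(f_a, f_n))  # anchor-negative distance
    return max(0.0, d_ap - d_an + alpha)

def total_triplet_loss(triplets, alpha):
    """Sum the loss over every triplet combination in the training set T."""
    return sum(triplet_loss(fa, fp, fn, alpha) for fa, fp, fn in triplets)
```

When the negative is already farther from the anchor than the positive by at least the margin a, that triplet contributes zero loss, exactly as the objective function states.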
According to the above analysis, the lightface model uses fewer multiply-add operations when defining the network, so it can improve the computing speed of the network to a certain extent. However, this alone has a limited effect, so to further improve computing speed, this paper uses the highly optimized general matrix multiplication (GEMM) routine for the convolution operations. Because most of the computation in lightface is concentrated in 1 × 1 pointwise convolutions, and GEMM can directly realize 1 × 1 convolution, GEMM enables highly efficient operation in lightface.

Face Recognition Method Based on Differential Privacy
3.1. Differential Privacy Framework. On the basis of the above, the privacy protection method under face recognition technology is designed as shown in Figure 1. In the learning strategy stage, the Bayesian GAN is first trained on the labeled private data, and the trained model is used to generate new data. Then, the newly generated data are input into the trained Bayesian GAN discriminator to obtain the prediction probability of each sample. Finally, the generated data and their corresponding labels are passed to the data processing stage. With these steps, data learning is completed.
In the privacy protection stage, the main purpose is to add uncertainty to the generated data labels to protect privacy, which this study realizes by adding noise that follows the Laplace distribution. Suppose the dimension of each data label is K + 1, the last dimension represents the probability that the data are false, and the probability that the data belong to class i is P_i. Adding the noise shown in equation (7) to P_i and performing the normalization shown in equation (8) generates a noisy label [19]:

P_i′ = P_i + Lap(c),    (7)

P_i″ = P_i′ / Σ_j P_j′,    (8)
where c is the privacy protection parameter, indicating the strength of privacy protection that can be provided; the greater its value, the stronger the protection. Lap(c) represents the Laplace distribution with position 0 and scale c; P_i′ represents the probability value after noise is added; and P_i″ is the normalization result of P_i′. Finally, the data with noisy labels are used to train the final release model. To improve the accuracy of this model, the study uses ensemble learning to train the externally accessible model. First, n subsets are collected from the data to train the lightface model, yielding n lightface models. Then, the n lightface models are aggregated into one output model. Finally, the final result for the input data is obtained according to the majority principle, as follows:

f(x⃗) = argmax_{j∈[K]} n_j(x⃗),

where f(x⃗) stands for the final output of the model, x⃗ stands for the input data, and n_j(x⃗) stands for the number of models whose result for x⃗ is class j, j ∈ [K].
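The majority-vote aggregation f(x⃗) = argmax_j n_j(x⃗) can be sketched as follows, with each trained lightface model represented abstractly as a callable (an assumption for illustration):

```python
from collections import Counter

def ensemble_predict(models, x):
    """f(x) = argmax_j n_j(x): each of the n lightface models casts one
    class vote for input x, and the class with the most votes is the
    ensemble's output."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]
```

In practice each element of `models` would be a trained lightface classifier returning a class index in [K].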
Throughout the privacy protection process, because the private data are used only for Bayesian GAN training and not for training the release model, an attacker cannot recover the private data by a reconstruction attack even if the external model parameters are disclosed. In addition, because the input of the release model is the newly generated nonsensitive data from the Bayesian GAN, and noise is added in the privacy protection stage, an attacker's reconstruction attack is disrupted and cannot reproduce clear private data. Finally, ensemble learning is adopted to train the release model, which ensures that the model maintains high accuracy while realizing privacy protection.
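The label-noising step of equations (7) and (8) can be sketched as below. Clipping negative perturbed probabilities to zero before renormalizing is an added assumption to keep the output a valid distribution, since the text only specifies normalization; `noisy_label` is a hypothetical helper name.

```python
import math
import random

def laplace(scale: float) -> float:
    # One Lap(0, scale) sample via inverse-transform sampling
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def noisy_label(probs, c):
    """Perturb a (K+1)-dimensional label vector as in equation (7), then
    renormalize as in equation (8)."""
    perturbed = [p + laplace(c) for p in probs]       # eq. (7): add Lap(c) noise
    perturbed = [max(p, 0.0) for p in perturbed]      # assumed clipping step
    total = sum(perturbed) or 1.0
    return [p / total for p in perturbed]             # eq. (8): renormalize
```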

The Privacy Loss Analysis
Based on the above differential privacy protection method, privacy protection under face recognition technology can be ensured. To further realize privacy protection accurately, this study uses time accounting to analyze the privacy loss in detail and to provide an accurate privacy protection boundary.

Time Accounting.
Time accounting is a method to accurately calculate the privacy loss, and its basic definition is as follows.

Definition 2. Suppose there is an auxiliary input aux; for an output o ∈ R, the privacy loss is

c(o; M, aux, d, d′) = log( Pr[M(aux, d) = o] / Pr[M(aux, d′) = o] ),

where α_M(λ; aux, d, d′) represents the moment generating function of the privacy loss random variable, which can be calculated as

α_M(λ; aux, d, d′) = log E_{o∼M(aux,d)}[exp(λ · c(o; M, aux, d, d′))].

In addition, time accounting has the following properties.

Computational Intelligence and Neuroscience
First, in each step of model training, (2c, 0)-differential privacy is used to generate labels with privacy protection, and the algorithm satisfies (4Tc² + 2c√(2T ln(1/δ)), δ)-differential privacy after T steps. Because this privacy boundary is loose, it is hard to satisfy the practical application demands of a face recognition model with privacy protection, so Papernot et al. proposed a stricter privacy loss boundary method. This method satisfies the following theorems.

Theorem 4. If M satisfies (2c, 0)-differential privacy, then any output o* satisfies [24][25][26]:

Theorem 5. Suppose n is the label score vector of dataset D and j* satisfies n_j* ≥ n_j for all j; then,

According to the above properties, an upper limit q can be provided for a specific score vector n to realize the constraint on a specific moment. Using λ to calculate a specific moment yields a privacy boundary stricter than that of time accounting.

To make the images of the experimental dataset conform to the model input, the experimental images are preprocessed in this study. First, a face detector is run on each experimental image to generate a tight bounding box around the face. Then, the cropped face image is resized to the network input size, which in this paper is 224 × 224. Finally, images that meet the input conditions are fed into the network for training.
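The composition bound quoted above is simple arithmetic and can be evaluated directly; `composed_epsilon` is a hypothetical helper name, and the formula is taken as stated in the text.

```python
import math

def composed_epsilon(c: float, T: int, delta: float) -> float:
    """Privacy budget after T steps of a (2c, 0)-DP mechanism under the
    composition bound quoted in the text:
    epsilon = 4*T*c^2 + 2*c*sqrt(2*T*ln(1/delta))."""
    return 4.0 * T * c ** 2 + 2.0 * c * math.sqrt(2.0 * T * math.log(1.0 / delta))
```

For example, with c = 0.01, T = 100 steps, and δ = 10⁻⁵, the bound gives ε close to 1; the budget grows with T, which is why a tighter per-moment bound is attractive.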

The Parameter Setting with Bayesian GAN.
In this experiment, the initial learning rate of the lightface model is set to 0.05, the threshold a is 0.2, and the parameters are updated using an asynchronous gradient descent algorithm. To determine the influence of the number of lightface models on recognition accuracy, bootstrap sampling is used to extract n subsets from the LFW and YTF datasets, respectively, to train n models, and the results are aggregated to obtain the accuracy of the face recognition model under different numbers of lightface models, as represented in Figure 2. The figure shows that the accuracy of the face recognition model gradually increases as n increases and stabilizes after n = 70. By fully considering both the accuracy and the size of the model, n = 50 is set in this study.
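The bootstrap sampling used to build the n training subsets can be sketched as follows; `bootstrap_samples` is an illustrative helper name, not part of the paper's code.

```python
import random

def bootstrap_samples(data, n_models, sample_size):
    """Draw n_models bootstrap samples (sampling with replacement),
    one training subset per lightface model in the ensemble."""
    return [[random.choice(data) for _ in range(sample_size)]
            for _ in range(n_models)]
```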

Bayesian GAN Training Data Validation.
To verify the effectiveness of the Bayesian GAN used in this paper, this experiment uses different GAN models to generate training data for face images at different noise levels under the TensorFlow framework, inputs the data into the face recognition model, and obtains the accuracy results represented in Figure 3. The figure shows that the training data generated by the Bayesian GAN yield the highest recognition accuracy, and this advantage becomes more obvious as the noise increases. This shows that the data generated by the Bayesian GAN are more diverse and realistic, and closer to the real sample distribution.
To further verify the effectiveness of the Bayesian GAN, this paper adds noise to the data labels generated by the network, with values of 0.05, 0.1, and 0.2 corresponding to high, medium, and low noise, respectively; inputs them into the face recognition model for training; and obtains the model training results at different noise levels, as shown in Figure 4. The figure shows that as the noise intensity increases, the recognition accuracy of the face model trained on the labels generated by the Bayesian GAN gradually decreases. Under low, medium, and high noise, the recognition accuracy reaches 99%, 97%, and 86%, respectively. This shows that the Bayesian GAN is effective and can guarantee that the face recognition model has high recognition accuracy, meeting the needs of practical application.

Lightface Model Validation
(1) Model Comparison. To verify the effectiveness of the lightface model, its parameters and computation are compared with those of other face recognition models, as represented in Table 2 [17]. The table shows that, compared with other models, the amount of computation and the number of parameters of the lightface model proposed in this study are greatly reduced, indicating that the model has certain advantages, and its speed and volume meet the requirements of a lightweight face recognition model.
To further verify the effectiveness and generalization ability of the proposed model, it is evaluated on the experimental datasets. The accuracy of different models on the LFW data is shown in Figure 5, and the accuracy of different models at FAR = 0.0001 is represented in Figure 6. The experimental results show that, compared with the comparison models, the computation and parameters of the lightface model proposed in this study are greatly reduced while the accuracy is improved by 1.5%-3%. This indicates that the proposed lightface model has certain advantages.
(2) Robustness Test. To verify the practical application effect of the proposed model, this study conducted tests on a Nexus 6P mobile device and compared the performance of the proposed model with that of classical networks such as VGG16. The results are represented in Table 3 [17]. The table shows that the lightface model proposed in this study meets the lightweight network standard, can be applied on mobile devices, and has certain advantages in speed and performance.

To verify the effectiveness of the proposed privacy protection method, this study evaluates it on three datasets, LFW, SFC, and YTF, and compares it with the DPSGD and PATE privacy protection methods [27]. The results are represented in Table 4 [17]. The table shows that, compared with the comparative methods, the privacy protection method proposed in this study achieves the highest recognition accuracy on all three experimental datasets, with average recognition accuracy about 1% higher than DPSGD and about 0.5% higher than PATE. This shows that the privacy protection strategy proposed in this study is effective.

Validation of Privacy Protection
According to Theorem 2, Table 5 shows the model accuracy corresponding to the differential privacy protection values (ε, δ). The table shows that, compared with the model trained on data without noisy labels, the accuracy of the model proposed in this study is reduced by only 0.34%. When the failure rate δ is 10^-5, a strict privacy boundary of ε = 2.05 is obtained. This shows that the model proposed in this study achieves effective privacy protection while maintaining high accuracy.

Conclusion
In summary, this study proposes a privacy protection method based on differential privacy. Through the Bayesian GAN and the differential privacy algorithm, labels with privacy protection can be obtained, preventing attackers from directly accessing a model trained on the private data. By using the proposed lightface lightweight face recognition model to generate labels with noise, gradient descent can be performed on the recovered face feature vectors, reducing the accuracy with which an attacker can recover sensitive face data. Through privacy loss analysis, an accurate privacy protection boundary can be provided for the private data. Compared with privacy protection methods such as DPSGD and PATE, the privacy protection method proposed in this study has strong privacy protection ability and can be applied to privacy protection in apps using face recognition technology. Although this research has achieved some results, problems remain in practical deployment. For example, the face recognition model based on differential privacy adopts ensemble learning during training; although this ensures that the model maintains high accuracy, the training time is long, so a CNN with parallel computing could be considered to speed up training. This aspect will be discussed further in follow-up work.

Data Availability
The experimental data are available from the corresponding author upon request.

Conflicts of Interest
The author declares that there are no conflicts of interest regarding this work.