Multiple Differential Distinguisher of SIMECK32/64 Based on Deep Learning

Currently, deep learning provides an important means to solve problems in various fields. As security analysis becomes more intelligent and automatic, intelligent computing will bring new solutions to the security analysis of lightweight block ciphers. In this study, novel multiple differential distinguishers of round-reduced SIMECK32/64 based on deep learning are proposed. Two kinds of 6-11 round deep learning distinguishers for SIMECK32/64 are designed by using a neural network to simulate the case of multiple input differences and multiple output differences in multiple differential cryptanalysis. The general models of the two distinguishers and the neural network structures are presented. Random multiple ciphertext pairs and associated multiple ciphertext pairs are exploited as the input of the model. The generation method of the data set is given. The performance of the two proposed distinguishers is compared. The experimental results confirm that the proposed distinguishers achieve higher accuracy and more rounds than a distinguisher with a single difference. The relationship between the quantity of multiple differences and the performance of the distinguishers is also verified. The differential distinguisher based on deep learning needs less time complexity and data complexity than the traditional distinguisher. The accuracy of filtering error ciphertext of our 8-round neural distinguisher is up to 96.10%.


Introduction
The size of computing devices has been decreasing, and Internet of Things applications such as smart homes and wearable systems have widely emerged in recent years [1,2]. In these resource-constrained applications, lightweight block ciphers usually play a key role in ensuring data security [3]. This has also motivated the design of new lightweight block ciphers [4]. The balance between security and low consumption of lightweight block ciphers should be fully considered in design, which may be accompanied by a reduction of security. Therefore, the security analysis of a lightweight block cipher is the first requirement for its wide application. A cipher distinguisher distinguishes a random permutation from the cipher according to the structure of the cipher algorithm or the characteristics of its components. Common construction methods of distinguishers include the differential distinguisher [5], the linear distinguisher [6], and the integral distinguisher [7]. Among them, the differential distinguisher is the primary tool for the differential attack, which was first proposed by Biham et al. in 1990 [8]. The differential attack takes advantage of the unbalanced distribution of difference statistics in the iterative process of a block cipher algorithm. It has become the most basic and effective means for the cryptanalysis of block ciphers. Multiple differential cryptanalysis was introduced in reference [9]. It usually has the general form of multiple inputs and multiple outputs. Deep learning technology is developing rapidly in the area of artificial intelligence, in fields such as computer vision [10], biological information [11], and natural language processing [12]. Deep learning uses rules discovered from data to predict and judge future moments and unknown situations. The cipher distinguishing task coincides with classification in form. A neural network model for classification can be established by abstracting plaintext pairs, ciphertext pairs, and the round function.
The author of reference [13] first proposed a lightweight block cipher cryptanalysis based on deep learning. It successfully provided a new cryptanalysis approach using deep learning. Since then, the cryptanalysis of lightweight block ciphers based on deep learning has become more and more active.
A lightweight block cipher's resistance to differential cryptanalysis is fully considered at the beginning of its design. Nevertheless, it is of great significance to design new distinguishers and explore the unknown defects of block ciphers. Based on the attack idea of multiple differential cryptanalysis, in this study deep learning is applied to construct the distinguisher in multiple differential cryptanalysis for round-reduced SIMECK32/64. The structure of multiple ciphertext pairs with multiple input differences is exploited to train the distinguisher. Our contributions are as follows: (1) Deep learning distinguishers of SIMECK32/64's 6-11 rounds for multiple differential cryptanalysis are presented. Our neural network structure adopts different multiple input differences, which distinguishes it from other work. (2) The general models of two novel kinds of distinguishers are given. The inputs of the two models adopt random ciphertext pairs and associated ciphertext pairs, respectively; both are based on multiple input differences. The method of data set generation is put forward. (3) The multiple differential distinguishers based on deep learning have low complexity and high accuracy in filtering error ciphertexts.
The rest of the study is structured as follows: in Section 2, the security analysis of relevant lightweight block ciphers and studies of cryptography combined with deep learning are presented. In Section 3, the preliminaries are provided, including an overview of SIMECK, the differential distinguisher, and the deep learning model. In Section 4, two kinds of neural distinguishers are proposed and discussed in detail. In Section 5, the experiments on our distinguishers and the comparisons are carried out, and the results are given. Section 6 summarizes the work of the full study. Table 1 lists the main symbols and their meanings used in this study.

Related Work
The family of SIMECK ciphers, similar in structure to SIMON, was designed in reference [14] at CHES 2015. The ciphers SPECK and SIMON were released by the National Security Agency (NSA) in 2013 [15]. SIMECK continues the good design components of SPECK and SIMON and has excellent performance in both hardware and software implementations. The designers initially performed the security analysis of SIMECK; differential cryptanalysis, impossible differential cryptanalysis, and linear cryptanalysis were given. The ability of SIMECK to resist linear attacks was evaluated in reference [16]. Better results of differential cryptanalysis for SIMECK were presented in reference [17]. The authors of reference [18] proposed a novel algorithm for finding better differential trails and gave differential trails for 14, 21, and 27 rounds of SIMECK32/48/64, respectively. The authors of reference [19] studied different versions of related-key impossible differential distinguishers of SIMECK. They proposed distinguishers of SIMECK32 for round 15 using the meet-in-the-middle method. With the help of MILP, when the difference between input and output was limited to one active key bit, the optimal related-key impossible differential of SIMECK was proposed.
The authors of reference [20] introduced a cube attack based on SMT, using the additional information provided by the intermediate-state cube feature for the SMT solver in attacking round-reduced SIMECK32/64. A series of new distinguishers for statistical fault analysis of SIMECK based on the ciphertext-only attack was presented in reference [21]. The results showed that the distinguishers can restore the keys while reducing errors and improve reliability and accuracy.
Deep learning technology is very useful for big data analysis, as it can help to discover subtle links among data. In cryptography, identifying subtle relationships among data plays a very important role because such relationships usually define the security strength. The authors of reference [22] applied deep learning to side-channel analysis and discussed the applicability of deep learning technology in classical cryptanalysis. The authors of reference [23] evaluated the various relations between deep learning and cryptography and proposed some possible research directions for using deep learning in cryptanalysis. A feedforward neural network (FNN) was developed in reference [24], which can find the plaintext from the ciphertext of AES without using key information.
At CRYPTO 2019, the author of reference [13] proposed an improved differential attack on round-reduced SPECK32/64 based on deep learning. The distinguisher was constructed by training a ResNet to distinguish ciphertext pairs encrypted with a fixed plaintext difference from random data. It confirmed the effectiveness of deep learning in the security analysis of symmetric ciphers and put forward a new research direction. The authors of reference [25] proposed a framework using machine learning to extend the classical distinguisher. The distinguisher used a single differential trail and was implemented on SPECK, SIMON, and GIFT64. This method can reduce the data complexity, but the accuracy was not as high as that of the distinguisher trained with a random difference. In reference [26], a convolutional neural network (CNN) and a multilayer perceptron (MLP) were applied to construct neural network distinguishers for the round-reduced TEA and RAIDEN ciphers. A distinguishing task was proposed for settings where the traditional distinguisher could not be applied. Neural network distinguishers for 3-6 rounds of the PRESENT cipher were constructed [27]. The distinguisher can distinguish ciphertext data from random data with high probability, which further expanded the application of deep learning to block ciphers. In reference [28], deep learning was applied to perform cryptanalysis on simplified DES, SPECK, and SIMON under a limited key space, and the key bits were recovered successfully. However, this method was not applicable when the key space was not limited. The authors of reference [29] used a neural network to simulate the "single input-multiple output" differences of non-Markov ciphers and reduced the distinguishing task to a classification task. Several distinguishers for the four ciphers Gimli, ASCON, Knot, and Chaskey were shown. It was proved that the complexity of each distinguisher was very low.
The authors of reference [30] proposed neural network distinguishers for Chaskey, PRESENT, and DES. Multiple ciphertext pairs with one difference are used as the input. A module was added to extract derived features, so as to improve the accuracy of the distinguisher.

Description of SIMECK.
The family of SIMECK ciphers is denoted as SIMECK2n/mn, where 2n (n = 16, 24, 32) is the block size and mn is the master key size [14]. For example, SIMECK32/64 refers to the encryption of a 32-bit block, and the length of the master key is 64 bits. SIMECK follows the Feistel structure. The plaintext is first divided into L_0 and R_0; these two parts are then encrypted by the round function for r rounds, and the last two outputs L_r and R_r form the complete ciphertext. Figure 1 shows the single-round encryption process of SIMECK. The round function F_{k_i} of round i is defined as follows:

F_{k_i}(L_i, R_i) = (R_i ⊕ f(L_i) ⊕ k_i, L_i),

where L_i and R_i are the intermediate states of SIMECK, and k_i is the round key. The function f is defined as follows:

f(x) = (x ⊙ (x ≪ 5)) ⊕ (x ≪ 1),

where ⊙ is bitwise AND, ⊕ is exclusive-or (XOR), and x ≪ i represents x cyclically shifted left by i bits. The encryption process of SIMECK is given in Algorithm 1. Algorithm 2 gives the process of generating the round key k_i from the master key K. In order from high to low, the 64-bit master key K is loaded into the four words (t_2, t_1, t_0, k_0) as the initial state. Z_0 and Z_1 are described in reference [14]. C is a constant. The function f reuses SIMECK's round function component. The update operation of the intermediate state can be defined as follows:

L_{i+1} = R_i ⊕ f(L_i) ⊕ k_i, R_{i+1} = L_i,

where 0 ≤ i ≤ r − 1, and k_i is the key of round i.
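For concreteness, the round function above can be sketched in Python for SIMECK32/64's 16-bit words. This is a minimal illustration, not the authors' code; the rotation amounts 5 and 1 follow the SIMECK specification [14], the key schedule is omitted, and round keys are simply passed in as a list.

```python
def rol16(x, r):
    # Cyclic left rotation of a 16-bit word by r bits.
    return ((x << r) | (x >> (16 - r))) & 0xFFFF

def f(x):
    # f(x) = (x AND (x <<< 5)) XOR (x <<< 1), per the SIMECK specification.
    return (x & rol16(x, 5)) ^ rol16(x, 1)

def round_fn(L, R, k):
    # One Feistel round: L_{i+1} = R_i XOR f(L_i) XOR k_i, R_{i+1} = L_i.
    return R ^ f(L) ^ k, L

def encrypt(L, R, round_keys):
    # Iterate the round function over a list of 16-bit round keys.
    for k in round_keys:
        L, R = round_fn(L, R, k)
    return L, R
```

Decryption simply reverses the Feistel swap round by round, which is why the same component f can be reused in the key schedule.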

Differential and Distinguisher.
Figure 1: Single round encryption of SIMECK.

Differential cryptanalysis exploits a differential (α, β) with high probability to distinguish random data from ciphertext, and the key recovery attack is carried out on this basis. Therefore, the first step of differential cryptanalysis is to look for a high-probability differential. The differential trail of a block cipher with j rounds is defined as

(α, β): α = Δ_0 ⟶ Δ_1 ⟶ · · · ⟶ Δ_j = β,

where Δ_i and Δ_{i+1} are the input difference and the output difference of round i, respectively. The difference operation generally adopts XOR or modular subtraction. In this study, the difference operation uses XOR; that is, Δ = P_0^i ⊕ P_1^i, where P_0^i and P_1^i are the input pair. Let α and β be two correlated n-bit differences, and let x be an n-bit input. The differential probability of the block cipher is denoted as DP(α ⟶ β), which refers to the probability that α propagates to β under the round function F. It can be calculated as follows:

DP(α ⟶ β) = #{x ∈ {0,1}^n : F(x) ⊕ F(x ⊕ α) = β} / 2^n.

A differential trail is generally composed of a sequence of triplets (Δ_i, Δ_{i+1}, p_i), representing the input difference, output difference, and corresponding probability of round i, respectively. The differential gives all states in the whole difference chain.
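The differential probability DP(α ⟶ β) can also be estimated empirically by Monte Carlo sampling. The following sketch (a hypothetical helper, not part of the paper) counts how often a random input x satisfies F(x) ⊕ F(x ⊕ α) = β; for a linear map such as a plain rotation, difference propagation is deterministic, so the estimate is exact.

```python
import random

def empirical_dp(F, alpha, beta, n_bits=16, trials=4096):
    # Estimate DP(alpha -> beta): the fraction of inputs x for which
    # F(x) XOR F(x XOR alpha) equals beta.
    hits = 0
    for _ in range(trials):
        x = random.getrandbits(n_bits)
        if F(x) ^ F(x ^ alpha) == beta:
            hits += 1
    return hits / trials

# For a linear map, a difference propagates with probability 1:
rot1 = lambda x: ((x << 1) | (x >> 15)) & 0xFFFF
# empirical_dp(rot1, 0x0001, 0x0002) evaluates to 1.0.
```

For a real round function with a nonlinear component such as SIMECK's f, the same helper returns a probability strictly below 1 for almost all (α, β).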
Multiple differential cryptanalysis is the differential counterpart of multiple linear cryptanalysis [9]. A group of input differences has no particular structure, and the corresponding output differences may differ according to the input differences. In multiple differential cryptanalysis, an attacker adopts a collection Δ of differentials. The set of input differences is denoted as Δ_α. For a given input difference Δ_α^i ∈ Δ_α, the set of output differences Δ_β can be obtained.

The cryptographic distinguisher D, or simply the distinguisher, is a probabilistic algorithm. It takes a random permutation O or a cipher encryption C as its input. If the distinguisher infers that the input comes from C, the output is 1; otherwise, the output is 0. When determining the success rate of a distinguisher, there are usually two situations: one is to identify a positive sample correctly, and the other is to identify a negative sample correctly. A useful distinguisher requires that its success rate be greater than 0.5. In deep learning, we use the accuracy to represent the success rate of the distinguisher. The accuracy of the distinguisher can be improved by learning more of the hidden statistical rules and structural features in ciphertexts.

Neural Network.
A feedforward neural network (FNN), also called a multilayer perceptron (MLP), is one of the deep learning models. An FNN approximates a function f*. In classification, y = f(x; θ) means that the input x is mapped to a category y, and the parameter θ is learned so that the optimal approximation of the function is obtained. The framework of an FNN is generally composed of the input layer, some hidden layers, and the output layer. The learning model is a chain structure formed by many different functions, and the depth of the network usually refers to the length of the chain. Assume that layer l has M units and layer l + 1 has N units, w_{ij}^l is the weight from unit i in layer l to unit j of layer l + 1, and b_j^{l+1} and f_j^{l+1} are the bias and activation function of unit j in layer l + 1, respectively. The output of unit j in layer l + 1 of the FNN model is formally defined as follows:

y_j^{l+1} = f_j^{l+1}( Σ_{i=1}^{M} w_{ij}^l x_i^l + b_j^{l+1} ), 1 ≤ j ≤ N.

A convolutional neural network (CNN) uses convolution operations. A CNN is a special structure of FNN. A CNN can accept matrices as input and has repetitive neuron blocks (convolution kernels) that slide across space (images) or time (audio signals). This special design gives the convolutional network partial translation invariance. The convolution operation is generally expressed as follows:

s(t) = (x ∗ w)(t) = Σ_a x(a) w(t − a),

where x represents the input and w represents the kernel function. s(t) represents the output, which is called the feature map, and t represents the current coordinate. A CNN learning framework generally consists of the input layer, several convolution layers, pooling layers, fully connected layers, and the output layer. The convolution layers and pooling layers are responsible for data processing, followed by the fully connected layers. Therefore, CNNs are also known as FNNs with data preprocessing.
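The layer output and convolution formulas above can be written directly in NumPy. This is an illustrative sketch of the two operations, not the paper's network; the helper names are ours.

```python
import numpy as np

def layer_forward(x, W, b, act=np.tanh):
    # One fully connected layer: y_j = act( sum_i w_ij * x_i + b_j ).
    # x has shape (M,), W has shape (M, N), b has shape (N,).
    return act(x @ W + b)

def conv1d(x, w):
    # Discrete 1-D convolution s(t) = sum_a x(a) * w(t - a),
    # producing the feature map of the two sequences.
    return np.convolve(x, w)
```

Stacking several `layer_forward` calls gives the chain structure described above; a CNN replaces the early layers with `conv1d`-style operations followed by pooling.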

Multiple Differential Distinguisher of SIMECK 32/64
In this section, the multiple differential distinguishers of SIMECK32/64 based on deep learning are given. The input structure of the model adopts composite multiple ciphertexts with multiple differences. The ciphertexts use two composite methods: random ciphertext pairs and associated ciphertext pairs.

Distinguisher Model.

The plaintexts are generated through multiple differences Δ_i, and the corresponding ciphertexts are obtained by encrypting these plaintexts. The distinguisher classifies the random data and the ciphertext. The label vector Y is used to represent the classification results: if Y_i is 0, it represents random data; if Y_i is 1, it represents encrypted ciphertext. Then, the general model of the distinguisher D is defined as follows:

D = F(f_2(f_1(X_1), f_1(X_2), . . . , f_1(X_m))),

where X_k is the ciphertext sample, f_1(X_k) represents the features extracted from the set of ciphertext pairs X_k, f_2(y_1, y_2, . . . , y_m) represents the composite feature re-extraction among the multiple ciphertext information, and the function F is the posterior probability estimation function after integrating global features. In this study, two composite forms are designed. Formula (8) gives the structure of plaintext pairs using random multiple differences. Another structure of plaintext pairs using multiple differences is proposed that links the plaintext pairs in series, named associated multi-differences. Multiple ciphertext pairs are combined to investigate the extraction of composite features by the distinguisher. The associated composite form is given in formula (9).
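The labeled data set described above can be sketched as follows. This is a hypothetical helper, not the paper's generator: `encrypt` stands for any block-to-block callable, and the flat list layout is a simplification of the matrix formats used later in Section 4. Label-1 samples are ciphertext pairs whose plaintexts differ by the multiple input differences; label-0 samples are uniform random words of the same shape.

```python
import random

def make_samples(encrypt, deltas, n, block_bits=32):
    # Build n labeled samples for the distinguisher. For label 1, a base
    # plaintext p is paired with p XOR delta for every input difference;
    # for label 0, the same number of uniform random words is drawn.
    X, Y = [], []
    for _ in range(n):
        if random.random() < 0.5:
            p = random.getrandbits(block_bits)
            X.append([c for d in deltas
                        for c in (encrypt(p), encrypt(p ^ d))])
            Y.append(1)
        else:
            X.append([random.getrandbits(block_bits)
                      for _ in range(2 * len(deltas))])
            Y.append(0)
    return X, Y
```

On average half the samples carry label 1 and half label 0, matching the balanced positive/negative split used in the experiments.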

Design of Neural Network Distinguisher
(1) Input Data Format. The input layer adopts the form of a set of ciphertext pairs. The CNN accepts a matrix as input. Two input data formats for SIMECK32/64 are constructed: (2t + 2) × 16 and 4t × 16. These input data formats represent the byte-oriented structure of ciphertext pairs based on multiple differences.
(2) Network Structure. The network structure consists of four modules, as shown in Figure 2. Module 1 uses 32 parallel convolution kernels of size 1 for a bit-slicing operation. The matrices of (2t + 2) × 16 and 4t × 16 are mapped to a 16 × 32 matrix; the size of the matrix remains unchanged during the mapping process. Then, several repeated copies of module 2 are connected to adjust the depth of the network. Each module 2 contains two convolution layers, and each layer uses 32 convolution kernels of size k_s, which learn features from the input ciphertext pairs. The output of module 2 is added to the output of module 1, forming a residual connection. Finally, module 3 and module 4 are fully connected layers with d_1 and d_2 neurons, respectively, which are used to synthesize global features. Each layer of the neural network applies an L2 kernel regularizer, a batch normalization module, and a rectifier nonlinearity to ensure the generalization of the network.
(3) Activation Function. The gradient may vanish when the sigmoid function is used as the activation function. The reason is that the gradient of the sigmoid function is too small in the interval |x| > 4. In contrast, the output of the ReLU function is relatively stable; it is a linear function when x > 0. At the same time, the problem of sparsity is addressed. Therefore, ReLU is used as the activation function while training the distinguisher for a lower number of rounds. The ReLU function is defined as follows:

ReLU(x) = max(0, x).

While training the distinguisher with a high number of rounds, gradient explosion often occurs in the training process because there is no obvious difference between ciphertext pairs and random data in the data set, and the training model may be biased in two directions. Therefore, this study selects the tanh function, whose expression is as follows:

tanh(x) = (e^x − e^{−x}) / (e^x + e^{−x}).

(4) Hyperparameter Setting. The mean square error is combined with the L2 norm, achieving the effect of parameter penalty and regularization. The loss function is set as follows:

L = (1/N) Σ_{i=1}^{N} (f(x_i) − y_i)^2 + λ ‖w‖_2^2,

where f(x_i) represents the neural network's output, w represents the neural network's parameters, y_i represents the true label, and λ represents the penalty factor, which is 0.0001. The optimizer uses the Adam algorithm, which corrects the learning rate of each parameter in real time by first-order and second-order moment estimates of the gradients. The learning rate decreases with the increment of training epochs and does not decrease after 40 epochs. Therefore, the training cycle in the algorithm is set to 40 epochs. Meanwhile, the ModelCheckpoint method is triggered by a callback function to save the best learning model.
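The loss setting above, mean squared error plus an L2 penalty on the parameters, can be sketched in a few lines of NumPy (an illustration with our own helper name, using the study's penalty factor λ = 0.0001):

```python
import numpy as np

def loss(pred, y, params, lam=1e-4):
    # Mean squared error over the batch plus an L2 penalty summed over
    # all parameter arrays, with penalty factor lambda = 0.0001.
    mse = np.mean((pred - y) ** 2)
    l2 = sum(np.sum(w ** 2) for w in params)
    return mse + lam * l2
```

In Keras the same effect is obtained by attaching an L2 kernel regularizer to each layer and compiling with the MSE loss, which is how the network described above is configured.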

Training.

Algorithm 4 describes the training of the ND_am model. The set used for training is composed of ciphertexts and random data. First, t associated plaintext pairs are generated from the given P_0 and the multiple input differences (Δ_0, Δ_1, . . . , Δ_{t−1}). Then, the ciphertext pairs are obtained. In order to make the encrypted data and random data each account for half of the sample, half of the data is replaced with random data. Finally, the trained neural network model is saved and returned.

Testing.
The data sets for testing and training are generated in the same way. The model can be regarded as a function F, which accepts inputs, judges them, and outputs a result. If the label is 0 and the model judges that the data belong to random data, the output is 0. Similarly, if the label is 1 and the model judges that the data belong to the cipher, the output is 1. Finally, the test accuracy of the neural network model is returned. Algorithm 5 describes the process of testing the neural network distinguisher model. The process of training and testing the distinguisher ND_rm is similar to that of the ND_am model and will not be repeated here.

Experiment and Performance Evaluation
The performance of the distinguishers and the comparisons among them are presented in this section. The proposed distinguishers in this study are ND_rm and ND_am. The distinguisher ND_S is realized using the ideas of reference [13], and the distinguisher ND_M is realized using the idea of multiple ciphertexts with a single difference [30]. All experiments were conducted on a computer with a GTX 1650 graphics card and 16 GB of memory. TensorFlow is used as the back end and Keras as the front end. Figure 3 shows the accuracy of the distinguishers ND_am and ND_rm for round-reduced SIMECK32/64 (6-11 rounds). The networks of the distinguishers shown in Figure 3 adopted the CNN. In our experiments, the two distinguishers have the same parameters. The number of differences is set as t = 3, and the threshold is set as δ > 0.5. The training sample size is 2^24, and the testing sample size is 2^18. The positive and negative samples each account for 1/2 of the training set and testing set, respectively. The input plaintext differences Δ_0 = 0x0/0x1, Δ_1 = 0x0/0x2, Δ_2 = 0x0/0x4 were selected. The experimental results show that the deep learning distinguisher can easily learn the distinguishing characteristics of encrypted data and random data for low rounds, but as the number of iteration rounds increases, the accuracy continues to decrease. Owing to the confusion and diffusion of the block cipher, the higher the number of iteration rounds, the weaker the statistical information between plaintexts and ciphertexts becomes, and the corresponding positive and negative samples have high similarity. It becomes difficult for deep learning to select features effectively. In order to give the neural network distinguisher strong generalization ability, the neural network model can be improved to a certain extent by increasing the sample size in training and testing or prolonging the training epochs.

Results of the Proposed Distinguishers.
It can also be seen from Figure 3 that the distinguishers ND_rm and ND_am have similar trends, but the accuracy of ND_am is slightly higher than that of ND_rm. The convergence speed of ND_am is faster than that of ND_rm when the number of rounds is high. This shows that ND_am has learned more features from the associated ciphertext pairs. Compared with the ciphertext pairs generated by random multiple differences, there are hidden features in the associated ciphertext pairs; therefore, the performance of the distinguisher is improved. However, the ND_am distinguisher also imposes restrictions on the data required for training and testing. Data generation for the distinguisher ND_rm is easier than for the distinguisher ND_am.
In training and testing of the distinguisher model, the sample data required for the distinguisher at lower rounds can be reduced. Taking ND_am (r = 6, t = 3) as an example, the training set only needs 2^8 samples. Using the settings in this study, an accuracy of about 0.99 can be achieved in 25 s. For ND_am (r = 7, t = 3), the training set needs 2^16 samples, and it takes about 1 minute to achieve an accuracy of about 0.98.
Using the same training parameters as the CNN, this study also constructed the corresponding distinguisher using an MLP. The MLP network consists of four hidden layers, whose neurons are set as 2n, 2n, n, and n, respectively. Each hidden layer maps the features of the input ciphertext pairs to another dimensional space to extract more abstract features. An L2 kernel regularizer and the nonlinear activation function ReLU are used in each layer. Since the MLP network is not very deep, the batch normalization module is not used. The training of the MLP adopts the cyclic learning rate scheme of reference [13], which differs from the decreasing learning rate scheme adopted for the CNN. Table 2 lists the results of the distinguisher based on the MLP. It can be seen from the results that the accuracy of the MLP is equal to or less than that of the CNN in each round, and it reaches only 10 rounds at most. In the case of rounds 8 and 9, the accuracy of ND_am is significantly higher than that of ND_rm, which also shows that the associated differences have certain advantages. Table 3 lists the number of learning parameters, training time, and accuracy of 6-10 rounds of the distinguisher ND_am (t = 3) when using the MLP and the CNN. Due to the simple structure of the MLP, the training time and the number of parameters are greatly reduced. It can be seen that the CNN can better converge to a local minimum and is easier to optimize, and the accuracy of the CNN is slightly higher than that of the MLP. Figures 4 and 5 show the prediction distributions of the distinguishers ND_am and ND_rm (r = 8, t = 3) based on the CNN with 512 random positive samples, respectively. It can be seen that our 8-round CNN distinguishers have high reliability in the distribution of predicted values, basically consistent with the training accuracy. Ciphertexts with high-probability differences can easily be found.

Comparisons of Distinguishers.
In our experiments, distinguishers with t = 2, 3, and 4 were trained. The sample sizes of training and testing are 2^24 and 2^18, respectively. The specific input differences used are listed in Table 4:

Table 4: Selected input differences.
t = 2: Δ_0 = 0x0/0x1, Δ_1 = 0x0/0x2
t = 3: Δ_0 = 0x0/0x1, Δ_1 = 0x0/0x2, Δ_2 = 0x0/0x4
t = 4: Δ_0 = 0x0/0x1, Δ_1 = 0x0/0x2, Δ_2 = 0x0/0x4, Δ_3 = 0x0/0x8

Table 5 lists the accuracy of the t-difference neural distinguishers D_rm and D_am for SIMECK's 6-11 rounds. In the case of the same round, the same input differences, and the same sizes of training and testing sets, the accuracy of D_am is slightly higher than that of D_rm regardless of the quantity of plaintext differences. This indicates that D_am has stronger feature selection ability and generalization ability. It also shows that the accuracies of D_rm and D_am increase when t increases, but they do not always increase with the increment of t. Table 5 also shows that the accuracies of D_rm and D_am are both highest when t = 3. It can be inferred from Algorithm 4 and Algorithm 5 that, for the same sizes of training and testing sets, the more input differences there are, the more complex the data become. The distinguishers then find it difficult to extract the features among ciphertexts, which does not increase their accuracy. The input differences selected for D_rm and D_am when t = 3 are the same as those used in the previous experiment. The input difference used in ND_M and ND_S is Δ_0 = 0x0/0x1. The sample sizes are the same as above. Table 6 lists the accuracy of the distinguishers for SIMECK32/64's 6-11 rounds. D_am and D_rm clearly reach more rounds than ND_S. They can be applied to the SIMECK32/64 algorithm with more rounds; that is, the number of rounds is extended to round 11, and the accuracy there is equivalent to the accuracy of ND_S at round 10.
D_am and D_rm achieve higher accuracy than ND_M for the same number of rounds and still maintain high accuracy at round 9 without a precipitous decline. This fully proves that our D_am and D_rm have learned additional features from the multiple ciphertext pairs. At the same time, it also proves that when multiple differences are introduced, the accuracy of the distinguisher can be improved for the same round, or a distinguisher for a higher round can be obtained. It further shows that it is feasible to train distinguishers using multiple differences.
The authors of reference [18] presented optimized differential trails. The deep learning multiple differential distinguishers proposed in this study have better performance compared with reference [18]. Table 7 lists the detailed comparison of the complexity.
To further verify our neural distinguisher as a cryptographic tool, 512 random samples are used to test the ability of the multiple differential distinguishers to filter error ciphertexts. Using 3 input differences and 7-9 rounds of encryption, ND_am is tested. Among the 512 chosen-plaintext sets, half are positive samples and half are negative samples. Table 8 lists the probabilities of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) of our D_am. It can be seen that the accuracy of filtering error ciphertexts (i.e., FN) of our neural distinguisher is 99.61%, 96.10%, and 77.34% for rounds 7-9, respectively, and our distinguisher has high reliability.

Conclusions
In this study, we proposed new deep learning distinguishers based on multiple differences for SIMECK32/64's 6-11 rounds. Two kinds of ciphertext pairs using multiple differences are designed as the input of the neural network. The good performance of the distinguishers is verified. We also show several distinguishers and the accuracy of our distinguishers under different conditions. The differential distinguishers based on deep learning consume less time and data than traditional distinguishers. The accuracy of our neural distinguisher in filtering error ciphertexts is high. It is further proved that the deep learning method provides a feasible means to simulate the case of multiple input differences and multiple output differences. We are also studying deep learning distinguishers for other ciphers with different structures and block sizes. In the future, we will also adopt data preprocessing and other methods to study deep learning distinguishers.

Data Availability
The data used to support the findings of this study are available from the first author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.