Traditional encryption algorithms are inefficient when applied to image encryption because image data are large and adjacent pixels are strongly correlated. The shortcomings of the traditional Data Encryption Standard (DES) algorithm when applied to image encryption are analyzed, and a new image encryption algorithm based on the traditional DES model, chaotic systems, DNA computing, and select-ciphertext output is proposed. Select-ciphertext output chooses the cipher image with the largest information entropy, which increases the randomness of the cipher image and reduces the risk of the encryption system being broken. The algorithm overcomes the high computational complexity and inconvenient key management that traditional text encryption algorithms suffer from when applied to image encryption. The experimental results verify the security of the algorithm by analyzing the information entropy, the correlation of adjacent pixels, and other indexes. The algorithm also passes the noise attack test and the occlusion attack test, so it can resist common attacks.
In modern society, information security issues affect almost every aspect of production and life, and the problem of information leakage is becoming increasingly serious. Developing mechanisms that effectively protect information has become an active research area. Traditional encryption algorithms such as RSA and DES have been widely used to encrypt text information [
There are two methods for image encryption: scrambling encryption and pixel grayscale value encryption. Scrambling encryption uses algorithms to change the positions of pixels; it can reduce the correlation between pixels but cannot change the pixel values. Amnesh et al. proposed a scrambling encryption scheme based on RGB translation scrambling to encrypt color images [
The Data Encryption Standard (DES) was established by the US Federal Government in 1977 and has been authorized for governmental nonconfidential communications. The traditional DES algorithm [
As a complex nonlinear dynamic system [
To increase the antiocclusion capability of this encryption scheme, the logistic map is chosen to scramble the position of pixels [
Simulation figure of chaotic systems. The phase diagram of (a) logistic map and (b) 2DLSCM system.
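As an illustration of how the logistic map produces a chaotic sequence, the iteration x_{k+1} = μ·x_k·(1 − x_k) can be sketched as below; the parameter values, burn-in length, and function name are assumptions for the sketch, not the paper's exact settings:

```python
def logistic_sequence(x0, mu, n, burn_in=500):
    """Iterate the logistic map x_{k+1} = mu * x_k * (1 - x_k).

    Discards `burn_in` transient iterations, then returns n values in (0, 1).
    With mu close to 4 and x0 in (0, 1), the map behaves chaotically and is
    highly sensitive to the initial value x0 (the key).
    """
    x = x0
    for _ in range(burn_in):
        x = mu * x * (1 - x)
    seq = []
    for _ in range(n):
        x = mu * x * (1 - x)
        seq.append(x)
    return seq

seq = logistic_sequence(x0=0.3456, mu=3.99, n=10)
```

A tiny perturbation of x0 (e.g., 10^−10) yields a completely different sequence after the burn-in, which is what makes the map usable for key-dependent scrambling.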
To reduce the correlation between adjacent pixels, the 2DLSCM chaotic system is chosen to encrypt the pixel grayscale value [
In 1994, Adleman designed and completed the first DNA computing experiment and published his experimental results in the journal of Science [
Double-stranded DNA molecules consist of four nucleotides: A (adenine), T (thymine), G (guanine), and C (cytosine). Nucleotides follow the principle of complementary pairing: A pairs with T, and G pairs with C. In DNA coding, each nucleotide represents a 2-bit binary string. Pixels in grayscale images lie in the interval [0, 255], so each pixel can be represented by a sequence of four nucleotides [
8 kinds of DNA coding rules.

Rule  1  2  3  4  5  6  7  8
00    A  A  G  C  G  C  T  T
01    G  C  A  A  T  T  G  C
10    C  G  T  T  A  A  C  G
11    T  T  C  G  C  G  A  A
When converting pixels to DNA coding, we first need to choose an encoding rule. For example, the binary string of the decimal digit 188 is 10111100, which is represented by the nucleotide sequence CTTA under rule 1. DNA decoding is the reverse process of DNA encoding, and decoding a nucleotide sequence under a different rule yields a different digit. For example, decoding the nucleotide sequence CTTA under rule 2 gives the binary string 01111100, whose decimal value is 124, which differs from 188. Encoding and decoding under different rules can therefore itself serve as an encryption step, but in the decryption process the encoding and decoding rules must be kept consistent.
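The encode/decode example above can be sketched as follows; only rules 1 and 2 are shown, with their bit-to-nucleotide mappings taken from the coding-rule table:

```python
# Bit-pair -> nucleotide mappings for rules 1 and 2 from the coding-rule table.
RULE1 = {'00': 'A', '01': 'G', '10': 'C', '11': 'T'}
RULE2 = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}

def dna_encode(pixel, rule):
    """Encode an 8-bit pixel value as a 4-nucleotide string."""
    bits = format(pixel, '08b')
    return ''.join(rule[bits[i:i + 2]] for i in range(0, 8, 2))

def dna_decode(seq, rule):
    """Decode a 4-nucleotide string back to an 8-bit pixel value."""
    inv = {v: k for k, v in rule.items()}
    return int(''.join(inv[n] for n in seq), 2)
```

Encoding 188 under rule 1 gives "CTTA", and decoding "CTTA" under rule 2 gives 124, matching the worked example in the text.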
With the advances in biocomputing research, some scholars have proposed algebraic operations based on nucleotides [
Nucleotide computing rules under the first coding rule.

+  A  G  C  T
A  A  G  C  T
G  G  C  T  A
C  C  T  A  G
T  T  A  G  C

−  A  G  C  T
A  A  T  C  G
G  G  A  T  C
C  C  G  A  T
T  T  C  G  A
The nucleotide operations add or subtract the 2-bit binary digits represented by the nucleotides. Unlike ordinary binary addition or subtraction, only the last 2 bits of each result are kept; that is, the operation is performed modulo 4. For example, adding the nucleotide sequences TCAG and GATC under the first coding rule gives ACTT, and subtracting GATC from ACTT under the first rule recovers TCAG. The operation processes are shown in Figure
Examples of nucleotide operations under the first coding rule.
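The worked example above can be checked with a short sketch; mapping A, G, C, T to the values 0–3 follows the first coding rule (A=00, G=01, C=10, T=11), and keeping only the last 2 bits corresponds to arithmetic modulo 4:

```python
# Nucleotide values under the first coding rule: A=00, G=01, C=10, T=11.
VAL = {'A': 0, 'G': 1, 'C': 2, 'T': 3}
NUC = 'AGCT'

def dna_add(s1, s2):
    """Nucleotide-wise addition modulo 4 (keep the last 2 bits)."""
    return ''.join(NUC[(VAL[a] + VAL[b]) % 4] for a, b in zip(s1, s2))

def dna_sub(s1, s2):
    """Nucleotide-wise subtraction modulo 4, the inverse of dna_add."""
    return ''.join(NUC[(VAL[a] - VAL[b]) % 4] for a, b in zip(s1, s2))
```

Adding TCAG and GATC yields ACTT, and subtracting GATC from ACTT yields TCAG, so subtraction exactly inverts addition during decryption.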
To improve the security of the key, the encryption scheme uses the SHA-256 algorithm to hash the plaintext image and obtain a 256-bit hash sequence as the initial key, so the key is tied to the original image. To improve the anti-occlusion capability of the encryption system, the scheme uses the logistic map to globally scramble the pixels. Finally, the scheme uses block diffusion, forward diffusion, and entropy-based selection to improve the evaluation indexes of the encryption system, resist statistical attacks, and enhance the pseudorandomness of the system. The details of the encryption process are as follows.
SHA-256 is a cryptographic hash algorithm [
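A minimal sketch of deriving chaotic initial values from the plaintext image with SHA-256 is given below; grouping the 32 digest bytes into four 8-byte blocks and normalizing them into (0, 1) is an assumed construction for illustration, not necessarily the paper's exact key schedule:

```python
import hashlib

def derive_initial_values(image_bytes):
    """Hash the plaintext image with SHA-256 and map the 256-bit digest
    to four values in (0, 1) usable as chaotic-map initial conditions.

    The 8-byte grouping and normalization below are assumptions for this
    sketch; the paper's exact key schedule may differ.
    """
    digest = hashlib.sha256(image_bytes).digest()  # 32 bytes = 256 bits
    values = []
    for i in range(0, 32, 8):
        block = int.from_bytes(digest[i:i + 8], 'big')
        # Shift into the open interval (0, 1) so the maps never hit 0 or 1.
        values.append((block + 1) / (2**64 + 2))
    return values
```

Because SHA-256 has the avalanche property, changing a single pixel of the plaintext changes the derived initial values completely, which ties the key stream to the image being encrypted.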
Scrambling is a method of changing pixel locations [
Scrambling process and its decryption process.
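One common way to turn a chaotic sequence into a global pixel permutation is to sort the sequence and use the resulting index order; the sketch below assumes that construction rather than reproducing the paper's exact scrambling rule:

```python
import numpy as np

def scramble(img, chaos_seq):
    """Permute all pixels of a 2-D image by the sort order of a chaotic
    sequence (one value per pixel). Returns the scrambled image and the
    permutation needed to invert it."""
    flat = img.ravel()
    perm = np.argsort(chaos_seq)      # chaotic sequence -> permutation
    scrambled = flat[perm].reshape(img.shape)
    return scrambled, perm

def unscramble(scrambled, perm):
    """Invert the permutation to recover the original pixel positions."""
    flat = np.empty_like(scrambled.ravel())
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)
```

Scrambling only moves pixels, so the histogram is unchanged; this is why scrambling alone cannot resist statistical attacks and must be combined with diffusion.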
To increase the randomness of the cipher images, we adopt a block diffusion scheme similar to the DES structure. This diffusion scheme omits the grouping operation and directly divides the image to be encrypted into two equal-sized matrices, a left matrix
The purpose of diffusion is to correlate adjacent pixels [
Using forward diffusion can increase the information entropy of the image and reduce the correlation between adjacent pixels. The Lena image and forward diffused Lena image are shown in Figure
Effect of forward diffusion. (a) Lena image. (b) Forward diffused Lena image.
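Forward diffusion is commonly implemented by chaining each pixel with the previous cipher pixel and a key-stream value modulo 256; the sketch below uses that generic construction (the paper's exact diffusion equation may differ):

```python
import numpy as np

def forward_diffuse(img, keystream, c0=0):
    """c[i] = (p[i] + c[i-1] + k[i]) mod 256, scanned in raster order.
    A change in any plaintext pixel propagates to all later cipher pixels."""
    p = img.ravel().astype(np.int64)
    k = np.asarray(keystream, dtype=np.int64)
    c = np.empty_like(p)
    prev = c0
    for i in range(p.size):
        prev = (p[i] + prev + k[i]) % 256
        c[i] = prev
    return c.reshape(img.shape).astype(np.uint8)

def inverse_diffuse(cipher, keystream, c0=0):
    """Invert the chain: p[i] = (c[i] - c[i-1] - k[i]) mod 256."""
    c = cipher.ravel().astype(np.int64)
    k = np.asarray(keystream, dtype=np.int64)
    p = np.empty_like(c)
    prev = c0
    for i in range(c.size):
        p[i] = (c[i] - prev - k[i]) % 256
        prev = c[i]
    return p.reshape(cipher.shape).astype(np.uint8)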
The flow chart of the encryption scheme is shown in Figure
Flow chart of the encryption process.
The process of the encryption algorithm proposed in this paper is reversible, but because it is uncertain which round of cipher the image
Some common images were used to verify the feasibility and security of the encryption algorithm. The original images, the encrypted images, and the decrypted images are shown in Figure
Original images, encrypted images, and decrypted images. (a) Lena. (b) Cameraman. (c) Fingerprint. (d) All white. (e) Encrypted Lena image. (f) Encrypted cameraman image. (g) Encrypted fingerprint image. (h) Encrypted all white image. (i) Decrypted Lena image. (j) Decrypted cameraman image. (k) Decrypted fingerprint image. (l) Decrypted all white image.
A good encryption scheme should have enough key space to resist a brute force attack. The keys used in this paper include the hash sequence
A good encryption scheme should be sensitive to the key. The decrypted image obtained by changing one key by 10^{−15} with the other keys unchanged is shown in Figure
Decrypted images obtained when one key is minimally changed. (a) Correct decrypted image. Decrypted image (b) when
The histogram of an image and the correlation between adjacent pixels are used to characterize it. An original image has obvious features: pixel values in some blocks are concentrated, which appears in the histogram as an uneven distribution of elements and in the correlation analysis as a very strong correlation between adjacent pixels. A good encryption algorithm breaks these distribution characteristics, makes the pixel distribution of the cipher image more uniform, and reduces the correlation between adjacent pixels, so that attackers cannot attack cipher images by statistical means and the algorithm can effectively resist statistical attacks. In this paper, the histograms of original and cipher images and the correlations between adjacent pixels are listed to show the ability of the algorithm to resist statistical attacks, and comparisons with other image encryption algorithms are added to demonstrate its advantages.
In Figure
Histograms of original images and cipher images: (a) Lena image, (b) encrypted Lena image, (c) cameraman image, (d) encrypted cameraman image, (e) fingerprint image, and (f) encrypted fingerprint image.
The correlation between adjacent pixels of the plaintext image is very strong. Breaking the correlation between adjacent pixels can enhance the ability of the encryption algorithm to resist statistical attacks. We randomly selected 10000 pixels from the original Lena image and the encrypted Lena image in the horizontal, vertical, and diagonal directions and listed these pixel values and their adjacent pixel values in Figure
Correlations of adjacent pixels: (a) horizontal, (b) vertical, and (c) diagonal correlations of the original image; (d) horizontal, (e) vertical, and (f) diagonal correlations of the encrypted image.
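The adjacent-pixel correlation reported in the table below is the Pearson correlation coefficient between randomly sampled pixels and their neighbors; this sketch covers the horizontal direction, with the sample count following the 10000-pixel sampling described in the text:

```python
import numpy as np

def horizontal_correlation(img, n=10000, seed=0):
    """Pearson correlation between n randomly chosen pixels and their
    right-hand neighbors. Vertical/diagonal variants shift the row
    index (and/or column index) instead."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    rows = rng.integers(0, h, size=n)
    cols = rng.integers(0, w - 1, size=n)
    x = img[rows, cols].astype(np.float64)
    y = img[rows, cols + 1].astype(np.float64)
    return float(np.corrcoef(x, y)[0, 1])
```

A natural image yields values near 1, while a well-encrypted image yields values near 0, matching the figures in the table.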
Correlations of original images and encrypted images.

Schemes     Images       Original image                    Encrypted image
                         Horizontal  Vertical  Diagonal    Horizontal  Vertical  Diagonal
This study  Lena         0.9677      0.9358    0.9020      0.0048      0.0001    0.0018
            Cameraman    0.9543      0.9131    0.8984      −0.0055     0.0048    −0.0011
            Fingerprint  0.7739      0.8153    0.6151      −0.0006     −0.0026   −0.0020
Ref. [      Lena         —           —         —           −0.0796     0.0166    0.0032
            Cameraman    —           —         —           −0.0398     −0.0387   −0.0090
Ref. [      Lena         0.9721      0.9739    0.9705      −0.0029     −0.0017   0.0004
            Cameraman    0.9634      0.9732    0.9449      0.0047      −0.0066   0.0031
In 1948, Shannon put forward the concept of information entropy, which solves the problem of quantifying and measuring information and can be used to judge its randomness. When the information entropy of a message is close to its ideal value, the message has good randomness; the information entropy of an image thus measures the degree of randomness of the image. The calculation method of information entropy is shown in formula (
Pixel values are distributed in the interval [0, 255], and the probability of each case is
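The entropy values in the table below follow the Shannon formula H = −Σ p_i · log2(p_i) over the 256 gray levels, whose ideal value for an 8-bit image is 8. A minimal sketch of the computation:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy H = -sum(p_i * log2(p_i)) over the 256 gray levels,
    where p_i is the relative frequency of gray level i."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A constant image (e.g., all white) has entropy 0, and a perfectly uniform distribution over all 256 levels has entropy exactly 8, which is why cipher-image entropies close to 8 indicate good randomness.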
Entropy of images.

                Lena    Cameraman  Fingerprint  All white
Original image  7.4532  6.9046     7.5945       0
Cipher image    7.9976  7.9974     7.9977       7.9979
Ref. [          7.9971  7.9971     7.9970       7.9970
Ref. [          7.9965  7.9964     7.9962       —
Ref. [          7.9968  7.9904     —            —
The number of pixel change rate (NPCR) and unified average changing intensity (UACI) are two indices used to measure the correlation degree between a cipher image and an original image as well as the antidifferential attack ability of the encryption algorithm. The closer the NPCR and UACI are to the ideal values, the stronger the antidifferential attack ability of the encryption algorithm. The calculation methods of NPCR and UACI are shown in formula (
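NPCR is the percentage of pixel positions at which two cipher images differ, and UACI is the mean absolute pixel difference normalized by 255; these standard definitions can be sketched as:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of positions where the two cipher images differ.
    UACI: mean absolute pixel difference normalized by 255.
    Both are returned as percentages; ideal values for 8-bit images are
    about 99.61% (NPCR) and 33.46% (UACI)."""
    c1 = c1.astype(np.int64)
    c2 = c2.astype(np.int64)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2)) / 255.0
    return npcr, uaci
```

In the differential-attack test, c1 and c2 are the cipher images of two plaintexts differing in a single pixel.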
NPCR and UACI.

Schemes     Images       NPCR (%)  UACI (%)
This study  Lena         99.5987   33.5501
            Cameraman    99.6231   33.5269
            Fingerprint  99.6597   33.5613
Ref. [      Lena         99.6521   33.3438
            Cameraman    99.6292   33.4140
Ref. [      Lena         99.58     33.08
            Cameraman    99.60     33.15
Ref. [      Lena         99.6078   28.6203
The anti-noise attack ability of an encryption system is one standard for measuring its robustness. During transmission, information is inevitably disturbed by noise, which distorts the cipher image and affects the decrypted image. Common noise types include Gaussian noise, Poisson noise, and salt-and-pepper noise. In this section, the anti-noise attack ability of the encryption system is analyzed: salt-and-pepper noise of different intensities is added to the cipher image using MATLAB and the result is then decrypted. The simulation results are shown in Figure
Decrypted images with different intensities of noise attacks. Noise intensity (a) of 0.01, (b) 0.04, and (c) 0.1; decrypted image with (d) 0.01 intensity noise, (e) 0.04 intensity noise, and (f) 0.1 intensity noise.
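The MATLAB noise test can be reproduced outside MATLAB with a sketch like the one below, which mimics imnoise's salt-and-pepper model: a `density` fraction of pixels is forced to 0 or 255 with equal probability (the seeding and function name are assumptions for the sketch):

```python
import numpy as np

def salt_and_pepper(img, density, seed=0):
    """Set a `density` fraction of pixels to 0 (pepper) or 255 (salt)
    with equal probability, mimicking MATLAB's
    imnoise(img, 'salt & pepper', density)."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape) < density   # which pixels are corrupted
    salt = rng.random(img.shape) < 0.5       # salt vs. pepper per pixel
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy
```

The decrypted-image correlations in the table below then quantify how much usable structure survives each noise intensity.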
Correlations of decrypted images with different intensities of noise attack.

Noise intensity  Horizontal  Vertical  Diagonal
Original image   0.9677      0.9358    0.9020
0.01             0.8584      0.8214    0.7983
0.04             0.6262      0.5934    0.5714
0.1              0.3303      0.2831    0.2840
The anti-occlusion attack ability of an encryption algorithm reflects the degree to which the ciphertext is scattered. If the scrambling degree of the algorithm is insufficient, the occluded area may completely lose its original characteristics after decryption. The occlusion attack test occludes part of the cipher image and then observes how well the decrypted image is restored. The clipped cipher images are shown in Figures
Decrypted images with occlusion attack. (a) Occlusion
Correlations of decrypted images with different degrees of occlusion attack.

Occlusion        Horizontal  Vertical  Diagonal
Original image   0.9677      0.9358    0.9020
                 0.8688      0.8378    0.8161
                 0.6399      0.6007    0.5880
                 0.1430      0.1329    0.1202
                 0.1397      0.1457    0.1296
In this paper, an image encryption algorithm based on block diffusion and chaotic systems is proposed by combining the traditional DES encryption model with chaotic systems and DNA coding technology. By using DNA coding operations, select-ciphertext output, and key verification, the method compensates for the high computational complexity and inconvenient key management that arise when traditional text encryption algorithms are applied to digital images. The experimental results show that the algorithm has a large key space and can resist statistical attacks, differential attacks, occlusion attacks, noise attacks, etc. It can be widely used for the secure transmission of image information.
The data used to support the findings of this study are included within the article.
The authors declare that they have no conflicts of interest.
This paper was supported by the National Natural Science Foundation of China (grant nos. 61572446, 61602424, and U1804262), Key Scientific and Technological Project of Henan Province (grant nos. 174100510009 and 192102210134), and Key Scientific Research Projects of Henan High Educational Institution (18A510020).