Steganography Algorithm Based on the Nonlocal Maximum Likelihood Noise Estimation for Mobile Applications

In recent years, thanks to Internet services, daily activities that used to require physical movement have become accessible to any user. As a result of such interconnection, millions of people from different countries can now communicate through the Internet, generating a great flow of data and classified information. Information on the Internet can be stolen, intercepted, anonymized, or even destroyed, resulting in cases of infringement of intellectual property rights and the loss or damage of data. In such a globalized and interconnected world, solid security measures have become increasingly important to ensure data privacy and confidentiality during transit. Nowadays, there is a variety of security mechanisms, such as steganography, an information hiding technique that protects intellectual property by allowing the transmission of hidden data without drawing suspicion. To achieve these criteria, an adaptation of the nonlocal maximum likelihood filter is proposed. Filters of this class are generally used in images that require a high level of irregular pattern detection, based on the statistical dependence of the underlying pixels of the analysis area; applied in the wavelet domain, they act as detectors of edges and/or discontinuities, allowing greater selectivity when inserting information into the image. The proposed adaptation strengthens the detection of the areas with the highest probability of containing noise, which are suitable areas in which to insert information imperceptibly, in both quantitative and qualitative terms, as presented in Results and Discussion.


Introduction
Steganography is the science of hiding information by means of a cover medium in such a way that even the presence of the message is invisible to any eavesdropper. The apparently harmless object is known as the "host" and the contained information as the "payload". Host objects can range from text files to images, audio, and/or video. The most common example is the use of images; they are used as hosts due to their omnipresence in our day-to-day activities, as well as the high level of redundancy in their representation. Image steganography techniques are classified according to their domain; the most frequently used are the spatial-domain and frequency-domain techniques. However, there are now some techniques that combine both domains, with the advantage of being adaptable to the nature of the image [1].
Spatial domain techniques hide the information by directly manipulating the pixels of the image. They are also characterized by the simplest schemes, a short implementation time [1], reduced hardware requirements, and low time complexity.
In the spatial domain, a steganographic algorithm modifies the data of the host image directly in the spatial domain; the most representative algorithm in this domain is the substitution of the least significant bit (LSB). Although this method is simple, it has a greater impact in comparison with other methods.
In general, the insertion mechanism is carried out from the LSB up to the 4th LSB. It can be assumed that inserting in the 4th LSB generates greater visual distortion in the host image, since the hidden information is seen as "unnatural". Similarly, the distortion appears at the time of recovery of the inserted image. This algorithm has been refined in order to decrease the distortions present in both the host image and the recovered image; one of the most widely used methods is LSB-OPAP [2], which adapts the insertion of the information following the considerations of the LSB algorithm. It consists of the following steps. Consider p_i, p_i', and p_i'', the values of the i-th pixel in the host image C, the stego-image C' obtained by the simple LSB substitution method, and the refined stego-image obtained after the OPAP, respectively. With δ_i = p_i' − p_i, the insertion error between p_i and p_i' is obtained. According to the embedding process of the simple LSB substitution method described above, p_i' is obtained by directly replacing the k least significant bits of p_i with k message bits, under the condition −2^k < δ_i < 2^k. The OPAP then adjusts p_i' by ±2^k, when doing so reduces the error without altering the embedded bits, to obtain p_i''. As a result, it shows a slight improvement over the traditional LSB.
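The interaction between plain LSB substitution and the OPAP refinement can be sketched as follows. This is a minimal Python reading of the scheme described above; the function names are ours, not from [2]:

```python
def lsb_embed(pixel, bits, k):
    """Replace the k least significant bits of an 8-bit pixel with k message bits."""
    return (pixel & ~((1 << k) - 1)) | bits

def opap_adjust(p, p_prime, k):
    """OPAP refinement sketch: shift p' by 2**k toward p when that reduces
    the embedding error, without touching the k embedded LSBs."""
    delta = p_prime - p
    if delta > 2 ** (k - 1) and p_prime - 2 ** k >= 0:
        return p_prime - 2 ** k
    if delta < -(2 ** (k - 1)) and p_prime + 2 ** k <= 255:
        return p_prime + 2 ** k
    return p_prime
```

For example, embedding the bits 000 in the 3 LSBs of pixel 127 gives p' = 120 (error 7); OPAP moves it to 128 (error 1), and the embedded LSBs are still 000.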
Recently, to improve visual quality and security against histogram attacks, an LSB-based approach with a capacity of 1 bpp was proposed, reducing the probability of pixel change to a modification of about 1/3 of the pixels. Due to the smaller modification of the stego-image pixels, it improves visual imperceptibility and also resists LSB-based detection attacks, that is, HCF-COM steganalysis [3].
Yuan et al. [4] proposed a method based on multilayer adaptive steganography. The insertion of the secret image adapts to regions of different texture in the host image. The information is inserted using the LSB algorithm and can be extracted using an XOR-based operation. This method resists modern steganalyzers such as SPAM and AUS. One of the main problems of LSB-based methods is that, although they are simple to understand and apply and even flexible to integrate with other methods, their main vulnerability is the direct relationship between insertion capacity and the visual quality of the stego-image: the latter degrades as insertion approaches the maximum LSB level of a pixel.
The frequency domain, or transform domain, is another technique used; it relies on diverse transformations that represent the information in the frequency or time-frequency domain.
To avoid the problems presented in the spatial domain, processing in the frequency domain has been an adequate tool for signal filtering, pattern recognition, and image compression. A problem that is complicated to solve in the spatial domain becomes easy to deal with in the frequency domain, because the sharp edges and transitions in an image contribute significantly to the high-frequency content of its transform.
Techniques applied in the frequency domain insert the information through transforms that extract the frequency components of the image, in which the zones where the "visual quality" of the image will not be affected can be identified more precisely. These techniques are often used because they extract characteristics of the host image that represent its high and low frequencies, where the high frequencies represent the edges or contours of the host image, thus allowing an exchange of host image values for values of the image to hide. Therefore, to search for the right pixels in which to hide data, transformation-based schemes are a reasonable approach. In these schemes, the host image is transformed due to the approach's orientation toward extracting the main frequency characteristics.
Among these, there is the Discrete Cosine Transformation (DCT) technique and the Discrete Wavelet Transformation (DWT) technique [5,6].
Some popular steganographic algorithms on the Internet that apply DCT are the following.
The Jsteg/JPHide algorithm has the following characteristics for the insertion of information [7].
Jsteg: (1) it is a steganographic tool based on LSB insertion; (2) the insertion is done by replacing the LSBs of the nonzero quantized DCT coefficients with the secret message bits. JPHide: (1) the quantized coefficients are randomly selected with the help of a pseudo-random number generator that can be controlled with a key; (2) the second LSB can also be modified in JPHide; and (3) the Jsteg capacity is equal to the number of DCT coefficients whose values are not equal to 0, 1, and -1 (this condition is chosen to avoid ambiguity in the extraction of secret bits).
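The capacity rule just stated can be expressed directly: one bit per quantized DCT coefficient whose value is not 0, 1, or -1. A minimal sketch (the helper name is ours, not from the Jsteg tool):

```python
def jsteg_capacity(coeffs):
    """Usable Jsteg-style capacity in bits: coefficients equal to 0, 1,
    or -1 are skipped, as they would create ambiguity at extraction time."""
    return sum(1 for c in coeffs if c not in (0, 1, -1))
```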
Another known algorithm is YASS (Yet Another Steganographic Scheme), which is explained below [8]: (1) The input image in the spatial domain is divided into blocks of fixed size known as big blocks (B blocks). Within each big block, an 8 × 8 subblock called the host block (H block) is randomly selected.
(2) The bits of the secret message are embedded into the DCT coefficients of the H block by quantization index modulation (QIM).
(3) With the help of Inverse Discrete Cosine Transform (IDCT) of block H, a JPEG image can be obtained.
(4) The advantages include the survival of the message bits in the active warden scenario; it also performs well against self-calibration-based steganalysis tools.
For example, for DWT, Subhedar et al. (2014) [1] obtain as their best result a value of PSNR = 54.819 dB with a 256 × 256 secret image. The authors propose a steganographic algorithm using an adaptation of the DWT called the redundant discrete wavelet transform (RDWT) together with QR factorization.
For the implementation of the DWT, Abdulaziz and Pang [1] use Linde-Buzo-Gray (LBG) vector quantization together with BCH block codes and one level of Haar discrete wavelet decomposition. Their results indicate that the algorithm offers good quality with few perceptual defects.
Nowadays, there are also techniques adaptable to the nature of the image, in which spatial and frequency domain techniques are combined [5,6].
Adaptive steganography is a special case of the two previous methods. It is also known as "insertion based on statistics", "masking", or "model-based". This method takes global statistical characteristics of the image before attempting to interact with the LSB/DCT/DWT coefficients.
The statistical values obtained from the image define where to make the changes. It is characterized by a random adaptive selection of pixels according to the host image, and by the selection of pixels in blocks with a large local standard deviation. The latter is intended to avoid areas of uniform color (smooth areas). This behavior causes adaptive steganography to seek images with existing or deliberately added noise that demonstrates color complexity.
Wayner [5] coined in his book the phrase "life in noise", pointing out the usefulness of inserting data in noise. This method has proven robust with respect to compression, cropping, and image processing [5].
Chin-Chen et al. [9] proposed an adaptive technique applied to the LSB substitution method. Their idea is to exploit the correlation between neighboring pixels to estimate the degree of statistical belonging. They discuss the options of having 2-4 crossing lines. The payload (embedding capacity) obtained is 355588 bits.
Hioki et al. [10] present an adaptive method called "data embedding based on block complexity" (ABCDE). The insertion is done by replacing selected blocks of high noise content in the image with other noisy blocks obtained by embedding the data. Suitability is identified by two complexity measures that adequately discriminate complex blocks from simple ones, based on run-length irregularity and border noisiness. The hidden message becomes part of the noise of the image.
Regardless of their field of application, all image steganography techniques should focus on the following three main points: where to hide the information inside the image, the security level when embedding the information into the image, and the security level of the payload in case of intrusion. There are many steganography algorithms, and each addresses these points differently.
In order to use steganography in images, it is necessary to select the specific regions in which to embed the image; these regions will be referred to as Possible Embedding Regions (PEB). A PEB can be any section or object inside the image able to produce the minimum possible distortion. Appropriate PEBs can be recognized by abrupt changes in the values of surrounding pixels, which are interpreted as the edges of the objects inside the image. The edges are considered appropriate sections for hiding information because human sight is less sensitive to shape or color distortions inside the peripheral areas of an object, combined with the fact that pixel values there are randomly located. The random pixel distribution allows the payload to be dispersed in the stego-image, reducing its detectability. This paper presents an adaptive steganography mechanism that employs three security levels for the retrieval of the embedded information. This embedding mechanism uses the spatial as well as the frequency domain to detect the edges of the PEB. The three levels require a primary key for each embedded datum, additionally verifying whether the data is correct or not. In case of noncompliance with the three security levels, the retrieval of the information can be blocked, which provides an advantage. Finally, the quantitative results of its performance will be shown. In this work, the cover images have 2 different dimensions, 1024 × 768 and 256 × 256 pixels, while the images to be hidden have 4 different dimensions: 712 × 534, 1024 × 768, 576 × 768, and 256 × 256 pixels.

Theory
In this section, the proposed mechanism for using steganography in images is described in detail. The proposed mechanism is adaptive, which means that it analyses the spatial and the frequency domains of the PEB edges for the possible embedding of information.

Discrete Wavelet Transformation (DWT).
The DWT is used to analyze an image in its spatial and frequency domains, providing a time-frequency representation of the image. The DWT is computed by repeatedly filtering the image along each row and each column in order to obtain the different DWT coefficients. The DWT is useful because it analyses the information at high and low frequencies for each pixel. The cover image goes through a bank of finite-impulse-response filters, where the output of each filter is downsampled by two (wavelet transform).
The image processed with the low-pass filter provides a smooth wavelet approximation of the input image and, with the high-pass filter, a version containing the edges of the image [11].
We have used the Haar DWT to decompose the cover image; in the case of the stego-image, both a Haar and a Daubechies 4 (db4) decomposition are used [12].
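The row/column filtering described above can be sketched as follows. This is a minimal unnormalized 2D Haar decomposition in NumPy, written as our own illustration (not the authors' implementation); the subband naming follows the LL/LH/HL/HH convention used in this paper:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2D Haar DWT via averages/differences, returning
    the LL, LH, HL, and HH subbands (each half the size of the input)."""
    a = img.astype(float)
    # filter rows: low-pass (average) and high-pass (difference), downsampled by 2
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # filter columns of each half-image the same way
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

On a perfectly smooth (constant) image, all detail subbands are zero, which is why the detail subbands isolate edges and noise.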

Fourth Moment Wavelet.
During the wavelet transform of the image, four subimages are obtained in different frequency bands. The submatrices obtained are treated as random variables. The fourth moment wavelet (FMW) is computed on these submatrices, considering the following model [6,7]:

g(x, y) = f(x, y) + n(x, y),  (1)

where g(x, y) is the image with additive noise, f(x, y) is the original image, n(x, y) represents the white Gaussian noise, and (x, y) is the current position of the pixel. From (1), it can be understood that g(x, y) is a 2D random vector with N consecutive samples of a real process following a Gaussian distribution with zero mean. From this consideration, the FMW is obtained as follows:

m4 = E{(f(x, y) + n(x, y))^4}.  (2)

From (2), the mean and the standard deviation of the FMW are obtained; the mean of m4 is then obtained from the normal distribution [18,19], where E represents the expected value of the submatrix and p represents the probability of its occurrence in the sample.
For the proposed method, the variability of the probability distributions is considered, and a selection threshold is finally chosen, based on the traditional considerations for hard and soft thresholding. The FMW obtained from the detail decomposition coefficients will have a value higher than 3σ⁴, the fourth moment of the Gaussian noise, due to the conditions of variability of the components of the image [6,7].
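The comparison against 3σ⁴ can be sketched as follows. This is our reading of the criterion above (for zero-mean Gaussian noise with standard deviation σ, the fourth moment is exactly 3σ⁴, so a larger empirical fourth moment signals non-Gaussian structure); the function names are illustrative:

```python
import numpy as np

def fourth_moment(subband):
    """Empirical fourth central moment of a wavelet subband."""
    x = subband.ravel().astype(float)
    return np.mean((x - x.mean()) ** 4)

def exceeds_gaussian_fmw(subband, sigma):
    """True when the subband's fourth moment exceeds 3*sigma**4, the fourth
    moment of zero-mean Gaussian noise with standard deviation sigma."""
    return fourth_moment(subband) > 3.0 * sigma ** 4
```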

Noise-Level Estimation Mechanism.
Noise-level estimation in images requires improved accuracy of the filters in order to distinguish the edges and borders of the image and thus separate the noise from the edges. Quality in image manipulation allows accurate insertion in the possible embedding regions detected as edges and noise [20]. In this research, a nonlocal filter is used to detect noise in the images, which consists of the progressive selection of image regions through each layer of their spatial composition. This type of filter is a multispectral extension of the nonlocal maximum likelihood filter (NLML) [21].
Besides, given that the noise standard deviation (SD) is an important reference for all nonlocal filters, an adaptation of the Maximum Likelihood Estimation (MLE) of noise levels is presented and compared with local and nonlocal MLE methods.
The SD is an important parameter for Nonlocal Means (NLM) and NLML filtering [22]. Accordingly, several noise estimation methods based on the intensity of the underlying pixels of the image have been developed. However, this might sometimes not be enough to guarantee an accurate identification of the edges of the image. Because of this, the method of [22] is used, with the pertinent adaptations for edge estimation. In this respect, a Linearized Maximum Likelihood (LML) method has been proposed as an edge detector in images [23].

Estimation of Noise Standard Deviation.
The precise estimation of σ is essential for filtering quality, as well as for other image processing tasks such as segmentation and the estimation of parameters to detect edges [24]. The LML approach has been proposed for when the image information does not provide the information needed to detect edges and to determine the precise value of σ [25][26][27].
For this paper, we propose the use of a modified Noise Estimation filter using Local Maximum Likelihood (NE-LML) to detect thresholds, employing the FMW as the selection process. During edge detection, the adaptation of the NE-LML was employed. The estimation of σ is made for each layer of the image, which we denote as k; the likelihood, based on the Rician distribution of the intensities with respect to the unknown values of σ and A_k, is maximized through the following equation [28]:

(σ̂, Â_k) = arg max over (σ, A_k) of Σ log[ (S_k / σ²) exp(−(S_k² + A_k²) / (2σ²)) I₀(S_k A_k / σ²) ],

where I₀ is the modified Bessel function of the first kind. For an optimal estimation, S_k represents the intensity of the image, k represents the decomposed layer, and A_k represents the range of the image in layer k. The combination of the FMW and the NE-LML filter allows the detection of appropriate areas to embed information in order to obtain the stego-image. The basic operation of the HDWT when applied to a two-dimensional signal containing N × N samples is the following: each row of the image is filtered with a low-pass and a high-pass filter (LPF and HPF), and the output of each filter is downsampled by two in order to produce the images known as L and H. L is the image low-pass filtered (LPF) and downsampled in the x direction, and H is the image high-pass filtered and downsampled in the x direction.
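The maximization step can be sketched numerically. The following is our simplification of the estimation (not the authors' NE-LML implementation): it assumes the amplitude A_k is known and searches σ over a small candidate grid by minimizing the Rician negative log-likelihood:

```python
import numpy as np

def rician_neg_loglik(samples, A, sigma):
    """Negative log-likelihood of Rician-distributed magnitudes S given
    amplitude A and noise level sigma (the density written above)."""
    s2 = sigma ** 2
    ll = (np.log(samples / s2)
          - (samples ** 2 + A ** 2) / (2.0 * s2)
          + np.log(np.i0(samples * A / s2)))   # np.i0 = modified Bessel I0
    return -np.sum(ll)

def estimate_sigma_ml(samples, A, sigmas):
    """Grid-search ML estimate of sigma over candidate values."""
    return min(sigmas, key=lambda s: rician_neg_loglik(samples, A, s))
```

With enough samples, the candidate closest to the true σ wins; in practice, a continuous optimizer would replace the grid.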

Methods and Materials
Afterwards, each column of the new images is filtered with a LPF and a HPF and downsampled by two to produce 4 subimages (LL, LH, HL, and HH). LL is the original image filtered with the LPF in the horizontal and vertical directions, sampled by two. LH is the original image filtered with the LPF in the vertical direction, sampled by two. HL is the original image filtered with the HPF in the vertical direction, sampled by two. HH is the original image filtered with the HPF in the horizontal and vertical directions, sampled by two. The four subband images contain all the information present in the original image, but the dispersed nature of LH, HL, and HH makes them susceptible to compression [29].
The DDWT is defined in the same manner as the HDWT. If the input signal f has N values, then level 1 of the db4 transform maps the signal f to the wavelet decomposition subbands LH and HL. The main difference between the HDWT and the DDWT lies in the definition of the scaling and wavelet functions. The DDWT belongs to the orthogonal wavelet family, defined in a discrete manner and characterized by the number of vanishing moments for a given support. Each wavelet of this type generates a multiresolution analysis of different signal frequencies.

Detection of Threshold Noise Estimator.
White Gaussian noise occurs in a generalized way because of the frequency components in which it appears. In the case of images, this type of noise presents a normal distribution with zero mean and unknown variance σ². In this specific case, to detect this kind of noise, we used the Gaussian white noise estimator t_n proposed by [30]:

t_n = σ √(2 ln n),

where σ is the noise standard deviation and n is the signal length.
The SD is estimated using the first wavelet decomposition level, which contains a high frequency band of the image and a high number of noisy coefficients. The main aim of the noise estimator is to quantify the noisy coefficients inside the decomposed image subbands. To achieve this, the estimation methods are used to provide a coefficient reduction. The main idea of a noise estimator is to detect the noisy coefficients in order to preserve the information related to the image. We propose the implementation of the FMW, which serves as a threshold for noise discrimination, along with the adapted NE-LML.
The FMW of an image can be considered as m4 ≥ 3σ⁴_{n1}, where σ_{n1} represents the noise in the host image obtained from the first wavelet transform [18,19].
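The SD estimation from the first decomposition level and the estimator t_n can be sketched as follows. The median-based SD rule over the first-level diagonal subband is a common choice that we assume here for illustration; it is not necessarily the authors' exact estimator:

```python
import numpy as np

def estimate_noise_sigma(hh):
    """Robust noise SD estimate from the first-level HH subband
    (median-absolute-deviation rule for Gaussian noise)."""
    return np.median(np.abs(hh)) / 0.6745

def universal_threshold(sigma, n):
    """Gaussian white-noise threshold t_n = sigma * sqrt(2 ln n) of [30]."""
    return sigma * np.sqrt(2.0 * np.log(n))
```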
The FMW of the subband LL contains information larger than 3σ⁴_{LL}, where σ²_{LL} denotes the noise power of the subband LL. Using this, the coefficients that represent noise can be localized and the threshold for the selection of the embedding region can be proposed.
The noise power can then be estimated from σ_c, the standard deviation of the cover image.
Finally, the noise power after passing through the wavelet decomposition is σ²_{LL} = G² σ²_n, where G represents the gain of the low- and high-pass filters. Generalizing this formula, the noise power for any decomposition level j of the subband LL is σ²_{LL_j} = G^{2j} σ²_n. If the filter gain is taken as in [19], the noisy coefficients are then detected with the condition m4(LL_j) ≥ 3σ⁴_{LL_j}, which finally yields the threshold value for the embedding of information using the noise estimator function of the NE-LML filter.
The function proposed for the embedding of information is named the noise estimator for the embedding of information based on the local maximum likelihood of the image (NEII-LML); from it, the threshold criterion used in the remainder of this work is obtained.

Proposed Method
In summary, the algorithm for the information concealment process is presented in Algorithm 1.
The proposed concealment method consists of two sections, which are concealment and information mapping, generating the stego-image. The sections of the proposed steganography algorithm will now be explained in detail.

Information Concealment.
Algorithm 1:
1: Preprocessing: the RGB cover image and the image to be hidden are separated into their respective layers. Each layer is an 8-bit grayscale image.
2: for each layer do
3: Cover layer decomposition: the Haar discrete wavelet transformation is applied to the cover layer, W(C) = S_C, where W(.) denotes the decomposition, C is the cover layer, and S_C = (LL, LH, HL, HH).
4: Hidden layer decomposition: the Haar and Daubechies 4 discrete wavelet transformations are applied to the layer to be hidden.
6: Noise detection: using a 3 × 3 kernel as the noise detection mechanism based on the local maximum likelihood, the threshold of change required for embedding is determined.

To analyze the concealment process of an image with the proposed algorithm, it is necessary to separate the cover image and the image to be hidden into their respective layers; in this case, we worked in the RGB color space. Because of the aforementioned, the layers obtained from each image correspond to the colors red (R), green (G), and blue (B). Afterwards, one "Haar" wavelet decomposition level is applied to each layer of the cover image (IC); as a result, four subimages are obtained: the approximation (LL), the horizontal details (LH), the vertical details (HL), and the diagonal details (HH).
For the image to be hidden (IO), each of its layers receives two levels of wavelet decomposition. In the first level, the "Haar" wavelet is employed, obtaining 4 subbands (ll, lh, hl, and hh) for each layer of the image to be hidden; in the second decomposition level, the Daubechies 4 (db4) wavelet is applied to the approximation (ll), obtaining 4 new subbands: the approximation (ll1), the horizontal details (lh1), the vertical details (hl1), and the diagonal details (hh1).
Once they are obtained, the subbands to be employed are separated in the following way: from the layers of IC, the LH, HL, and HH subbands are used; they contain the horizontal, vertical, and diagonal detail coefficients; that is to say, they keep the information from the edges of the image, which, when modified, is not noticeably affected in comparison with the original. In the case of the layers of IO, the subband ll1 is used, which contains the approximation coefficients corresponding to the second level of decomposition. This subband is chosen because the information it contains defines the majority of the image to conceal [31].
Prior to the information concealment, a scaling adjustment is performed, based on the work in [24]. This process is carried out on ll1 to prevent the inserted values from visually altering the information in IC and thus provoking changes in the resulting image. Because we are working in an RGB space, the employed image has a depth of 24 bits. Taking this into consideration, the adjustment operation is described by the following equation [13]:

ll1' = ll1 / √(2²⁴),

where ll1 is the approximation coefficients subband of the image to conceal from its second level of wavelet decomposition, and the factor √(2²⁴) corresponds to base 2 (bit) raised to the image depth (24 bits). As a result of the scaling adjustment, adjusted values of the ll1 subband are obtained; the new information subbands are identified as New LH, New HL, and New HH. These new information subbands are traversed through a windowing process, using a 3 × 3 detection kernel (K_m, m = 1, 2, ..., 9), which scrolls through the subband to detect in detail the values prone to be replaced. This kernel acts as a noise detector in the subband HH, using the following condition: the corresponding noise threshold is calculated every time the kernel is positioned in the subband; if any of the values is less than or equal to the obtained threshold, this value is exchanged for one of the values to hide from subband ll1, where ll1 is the second level of wavelet decomposition of IO, K_{i,j} is the current position of the kernel in the subband, and LL is the first level of decomposition of IC.
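The scaling adjustment and the kernel replacement rule just described can be sketched as follows. The division by √(2²⁴) reflects our reading of the adjustment equation, and the function names are illustrative, not the authors' code:

```python
import numpy as np

def scale_down(ll1, depth=24):
    """Scaling adjustment applied to the hidden image's ll1 subband before
    embedding: divide by sqrt(2**depth) so inserted values stay small."""
    return ll1 / np.sqrt(2.0 ** depth)

def embed_block(hh_block, hidden_vals, threshold):
    """Kernel embedding sketch: each coefficient of the HH window whose
    magnitude is at or below the noise threshold is replaced by the next
    (already scaled) value of the hidden subband ll1."""
    out = hh_block.astype(float).copy()
    it = iter(hidden_vals)
    for idx in np.ndindex(out.shape):
        if abs(out[idx]) <= threshold:
            v = next(it, None)
            if v is None:
                break                      # no more hidden values to place
            out[idx] = v
    return out
```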

Mapping Embedded Information.
To ensure that the embedded data can be recovered, the substitution of the values proposed in this research is done in the subband New HH, in order not to alter the calculation of the threshold value at the moment of applying the Inverse Discrete Wavelet Transformation (IDWT). In the subband New LH, a change is made to store the original value of the HL subband in the corresponding position. In the subband New HL, the change is made according to a mapping function in which HL(x, y) is the current value of subband HL, ll1(x₁, y₁) is the current value of subband ll1, NewHL(x, y) is the new value of subband HL, (x, y) is the current position in the subband obtained from IC, and (x₁, y₁) is the current position in the subband obtained from IO.
The mapping condition was proposed to guarantee minimal distortion and information retrieval, so that the complement of the operation between the original HL subband and the value to hide from subband ll1 is stored in subband New HL, according to the condition given by (15).
Meanwhile, the complement of the obtained value (key) is stored in subband New LH for New HL, according to the condition given by (16). In this way, subband New HL is used as another level of access security to the information, called the key. With the generated key and applying the complement with New LH, the positions where the embedding was carried out are secured.
Once the subband ll1 values are hidden inside subband New HH, the reconstruction process through the "Haar" wavelet is carried out, using the subbands LL, New LH, and New HH in each layer obtained according to the color space, in this case RGB. As a result, a new image, known as the stego-image, is obtained.

Image Recovery.
In summary, the algorithm for the image recovery process is shown in Algorithm 2.
In order to carry out the extraction process using the proposed algorithm, it is necessary to separate the RGB steganography image (SI) into its layers; the separation yields the red (RSI), green (GSI), and blue (BSI) layers. Afterwards, each layer receives one level of wavelet decomposition, employing the "Haar" wavelet. As a result of the decomposition of the RGB layers, four subbands are obtained, corresponding to the approximation coefficients (LLSI), the horizontal detail coefficients (LHSI), the vertical detail coefficients (HLSI), and the diagonal detail coefficients (HHSI).
Once the first level of wavelet decomposition is applied, the corresponding subbands are obtained; the recovery map is stored in subband LHSI, which allows us to identify the areas where the information of the IO is stored.
Subband HL is used as a key, which allows confirming that the values localized at the positions given in the map correspond to the hidden image; finally, in subband HH, the original values of the hidden image are stored.
With the purpose of identifying the correct embedded information, it is necessary to use the values stored in subband New HL, because the generated key and the value stored in subband New HH have to coincide; the result corresponds to one of the original values of IO. Next, the subband identified as New LL is created, in which the extracted values will be stored. From the subband separation, the 3 × 3 kernel is defined; it is used to extract the position values where there might be a value corresponding to the hidden image. This kernel runs through the subband LH as a sliding window to extract the values in the correct order for the reconstruction.

Algorithm 2:
1: Preprocessing: the RGB stego-image is separated into its respective layers.
2: for each layer do
3: Stego layer decomposition: the Haar discrete wavelet transformation is applied to the stego layer, W(E) = S_E, where W(.) denotes the decomposition, E is the stego layer, and S_E = (LL1, LH1, HL1, HH1).
14: Generate recovered layer: the inverse discrete wavelet transform with Daubechies 4 is applied to the recovered subbands to form the recovered subband ll, W⁻¹(S_r) = ll, where W⁻¹(.) denotes the inverse decomposition and S_r = (New LL, lh1, hl1, hh1); then the inverse Haar discrete wavelet transform is applied to the recovered subband ll and the subbands lh, hl, and hh to form the recovered layer, W⁻¹(S) = R, where S = (ll, lh, hl, hh) and R is the recovered layer.
Every time the kernel is positioned in the subband, a verification is executed for each of the values contained in the kernel; in this verification, the key obtained from subband HL is used. If a hidden value is found in the corresponding position, the complement of the operation performed when the image was hidden must be applied, while subband HH stores the original values of the hidden image. Thus, the result should be equal or approximately equal to the original value stored in subband LH, which is used as a map. The function used for the verification is defined with the following elements: N is the value to be corroborated with the map (subband LH) to obtain the position of the value to be extracted; HL(x, y) is the value of subband HL in the current position and stores the complement of the operation performed when the image was hidden; HH(x, y) is the value of subband HH in the current position and stores the original value of the hidden image; Kernel(i, j) is the kernel value in the current position.
Once the verification is performed, the result is corroborated with the value of the map contained in subband LH at the Kernel position. If the difference between these values lies within the range -1 to 1, then the value in the current position of subband HH corresponds to a value of the hidden image, and the identified value is copied into the subband New LL.
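As a hedged sketch of the extraction step above (the exact way the key combines with the HL complement is an assumption, and all function and variable names are hypothetical), the sliding-window verification can be written as:

```python
import numpy as np

def extract_hidden_values(lh, hl, hh, key, tol=1.0):
    """Sketch of the extraction verification (hypothetical names).

    A 3x3 kernel slides over subband LH (the map); at each position a
    candidate value is rebuilt from the complement stored in HL and the
    key (assumed additive combination) and checked against the map.
    When the difference lies within [-tol, tol], the original value
    stored in HH is copied into the New LL subband.
    """
    h, w = lh.shape
    new_ll = np.zeros_like(lh)
    for y in range(h - 2):
        for x in range(w - 2):
            for i in range(3):
                for j in range(3):
                    # candidate rebuilt from HL complement and key (assumption)
                    n = hl[y + i, x + j] + key
                    # corroborate with the map stored in LH
                    if abs(n - lh[y + i, x + j]) <= tol:
                        new_ll[y + i, x + j] = hh[y + i, x + j]
    return new_ll
```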
Once all the values of the hidden image are extracted, an adjustment operation is carried out to recover the original values of the hidden image.
Since all of this is done in the RGB color space and all of the images have a depth of 24 bits, the adjustment operation is described as follows: where New LL is the approximation coefficients subband of the extracted image at the second level of wavelet decomposition, and sqrt(2^24) is the adjustment factor, corresponding to base 2 (bit) raised to the image depth (24 bits). As a result of applying the adjustment operation, a new subband New LL is obtained. This is used, along with the second-level decomposition subbands of the hidden image, lh1, hl1, and hh1, to perform the wavelet recomposition.
This process employs the wavelet "db4" and yields the approximation coefficients subband ll, corresponding to the first level. Then, the recomposition is performed again using the resulting subband ll and the subbands lh, hl, and hh; for this level, the "Haar" wavelet is used.
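The recomposition relies on invertible wavelet transforms. As a minimal illustration (not the paper's implementation), a single-level orthonormal Haar analysis/synthesis pair can be written in plain NumPy; the second-level "db4" step is omitted here:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D orthonormal Haar analysis: returns (ll, lh, hl, hh)."""
    p00, p01 = x[0::2, 0::2], x[0::2, 1::2]
    p10, p11 = x[1::2, 0::2], x[1::2, 1::2]
    ll = (p00 + p01 + p10 + p11) / 2.0   # approximation
    lh = (p00 - p01 + p10 - p11) / 2.0   # horizontal detail
    hl = (p00 + p01 - p10 - p11) / 2.0   # vertical detail
    hh = (p00 - p01 - p10 + p11) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Single-level 2-D orthonormal Haar synthesis (exact inverse)."""
    h, w = ll.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out
```

A round trip through analysis and synthesis reconstructs the layer exactly, which is what guarantees lossless recovery of the embedded values at this stage.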
The layers obtained in the reconstruction process are combined according to the color space in which the process was performed. As a result, a new image is obtained, called the recovered image.

Results and Discussion
(1) The proposed algorithm was used to hide the selected image inside the cover image, obtaining a new image tagged as stego-image.
(2) Once the stego-image was obtained, the cover and the stego-image were compared with the following criteria:
(a) Correlation: it represents the statistical dependency; it establishes the linear relation between the changes in magnitude and direction of two different signals. The correlation is defined in (20) as

corr(I, K) = sum_i sum_j (I(i,j) - mu_I)(K(i,j) - mu_K) / sqrt(sum_i sum_j (I(i,j) - mu_I)^2 * sum_i sum_j (K(i,j) - mu_K)^2). (20)

(b) The mean square error (MSE): the mean square error is a risk function between two images that measures the squared loss between the expected value and the obtained value; that is to say, it reflects the difference between both images with respect to the expected values. Equation (21) served to calculate this coefficient:

MSE = (1 / (m * n)) sum_{i=1}^{m} sum_{j=1}^{n} (I(i,j) - K(i,j))^2, (21)

where MSE is the mean square error, I is the original image of size m x n, and K is the obtained image of size m x n.
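As a sketch, the correlation and MSE criteria of Eqs. (20) and (21) can be computed with NumPy as follows (function names are ours):

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two images, Eq. (20)."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    return np.corrcoef(a, b)[0, 1]

def mse(a, b):
    """Mean square error between two images of equal size, Eq. (21)."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))
```

For identical images the correlation is 1 and the MSE is 0, the best possible results reported in the tables.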
The MSE serves as a base to calculate the next metric used. The peak signal-to-noise ratio (PSNR) is a term used to define the relation between the maximum possible power of an image and the noise that affects it. In this case, the noise is the presence of the new information embedded during the concealment, or the loss of information during the retrieval of the information from the stego-image. As the MSE approaches zero, the PSNR tends to infinity, which means that the image compared with the original is a faithful copy. Equation (22) was used to obtain this coefficient:
PSNR = 10 * log10(MAX_I^2 / MSE), (22)

where PSNR is the peak signal-to-noise ratio coefficient and MAX_I^2 is the squared maximum value in layer I; for the RGB color space, the maximum value is 255.
(c) Root mean square error is given by RMSE = sqrt(MSE). (23)
(d) Normalized absolute error and image fidelity can be expressed as

NAE = sum_i sum_j |I(i,j) - K(i,j)| / sum_i sum_j |I(i,j)|, (24)
IF = 1 - sum_i sum_j (I(i,j) - K(i,j))^2 / sum_i sum_j I(i,j)^2. (25)

(e) Histogram: the histogram of the images gives a notion of their spectrum and of how it is affected during the hiding and retrieval process. Besides, it allows the calculation of other metrics, as is the case of the standard deviation.
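A short NumPy sketch of the PSNR of Eq. (22), with MAX_I = 255 for 8-bit RGB layers; identical images yield an infinite PSNR, matching the text:

```python
import math
import numpy as np

def psnr(a, b, max_i=255.0):
    """Peak signal-to-noise ratio in dB, Eq. (22)."""
    err = np.mean((a.astype(float) - b.astype(float)) ** 2)
    if err == 0:
        return math.inf          # identical images: PSNR tends to infinity
    return 10.0 * math.log10(max_i ** 2 / err)
```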
(f) Standard deviation: the standard deviation of an image reflects the dispersion of the values around its mean; it is obtained as the square root of the variance of the image. In order to obtain the variance from the histogram of the image, (26) is employed:

sigma^2 = sum_i f_i (x_i - mu)^2 / sum_i f_i, (26)

where sigma^2 is the variance of the image, f_i is the appearance frequency of the value, x_i is the pixel intensity, and mu is the average intensity of the image.
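As a sketch, the variance of Eq. (26) can be evaluated directly from the 256-bin histogram of an 8-bit layer (function name is ours):

```python
import numpy as np

def variance_from_histogram(layer):
    """Variance of a layer computed from its histogram, Eq. (26):
    sigma^2 = sum_i f_i * (x_i - mu)^2 / sum_i f_i
    """
    freqs, _ = np.histogram(layer, bins=256, range=(0, 256))
    x = np.arange(256)                      # pixel intensities x_i
    total = freqs.sum()
    mu = (freqs * x).sum() / total          # average intensity
    return (freqs * (x - mu) ** 2).sum() / total
```

The standard deviation reported in Tables 8 and 9 is the square root of this value.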
Each of these measures was compared in each layer and in the image as a whole. The recovery process of the hidden image followed, obtaining a new image tagged as the recovered image.
Having obtained the recovered image, a second comparison was made between the latter and the image to be hidden, using the metrics described in point (2) for the case of the cover and the stego-image. Finally, the cover image and the image to be hidden were rotated, repeating the process from point (1). The rotation applied to the images was done according to what is established in Table 1, where the rounds column specifies the type of rotation applied to each image. The set of rounds is applied to each pair of images; since there are 2 types of rotation applied to the 8 pairs of images, 16 images are obtained in total.
From Figure 1, 4 sets in total are obtained. The rotation column refers to the rotation, in degrees, of each of the images. In the first round, each image receives a rotation of 0 degrees; in the second round, the cover image is rotated 45 degrees and the image to be hidden 0 degrees; in the third round, the cover image is rotated 0 degrees and the hidden image 180 degrees; finally, in the fourth round, the images are rotated 45 and 180 degrees, respectively.
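The four rounds described above can be generated as the cross product of the two rotations applied to each image; a small illustration (variable names are ours):

```python
# Rotations, in degrees, applied to the cover image and to the image to
# be hidden; the four rounds pair every combination, matching Table 1.
cover_rotations = (0, 45)
hidden_rotations = (0, 180)
rounds = [(cover, hidden) for hidden in hidden_rotations
          for cover in cover_rotations]
# rounds: (0, 0), (45, 0), (0, 180), (45, 180)
```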
As previously mentioned, the proposed algorithm was evaluated by means of image quality metrics applied to each steganographic image and to each recovered image obtained. Figure 2 shows a group of steganographic images obtained from the first round of tests, together with the quality metrics of each one, from which it can be seen that no alteration in the images is visible.
From (20), (21), (22), (23), (24), (25), and (26), the algorithm performance rate is calculated, comparing the newly obtained images with the original ones. From the correlation between the images, it is possible to establish their degree of similarity, based on the fact that the correlation coefficient reflects the level of linear relation between them, that is to say, the existing relation between the quantities of energy that they possess. When the coefficient approaches 1, the changes in energy, in terms of magnitude and direction, are similar; thus, it can be said that they are the same image. When it approaches 0, the coefficient reflects a drastic change between the images, which implies that the original image was drastically changed. In order to obtain this coefficient, we used (20).
In Tables 2 and 3, the correlation coefficients obtained are shown. Based on these data, the worst result in the concealment process was in test 6 during round 3, in which the stego-image differs by 1.5% with respect to the cover image. The best result was test 7 in rounds 1 and 3, in which the stego-image barely differs by 0.02% with respect to the cover image. Despite the fact that the stego-image differs in all the tests, the change is not visually perceptible. The best correlation per layer is in the red layer in test 7, in both round 2 and round 4; the best overall result is in test 7 in rounds 1 and 4, and the layer with the best correlation between the original image and the stego-image is the red layer. The worst correlation per layer was found in test 6 in round 3; in the same test and round, the worst general correlation was presented, and the layer with the worst correlation between the original image and the stego-image is the blue one.
In contrast with the results of the comparison between the cover and the stego-image, the results of the comparison between the recovered image and the image to be hidden, shown in Table 3, reflect that, in 75% of the cases, the same image was recovered. Moreover, in 81.25% of the cases, the images are visually perceived as identical. The worst result was in test 6, during round 3, in which the recovered image differs by 19.5% from the image to be hidden; this case coincides with the worst case identified in the concealment process. In this table, the best results were not marked because, in most cases, they present the best possible result, 1, which indicates that the stored image was recovered exactly; the layer with the best correlation identified is the red layer.
In order to validate the results, the MSE and PSNR metrics were used. The MSE allows identifying the level of alteration of the obtained image with regard to the original image, whereas the PSNR, when used with images, allows measuring the alteration level of the information with regard to its format. The PSNR of an RGB image in the JPG format lies between 35 dB and 55 dB, so images with values out of this range show drastic visual alterations, affecting their perception.
In Tables 4 and 5, the results of the MSE between the cover and the stego-image and between the recovered image and the image to be hidden are shown, respectively. As can be seen in Table 4, the lower the MSE, the lower the distortion in the resulting image and, therefore, the higher the quality of the stego-image. The lowest MSE was presented in test 2, round 2, green layer, where the best overall result for the stego-image according to this metric was also obtained. On the other hand, the worst result was presented in test 3, round 1, green layer; in the same test and round, the worst overall result of this metric was detected. When analyzing each layer, it was detected that, on average, the layer that presents the lowest MSE is the blue layer and the one that presented the worst results is the green layer; these results are at the end of Table 4.
In the case of Table 5, the results obtained allow us to observe that the MSE between the recovered image and the original image to be hidden is 0 in 75% of the cases; in other words, the best possible result is obtained. The worst result was presented in test 6, round 3, in the blue layer; in the same test and round, the worst overall result was obtained. When analyzing each layer, it was detected that, on average, the layer with the lowest MSE and the best quality is the red layer and the one with the highest MSE and the worst quality is the blue layer; these results are shown at the end of Table 5.
Tables 6 and 7 show the results between the cover and the stego-image and between the recovered image and the image to be hidden, respectively. These values confirm the distortion presented by the image. In the case of the cover images, as they lie in a range between 35 dB and 55 dB, the distortion is almost imperceptible, the most noticeable case being the one that approaches the lower limit. In the case of the recovered images, when exactly the same image is recovered, an MSE of 0 is obtained, so the PSNR tends to infinity, proving that the information is not lost and that the image has not been distorted; this excludes the cases in which the MSE is very large and the PSNR falls outside the range, making the visual distortion noticeable.
The results of Table 6 allow us to observe that there is a change between the cover images and stego-images; however, the higher the PSNR, the lower the change that can be perceived quantitatively. The best result was presented in the green layer in round 2, test 2; in the same test and round, the best overall result was obtained. The worst result was detected in round 1 in the red layer; in turn, in the same test and round, the worst overall result was detected. On average, as shown at the bottom of Table 6, the red layer is the one that presents the least similarity according to this metric, while the blue layer shows the greatest similarity.
In Table 7, it can be seen that the best possible PSNR result is obtained when it tends to infinity, meaning that both compared images are the same. The worst result in the recovery process was presented in test 6, round 3, in the blue layer; in the same test and round, the worst overall result was obtained.
From the histograms of each layer of the images, it was identified that, in the recovered images where a distortion is present, this is due to the fact that the cover image has two characteristics that limit its random behavior, reducing the areas where the information can be hidden without causing distortions: the spectrum of the cover image is very small, and the distribution of its energy is mostly concentrated in only one area. In the cases where the cover images are distorted, it was identified that the images to be hidden present a very broad spectrum with an inclination of the energy after an abrupt change in its distribution.
The histograms of the cover images and stego-images are shown in Figure 3; for each pair of images, the corresponding histogram was plotted for each layer that makes them up, each one shown with the color of its layer. The histograms corresponding to the cover images are shown in a light tone, while the histograms of the stego-images are shown in a darker tone. When plotted in an overlapping way, it is possible to identify the changes in energy that may occur between the images; as a sign of the imperceptibility of the proposed algorithm, changes in intensity are not perceptible in most cases. The most noticeable alterations were detected in tests 4 (d), 6 (f), and 8 (h); in these cases, a visual distortion was perceived between the cover image and the stego-image.
The histograms of the images to be hidden and the recovered images are shown in Figure 4; as for the cover images and stego-images, the histograms of each layer that makes up the images were plotted. The light-tone histograms correspond to the original images, while the dark-tone histograms correspond to the recovered images. The comparison of the histograms of these pairs of images allows us to evaluate the integrity of the recovery process. The analysis of the obtained histograms reaffirmed the results obtained by means of the previously mentioned metrics, such as correlation, MSE, and PSNR. Among the results obtained, the histogram corresponding to test 6 (f) is the one in which the greatest number of discrepancies between the original image and the recovered image can be distinguished.
Lastly, the metric that confirmed how the distribution of the obtained images was affected in relation to the originals, the standard deviation, was calculated from the information in the histograms. Table 8 shows the comparison of the standard deviation between the cover image and the stego-image; Table 9 shows the comparison of the standard deviation between the recovered image and the one to be hidden.
The results in Table 8 allow us to identify the difference between the standard deviation of the cover image and that of the stego-image. The smaller this difference, the smaller the change in the information contained in each image. The smallest difference was detected in test 2, round 3, in the red layer; in the same test and round, the best overall result was detected. The greatest difference was detected in test 6, round 4, in the blue layer; likewise, the greatest overall difference was detected in the same test and round.
The results shown in Table 9 allow us to observe that, in 75% of the cases, no quantitative change in the distribution of information between the recovered image and the original image was presented. The greatest difference was detected in test 6, round 2, in the green layer; in the same test and round, the greatest overall difference between the images was detected.
Through the standard deviation, the structural change suffered by the obtained images was identified. The value distribution was affected, as indicated by the histogram, which is reflected in the visual distortion, even though the structure is preserved. Changes in the energy distribution provoke the perception of ghost images, as shown in Figure 5.
The recovered images, as well as the cover images, were subjected to image quality metrics to check the efficiency of the proposed algorithm in the recovery process. Figure 6 shows the recovered images together with the metrics corresponding to each one; with these metrics, it can be perceived that the quality of the images is maintained in 87.5% of the cases.
Figure 7 shows a comparison of the cover images and the obtained stego-images. As can be noticed, in most cases no alteration in the image can be visually appreciated, so it cannot be detected that they contain hidden information.
Figure 8 shows the comparison between the recovered images and the original images to be hidden. In contrast to the concealment process, during the retrieval process the case of test 6 presented a noticeable visual distortion, while in the rest of the cases the distortions, if any, were imperceptible.
In addition to the validation tests carried out, an insertion capacity test was performed. During this test, the amount of information inserted in the cover image was measured; this information corresponds to the values of each pixel of each layer of the image to be hidden, as well as the values of the key and the map employed. Equation (27) describes the measurement of the insertion capacity:

C = D * L, (27)

where C corresponds to the information that has been inserted, D corresponds to the amount of data to be inserted (pixels, key, and map), and L is the number of layers that make up the image; in this case, it is equal to 3 for the RGB color space. Table 10 shows the results of this test.
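Assuming Eq. (27) takes the simple form C = D * L (a reconstruction from the surrounding definitions, not confirmed by the source), the capacity test reduces to a ratio against the cover-image size; all names below are ours:

```python
def insertion_capacity(data_per_layer, layers=3):
    """Total inserted information, assuming Eq. (27) is C = D * L:
    D is the data inserted per layer (pixel, key and map values) and
    L the number of layers (3 for the RGB color space)."""
    return data_per_layer * layers

def capacity_ratio(inserted, cover_size):
    """Fraction of the cover image occupied by the hidden information."""
    return inserted / cover_size
```

For example, 101 values per layer inserted into a 10,000-value cover would occupy 3.03% of it, the smallest figure reported in the tests.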
With this test, it is verified that the proposed algorithm inserts the information in a small fraction of the cover image; on average, the information of the image to be hidden occupies 5.01% of the cover image. The smallest result identified is 3.03%, in tests 1 and 2; in the test with the best results, 3.51% was obtained, and in the worst case, 6.25%, which indicates that, despite obtaining efficient quantitative measurements, qualitative errors can appear, as shown in the previous statements.
[14] proposed an adaptation of the algorithm based on variance estimation (VFES), presenting a result with PSNR = 34.3736 dB, a similarity index of 0.9969, and an insertion capacity of 165.336 Kb. That work does not show results of attacks made on the host images. Sidhik et al. (2015) [15] proposed a steganographic algorithm based on the wavelet fusion technique, which has PSNR = 37.45 dB, does not report a similarity index, and shows an insertion capacity of 600x600 bits. That work does not show results of attacks made on the host images.
Nazari S. et al. (2015) [16] proposed a steganographic algorithm which maps the cover image into a morphological representation containing morphological coefficients and then inserts the bits of the secret message by applying a permutation and a coding matrix. The results obtained are PSNR = 49.38 dB; it does not report a similarity index, and it obtains an insertion capacity of 4096 bits. That work does not show results of attacks made on the host images. In contrast, the present work uses a threshold based on the fourth wavelet moment, obtained from first-order statistics. The information to be inserted is mapped into the submatrices obtained in the wavelet domain in such a way that the total recovery of the hidden information is guaranteed, together with the recovery criteria described in this work.
The main difference of this work lies in the adaptation of the nonlocal maximum likelihood filter in the wavelet domain and its noise detection sensitivity in images, which allows selecting optimal zones for the insertion of information. By preserving the conditions of maximum adaptability to the medium, the PSNR visual quality measure is guaranteed; this measure indicates the ratio of the power of the image information against the noise power (everything that does not belong to the original image). In the present work, the information-to-noise ratio PSNR is 56.1082 dB for the Splash test image.

Steganalysis.
The reverse process of hiding information is known as steganalysis. Steganalysis is a technique that allows the identification and detection of information that is not coherent within the context in which it is found.
With the application of steganalysis to digital images, increasingly advanced steganographic algorithms have also been developed, making detection-focused algorithms a complicated task, especially since current steganographic algorithms focus on hiding information in noise.
Even when the images generated after the insertion of the information appear to be of good visual quality, so that the changes made are not identifiable at first sight, the insertion can affect the statistical behavior of the image, as well as its behavior in its different frequency decompositions.
Steganographic algorithms can pass through different types of communication channels, and these channels can be under different types of surveillance: (i) passive, in which the information sent through the communication channel is not reviewed; (ii) active, in which the communication channel is continuously under review; and, finally, (iii) the passive-active mix, in which the channel may or may not be monitored.

15: end for
16: Generate recovered image: the resulting layers are combined to obtain an RGB image.
Algorithm 2: Image recovery.

Figure 2: Resulting stego-images obtained. Imperceptibility of the proposed algorithm via image quality metrics for test images.

Figure 3: Histograms per layer between the cover image and the stego-image.

Figure 4: Histograms per layer between the recovered image and the image to be hidden.

Table 2: Correlation between the cover image and the stego-image.

Table 3: Correlation between the recovered image and the image to be hidden.

Table 4: MSE between the cover and the stego-image.

Table 5: MSE between the recovered image and the image to be hidden.

Table 6: PSNR between the cover image and the stego-image.

Table 7: PSNR between the recovered image and the image to be hidden.

Table 11
0.9962 and the maximum capacity for the secret message is 4.3204e03 Kb. This paper shows the analysis of steganalysis through the IQM method, with the best result of 1/9 images detected as a possible carrier of information. Carvajal et al. (2014)

Table 8: Standard deviation between the cover image and the stego-image.

Table 9: Standard deviation between the recovered image and the image to be hidden.