Secure MRI Brain Image Transmission Using IoT Devices Based on a Hybrid Autoencoder and Restricted Boltzmann Approach

on autoencoders and restricted Boltzmann machines (RBMs) and (ii) implementation of the WSN sensor nodes with Raspberry Pi and the Message Queuing Telemetry Transport (MQTT) Internet of Things (IoT) protocol for secure transmission of the medical images. The experimental results are evaluated using standard performance metrics such as peak signal-to-noise ratio (PSNR), and a register-transfer-level (RTL) implementation of the design is presented. The proposed model showed a 10 dB to 15 dB improvement in the PSNR value while transmitting the medical images, which is better than the existing model.


Introduction
Magnetic resonance imaging (MRI) and computerized tomography (CT) medical imaging create digital images of the human body. Because these imaging techniques generate enormous quantities of data, compression is needed for storage and transmission [1]. Most current compression schemes achieve a high compression rate at the expense of significant quality loss, yet some areas of medicine require that image quality be maintained at least in the area of interest, i.e., in the medically relevant regions. A standard 12-bit medical X-ray has dimensions of 2048 by 2560 pixels, which equates to a file size of 10,485,760 bytes. A standard 16-bit mammogram image can be 4500 by 4500 pixels, with a file size of 40,500,000 bytes (40 MB) [2]. This has implications for disc storage as well as image transmission time. Although disc capacity has steadily increased, the amount of digital imagery produced by hospitals and their new filmless radiology departments has increased even more rapidly. Even with unlimited storage, the problem of transmitting the images would remain.
Most hospitals have remote clinics or satellite centers in small towns and remote areas to make it easier for patients who have difficulty travelling long distances to the clinic, particularly for diagnostic procedures. These facilities use applications that enable clinic staff to work without the presence of a radiologist. A clinic technician or basic radiologist takes the X-ray and sends it to the hospital via a network connection, where a diagnostic radiologist reads it and sends back a diagnosis. Although this may seem reasonable, keep in mind that the patient is often asked to remain in the imaging machine until the radiologist certifies that the data are sufficient. Compression therefore affects not only storage costs but also transmission times, MRI apparatus utilization, and patient safety and security [3].
Compression techniques can help improve overall treatment by reducing file size and transmission time. Image compression techniques exploit the redundancy that occurs in images. Different forms of redundancy exist, and each compression technique can take advantage of one of them: spatial, temporal, and spectral redundancy are the three forms [4]. Deep learning is a branch of machine learning based on neural networks that process data and mimic the thought process by layering algorithms. Deep learning uses a deep architecture consisting of several layers of transformations to simulate the functioning of the human brain [5], which is close to how the human brain processes knowledge. Traditional machine learning techniques performed poorly when faced with high-dimensional data, necessitating a preliminary feature extraction process to extract the most informative representation of the raw data. Deep learning eliminated the difficult task of handcrafted feature extraction without sacrificing the meaning of the data. An autoencoder is a form of unsupervised artificial neural network that learns an efficient representation of the data. The goal of an autoencoder is to learn a representation, called an encoding, that reduces the dimensionality of the input through training of the network. We have considered both the autoencoder [6] and the RBM [7] as compression techniques for medical images in the belief that they can select the most discriminative features in an image efficiently, leading to a smaller footprint of the image without losing quality.
A WSN [8] is a communication network that does not rely on wires or other physical links between nodes. WSNs are used in industries such as manufacturing, forest fire detection, transportation, construction, and office space surveillance and monitoring. WSNs are usually deployed in inaccessible locations where battery replacement is nearly impossible. The WSN's key challenges include limited power, limited processing ability, and an open environment. We chose to implement the system design for wireless sensor networks with the Raspberry Pi (RPi) [9], a small-footprint hardware device with wireless and Bluetooth capabilities for communication with other similar Raspberry Pi units, which can act as sensor nodes to which the compressed medical images can be transmitted. The platform comprises four WMSN nodes built with the Raspberry Pi (RPi) [9]. We introduce a technique in which every source node keeps its data transmission rate uniform and periodically updates its sending rate based on the other nodes' congestion levels. Each node is identified by a unique IP address so that it can create a routing path to the sink using multihop communication. We also propose the MQTT [10] protocol for the transmission mechanism, so that lost packets can be stored and retransmitted when the network bottleneck has cleared. The suggested platform is adaptable, accessible, and appropriate for wireless monitoring of constructions, open spaces, remote locations, etc., as depicted in Figure 1.
Generally, processing information consumes far less power than transmitting it over a wireless channel. Hence, compressing the images before transmission considerably reduces the net energy consumed across the sensor nodes. It is also feasible to sustain a high compression ratio without evident quality degradation in the reconstructed images [11]. The image compression schemes proposed and developed in this work aim for simplicity of coding, minimal memory demand, low computational cost, and a high compression rate. The image compression process in a WSN is shown in Figure 2.
The major contributions of this research are given below:

(1) A medical image compression algorithm based on autoencoders and restricted Boltzmann machines is developed. An effective compression algorithm consumes less storage space while preserving the physical dimensions of the image

(2) WSN sensor nodes with Raspberry Pi and the MQTT IoT protocol are implemented for secure transmission of the medical images, and the effectiveness of the proposed model is validated using performance measures such as PSNR

This research manuscript is structured as follows: Section 2 surveys the related research literature on medical image compression and WSNs. Section 3 covers the general performance metrics used to analyze the compression standards and discusses the compression methodologies used in this work. Section 4 provides information about the WSN design with Raspberry Pi, and Section 5 presents the results achieved with the compression algorithms on the original and reconstructed images, together with the RTL schematic diagrams. Section 6 discusses the scope of the work and its implications for future research in the field.

Related Research Works
In today's healthcare systems, medical imaging is a necessary tool. With applications in tumor segmentation, cancer identification, classification, image-driven therapy, medical image description, and restoration, machine learning plays a critical role in CADx. Since redundant information is discarded during compression, lossy techniques cannot restore the original image exactly from the reconstructed image, whereas a lossless procedure recreates the actual image precisely from the compressed data.
Information can be compressed using transform-based coding strategies like DCT [12], DWT [13], SVD [14], and PCA [15], and wavelet-based compression techniques like EZW [16], SPIHT [17], WDR [18], and JPEG2000 [19]. Walker et al. [20] used PCA and neural network algorithms to compress satellite image data. Computing the covariance matrix, eigenvalues, and eigenvectors of an input image with the PCA method is extremely demanding, and the compression result of PCA alone is not considered adequate; an ANN stage, which produces better results, boosts the outcome.
Gaidhane et al. [21] found that image compression using the wavelet transform produces the best results compared to DCT for ultrasound and angio images; the DCT process was plagued by blocking artifacts. Puniene et al. [22] used DCT- and SPIHT-based compression techniques to compress medical images: DCT is used to decompose the medical image, and SPIHT is then used to compress the coefficients. Antonini et al. [14] propose a lossy image compression method based on singular value decomposition (SVD) and wavelet difference reduction (WDR). Image quality is better with SVD compression, but the compression ratio is poor; as a result, the SVD output was compressed once more using WDR. At high compression ratios, WDR achieves excellent image quality.
Angadi and Somkuwar [23] use a combination of SVD and the embedded zero tree wavelet (EZW) approach to compress ECG signals. The SVD method, followed by the EZW method, has been tested and proven to improve the quality of the reconstructed signal. Kumar et al. [24] used DCT and DWT techniques to compress images in wireless sensor networks. Compared with the discrete cosine transform, the discrete wavelet transform has a higher PSNR value and a faster compression process. Each node in a sensor network has extremely limited resources, such as memory, power, and processing capacity. To address these limitations, image compression algorithms relying on DCT and DWT are used to reduce memory and storage usage. They evaluated the results of the DCT and DWT procedures using several performance metrics, and the results showed that the discrete wavelet transform outperforms the discrete cosine transform in terms of PSNR.
Artificial intelligence approaches, specifically in the context of computer vision, imaging, voice recognition, and natural language comprehension [25,26], are one direction that can help resolve the limitations of traditional image compression standards. Deep learning took over the difficult task of handcrafted feature extraction without sacrificing the context of the data. As a result, deep learning for feature extraction and classification of medical images across a variety of diseases has gained a reputation for exceeding expectations and producing more than satisfactory results.
After training two stacked denoising autoencoders to obtain a reduced version of the data dimension, Xing et al. [27] used a neural network to classify the data. They identified the areas of the brain that distinguish ASD from typical controls (TC) with approximately 70% accuracy across the entire dataset. Heinsfeld et al. [28] reduced the multivariate data using a variational autoencoder (VAE) model and discovered the most discriminative features. Choi [29] fed spatiotemporal information in the fMRI to a 3D convolutional neural network (CNN) to detect spatially useful features. The researchers then devised a voting system based on these characteristics to decide whether or not each subject has ASD.
In [30], the authors proposed a novel DNN-based feature selection method for fMRI images and then used it to obtain whole-brain functional communication patterns using multiple trained sparse autoencoders. They developed a DNN-FS classification model that had an accuracy of 86.36% on a sample of 55 ASD and 55 TC subjects.
WSN technology can be used to develop realistic healthcare WSNs that meet the main system design requirements of secure connectivity, node mobility, multicast technology, energy efficiency, and timely data delivery. Long data transmission routes, vast volumes of data, and limited battery power all reduce WSN lifetime, so optimizing energy consumption to extend the lifespan of the network is important. To obtain more energy optimization, Guo et al. [31] proposed an energy-efficient clustering hierarchy protocol for WSNs. Mann and Singh [32] presented a routing protocol for WSNs established to preserve a reasonable level of scalability, energy efficiency, and reliability. A fuzzy-logic- and genetic-algorithm-based clustering approach to optimize network energy was defined in the paper by Sim and Lee [33]. Saeedian et al. [34] employ a digital-signal-processing-based wireless sensor network platform to achieve high compression efficiency of physiological data for telemedicine applications.
With the advancement of rapid hardware prototyping technology and the advent of microcontroller-based boards such as Arduino, ESP32, and Raspberry Pi, WSNs with a smaller footprint, better energy efficiency, and low cost are a reality, enabling researchers to experiment with them. Hsu et al. [35] created a new model for saving, exchanging, and archiving patient health records using a Raspberry Pi board and a hard drive. The drive is accessible in the local cloud and can be shared with more distant public clouds, such as Google Drive, Azure Cloud, and other similar services. They also safeguarded the medical details by introducing a new security protocol.
Elhoseny et al. [36] used a fast bilateral filter for noise removal in medical images because of its better edge preservation ability. Image segmentation is then carried out using the Canny edge detector. The developed fast bilateral filter algorithm is implemented on a Raspberry Pi using the OpenCV software. Kumar and Gupta [37] implemented a fast and secure encryption technique for medical images based on a one-dimensional logistic map associated with pseudorandom numbers; the proposed technique is validated on standard medical datasets under conditions of noise and differential attacks. Ahmed and Salah [38] applied a fast subpixel registration technique to achieve high-resolution image registration on the basis of the discrete wavelet transform and a convolutional neural network; the classification result of the convolutional neural network and a genetic algorithm is used for MRI image registration.

Image Compression and Performance Metrics

Image Compression. An image comprises pixels that are highly correlated with one another. Consequently, it holds a sizable amount of redundancy that occupies substantial memory for image storage and, in turn, reduces the available transmission bandwidth. These redundancies can be grouped into two classes: (1) spatial and (2) temporal. In the spatial class, nearby pixels are correlated, whereas temporal redundancy refers to similarities between two subsequent frames. Hence, to eliminate the redundancies, image compression has to be performed to reduce the storage requirement and bandwidth [14,39,40]. Image compression can further be categorized into two types: lossy and lossless. Lossy image compression is commonly applied in WMSNs owing to its advantages over lossless compression standards, namely, higher compression rates, which reduce the bytes that need to be transmitted over the WMSN and the power consumed in transmitting the images.
Additionally, lossy image compression has the advantage of taking less time for encoding/decoding the transmitted image than lossless compression. This paper focuses on lossy compression algorithms because of our need to increase the bandwidth availability in the sensor nodes for transmission without congestion, which can subsequently reduce the transmission delay across the network and provide a more streamlined transfer suitable for live image or video feeds [41].

Performance Metrics.
We have studied the compression ratio and peak signal-to-noise ratio (PSNR) readings to analyze the performance of the compression algorithms considered here. We have decided not to consider the processing time, as it would not be fair to compare the simulation-based experimental setup against the Raspberry Pi-based hardware implementation. The mathematics behind the performance metrics used to evaluate the compression schemes is presented below for a better understanding of the concepts. The mean square error (MSE) is given by

MSE = (1/N) ∑_{n=1}^{N} (x_n − y_n)²,  (1)

where N is the data sequence length, x_n represents the input data sequence, and y_n corresponds to the reconstructed data sequence.

Peak Signal to Noise Ratio (PSNR).
The quality of signal representation is influenced by the ratio between the maximum feasible signal value and the distortion, which can be used to compare compressed and original image quality: the higher the PSNR, the better the quality of the compressed or reconstructed image [42]. PSNR is defined in terms of the error relative to the signal's peak value x_peak (for 8-bit pixels, x_peak = 255) and is calculated using equation (2):

PSNR = 10 log₁₀ (x²_peak / MSE).  (2)

Compression Ratio (CR).
The compression ratio (CR) is the ratio of the binary sequence length of the original uncompressed input image (B0) to the binary sequence length of the compressed output image (B1),

CR = B0 / B1,  (3)

and is generally used to measure compression efficiency: a higher value means better compression [42].
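The three metrics above can be computed directly from the pixel data and bit counts. The following Python sketch implements equations (1)-(3); the toy pixel values are purely illustrative and not drawn from the experiment.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two equally sized images (equation (1))."""
    x = original.astype(np.float64)
    y = reconstructed.astype(np.float64)
    return np.mean((x - y) ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB (equation (2)); peak=255 for 8-bit pixels."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)

def compression_ratio(original_bits, compressed_bits):
    """CR = B0 / B1 (equation (3)); a higher value means better compression."""
    return original_bits / compressed_bits

# toy 8-bit "images" differing by one gray level per pixel (MSE = 1)
x = np.array([[100, 110], [120, 130]], dtype=np.uint8)
y = np.array([[101, 109], [121, 129]], dtype=np.uint8)
print(psnr(x, y))  # ≈ 48.13 dB
```

Note that PSNR is undefined (infinite) for a perfect reconstruction, which is why the zero-error case is handled separately.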

Power Consumption.
For all practical implementations of a WSN, power consumption must be considered the most important metric; it is strongly influenced by the metrics discussed above. The transmission power and the energy dissipated across the nodes can be greatly reduced by adopting less complex processing units and by minimizing the data size.
To summarize, a low MSE value directly implies a small transmission error; since PSNR is inversely related to MSE, the PSNR increases proportionately, indicating that the noise in the compressed image is lower, which aids better reconstruction of the image.

Image Compression Methodologies
Autoencoders. An autoencoder is a classic example of an unsupervised neural network learning algorithm; it is depicted graphically in Figure 3. It is commonly trained with backpropagation, with the target values set equal to the inputs so that y(i) = x(i). It makes use of convergent and divergent layers in which convolution and deconvolution take place, and the features are compressed in the middle layers to generate the desired output.
An autoencoder can be characterized mathematically as follows to describe the encoder and the decoder sections of the model [6], as given in equations (4)-(6):

ϕ : X → F,  (4)

ψ : F → X,  (5)

ϕ, ψ = argmin_{ϕ,ψ} ‖X − (ψ ∘ ϕ)X‖².  (6)

As can be seen from equations (4)-(6), the encoder ϕ and decoder ψ parameters are optimized so as to minimize the reconstruction error, i.e., the error between the original input image and the reconstructed compressed image.
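The optimization in equations (4)-(6) can be sketched with a minimal single-hidden-layer autoencoder trained by gradient descent on the reconstruction error. This NumPy sketch is illustrative only: the toy data, layer sizes, and learning rate are our own choices and not the convolutional network used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy data: 200 samples of 16-dimensional "patches" in [0, 1]
X = rng.random((200, 16))

n_in, n_hidden = 16, 4                       # bottleneck compresses 16 -> 4
W1 = rng.normal(0, 0.1, (n_in, n_hidden))    # encoder phi
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in))    # decoder psi
b2 = np.zeros(n_in)
lr = 0.5                                     # illustrative learning rate

for epoch in range(2000):
    H = sigmoid(X @ W1 + b1)                 # code F = phi(X)
    Y = H @ W2 + b2                          # reconstruction (psi . phi)(X)
    err = Y - X                              # drives the squared-error loss down
    # backpropagate the reconstruction error through decoder and encoder
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = err @ W2.T * H * (1 - H)            # sigmoid derivative
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

final_mse = np.mean((X - (sigmoid(X @ W1 + b1) @ W2 + b2)) ** 2)
```

After training, `final_mse` drops well below the initial reconstruction error, which is exactly the argmin objective of equation (6) approximated numerically.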
The parameter settings of the convolutional autoencoder are given below [12]:

Number of batches: the original image data is divided into a training set, which is further divided into many batches to perform the stochastic gradient descent optimization of the model. In this experimentation, the batch size is set to 100

Learning rate: a hyperparameter that controls how much the model weights are adjusted at each iteration in response to the estimated error

Restricted Boltzmann Machine (RBM). Boltzmann machines, like any other neural network, consist of an input layer and several hidden layers. The neurons make stochastic decisions [43], such as when to turn on, depending on the data fed in during the training process and on the minimization of the cost function. Upon training, the Boltzmann machine learns to deduce interesting features from the dataset on which it is trained, which helps the model capture the complex fundamental relationships and patterns inherent in the data [22].
From Figure 4, it can be seen that the weight w_ij ∈ W connects the visible unit V to the hidden unit h, where W ∈ ℝ^{m×n} is the set of all the weights between the visible and hidden units. The biases of the visible units V are denoted b_i ∈ b, while the biases of the hidden units h are represented as c_j ∈ c.
We can assume that the joint distribution of a visible layer vector V and a hidden layer vector h is proportional to the exponential of the negative energy of the configuration, as shown in equation (7), based on [24] and the Boltzmann distribution from statistical physics:

p(V, h) = (1/Z) exp(−E(V, h)),  (7)

where E(V, h) is the energy of the configuration and Z is the normalizing partition function. Before designing the RBM model using the deep learning approach, we can reset the parameters or leave them at defaults that are reasonable for most image processing applications. The parameter settings of the restricted Boltzmann machine [43] are discussed below:

Random initialization iterations: initially, a set of random trial weights is considered for each layer of the network as a good starting point for training the model. Here, stochastic gradient descent training is applied, and the number of iterations is set to 2000

Number of batches: the training set is divided into batches to reduce the processing time and hardware memory requirements, where the batch size is set to 100

Learning rate: as in the case of the autoencoder, the learning rate should be small enough for stable training while still allowing fast generation of the learning model; the learning rate is set to 0.1

Max epochs: the maximum number of epochs is selected by trial and error and adjusted, together with the learning rate, according to whether the model is overfitting or underfitting. The maximum number of epochs is set to 100 but is changed dynamically based on the above scenarios
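RBM training driven by the distribution of equation (7) is commonly approximated with one-step contrastive divergence (CD-1). The sketch below uses toy binary data and our own illustrative hyperparameters (only the learning rate of 0.1 matches the text); it is not the network trained on the MRI data.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy binary data: two 12-bit prototypes corrupted by 5% bit-flip noise
proto = np.array([[1] * 6 + [0] * 6, [0] * 6 + [1] * 6], dtype=float)
V = proto[rng.integers(0, 2, size=200)]
V = np.abs(V - (rng.random(V.shape) < 0.05))

n_vis, n_hid = 12, 6
W = rng.normal(0, 0.1, (n_vis, n_hid))  # weights w_ij
b = np.zeros(n_vis)                     # visible biases b_i
c = np.zeros(n_hid)                     # hidden biases c_j
lr = 0.1                                # learning rate, as in the text

for epoch in range(1000):
    # positive phase: hidden activation probabilities p(h_j = 1 | v)
    ph = sigmoid(V @ W + c)
    h = (rng.random(ph.shape) < ph).astype(float)   # sample h ~ p(h|v)
    # negative phase (CD-1): one Gibbs step back to v, then to h
    pv = sigmoid(h @ W.T + b)
    ph2 = sigmoid(pv @ W + c)
    # contrastive-divergence parameter updates
    W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
    b += lr * (V - pv).mean(axis=0)
    c += lr * (ph - ph2).mean(axis=0)

# mean-field reconstruction error after training
recon = sigmoid(sigmoid(V @ W + c) @ W.T + b)
recon_err = np.mean((V - recon) ** 2)
```

Because the hidden units learn the two-prototype structure, the reconstruction error falls well below what the visible biases alone could achieve, mirroring the feature-deduction behavior described above.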

Raspberry Pi WSN Implementation
The hardware implementation was done using Raspberry Pi, and the MQTT (Message Queuing Telemetry Transport) protocol was selected as the de facto method to transfer the compressed images across the Raspberry Pi WMSN [9]. MQTT is an apt choice for wireless networks where high latency is an issue due to bandwidth constraints and unpredictable network downtime. If the connection breaks while a subscribing client is retrieving a transmitted image from the broker, the broker is able to buffer the lost messages and deliver them to the subscriber again once the network comes back online. Similarly, if a publisher node loses its connection to the broker, the broker can, before closing the connection, forward to all subscribed nodes in the network the cached messages it received earlier from the publisher.
This process is explained clearly in Figure 5 with a block diagram. Here, we have used four Raspberry Pi nodes, which can behave as publisher, broker, and subscriber. A publisher, by design, can behave as both a publisher and a subscriber, so every Raspberry Pi node can transmit and receive images except the broker, which only facilitates the communication between the subscriber and the publisher. Any Raspberry Pi node can initiate a transmission across the network.
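Transferring a compressed image over MQTT in practice means splitting it into payloads that survive reordering and broker retransmission. The sketch below is our own illustrative design, not the paper's implementation: the chunk size, topic name, and JSON envelope are assumptions, and the `paho-mqtt` usage is shown only in comments.

```python
import base64
import hashlib
import json

CHUNK = 4096  # bytes of image data per MQTT message; an illustrative choice

def make_chunks(image_bytes, image_id):
    """Split a compressed image into ordered MQTT payloads with a checksum."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    parts = [image_bytes[i:i + CHUNK] for i in range(0, len(image_bytes), CHUNK)]
    for seq, part in enumerate(parts):
        yield json.dumps({
            "id": image_id,
            "seq": seq,
            "total": len(parts),
            "sha256": digest,
            "data": base64.b64encode(part).decode("ascii"),
        })

def reassemble(payloads):
    """Rebuild the image from payloads that may arrive out of order or be
    retransmitted; duplicates collapse by sequence number."""
    msgs = {m["seq"]: m for m in (json.loads(p) for p in payloads)}
    total = next(iter(msgs.values()))["total"]
    blob = b"".join(base64.b64decode(msgs[s]["data"]) for s in range(total))
    if hashlib.sha256(blob).hexdigest() != msgs[0]["sha256"]:
        raise ValueError("corrupt transfer")
    return blob

# With the paho-mqtt client (not executed here), a publisher node would send
# each chunk with QoS 1 so the broker redelivers messages after a dropped
# connection, matching the buffering behavior described above, e.g.:
#   for payload in make_chunks(open("scan.jpg", "rb").read(), "mri-001"):
#       client.publish("wsn/mri/node1", payload, qos=1)
```

The checksum lets the subscriber detect a corrupt or incomplete transfer and request retransmission instead of silently accepting a damaged image.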

Results and Discussion
We have used different approaches in evaluating the results by analyzing the performance metrics of the image compression and the WSN using both simulation and hardware implementations. The simulation analysis was done on a system configured with 128 GB of random access memory, a 4 TB hard disk, the Windows 10 operating system, and an Intel Core i9 processor. The hardware implementation used the Raspberry Pi WMSN with the MQTT protocol, as described in Section 4.
Our experimental process is concerned with analyzing the performance of the neural network-based image compression schemes by comparing their PSNR values when the images are transmitted through the WMSN. We have used the MRI images from the brain tumor dataset gathered from [44] for our experiment for its simplicity and availability of large number of images.
Test images included T1-weighted MR images with a repetition time (TR) of 1740 and an echo time (TE) of 20, T2-weighted MR images with a TR of 5850 and a TE of 130, and FLAIR-weighted MR images with a TR of 8500 and a TE of 130.
A 3 Tesla Siemens Magnetom Spectra MR scanner was used to create these test images. The total number of slices for all channels was 15, resulting in 135 images at 9 slices or images per patient, with a field of view of 200 mm, a 1 mm interslice distance, and voxel sizes of 0.78 mm × 0.78 mm × 0.5 mm. The proposed technique is tested on a real dataset that includes 512 × 512 pixel brain MR images, which were converted to grayscale before processing with the autoencoder/RBM. The autoencoder/RBM compression results for sample test MR images with and without tumor are given in Figures 6-9. The dataset consisted of both images with detected tumors and images of healthy tissue without tumor. We adopted this approach to analyze the performance of the image compression algorithms on MR images both with and without tumor and to verify whether the tumor tissues remain visible after compression.
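The grayscale conversion step mentioned above can be sketched as follows. The paper does not state which conversion it used; the ITU-R BT.601 luminance weights in this sketch are a common convention and an assumption on our part.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an HxWx3 uint8 RGB image to 8-bit grayscale using the
    ITU-R BT.601 luminance weights (0.299, 0.587, 0.114)."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb.astype(np.float64) @ weights   # weighted sum over the channel axis
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)

# a 2x2 toy "image": pure red, green, blue, and white pixels
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
print(to_grayscale(rgb))  # → [[ 76 150] [ 29 255]]
```

Reducing three channels to one cuts the data volume by two-thirds before the autoencoder/RBM compression stage even runs.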
The RTL schematics for the autoencoder and RBM were implemented with the Xilinx ISE development platform, version 14.1, targeting the Virtex-6 XC6VLX75T device, and the device utilization summary is given in Tables 1-4. The RTL schematics are shown in Figures 10 and 11. Comparing the RTL design of the RBM with that of the autoencoder, we can easily infer that the RBM utilizes far fewer flip-flops for the same image compression, which attests to its superior performance compared with the autoencoder. It also outperforms other factorization-based image compression methods by a significant factor.

Discussion
The proposed model's performance is validated by comparing it with the existing model developed by Elhoseny et al. [36]. In the existing work, the authors developed a fast bilateral filter for noise removal in medical images; the fast bilateral filter has better edge preservation ability. A Canny edge detector is then used for segmenting the brain tissues. Lastly, the fast bilateral filter algorithm is implemented on a Raspberry Pi using the OpenCV software. Compared with the existing model, the proposed model showed a 10 dB to 15 dB improvement in PSNR while transmitting the medical images. In addition, the effectiveness of the MQTT security protocol is validated in terms of accuracy, precision, and recall. In our test case, the MQTT protocol achieved 95.60% accuracy, 96.90% recall, and 95.92% precision, which is better than the existing technique [38], which achieved only 90.90% precision and recall and 94% accuracy.
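Accuracy, precision, and recall all derive from confusion-matrix counts, and the sketch below shows the standard definitions. The counts used here are illustrative only; they are not the actual confusion matrix of this experiment.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total        # fraction of all decisions that are correct
    precision = tp / (tp + fp)          # fraction of flagged items that are real
    recall = tp / (tp + fn)             # fraction of real items that were flagged
    return accuracy, precision, recall

# illustrative counts only, not the experiment's measured confusion matrix
acc, prec, rec = classification_metrics(tp=47, fp=2, fn=2, tn=49)
print(acc, prec, rec)  # → 0.96 0.9591... 0.9591...
```

Reporting all three together matters because a transmission validator can trade precision against recall depending on how many false alarms the network can tolerate.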

Conclusion
This work was chiefly carried out to analyze the performance of deep learning algorithms in compressing medical MRI images and the efficiency of the Raspberry Pi WSN in transmitting the compressed images across the WSN nodes. Many situations in the medical field require streaming medical images across a WSN with great efficiency for immediate presentation and diagnosis by doctors; this calls for optimized image compression that preserves data transmission bandwidth and reduces transmission time while avoiding any considerable loss of image quality, as such loss would impair the prognosis. The deep learning neural network implementations of the autoencoder and RBM, run on Raspberry Pi with MQTT as the transmission protocol for additional security in the WSN, performed as expected, with minimal power loss and latency. The RTL schematic implementation was done for all the image compression schemes used in the paper as a means to find the device utilization in the WSN. This can serve as the foundation for further development of the work on custom FPGA boards that offer more control over the power performance of the WSN.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflict of interest.