Diagnosis of COVID-19 Using a Deep Learning Model in Various Radiology Domains



Introduction
The rapid spread of COVID-19 has motivated scientists to quickly develop countermeasures using technologies such as cognitive computing, deep learning, artificial intelligence, machine intelligence, cloud-based collaboration, and wireless communication [1].
Cognitive computing simulates human thought processes and is extensively used in fields such as finance and investment, healthcare and veterinary medicine, travel, and mobile systems [2][3][4].
The Internet of Things (IoT) is implemented on interconnected electronic devices with unique identifiers (UIDs), such as computers, smartphones, coffeemakers, washing machines, and wearable devices [5,6]. The IoT, along with cloud computing, Artificial Intelligence (AI), Machine Learning (ML), and deep learning, could be a powerful tool to combat COVID-19 [7,8], and 4th-generation (4G) and 5th-generation (5G) wireless communication technologies have the potential to revolutionize many sectors, including healthcare [9][10][11]. China has already used 5G technology to fight the COVID-19 pandemic by monitoring patients, collecting and analyzing data, and tracking viruses [1].
Most developing countries utilize wireless technologies, laboratory-based trials, and radiological investigations to recognize and diagnose COVID-19 [12]. A standard method is real-time reverse transcription polymerase chain reaction (qRT-PCR), but it can produce false-negative results, for example in asymptomatic patients, and procedural mistakes can further limit its reliability in identifying COVID-19 [13,14]. In the early stages, imaging technologies such as CT scans, MRI, and X-rays can play a vital role in detecting COVID-19 patients [15][16][17].
Radiology-based chest scanning has been employed to investigate pneumonia [18]. An artificial intelligence (AI) based tool was developed [19] to automatically detect, quantify, and monitor COVID-19 and to differentiate affected and normal patients. A deep learning-based approach [20] was developed to automatically segment the entire lung with infection sites on chest CT. Similarly, an early screening system based on deep learning can discriminate influenza (viral pneumonia) from healthy cases and COVID-19 [21]. A deep learning-based approach can extract graphical features from CT images of COVID-19 [22]. These features provide a medical analysis prior to pathogenic testing and have been claimed to save crucial time for disease investigation. However, most of these works consider just one radiology domain, such as X-ray or CT.
This work builds a deep learning approach to detect COVID-19 from various radiological input images such as X-ray, CT, and MRI. The model is a convolutional neural network (CNN) whose 27 layers include input, convolutional, max-pooling, dropout, flatten, dense, and output layers. The input layer accepts grayscale images of size 128 × 128 and uses 64 filters of size 3 × 3. The ReLU activation function is employed in the input layer and all hidden layers. Following the max-pooling layer is a dropout layer to avoid overfitting; it drops out different neurons in the hidden layer, and the percentage of neurons to drop must be specified when using the dropout function. We drop 30%. Next are two convolutional layers, both with 128 filters of size 3 × 3, then a 2 × 2 max-pooling layer and a dropout layer that drops 30% of neurons. We then add three convolutional layers with 256 filters of size 3 × 3 each, followed by a max-pooling layer with the same parameters as the previous pooling layer; again, 30% of the output neurons are dropped. We continue increasing the number of filters, adding three convolutional layers with 512 filters each, with the same filter size as previous layers. A max-pooling layer follows this stack, and 30% of the neurons are dropped. Two further stacks with the same parameters as the previous layers are then added. The rest of the paper is organized as follows. Section 2 summarizes state-of-the-art work in various radiology domains. Section 3 presents the proposed methodology. The datasets used in this research are described in Section 4. The experimental environment for the proposed approach is presented in Section 5, and the results are discussed in Section 6. Section 7 presents conclusions and directions for future work.
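The stacking pattern above can be summarized in a short sketch. The filter counts and pooling sizes come from the description; the assumption that the 3 × 3 convolutions preserve spatial size ("same" padding), so that only the 2 × 2 pooling layers halve it, is ours, since the padding is not stated:

```python
# Trace the feature-map size through the convolution/pooling stacks described
# above. Filter counts follow the text; "same" padding is an assumption, so
# only the 2 x 2 max-pooling layers change the spatial dimensions.
layers = (
    [("conv", 64)] * 2 + [("pool",), ("dropout",)]
    + [("conv", 128)] * 2 + [("pool",), ("dropout",)]
    + [("conv", 256)] * 3 + [("pool",), ("dropout",)]
    + [("conv", 512)] * 3 + [("pool",), ("dropout",)]
)

size, channels = 128, 1      # 128 x 128 grayscale input
for layer in layers:
    if layer[0] == "conv":
        channels = layer[1]  # 3 x 3 filters; spatial size preserved (assumed padding)
    elif layer[0] == "pool":
        size //= 2           # 2 x 2 max pooling with stride 2 halves each dimension

print(size, channels)        # 8 512 after the four stacks enumerated here
```

Each of the four pooling layers halves 128 to 64, 32, 16, and finally 8, while the filter count grows from 64 to 512.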

Related Works
Various radiology techniques (e.g., X-ray, CT, and MRI) have been utilized as imaging modalities in the diagnosis of COVID-19, and prior research has proposed identifying COVID-19 from these different radiology modalities, each with its own limitations.
An early-stage screening model [23] could differentiate COVID-19 patients from normal humans by employing deep learning techniques on pulmonary CT images, with 86.7% accuracy on 618 CT samples. However, the segmentation step applied before feeding the images to the learning model could lose some important features and cause misclassification, and only a limited number of radiology images were employed in the experiments. An automatic deep CNN system [18] was based on pretrained models applied to chest X-ray images. This heuristic model utilized limited X-ray images in a controlled domain.
An integrated technique based on an artificial neural network and convolutional CapsNet (capsule networks) [24] was developed to identify COVID-19 from chest X-ray images. The performance was assessed with binary and multiclass classifications (infected, normal, and pneumonia), indicating a 97% recognition rate on binary classification and 84% on multiclass classification. There was no rule to determine the structure of the artificial neural network, which had no specific scheme to define the arrangement of neurons; this could only be achieved by experience or trial and error [25]. When training of the neural network was completed, the network was reduced to a specific error value on the image samples; hence, it provided no optimum outcome [25].
Some recent systems [18,[26][27][28] have utilized deep learning and artificial intelligence to identify COVID-19, but only on X-ray images. Similarly, a commercial platform was used to classify infected patients and normal humans [29], with limited contribution from the authors, who utilized only X-rays in experiments. Deep learning and CNN were used to classify positive patients with coronavirus and healthy patients [30]. A very small dataset of X-ray images was used, which might not be applicable in naturalistic domains.
An automated method [31] was proposed to detect COVID-19-positive patients from normal humans, employing a deep learning-based network coupled with gradient-weighted class activation mapping (Grad-CAM) for feature extraction from CT scan images. However, Grad-CAM-based methods require modification of the network architecture, which could degrade accuracy; Grad-CAM is computationally expensive [32]; and a nonstandard dataset was utilized. A deep CNN, called decompose, transfer, and compose (DeTraC), was used with principal component analysis (PCA) as a feature dimension reduction method to identify coronavirus from chest X-ray images [33]. However, PCA is problematic in the precise estimation of the covariance matrix [34]. Moreover, even modest invariances may not be captured by PCA unless the training data explicitly exhibit them [35]. A deep learning-based system to identify COVID-19 from normal humans had a recognition rate of up to 100% [36], which is not realistic, and only one radiology modality (X-ray) was utilized. Similarly, a system was proposed to classify infected, normal, and pneumonia cases with significant accuracy [37], and a deep CNN-based system was proposed to identify patients with coronavirus and normal humans [38]. However, both systems utilized limited X-ray images and a single radiology modality.

Complexity
We develop a deep learning model to accurately categorize patients infected with COVID-19 and normal humans. The model accepts radiology input images such as X-ray, CT, and MRI, through which we demonstrate its robustness. It is based on a 27-layer CNN, including input, convolutional, max-pooling, dropout, flatten, dense, and output layers, and shows significant performance on various radiology images (X-ray, CT scan, and MRI) compared to state-of-the-art methods.

Materials and Methods
We describe the proposed deep learning-based approach, whose flowchart for X-ray images is shown in Figure 1. The model is based on a Convolutional Neural Network (CNN) with 27 layers. The input layer accepts a grayscale image of size 128 × 128 and uses 64 filters of size 3 × 3. A rectified linear unit (ReLU) activation function is used in the input layer and all hidden layers, where ReLU is defined by the relation R(z) = max(0, z), as shown in Figure 2.
The ReLU activation function zeroes out negative values in the input image. The second layer is a convolutional layer with 64 filters of size 3 × 3. Next is a max-pooling layer that takes the maximum value of each patch of the feature map, with pool size and stride both 2 × 2.
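As a small illustration of these two operations, the sketch below applies ReLU and 2 × 2 max pooling (stride 2) to a toy 4 × 4 feature map with NumPy; the array values are arbitrary:

```python
import numpy as np

def relu(z):
    # ReLU keeps positive values and zeroes out negatives: R(z) = max(0, z)
    return np.maximum(0, z)

def max_pool_2x2(x):
    # 2x2 max pooling with stride 2: keep the maximum of each 2x2 patch
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.array([[1.0, -2.0, 3.0, 0.5],
                        [-1.0, 4.0, -3.0, 2.0],
                        [0.0, 1.0, -1.0, -2.0],
                        [2.0, -0.5, 0.5, 1.0]])

activated = relu(feature_map)      # negatives become 0
pooled = max_pool_2x2(activated)   # 4x4 map shrinks to 2x2
print(pooled)                      # [[4. 3.] [2. 1.]]
```

Pooling after ReLU halves each spatial dimension while keeping the strongest activation in every 2 × 2 patch.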
Next is a dropout layer to avoid overfitting by dropping out different neurons in the hidden layers. We drop 30% of the neurons to reduce overfitting; the dropout technique is shown in Figure 3. The output of the previous layer is flattened to convert a matrix to a single vector. For instance, an output shape of (1, 128, 128) is flattened to (1, 16384). Then, two dense layers with 4096 units each are added, both with a ReLU activation function. The last layer is the output layer with two neurons, one per class (COVID-19 positive and COVID-19 negative). A soft-max activation function normalizes the real-valued input vector from the previous layer to a probability distribution. The proposed approach is described in Figure 4.
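The soft-max step can be illustrated as follows; the logit values are hypothetical, not taken from the trained model:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, then normalize to probabilities
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 0.5])  # hypothetical outputs of the two-neuron output layer
probs = softmax(logits)        # probabilities for (COVID-19 positive, negative)
print(probs, probs.sum())      # the two probabilities sum to 1
```

Whichever class receives the larger probability is taken as the prediction.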

Datasets Used
We utilized the following datasets to show the efficacy of the developed approach.

X-Ray Image Dataset.
We utilized a radiology dataset with 270 X-ray images from males and females of age 20-55 years, collected from various open sources commonly used to diagnose coronavirus. During implementation, we regularly updated the dataset to incorporate the latest complex chest X-ray images. The dataset was thoroughly checked by medical experts (physicians). We did not provide metadata for patients. Images were converted to a vector with dimensions 1 × 6400 by decreasing the dimension of every input image to 80 × 80. To avoid imbalance, we utilized 135 normal patients' images and 135 COVID-19-positive images. The dataset was collected over a period of 3 months (June to August 2020).

Computed Tomography (CT) Scan Image Dataset.
The CT image dataset contained 270 chest CT images. The dataset was built from open sources commonly used to diagnose COVID-19. The dataset incorporated new complex CT scan images that were systematically checked by doctors. The images were from males and females of age 35 to 55 years. For experiments, images in this dataset were transformed to a vector with dimensions 1 × 6400 by decreasing the dimension of every input image to 80 × 80. To avoid imbalance, we utilized images of 135 normal patients and 135 images from COVID-19-positive patients. The dataset was collected over 3 months (June to August 2020).

Magnetic Resonance Imaging (MRI) Image Dataset.
Another type of radiology dataset was of MRI scans, which generate two types of images: T1-weighted images highlight (brighten) lipids and fats via the radio-frequency pulse sequence, and T2-weighted images also highlight water. Thus, the timing of the radio-frequency pulse sequence determines which tissues are highlighted. We included 270 MRI images of males and females of age 35 to 60 years; these were confirmed cases of COVID-19. We added controls of approximately similar ages and genders but without COVID-19. All images were converted to a vector of dimension 1 × 6400 by decreasing the dimension of every input image to 80 × 80. To avoid imbalance, we utilized 135 images of normal patients and 135 COVID-19-positive images. The dataset was collected over 3 months (June to August 2020).
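The preprocessing shared by all three datasets (reduce each grayscale image to 80 × 80, then flatten to a 1 × 6400 vector) can be sketched as below. Block averaging is our stand-in for the unspecified resizing method:

```python
import numpy as np

def to_feature_vector(img, out=80):
    # Downsample a grayscale image to out x out by block averaging (a stand-in
    # for whatever resizing the authors used), then flatten to 1 x (out*out),
    # i.e., the 1 x 6400 vector described above for out = 80.
    h, w = img.shape
    img = img[:h - h % out, :w - w % out]          # trim so blocks divide evenly
    bh, bw = img.shape[0] // out, img.shape[1] // out
    blocks = img.reshape(out, bh, out, bw)         # group pixels into bh x bw blocks
    return blocks.mean(axis=(1, 3)).reshape(1, out * out)

scan = np.random.rand(160, 160)  # hypothetical grayscale chest image
vec = to_feature_vector(scan)
print(vec.shape)                 # (1, 6400)
```

The same routine applies unchanged to the X-ray, CT, and MRI images.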

Experimental Setup
We performed many experiments to show the significance of the proposed model on each dataset; each dataset was divided into 70% for training and 30% for testing for all tested algorithms. The same model architecture was used for each dataset, with different hyperparameters.
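A minimal sketch of the 70%/30% split, using synthetic arrays in place of the real image vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((270, 6400))        # 135 normal + 135 positive vectors, as above
y = np.array([0] * 135 + [1] * 135)

idx = rng.permutation(len(X))      # shuffle before splitting
cut = int(0.7 * len(X))            # 70% training / 30% testing
X_train, X_test = X[idx[:cut]], X[idx[cut:]]
y_train, y_test = y[idx[:cut]], y[idx[cut:]]
print(len(X_train), len(X_test))   # 189 81
```

With 270 images per dataset, this yields 189 training and 81 testing samples.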
All experiments were performed using Python, TensorFlow, and Google Colab (for training) on an Intel Core i7-6700 (3.4 GHz) with 16 GB RAM. The experiments are described as follows: (i) The first experiment assessed the proposed model on the chest X-ray, CT scan, and MRI datasets through an average cross-validation scheme. (ii) The second experiment included a set of subexperiments performed without the developed approach on all three datasets, using logistic regression, support vector machine, random forest, k-nearest neighbor, artificial neural network, Naive Bayes, decision tree, passive aggressive, multilayer perceptron, and extra trees classifiers. (iii) The third experiment compared the proposed technique to the state of the art.
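The second experiment's baseline loop can be sketched with scikit-learn; synthetic data stands in for the real image vectors, and this subset of the listed classifiers is illustrative:

```python
# Train and score each classical baseline on the same 70/30 split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((270, 64))                 # stand-in feature vectors
y = np.array([0] * 135 + [1] * 135)       # balanced binary labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

baselines = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "extra trees": ExtraTreesClassifier(random_state=0),
}
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in baselines.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")           # test-set accuracy per baseline
```

On real image data, each classifier's accuracy would be recorded per dataset, as in Tables 2-11.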

First Experiment.
The results of the first experiment are shown in Table 1.
As can be seen from Table 1, the model achieved significant accuracy relative to the state of the art. It was trained on different datasets with different hyperparameters using the same model structure, which indicates that the structure of the model is effective. The model was also compared with different machine learning algorithms to show the difference in accuracy between deep learning and conventional machine learning algorithms.

Second Experiment.
The results of the second experiment are presented in Tables 2-11, which show that none of the existing classifiers achieved better accuracy on the three datasets. This is likely because most of the medical images are highly complex (see Table 1).

Third Experiment.
The recognition rates of the proposed model and other models are shown in Tables 12-14. As illustrated in Tables 12-14, the proposed approach achieved higher accuracy than other recent works on all three radiology datasets.
Discussion.
COVID-19 has affected millions of people around the world. Early detection of COVID-19 could help stop its spread, and one of the most effective detection approaches is screening infected patients. Deep learning plays an effective role in this detection. The model achieved significant accuracy relative to the state of the art: it was trained on different datasets with different hyperparameters using the same model structure, which indicates that the structure itself is effective. The model was also compared with different machine learning techniques to show the difference in accuracy between deep learning and conventional machine learning algorithms.

Table 12: Comparison of the proposed approach with state-of-the-art methods on the X-ray dataset (unit: %).

State of the art     Weighted average recognition rate     Standard deviation
[28]                 89.6                                  ±2.9
[30]                 83.5                                  ±3.7
[41]                 80.3                                  ±2.6
[42]                 85.4                                  ±1.2
[43]                 79.8                                  ±3.8
[44]                 89.3                                  ±2.3
[45]                 91.5                                  ±1.9
[46]                 90.5                                  ±2.7
Proposed model       94.0                                  ±3.5

Conclusions
We developed a model to efficiently detect COVID-19 across different radiology techniques and showed its robustness on X-ray, CT, and MRI datasets. We used a CNN to build the deep learning model, which provides adequate image classification. To show the performance of the proposed model, many experiments were performed on each dataset. In the first experiment, we built a model using a CNN with different layers and trained it on the first dataset, and the same model structure was used to train on the other datasets; for each dataset, we adjusted the hyperparameters to obtain a robust model. In the second experiment, we utilized different machine learning algorithms on each dataset (in the absence of the proposed model).
This demonstrated the importance and significance of the proposed model. Despite the limited number of instances in each dataset, our model had high classification accuracy for COVID-19. Finally, the classification rates of our technique were compared to those of previous work, and the developed approach presented the best performance on the various radiology datasets.
The proposed system was tested and validated in a controlled environment. In future research, we will deploy the system in real healthcare settings so that COVID-19 can be detected from real images.
Data Availability

The data used to support the findings of this study are described in the paper and are available from the corresponding author upon request.

Conflicts of Interest
The authors declare no conflicts of interest regarding the present study.