An Empirical Client Cloud Environment to Secure Data Communication with Alert Protocol

VIT Bhopal University, Bhopal, Madhya Pradesh, India
Department of Computer and Information Sciences, Himalayan School of Science & Technology, Swami Rama Himalayan University, Dehradun, India
Department of Computer Science and Engineering, Shivalik College of Engineering, Dehradun, India
School of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
Bakhtar University, Kabul, Afghanistan


Introduction
Cloud computing is a practice in which assets and other utilities are provided to clients on request [1][2][3]. This model shows the acquisition of resources with on-request administration in the cloud environment. After an asset is assigned to a client, it can be taken back to improve scalability and to address under-provisioning and over-provisioning threats in the cloud model [4][5][6][7]. To improve the overall performance of cloud computing resources, cloud clients also share memory resources along with the other resources [8,9]. This advanced cloud model uses the concepts of enlisting and associating resources, which give clients adaptability, scalability, and a pay-per-use model, as mentioned in [10][11][12]. While sharing and managing a large number of resources over the Internet, the cloud manager generally uses three cloud service models, namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) [13][14][15]. These models offer flexibility, on-demand service provisioning, fundamental resource pooling, multi-tenancy, utility pay-as-you-use billing, etc. On the other hand, these models also highlight security as a major concern [16][17][18]. Some of the well-known cloud providers are as follows: (1) The private cloud is used by Google, which uses it for delivering a wide scope of services to its customers, including maps, web analytics, content translation, e-mail access, document applications, etc. [19,20].
(2) Amazon Elastic Compute Cloud (EC2) is a web-based service that allows business subscribers to run application programs in the Amazon.com computing environment. EC2 can scale to serve virtually any workload, including vast sets of virtual machines [21].
(3) Microsoft has an online service known as Microsoft SharePoint, which allows content and business data to be moved into the cloud, and Microsoft now makes its Office applications available in the cloud [22][23][24]. (4) Customers of Salesforce run its applications in a cloud known as Force.com and Vmforce.com; this product gives developers platforms to build customized cloud services [25].
Data and resource sharing are the key concerns in cloud computing. It is widely used in all areas, including hospital management systems, universities, libraries, etc., and it has been combined with new-era technologies such as big data and the Internet of Things. The demand for and popularity of cloud computing are increasing day by day. The other side of this growth is the high security requirement for data communication in the cloud computing environment [26,27]. For better use of distributed computing, there is a need for productive handling of assets, and there ought to be mechanisms for ranking and acquiring the system safely. Due to high demand, rapidly increasing user numbers, huge volumes of shared data, and the possibility of backup and recovery, the need for security increases [28].
Cloud computing can be divided based on deployment type into four parts, namely public, private, hybrid, and community cloud. The major difference between the public and private cloud is security, as the public cloud can be more secure because of the security software from the cloud service provider [29,30]. Mixing public and private clouds based on requirements yields the hybrid cloud, and resource sharing among organizations is possible in the community cloud. Then there is the service-based category, which can be divided into three parts, namely IaaS, PaaS, and SaaS [31][32][33][34]. IaaS mainly provides infrastructure such as disks and (virtual) servers, PaaS mainly provides a platform such as databases and execution frameworks [35], and SaaS mainly provides software resources such as mailing services and application services.
In this article, an efficient secure cloud computing framework has been developed. This framework performs data grouping based on FCM [36,37], which is used for the individual and associative rankings of text data uploaded to the cloud. For decision-selection ranking of the data, the SAW method has been used. For data security, the RC6, RSA, and AES algorithms have been used collectively and individually, depending on the condition [38]. The system structure is designed and developed to provide security for the uploaded stream during communication and data processing for data sharing [39]. The major drawbacks of the existing systems are that they are inefficient in cloud data security and lack a selection procedure for matching security mechanisms to the security needs of reliable cloud communication. This situation motivates the need for data security along with clustering approaches that group the data to determine the security needed for each individual data item, which may also be helpful in effective data handling. The rest of the article is organized as follows: the literature review is detailed in Section 2. The problem discussion and the proposed work are given in Section 3. Experimental results and analysis are presented in Section 4. The conclusion and future work are given in Section 5.

Related Works
Alshammari et al. [40] discussed distributed computing in terms of distributed processing, grid computing, and virtualization; examined the safety of the distributed computing climate; and analyzed and discussed the security attacks and potential countermeasures.
Salah et al. [41] proposed an analytical model that can capture the behavior and analyze the performance of network servers. They used queueing models to implement the proposed hypo-exponential model and measure performance by incorporating features such as packet loss, delay, queue size, and system utilization.
Kandoussi et al. [42] discussed a defense system against cyber-attacks that integrates honeypots and virtual machine migration (VMM). The proposed model is effective against attacks by applying security policies and is also capable of classifying potential attacks into two categories.
Koo et al. [43] examined security concerns in distributed computing. They argued that the present security framework is not adequate for public defense data systems and proposed a security design based on distributed computing for organizing a public-protection framework.
Chalkiadakis et al. [44] proposed a system implementing practical attestation that helps service providers offer unified, continuous service between the end cloud user and the hosted application. They also reduced the latency caused by continuously sending/receiving data from remote locations through a caching system. This approach gives improved performance compared with standard TLS handshaking.
Ghaffar et al. [45] proposed a mobile model to deal with data privacy while sharing and managing data in the cloud environment. They provide flexibility for owners sharing data in the cloud by introducing a proxy re-encryption protocol. The proposed protocol resists security attacks and provides better performance in terms of computation, communication, and storage costs than other related protocols.
Zhou et al. [46] proposed an N-variant system providing secure cloud services for cloud service providers (CSPs) in order to protect cloud applications. They presented SecIngress, an API gateway framework, to overcome the challenge of upgrading SaaS applications. A two-stage timeout processing method is used to lessen service latency, together with analytic-hierarchy-process voting under a metadata mechanism.
Novković et al. [47] discussed threats, challenges, and the mitigation of data breach and repudiation risks arising for cloud service providers in a cloud environment. They focused on the inner factors of the cloud that are responsible for risk origination and provided possible solutions for confidentiality and risk mitigation.
Thandaiah Prabu et al. [48] proposed a system supporting dynamic update activity for content access and removal. They proposed keyword-based ranking to encrypt data and introduced a multi-data-handling cipher policy. In the proposed system, they used a depth-first search over a tree-based design for a fast, multiple-keyword-based ranking search.
Hassaoui et al. [49] proposed a classical model combining the discrete-time Fourier series (DTFS) with a domain generation algorithm (DGA), named DTFS-DGA, which maintains system performance even when data size transformations are observed, in order to detect DGAs in real time. The model is associated with a neural network model (NNM) and a machine learning model (MLM), which use 15 features of the NNM and MLM, and results are generated with random forest and support vector machine classification.
After studying and analyzing several research articles in the related field, we find the following major security concerns: (1) In the cloud, not all data require an equal level of protection, so data should be categorized before being stored in the cloud, and compliance requirements should be identified in case of a data breach. Tenants should ask cloud service providers (CSPs) how the data-storage life cycle and security policy are implemented, that is, whether two-way security is needed: one part from the client and the other from the cloud provider.
(2) Does the CSP offer encrypted storage for archives and backups? How strong is the implemented key-management strategy, and how strong is the identity and access management policy? Businesses should ensure that the CSP supports secure communication protocols such as SSL/TLS for browser access, or virtual-private-network-based connections for system access, to protect access to their services.
(3) The cloud manager manages all the encryption keys. How does the client ensure that access-management controls will satisfy breach-notification and data-residency requirements? (4) If keys are managed by the CSP, then businesses should require hardware-based key-management systems within a tightly defined and managed set of key-management processes. How can consistent, accepted algorithms, or proof of independent certification, be used for potentially weakened encryption?

Proposed Work
We have proposed an effective methodology based on cluster determination, with the assurance of information security for all data according to the calculated risk associated with the information. To apply and adapt a security mechanism within the process timeline, with adaptability to the framework and future protocols, the framework is divided into three parts, namely data preprocessing, clustering of data, and multiple-criteria decision-making (MCDM). The main motivation of the proposed work is to provide an efficient security protocol for user authentication and data aggregation at the time of data communication in the cloud, since the communication scenario is strongly affected by security threats in the cloud computing environment.
This proposed framework provides an empirical and algorithmic study of an effective cloud protocol for crucial data security. The objectives are as follows: (1) To provide data categorization for the purpose of data separation and aggregation in a cloud computing environment for secure communication.
(2) To apply the decision-making process to the security requirement, checking whether it can cope with the security needs for concurrent cryptosystem application and adaptation. (3) To apply data-key hybridization for the requirement of a complex security system for cloud data and to render an alert mechanism for the protection of the data through the cloud-user agreement procedure. The proposed framework works properly and has been tested under different file sets in the cloud. One such file set is attached, and to demonstrate the working and effectiveness of the framework, a comparative analysis over different parameters is given in Table 1. Results for these parameters are also shown in Figures 1-7, along with encryption and decryption delays. All three parts of the framework are important for the process, and the selection of parameters is equally important. These three steps are discussed below.

Data Preprocessing.
In the first stage, data preprocessing is applied depending on the weight allocated to the data. The weight assignment depends on the data values introduced into the cloud when transferring text data.
Ten data classifications are used for information attribution; the scheme is unbiased as it depends on the data frequency. The total range considered here is 1 to 10. It is used for grouping based on risk similarity or the associated weights. This stage provides the numerical function of the computed weights for further cycles and tasks. It can be useful for finding the ranking based on individual or associative weights through simple additive weighting (SAW). Attribute selection is shown in Figure 8; it combines the steps used in all parts of parameter selection according to the sink attributes. The preprocessed value is then input to the clustering stage. Figure 9 shows the data preprocessing methodology.
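The 1-10 weight categorization above can be sketched as follows. The exact mapping rule is not given in the text, so the min-max binning below, the function name, and the sample values are illustrative assumptions; only the 10-category scale comes from the framework.

```python
def assign_weight(value, lo, hi):
    """Map a data attribute value into one of the 10 weight
    categories (1..10) used for grouping.
    The 1-10 scale follows the framework; this min-max binning
    rule is an illustrative assumption."""
    if hi == lo:
        return 1
    # scale into [0, 1], then spread across bins 1..10
    frac = (value - lo) / (hi - lo)
    return min(10, 1 + int(frac * 10))

# hypothetical attribute values extracted from uploaded text data
values = [3.0, 7.5, 12.0, 20.0]
lo, hi = min(values), max(values)
weights = [assign_weight(v, lo, hi) for v in values]  # e.g., [1, 3, 6, 10]
```

The resulting integer weights form the 1-10 weight matrix that the clustering stage consumes.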

3.2. Clustering.
The clustering algorithm operates in the second stage. FCM is preferred in the framework because it provides simple yet computationally deep clustering and a high success rate on complex data [4,9]. FCM is applied to the uploaded cloud data after preprocessing. Data grouping is done based on the weight matrix, whose values lie in the range 1-10. Grouping of the data is performed based on the security-closeness requirement or on the weight accumulation. A completely random seed process is applied in the centroid mechanism. Algorithm 1, shown below, is the FCM algorithm. For data selection and ranking order, the SAW algorithm is applied.
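The FCM step can be sketched as a minimal pure-Python routine on 1-D weight data. The random seeding mirrors the framework's random centroid initialization; the fuzzifier m = 2, the function signature, and the toy data are our assumptions, not taken from the text.

```python
import random

def fcm(points, c=2, m=2.0, max_iter=100, tol=1e-5, seed=42):
    """Minimal fuzzy c-means on 1-D data (a sketch of Algorithm 1).
    Returns (centers, U) where U[i][j] is the membership of point i
    in cluster j; each row of U sums to 1."""
    rng = random.Random(seed)  # random seed process, as in the framework
    n = len(points)
    # random initial memberships, each row normalized to sum to 1
    U = []
    for _ in range(n):
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        U.append([u / s for u in row])
    centers = [0.0] * c
    for _ in range(max_iter):
        # update centers as fuzzy-weighted means
        for j in range(c):
            num = sum((U[i][j] ** m) * points[i] for i in range(n))
            den = sum((U[i][j] ** m) for i in range(n))
            centers[j] = num / den
        # update memberships from distances to centers
        max_shift = 0.0
        for i in range(n):
            d = [abs(points[i] - centers[j]) or 1e-12 for j in range(c)]
            for j in range(c):
                new_u = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                  for k in range(c))
                max_shift = max(max_shift, abs(new_u - U[i][j]))
                U[i][j] = new_u
        if max_shift < tol:  # termination point reached
            break
    return centers, U

centers, U = fcm([1.0, 2.0, 1.5, 8.0, 9.0, 8.5])
```

On the toy data above, the two centers converge near the two natural groups (around 1.5 and 8.5), illustrating the weight-similarity grouping described in the text.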
Figure 2: Decryption time at the cloud server.

3.3. Decision-Making Approach.
For conflicting-criteria decisions, multiple-criteria decision-making (MCDM) methods have been used. They are useful for decision performance and provide proper separation based on the aggregated weights. MCDM methods have been used in different scenarios for ranking data based on rule vectors or weights [50]. SAW has been used as the MCDM method for decision selection based on the performance matrix. First, the score, in terms of the aggregated assessment ranking, is calculated as

CS_i = Σ_{j=1}^{n} W_j · r_ij, (1)

where CS_i is the total evaluation score of the ith data item, W_j is the weight of the jth attribute, and r_ij is the normalized value of the ith data item for the jth attribute. Then the decision performances are arranged. For each data item, a separate score for the decision evaluation is calculated as per eq. (1): each normalized value is multiplied by the corresponding attribute weight (between 0 and 1), and the products are summed over the attributes. This arranges the data based on performance, and the ranking is obtained; the score is useful in the positioning criteria. The advantage of this procedure is that it is a proportional linear transformation of the raw data, which implies that the relative order of magnitude of the standardized scores remains unchanged. Then a data-sharing mechanism can be provided in our cloud platform. Secure data transmission is now possible, as the security algorithms are applied to the data automatically based on the performance ranking, weight, and aggregation. This is helpful in providing lighter-weight security where the data are not very sensitive; otherwise, higher security is provided.
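The SAW scoring and ranking described above reduces to a weighted sum per data item. A minimal sketch follows; the decision matrix and attribute weights are hypothetical examples, not values from the experiments.

```python
def saw_scores(R, weights):
    """Simple additive weighting: CS_i = sum_j W_j * r_ij for a
    decision matrix R (rows = data items, columns = attributes)
    whose entries are already normalized to [0, 1]."""
    return [sum(w * r for w, r in zip(weights, row)) for row in R]

def saw_rank(R, weights):
    """Return item indices ordered from best (highest CS_i) to worst."""
    scores = saw_scores(R, weights)
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# hypothetical normalized decision matrix: 3 data items x 3 attributes
R = [[0.9, 0.2, 0.5],
     [0.4, 0.8, 0.6],
     [0.1, 0.3, 0.2]]
W = [0.5, 0.3, 0.2]  # attribute weights, summing to 1

scores = saw_scores(R, W)  # CS values per item
order = saw_rank(R, W)     # decision performance ranking
```

Because SAW is a linear transformation of the normalized ratings, scaling all weights by a constant leaves the ranking order unchanged, which is the invariance property noted in the text.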

Automated Data Security.
Based on the decision ranking, data items whose observed frequency is more than 50% are operated on separately: the RC6, RSA, and AES algorithm hybridization is applied to these selected data. RC6 alone is applied to the rest of the data, based on the security levels [51,52]. This is done automatically based on the data rank, which helps with timely delivery and operations on textual data. RC6 is chosen for its high key usability, RSA for its larger key sizes, and AES for its single-S-box capability and faster computation.
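The selection rule above can be sketched as a simple threshold check. The cipher names and the >50% threshold come from the text; the function names, the stub data items, and their frequencies are illustrative assumptions (real encryption calls would replace the returned labels).

```python
HYBRID = ("RC6", "RSA", "AES")  # applied together for sensitive data
BASIC = ("RC6",)                # applied alone for the remaining data

def select_ciphers(frequency_pct, threshold=50.0):
    """Pick the cipher set for a data item from its observed
    decision-ranking frequency, per the >50% rule in the text."""
    return HYBRID if frequency_pct > threshold else BASIC

# hypothetical ranked items: (file name, observed frequency in %)
items = [("patient_records.txt", 72.0), ("readme.txt", 15.0)]
plan = {name: select_ciphers(freq) for name, freq in items}
```

Because the rule is purely rank-driven, no manual per-file decision is needed, which is what makes the security application automatic.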

Security Alert.
Our framework has the capability of alerting on unauthorized access at the data point. The data will be corrupted, with a notification of the unauthorized data handling and access.
Steps 1-6 in the algorithm express the working of the proposed model. The FCM algorithm (Algorithm 1) is used for clustering and calculating the membership values, and the SAW algorithm (Algorithm 2) is used for data selection, calculating the ranking order, and performance evaluation. RC6 (Algorithm 3) is used for key generation and high key usability, RSA (Algorithm 4) is used for larger key generation, and the AES algorithm (Algorithm 5) is used for substitution. Algorithms 3-5 are also used for the encryption/decryption of data.
In Figure 10, a flowchart of the framework's working mechanism is presented. Initially, data are uploaded to the cloud server, and then data preprocessing is performed on the uploaded data. After preprocessing, attributes are selected based on the 10 categories and weights are assigned. The FCM algorithm is then applied to the attributes; to remove bias, the centroid mechanism is applied, which is known as seed initialization. This step is repeated multiple times until the termination point is reached. After this step, the data are passed to the SAW algorithm. Based on the SAW result, our framework applies security to the data while they are shared in the cloud. The overall framework structure is shown in Figure 11; the representation of the steps from data uploading to weight assignment is the same. Based on the decision performance ranking, the top-ranked items whose support or threshold value is ≥50% are adopted for further processing, and the remaining data are discarded. Then the FCM and SAW algorithms are applied to find the ranking of the data. Based on this result, an individual key or a combination of keys generated by the RC6, RSA, and AES algorithms is applied automatically. If unauthorized access occurs, a warning is generated and the shared copy of the data in the cloud is erased.

System Requirements.
This cloud computing security and auto-key-applicability framework has been developed, designed, and executed on the Java platform version 7 in the NetBeans IDE 7.2 environment. The minimum suggested RAM is 2 GB, with an Intel or AMD x86 processor with 64-bit processing (a quad-core processor is suggested); installation needs 5 GB of storage for the workspace and other applications. There are no specific operating-system requirements beyond a minimum of Windows 7, and no specific software or graphics-card requirements.

Results Based on Fuzzy C-Means (FCM).
In this section, experimental results are discussed and analyzed based on different data operations. Initially, 30 data points are considered for experimentation. First, the data were uploaded to the cloud computing environment, and data security was applied to the data for the shared users. Preprocessing was performed first; then the computational weights were calculated as shown in Table 2. The weights range from W1 to W10, and client information is available in the client column. When data were uploaded to the cloud, we performed data preprocessing, after which FCM was applied for computation as mentioned in Tables 3 and 4. For data grouping in the range of 1 to 10, the weight matrix was considered as shown in Tables 5 and 6. Based on weight aggregation and similarity in security requirements, grouping was done as shown in Figures 12 and 13.
Within the centroid methodology, entirely random seeds are processed first, and then the SAW method is used to obtain the data selection and ranking order, as depicted in Figures 14 and 15. This clearly shows the clustering approach based on the applied algorithms. As shown in Tables 7 and 8, the data ranking and security operations were performed.
Let A = {a_1, a_2, a_3, ..., a_n} be the data-point values in the complete set and E = {e_1, e_2, e_3, ..., e_n} be the set of centers.

Algorithm 1 (FCM): (1) Randomly choose the cluster midpoints with no bias.

Algorithm 2 (SAW): (1) Perform the normalization process for the initially calculated weights. (2) Evaluate the scores based on the following formula: (3) CS_i = Σ_{j=1}^{n} w_j r_ij, where CS_i is the total evaluation score of the ith data item, W_j is the weight of the jth attribute, and r_ij is the normalized value for the ith data item and jth attribute. (4) End.

Algorithm 3 (RC6): (2) for i = 1 to r: (a) x = (A2 × (2·A2 + 1)) ≪ log2 w. (4) End for. (5) Data of A are appended to A with the addition of 2 bits and 2 rounds. (6) Data of C are appended to C with the addition of 3 bits and 2 rounds. (7) The subtraction process with the addition of 3 bits and 2 rounds is performed in the decryption process. (8) Then the subtraction process with the addition of 2 bits and 2 rounds is performed in the decryption process. (9) The same process is reversed to recover the complete text data.

Algorithm 5 (AES): (1) The initial round is performed; it contains 13 rounds. (2) The AddRoundKey round is performed; it is an XOR operation with the round key.

Comparative Analysis Based on Variable Parameters in the Case of FCM.
The above discussion shows that with different data sets, different data values and aggregated values were obtained. These aggregated sets determine the security operation and the risk associated with the data uploaded to the cloud environment. The process is completely automated, so there is no need to consider individual files for processing, and the held data will be appropriately correlated with the sharing inter-cloud environment, as shown in Figures 16-19. CW denotes the weight according to the cluster.

Comparative Analysis Based on Data Security.
A completely random set has been chosen to reduce the chance of bias. The applicability of keys to sensitive data is shown by key grades [53,54]. In the methodology, mainly three keys were used, which give five different types of key distribution, individually or in combination. Figure 20 clearly shows that keys are automatically applied to files; as the number of files increases, the keys applied also increase according to sensitivity, giving better security and reducing risk.

Comparative Analysis Based on Encryption and Decryption Time.
Figures 1 and 2 clearly depict the time taken for encryption and decryption. As the encryption process is completed by RC6, the substitution process by AES, and the key generation process by RC6, AES, and RSA collectively, the security is high, and compared to the aggregation, the total time taken is less. The data size in bytes shows the variability considered in the data-uploading mechanism.

Comparative Analysis with Existing Methods.
Comparative analysis is done based on the parameters shown in Table 1. Figures 2-6 show the comparison based on parameters such as message, key size, delay, and randomness.
In the analysis, we compared message, key, and randomness. Message shows the total number of generated key messages; key shows the total number of keys used; and the randomization process in each key-generation iteration is captured by randomness. It is clear from Figure 5 that the proposed approach shows better results compared with contemporary techniques. A comparison based on key size is shown in Figure 6, which clearly shows that the variability in key size (min/max) is better in our approach. Security is higher and better in our framework for higher risk, due to maximum key variability and authentication. Both results show that our automatic framework, based on FCM with three security algorithms, has complex security capability. Figure 7 depicts the discovery time (DT) of unauthorized access in the cloud environment; DT is the time taken to notify the cloud user and server of probable unauthorized access. The encryption delay is higher in our case, as shown in Figures 1 and 2, which reflects the higher security complexity compared to previous work. The key complexity is the advantage of our approach, which shows that the efficiency of our approach is higher than that of traditional approaches.

Conclusion and Future Work
In this article, a secure client framework is proposed that is efficient and secure in a cloud environment under the given security constraints. Data preprocessing and the FCM and SAW algorithms are applied, and finally the decision performance ranking is calculated. The clustering performance shown is based on the constraints used for the chosen attributes in terms of computational weights. The security constraints are applied based on the performance matrix. Data security during sharing in the cloud, and the generation of keys for shared data, is achieved through a random combination of three encryption standards. Based on the decision performance ranking, for data whose support was higher than the given threshold value, all three security standards are adopted, and one security standard (randomly selected) is applied to the remaining data. The resulting process is completely automated, so there is no need to consider individual files for processing, and the held data will be appropriately correlated with the sharing inter-cloud environment.
Our results illustrate that, for different numbers of file sets, the key spreading automatically increases or decreases according to the sensitivity of the data. The encryption delay is higher in our case, which reflects the higher security complexity compared to previous work.
The key complexity is the advantage of our approach, which shows the better efficiency of our approach over existing approaches. Key variability in the proposed approach is also higher than in traditional approaches, which is another advantage. Future suggestions are based on the proposed work and the previous analysis: the work can be extended by combining machine learning (ML) techniques with other clustering algorithms. The parameters can be changed or extended in the future to check the correlation variation in different ways for analyzing the system and its security aspects. Computation can be enhanced with the introduction of ML, other clustering approaches, and new encryption and steganographic approaches. The proposed framework is suitable for the SaaS layer and for the text data type only; it can be designed for other cloud layers such as PaaS and IaaS and extended to other data types such as images, audio, and video.

Data Availability
The data related to the experimentation and other work are available from the corresponding author upon request.

Figure 20: Auto application of keys to data based on decision ranking.