An Efficient Integrity Verification and Authentication Scheme over the Remote Data in the Public Clouds for Mobile Users

The digitalization of the modern world and its applications seem to be integrated more with mobile phones than with any other communication devices. Since mobile phones have become ubiquitous with applications for nearly all users, they have become a preferred choice for uploading sensitive information to cloud servers. Though the drive for data storage in cloud servers is implicit due to its pay-per-use policies, the manipulation of the data present in the cloud servers by hackers and hardware failure incidents, as happened in the Amazon cloud servers in 2011, necessitates the demand for data verification at regular intervals over the data stored in the remote servers. In this line, modern-day researchers have proposed many novel schemes for ensuring remote data integrity, but they suffer from attacks or from computation and communication overheads. This research paper provides solutions in three dimensions. Firstly, a novel scheme is introduced to verify the integrity of the data stored in remote cloud servers in the context of mobile users. The second dimension is that of reducing the computational and communication overheads during the auditing process compared with previous works. The third dimension securely authenticates the mobile user during the auditing process and the dynamic data operations such as block modification, insertion, and deletion. Moreover, the proposed protocol is provably secure, exhibiting soundness, completeness, and data privacy, making it an ideal scheme for implementation in real-world applications.


Introduction
The modern world, which is getting more and more digital every day, has mandated the need for data outsourcing to cloud servers, from the mobile phones of individual users to corporate offices [1,2]. Applications like Google Drive, Google App Engine, OneDrive from Microsoft, Google Picasa, Adobe Cloud, Oracle Cloud, Dropbox, Facebook, and other such applications have made an implicitly compelling scenario, ranging from the layman to the highly resourceful technocrat, to make use of the cloud for data storage. Hence, in this digital world, a person who possesses a mobile phone views this vast world as just a small global village where he has access to any information through the Internet and cloud storage.
At the same time, cloud storage has its inherent benefits in terms of elasticity, reliability, the pay-per-use model, and traffic adjustability during upload, download, and other situations [3]. In such a context, mobile users who possess powerful processors and random access memory for processing the data need not store the data in their local storage. Once the data is uploaded to the cloud storage, they are relieved of the burden of maintaining it. But this flexible data storage comes with its own inherent disadvantages as well, as cited in the work [4]. Newer kinds of attacks on data storage, including on the cloud servers, have become apparent nowadays. If the sensitive data is captured by hackers, they can use it for mining business and usage patterns, which might indeed lead to losses for the actual data owners. Even the cloud servers may try to hide the fact of data server crashes, which will lead to permanent data loss for the data owners.
In order to enable the verification process, first, the user splits the large file to be uploaded into smaller units called file blocks. The user uploads all the blocks to the remote storage area, such as cloud servers. Later, if the user wants to verify the integrity of the uploaded file, he can do so by making use of some cryptographically verifiable procedure [5][6][7].
Similarly, to ensure data verifiability in the cloud servers, multiple schemes have been proposed in the past literature. In one such scenario, cloud servers were vested with the responsibility of computing the proof of verification based on all of the blocks stored in the cloud storage [8,9]. In such cases, if the cloud server had to do this computationally intensive work for hundreds of users simultaneously, it might incur a huge computational overhead. On the contrary, authors like Juels and Kaliski in 2007 [10] claimed that the audit task must be done at the cloud user's side. This method will not suit users who work with computationally constrained, battery-powered mobile phones.
A pioneering attempt by Ateniese et al. [11] in 2008 claimed that a user who wants to audit the file integrity need not access the entire file stored in the cloud, and also, the user can delegate the audit task to a third party. Some works in this line delegated the verification rights to other parties such as trusted third parties or other such entities. In the works proposed [12][13][14], a patient who undergoes a treatment from a doctor makes use of electronic health records stored in the cloud storage and also allows the doctor to create the records and store them in the remote storage on behalf of the patient. A recent work proposed by Yu et al. in 2017 [15] seems to be a worthwhile protocol with public auditing capability with efficient computational and storage overheads. Many verifiable schemes have been proposed by Chen et al. in 2019 [16], Peng et al. in 2019 [17], Fujisaki and Okamoto in 1998 [18], Patra et al. in 2015 [19], and others.
Hence, a research work should be able to authenticate a legitimate mobile user during the audit response phase. In this regard, each mobile user should store a unique authentication parameter in the cloud server before sending any audit request. Hence, during the audit response, the cloud server is able to successfully authenticate only the legitimate users, identify intruders, and abort their audit requests.
One essential fact to be considered during the protocol design is to ensure the utmost security with low computational cost while keeping the protocol robust against attackers and hackers.
Thus, though there are multiple methods of ascertaining the integrity of the stored file blocks in the cloud servers, each method suffers from one of several problems: the computational burden on cloud servers or data owners, the reliability of the verification procedure done by third parties on behalf of the cloud users, or the lack of authentication procedures for the cloud servers and cloud users. In this research work, a novel method which avoids the above shortcomings in the verification of data stored in remote servers has been proposed, enabling the user who uploads the file to verify its integrity as well.
Unless the data stored in the remote locations is verified thoroughly to the satisfaction of the user with respect to security assurance and computational and communication capabilities, not only are these schemes prone to attacks, but they would also be nonoperational for practical use by the mobile user community, which constitutes a larger portion of the Internet users. Based on the necessity to address these issues, the contributions of this research work can be highlighted as follows.

Contributions of This Research Work
(i) A novel scheme to verify the integrity of the remote data and to authenticate the mobile user during the integrity auditing process is introduced
(ii) The presence of the authentication procedure prevents eavesdroppers and hackers from intruding into the system
(iii) The proposed work is resistant to attacks and computationally more efficient than the previous works
(iv) The presence of valid proofs for completeness, soundness, and perfect data privacy makes this a vital contribution for real-world data audits in cloud

Organization of This Research Work
The organization of this research manuscript is as follows. Section 2 incorporates the much needed recent and old literary works pertaining to remote data integrity verification and showcases the need for improvements in them. Section 3 provides a quick review of the preliminaries, and a suitable architecture of the protocol proposed in this research work is in Section 4. The subsequent section shows the construction of the proposed protocol with its novel procedures for data integrity verification of files and support for dynamic data operations. Section 6 analyses this work in terms of its correctness, soundness, and data privacy during the audit process. In Section 7, the implementation results of the protocol are compared with various schemes, and the results are tabulated. Section 8 concludes this research work.

Literature Survey
Latest advancements such as Internet-of-Things (IoT), fog computing, digital transaction with blockchain-based security assurance, smart cities, cloud computing, and other such technologies enable a mobile user to upload sensitive information to the cloud servers for future processing. Many worthwhile schemes have been put forward by researchers and students of various institutions to ensure the correct possession of data stored in the cloud servers. Wang et al. in 2010 [20] introduced a similar scheme for the proposal of the efficient auditing framework without requiring the user to maintain a local copy of the data uploaded to the cloud server. This work is resistant to the attacks from the auditor and supports integrity verification for multiple users at the same time. A scheme proposed by Zhu et al. in 2012 [21] enabled the clients to store the files in multiple cloud servers and introduced a scalable integrity verification service with reduced computational and communication complexities based on homomorphic procedures and indexing hierarchies.
A worthwhile contribution from Zhu et al. in 2013 [22] attempts to verify the data integrity of the files stored in the cloud servers by making use of fragment structure, hash table indexes, and probabilistic query-based auditing services for frequent verification processes. Though this work is a novel one of its kind, it lacks the proper authentication of the user during the data dynamic operations and incurs relatively significant computational overhead during the integrity verification process. A work on identity-based remote data integrity verification scheme proposed by Yang and Jia [23] in 2013 provides support for both static and dynamic data operations. In this case, the third-party auditor efficiently does the integrity verification of the data stored in the cloud, and provision has been made for doing batch integrity verification operations for multiple data owners and multiple cloud servers at the same time. But, this work does not authenticate the third-party auditor who verifies the integrity of the data.
Huang et al. in 2014 [24] allowed a third-party verification by utilizing the service of semitrusted TPAs. In this scheme, the TPA is assumed to be partially trusted, and a data owner verifies the proof handed to the TPA by the cloud server. Another work proposed by Wang et al. [25] in 2014 was based on the identity-based verification scheme which avoided the complex public key infrastructure based on complex certification process. Similarly, Yu et al. in 2015 [26] proposed a protocol for the public verification using algebraic signatures of data which prevented the replay and deletion attacks.
Another scheme by Liu et al. [27] in 2017 introduced a lattice-based scheme which is free from certificate verification processes and escapes the quantum computer attacks while ensuring data privacy against the third-party auditor.
They have successfully verified the integrity of the stored data in the cloud without making use of the costly certification process. The scheme is resistant to the attacks posed by the cloud server. Though this work is a commendable one, it lacks user authentication during the verification procedure.
A recent work by Ren et al. [28] in 2018 makes use of rb23Tree to prevent the cloud servers from manipulating some of the sensitive data and escaping from the integrity verification procedure. Luo et al. in 2018 [29] proposed an efficient scheme using BLS short signatures which preserve the user privacy incurring only less computational and communication complexities. A very useful recent research work which involves the verification of the integrity audit of cloud data was proposed by Yan et al. in 2019 [30]. This efficient scheme preserves user privacy along with data blindness at a much less computational cost. A well-acclaimed work by He et al. in 2015 [31] preserves conditional privacy and ensures authentication in wireless environments. Also, a notable work from Zhang et al. in 2019 [32] preserves the privacy without using bilinear pairings.
2.1. Gaps in the Literature Survey. Some of the gaps identified include lack of authentication, protocols being susceptible to attacks, and higher computational complexity, among others. The lack of authentication may help attackers masquerade in the verification process.
Objectives of the proposed research work are as follows:
(1) To invent a novel algorithm for the remote data integrity verification process which is free from attacks
(2) To invent a computationally efficient algorithm for enabling remote data integrity verification over the data stored in the cloud servers
(3) To introduce a secure authentication scheme to authenticate the mobile users during the secret key generation
(4) To enable secure authentication for the challenge-response procedure during the integrity verification process
(5) To support dynamic data operations such as modification, deletion, and insertion on blocks of the files stored in the remote cloud storage
(6) To ensure perfect data privacy from the third-party auditor (TPA), thereby allowing him only to do the verification process without gaining any information about the file stored in the cloud server

Properties of Bilinear Pairing
Let us assume that G_1 and G_2 represent two multiplicative cyclic groups whose order is q and that g is a generator of G_1. Now, the map e: G_1 × G_1 → G_2 represents a bilinear pairing if it exhibits the following three properties:
(1) Bilinearity: e(P^x, Q^y) = e(P, Q)^{xy} for all P, Q ∈ G_1 and x, y ∈ Z_q^*
(2) Nondegeneracy: e(g, g) ≠ 1, where g is a generator of G_1
(3) Computability: e(P, Q) is computable using an efficient algorithm
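These three properties can be checked concretely in a toy model. The sketch below is NOT a cryptographic pairing; it merely mimics one by representing G_1 elements through their discrete logarithms modulo a small prime q, with e(a, b) = h^{a·b mod q} in the order-q subgroup of Z_p^*; all constants are arbitrary illustrative choices:

```python
# Toy "pairing": p = 2q + 1 is a safe prime, h = 4 has order q in Z_p*,
# and a G1 point with discrete log a is represented by the integer a mod q.
q, p, h = 1019, 2039, 4

def pair(a, b):
    """Toy bilinear map e: G1 x G1 -> G2, e(g^a, g^b) = h^(a*b mod q)."""
    return pow(h, (a * b) % q, p)

P, Q = 123, 456           # discrete logs of two G1 points
x, y = 7, 11

# (1) Bilinearity: e(P^x, Q^y) = e(P, Q)^(x*y)
lhs = pair((x * P) % q, (y * Q) % q)      # P^x has discrete log x*P mod q
rhs = pow(pair(P, Q), (x * y) % q, p)
assert lhs == rhs

# (2) Nondegeneracy: e(g, g) != 1 for the generator g (discrete log 1)
assert pair(1, 1) != 1

# (3) Computability: pair() runs in O(log q) time via square-and-multiply
print("all three pairing properties hold in the toy group")
```

A real implementation would use a pairing library (the paper's experiments use pbc), where these properties hold by construction rather than by this discrete-log bookkeeping.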

3.2. Notations. The notable notations used in this research work are presented in Table 1.

System Architecture. Each mobile user and the cloud server registers with the system manager, which in turn uploads the unique parameters of the mobile users to the cloud, enabling user authentication during the audit response. Apart from this, the system manager sends a parameter composed of its master secret which enables the mobile user to compute its own secret key. Now, a mobile user who wants to audit its data sends an audit request to the third-party auditor. Accordingly, the third-party auditor creates an audit challenge and sends it to the cloud server. If the user authentication is successful, the cloud server generates an audit response which is verified by the third-party auditor, and the verification status is sent to the mobile user who initiated the audit request. Besides, if a data owner wishes to update, modify, or remove any block of data uploaded to the cloud, it can be achieved as well through dynamic data operations.
The proposed system consists of four major entities: the data owner who is the mobile user, the cloud server (CS), the system manager (SM), and the third-party auditor (TPA).

Data Owner.
They are the mobile users who want to upload the sensitive files to the cloud storage due to lack of local storage and maintenance capabilities. At regular intervals, they will ensure the integrity of the remote data by sending auditing requests to the CS through the TPA. Moreover, the mobile user can modify, delete, or insert blocks of a file in the CS which was previously uploaded by it.

System Manager.
It is the entity which initializes the system and is responsible for generating the secret key to the mobile user and the TPA for verification purposes. This entity also uploads authentication parameters of mobile users to the cloud server to enable secure authentication of the mobile users by the cloud servers.

Cloud Server.
It represents the computer farms with vast potential for data storage sold in pay-per-use cost models. It is where the huge files which are divided into individual blocks of the mobile users are stored with provisions for integrity verification.
The files along with their corresponding tags are stored to enable the auditing of the outsourced data of mobile users and the authentication of mobile users. During the audit process by the TPA, the CS receives a challenge from the TPA and accordingly sends a response back to the TPA to enable integrity verification.

The following entries of Table 1 are used in the sequel:
P, G, Y: points in the group G_1, of which P and Y are public, and G is kept secret by the system manager
α: a random element from Z_q^* kept as a secret by the system manager
ID_i: identity of the mobile user i
n_i: public key of the mobile user i
MU_i: mobile user i
x_i: a random element from Z_q^* kept as a secret by mobile user i
Enc_{n_i}: asymmetric encryption function with the key n_i
m_i: i-th block of file F

Third-Party Auditor.
It is the entity which does the auditing work on behalf of the mobile user. The TPA receives an auditing request from the mobile user, creates an auditing challenge based on some secret parameters, and sends it to the CS for the authentication of the mobile user and the data integrity verification. The CS creates the corresponding audit response and sends it to the TPA. Now, the TPA verifies whether the received response is a genuine one or not. If successfully verified, this entity sends the auditing response to the mobile user. At regular intervals, the TPA thus assures the integrity of the remote data through auditing requests.

Initialization of the System.
The SM initializes the system by selecting two multiplicative cyclic groups G_1 and G_2 whose order is q and the bilinear map e: G_1 × G_1 → G_2. It selects a hash function H to produce the message digest and randomly selects an integer α ∈ Z_q^* and the points P, G ∈ G_1, from which it computes the public point Y = G^α. Now, the SM publishes the parameters of the system, such as G_1, G_2, q, e, P, Y, and H. The parameters α and G are kept as a secret by the SM.

Mobile User Registration in the System.
This phase consists of the following steps between the mobile user MU_i and the system manager SM. The MU_i sends ID_i, n_i to SM. Here, ID_i refers to the identity of MU_i, and n_i refers to the public key of MU_i. The SM in turn computes the values X and A and sends them to the mobile user MU_i. Now, the MU_i computes D and verifies whether D = A. If successfully verified, MU_i ascertains that it has finished its registration with SM while sharing its public key with SM. Also, SM stores n_i, A of the user in its local storage. By now, MU_i and the SM have identified each other.
This phase avoids any attacks posed by the attackers during the key generation, file upload, and integrity verification processes. Similarly, the TPA registers itself with the SM by sending its identity ID_tpa and its public key n_tpa. Thus, the TPA and the SM identify each other as well.

System Manager Generating the Secret Key for the Mobile User
In this phase, the mobile user MU_i, after successful registration, converses with the SM in order to generate an exclusive secret key pertaining to this mobile user.
(1) The MU_i selects x_i at random from Z_q^* and computes P^{x_i}. It sends n_i, n_i · A, ID_i, P^{x_i} to SM as depicted in Figure 2.
(2) The SM, on receiving the parameters, retrieves the value of A based on the identity ID_i from its local storage. It computes n_i · A and checks whether it is the same as the received value. If not verified, the operation aborts. If verified, it selects β at random from Z_q^* and computes Z = e(P, G) along with the key parameters K_1 and K_2.
Now, the SM sends Enc_{n_i}(ID_i, K_1, K_2, Z) to the mobile user MU_i. Besides, the SM updates the authentication table as cited in Table 2 with the identity ID_i and the corresponding parameter P^{x_i} of the MU_i. The system manager SM, at the end of each day, uploads this authentication table along with the signature for the authentication details to the cloud server CS. The system manager computes the signature as e(P^{α·h_1}, G^{H(ID_cs)}), where h_1 refers to the hash of the details in the authentication table and ID_cs refers to the identity of the cloud server. The cloud server verifies the received data by checking whether e(P^{α·h_2}, G^{H(ID_cs)}) = e(P^{α·h_1}, G^{H(ID_cs)}), where h_2 refers to the hash value pertaining to the authentication table as received by the cloud server given in Table 2. Therefore, if the received hash value h_1 and the computed hash value h_2 are the same, the equation e(P^{α·h_2}, G^{H(ID_cs)}) = e(P^{α·h_1}, G^{H(ID_cs)}) becomes valid, which ascertains that the received authentication table is completely intact.
(3) MU_i, on receiving the encrypted message Enc_{n_i}(ID_i, K_1, K_2, Z), decrypts it using its corresponding private key (using a suitable algorithm like RSA) and gets the parameters ID_i, K_1, K_2, Z.
Since this message is confidential, it avoids any man-in-the-middle attack or other such attacks during its transit from the SM to MU_i. (4) Now, the mobile user MU_i computes its secret key K_3 from the received parameters. (5) Besides, the mobile user MU_i authenticates the system manager SM by verifying a relation involving Z. Since the value of Z can be known only to the SM, this verification procedure successfully authenticates the SM to the mobile user MU_i.

Tag Generation and File Upload by the Mobile User.
Let us assume that the mobile user MU_i wants to upload a sensitive file F to the public cloud server CS. To store the file without any integrity breach in the middle and in order to be able to ascertain the genuineness of the file later, the mobile user MU_i performs the following steps:
(1) MU_i divides the file F into n blocks. Let us assume that m_1, m_2, m_3, ..., m_n refer to the individual blocks of that file.
(2) It randomly selects c ∈ Z_q^* to be used in the remote integrity verification process.
(3) It takes each block m_i and computes the corresponding block tag σ_i = e(P, G)^{(x_i + m_i)·c}.
(4) The mobile user MU_i stores all the blocks of the file F along with the corresponding tags σ_i, i ∈ n, of those blocks in the cloud server, where σ_i refers to the block tag of the corresponding block m_i.
(5) Finally, MU_i deletes its local copy of the file F from its local storage.

RDIC Challenge by the Third-Party Auditor.
To initiate an audit, the mobile user MU_i sends the audit request Enc_{n_tpa}(ID_i, Z, x_i, n_i, c) to the TPA, where n_tpa is the public key of the TPA. Subsequently, the TPA creates a challenge as follows based on a few randomly selected file blocks:
(1) The TPA selects a random integer v_i ∈ Z_q^* for each of the randomly selected blocks to be verified.
(2) In order to identify the mobile user MU_i to the cloud server CS and to avoid the man-in-the-middle attack and other possible attacks, the TPA computes the parameter E_1 = e(P, G)^{x_i·n_i·H(ID_i)+α·x_i}.
(3) Also, the TPA computes E_2 = [e(P, G)^{x_i}]^{n_i}, in which n_i is the public key of MU_i and x_i refers to the secret parameter of MU_i.
(4) Now, the TPA creates the challenge based on the corresponding block numbers and the randomly generated integers for those blocks as chal = (i, v_i)_{i∈I}. For example, consider the case of (5, v_5), where 5 refers to block number 5 and v_5 refers to the corresponding integer which was randomly chosen by the TPA.
(5) At last, the TPA sends E_1, E_2, ID_i, chal to the cloud server CS.
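The challenge-generation steps above can be sketched in the same toy-group style. This is a hypothetical illustration, not the paper's C/pbc implementation; in particular, the α-dependent factor of E_1, which the real TPA derives from key material issued by the SM, is computed here directly from an assumed stand-in Z_alpha = Z^α for brevity:

```python
import hashlib
import random

# Toy group: the order-q subgroup of Z_p*, p = 2q + 1; Z stands in for e(P, G).
q, p = 1019, 2039
Z = pow(4, 500, p)            # stand-in for Z = e(P, G)
Z_alpha = pow(Z, 55, p)       # stand-in for Z^alpha (alpha = 55 is illustrative)

def H(s):
    """Hash an identity string into Z_q^* (toy version of the scheme's H)."""
    return int.from_bytes(hashlib.sha256(s.encode()).digest(), "big") % q or 1

def make_challenge(x_i, n_i, ID_i, block_ids):
    """Steps (1)-(5): pick v_i per block, then compute E_1, E_2, and chal."""
    # E_1 = e(P,G)^(x_i*n_i*H(ID_i)) * (Z^alpha)^(x_i), identifying MU_i to the CS
    E1 = (pow(Z, (x_i * n_i * H(ID_i)) % q, p) * pow(Z_alpha, x_i, p)) % p
    # E_2 = [e(P,G)^(x_i)]^(n_i)
    E2 = pow(pow(Z, x_i, p), n_i, p)
    chal = [(i, random.randrange(1, q)) for i in block_ids]   # (i, v_i) pairs
    return E1, E2, ID_i, chal

E1, E2, ID_i, chal = make_challenge(x_i=111, n_i=333, ID_i="MU_1",
                                    block_ids=[5, 9, 12])
```

The tuple (E_1, E_2, ID_i, chal) is exactly what the TPA forwards to the CS in step (5).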

RDIC Response by the Cloud Server.
Upon receiving the challenge from the TPA, the cloud server creates the response as follows:
(1) Firstly, the CS authenticates the MU_i by checking whether the equation E_2^{H(ID_i)} · e(P^{x_i}, Y) = E_1 holds true or not. Here, P^{x_i} refers to the parameter corresponding to the mobile user MU_i which is present in the user authentication table. That is, the verification of this equation enables the cloud server to authenticate the mobile user. If the authentication is not successful, the CS aborts the integrity verification process.
(2) It computes the parameter μ = Σ_{i∈I} m_i·v_i.
(3) It also computes σ = Π_{i∈I} σ_i^{v_i}.
(4) It sends μ, σ to the TPA, who is waiting for the response.

Integrity Verification by the Third-Party Auditor.
To verify whether the file uploaded long back was kept intact by the cloud server, the TPA does the integrity verification by checking whether σ = Π_{i∈I} Z^{x_i·v_i·c} · Z^{c·μ}. The proof for this equation ascertains the fact that the individual blocks of the file F which is stored in the remote server are kept intact by the cloud server. The overall interaction between the TPA and the CS during the auditing process is depicted in Figure 3.
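The whole audit round, tag generation with σ_i = e(P, G)^{(x_i+m_i)·c}, the server's aggregation of μ and σ, and the TPA's final check, can be traced end to end in the toy group (illustrative constants throughout; a real deployment would use a pairing library such as pbc):

```python
# Toy group of order q inside Z_p*, p = 2q + 1; Z stands in for e(P, G).
q, p = 1019, 2039
Z = pow(4, 500, p)
x, c = 111, 222                          # user secret x_i and random c
blocks = {1: 17, 2: 29, 3: 41, 4: 53}    # toy file blocks m_i
tags = {i: pow(Z, ((x + m) * c) % q, p)  # sigma_i = e(P,G)^((x_i+m_i)*c)
        for i, m in blocks.items()}

chal = [(1, 3), (3, 5)]                  # challenged blocks with weights v_i

# Cloud server: aggregate mu = sum(m_i*v_i) and sigma = prod(sigma_i^v_i)
mu = sum(blocks[i] * v for i, v in chal) % q
sigma = 1
for i, v in chal:
    sigma = (sigma * pow(tags[i], v, p)) % p

# TPA: check sigma =? prod(Z^(x_i*v_i*c)) * Z^(c*mu), never seeing the blocks
def expected_proof(mu_val):
    acc = pow(Z, (c * mu_val) % q, p)
    for i, v in chal:
        acc = (acc * pow(Z, (x * v * c) % q, p)) % p
    return acc

assert sigma == expected_proof(mu)       # intact data passes

# If a challenged block is corrupted, the server's mu no longer matches sigma
blocks[3] = 999
mu_bad = sum(blocks[i] * v for i, v in chal) % q
assert sigma != expected_proof(mu_bad)   # corruption is detected
print("audit round verified; corruption detected")
```

The passing check is exactly the completeness argument of Section 6: the exponent of σ, Σ(x+m_i)·c·v_i, splits into c·x·Σv_i plus c·μ.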

Data Dynamic Operations.
Under some circumstances, the information stored in a sensitive file may need to be modified, inserted, or deleted. This research work strives to ensure the same with a secure authentication procedure as follows.
For the file block modification operation, the mobile user wants to replace the block m_i of the file F with m_i^*.
(1) The mobile user MU_i finds that the i-th block m_i of the file F needs to be replaced by m_i^* in the cloud server. Hence, it computes the corresponding block tag for the block m_i^* as σ_i^* = e(P, G)^{(x_i + m_i^*)·c}.
(2) Now, the mobile user MU_i computes E_1 = e(P, G)^{x_i·n_i·H(ID_i)+α·x_i} and E_2 = [e(P, G)^{x_i}]^{n_i} as in the challenge generation process.
For the file block deletion operation:
(1) As in the block modification operation, MU_i sends the parameters (DO, ID_i, i, m_i, σ_i^*, E_1, E_2) to the cloud server, where DO refers to the block deletion operation.
(2) The CS, upon receiving the parameters, authenticates MU_i using E_1 and E_2 and then carries out the requested operation.
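A minimal sketch of the block-modification flow in the same toy group (hypothetical helper names; the authentication step via E_1, E_2 is omitted and assumed to have succeeded): the user re-tags the new block with the same formula used at upload time, and the server swaps the block and its tag in place, so subsequent audits verify against the new content:

```python
# Same toy group as in the audit sketch; Z stands in for e(P, G).
q, p = 1019, 2039
Z, x, c = pow(4, 500, p), 111, 222

def tag(m):
    """sigma = e(P,G)^((x_i + m)*c) in the toy group."""
    return pow(Z, ((x + m) * c) % q, p)

blocks = {1: 17, 2: 29, 3: 41}               # stored blocks m_i
tags = {i: tag(m) for i, m in blocks.items()}

def modify_block(i, m_star):
    """User side: recompute sigma_i*; server side: replace block and tag."""
    blocks[i], tags[i] = m_star, tag(m_star)

modify_block(2, 99)
assert blocks[2] == 99 and tags[2] == tag(99)
print("block 2 replaced and re-tagged")
```

Deletion and insertion follow the same pattern, removing or adding a (block, tag) pair after the CS has authenticated the requester.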

Security Analysis of the Proposed Work
A remote integrity protocol is assumed to be secure if it exhibits the properties of completeness, soundness, and data privacy. This section analyses the proposed protocol with regard to these essential properties.

Theorem 1 (completeness). The integrity verification done by the TPA after receiving the audit response is based on a valid proof.
Proof. In this research work, Π_{i∈I} Z^{x_i·v_i·c} · Z^{c·μ} = σ, as shown during the integrity verification process by the TPA, ensures the completeness of the proposed RDIC protocol. The proof for this equation can be given as follows:
σ = Π_{i∈I} σ_i^{v_i} = Π_{i∈I} [e(P, G)^{(x_i+m_i)·c}]^{v_i} = Π_{i∈I} e(P, G)^{x_i·v_i·c} · e(P, G)^{c·Σ_{i∈I} m_i·v_i} = Π_{i∈I} Z^{x_i·v_i·c} · Z^{c·μ}.
If the mobile user and the cloud server are truthful and free from deceit, then the equation σ =? Π_{i∈I} Z^{x_i·v_i·c} · Z^{c·μ} should hold true.

Theorem 2 (user authentication).
The authentication verification of the cloud server during the audit response is done based on a valid proof.
Proof. Let us assume that a user i with identity ID_i has uploaded the parameter P^{x_i} to the cloud before uploading any file. Based on the request from that user, the TPA computes E_1 = e(P, G)^{x_i·n_i·H(ID_i)+α·x_i} and E_2 = [e(P, G)^{x_i}]^{n_i} and sends E_1, E_2, ID_i, chal to the cloud server. Here, the check E_2^{H(ID_i)} · e(P^{x_i}, Y) = E_1 done by the cloud server during the integrity verification ensures that only a valid user is part of the verification process and not an intruder. In this case, P^{x_i} is taken by the cloud server from the authentication table, and E_1, E_2 are received from the TPA. Assume that an attacker with a random value x_i′, masquerading as the legitimate user MU_i, sends an audit request E_1′, E_2′, ID_i, chal. The CS, after receiving the audit request from the attacker, retrieves P^{x_i} from the authentication table and tries to verify the authentication, but E_2′^{H(ID_i)} · e(P^{x_i}, Y) ≠ E_1′ since x_i′ ≠ x_i. Thus, the authentication fails. Hence, the proposed system gives a valid proof only to a legitimate user during the authentication process.
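Theorem 2 can be illustrated numerically in the toy group: with the registered P^{x_i}, the check E_2^{H(ID_i)} · e(P^{x_i}, Y) = E_1 passes, while a request built from a forged x_i′ fails against the stored P^{x_i}. All constants are arbitrary, and the forged E_1′, E_2′ are computed generously, that is, even granting the attacker α, the check still fails:

```python
import hashlib

# Toy pairing: G1 points are discrete logs mod q; e(a,b) = h^(a*b mod q) in Z_p*.
q, p, h = 1019, 2039, 4
def pair(a, b): return pow(h, (a * b) % q, p)
def H(s): return int.from_bytes(hashlib.sha256(s.encode()).digest(), "big") % q or 1

Pd, Gd, alpha = 123, 77, 55        # dlogs of P and G, and the SM's secret alpha
Y = (alpha * Gd) % q               # Y = G^alpha (public point, as a dlog)
Z = pair(Pd, Gd)                   # Z = e(P, G)

x, n, ID = 111, 333, "MU_1"        # legitimate user's secret, public key, identity
P_x = (x * Pd) % q                 # P^(x_i) stored in the authentication table

E1 = pow(Z, (x * n * H(ID) + alpha * x) % q, p)
E2 = pow(pow(Z, x, p), n, p)
# Cloud server's check succeeds for the legitimate user
assert (pow(E2, H(ID), p) * pair(P_x, Y)) % p == E1

# Attacker masquerades with x' != x; the stored P^(x_i) exposes the mismatch
x_bad = 444
E1_bad = pow(Z, (x_bad * n * H(ID) + alpha * x_bad) % q, p)
E2_bad = pow(pow(Z, x_bad, p), n, p)
assert (pow(E2_bad, H(ID), p) * pair(P_x, Y)) % p != E1_bad
print("legitimate user authenticated; forged request rejected")
```

The mismatch is exactly the Z^{α·(x_i′ − x_i)} factor described in the proof, which is nontrivial whenever x_i′ ≠ x_i.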
Theorem 3 (soundness). In our scheme, an attacker possessing only e(P, G), v_i, and c cannot break the system and engage in audit cheating.
Proof. The parameters e(P, G)^{x_i}, v_i, c are vital and are shared only between MU_i and the TPA. An attacker may be an existing member of the system who would like to masquerade as user MU_i by using Z = e(P, G), v_i, c and tries to construct Π_{i∈I} Z^{x_i·v_i·c} · Z^{c·μ}, but will fail to do so since the value of x_i is known only to the legitimate user and the TPA. The equation in the proposed work Π_{i∈I} Z^{x_i·v_i·c} · Z^{c·μ} = Π_{i∈I} e(P, G)^{x_i·v_i·c} · e(P, G)^{c·Σ_{i∈I} m_i·v_i} shows that, during integrity verification, an attacker who wants to break the system should possess e(P, G)^{x_i}, v_i, c, which are kept confidential and shared only between MU_i and the TPA. Unless the TPA or the mobile user divulges this information, an attacker cannot guess the values of c and e(P, G)^{x_i} by knowing only the pairing operation e(.) and P. Moreover, since the integrity verification is done based on all the blocks under consideration, neither the CS nor the TPA can engage in fraud, as the values of μ = Σ_{i∈I} m_i·v_i and σ = Π_{i∈I} σ_i^{v_i} cannot be easily computed if the data in the cloud server is corrupt or has been fiddled with by an attacker.
Theorem 4 (perfect data privacy). In our scheme, the TPA is unable to learn any information regarding the data.
Proof. A user sends Enc_{n_tpa}(ID_i, Z, x_i, n_i, c) to the TPA during an audit request. During the whole process of challenge generation and integrity verification, the TPA never accesses the file blocks m_1, m_2, m_3, ..., m_n or their signatures σ_1, σ_2, σ_3, ..., σ_n. During the challenge generation process, the TPA works with ID_i, n_i, E_1, E_2, chal, which do not divulge the details of any file block. Besides, during the verification process, the TPA verifies σ =? Π_{i∈I} Z^{x_i·v_i·c} · Z^{c·μ}, which does not reveal any information regarding the signatures or the file blocks. Though the value of μ is based on the file blocks and the value of σ is based on the signatures of those blocks, they are computed by the cloud server and not by the TPA. The TPA plays only the verification role. Hence, the TPA cannot learn any file information from these parameters, leaving the system preserving the privacy of data during the file integrity challenge and the verification processes.

Results and Discussion
The proposed protocol is implemented on a machine with the Windows operating system and an Intel Core i5-4460 processor running at 3.20 GHz with a primary memory of 4 GB. The experiments were conducted using the pairing-based cryptography library pbc-0.5.14, and the C programming language is used for the implementation. Table 3 shows the security features provided by the proposed work and some notable similar works in the literature.
Let us assume that T_E refers to the cost of an exponentiation operation, T_H refers to the cost of a hash operation, T_P refers to the cost of a pairing operation, T_PA refers to the cost incurred during one point addition, T_PM refers to the cost of one point multiplication, and T_M and T_A refer to the cost of one integer multiplication and integer addition, respectively. During the tag generation phase, a tag for each file block is generated, and n refers to the number of blocks in the file. The proposed research work is compared with the significant works proposed by Yu et al. [15], Wang et al. [20], Zhu et al. [22], and Yang et al. [23]; the per-phase cost expressions are listed in Table 4. Though the previous works strive to ensure the integrity of the documents, they lack the user authentication factor during this process.
This novel research work does both integrity verification and authentication. During the document integrity verification process, the major overhead incurred is due to the tag generation, proof generation, and proof verification processes. Hence, as part of this research work, Table 4 compares the computational cost of the proposed protocol with the recent protocols in the literature. The order of the group is set up as 160 bits, and the tests were conducted with a file size of 1 MB with approximately 50,000 data blocks. In Table 4, n refers to the number of blocks uploaded to the cloud server and c refers to the number of challenged blocks during the integrity verification process. The computation involved in the system initialization is done only once during the initialization of the system. For each user, the computation during registration is done only once during user registration. A user may try to upload multiple files; tag generation is done only during file upload to the cloud server, and the computations for audit work will be done at regular intervals. From Table 4, it is evident that the proposed work incurs relatively less overhead than the other recent works in the literature.
For tag generation during the file upload, the mobile user performs one pairing operation and one exponentiation operation, computing e(P, G)^((x_i + m_i)·c) for each block. For n blocks, the user therefore performs n pairing and n exponentiation operations, which is far less than the similar works in the literature. The results shown in Figure 4 indicate that the proposed work improves on the computational overhead of the previous protocols in the literature. The graph compares the cost for a minimum of 100 blocks up to a maximum of 1,000 blocks. The mobile user incurs a computational cost of only one exponentiation and one pairing operation per block for tag generation. All the other recent works under comparison incur greater costs than the proposed work. For instance, for the tag generation of one block, the scheme proposed by Yu et al. involves two exponentiation operations, one hash operation, and one point addition, which is more costly than the proposed work. From the graph, it can be inferred that, for the generation of tags for 300 blocks, the proposed protocol incurs a computational overhead of 691 ms, which is 599 ms less than that of Yu et al. Similarly, Figure 6 compares the computation cost of the proof verification process performed by the third-party auditor. The proof verification makes use of the blocks that are randomly selected by the third-party auditor. Thus, the proposed protocol shows improved performance over the previously proposed protocols in terms of computational overhead. Table 5 compares the communication cost incurred by the various protocols. In the table, |p| refers to the size of an element in G, |q| refers to the size of an element in Z_q*, |n| refers to the size of a block number, and c refers to the number of challenged blocks.
The communication cost is mainly due to the frequent audit challenges sent by the third-party auditor to the cloud server and the audit responses returned by the cloud server to the third-party auditor. The communication overhead arises from the messages exchanged during the registration, challenge generation, and response processes. The overhead of registration is not accounted for in this work, as registration happens only once per cloud user. In the audit challenge phase, the TPA sends {E_1, E_2, ID_i, chal} to the cloud server. The size of chal = {(i, v_i)} depends on the number of challenged blocks, and hence the size of the audit challenge is O(c). The TPA sends E_1 and E_2 to the cloud server solely for authentication purposes.
Thus, the size of the audit challenge is O(c), which is better than the schemes of Yu et al. [15] and Zhu et al. [22] and incurs the same overhead as the schemes of Wang et al. [20] and Yang and Jia [23], respectively.
During the audit response phase of the proposed protocol, the cloud server sends {μ, σ} to the TPA. The sizes of μ and σ depend on the number of challenged blocks. The proposed scheme is better than Yu et al.'s scheme [15], which incurs a communication overhead of l + log2(r) + 320 bits, and is more efficient than Wang et al.'s scheme [20], which sends {μ, σ, R}, where R adds an overhead of log2(q) bits over the proposed work. Moreover, the proposed work incurs half the communication overhead of Zhu et al. [22] and is identical to the Yang and Jia [23] scheme. Thus, the proposed protocol incurs minimal communication overhead while also providing an efficient authentication feature and minimal computational complexity.
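The linear growth of the audit challenge described above can be sketched as follows. The 160-bit sizes follow the group order stated earlier; the block-index and identifier widths are assumptions made for illustration, not values taken from this work.

```python
# Parameter sizes in bits.
P_BITS = 160   # |p|: size of an element of G (from the stated 160-bit group order)
Q_BITS = 160   # |q|: size of an element of Z_q*
N_BITS = 32    # |n|: block-index width (assumed for illustration)
ID_BITS = 32   # width of the identifier ID_i (assumed for illustration)

def audit_challenge_bits(c):
    """Size of the audit challenge {E_1, E_2, ID_i, chal}: two group elements
    (E_1, E_2) sent only for authentication, the identifier ID_i, and
    chal = {(i, v_i)} holding c index/coefficient pairs, so the total is O(c)."""
    return 2 * P_BITS + ID_BITS + c * (N_BITS + Q_BITS)

print(audit_challenge_bits(100))  # challenge size for c = 100 challenged blocks
```

The fixed authentication cost (E_1, E_2, ID_i) is paid once per challenge, so for large c the challenge size is dominated by the c pairs in chal, matching the O(c) bound above.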

Conclusions
To sum up, a novel, attack-resistant, and efficient protocol for the verification of data stored remotely in cloud servers has been introduced in this research work. This work is the first of its kind to enable an authenticated verification procedure over remote cloud data for the mobile users involved in the verification process. A thorough security analysis has been provided to establish the resistance of this work to attacks in all aspects. The implementation results clearly show that this work provides an auditing service with lower computational complexity than similar works in the recent literature. In this era of mobile computing, this work is particularly relevant for sensitive files stored by mobile users in cloud servers. In the future, this work can be extended to support verification for users of a corporate office or other communities who store data across multiple cloud servers.

Data Availability
No data were used to support this research work.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.