Provable Cloud Data Transfer with Efficient Integrity Auditing for Cloud Computing

Due to the promising market prospects, more and more enterprises invest in cloud storage and offer on-demand data storage services of distinct quality; thus, users may dynamically change the cloud service providers and transfer their outsourced data. However, the original cloud server might not honestly transfer the outsourced data in order to save overhead, and the outsourced data might be polluted during the transfer process. Therefore, how to achieve secure outsourced data transfer has become a primary concern of users. To solve this issue, we put forward a solution for provable cloud data transfer that simultaneously realizes efficient data integrity auditing. By taking advantage of the Merkle sum hash tree (MSHT), our presented scheme satisfies verifiability without dependency on a third party auditor (TPA). Meanwhile, a formal security proof demonstrates that our presented scheme meets all of the expected security requirements. Finally, we implement our presented scheme and offer a precise performance evaluation, which intuitively shows the high efficiency and practicality of our presented scheme in real-world applications.


Introduction
Cloud storage, a very attractive service offered by cloud service providers, can offer convenient and on-demand data storage services to tenants [1,2]. By utilizing cloud storage services, users whose resources are limited can upload their large-scale data to the cloud servers, thus greatly saving their local storage capacity and computational resources [3,4]. As a result, a great number of individuals and business organizations prefer to embrace cloud storage services. A Cisco report shows that the number of Internet users reached about 3.6 billion by the end of 2019, and about 55% of them (nearly 2 billion) employed cloud storage services. Moreover, the global cloud storage economic output is forecast to reach 137 billion by the end of 2025 [5].
Because of the promising market prospects, an increasing number of enterprises invest in cloud storage and offer on-demand, convenient data storage services to tenants on a pay-as-you-go basis [6,7]. However, their data storage services differ in storage capacity, security, reliability, pricing, access speed, and so on. To enjoy more suitable data storage services, users may change the cloud service providers and transfer their outsourced data from one cloud server to another [8]. As a consequence, data transfer has become more and more frequent in recent years. A Cisco report predicts that cloud data flow will occupy 95% of the total data flow by the end of 2021. At the same time, nearly 14% of the cloud data flow will be data flow among different cloud servers [9].
Although many schemes have been presented for achieving data transfer between two different cloud servers, there are some inherent deficiencies in the existing solutions, resulting in severe challenges that have not been solidly solved. Firstly, almost all of the previous schemes require a third party auditor (TPA) to manage the related keys or audit the data integrity/transfer results. However, like other ordinary entities, the TPA may refuse to serve because of the continuously increasing computational overhead, resulting in service interruption. Meanwhile, the TPA might be frequently attacked by the adversary, resulting in privacy leakage. Therefore, the TPA becomes a technology bottleneck that greatly impedes the rapid popularization of cloud data transfer systems [10].
Secondly, the existing data transfer schemes are not efficient because they contain complex protocols or calculations, for instance, the provable data possession (PDP) protocol, bilinear pairing calculations, and modular exponentiation computations, which cost plenty of computational resources [11]. To the best of our knowledge, there does not exist a solution that can simultaneously achieve secure cloud data transfer and efficient integrity auditing without requiring a TPA. Therefore, one of the most important goals in this field is to present a provable cloud data transfer scheme supporting efficient integrity auditing without a TPA. The Merkle sum hash tree (MSHT) can be seen as a specific binary tree, which offers a potential path to achieve the above goal.

Our Contributions.
We aim to study a challenging issue, i.e., provable cloud data transfer with efficient data integrity auditing in the cloud computing environment. Then, we put forward a new scheme, which achieves provable cloud data transfer and efficient integrity auditing simultaneously. The main contributions of this paper are as follows. We put forward a solution for the issue of provable cloud data transfer with efficient integrity auditing. Specifically, users can dynamically change the cloud service providers and securely transfer their outsourced data from one cloud server to another. Subsequently, users can efficiently audit the transferred data integrity on the target cloud to confirm whether the data blocks are intact. By taking advantage of MSHT, our presented solution satisfies verifiability without dependency on a TPA. In the meantime, we provide a security analysis to formally prove that our presented solution meets the desired security requirements. Finally, we implement our presented scheme and offer a performance evaluation, which clearly shows the practicality and high efficiency of our presented solution.

Related Works.
Cloud data integrity auditing has been studied for well over a decade, and there are many solutions [12][13][14][15]. Ateniese et al. [16] presented the first PDP model in 2007. In their model, sampling is used to achieve data integrity auditing, which is characterized by low computational overhead and communication burden. Then, Erway et al. [17] put forward a model for dynamic PDP, which simultaneously supports provable data updates and integrity verification. In 2019, He et al. [18] presented the first public shared data integrity verification protocol, which supports fully dynamic operations with constant storage overhead for verifiers. Huang [19] studied the proxy provable data possession (PPDP) verification model, in which a proxy performs the data possession verification operation for the client. Meanwhile, he adopted bilinear pairing to design an efficient PPDP scheme. Bian and Chang [20] designed a certificateless PDP protocol for multicopy outsourced data stored in different cloud servers. Their novel scheme can resist the key-escrow attack. Chen et al. [21] adopted blockchain to design a DPDP model for smart cities. Then, they used multireplica storage tricks to propose a concrete construction of decentralized PDP.
Beyond PDP, some other authentication data structures are also utilized to achieve data integrity auditing. Zhang and Dong [22] designed an efficient ID-based outsourced data integrity verification scheme. They proved that their solution is provably secure in the random oracle model. In the meantime, they extended their scheme to achieve batch verification in the multiuser scenario. Li et al. [23] suggested achieving data integrity auditing through fuzzy identity and formalized the corresponding system model. Subsequently, they designed a concrete solution based on fuzzy identity, which is characterized by error tolerance. Yu et al. [24] designed an attribute-based outsourced data integrity verification solution, which achieves attribute privacy preservation and collusion resistance. In their scheme, they introduced attribute-based outsourced data integrity verification to solve the complex key management challenge. Recently, Fan et al. [25] proposed a secure identity-based aggregate signature protocol to ensure the integrity of cloud data.
Yue et al. [26] studied data integrity checking in P2P cloud storage scenarios. Then, they designed a general framework for data integrity checking and adopted blockchain to present a concrete solution. Li and Luo [27] adopted homomorphic encryption and homomorphic signatures to establish an integrity auditing scheme for smart grid data, which can protect the data from accidental errors. Thangavel and Varalakshmi [28] utilized a ternary hash tree (THT) to present a data integrity verification scheme, which supports block-level, file-level, and replica-level auditing. Meanwhile, their scheme supports dynamic data updates. Zhou et al. [29] studied the problem of data integrity verification in cloud-based electronic medical storage scenarios. Subsequently, the multicopy Merkle hash tree was adopted to present a concrete data integrity auditing scheme. Although the above solutions can support cloud data integrity verification, they cannot support data transfer between two different cloud servers.
Yu et al. [30] presented the first method to simultaneously support outsourced data integrity checking and secure transfer. Wang et al. [31] adopted PDP and homomorphic technology to complete cloud data transfer and integrity auditing. Yang et al. [10] utilized blockchain to present an outsourced data integrity auditing protocol supporting reliable data transfer, in which the data blocks are stored in a consortium blockchain. Xue et al. [32] utilized PDP and the rank-based Merkle hash tree (RMHT) to achieve cloud data transfer and integrity checking. Nevertheless, Liu et al. [33] found a serious security flaw in Xue et al.'s solution [32]. That is, the dishonest cloud server can successfully forge a random block and generate a related block tag that passes the auditing. They then proposed an improved scheme to overcome this problem. Meanwhile, their solution reduces the computational overhead of the data integrity checking phase.
Wang et al. [34] put forward the notion of PDP with outsourced data transfer (DT-PDP). Subsequently, they utilized bilinear pairing to design a concrete DT-PDP scheme. Yang et al. [35] utilized the counting Bloom filter (CBF) and hash functions to deal with the challenge of cloud data transfer with integrity verification. Ni et al. [36] designed a scheme to move outsourced data securely, in which they utilized PDP to achieve cloud data integrity auditing. Yang et al. [37] made use of vector commitment (VC) to present a provable data migration scheme for cloud storage. They achieved cloud data integrity auditing by utilizing the position binding of VC. Although the above solutions can realize data transfer and integrity auditing, they either rely on a TPA or contain complex calculations/protocols.

Organization.
Next, we introduce the organization of the rest of this paper. Section 2 describes some preliminaries, which are the building blocks of our presented scheme. Section 3 describes the system framework, security challenges, and security goals. Then, the concrete scheme is presented in Section 4. The scheme analysis and performance comparison are offered in Section 5 and Section 6, respectively. Finally, this paper is briefly concluded in Section 7.

Merkle Sum Hash Tree
The primitive of the Merkle sum hash tree (MSHT) was first put forward by Miao et al. [38]. Generally speaking, the MSHT can be seen as an extension of the traditional Merkle hash tree (MHT). There are two main differences between the MSHT and the MHT. On the one hand, every leaf node of the MHT maintains merely one data item, whereas the MSHT can store many data items in each leaf node, as demonstrated in Figure 1. As a result, the height of the MSHT does not grow without bound with the total number of data items; in this way, the MSHT deals with the challenge that the growth of data items leads to a fast increase in the height of the MHT. On the other hand, when computing the hash value of a leaf node of the MSHT, the data items and the rank are taken as inputs; here, the rank is the number of data items this leaf node maintains. In each nonleaf node, the inputs consist of the concatenation of its left child and right child and the rank of this nonleaf node [39]. Finally, a digital signature on the Merkle root is generated by a public key signature scheme.
Like the MHT, the MSHT is able to verify data integrity with low computational overhead and communication cost. Specifically, the verifier utilizes the data items maintained by a leaf node and its related auxiliary authentication information (AAI) to rebuild the Merkle root h_11′. Here, the AAI contains all the hash values of the sibling nodes on the path from the authenticated leaf to the Merkle root. Then, the verifier compares h_11 with h_11′ and verifies the signature. If and only if the signature is valid and the equation h_11 = h_11′ holds, the verifier concludes that the data blocks are intact.
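To make the construction concrete, the following minimal Python sketch, written under simplifying assumptions (a power-of-two number of leaves, SHA-256 as the hash, the root signature omitted, and helper names of our own choosing), builds a rank-aware tree in the MSHT style and verifies one leaf's items against the root via its AAI:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_node(items):
    # A leaf hashes its data items together with its rank (the item count).
    rank = len(items)
    return (H(b"".join(items) + str(rank).encode()), rank)

def parent_node(left, right):
    # A nonleaf node hashes the concatenation of its children plus its rank,
    # taken here as the sum of the child ranks.
    rank = left[1] + right[1]
    return (H(left[0] + right[0] + str(rank).encode()), rank)

def build_msht(leaves):
    # leaves: list of item lists; assumes a power-of-two number of leaves.
    levels = [[leaf_node(items) for items in leaves]]
    while len(levels[-1]) > 1:
        lv = levels[-1]
        levels.append([parent_node(lv[i], lv[i + 1]) for i in range(0, len(lv), 2)])
    return levels

def get_aai(levels, idx):
    # Sibling nodes on the path from leaf idx to the root, with side flags.
    path = []
    for lv in levels[:-1]:
        path.append((lv[idx ^ 1], idx % 2))  # flag 1: this node is a right child
        idx //= 2
    return path

def rebuild_root(items, path):
    node = leaf_node(items)
    for sibling, is_right in path:
        node = parent_node(sibling, node) if is_right else parent_node(node, sibling)
    return node[0]

leaves = [[b"f11", b"f12"], [b"f21"], [b"f31", b"f32", b"f33"], [b"f41"]]
levels = build_msht(leaves)
root = levels[-1][0][0]
assert rebuild_root(leaves[2], get_aai(levels, 2)) == root
```

Note that the AAI here carries the sibling ranks alongside the hashes, so the verifier can recompute each nonleaf rank without seeing the whole tree.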

Problem Statement
In this section, we first establish the system framework. Subsequently, we introduce the security threats and finally identify the security requirements.

System Framework.
In this paper, we study a specific scenario: users change the cloud service providers and move their cloud data from one cloud server to another. Subsequently, the target cloud server audits the transferred data integrity before accepting the data. Hence, the system framework of provable cloud data transfer with efficient integrity auditing consists of three entities: a user, an original cloud server, and a target cloud server, as shown in Figure 2.

User.
In our system framework, the user owns limited resources and thus cannot store large-scale data locally. Hence, to greatly reduce the local cost, the user uploads his large-scale data to the cloud server. Later, the user may change the cloud service providers for some subjective or objective factors and move the outsourced data to the target cloud.

Original Cloud Server.
The original cloud server (called cloud A for short) is an entity that owns almost limitless resources. Hence, the cloud A can provide the user with on-demand data storage services, thus maintaining large-scale data for the user. Later, the cloud A transfers the related data according to the user's commands.

Target Cloud Server.
The target cloud server (called cloud B for short) connects a large number of distributed storage media, network bandwidths, and computational resources to provide the user with on-demand data storage services. Specifically, the cloud B receives the transferred data blocks and verifies their integrity before accepting and maintaining them for the user. Meanwhile, the cloud B returns the data transfer result to the user.
In the system framework, we assume that the two cloud servers will never collude to cheat the user. This assumption is reasonable for the following two reasons. Firstly, if the two clouds colluded, they could be seen as one entity, and the data transfer between them would be meaningless. Secondly, the two clouds belong to two different enterprises and are competitors. For economic interests, they will not be on the same side to cheat the user. In addition, we assume that the cloud B will never deliberately interrupt the data transfer or maliciously slander the cloud A. This assumption is rational since the cloud B aims to offer data storage services to the user. This means that the cloud B tries to favourably complete the data transfer, rather than maliciously destroying the transferred data to slander the cloud A. Meanwhile, the cloud B cannot obtain any economic interests from slandering the cloud A since the user has already terminated the cooperation with the cloud A and stopped its data storage services.

Security Challenges.
In our presented scheme, we must carefully consider the following four security challenges.

Privacy Leakage.
The data might contain some privacy information. Then, both internal and external attackers might try their best to dig valuable information from the cloud data. Meanwhile, hardware failures and software breakdowns may also lead to privacy leakage. Therefore, privacy leakage is a serious security challenge that should be considered.

Data Pollution.
A selfish cloud administrator might delete some data blocks that are rarely accessed, resulting in data pollution. Meanwhile, the data blocks might be destroyed in the data transfer process. However, the user cannot easily detect the data pollution. Hence, data pollution is another security threat that should be considered.

Dishonest Data Transfer.
To achieve cloud data transfer, the cloud A must consume plenty of network bandwidth and computational resources, which greatly reduces the access speed of other users. To guarantee service quality, the cloud A may not sincerely complete the cloud data transfer operation according to the user's command.

Malicious Slander.
The cloud A and the user do not fully trust each other. Specifically, the cloud A may fail to execute the data transfer operation and then claim that the user never asked to migrate the data. Conversely, the user, without having asked to migrate the data, may slander the cloud A by claiming that it did not sincerely complete the data transfer operation.

Security Goals.
Our presented scheme must achieve the following expected security goals.

Data Confidentiality.
The privacy information should be kept confidential. As a consequence, the file has to be encrypted before being outsourced to the cloud server. Without the related data decryption key, the adversary cannot correctly decrypt the ciphertext.

Data Integrity Auditing.
It should be ensured that the cloud server honestly maintains the outsourced data. If the cloud server arbitrarily destroys the data, the user has the ability to detect the malicious data pollution by efficiently checking the cloud data integrity.

Provable Data Transfer.
It should be ensured that all the data blocks are migrated intact. If the data blocks have been polluted, the cloud B can detect the destruction and refuses to receive the data blocks. Meanwhile, the cloud B informs the user of the failure of the data transfer.

Nonframeability.
Fairness should be guaranteed in the process of data transfer. That is, the honest participant cannot be slandered, and the malicious participant cannot be shielded. If the malicious participant slanders the honest participant, the honest one can prove his innocence by himself.

Our Presented Scheme
Next, we present our new scheme in detail. Generally speaking, the user must register and pass the auditing before employing the cloud storage service. For simplicity, we assume the user has passed the identity auditing and become an authorized customer of the cloud A and the cloud B.

System Setup.
This phase mainly aims to generate some public parameters and related keys. Specifically, the user, the cloud A, and the cloud B generate their public/private key pairs (SK_U, PK_U), (SK_A, PK_A), and (SK_B, PK_B), respectively, and then publish their public keys. Meanwhile, the user names the outsourced file F with the file name n_f, which is unique in the cloud storage system. To prevent data privacy leakage, the user encrypts the file before outsourcing it to the cloud A. That is, the user first computes a data encryption key k = H_1(n_f || SK_U), where H_1(·) is a secure hash function. Then, the user utilizes the key k to efficiently encrypt the file F, i.e., C = Enc_k(F), where Enc is a symmetric encryption algorithm that is IND-CPA secure. Meanwhile, the user divides the ciphertext C into n′ subfiles. Subsequently, the user inserts n − n′ random subfiles into these n′ subfiles at random positions. Next, the user further divides every subfile into s data blocks. Therefore, the outsourced file can be denoted as f = (f_1, f_2, . . ., f_n) = {f_{i,j}}, 1 ≤ i ≤ n, 1 ≤ j ≤ s. Finally, the user sends the large-scale data set f to the cloud A.
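The setup flow above can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: a toy SHA-256 keystream cipher stands in for the IND-CPA symmetric scheme Enc (a real deployment would use AES or similar), and the subfile sizing and random-position insertion are simplified assumptions:

```python
import hashlib, os, random

def H1(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def enc(key: bytes, pt: bytes) -> bytes:
    # Toy SHA-256 keystream cipher standing in for Enc; XOR with a
    # counter-derived keystream, so enc(k, enc(k, m)) == m.
    ks, ctr = b"", 0
    while len(ks) < len(pt):
        ks += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(pt, ks))

def system_setup(sk_u: bytes, n_f: bytes, F: bytes, n_prime: int, n: int, s: int):
    k = H1(n_f + sk_u)                                   # k = H1(n_f || SK_U)
    C = enc(k, F)                                        # C = Enc_k(F)
    size = -(-len(C) // n_prime)                         # ceiling division
    subs = [C[i * size:(i + 1) * size] for i in range(n_prime)]
    for _ in range(n - n_prime):                         # insert random subfiles
        subs.insert(random.randrange(len(subs) + 1), os.urandom(size))
    blk = -(-size // s)                                  # split each subfile into s blocks
    f = [[sub[j * blk:(j + 1) * blk] for j in range(s)] for sub in subs]
    return k, f

k, f = system_setup(b"SK_U", b"report.dat", os.urandom(1000), n_prime=4, n=6, s=3)
```

The random subfiles make it harder for the cloud to tell which leaves carry real ciphertext, while the key derivation k = H1(n_f || SK_U) lets the user regenerate k from the file name alone.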

Data Outsourcing.
The cloud A stores the data and generates a proof, which the user can utilize to audit the data storage result. Specifically, the cloud A builds a MSHT to store the outsourced file f; that is, every leaf node maintains a subfile, i.e., leaf node N_i maintains subfile f_i. Then, the cloud A utilizes its private key SK_A to generate a signature sig_R on the Merkle root H_R. That is, sig_R = Sign_SK_A(H_R), where Sign is the signature generation algorithm of ECDSA. Finally, the cloud A returns H_R and sig_R to the user. Upon receiving H_R and sig_R, the user verifies the data storage result by auditing the storage proof. The user first generates a set of leaf node indices λ that contains n/2 numbers randomly chosen from {1, 2, . . ., n}. Then, the user retrieves the related AAI set {Φ_q}_{q∈λ}. Next, the user utilizes the data blocks stored in {N_q}_{q∈λ} and {Φ_q}_{q∈λ} to rebuild a Merkle root H_R′. Finally, the user compares H_R with H_R′ and audits sig_R. If and only if the equation H_R = H_R′ holds and sig_R is a valid signature on H_R′, the user believes that the outsourced file f has been honestly maintained. Finally, the user deletes the local copy of f.
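A compact sketch of this audit step follows, under simplifying assumptions: one data block per leaf (as in the later comparison), ranks omitted for brevity, and HMAC standing in for the ECDSA signature on the root (so the same key signs and verifies here, unlike the public-key setting of the scheme). The helper names are illustrative, not from the paper:

```python
import hashlib, hmac, random

H = lambda b: hashlib.sha256(b).digest()

def build(leaves):
    # Plain Merkle tree over single-block leaves; power-of-two leaf count.
    levels = [[H(x) for x in leaves]]
    while len(levels[-1]) > 1:
        lv = levels[-1]
        levels.append([H(lv[i] + lv[i + 1]) for i in range(0, len(lv), 2)])
    return levels

def aai(levels, q):
    # Sibling hashes from leaf q up to the root, with side flags.
    path = []
    for lv in levels[:-1]:
        path.append((lv[q ^ 1], q % 2))
        q //= 2
    return path

def rebuild(leaf, path):
    node = H(leaf)
    for sib, is_right in path:
        node = H(sib + node) if is_right else H(node + sib)
    return node

# Cloud A: store f, sign the Merkle root (HMAC stands in for ECDSA).
f = [bytes([i]) * 8 for i in range(8)]          # n = 8 subfiles, one per leaf
levels = build(f)
H_R = levels[-1][0]
sig_R = hmac.new(b"SK_A", H_R, hashlib.sha256).digest()

# User: challenge n/2 randomly chosen leaves and check each against H_R.
lam = random.sample(range(len(f)), len(f) // 2)
assert all(rebuild(f[q], aai(levels, q)) == H_R for q in lam)
assert hmac.compare_digest(sig_R, hmac.new(b"SK_A", H_R, hashlib.sha256).digest())
```

Sampling n/2 leaves keeps the user's cost at O(n/2 · log n) hashes while still catching a cloud that dropped any sampled subfile.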

Data Transfer.
The user changes the cloud service providers and then transfers his data to the cloud B for more suitable data storage services. The user first generates a set of subfile indices φ that identifies the subfiles to be transferred to the cloud B. Then, the user generates a signature sig_t = Sign_SK_U(transfer || n_f || φ || T_t), where T_t represents the timestamp. Further, the user generates a data transfer command TQ = (n_f, φ, T_t, sig_t). Finally, the user sends the transfer command TQ to the cloud A and the cloud B. Meanwhile, the user sends the signature sig_R and the Merkle root H_R to the cloud B. Once it receives TQ, the cloud A checks whether TQ is valid through signature auditing; that is, the cloud A checks whether the signature sig_t is valid. If sig_t is invalid, the cloud A terminates and outputs failure; otherwise, the cloud A migrates the subfiles {f_i}_{i∈φ} to the cloud B, along with the AAI set {Φ_i}_{i∈φ}. At the same time, the cloud A generates a signature sig_A = Sign_SK_A(TQ) and returns it to the user. On receipt of the subfiles {f_i}_{i∈φ}, the cloud B first audits the integrity of the data blocks before accepting them. That is, the cloud B utilizes the received subfiles {f_i}_{i∈φ} and the AAI set {Φ_i}_{i∈φ} to rebuild the MSHT and obtain a new Merkle root H_R*. Then, the cloud B compares H_R* with H_R and verifies the signature sig_R. If H_R = H_R* holds and the signature sig_R is valid, the cloud B concludes that the subfiles {f_i}_{i∈φ} are intact. Finally, the cloud B informs the user of the success of the data transfer.
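A minimal sketch of forming and checking the transfer command TQ; HMAC stands in for the ECDSA signature (so the same key signs and verifies here, unlike the public-key scheme), and the byte-level message encoding is our own assumption:

```python
import hmac, hashlib, time

def sign(sk: bytes, msg: bytes) -> bytes:
    # HMAC stands in for ECDSA signing; in the real scheme the clouds would
    # verify sig_t with the user's public key PK_U instead.
    return hmac.new(sk, msg, hashlib.sha256).digest()

def tq_message(n_f: bytes, phi, T_t: int) -> bytes:
    # "transfer" || n_f || phi || T_t, with a simple unambiguous encoding.
    return (b"transfer|" + n_f + b"|" +
            ",".join(map(str, sorted(phi))).encode() + b"|" + str(T_t).encode())

def make_TQ(sk_u: bytes, n_f: bytes, phi):
    T_t = int(time.time())                       # timestamp T_t
    sig_t = sign(sk_u, tq_message(n_f, phi, T_t))
    return (n_f, tuple(sorted(phi)), T_t, sig_t)  # TQ = (n_f, phi, T_t, sig_t)

def check_TQ(sk: bytes, TQ) -> bool:
    n_f, phi, T_t, sig_t = TQ
    return hmac.compare_digest(sig_t, sign(sk, tq_message(n_f, phi, T_t)))

TQ = make_TQ(b"SK_U", b"report.dat", {3, 1, 5})
assert check_TQ(b"SK_U", TQ)
```

Because TQ binds the file name, index set, and timestamp under sig_t, neither cloud can replay an old command against a different file or subfile set without invalidating the signature check.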

Scheme Analysis
We first analyse the security of our presented scheme. Subsequently, we offer a theoretical computational complexity analysis and an approximate efficiency comparison through numeric analysis.

Security Proof.
We provide the formal security proof to show that our presented scheme can meet all of the expected security requirements.

Wireless Communications and Mobile Computing
Theorem 1. Our scheme can achieve data confidentiality.
Proof. In our presented scheme, the cloud data are encrypted by the user through a symmetric encryption algorithm before uploading. The security of the cloud data is ensured by the chosen encryption algorithm. Hence, without the related decryption key, the ciphertext remains IND-CPA secure. The detailed security proof process is similar to that of scheme [40], so we omit it here. Hence, if an attacker does not own the related decryption key, he cannot correctly decrypt the ciphertext. That is, our presented scheme guarantees that no one other than the user can correctly decrypt the ciphertext. Therefore, the cloud data confidentiality is guaranteed. □ Theorem 2. Our scheme can achieve data integrity auditing.
Proof. To realize data integrity verification, the user rebuilds the MSHT and verifies the signature. Specifically, assume that the user prefers to audit the integrity of the data blocks stored in the leaf node N_k. The user first receives these data blocks (i.e., cloud data blocks f_{k,1}, f_{k,2}, . . ., f_{k,s}, where 1 ≤ k ≤ n). Meanwhile, the user obtains the related auxiliary authentication information Φ_k. Then, the user utilizes f_{k,1}, f_{k,2}, . . ., f_{k,s} and Φ_k to rebuild the Merkle root H_r and compares it with H_R. In the meantime, the user audits the validity of the signature sig_R. If the data blocks f_{k,1}, f_{k,2}, . . ., f_{k,s} are maliciously polluted, the cloud server cannot successfully forge new data blocks f_{k,1}′, f_{k,2}′, . . ., f_{k,s}′ that make the equation H_r = H_R hold. Therefore, when H_r = H_R holds and sig_R is valid, the user is convinced that the data blocks f_{k,1}, f_{k,2}, . . ., f_{k,s} are intact.
That is to say, our presented scheme can achieve cloud data integrity auditing. □ Theorem 3. Our scheme can achieve provable data transfer.
Proof. In our presented scheme, the user sends the cloud data transfer command to both clouds (i.e., cloud A and cloud B). Hence, if the cloud A arbitrarily transfers the cloud data without the user's command, the cloud B can detect the arbitrary data transfer. Further, the cloud B refuses to accept the transferred cloud data and informs the user of the malicious data transfer. Conversely, if the user indeed prefers to migrate the cloud data, the cloud A transfers the data to the cloud B. Then, the cloud B audits the integrity of the received data blocks before accepting them. If and only if the cloud data blocks are not destroyed, the cloud B accepts them and informs the user of the data transfer result. Note that the cloud B tries its best to favourably complete the data transfer, rather than destroying the transferred data to slander the cloud A maliciously. Meanwhile, the cloud B never colludes with the cloud A to cheat the user. Hence, the data transfer result returned by the cloud B is trusted. Further, our presented scheme can achieve provable cloud data transfer. □ Theorem 4. Our scheme can achieve nonframeability.
Proof. We analyse the nonframeability from the following two aspects.
Dishonest User. If the user is dishonest, he may slander the cloud A maliciously. Specifically, the user has asked the cloud A to migrate the outsourced data blocks. Nevertheless, the user claims that the cloud A maliciously interrupted the storage services and arbitrarily migrated the related data blocks to the cloud B. In this scenario, the cloud A can show the data transfer command TQ = (n_f, φ, T_t, sig_t) sent by the user.
Note that the data transfer command TQ contains a signature sig_t. At the same time, nobody except the user can compute it without the private key SK_U. In other words, the data transfer command TQ can only be generated by the user, and nobody else can forge it successfully. Therefore, the data transfer command TQ can be seen as a proof demonstrating that the user has asked the cloud A to migrate the data blocks to the cloud B. Further, the cloud A can prove his innocence by demonstrating the data transfer command TQ. As a result, the user cannot successfully slander the cloud A.
Malicious Cloud A. Similarly, the cloud A may also slander the user maliciously. Specifically, the user has asked to transfer the data blocks to the cloud B; however, the cloud A did not sincerely carry out the data transfer command. When the dishonest acts of the cloud A are detected by the user, the cloud A claims that the user never required the cloud data to be migrated. In this setting, the user is able to present the signature sig_A sent by the cloud A. Note that the signature sig_A is the response to the data transfer command of the user. Meanwhile, nobody except the cloud A can compute the signature sig_A without the private key SK_A. Therefore, sig_A can be seen as a proof demonstrating that the cloud A has received the data transfer command TQ from the user and has responded to it. Further, the malicious cloud A cannot successfully deny the data transfer command and maliciously slander the user.

Numeric Analysis and Efficiency Comparison.
Next, we compare the theoretical computational complexity of our presented scheme with that of the existing scheme [35], which achieves similar functionalities. For simplicity, we ignore the communication cost and only consider the main computational cost in each phase. Moreover, we first define some symbols that will be utilized in the comparison.
Specifically, we denote by ε the symmetric encryption operation and by H the operation of computing a hash value. Meanwhile, we adopt the symbols S and V to represent the signature generation operation and the signature verification operation, respectively. Moreover, the symbols n and l are used to denote the total number of cloud data blocks and the number of transferred cloud data blocks, respectively. In our comparison, we assume that each leaf node of the MSHT maintains one data block. The comparison is described in Table 1.
From Table 1, we can easily obtain the following two findings. On the one hand, the existing scheme [35] needs to perform more hash calculations than our presented scheme to achieve data encryption. In this phase, both schemes need to compute a hash value to generate the data encryption key and then utilize it to encrypt the outsourced file. Nevertheless, the existing scheme [35] needs n more hash calculations to compute the hash value of every data block. On the other hand, our presented scheme costs much less computational overhead than the existing scheme [35] in the data outsourcing and data transfer phases. As a result, our presented scheme is more attractive than solution [35].

Performance Evaluation
We implement our presented scheme and offer the performance evaluation. Specifically, all of the simulation experiments are carried out on a desktop equipped with a Linux operating system, an Intel(R) Core(TM) i7-7700 processor running at 3.6 GHz, and 8 GB RAM. Meanwhile, we implement the related cryptographic algorithms with the OpenSSL library and the pairing-based cryptography library. For simplicity, we ignore the communication overhead and merely measure the time overhead of performing the main operations in every phase.

Computational Overhead of Data Encryption.
The main computations are the data encryption key generation and the data encryption operation, which are closely related to the size of the encrypted file. Therefore, in this simulation experiment, we increase the size of the encrypted file from 1 MB to 10 MB in steps of 1 MB. In the meantime, we assume that the total number of data blocks is 20,000. Subsequently, we measure the approximate time overhead, which is presented in Figure 3.
From Figure 3, we can readily discover that the overhead rises approximately linearly with the size of the encrypted file. In the meantime, the growth rates of the time cost in the two schemes are almost the same. Nevertheless, the computational overhead of our presented solution is obviously less than that of scheme [35]. For instance, when the size of the encrypted file is 5 MB, our presented scheme costs approximately 12.6 milliseconds, while scheme [35] costs more than 13.3 milliseconds. Predictably, in the data encryption phase, scheme [35] will cost more time than our presented scheme for the same size of encrypted file. Moreover, note that for every file, the data encryption phase is performed only once, and it can be performed offline by the user. Therefore, in this process, our presented solution is more efficient.

Computational Overhead of Data Outsourcing.
The major computations are hash calculations, signature generation, and auditing calculations. To present the comparison more clearly, we divide this phase into two portions: data storage and data storage result verification.
In the data storage phase, the major time cost comes from generating the data storage proof, which is closely related to the total number of cloud data blocks. Therefore, in the simulation experiment, we increase the number of cloud data blocks from 200 to 2,000 in steps of 200. We then measure the time cost, which is presented in Figure 4.
Two findings are evident from Figure 4. First, the overhead of our presented scheme is clearly lower than that of scheme [35]. For instance, when the number of cloud data blocks is 2,000, scheme [35] requires more than 13.0 ms, whereas our presented scheme requires only about 1.5 ms. Second, the time overheads of both schemes rise approximately linearly with the total number of cloud data blocks, but the overhead of our presented scheme grows much more slowly than that of scheme [35]. From these two findings, our presented scheme always costs less time than scheme [35] for the same number of cloud data blocks, and is therefore more efficient at realizing data storage.
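As background for how a tree-based storage proof scales with the number of blocks, here is a simplified Python sketch of a Merkle sum hash tree root computation, where each node stores a hash together with a size sum. It mirrors only the tree-over-blocks structure; the signatures and auditing calculations of the actual scheme are omitted, and the unpaired-node promotion rule is our own illustrative choice.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def msht_root(blocks):
    # Each node is a (hash, size-sum) pair; a parent hashes both
    # children's hashes together with their encoded sums.
    # Simplified MSHT sketch, not the paper's exact construction.
    level = [(h(b), len(b)) for b in blocks]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            (hl, sl), (hr, sr) = level[i], level[i + 1]
            node_hash = h(hl + sl.to_bytes(8, "big") + hr + sr.to_bytes(8, "big"))
            nxt.append((node_hash, sl + sr))
        if len(level) % 2:
            nxt.append(level[-1])  # promote an unpaired node unchanged
        level = nxt
    return level[0]

blocks = [b"block-%d" % i for i in range(8)]
root_hash, total_size = msht_root(blocks)
print(total_size)  # equals the sum of all block lengths
```

Building the root touches every block once, which matches the roughly linear growth of proof-generation time with the number of blocks.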
Similarly, in the data storage result verification process, the major computational cost comes from auditing the correctness of the data storage proof, which is also closely related to the number of cloud data blocks. Consequently, we again increase the total number of cloud data blocks from 200 to 2,000 in steps of 200 in the simulation experiment. We then measure the approximate time overhead, as shown in Figure 5.
We can see directly that the time overheads of both schemes rise approximately linearly with the total number of cloud data blocks, but the overhead of scheme [35] grows faster than that of our presented scheme. At the same time, our presented scheme requires much less time than scheme [35]. For instance, when the number of cloud data blocks is 2,000, scheme [35] requires more than 6.7 ms, whereas our presented scheme requires only about 4.0 ms. Therefore, for the same number of cloud data blocks, the time overhead of our presented scheme will be lower than that of scheme [35]. Moreover, note that this process is performed by the user, who is resource-constrained; hence, it strongly affects the overall efficiency of the two schemes. As a result, our presented scheme has a clear efficiency advantage over scheme [35].
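In tree-based auditing, verifying one block costs a single authentication path of logarithmic length. The following self-contained Python sketch generates and checks such paths over a plain Merkle hash tree (without the sum fields, an assumption made here for brevity; `build_levels`, `auth_path`, and `verify` are our own illustrative helpers).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(blocks):
    # All tree levels, leaves first; an unpaired node is promoted upward.
    levels = [[h(b) for b in blocks]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        nxt = [h(cur[i] + cur[i + 1]) for i in range(0, len(cur) - 1, 2)]
        if len(cur) % 2:
            nxt.append(cur[-1])
        levels.append(nxt)
    return levels

def auth_path(levels, index):
    # Sibling hashes from leaf to root; None marks a promoted node.
    path = []
    for level in levels[:-1]:
        sib = index ^ 1
        path.append((level[sib], sib < index) if sib < len(level) else None)
        index //= 2
    return path

def verify(root, block, path):
    node = h(block)
    for step in path:
        if step is None:
            continue  # promoted node is carried up unchanged
        sibling, sibling_on_left = step
        node = h(sibling + node) if sibling_on_left else h(node + sibling)
    return node == root
```

Each verification recomputes one hash per tree level, so auditing n sampled blocks costs O(n log m) hashes for a tree over m blocks, consistent with the near-linear growth seen in Figure 5.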
As a whole, according to the time overhead comparison presented in Figures 4 and 5, our presented scheme is clearly more efficient than scheme [35] in the data outsourcing phase.

Computational Overhead of Data Transfer.
In this simulation experiment, the major time overhead comes from verifying the integrity of the transferred data, which is closely related to the number of transferred cloud data blocks. Because of network instability, we ignore the communication overhead. Therefore, we increase the number of transferred cloud data blocks from 10 to 100 in steps of 10 and measure the time cost of the main operations, which is shown in Figure 6. The time overheads of both schemes rise with the number of transferred cloud data blocks, but the overhead of scheme [35] grows much faster than that of our presented scheme, and our presented scheme needs much less time overall. Consequently, our presented scheme has clear advantages over scheme [35] in the data transfer phase.
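As a highly simplified model of the transfer check, the sketch below verifies k transferred blocks against their stored digests and shows the cost growing with k. The real scheme's proof is pairing-based and more involved; `verify_transferred` is a hypothetical helper introduced only for illustration.

```python
import hashlib
import os
import time

def verify_transferred(blocks, digests):
    # Recompute each transferred block's digest and compare it with
    # the stored one -- a stand-in for the scheme's integrity check.
    return all(hashlib.sha256(b).digest() == d for b, d in zip(blocks, digests))

blocks = [os.urandom(4096) for _ in range(100)]
digests = [hashlib.sha256(b).digest() for b in blocks]
for k in (10, 50, 100):
    t0 = time.perf_counter()
    ok = verify_transferred(blocks[:k], digests[:k])
    elapsed = (time.perf_counter() - t0) * 1000.0
    print(f"{k} blocks: ok={ok}, {elapsed:.3f} ms")
```

One digest recomputation per transferred block gives the linear growth in k that Figure 6 depicts.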
According to the time overhead comparison shown in Figures 3-6, our presented scheme always costs less time than scheme [35] in each phase. Consequently, the total time overhead of our presented scheme is much lower than that of scheme [35]. As a result, our presented scheme has advantages not only in the efficiency of every phase but also in overall efficiency.

Conclusions and Future Works
In this paper, we studied a severe challenge, namely, provable cloud data transfer with efficient data integrity auditing. To deal with this challenging problem, we adopted MSHT to present a new solution. Our presented scheme enables the user to transfer cloud data from one cloud server to another and then check the integrity of the transferred data on the target cloud server. By taking advantage of MSHT, our presented scheme does not rely on a TPA and thus avoids the service interruption and privacy leakage caused by a single point of failure at the TPA. Moreover, the formal security analysis proves that our presented scheme meets all of the expected security requirements. Finally, we implemented our presented scheme and offered a performance evaluation, which intuitively demonstrates the high efficiency and practicality of our presented solution.
Note that all of the existing outsourced data transfer solutions only consider scenarios in which the outsourced data blocks are transferred from one cloud server to another, and are thus not suitable for settings in which the outsourced data blocks are transferred to two or more cloud servers. However, the user might want to transfer the outsourced data blocks to more than one cloud server. Therefore, we will study the problem of secure data transfer with multiple target cloud servers in the future.
Data Availability
The data used to support the findings of this study are included in the paper; hence, no further data collection is needed.

Conflicts of Interest
The authors declare that they have no conflicts of interest.