SCCAF: A Secure and Compliant Continuous Assessment Framework in Cloud-Based IoT Context

The Internet of Things (IoT) offers a wide variety of benefits to our daily lives, ranging from smart wearable devices to industrial systems. However, it also brings well-known security and compliance concerns, especially at the physical layer. In addition, because numerous IoT architectures have been developed and deployed on top of the cloud, the security and compliance of IoT depend heavily on the cloud. In this paper, a secure and compliant continuous assessment framework (SCCAF) is proposed to evaluate the security and compliance levels of cloud services over their life-cycle. The SCCAF helps cloud service customers select an optimal cloud service provider (CSP) that satisfies their security requirements. Moreover, it also enables cloud service customers to evaluate the compliance of the selected CSP while using its cloud services. To evaluate the performance and availability of SCCAF, we carry out a series of experiments with a case study and real-world scenario datasets. Experimental results show that SCCAF can assess the security and compliance of CSPs efficiently and effectively.


Introduction
The rise of IoT has led to constant, universal connections between people and things (e.g., sensors or mobile devices), and it plays a remarkable role in all aspects of our daily lives [1,2]. Furthermore, combined with preeminent technologies such as the cloud, the cloud-based IoT architecture is becoming a trend in the IoT market. However, as with many new technologies, there are several challenges when it comes to achieving success in cloud-based IoT adoption [3][4][5]. Two of the biggest concerns in the cloud-based IoT context are security (especially physical layer security) and compliance (e.g., lack of customer control mechanisms, security assurance and service level agreement (SLA) guarantees, and dynamic change in IoT devices) [6,7]. Nevertheless, there is little literature on IoT security and compliance assessment [8]. The existing literature on IoT security focuses mainly on wireless networks [2,9,10]. Therefore, a premise of securing the cloud-based IoT context is to evaluate the security and compliance of cloud services.
Due to the massive number of CSPs offering similar kinds of services in the cloud market, selecting an optimal cloud service has become a tricky challenge. Moreover, from the perspective of cloud service customers (CSCs), it is becoming more and more important to identify which is the truly optimal cloud service provider (CSP). A truly optimal CSP is supposed to satisfy the security requirements of CSCs in the process of deploying the cloud service and to continuously satisfy the compliance requirements of the SLAs while the cloud service is operating. Before CSCs start to use cloud services, the major challenge is to select a secure CSP among various CSPs based on their security requirements. After that, the main challenge for CSCs is to ensure the conformance between the actual quality of service (QoS) of the cloud services and the SLAs claimed by the selected CSP. Intuitively, security and compliance issues are equally important to CSCs throughout the entire process of using the cloud service.
However, the actual situation is that CSCs frequently concentrate on either security or compliance in different periods of using a cloud service. Security is the primary concern for CSCs while selecting a CSP and the main obstacle to promoting cloud computing. Nonetheless, research results usually target selecting a secure CSP and overlook the compliance issue [11][12][13][14][15][16]. On the other hand, compliance is the critical concern for CSCs while using cloud services, but the significance of security and the feasibility of assessment tend to be ignored [17][18][19]. For instance, it is almost impossible for CSCs to evaluate the trustworthiness of cloud services by objective and direct means (e.g., QoS monitoring). This is because, before signing a service contract with a CSC, a CSP will neither provide the technical details of cloud services nor open an interface for CSCs to monitor service QoS, for reasons of confidentiality, security, and competition.
As mentioned above, little literature takes the integration of security and compliance assessment into consideration when evaluating cloud services. To the best of our knowledge, no existing continuous assessment framework concatenates the security and compliance of cloud services from a comprehensive perspective. Moreover, the compliance issues of cloud services are urgent and worth studying, especially during the use of cloud services.
In this paper, we propose a novel secure and compliant continuous assessment framework (SCCAF), which evaluates the security level and the compliance level of cloud services. Additionally, a new concept of cloud service life-cycle (CSL) is proposed and elaborated. The CSL enables CSCs to clearly understand the objects that need to be considered when adopting cloud services. Accordingly, SCCAF offers CSCs more flexibility to evaluate the cloud on the basis of their security and compliance requirements.
In a nutshell, the main contributions in this paper are summarized as follows.
(i) We propose a new concept of cloud service life-cycle, which enables CSCs to clearly understand the objectives that need to be considered at each phase of the adoption of cloud services.
(ii) The SCCAF, a novel secure and compliant continuous assessment framework based on the CSL, is proposed. It combines security and compliance assessment methods as mutually complementary components. Hence, the SCCAF enables CSCs to continuously evaluate the cloud service provided by CSPs over the full CSL.
(iii) To illustrate its efficiency and effectiveness, we conduct comprehensive experiments that validate the proposed SCCAF along two dimensions. The results show that SCCAF achieves good performance and availability.
The rest of the paper is organized as follows. Section 2 surveys related work. Section 3 introduces the proposed concept of CSL and Section 4 elaborates the SCCAF. Section 5 presents the experimental results and their analyses for validating our proposed assessment method. Section 6 concludes this paper with directions for future work.

Related Work
A variety of recent research works target selecting an optimal CSP by evaluating cloud services along the dimensions of security and trust. One example is Luna et al. [20], who presented a security metrics framework for CSP security assessment. In [21], a methodology to quantitatively benchmark CSP security SLAs with respect to the security requirements of a CSC is presented, based on the reference evaluation methodology [22]. Paper [14] presents a methodology for quantifying and evaluating security threats, which weighs each security threat to determine which security controls are required to meet the users' needs in a security SLA, and in [23] a new technique for conducting quantitative and qualitative analysis of the security level provided by CSPs is proposed. Both works are based on the analytic hierarchy process (AHP) [24]. In [11], two evaluation techniques are proposed to conduct quantitative assessment and analysis of the security SLA-based security level provided by CSPs with respect to a set of CSC security requirements. In [12], a novel cloud security assessment technique is presented, a simpler and more effective approach that can be deployed for the needed online real-time assessment and offers both accuracy and high computational efficiency. Reference [13] presents a methodology for the evaluation and selection of cloud services based on a multicriteria analysis [25] process using a set of evaluation criteria and quantitative metrics. However, the security assessment methods mentioned above are all about selecting a CSP before CSCs use cloud services, and none of them addresses how to determine, after selecting a CSP, the compliance of the SLA claimed by that CSP at cloud service runtime.
Besides security assessment, assessment techniques for selecting a CSP have also focused on the evaluation of trust, which has captured researchers' attention in recent years. Trust-based assessment methods are mainly divided into two categories: subjective assessment and objective assessment. From the subjective perspective, [26] presents a distributed framework for determining the trustworthiness of federated cloud entities, which uses a reputation manager to capture and store the behavior of cloud entities. In [27], a trust management model is proposed that comprises an SLA agent, a cloud service directory, cloud providers, and cloud consumers to select the most reliable cloud providers by managing trust relationships based on three types of information (local experience of consumers with providers, opinions of others, and reports provided by the SLA agent). Paper [28] proposed a model of reputation-enhanced QoS-based web services discovery that combines an augmented UDDI registry to publish the QoS information and a reputation manager to assign reputation scores to the services based on customer feedback ratings on their performance. In [29], a trust management approach named ServiceTrust is presented; it takes raters' credibility into consideration by combining a user's needs and other personal ratings to estimate a CSP's trust value in support of reputation-oriented service selection. However, subjective assessment methods are difficult to quantify, which usually makes the evaluation results less accurate and also hinders their practical adoption.
Many objective assessment methods for service selection have also been proposed, such as those based on monitored QoS data [30][31][32][33][34][35]. These works mainly focus on determining the most satisfactory services according to users' requirements and preferences relative to QoS. However, not all of these works apply to the cloud environment. Moreover, QoS data of services is hard to acquire [36] and might not be reliable [34]. To select a satisfactory CSP from an objective perspective, [37] proposed a ranking technique that utilizes performance data to measure various QoS attributes and evaluates the relative ranking of cloud providers. In [38], a QoS ranking prediction framework is presented for cloud services that takes advantage of the past service usage experiences of other consumers and requires no additional invocations of cloud services. The authors of [39,40] propose an automated framework called CloudGenius and a comparative framework named CloudCmp, respectively. The purpose of the former is to automate the decision-making process based on a model and factors specific to web server migration to the cloud. The purpose of the latter is to measure the elastic computing, persistent storage, and networking services offered by a cloud along metrics that directly reflect their impact on the performance of customer applications. Both provide mechanisms to evaluate performance indicators of CSPs in order to help customers pick a cloud that fits their needs.
In addition, there are a few works combining subjective perception and objective measurement to evaluate trustworthiness of cloud services. Reference [19] designs a novel framework named CSTrust for conducting cloud service trustworthiness evaluation by combining QoS prediction and customer satisfaction estimation. Reference [18] proposes a trustworthy selection framework for cloud service selection named TRUSS, which contains an integrated trust evaluation method via combining objective trust assessment and subjective trust assessment. Paper [41] proposes a novel trust evaluation method named UsageQoS for accurately measuring quality of cloud services via leveraging service QoS parameters and user ratings. Although objective assessment methods can yield more accurate results and are easier to implement through existing technical means, they ignore the fact that security is one of the major barriers for adoption of cloud computing and the paramount consideration for CSCs. Moreover, as stated earlier, CSCs are unable to evaluate trustworthiness of cloud services by objective and direct technical means before selecting a CSP to provide cloud services.
As can be seen from the related work discussed above, much of the existing literature evaluates the security or trustworthiness of CSPs merely to select an optimal CSP, but overlooks an important issue: CSCs are concerned about the compliance of the SLAs claimed by the CSP during the use of cloud services. However, there is no comprehensive and continuous assessment framework that combines security and compliance as a complete and complementary attribute to enable CSCs to continuously evaluate CSPs over the full cloud service life-cycle. This paper presents a concept of cloud service life-cycle (CSL) from the CSCs' perspective that enables CSCs to clearly understand the items that need to be considered at each phase of the adoption of cloud services. Then a secure and compliant continuous assessment framework is proposed based on the CSL, which concatenates security and compliance assessment methods. Such a framework not only enables CSCs to select a secure CSP from numerous candidates from a security perspective, but also allows CSCs to evaluate the compliance of the SLAs claimed by the selected CSP at cloud service runtime. The compliance assessment result can help CSCs make further decisions (e.g., continue to use the service, change CSP, seek remedies and claims, or even terminate the cloud service).

Cloud Service Life-Cycle
In this paper, the cloud service life-cycle (CSL) is articulated as an assumption or an expectation that a CSC will experience a continuous and integral process when adopting a cloud service. This assumption or expectation is based upon a series of more specific phases, which form the components of the CSL. As shown in Figure 1, the CSL, an extension of our previous work [43], comprises six phases: initial preparation, alternative selection, solution deployment, continuous monitoring, decision making, and termination of service. The detailed phases of the CSL are described below.
(1) Initial Preparation. Initial preparation is the first phase for all potential CSCs that are eager to leverage the benefits of cloud services. Customers should analyze the benefits of using cloud computing services based on their data and business types and determine whether cloud computing services are suitable for them. At the same time, they should determine the cloud capability types and cloud service categories in accordance with their data and business characteristics. As mentioned above, security is the primary concern for customers adopting cloud services. Therefore, the CSC should conduct a security requirements analysis according to the key characteristics and potential security threats of cloud computing. The main objective of this phase is for the CSC to define the security metrics in accordance with the security analysis results, in preparation for selecting the optimal CSP in the next phase.
(2) Alternative Selection. In the alternative selection phase, the CSC should select an appropriate CSP in accordance with its security requirements and the security capability of the cloud services. Owing to the competition among various CSPs, a large number of CSPs offer similar kinds of cloud services and security provisions. As a result, it has become a challenging task for the CSC to identify which service is the most appropriate for it. Hence, to ensure data and business security, the CSC should take advantage of the security metrics defined in the previous phase and employ an effective security assessment method to evaluate the security level of CSPs. Customers can then select the optimal CSP to provide the cloud service based on the security assessment result. The security assessment method involved in this phase is a focus of this paper and will be elaborated later.
(3) Solution Deployment. To ensure robust and efficient use of cloud services, the primary purpose of the CSC in this phase is to negotiate with the selected CSP to set up the cloud SLAs, which stipulate the QoS of the cloud services offered by the CSP. Meanwhile, the CSC should also negotiate with the CSP on terms covering remedies and claims, change of service provider, and termination of service in case the CSP violates the agreed QoS during the operation of the cloud services. Therefore, the CSC should define compliance metrics based on an analysis of its actual business requirements and the agreed SLAs, which will be used for compliance evaluation of the cloud services in the next phase. Finally, the CSC should confirm the deployment plan developed by the selected CSP and entrust the CSP to deploy the cloud service.
(4) Continuous Monitoring. The primary purpose of the CSC in the continuous monitoring phase is to ensure that the actual QoS at cloud service runtime conforms with the QoS agreed in the SLAs. In light of planned and unplanned changes that occur in the cloud environment over time, the state of cloud services is not always maintained. Moreover, the CSP may seek to maximize its benefit at the expense of service quality; that is, the CSP may not always fully comply with the QoS in the SLAs, especially when the CSC is unaware that the CSP has reduced cloud computing resources (e.g., computing, storage, and network). Therefore, the CSC should continuously monitor and record cloud service quality while using the cloud services and evaluate the conformance of the cloud services through an effective compliance assessment method. The compliance assessment method involved in this phase is a focus of this paper and will be elaborated later.
(5) Decision Making. In this phase, the compliance assessment results from the previous phase help the CSC make decisions. The CSC can determine which measures to take according to its tolerance of the compliance level of the cloud services, which affects its business performance (e.g., reliability and availability). In other words, the CSC can decide on corresponding measures according to the extent of SLA violations. For example, if the cloud services are compliant, namely, the monitored QoS of the cloud services complies with the agreed QoS of the SLAs, the CSC can decide to continue using and evaluating the cloud services. If the cloud services are not compliant, the CSC can decide to change the CSP or seek remedies and claims from the CSP according to the violation extent, as shown in Figure 1. In the worst case, the CSC can choose to terminate the cloud service and exit. The establishment of compliance rules and corresponding measures will be elaborated later.
(6) Termination of Service. The termination of the cloud service deals with the exit process, where the use of a cloud service is terminated. Once the CSC chooses to exit the cloud service, it needs to focus on addressing specific termination issues, including the exit process and the handling of all classes of data related to the cloud service. For instance, the CSC should be able to retrieve its cloud service data and application artifacts. In the meantime, the CSP needs to delete all of the CSC's data. Moreover, the CSC expects that the CSP will not retain any materials belonging to the CSC after an agreed period. After exiting the cloud service, the CSC can repeat the "Initial Preparation" phase when considering using a cloud service again. At the end of the exit process, the CSP should provide the CSC with notification that the process is complete.

The Proposed Framework
In this section, the SCCAF, a CSL-based continuous assessment framework, is proposed. This framework can be divided into three main processes, encompassing (1) security assessment, (2) compliance assessment, and (3) taking measures. As shown in Figure 2, the SCCAF includes the following steps.
(1) Security Assessment. The CSC evaluates the security level of the alternatives (CSPs) according to the conformance between the security provisions claimed by the CSPs and the security metrics (e.g., facility security, risk management, and information security) defined by the CSC. Then, the CSC selects the optimal CSP to deploy the cloud service based on the security assessment result, as shown in Figure 3. The security assessment is implemented in the initial preparation and alternative selection phases of the CSL. (2) Compliance Assessment. After the security assessment is completed, the CSC can select the optimal (highest security level) CSP to provide the cloud service. During the use of the cloud service, the CSC evaluates the compliance level of the cloud services based on the conformance between the claimed cloud SLAs and the actual QoS, as shown in Figure 4. The premise for compliance assessment is that the CSC defines the compliance metrics in the solution deployment phase of the CSL, namely, the specific requirements of the SLAs. The compliance assessment is implemented in the continuous monitoring phase of the CSL. (3) Taking Measures. The CSC establishes compliance tolerance rules based on the impact of the compliance level on its actual business. Then, the CSC can take corresponding measures (e.g., change CSP, continue to use, and seek remedies and claims) according to the compliance level of the cloud services, as shown in Figure 5. In the worst case, the CSC may consider terminating the cloud service. The process of taking measures is implemented in the decision-making phase of the CSL.

Security Assessment.
In the security assessment process, the CSC defines security metrics, which capture the CSC's security requirements related to its data and business. The CSPs determine and describe the conformance between the security metrics and their security provisions. Based on this conformance, the proposed security assessment approach evaluates the security level of the CSPs and ranks them according to the evaluation result. The quantitative security level of the CSPs is the primary objective of the security assessment process. For convenience, the key notations used in security assessment are given in Table 1. Specifically, the process includes four steps as follows.
(1) Security Metrics Definition. The CSC defines a set of security metrics and provides it to the CSPs. For instance, the CSC can select security controls from the cloud controls matrix (CCM) [44] or the consensus assessments initiative questionnaire (CAIQ) [42] according to its security requirements. Then, the CSPs measure their security provisions against the security metrics and submit the measurement results to the CSC in the form of deliverables. The first round is thus to collect information with respect to the security provisions of the CSPs (the deliverables) in a uniform format. The definition of security metrics is implemented in the initial preparation phase of the CSL. (2) Security Metrics Quantification. The second round is to quantify the security metrics (deliverables) of each CSP for convenient comparison of their security capabilities. The quantification approach depends on the comparison types of the different security metrics. In this step, we can employ the quantification approach proposed in [11,23], which quantifies security metrics into two categories: Boolean (e.g., a YES/NO measurement result representing conformance or nonconformance to the security metric) and numeric (e.g., a cryptographic key length representing the extent of conformance to the security metric). The quantitative deliverables are used as the input dataset X_{m×n} of the security assessment process, where m is the number of alternatives (CSPs) and n is the number of security metrics. The quantification of security metrics is also implemented in the initial preparation phase of the CSL.
(3) Weights Assignment. After quantifying the security metrics, the CSC can determine their weights by employing the AHP method [24]. In this step, the CSC assigns a scale of relative importance from 1 to 9 (where 9 represents extreme importance and 1 equal importance) to each pair of security metrics. These numerical judgments are used to construct a pairwise comparison matrix according to the standard AHP method, and the consistency of this matrix needs to be validated. Then, the weight vector can be obtained by calculating the eigenvector corresponding to the maximum eigenvalue of the pairwise comparison matrix [45]. The weight vector of the security metrics, denoted as W = (w_1, w_2, ..., w_n), can be obtained by (1), which holds that ∑_{j=1}^{n} w_j = 1.
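To make this step concrete, the following Python sketch (illustrative only, not the authors' implementation) derives AHP weights for a hypothetical three-metric pairwise comparison matrix via power iteration on the principal eigenvector, and checks Saaty's consistency ratio:

```python
def ahp_weights(A, iters=100):
    """Derive metric weights from a pairwise comparison matrix A by power
    iteration on the principal eigenvector (standard AHP).
    Returns (weights, consistency_ratio)."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]          # normalize so weights sum to 1
    # Estimate lambda_max from A w = lambda_max * w.
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)         # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.49)  # Saaty's random index (excerpt)
    return w, ci / ri                       # CR < 0.1 is conventionally acceptable

# Hypothetical judgments on Saaty's 1-9 scale for three security metrics.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(A)
```

The consistency check mirrors the matrix validation the text mentions: if `cr` exceeds 0.1, the CSC would revisit its pairwise judgments.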
(4) Security Level Evaluation. For the given quantitative security metrics X_{m×n} and their weights W, the CSC can employ the TOPSIS method [46] to evaluate the security level of each CSP and compare their security levels in the same context. In this step, a weighted normalized decision matrix V needs to be constructed first by (2).
Then, the ideal solutions of each security metric can be determined by (3) and (4), which include the positive ideal solution A⁺ and the negative ideal solution A⁻.
After that, the separation measures can be calculated by (5) and (6), which represent the geometric distances from the alternatives (CSPs) to the ideal solutions. They include the positive separation S_i⁺ and the negative separation S_i⁻, where i = 1, 2, ..., m, denoting the distance from each alternative (CSP) to the positive and negative ideal solutions, respectively. Next, the relative closeness C_i, representing the degree of conformity between alternative i and the ideal solution, can be obtained by (7), where i = 1, 2, ..., m and 0 ≤ C_i ≤ 1. Then, the CSC can rank the CSPs according to their relative closeness and select the optimal one, whose C_i is closest to 1.
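The TOPSIS evaluation above can be sketched as follows; the decision matrix, weights, and benefit/cost designations are illustrative assumptions, not data from the paper:

```python
import math

def topsis(X, w, benefit):
    """Rank alternatives (rows of X) by relative closeness to the ideal
    solution: normalize, weight, find ideal solutions, measure separations,
    and compute closeness (mirroring the paper's Eqs. (2)-(7))."""
    m, n = len(X), len(X[0])
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[w[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
    cols = [[V[i][j] for i in range(m)] for j in range(n)]
    pos = [max(c) if b else min(c) for c, b in zip(cols, benefit)]  # A+
    neg = [min(c) if b else max(c) for c, b in zip(cols, benefit)]  # A-
    closeness = []
    for i in range(m):
        s_pos = math.sqrt(sum((V[i][j] - pos[j]) ** 2 for j in range(n)))
        s_neg = math.sqrt(sum((V[i][j] - neg[j]) ** 2 for j in range(n)))
        closeness.append(s_neg / (s_pos + s_neg))  # relative closeness C_i
    return closeness

# Three hypothetical CSPs scored on three quantified (benefit-type) metrics.
X = [[0.9, 128, 1.0],
     [0.6, 256, 0.8],
     [0.4,  64, 0.5]]
closeness = topsis(X, w=[0.6, 0.2, 0.2], benefit=[True, True, True])
best = closeness.index(max(closeness))
```

The CSC would pick the alternative whose closeness is largest; an alternative that is worst on every metric coincides with the negative ideal and scores 0.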
To sum up, the security evaluation process is implemented in the alternative selection phase of the CSL. Algorithm 1 illustrates the security assessment process in the SCCAF, and Algorithm 2 demonstrates the procedure of security level evaluation.
After selecting the optimal CSP, the CSC negotiates with it on the details of the cloud SLAs, namely, the specific QoS stipulations. The CSC can define compliance metrics in accordance with the agreed SLAs. The compliance metrics should contain the monitorable and measurable QoS attributes belonging to a specific service and their specific compliance values, namely, the details of the SLAs claimed by the optimal CSP. Such compliance metrics can be exploited to evaluate the compliance level of a cloud service during the period of using cloud services, as shown in Figure 4. Since there is already much literature on the establishment of SLAs, the CSC can employ existing methods to determine cloud SLAs and define compliance metrics. For instance, the methods proposed by ISO/IEC [47,48] can be employed to formulate SLAs, and the methods proposed by the National Institute of Standards and Technology (NIST) [49] and ISO/IEC [50] can be employed to define the compliance metrics. Therefore, the solution deployment phase of the CSL is not the focus of this paper.
According to the compliance metrics, the CSC can evaluate the compliance level of the cloud services by employing the compliance assessment method, whose details we elaborate as follows.

Compliance Assessment.
The compliance assessment process is performed after the security assessment process yields an optimal CSP, who will provide the cloud service to the CSC. In the compliance assessment process, the CSC continuously evaluates the compliance level of the cloud services according to the compliance metrics. Compliance assessment is performed in terms of evaluation periods. The quantitative compliance level of the cloud services is the primary objective of the compliance assessment process, which the CSC can refer to when making decisions. For convenience, the key notations used in compliance assessment are given in Table 2.

Input: set of deliverables D, the number of alternatives (CSPs) m, the number of security metrics n
1: procedure SecurityLevelEvaluation(D, m, n)
2:   Create arrays W_{1×n}, X_{m×n} ← 0
3:   Create vector C
4:   r ← 0
5:   Assign the quantized deliverables D to X
6:   W ← AssignWeights4Metrics(X, n)
7:   C ← ObtainTheOptimalAlternative(X, W, m, n)
8:   r ← the index of max(C)
9:   return r
10: end procedure
Algorithm 2: Security Level Evaluation.
Specifically, it includes five steps as follows.
(1) Data Collection and Preprocessing. The CSC determines a set of evaluation periods T and a monitoring frequency f within each sub-period t (t ∈ T). For a given evaluation period T, the CSC continuously monitors and records the QoS attributes of a specific service, obtaining a monitoring dataset D with respect to T. The first round is to collect and preprocess the monitoring dataset of QoS attributes. For convenience, we take a QoS attribute q (q ∈ Q) as an example to describe the compliance assessment process in detail. For a given QoS attribute q, its monitoring dataset D_q (D_q ⊂ D) can be obtained from the dataset D. Moreover, for a given evaluation period T, the dataset D_q can be divided into smaller datasets D_t (D_t ⊂ D_q) based on t. The datasets D_t are used as the input data of the compliance assessment process to calculate the single conformance v_t of q in each t. The compliance level of q can then be obtained by aggregating the weighted v_t within T.
(2) Compliance Interval Construction. The second round is to construct the compliance interval of q in accordance with its monitoring dataset D_q. First, we can calculate the mean μ and variance σ² of D_q by (8) and (9).
In fact, the actual monitored QoS of an attribute fluctuates around the compliance metric value of the SLAs at cloud service runtime (outages, equipment failures, etc., aside) [36]. Moreover, the fluctuation range of the monitoring data cannot be determined accurately [34]. In addition, there are only limited monitoring data for a QoS attribute, namely, a small sample. Therefore, we assume that the variation of the monitoring data conforms to a t-distribution [51]. Then, the compliance interval [l, u] can be constructed by (10) and (11), where l and u represent the lower and upper bounds of the compliance interval, respectively, α is the confidence level assigned by the CSC, and t_{α/2} can be obtained from a look-up table [51].
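A minimal sketch of this interval construction, assuming a 95% confidence level and using a small excerpt of the t-distribution look-up table the text refers to (the sample data are invented):

```python
import math
import statistics

# t critical values t_{alpha/2, df} for alpha = 0.05 (95% confidence),
# an excerpt of the standard look-up table.
T_CRIT_95 = {4: 2.776, 9: 2.262, 29: 2.045, 99: 1.984}

def compliance_interval(samples):
    """Construct the compliance interval [l, u] around the monitored QoS mean
    under the small-sample t-distribution assumption (cf. Eqs. (8)-(11))."""
    n = len(samples)
    mean = statistics.mean(samples)          # sample mean (Eq. 8)
    s = statistics.stdev(samples)            # sample standard deviation (Eq. 9)
    t = T_CRIT_95[n - 1]                     # t_{alpha/2} from the look-up table
    half = t * s / math.sqrt(n)
    return mean - half, mean + half          # lower and upper bounds l, u

# Ten monitored response-time samples (ms) fluctuating around an SLA value of 100.
rt = [101, 99, 100, 102, 98, 100, 101, 99, 100, 100]
l, u = compliance_interval(rt)
```

Monitored values falling inside [l, u] are treated as conformant in the next step; the interval tightens as more samples accumulate or as fluctuation decreases.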
(3) Single Conformance. The single conformance v_t is the compliance value of a QoS attribute in a sub-period t of an evaluation period. The third round is to obtain the single conformance v_t of q according to its compliance interval [l, u] and its monitored values. Since different attributes may have different ranges and units, we normalize QoS values into a unified range [0, 1]. Then, the single conformance v_t can be calculated by (12) and (13). Single conformances are divided into two types: a positive factor v_t⁺, for which higher is better (e.g., throughput), and a negative factor v_t⁻, for which lower is better (e.g., response time).
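The exact forms of Eqs. (12)-(13) are not reproduced in the text, so the sketch below is a stand-in capturing their stated intent: scores live in [0, 1], values inside the compliance interval are fully conformant, and positive and negative factors penalize deviation on opposite sides. The linear decay is an assumption of this sketch, not the paper's formula:

```python
def single_conformance(value, l, u, positive=True):
    """Map a monitored QoS value to a conformance score in [0, 1] relative to
    the compliance interval [l, u]. Values inside the interval (or on the
    'good' side of it) score 1.0; on the 'bad' side the score decays linearly
    with distance, normalized by the interval width."""
    if l <= value <= u:
        return 1.0
    width = u - l
    if positive:                      # higher is better (e.g., throughput)
        excess = (l - value) / width if value < l else 0.0
    else:                             # lower is better (e.g., response time)
        excess = (value - u) / width if value > u else 0.0
    return max(0.0, 1.0 - excess)
```

For a response time (negative factor) with interval [99, 101] ms, a reading of 102 ms exceeds the upper bound by half the interval width and is penalized accordingly, while a reading of 100 ms is fully conformant.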
(4) Weights Assignment. After obtaining the single conformances of q, their weights can be determined by employing the entropy method [52]. First, the entropy of q in t, denoted as e_t, can be calculated from the mean of D_t by (14) and (15), where p_t represents the ratio between the mean of D_t and the sum of the means over all sub-periods.
Then, the weight assigned to q in t, denoted as w_t, can be obtained in accordance with (16), which holds that ∑_{t∈T} w_t = 1. (5) Compliance Level Evaluation. This round is to calculate the compliance level of q in accordance with the obtained weights and single conformances. For the compliance level of q, the weighted single conformances are aggregated over T. According to (17), the compliance level C_q of q can be obtained, which holds that 0 ≤ C_q ≤ 1. The closer the compliance level is to 1, the more compliant the evaluated attribute is.
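Since Eqs. (14)-(17) are only partially recoverable from the text, the following sketch uses a simplified per-sub-period variant of the entropy method and the weighted aggregation it describes; the function names and sample data are invented for illustration:

```python
import math

def entropy_weights(means):
    """Entropy-style weights over sub-periods (cf. Eqs. (14)-(16)): each
    sub-period's ratio p_t of its mean to the total yields an entropy term;
    lower entropy (more divergence) earns more weight. Weights sum to 1."""
    total = sum(means)
    p = [m / total for m in means]                         # ratios p_t
    k = 1.0 / math.log(len(means))
    e = [-k * pt * math.log(pt) if pt > 0 else 0.0 for pt in p]  # entropies e_t
    d = [1.0 - et for et in e]                             # divergence degrees
    return [dt / sum(d) for dt in d]                       # weights w_t

def compliance_level(conformances, weights):
    """Aggregate weighted single conformances into the compliance level
    (cf. Eq. (17)); the result lies in [0, 1]."""
    return sum(w * v for w, v in zip(weights, conformances))

means = [100.0, 98.0, 105.0]   # per-sub-period mean response times (invented)
v = [1.0, 0.9, 0.6]            # single conformances v_t for the same sub-periods
w = entropy_weights(means)
C = compliance_level(v, w)
```

A compliance level near 1 indicates the attribute tracked its SLA value throughout the evaluation period; lower values flag sub-periods of drift.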
Broadly, the compliance assessment process is implemented in the continuous monitoring phase of the CSL. Algorithm 3 illustrates the compliance assessment process in the SCCAF, and Algorithm 4 demonstrates the procedure of compliance level evaluation.

Take Measures.
The process of taking measures is performed after the compliance assessment process yields assessment results for the cloud services. In this process, the CSC establishes relevant compliance tolerance rules (e.g., assessment validity) on the basis of the compliance assessment results. These rules are then associated with corresponding measures (e.g., changing the CSP). The CSC can thus determine which measure to take based on whether the compliance level conforms to the compliance tolerance rules. The primary objective of this process is to help the CSC stop losses in time in the event of cloud SLA compliance violations. For convenience, the key notations introduced in the compliance assessment also apply to the process of taking measures. The compliance tolerance rules and corresponding measures are described below.
(1) Validity. The CSC can establish a validity indicator to determine whether the compliance assessment is valid. For a given evaluation period, the compliance assessment of the cloud service is performed a number of times, and each single conformance V may differ. A compliance assessment satisfying V ≠ 0 is considered a valid assessment. We then define assessment validity as follows.

Definition 1.
Let n_v and n_i denote the number of valid and invalid assessments within the evaluation period, respectively. The validity of the compliance assessment can then be calculated as the ratio n_v / (n_v + n_i). The CSC can establish an assessment validity threshold based on its actual business requirements. For instance, assume that the CSC's acceptable assessment validity threshold is θ. Then, for a given evaluation period (e.g., a year), the compliance assessment is performed at a fixed granularity (e.g., daily). If the assessment validity is less than θ, the CSC may consider changing the CSP, selecting another one from the CSPs ranked by the security assessment. If the assessment validity consecutively fails to meet the condition validity ≥ θ a given number of times, the CSC may consider terminating the cloud service.
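Definition 1 and the threshold rules above can be sketched as follows; the function names, threshold values, and the exact escalation policy are illustrative assumptions of ours, not prescribed by the paper:

```python
def assessment_validity(n_valid, n_invalid):
    """Validity ratio of Definition 1: valid assessments over all assessments."""
    total = n_valid + n_invalid
    return n_valid / total if total else 0.0

def decide(validities, threshold, max_consecutive_failures):
    """Illustrative decision rule: one sub-threshold validity suggests
    changing the CSP; max_consecutive_failures consecutive failures
    suggest terminating the service."""
    streak = 0
    for v in validities:
        if v < threshold:
            streak += 1
            if streak >= max_consecutive_failures:
                return "terminate service"
        else:
            streak = 0   # a passing period resets the failure streak
    return "change CSP" if streak > 0 else "keep CSP"
```

In practice the thresholds and actions would come from the CSC's own compliance tolerance rules.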
(2) Effectiveness. After determining that the validity meets the requirements, the CSC can set up an effectiveness indicator to determine whether the compliance level of the cloud service meets its requirements. The compliance level is effective if the cloud service QoS can support the CSC's critical business functions to an acceptable level within an evaluation period. We then define effectiveness as follows.
The CSC can establish an effectiveness threshold based on its actual business requirements. For instance, assume that the CSC's acceptable effectiveness threshold is ε. Similarly, for a given evaluation period (e.g., a year), the compliance assessment is performed at a fixed granularity (e.g., daily). If the effectiveness is less than ε, the CSC may consider seeking claims and remedies from the CSP. If the effectiveness is greater than or equal to ε, the CSC can continue using and evaluating the cloud service.
In general, CSCs have to establish compliance tolerance rules based on their actual business requirements, and in practice different CSCs may have different rules. This process provides a feasible, referential method for CSCs to make decisions according to the compliance level of a cloud service.

Simulation Studies
This section presents experiments to validate the performance and availability of the proposed security and compliance assessment methods in the continuous assessment framework. The experiments are conducted using MATLAB R2017b on a DELL desktop computer with the following configuration: Intel Core i5 2.7 GHz CPU, 8 GB RAM, and the Windows 10 operating system.

Security Assessment Validation.
First, we conduct experiments to compare our security assessment method with the Quantitative Hierarchy Process (QHP) method proposed in [11] in terms of performance and accuracy. QHP is an assessment technique that ranks CSPs with respect to CSCs' requirements. Owing to the similar concepts and evaluation steps, we utilize the same security metrics as QHP, which were developed by the Cloud Security Alliance [44]. To facilitate comparison, we also employ the same quantification approach for security metrics as QHP. For convenience, we denote the security assessment method of SCCAF as SAM.
(1) Performance Analysis. To compare SAM with the QHP method in terms of time complexity, we set the maximum number of CSPs to 150 and the maximum number of security metrics to 300. We assume that each step in the compared methods is one operation and that the total number of operations represents the time complexity. We vary the number of CSPs from 1 to 150 with a step of 30 and the number of security metrics from 1 to 300 with a step of 60. Figure 6 shows that the time complexity of both methods increases with the number of CSPs when the number of security metrics is held constant. Figure 7 shows that the time complexity (operations) of both methods increases with the number of security metrics when the number of CSPs is held constant. Figure 8 shows that the number of operations in both methods increases with the number of CSPs and security metrics jointly. We can observe from these figures that our method outperforms the QHP method in all of the above cases; that is, SAM has the lowest time complexity. As the number of CSPs or security metrics grows, the time complexity of QHP increases significantly. This is due to the high complexity of the algorithms for calculating the priority vector of the comparison matrix constructed over all CSPs for each security metric. In other words, QHP evaluates the security level of CSPs by comparing each security metric across all CSPs and aggregating the comparison results, whereas SAM scores all security indicators of a CSP as a whole. This suggests that our method is not only effective but also outperforms the QHP method.
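The complexity argument above (per-metric pairwise comparison matrices versus scoring each CSP's metric vector as a whole) can be made concrete with rough operation counts; the counting below is illustrative on our part, not the paper's exact tally:

```python
def ops_pairwise(n_csps, n_metrics):
    """Rough operation count when every pair of CSPs is compared per
    metric (a comparison matrix per metric, QHP-style): quadratic in
    the number of CSPs."""
    return n_metrics * n_csps * n_csps

def ops_whole(n_csps, n_metrics):
    """Rough operation count when each CSP's metric vector is scored
    as a whole (SAM-style): linear in both dimensions."""
    return n_metrics * n_csps
```

At the experiment's upper bound (150 CSPs, 300 metrics), the pairwise count is 150 times the whole-vector count, which is consistent with the growth gap visible in Figures 6-8.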
(2) Accuracy of SAM. To validate the accuracy of the SAM method, we compare its evaluation results with those of QHP through empirical validation. Table 3 presents a sample dataset of security metrics used for this scenario. This dataset is excerpted by [11] from the information available in the CSA STAR repository [42], where the values of 16 security metrics for the three selected CSPs are presented. As mentioned above, to compare the accuracy of both methods conveniently, we employ the same quantification approach for security metrics as QHP, which is described below. The selected security metrics comprise both qualitative (e.g., YES/NO) and quantitative (e.g., security levels from 1 to 4) metrics. The YES/NO metric thresholds are modelled as Boolean 1/0, whereas metrics associated with security levels V1, V2, V3, and V4 are modelled as 1, 2, 3, and 4. For example, CO3.3 is defined using qualitative thresholds (None, Annually, Quarterly, and Monthly), which are specified as V1, V2, V3, and V4. Similarly, RI1.1 is defined using qualitative (Internal, External) values. To facilitate the comparison, we take the 16 security metrics in this table as the CSC's security requirements and consider them of the same relative importance (weight = 0.5), as described in [11].
To obtain the CSPs' security levels, we apply the security assessment method presented in Section 5. Table 4 shows the parameters related to the security level of the CSPs, calculated by the algorithms elaborated in Section 5.1. As shown in Table 4, the alternatives (CSPs) with the shortest separation measures to the positive and negative ideal solutions are CSP3 and CSP2, respectively. This means that, for the given positive-impact security metrics, CSP3 is the most consistent with them and CSP2 is the most inconsistent. From the separation measures we obtain the relative closeness of each CSP; the closer it is to 1, the higher the CSP's security level. As can be seen from this table, CSP3 has the highest security level, followed by CSP1, while CSP2 has the lowest.
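The separation measures and relative closeness described above follow the standard TOPSIS pattern; the sketch below uses hypothetical metric scores of our own, not the values of Table 4:

```python
import math

def relative_closeness(matrix, positive=None):
    """TOPSIS-style ranking sketch: Euclidean separation from the
    positive and negative ideal solutions, then relative closeness
    in [0, 1]. matrix[i][j] is the quantified score of metric j for CSP i."""
    n, m = len(matrix), len(matrix[0])
    if positive is None:
        positive = [True] * m   # assume all metrics are benefit-type
    best = [max(row[j] for row in matrix) if positive[j]
            else min(row[j] for row in matrix) for j in range(m)]
    worst = [min(row[j] for row in matrix) if positive[j]
             else max(row[j] for row in matrix) for j in range(m)]
    closeness = []
    for row in matrix:
        s_pos = math.sqrt(sum((row[j] - best[j]) ** 2 for j in range(m)))
        s_neg = math.sqrt(sum((row[j] - worst[j]) ** 2 for j in range(m)))
        closeness.append(s_neg / (s_pos + s_neg) if s_pos + s_neg else 1.0)
    return closeness

# three hypothetical CSPs over four Boolean/levelled metrics
scores = [[1, 3, 1, 2],   # CSP1
          [0, 2, 1, 1],   # CSP2
          [1, 4, 1, 4]]   # CSP3
c = relative_closeness(scores)
```

In this hypothetical setup CSP3 attains every metric's best value, so its closeness is exactly 1, mirroring the observation below that a provider satisfying all metrics should score the maximum.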
A side-by-side comparison is shown in Figure 9. The resulting ranking of CSPs is consistent for both SAM and QHP: CSP3 is the provider that best satisfies the CSC's security requirements, followed by CSP1 and CSP2, respectively. For a CSC specifying these security requirements, both methods therefore produce the same ranking. However, compared with the QHP method, the SAM method better reflects the security level of CSPs. For example, in this scenario, since CSP3 satisfies all 16 security metrics, its security level should be the maximum, namely 1, which is not reflected by QHP.

Compliance Assessment Validation.
In this section, we evaluate the availability and efficiency of the proposed compliance assessment method using a web service dataset synthesized from real-world data [53]. Additionally, we compare the performance and certainty of our method with the TRUSS method proposed in [18]. For convenience, we denote the compliance assessment method of SCCAF as CAM. The dataset [53] records real-world QoS data from 142 users on 4,500 web services over 64 different time slices (at 15-minute intervals). Each service has two QoS attributes in the original dataset, namely response time (RT) and throughput (TP). We treat the time slices as evaluation periods (EPs) and the number of users as the monitoring frequency. In addition, to facilitate the experiments, we extract sets of monitored samples, each containing 118 specific data values generated by users and representing the monitored samples within one EP. As a result, we obtain two smaller QoS datasets, each containing 64 × 118 records, which we denote as the RT dataset and the TP dataset. We use these two datasets as the monitored QoS values for the compliance assessment experiments. The parameter settings are given in Table 5, including the number of items and the number of evaluation periods; the compliance value with respect to each QoS attribute is as described in Section 5.2. For convenience, we assume that the SLA values of the two QoS attributes are the means of the RT and TP datasets, respectively, as shown in Table 5.

Let us now focus on the considered example. For the negative factor (RT), we first construct the compliance confidence interval from a set of data in the RT dataset, namely the data in one EP. To facilitate observation of the variation of the monitoring data in the ideal case, we apply the moving average method to the monitoring data, producing what we call smoothed data. Figure 10(a) illustrates the variation of the RT monitoring data relative to its SLA and compliance confidence interval. As shown in Figure 10(a), the monitoring data vary around the SLA, and the smoothed data essentially vary within the compliance confidence interval. In an actual cloud environment, QoS monitoring is a continuous process, and its values are likely to vary due to the dynamics of cloud resources (computing, network, and storage) and application workloads. We therefore take the mean of the monitoring data as the valid value to be evaluated in this EP. Similarly, Figure 10(b) shows the variation of the TP monitoring data relative to its SLA and compliance confidence interval.
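The smoothing step above is a plain moving average over the monitored series; a minimal sketch (window size is an arbitrary choice of ours):

```python
def moving_average(data, window):
    """Simple moving average used to smooth a monitored QoS series
    before comparing it against the SLA and the compliance interval."""
    out = []
    for i in range(len(data) - window + 1):
        out.append(sum(data[i:i + window]) / window)
    return out
```

For example, `moving_average([1, 2, 3, 4, 5], 3)` yields `[2.0, 3.0, 4.0]`, damping short-term fluctuations while preserving the trend.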
Then, we calculate the single conformance of RT in this EP from the corresponding parameters, which include the SLA, the compliance confidence interval, and the mean of the monitoring data. The single conformance of RT in all 64 EPs is obtained in the same way. Figure 11(a) shows how the single conformance of RT varies with the relationships among the confidence interval, the SLA, and the mean. As shown in Figure 11(a), the confidence interval of RT does not always cover the SLA; that is, there are some evaluation periods in which RT is completely non-compliant. We can also see that the mean of RT varies around the SLA and always lies within the confidence interval. Moreover, the figure shows that the single conformance of RT, which varies between 0 and 1, is determined by its confidence interval and the SLA. For a fixed SLA, the single conformance of RT decreases as its confidence interval rises. When the upper limit of the confidence interval of RT is less than the SLA, the single conformance of RT takes its maximum, namely 1. Conversely, when the lower limit of the confidence interval of RT is greater than the SLA, the single conformance takes its minimum, namely 0. Similarly, Figure 11(b) shows the single conformance variation of TP.
Next, we determine the weights of the single conformances in each EP. The weight of the single conformance of RT is calculated according to (14), (15), and (16). We then use the obtained weights and single conformances to calculate the weighted conformance of RT and TP in each evaluation period. Figure 12(a) shows the weighted single conformances of RT and TP; we can observe that the single conformance of RT and TP varies across EPs. Finally, we obtain the compliance levels of RT and TP by aggregating their weighted single conformances over the EPs. Figure 12(b) shows the compliance levels of RT and TP over intervals of four EPs each. From this figure, we can observe that the compliance levels of RT and TP increase gradually over the EPs. This indicates that the more stable and compliant the monitoring data of RT or TP are over a period of time, the closer the compliance level is to 1.
(2) Comparison with TRUSS. In this section, we compare our compliance evaluation method with TRUSS [18]. TRUSS is a natural baseline: first, it pursues a similar direction of study on conformance evaluation of cloud services; second, its sample dataset is also derived from the WSDream dataset2, which makes a comparison of the two methods appropriate. Figure 13 illustrates the QoS compliance computation functions of the two methods, describing the effect of different weights and single conformances on the compliance of an attribute in TRUSS and CAM, respectively. As shown in Figure 13(a), the weighted conformance value in the TRUSS method varies strongly with the weight coefficient, which means that its compliance computation function depends excessively on the weight and easily causes uncertainty in the conformance value. Figure 13(b) shows that in the CAM method the weighted conformance value has a stable relationship with the weight coefficient, which means that the proposed method is more reasonable.

Conclusion
In this paper, we propose a new concept of the cloud service life-cycle from the perspective of the cloud-based IoT context, which enables CSCs to clearly understand the items that need to be considered at each phase of cloud service adoption. We have also presented a novel secure and compliant continuous assessment framework based on the cloud service life-cycle. This framework combines security and compliance assessment methods as mutual complements to help CSCs evaluate CSPs throughout the full cloud service life-cycle. Additionally, the framework safeguards the security of the cloud-based IoT context by evaluating the security level and compliance level of cloud services. Simulation-based and case-study experiments validated the performance and availability of our proposed methods. As future work, we plan to extend the assessment framework to facilitate the evaluation of cloud services from the viewpoints of various stakeholders (e.g., cloud auditors, cloud brokers, or peers). We also plan to develop a prototype of the proposed assessment framework and further improve our evaluation algorithms.

Data Availability
The experimental data used to support the findings of this study are derived from the WSDream dataset2 repository (DOI: 10.1109/TSC.2012.34).