Virtual Machine Placement Algorithm for Both Energy-Awareness and SLA Violation Reduction in Cloud Data Centers

The problem of high energy consumption is becoming increasingly serious as large-scale cloud data centers are built. To reduce both energy consumption and SLA violation, a new virtual machine (VM) placement algorithm named ATEA (adaptive three-threshold energy-aware algorithm), which makes good use of the historical resource-usage data of VMs, is presented. In ATEA, data center hosts are divided into four classes according to their load: hosts with little load, hosts with light load, hosts with moderate load, and hosts with heavy load. ATEA migrates VMs on heavily loaded or little-loaded hosts to lightly loaded hosts, while the VMs on lightly and moderately loaded hosts remain unchanged. On the basis of ATEA, two kinds of adaptive three-threshold algorithm and three kinds of VM selection policies are then proposed. Finally, we verify the effectiveness of the proposed algorithms with the CloudSim toolkit using real-world workload. The experimental results show that the proposed algorithms efficiently reduce energy consumption and SLA violation.


Introduction
Cloud computing [1,2] is derived from grid computing. At present, cloud computing is receiving more and more attention, as it lets people access resources in a simple way. In contrast to previous paradigms, cloud computing provides infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).
On one hand, the construction of large-scale virtualized data centers meets the demand for computational power; on the other hand, such data centers consume enormous amounts of electrical energy, leading to high energy consumption and carbon dioxide emissions. It has been reported [3] that, in 2013, the total electricity consumption of global data centers was more than 4.35 gigawatts, with an annual growth rate of 15%. The high energy consumption of virtualized data centers causes a series of problems, including wasted energy, low Return on Investment (ROI), system instability, and increased carbon dioxide emissions [4].
At the same time, most hosts in data centers run at low CPU utilization. Barroso and Hölzle [5] conducted a six-month survey and found that most hosts in data centers operate at lower than 50% CPU utilization. Bohrer et al. [6] investigated the problem of high energy consumption and came to the same conclusion. It is therefore necessary to reduce the energy consumption of data centers while keeping the SLA (Service Level Agreement) violation low [7].
In this paper, we put forward a new VM placement algorithm (ATEA), two kinds of adaptive three-threshold algorithm (KAM and KAI), and three kinds of VM selection policies to reduce energy consumption and SLA violation. We verify the effectiveness of the proposed algorithms using the CloudSim toolkit.
The main contributions of the paper are summarized as follows: (i) Proposing a novel VM placement algorithm (ATEA). In ATEA, hosts in a data center are divided into four classes according to their load, and VMs are migrated only from heavily loaded and little-loaded hosts.

Related Work
At present, various studies focus on energy-efficient resource management in cloud data centers. Constrained energy consumption algorithms [8-10] and energy efficiency algorithms [11-15] are the two main types of algorithms for addressing high energy consumption in data centers. The main idea of constrained energy consumption algorithms is to reduce the energy consumption of data centers, but this type of algorithm pays little or no attention to SLA violation. For example, Lee and Zomaya [8] proposed two heuristic algorithms (ECS and ECS + idle) to decrease energy consumption, but the two algorithms easily fall into local optima and do not consider SLA violation.
Hanson et al. [9] presented the Dynamic Voltage and Frequency Scaling (DVFS) policy to save power in data centers. When the number of tasks is large, DVFS raises the processor voltage in order to handle the tasks in time; when the number of tasks is small, DVFS lowers the processor voltage to save power. Kang and Ranka [10] put forward an energy-saving algorithm and argued that overestimating or underestimating the execution time of tasks harms energy saving; for overestimation, the extra available time should be assigned to other tasks in order to reduce energy consumption. This energy-saving algorithm likewise does not consider SLA violation. Therefore, constrained energy consumption algorithms do not meet users' requirements, because they pay little or no attention to SLA violation. The goal of energy efficiency algorithms (addressing both energy consumption and SLA violation) is to decrease energy consumption and SLA violation in data centers. For example, Buyya et al. [11] proposed a virtual machine (VM) placement algorithm called Single Threshold (ST) based on a combination of VM selection policies. The ST algorithm sets a unified CPU-utilization value for all servers and keeps every server below this value. ST can save energy and decrease SLA violation, but the SLA violation remains at a high level. Beloglazov and Buyya [12] proposed an energy-efficient resource management system, which includes a dispatcher, a global manager, local managers, and VMMs (VM Monitors). To further improve energy efficiency, Beloglazov et al. put forward a new VM migration algorithm called Double Threshold (DT) [13]; DT sets two thresholds and keeps each host's CPU utilization between them. However, the energy consumption and SLA violation of the DT algorithm need to be decreased further. Later, Beloglazov and Buyya [14,15] proposed an adaptive double-threshold VM placement algorithm to improve energy efficiency in data centers. However, the energy consumption of data centers remains at a high level.
In our previous study [16], we proposed a three-threshold energy-aware algorithm named MIMT to deal with energy consumption and SLA violation. However, its three thresholds for controlling host CPU utilization are fixed, so MIMT is not suitable for varying workloads. It is therefore necessary to put forward a novel VM placement algorithm to deal with energy consumption and SLA violation in cloud data centers.

The Power Model, Cost of VM Migration, SLA Violation Metrics, and Energy Efficiency Metrics
3.1. The Power Model. The energy consumed by a server in a data center is related to its CPU, memory, disk, and bandwidth. Earlier studies [17,18] showed that a server's energy consumption has a linear relationship with its CPU utilization, even when DVFS is applied. However, with the decrease of hardware prices, multicore CPUs and large-capacity memory are now widely deployed in servers, so the traditional linear model can no longer accurately describe a server's energy consumption.
To deal with this problem, we use the real energy-consumption data offered by the SPECpower benchmark (http://www.spec.org/power_ssj2008/). We have chosen two servers equipped with dual-core CPUs. Their main configuration is as follows: one is an HP ProLiant G4 with a 1.86 GHz dual-core CPU and 4 GB RAM; the other is an HP ProLiant G5 with a 2.66 GHz dual-core CPU and 4 GB RAM. The energy consumption of the two servers at different load levels is listed in Table 1 [15].
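Since the SPECpower measurements are given at 10% utilization steps, the power drawn at an arbitrary utilization can be obtained by linear interpolation between the two nearest measured points, as CloudSim's SPECpower-based power models do. A minimal sketch, using the figures commonly cited for the HP ProLiant ML110 G4 (illustrative values; Table 1's exact entries govern):

```python
# Power (Watts) at 0%, 10%, ..., 100% CPU load for the HP ProLiant G4,
# as published in the SPECpower results used in [15] (illustrative here).
G4_POWER = [86.0, 89.4, 92.6, 96.0, 99.5, 102.0, 106.0, 108.0, 112.0, 114.0, 117.0]

def host_power(utilization, table=G4_POWER):
    """Linearly interpolate host power draw for a CPU utilization in [0, 1]."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    idx = int(utilization * 10)          # index of the lower measured point
    if idx == 10:                        # exactly 100% load
        return table[10]
    frac = utilization * 10 - idx        # position between the two points
    return table[idx] + frac * (table[idx + 1] - table[idx])

print(host_power(0.0))   # idle power
print(host_power(1.0))   # full-load power
```

The same interpolation applies to the G5 with its own measured values.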

3.2. Cost of VM Migration.
Proper VM migration between servers can reduce energy consumption and SLA violation in data centers. However, excessive VM migration can degrade the performance of the applications running on the migrated VMs. Voorsluys et al. [19] investigated the cost of VM migration; following [15], the migration time and the performance degradation caused by migrating VM j can be written as

T_mj = M_j / B_j,    U_dj = 0.1 × ∫ from t_0 to t_0 + T_mj of u_j(t) dt,    (1)

where u_j(t) corresponds to the CPU utilization of VM j, t_0 represents the start time of the migration, T_mj is the completion time, M_j corresponds to the total memory used by VM j, and B_j represents the available network bandwidth.

3.3. SLA Violation Metrics. Two metrics are used to quantify SLA violation.

(1) PDM [15] (Overall Performance Degradation caused by VM Migration):

PDM = (1/M) × Σ_{j=1}^{M} C_dj / C_j,    (2)

where M represents the number of VMs in the data center, C_dj is the estimate of the performance degradation caused by migrating VM j, and C_j corresponds to the total CPU capacity requested by VM j during its lifetime.
(2) SLATAH [15] (SLA violation Time per Active Host): the percentage of the time during which an active host has experienced 100% CPU utilization:

SLATAH = (1/N) × Σ_{i=1}^{N} T_si / T_ai,    (3)

where N represents the number of hosts in the data center, T_si corresponds to the total time during which the CPU utilization of host i has been 100%, resulting in SLA violations, and T_ai corresponds to the total time during which host i has been in the active state. The reasoning behind SLATAH is that when an active host's CPU utilization reaches 100%, the VMs on that host cannot be provided with the requested CPU capacity. PDM and SLATAH independently evaluate two aspects of SLA violation; therefore, the overall SLA violation is defined as in [15]:

SLA = PDM × SLATAH.    (4)

3.4. Energy Efficiency Metric.
Energy efficiency covers both energy consumption and SLA violation: improving energy efficiency means less energy consumption and less SLA violation in data centers. The metric of energy efficiency is therefore defined as

EE = 1 / (E × SLA),    (5)

where EE corresponds to the energy efficiency of a data center, E is the energy consumption of the data center, and SLA represents its SLA violation. Equation (5) shows that the higher EE is, the better the energy efficiency.
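The metrics of Sections 3.3 and 3.4 can be sketched directly from their definitions; the input traces below are hypothetical:

```python
def pdm(degradation, requested):
    """Eq. (2): average ratio of the migration-caused degradation C_dj
    of each VM j to its total requested CPU capacity C_j."""
    return sum(d / c for d, c in zip(degradation, requested)) / len(requested)

def slatah(saturated_time, active_time):
    """Eq. (3): average fraction of its active time T_ai that each
    host i spent at 100% CPU utilization (T_si)."""
    return sum(s / a for s, a in zip(saturated_time, active_time)) / len(active_time)

def sla_violation(deg, req, sat, act):
    """Eq. (4): SLA = PDM x SLATAH."""
    return pdm(deg, req) * slatah(sat, act)

def energy_efficiency(energy, sla):
    """Eq. (5): higher is better; undefined when sla is 0."""
    return 1.0 / (energy * sla)

# Hypothetical traces: three VMs and two hosts.
sla = sla_violation([10.0, 0.0, 5.0], [1000.0, 800.0, 500.0],
                    [36.0, 0.0], [3600.0, 3600.0])
print(sla)
```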

ATEA, Two Kinds of Adaptive Three-Threshold Algorithm, VM Selection Policy, and VM Deployment Algorithm
4.1. ATEA. VM migration is an effective method to improve energy efficiency in data centers. However, several key problems must be dealt with: (1) deciding when a host is heavily loaded, in which case some VMs must be migrated from it to another host; (2) deciding when a host is moderately or lightly loaded, in which case all VMs on it are kept unchanged; (3) deciding when a host is little-loaded, in which case all VMs on it must be migrated to another host; (4) selecting one or more VMs to migrate from a heavily loaded host; (5) finding a new host to accommodate the VMs migrated from heavily loaded or little-loaded hosts.
To solve the above problems, ATEA (adaptive three-threshold energy-aware algorithm) is proposed. ATEA automatically sets three thresholds T_l, T_m, and T_h (0 ≤ T_l < T_m < T_h ≤ 1), which divide the data center hosts into four classes: hosts with little load, hosts with light load, hosts with moderate load, and hosts with heavy load. When the CPU utilization of a host is less than or equal to T_l, the host is considered little-loaded; to save energy, all VMs on a little-loaded host are migrated to a host with light load, and the host is switched to sleep mode. When the CPU utilization of a host is between T_l and T_m, the host is considered lightly loaded; to avoid high SLA violation, all VMs on a lightly loaded host are kept unchanged, because excessive VM migration leads to performance degradation and high SLA violation. When the CPU utilization of a host is between T_m and T_h, the host is considered moderately loaded; all VMs on a moderately loaded host are likewise kept unchanged, for the same reason. When the CPU utilization of a host is greater than T_h, the host is considered heavily loaded; to reduce SLA violation, some VMs on a heavily loaded host are migrated to a host with light load. Figure 1 shows the flow chart of ATEA. Different from our previous study [16], where the thresholds are fixed, the three thresholds T_l, T_m, and T_h in ATEA are not fixed; their values are adjusted automatically according to the workload.
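The four-way classification above can be sketched as a simple function of a host's CPU utilization and the three thresholds (the threshold values passed in below are placeholders; ATEA computes them adaptively):

```python
def classify_host(utilization, t_l, t_m, t_h):
    """Classify a host by CPU utilization under ATEA's three thresholds,
    with 0 <= t_l < t_m < t_h <= 1."""
    if utilization <= t_l:
        return "little"     # migrate all VMs away, switch the host to sleep
    if utilization <= t_m:
        return "light"      # keep VMs; also a valid migration target
    if utilization <= t_h:
        return "moderate"   # keep VMs unchanged
    return "heavy"          # migrate some VMs away

print(classify_host(0.05, 0.1, 0.5, 0.8))  # little
print(classify_host(0.95, 0.1, 0.5, 0.8))  # heavy
```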
ATEA migrates only those VMs that must be migrated, while keeping the other VMs unchanged. In doing so, ATEA improves the migration efficiency of VMs; it is thus a fine-grained algorithm. However, two problems remain for ATEA. First, what are the threshold values of T_l, T_m, and T_h? This problem is discussed in Section 4.2. Second, as mentioned above, some VMs on a heavily loaded host must be migrated to a host with light load. Which VMs should be migrated? This issue is discussed in Section 4.3.
The VM placement optimization of ATEA is illustrated in Algorithm 1.
In the first stage, the algorithm inspects each host in the host list and decides whether it is heavily loaded. If the host is heavily loaded (Line 2 in Algorithm 1), the algorithm uses the VM selection policy to choose the VMs to migrate from the host (Line 6). Once the list of VMs to migrate from the heavily loaded hosts is created, the VM deployment algorithm is invoked to find new hosts to accommodate them (Line 7); the function getNewVmPlacement(vmsToMigrate) finds a new host to accommodate each VM. In the second stage, the algorithm inspects each host in the host list and decides whether it is little-loaded. If the host is little-loaded (Line 11), the algorithm selects all VMs on the host for migration and finds a placement for them (Lines 15-16). Finally, the algorithm returns the migration map.
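The two stages can be sketched as follows; `select_vms`, `get_new_vm_placement`, and the `Host` record are hypothetical stand-ins for the corresponding CloudSim entities:

```python
def optimize_placement(hosts, t_l, t_h, select_vms, get_new_vm_placement):
    """Two-stage VM placement optimization (sketch of Algorithm 1).

    Stage 1: offload some VMs from heavily loaded hosts.
    Stage 2: fully evacuate little-loaded hosts so they can sleep.
    Returns a migration map {vm: destination_host}.
    """
    migration_map = {}
    # Stage 1: hosts above the upper threshold are heavily loaded.
    for host in hosts:
        if host.utilization() > t_h:
            vms_to_migrate = select_vms(host)         # VM selection policy (Sec. 4.3)
            migration_map.update(get_new_vm_placement(vms_to_migrate))
    # Stage 2: hosts at or below the lower threshold are little-loaded.
    for host in hosts:
        if host.utilization() <= t_l:
            vms_to_migrate = list(host.vms)           # evacuate everything
            migration_map.update(get_new_vm_placement(vms_to_migrate))
    return migration_map

class Host:
    def __init__(self, util, vms):
        self._util, self.vms = util, vms
    def utilization(self):
        return self._util

light = Host(0.4, ["vm3"])
heavy = Host(0.95, ["vm1", "vm2"])
idle = Host(0.05, ["vm4"])
plan = optimize_placement(
    [light, heavy, idle], t_l=0.1, t_h=0.8,
    select_vms=lambda h: h.vms[:1],                   # toy policy: first VM only
    get_new_vm_placement=lambda vms: {v: light for v in vms})
print(sorted(plan))  # vm1 (from the heavy host) and vm4 (from the idle host)
```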
As the values of u_i (i = 1, 2, 3, ..., n) vary over time, the values of T_l, T_m, and T_h are also variable; KAM is therefore an adaptive three-threshold algorithm. When the workloads are dynamic and unpredictable, KAM achieves higher energy efficiency than a fixed-threshold algorithm by adapting the values of T_l, T_m, and T_h.
Finally, the three thresholds (T_l, T_m, and T_h) of ATEA under KAI are computed from the historical CPU-utilization data, scaled by a parameter s ∈ R+ that defines how aggressively the system consolidates VMs: the higher s is, the more energy is consumed, but the fewer SLA violations are caused by VM consolidation. The complexity of KAI is O(g × d × i), where g is the group number, d denotes the data size, and i is the iteration number.
As with KAM, since the values of u_i (i = 1, 2, 3, ..., n) vary over time, the values of T_l, T_m, and T_h are also variable; KAI is therefore an adaptive three-threshold algorithm. When the workloads are dynamic and unpredictable, KAI achieves higher energy efficiency than a fixed-threshold algorithm by adapting the values of T_l, T_m, and T_h.
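The exact KAM/KAI threshold formulas are elided in this text. As a purely illustrative sketch of an adaptive scheme in the same spirit (NOT the paper's formulas), the thresholds below shrink as the interquartile range of the utilization history grows and as the safety parameter s grows, consistent with the stated effect of s:

```python
def adaptive_thresholds(history, s):
    """Illustrative adaptive thresholds from a CPU-utilization history.

    NOTE: a hypothetical instantiation, not the exact KAM/KAI definitions.
    The upper threshold t_h drops as the workload becomes more variable
    (larger interquartile range) and as s grows; t_l and t_m are taken
    as fixed fractions of t_h.
    """
    data = sorted(history)
    n = len(data)
    q1, q3 = data[n // 4], data[(3 * n) // 4]     # rough quartiles
    t_h = max(0.0, min(1.0, 1.0 - s * (q3 - q1)))
    return t_h / 3.0, 2.0 * t_h / 3.0, t_h

t_l, t_m, t_h = adaptive_thresholds([0.3, 0.5, 0.4, 0.6, 0.5, 0.4, 0.7, 0.5], 2.0)
print(t_l, t_m, t_h)
```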

4.3. VM Selection Policies.
As described in Section 4.1, some VMs on a heavily loaded host must be migrated to a host with light load. Which VMs should be migrated? In general, a host's CPU utilization and memory size both affect its energy efficiency. To solve this problem, three VM selection policies (MMS, LCU, and MPCM) are therefore proposed in this section.

MMS (Minimum Memory Size) Policy.
The migration time of a VM varies with its memory size: a VM with a smaller memory size takes less migration time under the same spare network bandwidth. For example, a VM with 16 GB of memory may take 16 times longer to migrate than a VM with 1 GB. Clearly, choosing between the 16 GB VM and the 1 GB VM greatly affects the energy efficiency of the data center. Therefore, if a host is heavily loaded, the MMS policy selects the VM with the minimum memory size among the VMs allocated to the host, that is, a VM v satisfying

v ∈ VM_h such that RAM(v) = min_{a ∈ VM_h} RAM(a),

where VM_h denotes the set of VMs allocated to host h and RAM(a) is the memory size currently utilized by VM a.

LCU (Lowest CPU Utilization) Policy.
The CPU utilization of a host is another important factor for the energy efficiency of a data center. Therefore, if a host is heavily loaded, the LCU policy chooses the VM with the lowest CPU utilization among the VMs allocated to the host, that is, a VM v satisfying

v ∈ VM_h such that Utilization(v) = min_{a ∈ VM_h} Utilization(a),

where VM_h denotes the set of VMs allocated to host h and Utilization(a) is the CPU utilization of VM a on host h.
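Both policies reduce to taking a minimum over the host's VMs with a different key; a minimal sketch (the `Vm` record is a hypothetical stand-in for CloudSim's VM object):

```python
from dataclasses import dataclass

@dataclass
class Vm:
    name: str
    ram: float        # memory currently utilized (MB)
    cpu_util: float   # current CPU utilization

def select_mms(vms):
    """MMS: the VM with the minimum memory size (shortest migration time)."""
    return min(vms, key=lambda v: v.ram)

def select_lcu(vms):
    """LCU: the VM with the lowest CPU utilization."""
    return min(vms, key=lambda v: v.cpu_util)

vms = [Vm("a", 4096, 0.7), Vm("b", 512, 0.9), Vm("c", 2048, 0.1)]
print(select_mms(vms).name)  # b: smallest memory
print(select_lcu(vms).name)  # c: lowest CPU utilization
```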

MPCM (Minimum Product of Both CPU Utilization and Memory Size) Policy. As a host's CPU utilization and memory size are both important factors for the energy efficiency of a data center, if a host is heavily loaded, the MPCM policy selects the VM with the minimum product of CPU utilization and memory size among the VMs allocated to the host, that is, a VM v satisfying

v ∈ VM_h such that u_CPU(v) × u_memory(v) = min_{a ∈ VM_h} u_CPU(a) × u_memory(a),

where VM_h denotes the set of VMs allocated to host h and u_CPU(a) and u_memory(a), respectively, represent the CPU utilization and the memory size currently utilized by VM a.

Algorithm 2 shows the pseudocode of EBFD, where T_l, T_m, and T_h are the three thresholds of ATEA (defined in Section 4.2), "vmlist" is the set of all VMs, and "hostlist" represents all hosts in the data center. Line 1 (see Algorithm 2) sorts all VMs by CPU utilization in descending order. Line 3 assigns the parameter "minimumPower" a maximum value. Line 6 checks whether the host is suitable to accommodate the VM (in terms of, e.g., the host's CPU capacity, memory size, and bandwidth). The function getUtilizationAfterAllocation obtains the host's CPU utilization after allocating a VM; Lines 7 to 9 keep a host lightly loaded (CPU utilization within the T_l-T_m interval). The function getPowerAfterVM obtains the increase in the host's energy consumption after allocating a VM; Lines 11 to 15 find the host with the least increase in power consumption caused by the VM allocation. Line 19 returns the destination hosts for accommodating the VMs. The complexity of EBFD is O(n × m), where n is the number of hosts and m is the number of VMs to be allocated.
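The EBFD step described above can be sketched as follows: VMs are considered in decreasing order of CPU demand, and each is placed on the suitable host that stays lightly loaded and whose power draw increases least. The host model and its linear power curve are simplified stand-ins, loosely based on the G4/G5 idle and full-load figures:

```python
class Vm:
    def __init__(self, name, mips):
        self.name, self.mips = name, mips

class Host:
    """Toy host with a linear power curve between idle and full load."""
    def __init__(self, name, capacity_mips, used_mips, idle_w, max_w):
        self.name, self.capacity, self.used = name, capacity_mips, used_mips
        self.idle_w, self.max_w = idle_w, max_w
    def is_suitable(self, vm):
        return self.used + vm.mips <= self.capacity
    def utilization_after(self, vm):
        return (self.used + vm.mips) / self.capacity
    def power(self):
        return self.idle_w + (self.max_w - self.idle_w) * self.used / self.capacity
    def power_after(self, vm):
        return self.idle_w + (self.max_w - self.idle_w) * self.utilization_after(vm)
    def allocate(self, vm):
        self.used += vm.mips

def ebfd(vms, hosts, t_l, t_m):
    """Energy-Aware Best Fit Decreasing (sketch of Algorithm 2)."""
    placement = {}
    for vm in sorted(vms, key=lambda v: v.mips, reverse=True):  # Line 1
        best_host, min_power_inc = None, float("inf")           # Line 3
        for host in hosts:
            if not host.is_suitable(vm):                        # Line 6
                continue
            if not t_l < host.utilization_after(vm) <= t_m:     # Lines 7-9
                continue
            power_inc = host.power_after(vm) - host.power()     # Lines 11-15
            if power_inc < min_power_inc:
                best_host, min_power_inc = host, power_inc
        if best_host is not None:
            best_host.allocate(vm)
            placement[vm.name] = best_host
    return placement

h1 = Host("h1", 2000, 300, idle_w=86.0, max_w=117.0)
h2 = Host("h2", 2000, 900, idle_w=93.7, max_w=135.0)
plan = ebfd([Vm("v1", 500)], [h1, h2], t_l=0.1, t_m=0.8)
print(plan["v1"].name)  # h1: the smaller power increase
```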

Experiments and Performance Evaluation
5.1. Experiment Setup. Owing to the advantages of the CloudSim toolkit [20,21], such as its support for on-demand dynamic resource provisioning and for modeling virtualized environments, we choose it as the simulation toolkit for our experiments.
We have simulated a data center with 800 heterogeneous physical nodes: half are HP ProLiant G4 servers (Intel Xeon 3040, 2 cores at 1860 MHz, 4 GB), and the other half are HP ProLiant G5 servers (Intel Xeon 3075, 2 cores at 2660 MHz, 4 GB). There are 1052 VMs of four types in the data center: High-CPU Medium Instance (2500 MIPS, 0.85 GB); Extra Large Instance (2000 MIPS, 3.75 GB); Small Instance (1000 MIPS, 1.7 GB); and Micro Instance (500 MIPS, 613 MB). The characteristics of the VMs correspond to Amazon EC2 instance types.

Workload Data.
Using real workload for the experiments is extremely important for evaluating VM placement. In this paper, we use workload from the CoMon project, which monitors the infrastructure of PlanetLab [22]. The data comprise the CPU utilization of more than a thousand VMs placed at more than 500 locations around the world. Table 2 [15] shows the characteristics of the data.
(1) Energy Consumption. Figure 2 shows the energy consumption of the six algorithms (KAM-MMS, KAM-LCU, KAM-MPCM, KAI-MMS, KAI-LCU, and KAI-MPCM) for different values of the parameter s (0.5 to 3.0 in steps of 0.5). Among KAM-MMS, KAM-LCU, and KAM-MPCM, KAM-MPCM yields the least energy consumption, KAM-LCU the second least, and KAM-MMS the most. The reason is that KAM-MPCM considers both CPU utilization and memory size when a host is heavily loaded. Compared with KAM-MMS, KAM-LCU leads to less energy consumption, which can be explained by the fact that a host's processor (CPU) consumes much more energy than its memory. Similarly, among KAI-MMS, KAI-LCU, and KAI-MPCM, KAI-MPCM yields the least energy consumption, and, compared with KAI-MMS, KAI-LCU leads to less energy consumption, for the same reason.
(2) SLATAH. The SLATAH metric is described in Section 3.3 (see (3)). Figure 3 shows the SLATAH of the six algorithms for different values of the parameter s (0.5 to 3.0 in steps of 0.5). Among KAM-MMS, KAM-LCU, and KAM-MPCM, KAM-MMS contributes the least SLATAH, KAM-MPCM the second least, and KAM-LCU the most. The reason is as follows: when a host is heavily loaded, KAM-MMS selects the VM with the minimum memory size to migrate, leading to less migration time; therefore, KAM-MMS yields the least SLATAH. Compared with KAM-MPCM, KAM-LCU contributes much more SLATAH, which can be explained by the fact that SLATAH mainly depends on memory size rather than CPU utilization. Similarly, among KAI-MMS, KAI-LCU, and KAI-MPCM, KAI-MMS contributes the least SLATAH, KAI-MPCM the second least, and KAI-LCU the most, because KAI-MMS causes the least migration time; compared with KAI-MPCM, KAI-LCU leads to much more SLATAH for the same reason.
(3) PDM. The PDM metric is described in Section 3.3 (see (2)). Figure 4 shows the PDM of the six algorithms for different values of the parameter s (0.5 to 3.0 in steps of 0.5). For KAM-MMS, KAM-LCU, and KAM-MPCM, the PDM is the same, because the overall performance degradation caused by VM migration is the same. Furthermore, for s = 0.5, 1.0, and 1.5, the corresponding PDM of the three algorithms is 0. For KAI-MMS, KAI-LCU, and KAI-MPCM, the PDM is likewise the same, and for s = 0.5 the PDM of the three algorithms is 0.
(4) SLA Violations. The SLA violation metric is described in Section 3.3 (see (4)). Figure 5 shows the SLA violations of the six algorithms for different values of the parameter s (0.5 to 3.0 in steps of 0.5). By (4), the SLA violation depends on SLATAH (Figure 3) and PDM (Figure 4). For KAM-MMS, KAM-LCU, and KAM-MPCM, the PDM is the same, so the SLA violation depends only on SLATAH: since KAM-MMS contributes the least SLATAH, KAM-MPCM the second least, and KAM-LCU the most, KAM-MMS contributes the least SLA violation, KAM-MPCM the second least, and KAM-LCU the most. Furthermore, for s = 0.5, 1.0, and 1.5, the corresponding PDM of the three algorithms is 0, and therefore their SLA violations are 0. In the same way, among KAI-MMS, KAI-LCU, and KAI-MPCM, KAI-MMS contributes the least SLA violation, KAI-MPCM the second least, and KAI-LCU the most; for s = 0.5, the PDM of the three algorithms is 0, and therefore their SLA violations are 0.
(5) Energy Efficiency. Figure 6 shows the energy efficiency (EE) of the six algorithms for different values of the parameter s (0.5 to 3.0 in steps of 0.5). As discussed in Section 3.4, (5) depends on the energy consumption (Figure 2) and the SLA violation (Figure 5). Compared with KAM-LCU and KAM-MPCM, KAM-MMS attains the highest energy efficiency, because it reduces the migration time of VMs. In terms of energy efficiency, KAM-MPCM is better than KAM-LCU, which can be explained by the fact that KAM-MPCM considers both CPU utilization and memory size. As (5) is related to the energy consumption (Figure 2) and the SLA violation (Figure 5), for s = 0.5, 1.0, and 1.5 the SLA violations of KAM-MMS, KAM-LCU, and KAM-MPCM are 0, and the energy efficiency reported for the three algorithms is 0. Similarly, compared with KAI-LCU and KAI-MPCM, KAI-MMS attains the highest energy efficiency, because it reduces the migration time of VMs; KAI-MPCM is better than KAI-LCU because it considers both CPU utilization and memory size; and for s = 0.5 the SLA violations of KAI-MMS, KAI-LCU, and KAI-MPCM are 0, so the energy efficiency reported for the three algorithms is 0.
Considering energy efficiency, we choose for each of the six algorithms the parameter value that maximizes its energy efficiency. Figure 6 shows that s = 2.0 is best for KAM-MMS (denoted KAM-MMS-2.0), s = 3.0 for KAM-LCU (KAM-LCU-3.0), s = 3.0 for KAM-MPCM (KAM-MPCM-3.0), s = 1.0 for KAI-MMS (KAI-MMS-1.0), s = 1.0 for KAI-LCU (KAI-LCU-1.0), and s = 1.0 for KAI-MPCM (KAI-MPCM-1.0).
For the two adaptive three-threshold algorithms (KAM and KAI), three VM selection policies are available. Is one policy best in terms of energy efficiency, and if so, which one? To answer this question, we performed three paired t-tests to determine which VM selection policy is best in terms of energy efficiency. Before applying the paired t-tests, we verified with the Ryan-Joiner normality test that the energy-efficiency values of the three VM selection policies (MMS, LCU, and MPCM), each at the parameter s that maximizes its energy efficiency, follow a normal distribution (p value = 0.2 > 0.05). The paired t-test results are shown in Table 3: MMS leads to a statistically significantly higher energy efficiency (p value < 0.05) than LCU and MPCM. In other words, MMS is the best VM selection policy in terms of energy efficiency. Therefore, KAM-MMS-2.0 and KAI-MMS-1.0 are the best combinations in terms of energy efficiency, and in the following we use them for comparison with other energy-saving algorithms.

(6) Comparison with Other Energy-Saving Algorithms. In this section, NPA (Non-Power-Aware), DVFS [9], THR-MMT-1.0 [15], THR-MMT-0.8 [15], MAD-MMT-2.5 [15], IQR-MMT-1.5 [15], and MIMT [16] are chosen for comparison in terms of energy efficiency. The related experimental results are shown in Table 4.
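A paired t-test of this kind can be reproduced in a few lines; the per-run energy-efficiency samples below are hypothetical, not the paper's measurements:

```python
import math

def paired_t(x, y):
    """Two-sided paired t-test statistic for equal-length samples x and y.
    Returns (t, degrees_of_freedom); compare |t| against the critical
    value t_{0.025, df} to test at the 5% significance level."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Hypothetical per-run energy-efficiency values for two policies.
mms = [0.52, 0.55, 0.50, 0.58, 0.54]
lcu = [0.45, 0.47, 0.44, 0.49, 0.46]
t, df = paired_t(mms, lcu)
print(round(t, 2), df)
```

A large positive t here would indicate that the first policy's energy efficiency is significantly higher, matching the kind of comparison reported in Table 3.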
4.2. Two Kinds of Adaptive Three-Threshold Algorithm. As discussed in Section 4.1, what are the threshold values of T_l, T_m, and T_h? To solve this problem, two adaptive three-threshold algorithms (KAM and KAI) are proposed.

Figure 2: The energy consumption of the six algorithms.

Figure 3: The SLATAH of the six algorithms.

Figure 4: The PDM of the six algorithms.

Figure 5: The SLA violations of the six algorithms.

Figure 6: The energy efficiency of the six algorithms.

Table 1: Power consumption by the two servers at different load levels in Watts.

VM Deployment Algorithm. VM deployment can be considered as a bin packing problem. To tackle it, a modification of the Best Fit Decreasing algorithm, denoted Energy-Aware Best Fit Decreasing (EBFD), can be used. As described in Section 4.1, VMs on heavily loaded or little-loaded hosts must be migrated to another host with light load (CPU utilization within the T_l-T_m interval), while VMs on lightly loaded or moderately loaded hosts are kept unchanged.

Table 3: Comparison of the VM selection policies using paired t-tests.