Efficient Smart Grid Load Balancing via Fog and Cloud Computing



Introduction
Demand side management (DSM), supported by information and communication technology (ICT), is one of the important functions of the smart grid (SG) [1]. It performs bidirectional communication to obtain user information and distribute energy among users according to their needs. A huge number of monitoring devices have been developed, deployed, and utilized in DSM. Many new concepts have been introduced in smart grids, such as charging/discharging of electric vehicles (EVs), smart meters, and smart home appliances [1]. As the number of smart devices grows, huge storage space and high levels of security are required. To solve these problems, the concept of cloud computing was introduced. In recent years, the demand for cloud computing has increased rapidly. Cloud computing provides Internet services with access from anywhere in the world, and can offer low storage cost, high speed, high performance, and flexibility. Cloud data centers typically consist of many physical machines (PMs). Virtualization technology allows cloud service providers to offer users the convenience of virtual machines (VMs) and resource sharing [2]. Intelligently packing VMs into PMs is a research theme that saves energy and minimizes operating costs.
The cloud can be public, hybrid, or private. Examples of cloud services include Netflix, Skype, e-mail, and Microsoft Office 365. However, the issues with cloud computing are latency and security.
Fog computing was introduced to solve these problems.
Cisco introduced the concept of fog computing in 2014. Fog computing is very useful for providing services with minimal latency at the network edge. It decreases the load on the cloud and provides uninterrupted communication with users. Communication between the fog and the user can take place over a specific communication medium such as WiFi. Fog computing provides local services to users.
End users communicate directly with the fog, and their requests are fulfilled by multiple applications running on VMs. Communication between end users and the fog requires network resources; high network resource utilization causes communication delays and can even lead to network congestion. Network resources can be balanced in two ways: VM consolidation and intelligent task assignment. VM consolidation includes VM migration and VM placement. VM placement assigns VMs intelligently based on the processing power of each node, but cloud data centers are complex and users cannot rely on the initial placement [3]. VM migration changes the position of VMs according to bandwidth utilization and is an optimal approach to balancing the load on network resources [3]. Intelligent task assignment, by contrast, greatly increases operational cost [3]. In this research, we present a live VM migration algorithm to balance the load across network resources; VM-to-PM packing is also performed to reduce the number of active PMs. A three-layer architecture is presented in this paper: the cloud, fog, and consumer layers. Cloud and fog provide the same services: infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS).
The difference between cloud and fog lies in the distance from the user, processing power, size, and number of users: the distance between the consumer layer and the cloud is thousands of kilometers, whereas the fog sits at the edge of the network.
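VM-to-PM packing is essentially a bin-packing problem. As a rough illustration of the idea (not the shadow-routing method of [2] nor the paper's migration algorithm), the sketch below uses a classic first-fit-decreasing heuristic; the VM sizes and PM capacity are hypothetical:

```python
def pack_vms_first_fit_decreasing(vm_sizes, pm_capacity):
    """Pack VMs (by resource demand) into as few PMs as possible.

    First-fit decreasing: sort VMs largest-first, place each into the
    first PM with enough remaining capacity, and open a new PM only
    when none fits. Fewer active PMs means lower energy cost.
    """
    pms = []         # remaining capacity of each active PM
    placement = []   # (vm_size, pm_index) pairs
    for size in sorted(vm_sizes, reverse=True):
        for i, free in enumerate(pms):
            if size <= free:
                pms[i] -= size
                placement.append((size, i))
                break
        else:
            pms.append(pm_capacity - size)  # open a new PM
            placement.append((size, len(pms) - 1))
    return len(pms), placement

active_pms, plan = pack_vms_first_fit_decreasing([4, 8, 1, 4, 2, 1], pm_capacity=10)
print(active_pms, plan)
```

With the six sample VMs above, the heuristic fits all of them onto two PMs instead of six.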

Problem Statement.
Cloud data centers contain a large number of PMs [2]. Virtualization technology allows cloud service providers to share resources and offer virtual machine capabilities to their customers [2]. Cloud service providers may want to pack multiple VMs into a small number of PMs to minimize operational cost and save energy [2]. The author of [2] suggested a way to solve the VM-to-PM packing problem based on shadow routing. However, when a PM is fully packed, there is a chance of congestion, and congestion causes additional delay.
Cloud data centers are growing rapidly, and the rapid growth of Internet services has imbalanced the load on network resources [3]. The bandwidth usage of some PMs may be too high, resulting in network congestion [3]. Fu et al. [3] propose a layered VM migration algorithm to effectively balance the network load. The cost of migrating between regions is high because the regions are separated from each other, so an intraregion migration algorithm is proposed to minimize the cost of interregion migration. However, the intraregion migration algorithm is expensive because it migrates VMs too often. Reference [4] proposed a VM migration algorithm using Markov decision processes and Q-learning, applied to the Round Robin, inverse ant system, max-min ant system, and ant system algorithms. In this paper, a cost-aware intraregion VM migration algorithm is proposed, and the VM-to-PM packing problem is also considered. Reference [5] proposed a VM migration process responsible for minimizing migrations, which leads to reduced response time.

Related Work.
In reference [1], the authors proposed a method based on shadow routing to solve the VM autoscaling and VM-to-PM packing problems. They intelligently pack VMs to reduce the number of managed PMs, save energy, and minimize operating costs. However, congestion can occur if there are many VMs on one PM. In references [6, 7], the authors proposed the IGGA, ACO, and firefly optimization algorithms to minimize migration cost; these algorithms are also intended to reduce high energy consumption. However, the migration cost and the load on the data center remain high. Reference [8] predicts VM migration using a chaotic Drosophila Rider neural network and uses VM migration and an optimal switching strategy to optimize power. If the predicted load state is overloaded, the HHSMO optimization algorithm is used for VM migration; if it is underloaded, HHSMO is used to switch servers ON/OFF, which improves the energy efficiency of the system. Reference [9] proposed a model that analyzes the current state of running tasks according to QoS predictions from an ARIMA prediction model optimized with a Kalman filter. Based on the analyzed QoS status, a scheduling strategy is computed by combining particle swarm optimization (PSO) and the gravity search algorithm (GSA), which reduces resource consumption and the SLA violation rate. However, minimizing the number of VM migrations alone does not reduce energy consumption, and it affects system performance. In reference [10], the author proposed a scheduling algorithm that assigns four priority levels to vehicle charging and discharging.
The proposed method optimizes the latency for plug-in vehicles, provides the communication architecture between the SG and the cloud platform, and optimizes the load during peak hours. However, the pricing policies used for discharging vehicles are not efficient for users. In reference [11], a virtual data center migration (VDC-M) algorithm was proposed to decrease high energy consumption and wasted network resources. The article considers correlated VMs as a whole rather than individually: it remaps VDC-M requests, calculates migration paths, and allocates bandwidth resources to migrated VMs. However, it consumes a lot of processing time. Munshi and Mohamed proposed the hypergraph partition algorithm (HPA) to minimize the cost of server consolidation [12]. The proposed method reduces network overhead to 50%. However, migrating one VM at a time adds cost. Table 1 summarizes the related work.
A system to handle big data based on the lambda architecture is proposed by Munshi and Mohamed [12]. The main objective of the proposed method is to perform real-time operations and parallel batching. The data of all connected smart devices is stored in a Hadoop big-data lake.
The system can handle a large amount of data, but there is a limit on the data, and the number of devices is increasing day by day.
This means more advanced research and more robust algorithms are required to keep pace with the dynamic nature of intelligent machines.
Arora et al. proposed various security algorithms that address data threats and preserve data while using the Internet [13]. The authors deploy encryption and decryption algorithms to eliminate the fear of data loss and achieve data segregation. Furthermore, they compare different algorithms based on their features.
Nepal et al. proposed an algorithm that supports long-term preservation features, such as flexibility and business continuity, which decrease risk through disaster recovery [14]. Because these features are not available on physical machines, the authors introduced them on virtual machines, with all data stored on cloud resources. Riahi and Krichen presented a multi-objective genetic algorithm [15]. Its primary purpose is to solve issues found in virtual machine placement, such as over-usage of physical devices and depletion of cloud resources.
They also used Bernoulli simulations to show that their work takes the right adaptive approach.
They report positive results in a company where the proposed algorithm was applied: costs were reduced and resources were fully optimized in the end. A negative point, however, is that there is nothing to handle big data accurately, and this problem needs solving now that data is the most critical asset for every business and organization. Khalid et al. proposed a home energy management system that resolves workload issues through harmonization [16]. It performs very well in practice, and energy is consumed efficiently and compactly. The cost of electricity is also reduced with this algorithm, but a significant flaw is that it does not consider user comfort, which needs improvement. Chekired and Khoukhi worked on energy consumption and proposed a scheduling algorithm in which electric vehicles use power based on priorities [1]. A vital benefit of the proposed algorithm is that it also takes the charging and non-charging states of the vehicles into consideration. The users are divided into two types: calendar users and random users.
The main objective of their algorithm is to supply power to the public in overload and underload hours, using four priority levels to resolve overload and underload issues. Priorities 2 and 4 denote the discharging of calendar and random users; if the demand is less than or equal to the energy production, calendar users get priority 1 and random users get priority 3 for charging. If the demand is greater than the production, priorities 1 and 3 denote the discharging of vehicles, and 2 and 4 are used for charging. An alarming aspect of the algorithm is that damaged batteries are disposed of in a way that pollutes the environment. Jensi and Jiji proposed a swarm optimization algorithm specifically designed to solve global optimization problems [17]. The authors present a framework proficient at solving optimization issues and show a comparative analysis against two existing algorithms using 21 standard benchmark functions. However, the algorithm has problems that need to be resolved: it only works with single-objective tasks and suffers from premature convergence. Faheem et al. proposed a new model dedicated to in-house infrastructure that is location independent [18]. Their model highlights security issues and future challenges; the article aims to inform users about the security risks linked with cloud storage and data accuracy.
They also presented a comparative analysis to show the compatibility of their paradigm, but issues remain in the way of progress that need attention.
A shadow routing approach is proposed by Gou et al. [19]. They introduced packing of virtual machines onto physical machines to avoid over-provisioning of cloud resources.
Their goals are to save energy and cost. The VM placement problem is also solved via autoscaling, and optimization is not needed from the beginning because the presented algorithm is adaptive.
The practicality of the algorithm is significant; nevertheless, more research and more robust algorithms are required to resolve VM placement issues. Fan et al. propose a dynamic virtual machine consolidation scheme [20].
It is an energy-aware system that decreases power usage in virtual machines. The authors worked on both the migration and the placement of VMs.
They use solid mechanisms for VM placement and perform migration based on top CPU consumption to maintain stability in cloud environments, proving the approach's worth by decreasing power consumption and balancing cloud load. But there is still a need for an algorithm that can handle critical data centers without increased migration expenditure. Mirjalili et al. offered two novel components in a multi-objective grey wolf optimization algorithm [21]. They use fixed-size archives to retain non-dominated solutions and tested the proposed algorithm with ten known standard functions to show its adaptiveness. It works with bi- and tri-objective functions, but a drawback of the algorithm is that it fails to work accurately with four or more objectives. Song et al. proposed a model for efficient energy consumption [22]. They work with a mathematical expression called EE, meaning energy efficiency, calculated from the CPU's frequency and usage.
They also implemented numerous tests to validate the proposed algorithm, confirming through theory and practical performance that the EE approach is accurate for cloud systems. A mechanism specifically designed to manage demand-side power optimization is proposed by Naz et al. [23]. It preserves stability between the production and the demand of energy via the proposed architecture for communication between users and the production unit.
Renewable resources are deployed to produce power, and the grey wolf optimization algorithm is used to balance overload and underload in energy consumption.
The algorithm successfully transfers on-peak load to off-peak hours so that there is equilibrium between demand and energy production. A drawback of the proposed algorithm, however, is that while it shifts the burden to underload hours, those hours can in turn become overloaded by the shifted load, so the problem persists.
Enhancements to the algorithm are needed to resolve this issue and work at a stable level.
An advancement of the grey wolf optimization algorithm is proposed by Wu et al. [24]. They work on the existing GWO algorithm and modify it to overcome issues such as local minima. The authors verified the algorithm on 29 tests, and it proved excellent for single-objective tasks. Still, real-time optimization problems are more critical and need more than one objective function to resolve global issues, so the modified version of the algorithm still needs more work.
An efficient algorithm to solve virtual machine placement issues is proposed by Liu et al. [25].
The idea behind this algorithm comes from the ant colony optimization algorithm. The authors' primary objective is to offer an algorithm that resolves VM placement issues and decreases the use of physical machines and energy, leading to cost reduction. But it does not help manage resources, so it is still in its infancy and more research is required.
A layered virtual machine migration algorithm is proposed by Fu et al. [3]. The regions are connected and can share each other's burden, and two algorithms are used for using cloud resources. The main objective of the proposed system is to determine whether a region is overloaded or underloaded and then return it to the normal phase. One algorithm is dedicated to this interregion communication.
Furthermore, it is a great initiative to reduce congestion so that there are no delays. But a drawback of the proposed system is that migrating even one task costs a lot, so algorithms are needed that work successfully with interregion migration at a reasonable cost.
Two pricing schemes are proposed by Javaid et al. [26], which use IT and communication with the conventional grid to make it efficient. One pricing scheme is for short-term users, and the other is for long-term users. In the short-term pricing scheme, users pay as they go, while in the other scheme, users pay for the instances in their possession.
To achieve balance between the demand of power consumers and the supply side of the power grid, a three-tier "Cloud-Fog-Consumer" architecture is established in this paper. Based on the live VM migration algorithm and three service broker policies, the migration cost and response time are reduced.

Research Methodology
Bidirectional communication architecture is proposed for the efficient management of resources in the residential area, which has three layers: the consumer layer, the fog layer, and the cloud layer, as shown in Figure 1. The cloud layer is mainly responsible for dispatching, communication, and data processing, and includes the service provider, the cloud environment, and the utility. In the consumer layer, residential areas are considered. On the basis of the six continents, the world is classified into six regions [27-36]. In this research, we consider region 0, which is North America, because this region has 80 million users [31]. Two fogs are used for two clusters of buildings to effectively meet the needs of the users. Suppose that each cluster has 10 buildings and each building has 50-80 homes. A smart meter is connected to each house, and a controller to each cluster for communication.

Table 1: Summary of related work.

[2] Objective: minimize operational cost and save energy. Technique: shadow routing for VM-to-PM packing. Limitation: there is a chance of congestion when a PM is fully packed, and congestion causes additional delay.
[3] Objective: balance the load of network resources. Technique: layered virtual machine migration. Advantage: effective management of physical and network resources and high performance in balancing the bandwidth utilization rate of hosts. Limitation: the migration cost is high.
[4] Objective: minimize resource consumption and dynamic traffic. Technique: cluster-aware VM collaborative migration scheme for the media cloud. Advantage: an ideal migration using a clustering algorithm and a placement algorithm, with effective migration of VM media servers. Limitation: the scheme does not optimize VM migration in the media cloud, and the migration cost is very high.
[5] Objective: reduce energy consumption despite high migration cost. Technique: improved grouping genetic algorithm (IGGA). Advantage: optimizes the consolidation score and reduces energy consumption. Limitation: the migration cost is still high because of migrating one VM at a time.
[6] Objective: minimize energy consumption despite high migration cost. Technique: ant colony system (ACO). Advantage: minimizes energy consumption by decreasing the number of active PMs and ensures the SLA based on quality-of-service requirements. Limitation: the migration cost is still high because of migrating one VM at a time.
[7] Objective: lessen energy consumption despite high migration cost. Technique: firefly optimization approach. Advantage: an energy-aware VM migration technique for cloud computing that migrates overloaded VMs to normal PMs. Limitation: the load on the cloud data center remains, because migration alone only achieves a high utilization rate of network resources.
A fog contains 2 PMs. Each PM has memory, storage, and a number of processors with given processor speeds and available bandwidth, as shown in Table 2. Virtualization technology enables service providers to give users the facility to use VMs instead of PMs. 60 VMs are considered in each fog. Some PMs carry load; if the load on a PM exceeds the defined bandwidth, its VMs migrate to a normal PM. Label 0 represents the overloaded region and label 1 the normal region. Data stored in the fog is temporary; to make it permanent, the fog forwards the data to the cloud. A centralized cloud platform is considered.
The cloud stores data persistently and provides utility grid facilities to meet consumers' needs. Each cluster has a controller for communication because users cannot communicate directly with the fog.
Energy demand is expressed as E_DEM. The fog communicates with the MG near the clusters of buildings. The MG uses renewable energy resources; it also has its own power generation resources and a small amount of electricity. The index of an MG is MG = 1, ..., M, and the total energy generated by an MG is expressed as E_gen. The MG sends back an acknowledgment of the energy it has available. If the MG can fulfill the consumers' need, it does so; otherwise, the fog communicates with the cloud to provide the macro grid facility. The macro grid is on layer 3 (the cloud layer) and produces a large amount of electricity, with wind turbines, fossil fuels, water turbines, and other sources supplying it. The workflow is shown in Figure 2, and the key symbols used in the paper are given in Table 3.
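The demand/supply check described above can be sketched as follows. The symbols E_DEM and E_gen follow the paper's notation, while the function name and the returned structure are illustrative assumptions:

```python
def allocate_energy(e_dem, e_gen_mg):
    """Decide whether a microgrid (MG) can serve a cluster's demand.

    If the MG's generation E_gen covers the demand E_DEM, the MG serves
    the cluster directly; otherwise the fog escalates the shortfall to
    the cloud layer, which provisions it from the macro grid.
    """
    if e_dem <= e_gen_mg:
        return {"source": "microgrid", "from_mg": e_dem, "from_macro": 0}
    shortfall = e_dem - e_gen_mg
    return {"source": "macrogrid", "from_mg": e_gen_mg, "from_macro": shortfall}

print(allocate_energy(120, 150))  # MG alone covers the demand
print(allocate_energy(200, 150))  # shortfall is supplied by the macro grid
```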
Figure 2 shows the workflow of the overall system. The first connection node is the grid station, the second is the cloud server, and the third is the fog controller. Lastly, we have the consumers and cities.

Live VM Migration Algorithm
With the continuous change of VM load, the load on a physical host may become too high, reducing service quality, or too low, leaving resources underutilized.
Therefore, it is necessary to migrate virtual machines periodically to improve service quality and the resource utilization rate. The VM migration algorithm comprehensively considers the CPU and memory usage of VMs and performs data migration to ensure load stability. The main idea of the algorithm is to migrate VMs that exceed a certain bandwidth. There is a fixed number of PMs, and their bandwidth is defined by the fog. Multiple VMs are hosted on each PM.

Service Broker Policies
These policies are used to map incoming traffic from consumers to the available fogs. The three service broker policies are described below.

Closest Data Center.
The closest data center (CDC) policy holds an index table of all fogs. It selects the fog with the smallest delay that is closest to the cluster in the same region. If the distances to the clusters are the same, a fog is selected randomly.
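The CDC selection rule, smallest delay with a random tie-break, can be sketched as below; the fog identifiers and delays are hypothetical:

```python
import random

def closest_data_center(fogs, rng=random):
    """Closest Data Center (CDC) broker policy.

    Picks the fog with the smallest delay to the requesting cluster;
    ties are broken randomly, as described above.
    fogs: list of (fog_id, delay_ms) pairs for fogs in the same region.
    """
    min_delay = min(delay for _, delay in fogs)
    candidates = [fid for fid, delay in fogs if delay == min_delay]
    return rng.choice(candidates)

# fog-2 and fog-3 tie on delay, so one of them is picked at random
print(closest_data_center([("fog-1", 12.0), ("fog-2", 5.0), ("fog-3", 5.0)]))
```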

Optimize Response Time Policy.
It keeps an index table of all fogs. Optimized response time (ORT) checks the response-time history of every fog and then selects the fog with the best response time in the same region.
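The ORT rule, pick the fog with the best recorded response time, can be sketched as follows; the history values and averaging choice are illustrative assumptions:

```python
def optimize_response_time(fog_history):
    """Optimized Response Time (ORT) broker policy.

    Scans the recorded response-time history of every fog in the region
    and routes the request to the fog with the lowest average.
    fog_history: dict mapping fog id -> list of past response times (ms).
    """
    averages = {fid: sum(ts) / len(ts) for fid, ts in fog_history.items()}
    return min(averages, key=averages.get)

history = {"fog-1": [30.0, 34.0], "fog-2": [22.0, 26.0]}
print(optimize_response_time(history))  # fog-2 has the lower average
```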

Simulation Results and Discussion
In this paper, we used the Cloud Analyst simulation tool. A live VM migration algorithm is proposed to balance the overall load on the network; it can also be used to utilize network resources effectively and minimize overall migration cost. For the experiment, this paper presents results using the live VM migration algorithm with the three service broker policies. Live VM migration always seeks the optimal solution to minimize cost, at the expense of increased processing time.

Cost Comparison.
Figure 3 shows the VM cost, MG cost, total data transmission cost, and total cost when using the live VM migration algorithm with the three service broker policies. The total cost using DRL is not good, as that policy is still under consideration; therefore, its result is not very accurate.

Response Time.
Response time (RT) is the time taken from initializing a request until the VM executes it and a response is received. The response time of the live VM migration algorithm with the three service broker policies is shown in Figure 4. The response time using the live VM migration algorithm with ORT is optimal because ORT selects the fog with the best response time. Equation (3) gives the total response time and equation (4) gives the total delay, where T_latency represents the signal delay time.
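Equations (3) and (4) are not reproduced in this excerpt. A common Cloud Analyst-style formulation computes total delay as signal latency plus data transfer time, and response time as total delay plus processing time; the sketch below uses that formulation as an assumption, not the paper's exact equations:

```python
def total_delay(t_latency_ms, request_size_kb, bandwidth_kbps):
    """Total delay = signal latency + data transfer time (assumed form)."""
    transfer_ms = (request_size_kb / bandwidth_kbps) * 1000.0
    return t_latency_ms + transfer_ms

def response_time(t_latency_ms, request_size_kb, bandwidth_kbps, t_process_ms):
    """Response time = total delay + VM processing time (assumed form)."""
    return total_delay(t_latency_ms, request_size_kb, bandwidth_kbps) + t_process_ms

# 20 ms latency + 100 ms transfer (100 KB over 1000 KB/s) + 50 ms processing
print(response_time(20.0, 100.0, 1000.0, 50.0))
```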

Processing Time.
The total time taken to process a request is called the processing time (PT). Figure 5 shows the processing times for the three service broker policies. The fog is chosen based on the best response time, so the processing time using the live VM migration algorithm with ORT is optimal. Equation (5) is used to calculate the processing time and equation (6) to calculate the total bandwidth for the user.
Figure 6 lists the values of the parameters used for the SG and VMs: storage, memory, available bandwidth, and the number of processors.
To handle SG queries, the designed methodology analyzes six areas, each controlled by 2 fog nodes. Each area contains 100 SGs, each with a maximum of 1,000 houses. Each house requests power via its SG; the request is sent to a fog node, which then allocates energy to that residence via the SG based on consumption and energy data. Each area has fully electric power distribution stations capable of charging or discharging a total of 1,000 units in 60 minutes. The energy provider (company) is connected to the web and the SGs. The web server holds all data about the grid's power production and is linked to the fog nodes throughout each location. Each cloud server contains data on energy usage in its associated area and makes real-time decisions to meet users' electricity needs. The cloud hosts interact with a remote server to designate an energy business that can meet the needs of SGs that are powerless. The suggested energy-efficiency improvement system is intended to reduce cloud server cost and delay; delay is reduced by minimizing the overall reaction time between the cloud and the SGs. The live VM migration procedure is given in Algorithm 1.

Algorithm 1: Live VM migration.
(1) input: Hostlist, VMlist
(2) CurrentTime
(3) LinkSpeed
(4) VMMigrationTime
(5) VMMigrationListTime
(6) for i: 0 to Hostlist do
(7)   host: HostLargeSize in Hostlist
(8)   while host > 0 do
(9)     VM: VMLargeSize in VMlist
(10)    for j: 1 to VMlist do
(11)      if VM > host then
(12)        VM: VM++ in VMlist
(13)      else
(14)        host: host - VM(size)
(15)        VM is in Migration
(16)      end if
(17)    end for
(18)  end while
(19) end for

Figure 7 shows the price and cost, including grid cloud transmission (GCT), of broker services provided by the power suppliers to energy consumers. In Figure 8, DRL and GCT show the cost allowance of service broker policies for their clients and consumers.
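The live migration loop described above, matching the largest pending VMs to hosts with free capacity, can be sketched in runnable form. The capacities and VM sizes below are hypothetical, and the greedy matching is a simplification of the procedure:

```python
def plan_migrations(hosts_free, vms_to_move):
    """Greedy live-migration planner, loosely following the loop above.

    hosts_free: free bandwidth/capacity of each normal (non-overloaded) PM.
    vms_to_move: sizes of VMs on overloaded PMs that exceed the threshold.
    Largest VMs are matched to hosts first; a VM that fits nowhere is
    left for the next migration round.
    """
    hosts = sorted(hosts_free, reverse=True)
    migrations, remaining = [], []
    for vm in sorted(vms_to_move, reverse=True):
        for i, free in enumerate(hosts):
            if vm <= free:
                hosts[i] -= vm        # reserve capacity on the target PM
                migrations.append(vm)
                break
        else:
            remaining.append(vm)      # no host can take it this round
    return migrations, remaining

moved, left = plan_migrations([10, 6], [7, 5, 4, 3])
print(moved, left)  # VMs of size 7, 5, 3 are placed; 4 waits for the next round
```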
The average response times of 12 different regions are shown in Figure 9, which compares the PSS, Cluster by Active VM, Round Robin, and Throttled policies.
Response time is the time taken from initializing a request until the VM executes it and a response is received.
The response time of the live VM migration algorithm with the three service broker policies is shown in Figures 9 and 10. The response time using the live VM migration algorithm with ORT is optimal because ORT selects the fog with the best response time.
The total processing time and the overall response time of all 12 regions (12 fogs) are shown in Figure 10. Both the average response time and the fog processing time are reported, and the average response time is greater than the fog processing time.
Figure 11 shows the VM cost, MG cost, total data transmission cost, and total cost when using the live VM migration algorithm with the three service broker policies. The total cost using DRL is not good, as that policy is still under consideration;
therefore, the result is not very accurate.
Figure 12 compares response time and processing time across different numbers of regions: the fog processing time with 12 fog regions is lower than the fog processing time with 6 regions. Similarly, the overall response time with 12 regions is smaller than the response time with half as many regions (6 regions, or fog-6).

Conclusion and Future Work
Cloud computing is rapidly gaining popularity; its main purpose is to provide efficient services to customers. It contains many resources, and these resources need to be managed effectively. VM consolidation is becoming more common and allows network resources to be managed effectively. VM placement and VM migration both work well, but VM migration has a stronger effect than VM placement. In this paper, we propose an integrated environment based on cloud and fog and present a bidirectional communication architecture: consumers send demands to the fog, and the MG provides energy to meet the demand. This paper proposes live VM migration to effectively balance the load of VMs in the fog. The migration cost is 18% better with CDC and ORT, and the response time with the live VM migration algorithm and ORT is 11% better than with dynamically reconfigure with load (DRL). However, processing time is increased because the live VM migration algorithm always seeks the optimal solution to minimize cost. In future work, cluster-based VM migration will be explored for more efficient results.

Dynamically Reconfigure with Load.
Dynamically reconfigure with load (DRL) combines the closest data center and optimized response time policies. It selects the fog that is closest to the cluster and has the best response time. It is also responsible for scalability: it can increase or decrease the number of VMs accordingly.
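The DRL behavior, combined proximity/response-time selection plus scaling the VM count with load, can be sketched as below. The equal weighting of the two costs and the requests-per-VM scaling rule are illustrative assumptions, not the paper's exact formulation:

```python
import math

def drl_select_and_scale(fogs, load_requests, requests_per_vm, current_vms):
    """Dynamically Reconfigure with Load (DRL) sketch.

    Selection combines proximity and response time (here a simple sum of
    the two costs, an assumed weighting), and the VM count is scaled up
    or down with the incoming load.
    fogs: dict fog_id -> (distance_cost, response_time_cost).
    """
    chosen = min(fogs, key=lambda f: fogs[f][0] + fogs[f][1])
    needed_vms = max(1, math.ceil(load_requests / requests_per_vm))
    delta = needed_vms - current_vms  # positive: spin up; negative: release
    return chosen, needed_vms, delta

fog_costs = {"fog-1": (2.0, 30.0), "fog-2": (5.0, 22.0)}
print(drl_select_and_scale(fog_costs, load_requests=450, requests_per_vm=100, current_vms=3))
```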

Figure 12: Response and processing time comparison.