A Survey of Game-Theoretic Approaches for Resource Management in Cloud Computing

Cloud computing is a groundbreaking technique that provides facilities such as storage, memory, and CPU, as well as servers and web services. It allows businesses and individuals to outsource their computing needs and to entrust a network provider with their data warehousing and processing. The fact remains that cloud computing is a resource-finite domain where cloud users contend for available resources to carry out desired tasks. Resource management (RM) is a process that deals with the procurement and release of resources. The management of cloud resources is desirable for improved usage and service delivery. In this paper, we review various resource management techniques embraced in the literature. We concentrate mainly on investigating the application of game theory to the management of required resources, as a potential solution for modeling the resource allocation, scheduling, provisioning, and load-balancing problems in cloud computing. This paper presents a survey of several game-theoretic techniques implemented in cloud computing resource management. Based on this survey, we present a guideline to aid the adoption and utilization of game-theoretic resource management strategies.


Introduction
Game theory is a formal framework that includes a set of mathematical tools for studying complex interactions between interdependent rational players. Strategic games have a variety of applications in economics, politics, sociology, and other fields. Over the last decade, there has been an increase in studies using game theory to model and evaluate modern communication networks, as well as upcoming technologies such as cloud computing and other internet computation platforms [1].
Cloud computing provides elastically scaled shared resources over the Internet to avoid the costs of overprovisioning. Cloud computing supplies three main service models, namely, platform, software, and infrastructure services. With the arrival of improved research and technology, cloud computing was identified to deliver XaaS, meaning anything (or everything) as a service, where X could be communication, storage, data, network, and so on.
Cloud computing has had several positive effects on the industry since its inception such as easy software installation and enhancement of applications. Cloud computing, being a recent IT transformation, is still in its developmental stage.
Consequently, everyone, from the technologist to the salesperson, is giving their own definition of cloud computing, thereby creating obfuscation [2]. Thus, there are many interpretations of cloud computing, but we consider the definition from the National Institute of Standards and Technology (NIST). The NIST defines cloud computing as a model for on-demand network access to a shared pool of computing resources (such as memory and CPU, among others) that can be rapidly provisioned and released with little effort or interaction from the service provider [2][3][4].
On the other hand, resource management (RM) is a process that deals with the procurement and release of resources. It is a major issue in cloud computing architecture.
This entails the impartial distribution of resources such as storage, central processing unit, servers, applications, and data. Cloud computing must avail the resources requested by users, but this is inhibited by the presence of finite resources, making it quite demanding to meet users' needs.
Another issue is how to ensure resource optimization, matching cloud users' request requirements to the cloud provider's infrastructure; likewise, how new resources can be correctly modeled for the diverse cloud computing services performed, so that the exact resources can be provisioned. Cloud users have no power over resources, so they request resources, such as CPU, memory, and storage, from cloud providers to meet their required purpose.
It is also a challenge when large data sets must be moved from one provider to another. Robust techniques are required for cloud computing resource management and optimization. Adequate knowledge and information are also needed, since much depends on the cloud service provider. This work analyses the potential of game theory as a solution for modeling the resource management and optimization problems in cloud computing. Game theory fits naturally with the architecture of cloud computing, bearing in mind that an array of actors with conflicting purposes can be scrutinized. Besides, the abundance of game-theoretic models allows the consideration of different cloud architectures and topologies.

Research Contributions.
Cloud computing offers Internet-based computing services by allocating resources to meet users' requirements, but this is not always achieved. Owing to the difficulty of availing all demanded resources, contention for shared resources increases over time. This study aims to ensure efficient and effective access to cloud computing resources, taking into account that inefficiency may cause a total system failure.
By employing an appropriate resource management strategy using game-theoretic models for resource management and optimization, we can improve cloud service delivery. This study gives a broader view and understanding of the concept of game theory and its various application areas, where game theory is used to suggest a systematic approach to decision-making. From this survey, we can deduce that resource management problems in cloud computing architecture have been tackled in depth, bringing about fairness, high utilization, and so on.
Having analyzed several game-theoretic models for resource management strategies, we observe that, under a fair allocation, no cloud user receives more resources than the others or prefers the allocation of another. As a result, the total value of resources obtained by cloud users must equal the total resources accessible in the cloud.
This paper aims to present game-theoretic models as a potential solution for resource management in cloud computing architecture, in order to obtain the most effective management strategy. The objectives are to: review various literature on resource management in cloud computing, analyze several game-theoretic models for resource management in cloud computing, discuss the findings of the analysis, and conclude.

Cloud Computing.
Cloud computing emanated from the concept of utility computing. It is defined as the provision of computational and storage resources as a metered service to users [5]. A cloud is a portion of cluster resources capable of expanding and compressing to accommodate load changes [3,6].
This idea highlights the reality that modern information technology settings necessitate the ability to dynamically increase capacity or add capabilities while limiting the need to expend money and time on new infrastructure acquisition [7]. The data centers, which consist of networked servers, cables, power supplies, and other components, form the backbone of cloud computing, hosting operating applications and storing business information [8].

Characteristics of Cloud Computing.
The following are some of the characteristics:
(i) Multitenancy: It allows a large group of users to share resources and costs; culminates in the centralization of infrastructure and, as a result, cost reductions due to economies of scale; and allows for dynamic resource allocation, which is monitored by the service provider
(ii) On-demand services
(iii) Network access, using the Internet as a medium
(iv) Scalability, by maintaining the elasticity of resources

Advantages of Cloud Computing.
The various advantages of cloud computing are listed below:
(i) Open access: Cloud services and organizations can be reached with a suitable web connection.
(ii) Enhanced economies of scale: The client enjoys lower investment and operating costs, while the supplier earns more revenue by arranging framework services with high sustainability and flexibility.
(iii) Access to on-demand infrastructure and computational power: Users may obtain computational power, storage, and other infrastructure based on their requirements under a pay-per-use program.
(iv) Enhanced resource usage: Clients use resources effectively because they return them to the cloud provider when they no longer require them. As a result, adaptability and versatility can be increased.
(v) Decreased information technology (IT) infrastructure needs: Cloud computing provides the client with infrastructure on demand. As a result, there is no longer any need to purchase IT infrastructure outright; at any point in time, the client can obtain it from a cloud provider.
(vi) Pooling of resources: The consumer, for the most part, has no knowledge of the provider's location. The supplier serves a variety of customers by assigning resources in a way that is both powerful and practical.
(vii) Organizations can focus on their core competencies: Non-IT clients can contract IT service providers for their business activity needs [9].

Cloud Computing Services.
These services are mainly based on three delivery models:
(i) Software as a service (SaaS): This allows users of the cloud to access the provider's apps (PA) over the Internet
(ii) Platform as a service (PaaS): This allows users to deploy their apps on a platform that the service provider of cloud (SPC) provides
(iii) Infrastructure as a service (IaaS): It allows users to rent information technology (IT) infrastructure such as storage, server, and networking resources provided by the service provider of cloud (SPC)
Virtual machines, cloud computing, and containers are just a few examples of IT innovation over the past two decades that focused on ensuring that consumers are isolated from the underlying physical system that runs code [10].
Some of these innovations are listed in the following section.

Serverless Computing (SC).
Serverless computing is a type of cloud computing in which the servers required for computation have been hidden away, leaving the cloud provider to decide where and how to do the computation [11]. Serverless computing, also referred to as serverless architecture, refers to function-as-a-service solutions in which a client writes code that only handles business logic and uploads it to a provider. All hardware provisioning, virtual machine and container management, and even functions such as multithreading, which are frequently incorporated into application code, are handled by that provider [10]. In contrast to typical cloud computing, serverless computing is distinguished by the fact that the infrastructure and platforms on which the services are delivered are transparent to clients. Customers are only concerned with the desired functionality of their application under this method, and the rest is left to the service provider [12].

Virtual Machines (VM).
A virtual machine is a computer file, commonly referred to as an image, that mimics the behavior of a real computer. A virtual machine (VM) works like a physical computer, such as a laptop, smartphone, or server. It features a CPU, RAM, and disks for storing files, as well as the ability to connect to the Internet if necessary. Virtual machines are software-defined computers that run on physical servers and exist only as code. In present cloud computing architecture, VMs provide unique solutions to wasted resources, application inflexibility, software manageability, and security concerns.
VMs are viewed as a safe computing resource, whereas SC is treated as a common-pool resource, vulnerable to potential overexploitation due to the uncertainty created by its shared nature [13]. In serverless computing, users' computing tasks are specified as a pipeline of event-triggered functions. In contrast to the VM model, in which users rent VMs from the cloud provider and resources may sit idle due to sporadic requests, resulting in unwanted monetary costs, the SC model allows users to offload computing tasks to the cloud provider, who remains responsible for managing the infrastructure and the respective resources.
Each VM is classified as a "secure resource" because it is rented exclusively by a single user, who benefits from guaranteed computing service. The SC, on the other hand, is often more cost-effective and has the potential to provide great user satisfaction [14].

Containers.
Containers have recently been shown to be a particularly successful lightweight method for virtualizing apps in the cloud [15]. Containers are software packages that include all of the components needed to run in any environment. Containers virtualize the operating system in this fashion, allowing them to run anywhere from a private data center to the public cloud or even on a developer's laptop. Containers are a common approach to encapsulating the code, settings, and dependencies of an application into a single object. Containers run as resource-isolated processes and share an operating system deployed on the server, ensuring speedy, reliable, and consistent deployments independent of the environment.

Cloud Computing Resource Management (RM).
According to Gonzalez et al. [7] and Jennings [16], resource management is the process of assigning computing, storage, networking, and energy resources to a set of applications in order to meet the performance targets and requirements of infrastructure providers and cloud customers. Support for heterogeneity of hardware and capabilities, the pay-per-use model, and the on-demand service model are some of the important elements that make resource management more challenging in cloud computing [9].
Manvi and Shyam [17] classified resource management into nine components:
(i) Provisioning: The allocation of resources to a workload
(ii) Allocation: Resource distribution across competing workloads
(iii) Adaptation: The ability to modify resources dynamically to meet workload demands
Journal of Computer Networks and Communications
(iv) Mapping: The relationship between the workload's resource requirements and the cloud infrastructure's resources
(v) Modeling: A framework for predicting a workload's resource requirements by describing the most significant resource management attributes, such as states, transitions, inputs, and outputs, within a given context
(vi) Estimation: An intelligent guess about the real resources needed to complete a task
(vii) Discovery: Identifying a list of resources that can be used to run a workload
(viii) Brokering: The process of negotiating the availability of resources through an agent in order to ensure that they are available at the proper time to complete the assignment
(ix) Scheduling: A timetable of events and resources that determines when a workload should begin or conclude based on the activity's duration, predecessor activities, predecessor relationships, and assigned resources
Resource management in the cloud, according to Singh [18], consists of three functions: resource provisioning, resource scheduling, and resource monitoring.

Resource Provisioning.
The authors characterize this stage as determining the appropriate resources for a given workload based on the QoS requirements stated by cloud users.

Resource Scheduling.
This is the process of mapping, allocating, and executing workloads depending on the resources chosen during the resource provisioning stage.
Resource monitoring is a complementary phase to achieve better performance optimization.
Resource management is a critical component of any cloud, and inefficient resource management has a direct impact on performance and cost, as well as an indirect impact on system functioning, since the system becomes too expensive or ineffective as a result of poor performance [6]. The cloud resource management practices connected with the three cloud delivery formats (IaaS, PaaS, and SaaS) differ. When cloud service providers can forecast a rise in demand, they can provide resources ahead of time.
Also, cloud resources are controlled on three independent levels:
(i) Cluster level: A cluster resource manager (CRM), a software complex that controls resources and tasks in a cluster to preserve its efficiency, represents the cluster level of power management. The CRM is in charge of cloud creation and deletion.
(ii) Node level: An operating system (OS) controls the high-level state of equipment by managing power at the node level. For example, the OS can put a processor (CPU) into sleep mode or spin down drives to save energy.
(iii) Hardware level: Modern CPUs have a large number of modules, some of which are not always active in a given process. As a result, unused modules can be turned off. This is accomplished using a specific circuit that is in charge of the CPU's internal power management. As a result, all administration is done at the hardware level, with no involvement from the operating system [6].
Cloud resource management necessitates complicated rules and judgments for multiobjective optimization. This is why planning ahead of time for the administration of these resources will aid a smooth transition to cloud computing.

Classification of Cloud Resources.
Cloud resources can be classified in a variety of ways, including as physical and logical resources or as hardware and software resources. Cloud computing resources could be classified as follows:
(i) Compute resources: To create computational resources, many physical resources must be pooled. Multiple elements such as processors, memory, network, and local I/O make up the computing capacity of the cloud environment.
(ii) Networking resources: The network infrastructure that interconnects the compute and storage resources and connects users to the cloud.
(iii) Storage resources: These resources manage client information and make it accessible over the network.
The cloud enables elasticity in storage resources by allowing users to scale up or down their storage space on a lease basis based on their needs, which is challenging in a traditional database [9]. The taxonomy of cloud resource management found in the literature is shown in Table 1.

Resource Allocation.
Allocation of resources is a method that ensures virtual machines are assigned when several applications demand various resources such as CPU and memory, among others. The cloud is derived from many real machines, and every real machine hosts several virtual machines that are allocated to the final users as computing resources. A virtual machine is a program that behaves like a real computer [23].
Allocation of resources in the cloud domain is done to ensure users or clients are satisfied with little processing time, while providers of these resources want to optimize the application of the resources and also make an expected profit. The distributed cloud domain is heterogeneous and dynamic. A heterogeneous cloud typically combines public and private resources from more than one cloud provider, which makes the distribution of resources quite challenging. The job of allocating resources to cloud users is also made difficult by the growth in demand from users and the availability of limited resources. As a result, several methods and variants have been suggested to assign these resources in the most effective way possible [24].
Cloud computing makes it possible for data to be saved and utilized efficiently for reliable applications. It has been identified as a standard for big data issues; one major issue encountered when resources are shared across the Internet is the need for the cloud provider to allocate exactly the resources required by cloud users.
Another challenge with the allocation of resources is aligning with the service-level agreement (SLA) bargained with the user involved. There are two types of SLA. In class-based SLA, QoS is measured for each job class based on performance metrics. In job-based SLA, QoS is measured using the metrics of individual jobs. Users, unlike providers, believe job-based SLA is the more robust of the two types [25].

Resource Optimization.
Many researchers and scientists have proposed different techniques in the area of task stability in the cloud computing environment. Excess supply of resources will bring about unnecessary costs and wastage, while undersupply of resources can disrupt the effectiveness of the application.
Optimal allocation of resources to users in a finite time to achieve excellent service depends on rightly allocating resources based on user requirements. Resource optimization refers to choosing the most appropriate option from an array of alternatives based on standard criteria. The performance of cloud computing solely depends on the level of optimization of the resources (virtual machines) allocated to users by the cloud providers [26].
Virtual Machines (VMs) imitate the physical computer system. A VM is software that uses the physical resources of a system, such as CPU, RAM, and disk storage, but is technically isolated from other computer software. The allocation of these resources based on users' requirements is a major challenge in cloud computing, and several algorithms have been presented to provide a solution. The challenges presented as a result of resource allocation are dependent on the data from the resources made available by the cloud service provider. Cloud services provide almost the same resources but differ in other terms such as performance and service types [27].

(Table 1, excerpt: Shao et al. [20] and Arvind [21] address profit maximization in cloud computing via a heuristic method to search for the optimal solution, noting that reducing the waiting times of customers is a critical issue for a cloud service provider; Siyi et al. [22] study the profit maximization problem for cloud brokers in multistage multiserver queue systems, in which customers are served at more than one stage, arranged in a series structure.)
In cloud computing, an effective resource allocation method matches the readily available resources of service nodes to the jobs that request them. It is pertinent to ensure that the total demand placed on any service node does not exceed its capacity, so that subtasks can be utilized to improve resource utilization [28].

Review of Related Works.
Cloud computing is the use of the Internet to deliver services and resources. Many programs are self-service enabled. These dynamic networks require a proper resource management strategy to assign essential resources to users' demands.
Thakur et al. [29] investigated the design and implementation of a game-theoretic approach for resource management that took into account the trust values between the federation's participating cryptographic service providers.
Liaqat et al. [30] restructured the nova-scheduler to propose a multiresource-based virtual machine (VM) placement approach to improve CPU utilization and execution time. When compared experimentally with other well-known techniques, the proposed method improved execution time by 50%. The proposed solution covers only computational resources; thus, there is a need to extend it to also consider network and storage resources.
Shuja et al. [31] investigated the enabling methodologies and technologies for sustainable cloud data centers (CDCs) from multiple perspectives. In addition, case examples from academia and business were given to validate results for CDC sustainability initiatives. According to their comprehensive survey, sustainability solutions can cut both energy expenses and the carbon footprint in CDCs. The taxonomies offered categorize the parameters of CDC sustainability measures. Several hurdles to sustainable CDCs are also identified, such as dealing with the insecurity of renewable energy supplies.
Skourletopoulos et al. [32] provided a game-theoretic formulation of the technical debt management problem at the level of cloud-based services. A technical debt measuring game is created, with the current number of players per service parameterized, and each new end-user having the option of using any of the cloud-based services available.
Wang et al. [33] formulated two distinct optimization targets: the first reduces the level of load inequality, while the second optimizes the use of resources while also limiting energy utilization. For optimal virtual machine placement, resampled binary particle swarm optimization (RBPSO) was proposed. To control the heterogeneity of the population, unnecessary calculation was limited, which enhances the feasibility and effectiveness of the algorithm. The proposed model performs better than BPSO and the genetic algorithm (GA), but dynamic handling of the virtual machines is still required to limit energy utilization and enhance service quality.
Wang et al. [34] applied a mapping and management architecture to explain the need for resource management in virtualized ultradense small-cell networks and explored the challenge of user-oriented virtual resource management. They derived closed-form solutions for spectrum, power, and price by modeling the virtual resource management problem as a hierarchical game. Furthermore, they proposed and examined the convergence of a customer-first algorithm for user-oriented service virtualization.
Koloniari and Sifaleras [1] gave a classification of ways to deal with a number of challenges faced in the design and deployment of peer-to-peer (P2P) and cloud systems, as well as a study of current developments in game-theoretic approaches in P2P networks and cloud systems.
Mustafa et al. [35] presented two consolidation-based energy-efficient techniques, maximum capacity best-fit decreasing (MCBFD) and minimum power best-fit decreasing (MPBFD), to reduce energy consumption together with the resultant SLA violations. To achieve better energy efficiency, workload consolidation and a lower threshold are utilized. The lower threshold identifies underutilized servers, which leads to a decrease in energy consumption, whereas the upper threshold reduces SLA violations by keeping some resources free to accommodate the ever-changing demands of VMs. The proposed techniques perform better than the selected heuristic-based techniques in terms of energy, SLA, and migrations.
Xiong et al. [36] proposed a lightweight infrastructure of the proof of work-based blockchains, where the computation-intensive part of the consensus process is offloaded to the cloud/fog. ey modeled the blockchain consensus process' compute resource management as a two-stage Stackelberg game, in which the profit of the cloud/fog provider and the utility of individual miners are jointly optimized.
Yang et al. [37] utilized matching theory to construct a distributed matching method that maximizes the social welfare of resource-constrained fog nodes while ensuring various fog node mining requirements.
Zafari et al. [38] presented the resource-sharing model as a multiobjective optimization problem and provided a solution framework based on cooperative game theory (CGT). They analyzed the technique of allocating resources first to native applications from each service provider and then sharing the remainder with applications from other service providers. They proposed two allocation schemes: game-theoretic Pareto optimal allocation (GPOA) and polyandrous-polygamous matching based Pareto optimal allocation (PPMPOA). The resulting allocations are Pareto optimal, and the grand coalition of all service providers is shown to be more stable.
Feng et al. [39] offered a novel gaming strategy for cyber risk management. To transfer cyber risks from the fog computing environment to a third party, they use the cyber-insurance idea.
They use a dynamic Stackelberg game to depict this dynamic interactive decision-making dilemma. They create an evolutionary subgame to assess the provider's protection and cyber-insurance subscription methods, as well as the attacker's plan.

Methodology
In this section, the methodology applied to the resource management and optimization problems is presented.

Resource Management Modeling.
Cloud computing providers operate large-scale dispersed data centers with diverse physical servers that provide several computing resources using a pay-as-you-go model. Cloud computing providers offer a collection of predefined virtual machine (VM) types to simplify users' selections. Each VM type is well defined by stipulating the number of CPU cores, the storage and memory sizes, and the sizes of other resources. In cloud computing, users deploy high-performance applications on clusters of virtual machines to achieve set tasks such as web and enterprise services. Cloud computing providers regulate their resource management strategies dynamically, since users' requirements are heterogeneous and vary over time. We are concerned with an impartial and dynamic resource management strategy for cloud computing architecture; thus, it is essential to unify control of, and manage, the physical resources using a resource management system. In this paper, we present a comparative analysis of resource management strategies and metrics in cloud computing architecture using various game-theoretic models.
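To make the notion of a VM type concrete, the following minimal sketch models a provider's catalogue of VM types, each stipulated by its CPU cores, memory size, and storage size, together with a helper that simplifies the user's selection. The type names, sizes, and the `smallest_fitting` helper are hypothetical illustrations, not any particular provider's API.

```python
# A hypothetical catalogue of VM types, each defined by the number of
# CPU cores and the memory and storage sizes (all values illustrative).
from dataclasses import dataclass

@dataclass(frozen=True)
class VMType:
    name: str
    cpu_cores: int
    memory_gb: int
    storage_gb: int

CATALOGUE = [  # ordered from smallest to largest
    VMType("small", cpu_cores=1, memory_gb=2, storage_gb=20),
    VMType("medium", cpu_cores=2, memory_gb=4, storage_gb=40),
    VMType("large", cpu_cores=4, memory_gb=8, storage_gb=80),
]

def smallest_fitting(cpu: int, mem: int) -> VMType:
    """Simplify the user's selection: return the smallest type that fits."""
    for vm in CATALOGUE:
        if vm.cpu_cores >= cpu and vm.memory_gb >= mem:
            return vm
    raise ValueError("no VM type fits the request")
```

A user requesting 2 cores and 3 GB of memory would, under this sketch, be steered to the "medium" type, the smallest catalogue entry that satisfies both dimensions.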

Game-Theoretic Resource Allocation Modeling.
Cloud computing users ask for various types of virtual machines to accomplish diverse tasks. The implementation involves multidimensional resources with changing requirements per task.

Resource Allocation Problem.
In this paper, impartial resource allocation means all users have an equal portion of resources. In cloud computing, where users' requirements are heterogeneous, resources are allotted to users according to their requirements. Each user holds a determined portion of the total cloud capacity across the diverse resources, which is called the user's main portion. The objective of impartial resource allocation is to balance the main portions of users. This resource allocation problem can be modeled using game theory.

Resource Optimization Problem.
During run time, the resources of a physical server may not be completely utilized. In a cloud computing environment, improving resource utilization requires considering resource usage along each resource dimension. To attain optimal utilization of resources, cloud computing providers consolidate virtual machines onto the available physical machines such that the resource requirements on a single server are balanced. This virtual machine placement problem can be addressed as a resource optimization problem.
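The VM placement problem just described can be treated as multidimensional bin packing. As one possible heuristic (a sketch only, with hypothetical server capacities and VM demands, and not the specific algorithm of any surveyed work), the snippet below places each VM, largest normalized demand first, on the feasible server with the tightest remaining CPU capacity:

```python
# Illustrative sketch: VM placement as multidimensional bin packing,
# using a best-fit-decreasing heuristic. All sizes are hypothetical.
servers = [{"cpu": 8.0, "memory": 16.0} for _ in range(3)]  # remaining capacity
vms = [  # CPU and memory demands of the VMs to place
    {"cpu": 4.0, "memory": 6.0},
    {"cpu": 2.0, "memory": 8.0},
    {"cpu": 3.0, "memory": 3.0},
    {"cpu": 1.0, "memory": 2.0},
]

def load(vm):
    """Largest normalized demand of a VM across resource dimensions."""
    return max(vm["cpu"] / 8.0, vm["memory"] / 16.0)

placement = {}  # VM index -> server index
for i, vm in sorted(enumerate(vms), key=lambda p: load(p[1]), reverse=True):
    feasible = [
        s for s, cap in enumerate(servers)
        if cap["cpu"] >= vm["cpu"] and cap["memory"] >= vm["memory"]
    ]
    # Best fit: the feasible server left with the least CPU slack.
    best = min(feasible, key=lambda s: servers[s]["cpu"] - vm["cpu"])
    servers[best]["cpu"] -= vm["cpu"]
    servers[best]["memory"] -= vm["memory"]
    placement[i] = best
```

Sorting by the largest normalized demand first tends to pack the hardest-to-place VMs while servers are still empty, which is why decreasing-order heuristics usually consolidate onto fewer machines than arrival-order placement.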
To achieve the set objectives, research on game-theoretic resource allocation strategies was reviewed. A comparative analysis of game-theoretic resource allocation strategies is provided in Table 2, to understand the various game theories employed, as well as their benefits and limitations.

Game Theory in Load Balancing.
The goal of cloud computing is to share resources in a consistent manner while also achieving economies of scale. Load balancing ensures resource availability while reducing server performance overhead. A load balancer's job is to distribute traffic to various servers so that they are all equally loaded. This could lead to a rise in the number of users as well as in cloud application reliability. Load-balancing algorithms can be static or dynamic, and they can be centralized or decentralized.
In the centralized model, for example, one node in the system operates as the scheduler and makes all load-balancing decisions; this node receives state information from the other nodes. In the decentralized model, load-balancing decisions are made by all nodes in the system. Since obtaining and maintaining the dynamic state information of the entire system is exceedingly expensive for each node, most decentralized systems require each node to collect and keep only partial knowledge locally, at the cost of suboptimal decisions.
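As a toy sketch of the centralized model (node names and request costs are invented, not drawn from any surveyed system), the scheduler node can keep the reported load of every worker in a priority queue and route each incoming request to the least-loaded node:

```python
# Illustrative sketch only: a centralized least-loaded dispatcher.
import heapq

class CentralLoadBalancer:
    def __init__(self, nodes):
        # heap of (current_load, node_name); the scheduler holds global state
        self.heap = [(0, n) for n in nodes]
        heapq.heapify(self.heap)

    def dispatch(self, cost):
        """Assign a request of the given cost to the least-loaded node."""
        load, node = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + cost, node))
        return node

lb = CentralLoadBalancer(["n1", "n2", "n3"])
assigned = [lb.dispatch(c) for c in [5, 3, 4, 2, 6]]
```

The single heap is exactly the global state that makes the centralized model simple, and exactly what each node in a decentralized system cannot afford to maintain.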

Game Theory in Resource Provisioning.
Due to their intensive computations and the interdependence across procedures, resource management is a crucial issue for scientific workflows. A number of algorithms and strategies have been created to manage cloud resources. Resource provisioning time, the period between a scale-up/down request and the actual provisioning/deprovisioning of resources, is one of the key challenges of cloud computing.
Wu et al. [50] investigated how cloud resources can be managed using a combination of state-action-reward-state-action (SARSA) learning and genetic algorithms. This is accomplished by picking the most appropriate set of actions to maximize resource use. The suggested method's agents are converged using a genetic algorithm, through which global optimization is achieved. By keeping track of task deadlines, the fitness function used by this evolutionary algorithm aims to achieve more effective resource use and better load balancing.
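As a toy illustration only (this is not Wu et al.'s actual code; the scoring rule, weights, and task tuples are invented), a deadline-aware fitness function of the kind described might reward even VM loads while heavily penalizing deadline misses:

```python
# Illustrative sketch only: a deadline-aware fitness function for a GA.
def fitness(assignment, tasks, num_vms):
    """assignment[i] = VM index for task i; tasks = [(duration, deadline), ...]."""
    finish = [0.0] * num_vms
    missed = 0
    for (duration, deadline), vm in zip(tasks, assignment):
        finish[vm] += duration
        if finish[vm] > deadline:
            missed += 1
    balance = 1.0 / (1.0 + max(finish) - min(finish))  # evenness of VM loads
    return balance - missed  # deadline misses dominate the score

tasks = [(3, 5), (4, 5), (4, 8)]
good = fitness([0, 1, 1], tasks, num_vms=2)  # spread across both VMs
bad = fitness([0, 0, 0], tasks, num_vms=2)   # everything on one VM
```

A genetic algorithm would then evolve candidate `assignment` vectors, selecting for higher fitness, which in this sketch means fewer missed deadlines first and a more balanced load second.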

Game Theory in Task Scheduling.
Task scheduling is the process of arranging incoming requests (tasks) in a specific order to maximize the use of available resources. One of the most challenging aspects of cloud computing is efficiently scheduling jobs and completing them before their deadlines so as to maximize processor usage and throughput while minimizing task waiting time. The scheduler's main purpose is to receive tasks from a group of users and assign them to a group of virtual machines. Service users must submit their requests online because cloud computing is a technology that offers services through the Internet.
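To illustrate the scheduler's role, here is a hedged sketch (VM speeds and task lengths are fabricated, and the greedy rule is a generic heuristic rather than any surveyed algorithm) that assigns each task, longest first, to the virtual machine that would complete it earliest:

```python
# Illustrative sketch only: greedy min-completion-time task scheduling.
def schedule(tasks, vm_speeds):
    """tasks: list of task lengths; vm_speeds: work units/sec per VM.
    Returns per-VM task lists and the resulting makespan."""
    finish = [0.0] * len(vm_speeds)          # when each VM becomes free
    plan = [[] for _ in vm_speeds]
    for length in sorted(tasks, reverse=True):  # longest tasks first
        # completion time if this task were placed on VM i
        options = [finish[i] + length / vm_speeds[i] for i in range(len(vm_speeds))]
        i = options.index(min(options))
        finish[i] = options[i]
        plan[i].append(length)
    return plan, max(finish)

plan, makespan = schedule([8, 4, 6, 2], vm_speeds=[2.0, 1.0])
```

Handling the longest tasks first is a common makespan heuristic: placing the large jobs while all machines are still lightly loaded leaves the short jobs to fill in the gaps.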

Discussion of Findings
In this paper, we examined the crucial issue of resource management in cloud computing architecture, since it affects the efficacy and proficiency of the system. This issue is presently under intense investigation by the communities of game theorists and computer scientists.
Through the literature reviewed, we gained an understanding of the concept of game theory and its various application areas. Game theory aims to provide a systematic approach to decision-making.
This posed the question of whether game theory can accurately model resource management problems in cloud computing architecture. To answer it, further research was done on how game theory and cloud computing can coexist. On this note, research on game-theoretic models for resource management was reviewed, along with the methods employed and their brief descriptions, in Section 3. The goal of resource management in cloud computing is to mitigate the underutilization of resources. Through this paper, we have shown the various approaches of game theory in modeling resource management problems in cloud computing architecture.
From Tables 3-6, some of the game theories employed are: a finite extensive-form game with perfect information [45], a repetitive game with incomplete information in a non-cooperative environment [46], a cooperative and non-cooperative gaming model [47], a cooperative game with a bargaining game algorithm [48], evolutionary game theory [49], the Stackelberg game [50], and a non-cooperation model [51].
These were applied to satisfy diverse users' requests whose needs exceed those of a single physical machine. Virtualization and placement of the required resources are enabled by optimization theory and algorithms such as the backward induction approach [45], Nash equilibrium [46], the evaporation-based water cycle algorithm [49], Pareto efficiency [48], a load balancer [50], and a decision model [51]. In this way, resources can be provisioned optimally and efficiently.
In addition, these resource management strategies, when provisioned, are found to improve resource allocation with respect to shorter task allocation time, less resource wastage, and higher request satisfaction. The comparative analysis in Tables 3-6 can serve as a specification guide for researchers, academics, and industry experts in choosing the game-theoretic resource management strategy suitable for their needs. Also, the following guidelines were developed and should be considered when choosing a resource management strategy based on game theory, to achieve impartial resource allocation and optimized resource utilization:

(i) The number of resources received by each user should be equivalent to an equal split of the total resources
(ii) No user should prefer the allocation strategy of another user
(iii) It should be impossible to increase the volume of a user's resources without decreasing the resources allocated to another user
(iv) A change in a user's strategy should not allow that user to obtain more utility
(v) The minimum usage across the several resources of the physical servers should be maximized
(vi) The irregular usage of multidimensional resources should be minimized, since most resource fragmentation is caused by unequal multiresource requirements

Table 2: Serverless computing, virtual machines, and containers compared in a cloud computing environment [15].

VM: It is sometimes referred to as an "image" and functions like a physical computer. A virtual machine (VM) runs on top of a hypervisor, which runs on a host machine or a "bare-metal" host.
SC: Serverless computing allows you to build and configure an app before uploading it to a cloud server, often one run by a large cloud vendor such as AWS or Microsoft. The cloud provider then bills you for the amount of time each program spends on its servers.
Container: Containers are a sort of virtualized computation that differs from virtual machines in that they are self-contained.

VM: Virtual machines are ideal for testing, accessing infected data, generating backups, and comparing software compatibility across OS systems.
SC: When developers use serverless platforms, they do not have to worry about having enough bandwidth or servers to deploy their apps; the vendor handles all of the back-end administration, and the compute footprint is automatically adjusted to suit the app's needs.
Container: A container is a Linux-based, standardized unit that holds everything you need to run the software. It bundles up all of an app's code, customizations, and dependencies so that it can be dropped and run anywhere.

VM: Virtual hardware, including CPUs, RAM, hard drives, and network interfaces, is present in every VM. Running VMs demands a lot of processing power because you are essentially running several "computers" at the same time.
Container: Containers are straightforward to start and stop because they are packaged into a single package. They are also easy to migrate between environments. Containers, unlike virtual machines, share the host's operating system with other containers, allowing them to be smaller and to perform multiple workloads on a single OS. Containers also consume fewer resources than virtual machines.

VM: Amazon EC2, AWS Firecracker.
SC: AWS Lambda.
Container: Amazon ECR, Amazon ECS, and Amazon EKS.

[46]: A repetitive game with incomplete information in a non-cooperative environment, solved via Nash equilibrium. It achieves Nash equilibrium even with insufficient knowledge of the environment, responds in a shorter period, and provides the lowest violation of the service-level agreement and the most utility to the provider. Its limitation is that it is considered a static pricing strategy rather than a dynamic resource allocation method.
Xiao and Tang, 2015 [47]: A cooperative gaming model. Using the Nash bargaining solution (NBS), cooperative game theory delivers the Pareto-optimal allocation of load to the user. Auto-scaling load balancing allows cloud users to make effective use of network capacity while also lowering provisioning costs.
Ramya et al. [53]: Clustering scheduling, to improve performance.
Ramya et al. [53]: Duplication (replication)-based scheduling, to achieve directed acyclic graph (DAG) scheduling with minimized task makespan and high task efficiency in the cloud service.
Subrata et al. [54]: Defined the problem as a non-cooperative game whose objective is to reach the Nash equilibrium; the proportional-scheme algorithm allocates tasks to processors in proportion to their computing power.
Abdeyazdan et al. [55]: Prescheduling algorithms for task graph scheduling; they minimize the earliest start time of tasks while reducing the overall completion time.
Swathy et al. [50]: Stackelberg model, for effective utilization of resources; it is a centralized load-balancing approach.

Table 5: A comparative analysis of game-theoretic models for resource provisioning in cloud architecture.

Author(s) | Techniques | Usage | Comments
Zou et al. [56]: Inter-/intraslice bandwidth optimization strategy. It is a challenge to allocate resources efficiently due to the heterogeneous QoS requirements of diverse services as well as the competition among different network slices.
Pham-Nguyen and Tran-Minh [57]: A multiobjective optimization strategy that combines three components (application response time, network congestion, and server usage) into one objective function; service deployment is treated as a multiobjective optimization problem. Fog computing is a model in which the system tries to push data processing from cloud servers to "near" IoT devices in order to reduce latency.
Wu et al. [50]: A resource provisioning strategy based on dynamic programming. Because clouds provide a pay-as-you-go pricing scheme, executing a workflow in the cloud means paying for the provisioned resources; thus, cost-effective resource provisioning for workflows in clouds is still a critical challenge.
Mashayekhy et al. [58]: Polynomial-time approximation scheme (PTAS).

Conclusion and Recommendation
This paper offers a resolution to the resource management problem in cloud computing architecture, wherein cloud computing providers can effectively and optimally respond to different users' requests for resources.
Many essential issues are considered, such as fairness, availability, optimization, utilization, provisioning, scheduling, and so on. To address the growing intricacy of the resource management problem in a dynamic and constantly evolving setting, this paper focused on game-theoretical methods and presented a comparison of various strategies.
Game-theoretic resource management strategies have received substantial attention in cloud computing communities and applications for resolving resource optimization, resource allocation, task scheduling, and resource provisioning problems.
Furthermore, the security issues that arise when resources are managed in a cloud computing setting were not considered in this paper. These issues pose another risk that has hindered the adoption of game-theoretic resource management strategies in cloud computing architecture. We recommend future studies into the security issues inherent in game-theoretic resource management strategies.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Non-cooperative and cooperative game model: A scheduling framework for real-time tasks using game theory concepts. The results showed that the cooperative game model for task scheduling performs better than the non-cooperative game model: the total completion time and total waiting time in the cooperative game model are less than in the non-cooperative game model.
Ni et al. [41]: A three-layer scheduling model based on a whale-Gaussian cloud, that is, a whale optimization strategy based on the Gaussian cloud model (GCWOAS2). It is used for multiobjective task scheduling in cloud computing to minimize task completion time by effectively utilizing virtual machine resources, to keep the load of each virtual machine balanced, and to reduce the operating cost of the system.
Jafar Ababneh [42]: A hybrid multiobjective approach called hybrid grey wolf and whale optimization (HGWWO), which integrates two algorithms, the grey wolf optimizer (GWO) and the whale optimization algorithm (WOA). It performs at a superior level compared with the original GWO and WOA algorithms on their own with regard to cost, energy consumption, makespan, use of resources, and degree of imbalance. Implications of cloud scheduling are the planning of tasks on virtual machines and the attenuation of performance.
Aggarwal et al. [43]: Fruit fly optimization (IFFO) algorithm, to minimize makespan and cost for scheduling multiple workflows in the cloud computing environment. Multiobjective workflow scheduling with scientific standards to optimize QoS parameters is a challenging task.
Jia et al. [44]: An improved whale optimization algorithm, referred to as IWC, which uses the inertial weight strategy to improve the local search ability of the whale optimization algorithm and effectively prevent premature convergence. The IWC algorithm achieves good results in terms of task scheduling time, scheduling cost, and virtual machine.
Gawali and Shinde [65]: A heuristic approach that combines the modified analytic hierarchy process (MAHP), bandwidth-aware divisible scheduling (BATS) + BAR optimization, longest expected processing time preemption (LEPT), and divide-and-conquer methods to perform task scheduling and resource allocation. Bipartite graphs are utilized to map tasks to appropriate virtual machines; the CyberShake and Epigenomics scientific workflows were used to evaluate the scheduling algorithm.