A Resource Scheduling Method for Enterprise Management Based on Artificial Intelligence Deep Learning

Under the current trend of economic globalization and international competition, more and more production enterprises are introducing the project management model, that is, treating each customer or order as a project, and using the concepts of project management to manage operations. An enterprise serves multiple customers at the same time, and the production line can also carry multiple orders from different customers at the same time. That is to say, multiple projects run concurrently, resulting in continuously changing management processes, heavy project coordination tasks, and serious waste of corporate resources. The resources that can be used are limited, so in production enterprises using project management, how to rationally utilize the limited scheduling resources has become a research focus. Aiming at methods of enterprise management resource scheduling, this paper investigates the application of an enhanced particle swarm algorithm on a cloud deep learning platform, together with the mapping problem between virtual machines and physical machines. As a heuristic algorithm, particle swarm optimization is suitable for solving combinatorial optimization problems. By improving the diversity of the particle swarm and tuning its parameters, it improves the convergence speed and accuracy of the algorithm. Through an analysis of the current situation of multi-project enterprise management, the paper examines the allocation of resources and establishes a resource allocation model for multiple projects. The results show that the method improves the efficiency of the enterprise by 35% compared to the traditional method, with a 20% reduction in personnel. A better configuration scheme is simulated in MATLAB, which verifies the scientific soundness and effectiveness of the method studied in this paper.


Introduction
As the diversification of consumer demand becomes more obvious, enterprises continue to introduce new products in order to better meet that demand. This drives the next round of product development competition and increases the variety of products. However, some enterprises blindly follow the trend, and if the sales chain is interrupted even slightly, they accumulate a large amount of inventory. This ties up a lot of capital, which seriously affects the turnover speed and capital efficiency of enterprises, and in turn affects the sustainable competitiveness of the company. As consumer demand has become more unpredictable, there are many homogeneous products on the market alongside many personalized needs: consumers cannot get what they want, and companies cannot provide what consumers want. With the intensification of market competition, the rhythm of economic activities is getting faster and faster; therefore, every business spends more time researching consumers. If a business is relatively slow in meeting consumer demand, it will soon be overtaken by competitors, because for today's companies market opportunities are scarce and the time to think and make decisions is very limited. Therefore, shortening the product development and production cycle and meeting the needs of consumers as fast as possible has become a central concern of enterprises and managers.
Big data can solve the storage and processing of data, and cloud computing can realize the efficient and flexible utilization of computing resources, so combining the two is very practical. How to analyze big data, extract its elements, and visualize large amounts of data has become an important research topic, and deep learning offers a way to address it. Deep learning is a type of machine learning: inspired by the structure of the human brain, it builds multilayered neural networks and continuously trains on data to capture highly complex nonlinear relationships. Cloud deep learning is provided by a deep learning platform, but GPU performance suffers when virtualization technology is used in the cloud environment, so users may also choose dedicated GPU training servers to improve performance. The GPUs discussed here range from cloud GPU servers rented from cloud service providers to GPUs in physical machines. However, many problems arise when researchers use the TensorFlow deep learning framework to train neural networks. When training on GPU servers, GPU usage relies on manual and static device allocation, and the resulting uncertainty in resource allocation prevents users from exploiting computing resources efficiently. In addition, using a cloud deep learning platform for deep learning work requires solving the mapping problem between virtual machines and physical machines. Therefore, for deep learning platform training and GPU server training in these two cloud environments, it is very important for deep learning developers to plan deep learning workflows and use computing resources more efficiently.
With limited resources, the immediate challenge is that the tasks undertaken by the business keep increasing while its resources and organizational structure remain relatively fixed. In particular, the total human, material, and financial resources of an enterprise are limited. With a single project, the enterprise only needs to put all resources into that project, and the project schedule will not be disturbed. However, if the enterprise runs multiple projects, they compete for, or share, some resources and interfere with each other's schedules. In addition, the managers of the various projects only consider how to meet the resource requirements, construction schedule, and cost of the project for which they are responsible, and focus on achieving that project's goals. This weakens the enterprise's overall planning, allocation, and management of the resources shared across projects, which must be addressed under limited resources.
This requires research into multi-project planning to establish resource scheduling and allocation methods for specific projects.

Related Work
In recent years, with the continuous deepening of globalization, experts around the world have carried out a great deal of research on enterprise management resources, in order to build strong corporate competitiveness in fierce market competition and to secure the survival and long-term development of enterprises. You C reduced the optimization problem into two sequential problems, corresponding to the optimal scheduling order and to joint data partitioning and time division given that optimal order. It was found that the optimal time-sharing strategy tends to balance the defined effective computing power among mobile devices through time sharing [1]. Zhang believed that, driven by the growth of massive wireless data traffic from different application scenarios, 5G networks based on network slicing should utilize efficient resource allocation schemes to improve the flexibility and capacity of network resource allocation [2]. Tsiropoulou E E addressed this problem by leveraging common interest, physical and energy-aware clustering, and a resource management framework for wireless powered communication (WPC) technologies. Within the proposed framework, numerous M2M devices initially form different clusters based on the low-complexity Chinese restaurant process (CRP) [3]. Chen C developed a supply and demand system model for a study area using system dynamics modeling tools. To explore the optimal resource management scheme by testing the system response under various scenarios, he identified the main factors that affect the response to achieve a balance of sustainable socioeconomic development [4]. Although the above research provides useful references, it remains one-sided.
We bring artificial intelligence and deep learning algorithms into the research of enterprise management resource scheduling methods. Chen first introduced the concept of deep learning into hyperspectral data classification, verifying stacked autoencoders against classical classification based on spectral information [5]. Shen believed that recent advances in artificial intelligence, especially in deep learning, are helping with identification, scheduling, and resource allocation. Central to these advances is the ability to utilize hierarchical feature representations that are learned purely from data, rather than hand-designed features based on domain-specific knowledge [6]. Ravi believed that deep learning, a technology based on artificial neural networks, has emerged in recent years as a powerful machine learning tool that is expected to reshape the future of artificial intelligence [7]. Although the above research is relatively comprehensive, its analysis is not deep enough, so its methods are not adopted in this paper.

TensorFlow Deep Learning Framework
In order to relieve humans of repetitive tasks, humans have created computers with massive storage space and ultra-high computing speed, which can perform demanding tasks such as scientific computing and statistics. However, some problems that humans solve easily cannot yet be solved by computers, such as natural language understanding, image recognition, and speech recognition, so they need artificial intelligence [8]. The computer itself lacks intelligence: it has obvious advantages in computing speed, but it is often inefficient in decision-making and analysis. This insufficient intelligence is a serious problem, and because of it, early artificial intelligence systems such as IBM's Deep Blue could only solve problems in specific environments [9]. To enable computers to handle information in an open environment, researchers used knowledge bases to give computers access to artificially curated information. But building a knowledge base requires large amounts of human and material resources, and it is limited to information that can be published in a fixed format that computers can understand [10]. Machine learning works by computing, from training data, the correlation between a set of features derived from the data and the predictions to be made. However, the results depend heavily on carefully prepared data, and direct data acquisition is difficult for a number of reasons. Therefore, approaches that automatically learn features from data are expected to take over such tasks [11].
As shown in Figure 1, the most common network structure consists of an input layer, hidden layers, and an output layer. Neurons in one layer connect to neurons in adjacent layers, and each connection carries a weight that is adjusted as the network is trained on data. Since the output layer's result depends on every weighted connection, training adjusts the weights to obtain the best results, and the trained network maps input information to the output function, i.e., completes the prediction [12].
With the development of deep learning, more and more deep learning frameworks have been introduced, and TensorFlow has become increasingly popular. On GitHub, TensorFlow has long been among the most prominent open-source machine learning projects; it was highlighted in GitHub's annual developer report and is highly rated by deep learning experts. From Table 1, we can see the key position of the current deep learning frameworks in the industry [13].
TensorFlow borrows and builds on the strengths of DistBelief, Google's earlier AI system. Its name combines two key concepts of the framework: Tensor, an N-dimensional array, and Flow, computation described as a graph of nodes and edges. TensorFlow passes data features through the graph and performs end-to-end computation in this process graph; data structures representing data features are fed into neural networks with input, hidden, and output layers [14]. For convenience, TensorFlow provides several APIs. Using these APIs, developers can easily create various network models, such as CNN, RNN, and LSTM. As shown in Figure 2, the TensorFlow architecture is divided into front-end components, a hardware device layer, and an application layer [15].
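The dataflow idea behind TensorFlow can be illustrated with a minimal pure-Python sketch (the Node and Const classes below are illustrative inventions, not TensorFlow's actual API): values flow along the edges of a graph, and each node applies an operation to the values arriving on its incoming edges.

```python
# Minimal dataflow-graph sketch (illustrative, not TensorFlow's real API):
# nodes hold operations, edges carry tensors (here plain lists of floats).

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function applied to the input values
        self.inputs = inputs  # upstream nodes (the incoming edges)

    def eval(self):
        # Pull values along incoming edges, then apply this node's op.
        return self.op(*(n.eval() for n in self.inputs))

class Const(Node):
    def __init__(self, value):
        super().__init__(lambda: value)

# Build a tiny graph: y = relu(x * w + b)
x = Const([1.0, -2.0, 3.0])
w = Const(2.0)
b = Const(0.5)
mul = Node(lambda v, s: [vi * s for vi in v], x, w)
add = Node(lambda v, s: [vi + s for vi in v], mul, b)
relu = Node(lambda v: [max(0.0, vi) for vi in v], add)

print(relu.eval())  # [2.5, 0.0, 6.5]
```

Evaluating the final node pulls data through the whole graph, which is the same pull-based execution model that makes it possible for a framework to place different subgraphs on different devices.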

Cloud Computing Architecture.
Cloud architecture has two aspects: services and management. Services are divided into Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Management covers personnel, storage, data, infrastructure, and the other functions needed to keep the cloud platform running normally. The complete structure is shown in Figure 3 [16].

Virtualization Technology.
Cloud computing transforms information technology capabilities into services, and it is built on virtualization technology. Virtualization abstracts computer resources such as memory, CPU, network, and storage; this approach breaks the original tight coupling of hardware and software, simplifies data center management, and improves resource efficiency [17]. Because the abstraction is invisible to the user, configuring the system becomes more intuitive, and the usable resources are no longer limited to the physical components of the original machine. Virtualization technology is thus a method of flexibly provisioning resources without altering the hardware structure, and for applications it enables integrated resource management, distribution, and scalability. CPU and GPU are both components of computer systems, but they differ [18]: the former must handle logic control as well as computation [19], while the latter contains a large number of compute units. Given the rapid growth in computing demand in recent years, using multiple GPUs in parallel yields better performance on demanding tasks such as deep learning [20]. Currently, there are four main ways to virtualize GPUs: direct GPU passthrough, GPU emulation, API remoting at the middle layer, and full GPU virtualization, with communication between guest and host handled in the system, as shown in Figure 4 [21]. The resource layer is not only the bottom layer of the cloud computing architecture but also its most important layer, because the realization of any upper-layer service relies on the support of hardware devices.
This level mainly applies virtualization technology, which makes resource usage more flexible by converting physical resources into virtual ones. As an important technology in cloud computing, it realizes the integration of heterogeneous resources with the help of network connections. Platform layer: this layer is also essential in the three-tier architecture. It connects the application layer and the resource layer and provides services to both. For the resource layer, it provides real-time monitoring of virtual machines, ensures their normal operation, and performs virtual machine migration when necessary; it also handles resource discovery and recovery, task scheduling, and functions such as disaster recovery. For the application layer, it provides permission management. The resource scheduling service in cloud computing is subdivided into task scheduling and virtual machine scheduling according to the object being scheduled: the former maps tasks to virtual machines, and the latter maps virtual machines to physical machines. From Figure 5 we can see that they operate mainly in the resource layer.
Whenever a user submits a task to the data center, a task scheduling strategy is required. The task is divided, and the different subtasks are placed in different virtual machines to execute. Sometimes subtasks have requirements on execution order, and mapping subtasks to virtual machines is exactly where task scheduling plays its role. This process relies on first-level scheduling.
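This first-level step can be sketched in a few lines of Python (a hedged illustration: the dependency data, round-robin assignment rule, and VM count are invented for the example, not taken from the paper): subtasks are ordered by their precedence constraints with a topological sort and then assigned to virtual machines.

```python
from collections import deque

# Hedged sketch of first-level task scheduling: order subtasks by their
# precedence constraints (topological sort), then assign them round-robin
# to virtual machines. Dependency data below is illustrative.

def schedule_tasks(deps, n_vms):
    """deps[t] = list of subtasks that must finish before t starts."""
    indeg = {t: len(d) for t, d in deps.items()}
    succ = {t: [] for t in deps}
    for t, d in deps.items():
        for p in d:
            succ[p].append(t)
    ready = deque(sorted(t for t, k in indeg.items() if k == 0))
    order, assignment = [], {}
    while ready:
        t = ready.popleft()
        order.append(t)
        assignment[t] = len(order) % n_vms  # simple round-robin VM choice
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order, assignment

deps = {"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
order, assignment = schedule_tasks(deps, n_vms=2)
print(order)  # ['t1', 't2', 't3', 't4']
```

Real schedulers replace the round-robin rule with load- or cost-aware placement, but the precedence-respecting ordering is the invariant part.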
The virtual machine encapsulates the resource requirements for user task execution, and it must be mapped to a specific physical machine before task execution can begin. Because the relationship between virtual machines and physical machines is many-to-one, the total resources of the virtual machines running concurrently on a host are limited by that physical machine's total resources. How to create virtual machines on physical machines, what strategy to adopt for virtual machine migration, and how to balance the load of each physical machine: this series of processes all depends on the second-level scheduling [22].
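As a concrete illustration of second-level scheduling, the following is a hedged sketch of first-fit-decreasing placement of virtual machines onto physical machines, one simple heuristic for this many-to-one mapping (the capacity and demand numbers are invented, and this is not the paper's improved particle swarm method):

```python
# Hedged sketch: first-fit-decreasing placement of VMs onto physical
# machines, tracking one resource dimension (e.g., CPU cores).

def place_vms(vm_demands, pm_capacity):
    """Map each VM (by resource demand) to a physical machine index."""
    pms = []          # remaining capacity of each opened PM
    placement = {}
    # Sort VMs largest-first so big VMs do not get stranded.
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(pms):
            if free >= demand:
                pms[i] -= demand
                placement[vm] = i
                break
        else:
            # No existing PM fits: open a new one.
            pms.append(pm_capacity - demand)
            placement[vm] = len(pms) - 1
    return placement, len(pms)

demands = {"vm1": 6, "vm2": 4, "vm3": 3, "vm4": 3, "vm5": 2}
placement, used = place_vms(demands, pm_capacity=8)
print(used, placement)  # 3 physical machines are enough here
```

Heuristics like this give a baseline against which metaheuristic placements, such as the particle swarm approach studied in this paper, can be compared.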

Main Resource of Deep Learning.
The GPU has more ALUs (Arithmetic and Logic Units) than the CPU, and these additional computing units give the GPU an advantage in floating-point processing capability. Next comes the need for bandwidth. On the one hand, the GPU incorporates a large number of distributed caches internally to serve high-bandwidth data-reuse scenarios. On the other hand, the GPU has always used the most advanced memory technology, and its video memory bandwidth is much higher than the CPU's memory bandwidth, enabling the GPU to read and process data at high speed. The following experimental comparison reflects the difference between GPU and CPU in performing deep learning tasks: the training time of a common MNIST (handwritten digit recognition) task in a convolutional neural network on the TensorFlow platform. The batch size of the task is set to 100, the number of epochs is 4, and the learning rate is 0.05; the other hardware parameters are given in the experimental environment description below. Figure 6 compares the training time when using the CPU alone with 1, 2, 4, 8, 16, and 32 threads against using the GPU alone. From the figure, we can see the acceleration that the GPU brings to deep learning training. Without optimizing the network model, the GPU increases training speed by 67 times compared with a single CPU thread, a significant acceleration. As deep learning develops, more complex neural networks will appear and face larger-scale training data, training times will grow, and the use of GPUs becomes ever more important.
In the training process of a neural network, the full data set must be passed over multiple times so that the data features are fully learned and performance improves. The usual practice is to set a larger epoch value and train on the data multiple times. This increases the proportion of GPU usage and reduces the error of time estimation. The time required to complete the task under different epoch settings was measured and analyzed experimentally. The results show that when the epoch count is set to 32, the estimation error of the task completion time falls to about 5%, which confirms the effectiveness of this method. To test the accuracy of the deep learning task time estimation described above, this paper reports the estimated and actual running times (in seconds) of deep learning tasks under different epoch settings in Table 2.
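The estimation idea can be sketched in a few lines of Python (the timings below are synthetic stand-ins, not the measurements reported in Table 2): time a few warm-up epochs, average them, and extrapolate to the full epoch count.

```python
# Hedged sketch of epoch-based time estimation: average a few warm-up
# epoch timings, then extrapolate to the full task. All numbers are
# synthetic stand-ins for the paper's measured values.

def estimate_total_time(warmup_times, total_epochs):
    """Estimate full-task time from per-epoch warm-up timings (seconds)."""
    per_epoch = sum(warmup_times) / len(warmup_times)
    return per_epoch * total_epochs

# Synthetic per-epoch timings for 4 warm-up epochs (seconds).
warmup = [12.1, 11.9, 12.0, 12.0]
est = estimate_total_time(warmup, total_epochs=32)
actual = 380.0  # hypothetical measured time for 32 epochs
error = abs(est - actual) / actual
print(round(est, 1), round(error * 100, 1))  # 384.0 1.1
```

The larger the epoch count, the more start-up overhead is amortized, which is why the observed estimation error shrinks as epochs increase.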

Mathematical Description of Particle Swarm Optimization

Particle swarm optimization is a heuristic algorithm. In its mathematical model, m particles move in a D-dimensional search space and iteratively search, guided by the objective function, until the optimal solution is found. The position of particle i is expressed as X_i = (x_i1, x_i2, ..., x_iD) and its velocity as V_i = (v_i1, v_i2, ..., v_iD). Every time an iteration is completed, particle i updates its velocity and position according to

v_id(t+1) = v_id(t) + c1 r1 (p_id - x_id(t)) + c2 r2 (p_gd - x_id(t)),
x_id(t+1) = x_id(t) + v_id(t+1),

where p_id is the particle's personal best position, p_gd is the swarm's global best position, c1 and c2 are learning factors, and r1 and r2 are random numbers in [0, 1]. However, particle swarm optimization also has shortcomings. In the course of continued research, in order to balance the search speed and accuracy of the particles, the standard particle swarm optimization algorithm was developed. It applies an inertia weight w to the particle's previous velocity, representing the effect of that velocity on the current update:

v_id(t+1) = w v_id(t) + c1 r1 (p_id - x_id(t)) + c2 r2 (p_gd - x_id(t)).

A common strategy linearly reduces the inertia weight over the iterations: a large value in the early stage ensures that the optimal solution is searched over a wide range, and as the iteration progresses the inertia weight keeps shrinking and the swarm aggregates, at which point stronger local search ability is needed. The following nonlinear adjustment is therefore proposed:

w(t) = w_max - (w_max - w_min) (t / T_max)^k,

where w_max and w_min are generally 0.9 and 0.4 (values that give good optimization results), t is the current iteration, T_max is the maximum number of iterations, and k is a constant. To choose a suitable k, a series of experiments was carried out, as shown in Figure 7; k = 0.3 gives better results. This also shows that the improved method outperforms the linear adjustment obtained with k = 1.
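The standard algorithm with a nonlinearly decaying inertia weight can be sketched in Python (a hedged illustration, not the paper's full improved algorithm: the swarm size, learning factors, bounds, and test function are assumptions for the example):

```python
import random

# Hedged sketch of standard PSO with the nonlinear inertia weight
# w(t) = w_max - (w_max - w_min) * (t / T_max) ** k, using k = 0.3.

def pso(objective, dim, n_particles=30, t_max=300, k=0.3,
        w_max=0.9, w_min=0.4, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [objective(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(t_max):
        w = w_max - (w_max - w_min) * (t / t_max) ** k  # nonlinear decay
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Clamp positions to the search bounds.
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            f = objective(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

# Minimize the sphere function; its optimum is 0 at the origin.
best, best_f = pso(lambda p: sum(c * c for c in p), dim=3)
print(round(best_f, 6))
```

With k < 1 the weight drops quickly at first and then flattens, which matches the intent described above: broad exploration early, then a long tail of fine-grained local search.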
Based on the above analysis, a further improvement dynamically adjusts the learning factors with the number of iterations; a common form decreases c1 and increases c2 linearly:

c1(t) = c1_max - (c1_max - c1_min) t / T_max,
c2(t) = c2_min + (c2_max - c2_min) t / T_max,

so that particles rely more on their own experience early in the search and more on the swarm's experience later. The test results are shown in Table 3.

Application Example of Enterprise Management Resource Scheduling Method
A Steel Heavy Industry Co., Ltd., organizes production using the make-to-order (MTO) production method. The company simultaneously undertook the manufacturing projects of three mechanical lifting trolleys of different models (the orders were placed at the same time), numbered Project A, Project B, and Project C. The indicators of each project are shown in Table 4, and each project contains 11 tasks. Based on experience with similar past projects and expert judgment, the task information data of each project were determined, as shown in Figure 8. According to the logical relationship of the task sequence, the network diagram structure of the three projects is the same, and the logical relationship of each task is given in Figure 8. The contract amounts of the three projects are 1 million, 900,000, and 800,000, respectively, and the delivery times are 50, 60, and 70 days. The late delivery penalty is 6/10 of the contract amount. Figure 8 also shows the maximum supply of the projects' shared resources. A reasonable resource allocation plan must now be formulated so that the three trolleys can meet their delivery requirements. Whatever the current reality, many enterprises are transforming from traditional functional department management to a project-based management mode.
This phenomenon is more pronounced in order-design-production enterprises. Companies such as Motorola, Bell, and Ailant can not only accommodate customers' special requirements but also produce low-cost, high-quality products with short lead times. Compared with traditional manufacturing, the coexistence of multiple projects in a project-based business has the following advantages. It can respond quickly to dynamic market demands: it adopts modular design, and production uses mass-production techniques as much as possible, so customers can get products that exactly meet their needs at a lower price. It reduces production costs: because goods are made to order, the company's inventory is reduced, spending on backlog stock falls, and carrying costs drop. It shortens delivery time and makes full use of business resources: the modular approach shortens product launch time and improves product quality. As product designs revolve around more integrated parts, production monitoring becomes more effective through standardization of parts, materials, and manufacturers, and quality can be greatly improved. Through efficient use of resources, the enterprise continues to improve quality, further enhances mass customization (MC) production and control capabilities, and improves its economic benefits. Because it can produce and deliver products according to consumers' needs, it increases sales volume, occupies a wider market, and increases profits. During the production process, it takes full account of the actual status of other ongoing projects, reasonably allocates limited resources and schedule slack, and the output-input ratio of the business improves significantly. First of all, due to the parallelism of multiple projects, it is necessary to prioritize the projects being implemented in order to plan resources reasonably.
Now the three projects of the enterprise, Project A, Project B, and Project C, are taken as the objects of priority evaluation. The evaluation index system established in this paper is adopted; that is, 10 priority evaluation elements are scored directly. After collecting the project data and five experts' scores on the evaluation elements, the score statistics shown in Table 5 are obtained.
After solving the resource scheduling scheme of the program group, the number of resources allocated to each task of each project and the resulting construction periods are obtained; the specific data are shown in Figure 9. This article has discussed the formulation process and method of an enterprise program resource scheduling plan. First, the order items of the enterprise must be prioritized before planning: by constructing a project priority evaluation system, the gray relational evaluation method is used to determine project priority. On this basis, a mathematical model of the enterprise's actual resource scheduling problem is established through the deep learning methods of artificial intelligence. The objective function minimizes the weighted total overdue time and the overdue penalty of the program group, subject to the given constraints. Using the resource scheduling solution algorithm, the model is solved to determine the resource scheduling scheme of the project group. Finally, the planning process and method are applied to an enterprise example to solve a practical problem.
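The gray relational evaluation step can be sketched as follows (a hedged illustration with invented scores and only four elements, not the paper's Table 5 data or its exact weighting): each project's scores are compared with a reference sequence formed from the best score on each element, and projects are ranked by their gray relational grade.

```python
# Hedged sketch of gray relational analysis (GRA) for ranking project
# priority from expert scores. The score matrix is illustrative only.

def gray_relational_grades(scores, rho=0.5):
    """scores[i][j]: score of project i on element j (higher is better)."""
    n_elems = len(scores[0])
    # Reference sequence: the best score per element.
    ref = [max(row[j] for row in scores) for j in range(n_elems)]
    # Absolute deviations from the reference sequence.
    dev = [[abs(row[j] - ref[j]) for j in range(n_elems)] for row in scores]
    dmin = min(min(r) for r in dev)
    dmax = max(max(r) for r in dev)
    # Gray relational coefficient per element, then the mean grade.
    grades = []
    for row in dev:
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / n_elems)
    return grades

# Illustrative expert scores for Projects A, B, C on four elements.
scores = [
    [9, 8, 7, 9],  # Project A
    [7, 9, 8, 6],  # Project B
    [6, 7, 9, 7],  # Project C
]
grades = gray_relational_grades(scores)
ranking = sorted(range(3), key=lambda i: -grades[i])
print([chr(ord("A") + i) for i in ranking])  # ['A', 'B', 'C']
```

In practice the per-element coefficients would also be weighted by element importance before averaging; the equal-weight mean here is the simplest variant.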

Discussion
In the process of implementing program resource scheduling management, the objects of control are multiple projects, so the entire enterprise presents a multidimensional state. The implementation and control process is relatively complicated, so a sound dynamic control system must be built to ensure the effective implementation of resource scheduling management. Based on deep learning methods combined with enterprise artificial intelligence, this paper studies the implementation of resource scheduling from three aspects: the key nodes for resource scheduling and monitoring of enterprise order items, the enterprise organization model, and the resource information platform. Enterprise program resource scheduling monitoring refers to the inspection, supervision, and correction activities carried out by the enterprise organization in a dynamically changing environment to ensure that the established resource scheduling goals are met. To make this monitoring effective, the following basic requirements must be followed. Goal orientation: the enterprise's program resource scheduling control work should be oriented toward the resource scheduling goal; that is, on the basis of maximizing resource utility, the delivery time of each order item should be met. Goal orientation should also be consistent with the strategic goals of the enterprise to ensure effective enterprise-level goal management, so that the market competitiveness of the enterprise continues to grow. Key-point management: the enterprise's program resource scheduling and control work should focus management and control on the key points of resource scheduling management, ensuring the effective advancement of enterprise resource scheduling. Scientific management: resource scheduling of enterprise program groups should be conducted scientifically within resource management activities.
To improve the robustness and effectiveness of management, large amounts of data must be collected and properly analyzed, so that resource scheduling is logical and economical; applying such scientific methods to resource scheduling management activities is the principle of scientific management. Adapting measures to local conditions: this principle means that the resource scheduling management system must be individually designed to suit the actual situation of the specific business, department, work, and project; it cannot simply copy the practices of others. Because industries differ, scales differ, and even the same business differs across its development stages, the management focus, organizational structure, and management style differ as well. Therefore, scheduling management cannot be uniform: the resources of each business, and its management standards and procedures, are all different.
According to the characteristics of the whole project life cycle, the project is divided into pre-project preparation, project implementation, project completion, and project warranty phases, with corresponding monitoring objectives. Indicators and corresponding management requirements are set for the main objectives of each phase of each business project; only when the indicators and requirements are met can the next stage or management objective be entered. When setting the main goals of planned resource scheduling, the unity of multiple projects should be considered, covering project quality, investment, progress, safety, and risk management. The specific monitoring content must be consistent with the target system, and the business expectation system must be managed consistently. In the pre-preparation stage of the project, order item selection, project resource information tracking, and review of the project resource scheduling plan become the focus of monitoring. This is mainly based on the enterprise's response to market and customer demands and its evaluation of its own resources and investment capabilities in the pre-project preparation stage. In the project implementation stage, the enterprise should bring the project production work into the monitoring focus, mainly checking whether the project department has the required production conditions. The purpose is to assess the availability of the relevant human, material, and financial resources of the project, and of the corresponding technical capacity. In the project implementation stage, the enterprise can inspect the implementation of resource scheduling at different stages of the product production process with appropriate inspection methods, comprehensively considering product quality, safety, progress, and investment.
Increasing the frequency of inspections can ensure the effective implementation of resource scheduling and the realization of overall project goals. In the pre-acceptance stage of project completion, the enterprise can comprehensively inspect project resource utilization efficiency and product quality by setting up a pre-completion acceptance inspection.
This is also an important node for monitoring the effect of the order project before delivery. After the project is officially completed, the enterprise conducts final monitoring of the project department upon completion of the project management tasks. The purpose is to check the completion of the project and provide timely feedback on the monitoring results.

Conclusion
In enterprise management, resource scheduling is an important issue that program management is concerned with and urgently needs to solve. Combining the actual production and operation situation of the enterprise, this paper conducts an in-depth study on the resource scheduling management of its program group. For the situation of task training through a cloud deep learning platform, the paper applies an improved particle swarm algorithm to the mapping problem between virtual machines and physical machines. The algorithm is improved in the initialization stage of the particle swarm, focusing on improving swarm diversity. The handling of out-of-bounds particle velocities adds randomness to the original processing method. For the inertia weight, different values are used in different stages of the iteration to balance the global and local search capabilities of the particles. The paper controls the resource scheduling of enterprise program groups from three aspects: the key nodes of resource scheduling management and monitoring, organizational structure design, and the establishment of an information platform. It determines the key nodes of enterprise program resource scheduling and monitoring; it fully considers the organizational needs of program resource scheduling and designs and optimizes the organizational form to provide a sound organizational guarantee; and it takes modern information technology as an effective management tool, providing a platform that can collect, transmit, process, and analyze the resource information of each project within the organization, so as to ensure more effective resource scheduling of enterprise program groups. This paper makes some exploratory contributions to the resource scheduling of enterprise program groups. However, the work is still at an early stage, and many aspects remain to be improved.
Due to the limits of current data sources and practical experience, the project priority evaluation index system proposed in this paper is tailored to a specific setting and needs further improvement. The mathematical model of resource scheduling is based on simplifying a complex problem; in actual problem processing, the problem should be considered more comprehensively. The organizational model of the optimized design is only a preliminary idea, and its scope of application and internal structure need further study.
Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.