Automatic Scaling Mechanism of Intermodal EDI System under Green Cloud Computing



Introduction
Multimodal transportation is recognized as the most efficient mode of transportation service in the world, conducive to improving logistics efficiency and reducing logistics costs [1]. Information sharing, as the key technology of rail-water intermodal transportation informatization, not only determines the development level of intermodal transportation informatization but also provides the information guarantee for realizing one-stop intermodal transportation service. China has established an intermodal transportation information sharing platform centered on some large ports, which has preliminarily realized data exchange between ports and railway departments, reduced cargo storage time and transportation cost, and effectively improved the efficiency of collaborative operation. Information sharing is the core of intermodal transportation informatization. However, because current information sharing technologies all use "chimney" (siloed) architectures, they can only solve information integration within a local scope and cannot meet the demand for on-demand information sharing in the cloud environment.
At present, the mainstream research still concerns traditional EDI technology. Based on a research project conducted at the Institute of Logistics and Warehousing, Debicki and Kolinski analyzed the impact of EDI methods on the complexity of information flow in global supply chains [2]. However, traditional EDI technology has problems such as high cost, backward technology, and a high degree of system coupling, and the authors do not provide corresponding solutions. Betz et al. applied ICT and introduced the current application technology, connection types, message standards, and their impact on the multimodal transport supply chain based on the international research results of the Hamburg Port and Logistics Institute [3]. However, this technology is mainly customized for different users, and it is difficult to adapt to large-scale intermodal transportation systems. Ding explores the functions and operating conditions of relatively independent information systems for railways and ports, combined with traditional information exchange modes, and establishes an electronic platform suitable for information interconnection and intermodal station interoperability [4]. However, the traditional information exchange mode is still adopted, which cannot adapt to massive data exchange in a high-concurrency environment.
At present, information sharing-related technologies adopted by core intermodal transportation organizations such as ports and railways mainly include the following:

Electronic Data Interchange.
It refers to the formation of structured document data in accordance with relevant standards and the completion of end-to-end electronic data transmission. EDI standardizes and formats exchanged information in accordance with agreed protocols (such as EDIFACT and SOAP). It exchanges data between the computer network systems of trading partners through data transmission systems such as mail servers, FTP, and Message Queue (MQ), which can effectively solve the inefficiency of paper-based information transmission.
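As a toy illustration of such structured document data, the following sketch parses a simplified EDIFACT-like message into segments; the segment contents are invented for illustration, and this is not a full EDIFACT parser (real EDIFACT has release characters, component separators, and service segments this ignores).

```python
# Minimal sketch of EDI-style structured exchange (hypothetical message,
# simplified EDIFACT-like syntax): segments are separated by "'", and each
# segment is a tag plus data elements separated by "+".

def parse_edi(message: str) -> dict:
    """Parse a simplified EDIFACT-like message into {tag: [fields]}."""
    segments = {}
    for raw in filter(None, message.split("'")):
        tag, *fields = raw.split("+")
        segments[tag] = fields
    return segments

# Invented example: a message header, a document segment, and a date segment.
msg = "UNH+1+IFTMIN'BGM+340+DOC123'DTM+137:20240101'"
parsed = parse_edi(msg)
```

Because both partners agree on the segment grammar in advance, the receiver can map each tag deterministically into its business system, which is what removes the inefficiency of paper documents.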

Service Oriented Architecture.
It is defined as a functional paradigm for integrating dispersed businesses within an enterprise, and its essence is Enterprise Application Integration (EAI) technology that realizes information exchange between heterogeneous systems. The SOA component model realizes business information interaction between heterogeneous systems by defining standardized interfaces between different services and has the characteristics of loose coupling, coarse granularity, and transparency. As a technical realization of SOA, WebService has better openness and decoupling than traditional EDI.

Enterprise Service Bus.
It is a bus-based enterprise-level SOA architecture with features such as interoperability, independence, modularity, and loose coupling. ESB takes services as the basic unit; services are coordinated through messages to complete business collaboration, and service consumers do not need to know the technical details of the service provider. ESB can not only reduce the workload of development and maintenance, save costs, and improve the scalability of the system, but also better deal with the heterogeneity of different technologies and protocols.
According to the relevant literature, the current application scope, advantages, and disadvantages of information integration technology in intermodal transportation informatization are shown in Table 1. This paper constructs a self-scaling mechanism for the K8S-based XEDI (Extensible EDI) closed-loop control system, establishes a scaling model, and proposes an automatic scaling algorithm, a resource allocation algorithm, and a resource allocation optimization algorithm considering energy consumption, to achieve flexible data sharing and on-demand resource allocation in a cloud environment while constraining the energy consumption of the EDI system. Finally, the scalability test verifies that the proposed algorithms have a good scaling effect and high scaling efficiency under heterogeneous cluster conditions; they not only ensure the reliability of the system and realize performance optimization but also effectively reduce energy consumption. The mechanism can meet the demand of dynamic load and improve the quality of service, and it can reduce resource occupation and save energy by releasing virtual resources when resource utilization is low. It thus addresses the problems of high cost, limited system scalability, and insufficient data processing capacity in existing EDI system solutions.
This article is divided into eight parts: (1) Introduction. This part generally introduces the methods used in this paper, compares them with other methods, and discusses their advantages and limitations. (2) The Introduction of XEDI. This section gives a detailed introduction to XEDI, including its advantages over EDI and the architecture of XEDI. (3) Scaling Models of XEDI. This part introduces the single-index scaling model of XEDI. (4) Multi-Index Scaling Model. This part introduces the multi-index scaling model of XEDI. (5) Algorithms. This part introduces the scaling algorithms based on the scaling model. (6) Evaluation of the Algorithm and Example. This section tests XEDI's scaling performance. (7) Conclusions. This part summarizes the full text and the performance of the proposed algorithms. (8) Prospect. This section introduces the prospects of the algorithm and other application scenarios.

The Introduction of XEDI
Cloud computing has been developed as one of the creative platforms that provide dependable, virtualized, and adaptable cloud resources over the Internet. Intermodal transportation refers to the "carriage of goods by two or more modes of transport." The traditional system framework of intermodal transportation is rigid and lacks information sharing [5]. However, cloud computing provides a new direction for solving these bottlenecks and realizing the informatization of intermodal transport. Electronic Data Interchange (EDI) refers to a standard for exchanging business documents, such as invoices and purchase orders, in a standard form between computers through the use of electronic networks like the Internet. It is widely used in the information sharing mechanism of intermodal transport. However, as time goes by, more and more defects of EDI have appeared, such as high power consumption and a low performance threshold, which make it hard to adapt to the massive data exchange of the cloud computing environment [6]. In order to realize elastic information sharing, which expands under highly concurrent information processing and contracts otherwise, we build a lighter and more flexible EDI system, named the XEDI system in this paper.
When and how does the system stretch specifically? Though Kubernetes (K8s), a mature open-source system for automating the deployment, scaling, and management of containerized applications, provides an ideal platform for hosting various workloads, automated scaling of the application cluster itself is not currently offered, and thus it is necessary to build an automated scaling model on top of it [7, 8].
Definition 1. The XEDI system is the lighter and more flexible EDI system that we build, which provides open messages to all intermodal participating organizations through the cloud.

Definition 2. Dominant Resource Fairness for XEDI (XDRF) is an algorithm designed to allocate the resources of Pods more fairly and to perform better computationally.
Contributions: this work makes the following key contributions: (1) It builds the XEDI system and its scaling model. (2) It presents the algorithms that realize the scaling process. (3) It provides a comparative analysis between our algorithm and others. (4) It evaluates the energy consumption of the cloud system.
Compared with traditional EDI technology, XEDI has the following advantages: (1) Low cost and high concurrency, through micro-service encapsulation. The message processing module is encapsulated as microservices, which can be flexibly scheduled in the container cloud environment and simplify the construction of the scaling mechanism, achieving highly concurrent message processing under variable loads with minimal computing resource consumption. (2) Support for remote calls, through an asynchronous message mechanism. The asynchronous message protocol adapter (Takia) realizes message reception and forwarding, and the high-performance distributed queue system (Kafka) replaces the inefficient remote call and folder-delivery polling mechanism of the traditional EDI system. (3) Good scalability, through an extensible message processing module. Message processing is modularized by encapsulating different message-type processing procedures into micro-service units, which are configured and extended according to message types and access protocols.
Different from the traditional EDI system, XEDI is not deployed at each port but is managed in a unified manner under a resource support system, renting message exchange and distribution to participating institutions and users in the form of EDIaaS to reduce overall costs. However, the system design is mainly aimed at large-scale intermodal information platforms and adapts poorly to the differentiated needs of individual users. The construction of XEDI's architecture makes clear how messages from different EDI systems interact under the cloud.

Table 1: Application scope, advantages, and disadvantages of information integration technologies in intermodal transportation informatization.

EDI. Advantages: EDI message standards are mature and complete, which can better meet business needs. Disadvantages: the cost is high, the technology is backward, the system coupling is large, the performance of the remote call method is low, the performance threshold is low, and it cannot adapt to massive data exchange in a high-concurrency environment.

WebService. Application scope: used for business system integration of some electronic ports and seaports. Advantages: mature technology, low coupling, low cost, and easy implementation of SOAP data standards. Disadvantages: its essence is a Web-RPC system; the SOAP-based remote call method has a low performance threshold, and the supported message types are limited.

ESB. Application scope: relatively few applications in the intermodal industry; more suitable for complex internal enterprise environments. Advantages: a complete system, with standard adapters and extensible interfaces, low development, maintenance, and management costs, and strong compatibility with heterogeneity issues. Disadvantages: the structure is cumbersome, the scalability is poor, and the software and hardware requirements are high; if different protocols are uniformly converted into SOAP messages through the adapter and then parsed as XML, many unnecessary format conversions result, and the processing efficiency for large data packets is especially poor.
The XEDI system is composed of the Data Service Layer, Micro-Service Layer, and Resource Scheduling Layer from top to bottom. The logical structure of the Data Service Layer is similar to that of a traditional EDI system. Considering business operation compatibility, the Data Service Layer consists of three modules: the Data Access Module, Data Processing Module, and Data Storage Module, which respectively receive and send messages, parse and transform them, and store them [9]. To adapt to container scheduling, the Data Processing Module is rebuilt as a decentralized module using micro-services. The last layer takes charge of component scheduling and the feedback of performance monitoring. The architecture is shown in Figure 1.

Scaling Models of XEDI
Most current scaling models and algorithms are designed for IaaS virtual machines and can be divided into vertical and horizontal scaling modes [10]. However, it takes a long time to configure, start, and stop virtual machine instances, so the real-time scaling response is poor. Unlike IaaS, lightweight container clouds can scale applications in real time in a larger cluster environment. Because the container is an immutable carrier, it supports only the horizontal scaling model. Although current container orchestration systems provide a simple responsive scaling mechanism (for example, K8S HPA [11]), they only replicate an application based on memory and CPU load, so their scope of application is limited, and there is as yet no research on the scaling problem of complex component systems. Because XEDI's micro-service components are interconnected, there is no general scaling control that abstracts services into independent nodes [12]. In this paper, a closed-loop control system based on XEDI is proposed to build a self-scaling mechanism achieving elastic data sharing and on-demand resource allocation in the cloud environment.
The scaling tactic is a function whose input is the indicator vector obtained from the XEDI monitoring module. Each dimension of the vector represents a monitoring indicator. In addition, the monitoring module keeps a long enough historical record. The record matrix P corresponding to the collected index data is P = (p_{i,j})_{N×m}, where N represents the length of the historical record and m represents the number of monitoring indicators. The output of the scaling strategy is the scaling index I, that is, I = f_i(P), where f_i is the scaling-strategy function. A large I indicates that expansion is urgent, and vice versa. In order to quantize the scaling decision, the system usually sets the expansion threshold I_up and the shrinkage threshold I_down in advance. If I > I_up, the expansion process is carried out; if I < I_down, the shrinkage process is carried out.
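The threshold logic can be sketched as follows, using the 75% expansion and 45% shrinkage thresholds that appear later in the evaluation section as example defaults:

```python
# Sketch of the threshold-based scaling decision: I is the scaling index
# produced by a scaling-strategy function f_i(P); I_up and I_down are the
# expansion and shrinkage thresholds set in advance.

def scaling_decision(I: float, I_up: float = 0.75, I_down: float = 0.45) -> str:
    """Return the action implied by scaling index I."""
    if I > I_up:
        return "expand"   # I > I_up: expansion is carried out
    if I < I_down:
        return "shrink"   # I < I_down: shrinkage is carried out
    return "hold"         # otherwise the replica count is unchanged
```

Values between the two thresholds deliberately trigger no action, which gives the system a dead band against minor load fluctuation.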
According to the choice of P, f_i can be divided into single-index and multi-index strategies. When m = 1, f_i is a single-index scaling strategy; when m > 1, f_i is a multi-index scaling strategy. At the same time, according to the choice of f_i, the scaling strategy can be divided into responsive strategies and predictive strategies.
Although the single index algorithm is simple, it is prone to the miscalculation of scaling. In terms of scaling strategy, compared with a responsive scaling strategy, a predictive scaling strategy can make a prediction based on historical load and make scaling decisions earlier, which has a better scaling effect [13]. We propose a multi-index scaling model of XEDI based on a single index predictive scaling strategy.
In the single-index algorithm, the input matrix P can be simplified to the historical window vector of the load indicator, defined as H(x, n) = (x_1, x_2, ..., x_n).

Responsive Scaling Strategy.
The nonpredictive model is generally based on the historical window H(x, n), taking a weighted average of the indicator x as the response value V_r(H(x, n)), where x is the indicator and n is the window size. The following formula (2) is used for the calculation:

V_r(H(x, n)) = Σ_{i=1}^{n} q_i x_i, with Σ_{i=1}^{n} q_i = 1, (2)

where q_i is the weight coefficient, so that V_r(H(x, n)) is a weighted average over the indicator window H(x, n). According to formula (3), the responsive scaling index I can be obtained:

I = f_i(P) = V_r(H(x, n)). (3)

In particular, when n = 1, I = x_n, that is, scaling according to the current load, which is the scaling strategy commonly used in industry.
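A minimal sketch of the responsive strategy follows; uniform weights q_i = 1/n are assumed by default, since the text leaves the weight choice open.

```python
# Responsive strategy sketch: the response value V_r is a weighted average
# of the history window H(x, n); with n = 1 it reduces to the current load.

def response_index(window, weights=None):
    """V_r(H(x, n)) = sum(q_i * x_i), with sum(q_i) = 1 (uniform by default)."""
    n = len(window)
    if weights is None:
        weights = [1.0 / n] * n       # assumed uniform weighting
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(q * x for q, x in zip(weights, window))
```

For example, `response_index([0.6])` simply returns the current load 0.6, while a longer window smooths out short spikes before the thresholds are consulted.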

Predictive Scaling Strategy.
Compared with a responsive scaling strategy, a predictive scaling strategy can extrapolate from the historical load and make scaling decisions earlier, which gives a better scaling effect [14]. In this paper, the autoregressive model, commonly used in statistics and signal processing, is adopted to design a predictive scaling strategy. As a random-process model, it is mostly used for modeling and forecasting various natural phenomena; although XEDI message load changes are not exactly such a process, the model still applies to load prediction. AR(p) denotes the p-order autoregressive model in this study, defined as

X_t = c + Σ_{i=1}^{p} φ_i X_{t−i} + ε_t,

where X_t is the model variable, φ_i is the regression coefficient of the model, c is a constant (usually zero), ε_t is a random error, and p is the order.
In the AR(1) process, a sliding window composed of multi-cycle monitoring indicators is used to predict the load values of future cycles; this window is called the adaptive window. Given the history window H(x, n) of length n, let the length of the adaptive window be w, and iteratively predict the value of each new period based on the n most recent historical records. AR(1) can predict the indicator x_i of the w future periods in the adaptive window, where n < i ≤ n + w, and x_i is calculated iteratively by

x_i = x_avg + ρ(1)(x_{i−1} − x_avg) + e_i,

where x_avg represents the mean value of x_i in the history window, e_i represents noise (generally 0), and ρ(1) represents the autocorrelation function at delay step 1, calculated by

ρ(1) = Σ_{i=1}^{n−1} (x_i − x_avg)(x_{i+1} − x_avg) / (n σ_n²),

where σ_n² represents the variance of the historical window.
Then the predicted peak value can be obtained over the w-period window of indicator x_i. When indicator x is a load rate, it can be calculated by

V_p(H(x, n)) = max_{n < i ≤ n + w} x_i.

As in formula (3), the predicted scaling index is I = f_i(P) = V_p(H(x, n)).
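The AR(1) iteration and window-peak index above can be sketched as follows; the noise term e_i is taken as 0, as the text suggests, and the lag-1 autocorrelation estimator is one standard choice consistent with the formulas.

```python
# AR(1) predictive sketch: iterate x_i = x_avg + rho(1) * (x_{i-1} - x_avg)
# for w future periods (noise e_i = 0), then take the window peak as the
# predicted scaling index V_p(H(x, n)).

def ar1_predict_peak(history, w):
    n = len(history)
    x_avg = sum(history) / n
    dev = [x - x_avg for x in history]
    var = sum(d * d for d in dev) / n          # sigma_n^2, window variance
    # lag-1 autocorrelation rho(1); 0 for a constant window
    rho1 = sum(dev[i] * dev[i + 1] for i in range(n - 1)) / (n * var) if var else 0.0
    xs = list(history)
    for _ in range(w):                          # iterate w future periods
        xs.append(x_avg + rho1 * (xs[-1] - x_avg))
    return max(xs[n:])                          # predicted peak V_p
```

Because AR(1) reverts toward the window mean, the peak of a rising window is typically the first predicted period, which is exactly what lets the strategy trigger expansion ahead of the load.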
Although the AR(1) algorithm solves the problem of index prediction within the time window, it can only predict a single index load. The literature [15, 16] has proved through experiments that when the selected indicators do not match the type of load, the real load of the application cannot be revealed, and even a rigorous algorithm will fail. A multi-index algorithm can rely on the comprehensive analysis of multiple load indicators to judge the scaling time correctly, effectively avoiding the situation in which an overloaded application cannot be scaled correctly and requests cannot be answered normally.

Multi-Index Scaling Model
The basic idea of the multi-index scaling strategy is to transform the multi-index load into a single-index set. According to the above analysis, the input P of the scaling strategy is an n × m matrix, and the output is the scaling index I, given by the multi-index scaling strategy I = f_i(P). The calculation steps of I are as follows:

Convert Multiple Indicators into Single Indicators.
A weighted average is carried out over each row of P; the transformation for row k is

x_k = Σ_{j=1}^{m} w_j p_{k,j},

where w_j is the weight of indicator j. Applying this transformation to every row of the input matrix P yields a vector of dimension n, which is H(x, n).

A Single Index Is Used to Calculate the Load.
After converting the multi-index matrix into a single-index vector, the single-index scaling model can be used to calculate the scaling index I and carry out the scaling decision process.
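The row-wise conversion feeding the single-index model can be sketched as follows; the indicator weights are assumed for illustration, since the text does not fix them.

```python
# Multi-index sketch: collapse each row of the n x m record matrix P into a
# single indicator by weighted average, yielding the vector H(x, n) that the
# single-index scaling model then consumes.

def collapse_indices(P, weights):
    """Row-wise weighted average: x_k = sum_j w_j * p_{k,j}."""
    assert abs(sum(weights) - 1.0) < 1e-9   # weights assumed to sum to 1
    return [sum(w * p for w, p in zip(weights, row)) for row in P]

# Two periods, two indicators (e.g. queue load and throughput load), equal weights.
H = collapse_indices([[0.2, 0.6], [0.4, 0.8]], [0.5, 0.5])
```

The resulting vector `H` can be passed directly to the responsive or predictive single-index strategy of the previous section.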
At the same time, in terms of index selection, XEDI adopts a multi-index comprehensive trigger strategy based on the indicators that most directly reflect performance, so as to avoid the failure of the prediction algorithm when CPU, memory, and other indirect indexes fail to reflect the real status of message processing. The multi-index predictive scaling algorithm can select the trigger point of scaling more effectively and, combined with the scaling cooldown time, effectively prevents excessive scaling operations.
Based on the Dominant Resource Fairness (DRF) algorithm [17], Dominant Resource Fairness for XEDI (XDRF) is designed as an extension of it, intended to allocate the resources of PODs more fairly and to perform better computationally.
Assume that there are n available computing nodes in the current XEDI cluster operating environment, and that each computing node has m types of resources. Q_k represents the performance evaluation score of node k, η_k represents the ratio of the performance evaluation score of node k to the average score, and T_k represents the resource type characteristics of node k, encoded consistently with Definition 3.

Definition 3 (XEDI performance context). The parameter XEDI.C = {XEDI performance index set} ∪ {XEDI resource status index set} is the performance context of the current XEDI system.

z_{i,k} represents the adaptation factor of POD(i) on node k; D_{i,j} represents the demand of a copy of POD(i) for resources of type j, with D_i = (D_{i,1}, D_{i,2}, D_{i,3}, ..., D_{i,m}); S_i represents the dominant share of POD(i); R_{k,j} represents the total amount of resources of type j on node k; Ru_{k,i,j} represents the amount of resources of type j that POD(i) has been allocated on node k; Rc_{k,j} represents the amount of resources of type j on node k that can still be allocated; and W_i represents the weight of POD(i). The calculation process is as follows: (1) Calculate the weight W_i of each POD requiring capacity expansion in the POP set. (2) Calculate the ratio of the performance evaluation score of node k to the average score, η_k = Q_k / ((1/n) Σ_{l=1}^{n} Q_l). (3) Calculate the adaptation factor z_{i,k} and the per-node dominant share of POD(i) on each node k. (4) Calculate the dominant share S_i (DS value) of POD(i) as the sum of its dominant shares on each node. (5) Select the POD with the smallest DS value for allocation. (6) The j-th resource allocation to POD(i) is determined by priority: when j is odd, the copy of POD(i) is allocated suitable high-quality resources; when j is even, it is allocated suitable lower-quality resources.
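The bookkeeping in steps (2) and (4) can be sketched as follows. The exact weight and adaptation-factor formulas are not reproduced here, so the helper names and the replica-count treatment are illustrative assumptions rather than the paper's definitions.

```python
# Sketch of two XDRF quantities: the per-node performance ratio eta_k and a
# dominant share computed from a POD's demand vector against total resources.

def performance_ratios(scores):
    """eta_k = Q_k / average(Q): ratio of node score to the mean score."""
    avg = sum(scores) / len(scores)
    return [q / avg for q in scores]

def dominant_share(demand, totals, replicas=1):
    """Dominant share: max over resource types j of replicas * D_{i,j} / R_j."""
    return max(replicas * d / r for d, r in zip(demand, totals))
```

For instance, a POD with two replicas demanding <1 CPU, 4 GB> each out of totals <9 CPU, 18 GB> is memory-dominated: its dominant share is 8/18 ≈ 0.44, and XDRF would rank it against other PODs by this value.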

Algorithms
The automatic scaling algorithm of XEDI is designed based on the scaling model in Section 2 and mainly solves the problem of when the message processing module scales in the container cloud environment. According to the threshold of the scaling index, the scaling process is divided into two algorithms: Algorithm 1 for expansion and Algorithm 2 for shrinkage. The scaling algorithm first obtains the monitoring data and, provided performance is not abnormal, calculates the load index set and the XEDI message workload [18]. If a message's expansion index exceeds the expansion threshold, the algorithm traverses all message processing packets in sequence and calculates the data load of the corresponding POD. If the expansion index of a data packet exceeds the expansion threshold, the POD replica set is expanded to improve data throughput. Conversely, if the expansion index is lower than the reduction threshold, the POD replica set is scaled down to release resources; under the premise of ensuring the concurrent message processing performance, resource occupation is minimized (Algorithm 1).
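A minimal sketch of this threshold-driven expansion/shrinkage flow follows. The function names, replica bounds, and single-replica step size are illustrative assumptions; the paper's algorithms instead compute an optimization plan per POD via queuing theory.

```python
# Sketch of the per-POD scaling loop: each POD's expansion index I is checked
# against the thresholds, and its replica count is adjusted within bounds.

def autoscale(pods, loads, i_up=0.75, i_down=0.45, max_rep=16, min_rep=1):
    """pods: {name: replicas}; loads: {name: expansion index I}. Returns plan."""
    plan = {}
    for name, reps in pods.items():
        I = loads[name]
        if I > i_up and reps < max_rep:
            plan[name] = reps + 1        # expand to raise throughput
        elif I < i_down and reps > min_rep:
            plan[name] = reps - 1        # shrink to release resources
        else:
            plan[name] = reps            # within the dead band: hold
    return plan
```

In a real deployment the plan would additionally be gated by the cooldown time and by whether the cluster's free resources can satisfy it, as the expansion algorithm checks before allocating.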

Fairness Analysis of XDRF and CXDRF Algorithms
In the process of cloud resource sharing, the efficiency and fairness of resource allocation are the most important properties; sharing incentive, prevention of strategic operations, freedom from jealousy, and Pareto efficiency are widely considered important criteria for judging an allocation mechanism. The fairness of the XDRF algorithm in the POD expansion process is discussed below:

Theorem 1. XDRF provides a sharing incentive.
Proof. Suppose there are k PODs to expand, with dominant shares S_1 < S_2 < ... < S_k. Allocating to POD(i) consumes the resource amount D_i, and the total remaining resources decrease as R = R − D_i. According to formulas (3)-(6), the increase of the used resources Ru_{k,j} causes the DS value S_i of POD(i) to increase. When S_i > S_j, POD(i) stops receiving allocations and resources are allocated to POD(j), minimizing the alternation of DS values among different PODs. When the load falls back, each POD calls Algorithm 3 to release its excess resources, guaranteeing the resource share of the other PODs and of the next expansion, and the proof is completed.

Theorem 2. XDRF prevents strategic operations.
Proof. Suppose there are two resources r_1 and r_2, with total amounts R_1 and R_2, respectively, and two computing tasks i and j with resource demand vectors D_i = <d_{i,r1}, d_{i,r2}> and D_j = <d_{j,r1}, d_{j,r2}>. If (d_{i,r1}/R_1) > (d_{i,r2}/R_2) and (d_{j,r1}/R_1) < (d_{j,r2}/R_2), then the dominant resource of task i is r_1 and the dominant resource of task j is r_2. If x_i and x_j are the numbers of subtasks of tasks i and j, respectively, they are obtained by maximizing the allocation under the capacity constraints while equalizing the dominant shares:

max(x_i, x_j) subject to x_i d_{i,r1} + x_j d_{j,r1} ≤ R_1, x_i d_{i,r2} + x_j d_{j,r2} ≤ R_2, (x_i d_{i,r1})/R_1 = (x_j d_{j,r2})/R_2. (17)

Assume that POD(i) increases its dominant resource demand from D_i to D_i′ in order to obtain more shares, with dominant resource r, so that d_{i,r} < d′_{i,r}; if the dominant resource of POD(j) is p, then by formula (17), when capacity expansion completes, the equalized dominant share forces x_i′ < x_i, so the allocated share of POD(i) does not increase, and the proof is completed. This indicates that a POD cannot increase its allocation share by falsely reporting resource demand and cannot gain by deception in meeting resource demand.
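The two-task setup above matches the classic DRF example; the following progressive-filling sketch (illustrative, not the paper's implementation) shows how the subtask counts x_i and x_j emerge. With R = <9 CPU, 18 GB> and demands <1, 4> and <3, 1>, DRF yields 3 and 2 subtasks, equalizing the dominant shares at 2/3.

```python
# Progressive-filling sketch of DRF: repeatedly grant one subtask to a task
# with the smallest dominant share that still fits in the remaining resources.

def drf_allocate(R, demands, steps=100):
    """R: total resources, e.g. [CPU, RAM]; demands: per-task demand vectors."""
    used = [0.0] * len(R)
    counts = [0] * len(demands)
    for _ in range(steps):
        # dominant share of each task: max_j (counts * demand_j / total_j)
        shares = [max(c * d / r for d, r in zip(dem, R))
                  for c, dem in zip(counts, demands)]
        placed = False
        for i in sorted(range(len(demands)), key=lambda t: shares[t]):
            if all(used[j] + demands[i][j] <= R[j] for j in range(len(R))):
                counts[i] += 1
                used = [u + d for u, d in zip(used, demands[i])]
                placed = True
                break
        if not placed:        # no task fits anymore: allocation saturated
            break
    return counts
```

At the stopping point neither task can launch another subtask, illustrating the saturation argument used below for Pareto efficiency.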

□ Theorem 3. XDRF is free of jealousy
Proof. Assume that POD(i) is jealous of POD(j)'s resource quota; that is, POD(j)'s quota is larger than POD(i)'s, and POD(i) also needs these resources. If these resources are r ∈ {r_1, r_2, ..., r_m}, two situations must be considered: (1) If r is the dominant resource of both POD(i) and POD(j), then r can only be the same resource. By the hypothesis d_{i,k} < d_{j,k} and the allocation formula, XDRF balances the dominant share of POD(i) by allocating more copies to it, so the resource allocation of i is not affected. (2) If r is not the dominant resource of i but is relatively important for POD(i), and POD(j) occupies more quota, let the dominant resources of POD(i) and POD(j) be q and p, respectively; then consider the following two scenarios: (a) if (d_{j,p}/R_p) > (d_{i,q}/R_q), then x_i > x_j, and in order to satisfy this relationship, the demand of POD(i) on r must be far less than that of POD(j), that is, d_{j,r} ≫ d_{i,r}, so r is not an important resource of POD(i), which contradicts the hypothesis.
(b) If (d_{j,p}/R_p) < (d_{i,q}/R_q), then x_i < x_j, and from the above relationship d_{j,r} ≥ d_{i,r}; as in case (a), the demand of POD(i) on r is at most that of POD(j), which is inconsistent with the hypothesis, and the proof is completed. □

Theorem 4. XDRF satisfies Pareto efficiency.

Proof. Assume a Pareto improvement exists in which the share of POD(i) in resource r is increased from s_{i,r} to s′_{i,r}, where s_{i,r} = d_{i,r} · x_i. According to Theorem 2, POD(i) cannot increase s_{i,r} by increasing d_{i,r}; therefore, POD(i) can only increase its quota of resource r by increasing x_i. It follows from lemma (8) that i has at least one saturated resource w; therefore, increasing x_i cannot increase the share of w. This contradicts the assumption that a Pareto improvement exists, and the proof is completed. The essence of XDRF meeting Pareto efficiency is the constraint on resource occupation by P: when resource allocation reaches saturation, a POD cannot increase its share any further unless it occupies the resources of other PODs, behavior which XDRF forbids.
Evaluation of the Algorithm and Example

The XEDI system is deployed in InterThings, a virtual container cluster environment, and is tested in two respects: scaling effect and scaling velocity [20]. The former compares the message processing frequency of different automatic scaling algorithms, while the latter tests whether PODs can be effectively adjusted as the load changes [1]. □

Scaling Effect Test.
The throughput limit and resource allocation algorithm efficiency of XEDI under different POD copy scales were tested, with the Takia adapter configured in SYN mode; that is, the request was not answered until the message conversion of all three steps was completed. The POD copy quotas of the three steps were configured as <0.2c, 128M>, <0.4c, 128M>, and <0.3c, 256M>, and the resource vectors were <1, 0, 0>, <0, 1, 1>, and <1, 0, 1>. In order to compare the capacity expansion effect, XEDI configured the three STEP copies of the test Topic in even capacity expansion mode (expanding from 2 to 16 copies on 4 heterogeneous computing nodes), where Dell-R710 and Dell-R620 correspond to CPU-oriented and memory/storage-oriented computing resources, respectively, and Mesos DRF and XDRF were used as the XEDI POD allocation algorithms, respectively [21].
The VUser count of LoadRunner follows a trapezoidal incremental pattern until the HTTP-503 error appears in the response result.
Thus, the response frequency of server requests, the data throughput frequency, and the maximum number of concurrent requests of XEDI under different replica configurations can be obtained, as shown in Table 2; the relationships among the data in Table 2 are shown in Figures 1-4.
According to Figure 4, through XMON's monitoring of POD's comprehensive load rate, the overall load rate under XDRF is higher than under Mesos' DRF algorithm during the POD distribution process, which indicates that the resources are better utilized overall. Combined with Figure 5, it can also be seen that the XDRF algorithm matches dynamic weighting with resource types, allocating more reasonable resources to more urgent PODs with priority, and balances the performance differences between nodes, so its overall resource allocation performance is better than that of the default resource allocation algorithm.

Algorithm 1 (fragment): autoScalingUp.

    add this pod to collection<POD>;
    //Calculate configuration optimizations for POD collections that need to be scaled up
    for (each pod in collection<POD>) {
        //POD optimization scheme is calculated by a queuing-theory system
        compute PodOptimizationPlan(pod) for pod by queue system;
        add this pop to collection<POP>;
    }
    //Confirm whether K8S resources meet the expansion conditions
    if (R not adequate for collection<POP> scaling-up) {
        //When the available resources are used up, try to apply for resources from the
        //container cloud and preempt dynamically when the resources are insufficient
        try to apply resource increment as R_c1;
    }

Comparison of Scaling Effects of Different Scaling Strategies.
The responsive and predictive scaling strategies were each deployed in the cloud on a separate cluster instance. The Takia ferry mode was configured as ASYN to ensure sufficient throughput; the front-end POD configuration was the same as in the first test step, with the initial number of replicas set to 1. In the load scenario of the test phase, the LoadRunner VUser count follows an arched random pattern, and the two cluster instances were requested simultaneously for 16 min to test the system's response to load, including the triggering, execution, and time efficiency of expansion. Through the interface of the scaling algorithm, the capacity expansion threshold was set to 75% and the capacity reduction threshold to 45% [22]. The capacity expansion index adopted the time-throughput composite load rate, and the cooling time of capacity expansion was 2 min (note: in a production environment, to avoid frequent expansion caused by load fluctuation near the threshold, this value is generally more than 10 minutes). The test results are shown in Figure 6. As can be seen from Figures 1 and 6, in the initial stage, the load rate is lower than 40%, and the total number of POD copies is 3. At 3 min, the load increases sharply, and the server load rate rises rapidly to nearly 80%, above the capacity expansion threshold. With expansion triggered, the number of POD copies increases to 8 at 4 min. After that, the load drops to 40%, below the shrinkage threshold; since the system was in the cooling-off stage, the shrinkage operation was not triggered, preventing opposite expansion and shrinkage operations within a short period. At 5 min, the load returned to a rising trend, reached 75% at 7 min, and triggered the second capacity expansion; at 8 min and 9 min, respectively, the number of copies under the two expansion strategies increased to 12.
At 11 min, the load decreased, and the scale-down operation was triggered once the load fell below the scale-down threshold. The replica counts under the two scaling strategies were reduced to 6 and 8, respectively, and no further scaling was carried out during the following 2-min cooldown. At 16 min, a further load drop triggered another scale-down, and the number of replicas was reduced to 3, which verified the effectiveness of XEDI dynamic scaling. It can also be seen from the figure that the predictive scaling strategy is more proactive than the responsive strategy, because it can predict the load of the subsequent time series in advance. Therefore, scale-up preparation can be carried out before the load actually rises, yielding better system processing performance and throughput [23].

The entry check of the scale-down procedure is as follows:

    Algorithm name: autoScalingDown
    Input: C: XEDI performance context
    Output: none
    // If the XEDI resource occupancy rate is low, it will not shrink,
    // reducing the number of unnecessary scale-downs
    if (R_a < R * θ_max) { terminate scaling-down; }
    // Get all the PODs of XEDI
    retrieve all pods of XEDI from K8S as collection<POD>;
    for (POD pod : collection<POD>) { ... }

The full resource allocation procedure is given in Algorithm 3:

    Algorithm name: XDRFforPOD
    /* The number of nodes is n, and the resource dimension is m */
    Input:  R = <R_1 = <R_1,1 ... R_1,m>, ..., R_k = <R_k,1 ... R_k,m>, ..., R_n = <R_n,1 ... R_n,m>>: total resource collection;
            collection<POP>: POD optimization scheme collection;
    Output: none
    Define variable z = collection<POP>.size: the number of PODs to be calculated;
    Define variable Ru = <Ru^1_1,1, ..., Ru^k_i,j, ..., Ru^n_z,m>: the allocated resource set,
        where Ru^k_i,j is the amount of resource type j already allocated to POD(i) on node k;
    Define variable Rc = <Rc_1,1, ..., Rc_k,j, ..., Rc_n,m>: the unallocated resource set,
        where Rc_k,j is the amount of resource type j still allocatable on node k;
    Define variable W = <W_1 ... W_z>: the weight set of the PODs to be optimized;
    for (i from 1 to z) {
        Calculate the weight of each POD in collection<POP> according to formula (10) and fill collection W;
    }
    for (k from 1 to n) {
        Calculate the cluster node η_k according to formula (10) and arrange the nodes in ascending order;
    }
    do {
        for (i from 1 to z) {
            For the R and Ru sets, calculate the dominant share S_i of POD(i) according to
            formulas (11) and (12), and update collection<POP>, sorted by S_i in ascending order;
        }
        // Get the POD with the smallest dominant share
        pick POP(i), the first element of collection<POP>;
        POD(i) = POP(i).POD;
        // Get the resource requirements of POD(i), such as CPU and memory
        calculate resource demand of POD(i) as D_i;
        Calculate the resource predicates set Npre(i) of POD(i) according to formula (14);
        if (Ru + D_i <= R) {
            According to formula (13) or (14), allocate a replica resource r to POD(i), where r = D_i;
            // Load and run the replica instance
            let replication be the result of loading and running POD(i).replicationConfig with r;
            // Register the replica as a consumer of the data queue so it joins data processing services
            register this replication as consumer to kafka with POP(i).topic;
            // Update resource usage
            Ru += D_i; Rc -= D_i;
            // Refresh the dominant share of POD(i) according to formulas (12) and (13)
            refresh dominant share for POD(i);
            if (POP(i).dPR-- == 0) {
                // This POD has finished expanding; delete it from collection<POP> so it no
                // longer enters the subsequent allocation process
                POD(i) scaling-up done;
            }
        }
        // The cluster node resources are exhausted; record the PODs that have not been
        // allocated and exit DRF
        else {
            get unsatisfied POPs as collection<UPOP> from collection<POP>;
            report collection<UPOP> to CAdvisor;
            terminate XDRF;
        }
        // When collection<POP> is empty, all POD allocation is completed
        if (allocation done for all pods in collection<POP>) {
            report to XTuning allocation done with R and collection<POP>;
            terminate XDRF;
        }
    } while (true)

    ALGORITHM 3: XDRF algorithm.
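Algorithm 3 builds on the classic Dominant Resource Fairness (DRF) rule: repeatedly serve the POD whose dominant share, that is, its largest per-resource fraction of total cluster capacity, is smallest. The following minimal Python sketch illustrates only that core rule; the dictionary-based data structures are assumptions, and the paper's weights, predicates, and formulas (10)-(14) are omitted:

```python
def dominant_share(allocated, capacity):
    """Largest fraction of total cluster capacity held across resource dimensions."""
    return max(allocated[r] / capacity[r] for r in capacity)

def drf_allocate(pods, capacity):
    """Repeatedly grant one replica's worth of resources to the POD with the
    smallest dominant share, until the next demand no longer fits.

    pods:     {name: per-replica demand, e.g. {"cpu": 1, "mem": 4}}
    capacity: total cluster resources, e.g. {"cpu": 9, "mem": 18}
    Returns the sequence of placements (POD names) in allocation order.
    """
    used = {r: 0 for r in capacity}
    allocated = {p: {r: 0 for r in capacity} for p in pods}
    placements = []
    while True:
        # POD with the smallest dominant share is served next
        name = min(pods, key=lambda p: dominant_share(allocated[p], capacity))
        demand = pods[name]
        if any(used[r] + demand[r] > capacity[r] for r in capacity):
            return placements  # cluster resources exhausted (cf. the else branch above)
        for r in capacity:
            used[r] += demand[r]
            allocated[name][r] += demand[r]
        placements.append(name)
```

On the classic DRF example (9 CPUs and 18 GB of memory, with per-replica demands of 1 CPU/4 GB and 3 CPU/1 GB), the loop places three replicas of the first POD and two of the second before capacity is exhausted, equalizing both dominant shares at 2/3.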

Performance Comparison between Closed and Open
Scaling Strategy. Based on the above test scenarios, we further compared the elastic performance of the traditional "stovepipe" information sharing system, which has no scaling capability, against the ITIU information sharing unit, in order to validate the performance improvement that the scaling module brings to the information sharing service. To this end, the XTuning component of the responsive-scaling cluster instance was closed, and the client-side response performance was tested both with scaling enabled and with scaling disabled. Using the same server configuration, the front-end module was set to SYN mode, and the LoadRunner VUser profile was a trapezoidal ramp of 300 concurrent users; the threshold arrival time was 50 sec and the cycle was 3 min. The transaction response times of the two cluster instances were recorded at the test points, and the results are shown in Figures 7 and 8, where the X-axis is time and the vertical axis is the transaction response time.
By comparing the two figures, it can be found that the response times of the two cluster instances are basically the same in the early stage, and the server load reaches the throughput threshold at about 50 sec. In Figure 7, the scaling-enabled instance starts to expand, and the message response time decreases to about 1 sec after the expansion. In Figure 8, because the cluster instance has its scaling component shut down, the system response time after stabilization remains around 2.5 sec. It can be seen that the automatic scaling system can effectively maintain the service performance experienced by the client when the system load increases.
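The observed improvement from roughly 2.5 sec to 1 sec is consistent with what a simple queueing approximation predicts when replicas are added. The sketch below treats each replica as an M/M/1 queue with evenly balanced arrivals; the rates are invented for illustration and are not the paper's measured values:

```python
def avg_response_time(arrival_rate, per_replica_rate, replicas):
    """Average response time when load is balanced evenly across replicas
    and each replica behaves like an M/M/1 queue (a rough approximation)."""
    lam = arrival_rate / replicas              # per-replica arrival rate
    if lam >= per_replica_rate:
        return float("inf")                    # replica saturated: queue grows without bound
    return 1.0 / (per_replica_rate - lam)      # M/M/1 mean sojourn time

# Adding replicas lowers per-replica load and hence response time
# (rates here are illustrative assumptions).
before = avg_response_time(arrival_rate=300, per_replica_rate=110, replicas=3)
after = avg_response_time(arrival_rate=300, per_replica_rate=110, replicas=8)
```

The model also shows why the closed-scaling instance plateaus at a higher response time: with a fixed replica count, per-replica utilization stays near saturation once the load threshold is reached.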
In order to compare the performance of the XEDI components in a container environment against the virtual machine environment of current mainstream cloud platforms, two XEDI cluster instances using the responsive scaling mode were deployed in container and virtual machine environments, respectively [24, 25]. The Takia adapter was configured in SYN mode, the POD configuration was the same as in the first test, and the initial number of replicas was 1; the node in virtual machine mode used the same configuration, with an initial node count of 1. LoadRunner's VUser profile was a ladder ramp of 200 concurrent users with a period of 2 minutes. The transaction response times of the two test cluster instances were recorded separately to evaluate how well container scaling and virtual machine scaling respond to load under the same configuration. The test results are shown in Figures 9 and 10.
As can be seen from the above figures, the average response time in the container environment is about 1 sec, significantly better than in the virtual machine environment. Because a container is a lightweight process-level service, refreshing a POD replica takes only about 5 sec, so the instance in Figure 9 completes the scale-up operation quickly in the early stage of the load and reduces the transaction response time to less than 1 sec. Virtual machine startup and deployment, by contrast, is an operating-system-level operation; as can be seen from Figure 10, the transaction response time in virtual machine mode keeps increasing with the load, and about 90 sec elapse before the first scale-up operation completes. It can be seen that, in terms of scaling speed and sustained service performance, the container environment is clearly superior to the virtual machine environment.

Conclusions
In this paper, we have proposed the auto-scheduling algorithm XDRF for the cloud environment. The paper includes a detailed evaluation of the XEDI scaling model under CPU and RAM workloads. Through quantitative experiments, it was verified that the XDRF algorithm can optimize system performance while guaranteeing system reliability, and can effectively reduce energy consumption [26]. The two tests also showed that the model can meet the demands of dynamic load and improve service quality.

Standardization of Cloud Platform for Combined Iron and
Water Transport. Cloud computing is an effective way to optimize the existing intermodal information layout and application management model, but it also brings new challenges to intermodal business and data standards in the cloud environment. Although the intermodal cloud platform adopts a centralized management model, it is difficult to integrate a large number of heterogeneous intermodal applications on a unified cloud platform without a unified intermodal information standard. Simple migration can achieve unified management of applications, but it cannot effectively use virtual resources to optimize the cloud service model. Therefore, researching intermodal information standards adapted to the cloud environment is crucial to the practical deployment of intermodal cloud platforms.

Construction of Intermodal Blockchain.
Combined transportation of iron and water is a multiparty collaborative business process, in which the security and traceability of information sharing are extremely important. Blockchain is an emerging technology for secure information sharing and storage. It can not only effectively simplify the intermodal business process, but also effectively protect the security of shared data. How to combine blockchain with intermodal information technology, build an intermodal blockchain, and realize intermodal smart contracts and data traceability is also of great significance and requires substantial follow-up research.

Data Availability
Data used to support the findings of this study are available within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.