An Accelerated Error Convergence Design Criterion and Implementation of Lebesgue-p Norm ILC Control Topology for Linear Position Control Systems

School of Automation, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, China; Department of Material Science and Mechanical Engineering, Beijing University of Technology, Beijing, China; MCS, National University of Sciences and Technology, Islamabad, Pakistan; School of Electrical Engineering, Qingdao University, Qingdao 266000, China; Department of Electrical Engineering, National University of Technology, Islamabad, Pakistan


Introduction
Iterative learning control is suitable for controlled plants that perform repetitive motions over a finite time interval. It uses the data generated during the previous iteration to correct undesirable control signals and to generate the control signal used in the current iteration, so that the control performance gradually improves and complete tracking over the finite time interval is finally achieved. Compared with other control methods, iterative learning control has a simple controller structure, a small computational burden, and requires only limited knowledge of the plant dynamics, yet can achieve precise control. Its precise tracking capability is exploited in many industrial applications, such as assembly-line industrial robots and chemical batch processes. The iterative learning control algorithm differs from other learning algorithms such as neural networks and adaptive control: it targets controlled systems that operate repeatedly over a finite time interval and uses the tracking error stored in the system to modify the control input iteration by iteration, so as to completely track the expected trajectory [1][2][3].
The iterative learning controller can be designed without precise model information, which gives it a simple structure and makes it well suited to batch processes [4,5].
Iterative learning control [6] has produced many results in both theory and practice since it was proposed [7,8]. The optimal ILC approach has also been used recently for error convergence [9]. The soft, inflatable robotic manipulator has many useful features: high compliance and low inertia, combined with pneumatic actuation, support fast but still safe operations and applications [10][11][12]. However, precise position control is challenging for soft manipulators, since they usually possess many coupled and uncontrollable degrees of freedom [13,14]. Besides, soft materials behave dynamically as viscoelastic materials, which are problematic to model from first principles [15]. The body of a soft robot is made of inherently soft and compliant materials. This inherent softness allows such robots to interact with delicate objects and to passively adjust their shape to unstructured environments [16]. These features are desirable for robotic applications that require safe human-robot interaction, such as wearable robots, home assistant robots, and medical robots. These robots' soft bodies also present modeling and control challenges that have limited their functionality so far. The difficulty in constructing precise control technology lies in designing a soft robot model suitable for model-based control design. Consider, by contrast, a rigid mechanical system, built from rigid links connected through discrete joints. Since the joint displacements completely describe the configuration of a rigid-body system, the joint displacements and their derivatives are the natural choice of state variables for a rigid-body robot.
Furthermore, many typical path-tracking control strategies have been adopted for soft manipulators [17]. An open-loop pneumatic control system and a mechanical feedback control topology are discussed in [18]. Model predictive control (MPC) and a neural-network-based nonlinear MPC methodology have been adopted to achieve error convergence for soft actuators [19]. A further advance in control methods is reinforcement learning, which has been introduced for precise position tracking in these manipulators [4,20].
In [21], the authors used a learned inverse kinematics model to enhance the position tracking accuracy of a soft manipulation aid. Iterative learning control (ILC) is applied in [22] to find a control strategy for a soft mesh-worm robot. The authors of [23] used ILC to generate flexible impact behavior, and the authors of [24] reported an ILC-based method for learning grasping tasks with a soft, fluidic, elastomeric manipulator. A graph-based, model-free flexible robot motion control framework was proposed in [25][26][27]. In [28,29], the authors suggested a control strategy inspired by marine life. Both of these solutions are concerned only with the coarse-grained motion of the soft robot; neither provides fine-grained control or dynamic response adjustment. Reference [30] uses a numerical model to control the system response, but this technique applies only to a linear, predictable model. Such assumptions, together with the lack of a feedback loop, can make the system unstable and yield unwanted responses. References [31][32][33] proposed a control strategy based on the Finite Element Method (FEM), which can attain high accuracy but needs a detailed understanding of the mechanical properties of soft structural materials. An FEM-based controller also incurs a high computational cost, making it impossible to execute in real time on an embedded processor. The workaround is to run the FEM-based mechanism in feedforward open loop, which lets control errors dominate and reduces the system's overall robustness. A model-based soft robot dynamic response optimization control strategy is proposed in [24,34,35].
So far, most of the literature on iterative learning control has focused on convergence of the algorithm in the sense of the λ-norm metric, pointing out that convergence can only be guaranteed if λ is large enough [36][37][38]. Since the λ norm is an upper-bounded, negative-exponentially weighted norm, the essential characteristics of the error cannot be objectively quantified. The paper [39] found that even though the learning algorithm is theoretically convergent for a sufficiently large parameter value, the upper bound of the error during the initial stage of system operation often exceeds the error range allowed in practical engineering. To avoid these defects of the λ norm, the papers [40,41] presented the convergence of PD iterative learning control algorithms in the sense of the upper-bounded norm [42,43]. It was found that the learning algorithm is convergent only on a subinterval of the system's running time interval. In [44], to make the iterative learning control algorithm convergent in the sense of the upper-bounded norm, the algorithm is made adjustable and the learning law is modified subinterval by subinterval. However, the algorithm structure is quite complex, and it is not easy to apply to practical nonlinear engineering systems [45].
Furthermore, the Lebesgue-p norm is a more reasonable way to quantify a function f: it accounts both for the upper bound of f over the whole time interval and for the p-integral of f at each running time [46]. Building on [47,48], the tracking performance of iterative learning control has been discussed using the Lebesgue-p norm, but the algorithm's convergence was not addressed. In [49,50], the stability of iterative learning control for multistate-delay linear systems is studied, and the Lebesgue-2 norm is used to evaluate the learning algorithm's tracking performance. In [51], convergence analysis is carried out for PD iterative learning control with feedback information under the Lebesgue-p norm for linear time-invariant systems. The papers [52,53] analyze the convergence of fractional-order iterative learning control laws in the sense of the Lebesgue-p norm. Based on the Lebesgue-p norm, an accelerated initial-state error convergence topology is discussed in [48,54].
Further, the convergence of a variable-gain iterative learning control algorithm is discussed in [55] in the sense of the Lebesgue-p norm. From the analyzed literature [49,[51][52][53][54][55], although these results avoid the defect of measuring the tracking error with the λ norm, they are all convergence analyses for completely nonregular systems with D = 0, and their conclusions do not apply to regular systems with D ≠ 0. The reason is that, for a completely nonregular system, the iterative learning control law must contain the derivative of the tracking error, namely a derivative (D) or PID iterative learning law. For a regular system, only the tracking error itself, namely a proportional (P) iterative learning law, is used to correct the control law. Because the traditional P-type iterative learning algorithm uses only the previous tracking error to correct the control law, its tracking speed is low. To improve the convergence speed of the conventional P-type algorithm, an iterative learning control algorithm is proposed in [56], but its convergence analysis still adopts the λ norm. In theoretical analyses, the λ norm has mainly been used to measure the tracking error. The convergence condition of the control algorithm can be satisfied when the parameter λ is relatively large, but the maximum transient tracking error may then fall beyond the range allowed in practical engineering during repeated operation, leading to system collapse [57][58][59]. In [60,61], Ruan et al. studied the convergence of P-type and PD-type iterative learning control algorithms for linear time-invariant systems using the Lebesgue-p (Lp) norm and found that the convergence condition is independent of the value of the parameter λ and depends mainly on the system's own properties and the learning gain matrix.
Furthermore, in the sense of the Lebesgue-p norm, the convergence of a fractional-order iterative learning control algorithm for fractional-order linear systems is discussed in [62]. To address the above defects, this paper proposes, for a class of regular systems, a fast iterative learning algorithm that improves the convergence speed of the traditional P-type algorithm. It also avoids measuring the tracking error with the λ norm by using both the stored previous tracking error and the current tracking error, adjusting along the iteration axis based on the difference between the two error signals. With the successively modified control input, the fast iterative learning control algorithm achieves accelerated convergence in the Lebesgue-p norm under suitable sufficient conditions. This paper is organized as follows. Section 2 presents the problem description, its importance, and the basic mathematical background. Section 3 establishes convergence, gives the proof of error convergence and its analysis, and states sufficient conditions for the validity of the proposed algorithm. Section 4 elaborates the validation of the proposed algorithm and discusses the results. Finally, concluding remarks are given in Section 5.

Problem Description
Consider a class of regular systems with repetitive running characteristics:

ẋ_k(t) = Ax_k(t) + Bu_k(t),
y_k(t) = Cx_k(t) + Du_k(t),  t ∈ [0, T],  (1)

where k denotes the number of iterations, t ∈ [0, T] is the running time of the system, x_k(t) ∈ R^n is the state vector of the system during the kth run, and u_k(t) ∈ R^r and y_k(t) ∈ R^m are the control input vector and output vector of the kth run, respectively. The matrices A, B, C, and D have the appropriate dimensions.
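As a minimal sketch, a system of the form (1) can be simulated trial by trial with forward Euler integration; the matrices below are illustrative placeholders (not from the paper), with D ≠ 0 making the example regular in the paper's sense:

```python
import math

# Forward-Euler simulation of one trial of a regular system of form (1):
#   x'(t) = A x(t) + B u(t),  y(t) = C x(t) + D u(t)
# Illustrative 2-state SISO placeholder matrices (not from the paper).
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.1          # D != 0: the system is "regular" in the paper's sense

def simulate_trial(u, x0=(0.0, 0.0), dt=0.01):
    """Run one iteration (trial) with input samples u; return output samples y."""
    x = list(x0)
    y = []
    for uk in u:
        y.append(C[0] * x[0] + C[1] * x[1] + D * uk)
        dx0 = A[0][0] * x[0] + A[0][1] * x[1] + B[0] * uk
        dx1 = A[1][0] * x[0] + A[1][1] * x[1] + B[1] * uk
        x[0] += dt * dx0
        x[1] += dt * dx1
    return y

T, dt = 1.0, 0.01
N = round(T / dt)
u = [math.sin(2 * math.pi * k * dt) for k in range(N)]
y = simulate_trial(u, dt=dt)
```

Each ILC iteration reruns this trial with an updated input sequence while the initial state is reset, matching the repetitive-running assumption.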
It is considered that the initial state of the system in every iteration is consistent with the expected initial state; that is, x_k(0) = x_d(0), k = 0, 1, 2, ….

Hypothesis 1.
There is a unique ideal input u_d(t) that makes (2) true:

ẋ_d(t) = Ax_d(t) + Bu_d(t),
y_d(t) = Cx_d(t) + Du_d(t),  (2)

where y_d(t) denotes the expected trajectory and x_d(t) is the expected state.

Control Target.
This research's primary and vital control objective is to design a fast iterative learning control algorithm for the regular linear system described in (1) that overcomes the low convergence speed of the traditional P-type iterative learning control algorithm. Simultaneously, the convergence of the algorithm is analyzed using the Lebesgue-p norm, thereby avoiding the defect of measuring the tracking error with the λ norm.
For this control goal, the fast iterative learning control algorithm is designed with correction along the iteration axis based on the difference between the error signals of two runs:

u_{k+1}(t) = u_k(t) + L_{p1}e_k(t) + L_{p2}e_{k+1}(t) + L_{d1}Δe_k(t) + L_{d2}Δe_{k+1}(t),  (3)

where Δe_k(t) is called the previous difference signal and Δe_{k+1}(t) is called the difference signal of the current run. L_{p1} is the learning gain of the kth tracking error, L_{p2} is the feedback gain of the (k+1)th tracking error, and L_{d1} and L_{d2} are the learning gain and feedback gain of the difference signals, respectively.
According to algorithm (3), when L_{p2} and L_{d2} are set to zero, algorithm (3) reduces to the open-loop iterative learning control algorithm

u_{k+1}(t) = u_k(t) + L_{p1}e_k(t) + L_{d1}Δe_k(t).  (4)

When L_{d1}, L_{p2}, and L_{d2} are all set to zero, algorithm (3) becomes the traditional P-type iterative learning control algorithm

u_{k+1}(t) = u_k(t) + L_{p1}e_k(t).  (5)
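The three update laws can be written as a short sketch; the gain values are illustrative placeholders, and the difference signals Δe_k, Δe_{k+1} are supplied by the caller as defined in the paper:

```python
def fast_ilc_update(u_k, e_k, e_k1, de_k, de_k1,
                    Lp1=0.5, Lp2=0.2, Ld1=0.01, Ld2=0.002):
    """Fast ILC law (3): previous AND current errors plus difference signals."""
    return [u + Lp1 * a + Lp2 * b + Ld1 * c + Ld2 * d
            for u, a, b, c, d in zip(u_k, e_k, e_k1, de_k, de_k1)]

def open_loop_ilc_update(u_k, e_k, de_k, Lp1=0.5, Ld1=0.01):
    """Open-loop law (4): feedback gains Lp2 = Ld2 = 0."""
    return [u + Lp1 * a + Ld1 * c for u, a, c in zip(u_k, e_k, de_k)]

def p_type_update(u_k, e_k, Lp1=0.5):
    """Traditional P-type law (5): only the previous error corrects the input."""
    return [u + Lp1 * a for u, a in zip(u_k, e_k)]
```

Each function maps the stored input sequence of one trial to the input sequence of the next, which is the iteration-axis update the text describes.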
The question now raised is: what control law should be designed for the linear regular system (1) to make it convergent under algorithm (3), and what conditions should be chosen for L_{d1}, L_{p2}, and L_{d2}?

Preliminary Knowledge.
Convergence is obtained through the following definitions and lemmas. For a vector-valued function f: [0, T] ⟶ R^n, the λ norm [63] is defined as

‖f(·)‖_λ = sup_{t∈[0,T]} {e^{−λt}‖f(t)‖},  λ > 0.

The upper-bounded (sup) norm [10] and the Lebesgue-p norm [64] of the vector-valued function f are defined as

‖f(·)‖_sup = sup_{t∈[0,T]} ‖f(t)‖,
‖f(·)‖_p = (∫_0^T ‖f(t)‖^p dt)^{1/p},  1 ≤ p < ∞.

An important conclusion is given in [65]: the upper-bounded norm is a particular case of the Lebesgue-p norm, namely, lim_{p⟶∞} ‖f(·)‖_p = ‖f(·)‖_sup.

Lemma 1 (generalized Young inequality). If (g ∗ h)(t) = ∫_0^t g(t − τ)h(τ) dτ is the convolution of g and h, then ‖g ∗ h‖_r ≤ ‖g‖_p ‖h‖_q, where the parameters p, q, r satisfy 1 ≤ p, q, r ≤ +∞ and 1/p + 1/q = 1 + 1/r.
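These norms can be approximated numerically on a sampled signal; the sketch below uses an assumed test function and illustrates that the Lebesgue-p norm approaches the sup norm as p grows:

```python
import math

def lambda_norm(f, T, lam, n=1000):
    """lambda-norm: sup over [0, T] of exp(-lam*t)*|f(t)| (grid approximation)."""
    return max(math.exp(-lam * i * T / n) * abs(f(i * T / n)) for i in range(n + 1))

def sup_norm(f, T, n=1000):
    """Upper-bounded (sup) norm on a grid."""
    return max(abs(f(i * T / n)) for i in range(n + 1))

def lebesgue_p_norm(f, T, p, n=1000):
    """(integral_0^T |f(t)|^p dt)^(1/p), rectangle-rule approximation."""
    dt = T / n
    return (sum(abs(f(i * dt)) ** p for i in range(n)) * dt) ** (1.0 / p)

f = lambda t: math.sin(2 * math.pi * t)
l2 = lebesgue_p_norm(f, 1.0, 2)      # L2 norm of sin over one period: ~0.707
l100 = lebesgue_p_norm(f, 1.0, 100)  # large p: approaches the sup norm 1
```

Setting λ = 0 in the λ norm recovers the sup norm, which makes explicit why a large λ discounts late-interval errors.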

Convergence Analysis
Theorem 1. Apply the designed algorithm (3) to system (1) under Hypothesis 1, and suppose that the following convergence conditions are satisfied. Then, as the number of iterations k ⟶ ∞, the tracking error of the system in the Lebesgue-p norm tends to zero; that is, lim_{k⟶∞} ‖e_{k+1}(·)‖_p = 0. The proof follows from system (1).
For Hypothesis 2 we need the following assumption. Assumption 1. Assume that the initial state coincides with the expected initial state at every iteration. We consider a class of single-input single-output linear time-invariant systems as follows: the system runs on the interval [0, T]; x(t) ∈ R^n is the n-dimensional state variable; u(t) and y(t) are the control input and output, respectively; and A, B, and C are matrices of corresponding dimensions, with CB ≠ 0 assumed. Without loss of generality, the dynamics of system (1) are taken to be not entirely known, but the initial state of system (1) is resettable when running repeatedly on the interval [0, T], and the desired ideal trajectory is given. To realize the system's ultimate complete tracking of the ideal trajectory, we construct a P-type iterative learning control law with feedback information.
Obviously, in control law (3) above, when L_{p2} and L_{d2} are set to zero, it degenerates to the specific iterative learning control law (4). Furthermore, setting L_{d1}, L_{p2}, and L_{d2} all to zero yields the typical P-type ILC law (5). When the control input u(t) in system (1) is replaced by u_{k+1}(t) from control law (3), (4), or (5), the corresponding system dynamics follow, where x_{k+1}(t), u_{k+1}(t), and y_{k+1}(t) are the corresponding state variables, control input, and output of the system at the (k+1)th iteration. In this paper, the Lebesgue-p norm is used to demonstrate the convergence of the algorithm. For ease of comparison, the λ norm, upper-bounded norm, and Lebesgue-p norm are defined as follows. If f(t) = [f_1(t), …, f_m(t)]^T is a vector-valued function and λ is a positive real number, then the λ norm of f can be expressed as ‖f(·)‖_λ = sup_{t∈[0,T]} {e^{−λt}‖f(t)‖}. The upper-bounded norm [59] and Lebesgue-p norm [65] of f are ‖f(·)‖_sup = sup_{t∈[0,T]} ‖f(t)‖ and ‖f(·)‖_p = (∫_0^T ‖f(t)‖^p dt)^{1/p}. In [65], an important conclusion is that lim_{p⟶∞} ‖f(·)‖_p = ‖f(·)‖_∞ = ‖f(·)‖_sup.
That is, the upper-bounded norm is a particular case of the Lebesgue-p norm.
Proof of Error Convergence. According to Hypothesis 1, there is a unique ideal input. The above-mentioned Δ is an arbitrary value, the subscript k represents the number of iterations, and L_p and L_d denote the proportional and differential learning gain matrices, respectively. □ Hypothesis 2. PD-type iterative learning controller (3) is used for system (1). If the condition ρ < 1 is met, then, as the number of iterations approaches infinity, the tracking error converges in the sense of the Lebesgue-p norm.
(1) When t ∈ [0, h/a_k), because of the deviation of the initial state value, the system output cannot follow the desired trajectory. (2) In the period t ∈ [h/a_k, T], the tracking error monotonically tends to zero, and the system output tracks the expected trajectory; i.e., ‖e_{k+1}(·)‖_p ≤ ρ‖e_k(·)‖_p, k = 1, 2, …. According to Hypothesis 1, substituting (3) into (20) gives (21). Rearranging (21) yields (22). Taking the Lebesgue-p norm on both sides of (22) and applying the Young inequality gives (23), from which (24) and (25) follow. The condition of the theorem, (26), shows that ρ < 1 is satisfied and that lim_{k⟶∞} ‖e_{k+1}(·)‖_p = 0 holds; that is, as the number of iterations approaches infinity, the tracking error of the system approaches zero. Note 1. In general, the Young inequality based on generalized convolution also holds for vector-valued functions. The conclusion obtained in this paper therefore also holds for multi-input multi-output systems under the Lebesgue-p norm defined for vector-valued functions as described in this paper. The demonstration only requires replacing the single-input single-output scalars with the corresponding multidimensional vectors and following vector algebra, so it is not repeated here.
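The generalized Young inequality invoked in the proof (Lemma 1), ‖g ∗ h‖_r ≤ ‖g‖_p ‖h‖_q with 1/p + 1/q = 1 + 1/r, can be checked numerically; the test functions below are arbitrary illustrative choices:

```python
import math

def conv(g, h, dt):
    """Discrete approximation of (g*h)(t) = integral_0^t g(t - tau) h(tau) d(tau)."""
    return [sum(g[k - j] * h[j] for j in range(k + 1)) * dt for k in range(len(g))]

def lp_norm(samples, dt, p):
    """Rectangle-rule Lebesgue-p norm of a sampled signal."""
    return (sum(abs(v) ** p for v in samples) * dt) ** (1.0 / p)

dt, T = 0.002, 1.0
n = round(T / dt)
g = [math.exp(-2 * i * dt) for i in range(n)]   # e.g. a C*exp(At)*B-type kernel
h = [math.sin(5 * i * dt) for i in range(n)]    # e.g. a tracking error signal

p, q, r = 1.0, 2.0, 2.0                         # 1/1 + 1/2 = 1 + 1/2
lhs = lp_norm(conv(g, h, dt), dt, r)
rhs = lp_norm(g, dt, p) * lp_norm(h, dt, q)     # Young: lhs <= rhs
```

This is exactly the estimate used to bound the convolution terms that arise when the state equation is substituted into the error recursion.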

Note 2.
When L_{p2} = L_{d2} = 0, control law (3) degenerates to the specific PD-type iterative learning control law, and the convergence condition ρ = |1 − CBL_{d1}| + ‖C exp(A·(·))(ABL_{d1} + BL_{p1})‖_1 < 1 shows that, in the sense of the Lebesgue-p norm, the convergence of the PD-type iterative learning control law (3) depends not only on the input-output matrix product CB and the value of the differential learning gain L_{d1}, but also on the system state matrix A and the value of the proportional learning gain L_{p1}. Although the convergence condition in the sense of the λ norm, ρ* = |1 − CBL_{d1}| < 1, is conventional, in this paper the error metric and the convergence analysis do not depend on the selection of the parameter λ, and the convergence condition ρ < 1 essentially shows that the system dynamics and the learning gains of the control law play the decisive role in convergence.
Note 3. Compared with the convergence conditions under the λ norm, the convergence conditions given in this paper are conservative, but convergence no longer depends on selecting the value of λ. At the same time, the convergence condition obtained here decomposes as ρ = ρ_1 + ρ_2 < 1 into contributions ρ_1 and ρ_2 of the learning and feedback terms, which gives greater freedom when selecting the feedback gain and the learning gain.

Algorithm Application to Soft Robotic Position Control.
The soft structure has unlimited degrees of freedom; therefore, building a model as accurate as that of a rigid structure is challenging. This makes fine-grained control difficult, especially when tuning the dynamic response. Serious concerns have therefore been raised, especially in rehabilitation applications, where fine-grained control of muscles supported by soft structures is compulsory. A further illustration is high-speed applications, such as industrial robots with soft tentacles, where a finely tuned dynamic response is necessary. As an emerging field, soft robotics has very limited research on precise modeling and dynamic response tuning [67]. One method to improve the tracking accuracy and performance of flexible, inflatable manipulators is to combine flexible structures with stiff parts. Compared with a completely soft design, such a hybrid design usually has lower overall compliance and higher inertia, but the number of degrees of freedom is also reduced. As a result, the control authority over the remaining degrees of freedom can be amplified, accordingly improving the tracking control performance. The literature [68][69][70][71] describes examples of this kind.
The rigid-body dynamics of the soft robotic arm are driven by the pressure difference between the two actuators, Δp = p_A − p_B, as shown in Figure 1. A positive pressure difference Δp accelerates the arm in the positive α direction (compare Figure 2). To describe the dynamics of the robotic arm with Δp as input and arm angle α as output, system identification is used, applying the same identification procedure as in [72]. The following continuous-time transfer function is obtained, where the parameter values are κ = 7.91 rad/bar, ω_0 = 14.14 1/s, and δ = 0.31. The discrete-time model is then obtained with a sampling time of 0.02 s.
Here k denotes the time index, and the states are (x_1, x_2)^T = (α, α̇). The arm deflection angle α is directly measurable and is normalized to (π, 10π). u is the control input, with initial condition u_0 = 0. The proposed controller parameters are L_{p1} = 0.5, L_{d1} = 0.01, L_{p2} = 0.2, L_{d2} = 0.002, and the desired trajectory is y_d = 30° sin(2πt).
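As a sanity check, assuming the identified model has the standard second-order form G(s) = κω_0²/(s² + 2δω_0·s + ω_0²) (the exact transfer function is not reproduced in the text, so this form is an assumption), a forward-Euler discretization at the 0.02 s sampling time should settle to the DC gain κ under a constant input:

```python
# Forward-Euler discretization of the ASSUMED second-order soft-arm model
#   alpha'' = -w0^2*alpha - 2*delta*w0*alpha' + kappa*w0^2*u
# using the paper's parameters kappa = 7.91 rad/bar, w0 = 14.14 1/s, delta = 0.31.
kappa, w0, delta, Ts = 7.91, 14.14, 0.31, 0.02
x1, x2 = 0.0, 0.0        # x1 = alpha [rad], x2 = d(alpha)/dt [rad/s]
u = 0.01                 # constant pressure difference [bar]
for _ in range(round(5.0 / Ts)):        # 5 s is ample settling time
    dx1 = x2
    dx2 = -w0 ** 2 * x1 - 2 * delta * w0 * x2 + kappa * w0 ** 2 * u
    x1 += Ts * dx1
    x2 += Ts * dx2
# DC gain of the assumed form is kappa, so alpha settles near kappa*u = 0.0791 rad
```

With these parameters the Euler recursion is stable (the discrete poles have magnitude about 0.95), so the fixed point x_1 = κu is reached well within 5 s.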
When algorithm (3) is applied to the soft robotic system (28), the system output approaches its desired trajectory. It can be seen from Figure 1 that after the second iteration the control effort of learning algorithm (3) is already remarkable, but the error is still significant. After a few iterations, as shown in Figure 3, the error reaches its convergence limit when compared against the sup norm. The sup-norm error is more prominent and does not converge to zero. The reason is that, when iterative learning control algorithm (3) is used for system (1), if the condition ρ < 1 is met, where ρ = ‖I − CL_d‖ + ‖C exp(A(·))(L_p + AL_d)‖_1, then, even as the number of iterations approaches infinity, the sup norm remains significant. When t ∈ [0, h/a_k), because of the deviation of the initial state value, the system output cannot follow the desired trajectory, so the error does not converge to zero as expected. In contrast, over the period t ∈ [h/a_k, T], the tracking error monotonically tends to zero and the system output ultimately tracks the expected output; i.e., ‖e_{k+1}(·)‖_p ≤ ρ‖e_k(·)‖_p, k = 1, 2, ….
Algorithm (3) uses previous and current error information, and its convergence is proved through sufficient conditions. Under the above conditions, when the algorithms proposed in (3) and (4) are applied to the soft robotic system (28) with an arbitrary initial state, the system tracking errors are shown in Figure 3. Measured in the Lebesgue-p norm, the errors under the proposed algorithms (3) and (4) tend monotonically to zero as the iteration number increases. The tracking error reaches its convergence limit after algorithm (3) executes four iterations, whereas algorithm (4) requires more iterations to approach the convergence limit and cannot reach zero. Therefore, under appropriately chosen learning gains, algorithm (3) has a faster convergence speed and higher control accuracy than algorithms (4) and (5). The updating law of algorithm (3) includes feedback gains acting on current and previous error information, e_k(t), Δe_k(t), and Δe_{k+1}(t). As the number of iterations k ⟶ ∞, the tracking error of the system in the Lebesgue-p norm tends to zero, the output of the system follows within the finite time interval t ∈ [0, T], and ultimately the desired trajectory is tracked. Algorithm (3) is robust and guarantees monotonic error convergence for position tracking, especially in soft robotic applications. This robust topology can also be applied to higher-order dynamic systems with small modifications of the proportional and derivative learning gains according to the system requirements.

Validation for Typical PMSM Servo Position Control
System. A typical PMSM (permanent magnet synchronous motor) servo position control system is taken as an example to validate the proposed algorithm. The standard state-space linear servo position control model of the PMSM can be described as follows, where the value of each parameter is given in Table 1. The state-space equation of the system in standard form can then be expressed with states x = [θ(t), ω(t)]^T and control input u = T_e(t) = k_t i_q(t), for which each matrix of the system can be calculated accordingly.

To validate the Lebesgue-p norm approach proposed in this paper, we take the parameters as follows: rotational inertia J = 0.004 kg·m² and viscous friction coefficient B_f = 0.0001 N·m/(rad/s). For the ILC, the gains are L_{p1} = 0.8, L_{d1} = 0.01, L_{p2} = 0.3, L_{d2} = 0.006, and the desired trajectory for the system is y_d = 50° sin(2πt). The controller's effort is shown in Figure 2, which depicts the output of system (31) attempting to follow the desired position. The figure displays the simulation results and also shows the control outcome of particular iterations of the method. As can be seen, the performance in the second iteration is not yet good: the initial errors are significant and the delay is relatively apparent. The ILC action then further reduces the error as it pursues its goal. The error converges rapidly to its limit after a limited time and several iterations, and the method then tracks the target position precisely.
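The PMSM mechanical model can be sketched under the common assumption Jθ̈ + B_f·θ̇ = T_e; since the matrices of (31) are not reproduced in the text, the state-space form below is reconstructed from that assumption:

```python
# Reconstructed PMSM servo state-space model (assumed J*theta'' + Bf*theta' = Te):
J, Bf = 0.004, 0.0001
A = [[0.0, 1.0], [0.0, -Bf / J]]    # states x = [theta, omega]^T
B = [0.0, 1.0 / J]                  # input u = Te = kt * iq
C = [1.0, 0.0]                      # measured output: rotor position theta

def simulate(u_seq, dt=0.001):
    """Forward-Euler simulation of one run; returns the position samples."""
    theta, omega = 0.0, 0.0
    y = []
    for u in u_seq:
        y.append(C[0] * theta + C[1] * omega)
        d_theta = A[0][1] * omega
        d_omega = A[1][1] * omega + B[1] * u
        theta += dt * d_theta
        omega += dt * d_omega
    return y

# Constant torque Te = 0.004 N*m gives ~1 rad/s^2 acceleration, so theta(1 s)
# is close to 0.5 rad (slightly less due to viscous friction).
y = simulate([0.004] * 1000)
```

The same trial function can be rerun under the ILC update laws of Section 2, with the stated gains, to reproduce the iteration-by-iteration error decay discussed here.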
By comparison, these observations persist even after modifying the controller parameters several times. As shown in Figure 2, the system's desired and actual positions can be seen, with the output updated until it accurately follows the desired level. The controller's action is stable and sufficient for the error to converge to its monotone convergence limit under the stated conditions. The system's tracking curve in the second iteration of the learning system is shown in Figure 2, where the error curve indicates that the error is still too high. Figure 4 shows the errors over different iterations. The error trajectory is already greatly decreased, and comparing the largest error in the tenth iteration with the second iteration (results shown in Figure 4) shows very good tracking accuracy for algorithm (3) compared with the other two algorithms. The remaining error is small enough to meet the demands of the system. Therefore, the proposed Lebesgue-p norm scheme for accurate position tracking is significantly faster than algorithms (4) and (5). The sufficient conditions and the Lebesgue-p error criterion indicate that the findings are acceptable and that the mechanism is stable enough to control the PMSM servo position. This approach can also be extended to other complex speed and position servo systems across a broad range of traditional automation applications.

Application and Validity for Other Linear Systems.
The following linear system, taken from [73], is used to further validate the proposed algorithm's effectiveness.
where t ∈ [0, 2]. Algorithm (3) was used to control system (32). The desired trajectory was assumed to be y_d(t) = sin(5t), and the initial state of the system was x_1(0) = 0, x_2(0) = 0. The initial control was set as u_0(t) = 0. So that the convergence condition is satisfied, the control parameters are chosen as L_{p1} = 0.3, L_{d1} = 0.1, L_{p2} = 0.2, and L_{d2} = 0.1. To validate the effectiveness of algorithm (3) proposed in this paper, simulation comparisons are made with the open-loop algorithm (4) and the traditional P-type algorithm (5). The simulation results are shown in Figures 5-7. Figure 5 shows the output tracking curves at different iteration numbers under algorithm (3). Figure 6 shows the tracking error curves in the sense of the upper-bounded norm and the Lebesgue-2 norm; Figure 7 shows the tracking error curves of algorithms (3)-(5) in the Lebesgue-2 norm sense.
As shown in Figure 5, after the 20th iteration the system output fully tracks the expected trajectory in finite time. It can be seen from Figure 6 that both the Lebesgue-2 norm and the upper-bounded norm of the error under algorithm (3) converge to 0. As can be seen from Figure 7, algorithm (3) has the highest convergence rate, algorithm (4) comes second, and algorithm (5) has the lowest convergence rate. The reason is that algorithm (4) adds, on top of algorithm (5), the difference signal of the error between two adjacent iterations. Algorithm (3) uses the current error and the previous error to form the difference signal, while algorithm (4) uses only the previous error. Compared with algorithm (4), algorithm (3) makes full use of the current error information. To better illustrate the effectiveness of algorithm (3) designed in this paper, the numerical values of the tracking errors of algorithms (3)-(5) at different iteration numbers are given in Table 2. Table 2 shows that the tracking error of algorithms (3)-(5) in the first iteration is 1.217316. After the 15th iteration, the error of algorithm (5) is 0.07538, that of algorithm (4) is 0.024335, and that of algorithm (3) is 0.003683. From the vertical data in Table 2, the tracking errors of all three algorithms decrease successively as the iteration number increases. From the horizontal data in Table 2, at the same iteration number the tracking error of algorithm (3) is the smallest, followed by algorithm (4), with algorithm (5) the largest. Therefore, it is easily observed from Table 2 that the convergence speed of the fast iterative learning control algorithm (3) designed in this paper is significantly higher than that of algorithms (4) and (5).
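The qualitative ranking above can be reproduced on a toy regular system; the sketch below compares the traditional P-type law (5) with a simplified version of the fast law (3) (difference-signal gains set to zero for clarity). All numerical values are placeholders chosen for illustration, not the paper's system (32):

```python
import math

# Scalar regular plant x(t+1) = A*x + B*u, y = C*x + D*u with D != 0.
A, B, C, D = 0.8, 0.05, 1.0, 1.0
N = 50
yd = [math.sin(5 * t / N) for t in range(N)]

def trial(u_ff, Lp2=0.0):
    """One trial; Lp2 feeds back the current-iteration error (the algebraic
    loop through the direct term D is solved exactly)."""
    x, e = 0.0, []
    for t in range(N):
        # y = C*x + D*(u_ff + Lp2*(yd - y))  =>  solve for y:
        y = (C * x + D * u_ff[t] + D * Lp2 * yd[t]) / (1.0 + D * Lp2)
        err = yd[t] - y
        x = A * x + B * (u_ff[t] + Lp2 * err)
        e.append(err)
    return e

def run_ilc(Lp1, Lp2, iters=10):
    u, history = [0.0] * N, []
    for _ in range(iters):
        e = trial(u, Lp2)
        history.append(max(abs(v) for v in e))       # sup norm of the error
        u = [u[t] + Lp1 * e[t] for t in range(N)]    # iteration-axis update
    return history

hist_p = run_ilc(Lp1=0.8, Lp2=0.0)     # traditional P-type law (5)
hist_fast = run_ilc(Lp1=1.2, Lp2=0.3)  # simplified fast law (3)
```

With these placeholder gains both error operators are contractions (their lifted l1-norm bounds are below one), so both error sequences decrease monotonically, while the current-error feedback term additionally shrinks the error seen within every trial.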

Validation for Other Linear System.
To illustrate the tracking capability of algorithm (3) for different expected signals, let us assume the expected trajectory y_d as follows, with the same amplitude as the sinusoidal trajectory above. The tracking of the expected trajectory by the output curve at different iteration numbers is shown in Figure 8, which shows the tracking at iterations 2, 10, and 15. It can be seen from Figures 5 and 8 that the control algorithm (3) designed in this paper achieves complete tracking of different expected trajectories, both slowly varying and abruptly changing, within the finite time interval as the iteration number increases.
The newly proposed iterative learning updating law includes feedback gains acting on current and previous error information, e_k(t), Δe_k(t), and Δe_{k+1}(t). As the number of iterations k ⟶ ∞, the system's tracking error in the Lebesgue-p norm tends to zero. The system's output follows within the finite time interval specified for this system, t ∈ [0, 2], and ultimately the desired trajectory y_d is tracked perfectly. The output of the system is shown for different iterations in Figure 8. When the tracking error converges after 15 or more iterations and tends to zero, the system's output precisely follows the desired trajectory y_d. Accordingly, algorithm (3) in the sense of the Lebesgue-p norm is robust, satisfies ρ < 1, and lim_{k⟶∞} ‖e_{k+1}(·)‖_p = 0 holds: as the number of iterations approaches infinity, the tracking error of the system approaches zero. This robust control topology can also be applied to higher-order dynamic systems with small changes in the proportional and derivative learning gains as required by the system. Furthermore, it can also work for motor position control, aircraft altitude and latitude control, angle-of-attack control, soft articulated robot position control, satellite positioning systems, and piezoelectric nanopositioning control systems.

Conclusion
This research paper has discussed a fast iterative learning control algorithm, analyzed in the Lebesgue-p norm, for a class of regular linear systems with a direct input-output transmission term. The convergence of the algorithm is proved under the Lebesgue-p norm, and sufficient conditions are given for the convergence of the norm form of the algorithm. This algorithm not only has a higher convergence rate than the traditional P-type algorithm, but also avoids the defect of measuring the tracking error with the λ norm and increases the freedom in selecting the learning gains. Due to the convolution limitation of Lemma 1, the algorithm in this paper is applicable only to regular linear systems. Therefore, in future studies, the convergence of typical nonlinear systems in the Lebesgue-p norm can be further analyzed.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.