Heuristic Determination of Resolving Controls for Exact and Approximate Controllability of Nonlinear Dynamic Systems

Dealing with practical control systems, it is equally important to establish the controllability of the system under study and to find the corresponding control functions explicitly. The most challenging problem on this path is the rigorous analysis of the state constraints, which can be especially sophisticated in the case of nonlinear systems. However, heuristic considerations related to physical, mechanical, or other aspects of the problem may suggest specific hierarchies of controls containing sets of free parameters. Such an approach reduces the computational complexity of the problem by reducing the nonlinear state constraints to nonlinear algebraic equations with respect to the free parameters. This paper is devoted to the heuristic determination of control functions providing exact and approximate controllability of dynamic systems with nonlinear state constraints. Using the recently developed approach based on Green's function method, the controllability analysis of nonlinear dynamic systems is, in general, reduced to nonlinear integral constraints with respect to the control function. We construct parametric families of control functions having certain physical meanings, which reduce the nonlinear integral constraints to systems of nonlinear algebraic equations. Time-harmonic, switching, impulsive, and optimal stopping regimes are considered. Two concrete examples arising from engineering help to reveal the advantages and drawbacks of the technique.


Introduction
The ability of a controlled system to reach a required state at a given instant by means of attached controllers is called controllability. It is one of the most crucial properties of applied control systems, along with stability and reliability. Generally, two main types of controllability are considered for deterministic systems: exact and approximate. If, by a specific choice of admissible controls, the system can be transferred from a given state to a required state within a finite amount of time exactly, it is called exactly controllable. If the system is not exactly controllable by any choice of admissible control, but its state implemented at the required instant for at least one admissible control is "sufficiently" close to the designated terminal state, it is called approximately controllable. Evidently, exact controllability implies approximate controllability with arbitrarily small accuracy. However, in general, approximate controllability does not imply exact controllability. For further introduction to the concept of controllability with diverse applications, refer to the major contributions [1-10] and related references therein. The concept of controllability with applications in engineering has been studied by many authors, some of whose work can be found in [11-16] and related references therein. The results obtained in this paper can be used in relevant applied studies in engineering, e.g., [17-20], for the derivation of diverse control regimes.
Let us describe the aforesaid mathematically. Assume that the state of a dynamic system is characterized by the vector-function w : R^n × R^m × R_+ → R^k, n, m, k ∈ N, satisfying some constraints, e.g., a governing equation, boundary, and initial conditions, among other possible constraints (hereinafter, all those constraints are referred to as state constraints). Then, mathematically, the controllability of the system is verified by evaluating the mismatch between the designated state, w_T : R^n → R^k, and the state implemented by a specific choice of admissible controls u : R_+ → R^m at the given instant T, i.e., the residue

R_T(u) = ||w(u, x, T) − w_T(x)||_W. (1)

Thus, if

R_T(u) = 0 (2)

for at least one admissible control u ∈ U, then the system is exactly controllable. If this is not the case, but for at least one admissible control u ∈ U,

R_T(u) ≤ ε, (3)

for a given precision ε > 0, then the system is approximately controllable.
Here u is the control vector-function, U is the set of admissible controls, T > 0 is the control time, W is the space of the terminal states w_T (an appropriate Hilbert space), and ||·||_W is the norm in W. Here the subscript T at the residue is a short form denoting the obvious dependence R_T(·) = R(·, T).
Hereinafter, we assume that the set of admissible controls U has the general form given in [10], constraining, in particular, the support of the control, where supp(u) = {t ∈ R_+ : u(t) ≠ 0} is the support of u and the controls belong to an appropriate Hilbert space. Note that U can be complemented by some other possible constraints on u. For instance, in the case of boundary controls, if the required state transition is necessarily time-continuous, then U is complemented by the compatibility conditions of the initial and boundary conditions (see Section 3 below). The admissible controls providing (2) or (3) are called resolving controls (in the corresponding sense). The set of all resolving controls is denoted by U_res. Obviously, U_res ⊆ U. If U_res = ∅, lack of controllability occurs (see [21-25] and related references therein).
The analysis of controllability for a particular control system can be quite sophisticated and can require burdensome computational costs. This may happen in the case of systems with singularities or discontinuities, uncertain systems, systems with nonlinear state constraints, and so on. However, the evaluation of (1) on U can be made less costly if the state vector w, that is, the solution of the state constraints, is found explicitly. Unfortunately, there is no unified technique that allows solving general state constraints explicitly. However, there are such powerful techniques as the Adomian decomposition method [26], the homotopy analysis method [27], and the nonlinear Green's function method [28, 29] for finding an approximate analytical solution to general types of nonlinear differential equations. Nevertheless, even having the explicit dependence R_T = R_T(u), it is a very challenging problem to find resolving controls providing (2) or (3) explicitly.
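As an illustration of how such approximate analytical techniques work, the following sketch applies the Adomian decomposition method to the toy initial value problem y' = −y², y(0) = 1, whose exact solution is y = 1/(1 + t). The problem and all names below are chosen for illustration only and are not taken from this paper.

```python
def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_int(a):
    # antiderivative with zero constant term: integral from 0 to t
    return [0.0] + [c / (k + 1) for k, c in enumerate(a)]

def poly_eval(a, t):
    return sum(c * t ** k for k, c in enumerate(a))

# Adomian decomposition for y' = -y^2, y(0) = 1 (exact solution 1/(1 + t)).
# For the nonlinearity N(y) = y^2 the Adomian polynomials are
# A_n = sum_{i=0}^{n} y_i * y_{n-i}, and y_{n+1} = -integral_0^t A_n dtau.
terms = [[1.0]]                      # y_0 = y(0)
for n in range(8):
    A_n = [0.0]
    for i in range(n + 1):
        A_n = poly_add(A_n, poly_mul(terms[i], terms[n - i]))
    terms.append([-c for c in poly_int(A_n)])

y_series = [0.0]
for term in terms:
    y_series = poly_add(y_series, term)

approx = poly_eval(y_series, 0.1)    # partial sum of the decomposition at t = 0.1
exact = 1.0 / 1.1
```

Here the partial sum reproduces the geometric series 1 − t + t² − ⋯ of the exact solution, showing the rapid convergence such decomposition methods can exhibit near t = 0.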
In this paper we develop a systematic algorithm for the heuristic determination of explicit expressions of resolving controls providing exact or approximate controllability at the required instant T (i.e., (2) or (3)) for systems with nonlinear state constraints. Parametric hierarchies of functions are constructed to satisfy the resolving systems derived by the recently developed Green's function approach [10]. In the case of exact controllability, the resolving system is expanded into series of orthogonal functions. The considered controls include trigonometric, piecewise-constant, impulsive, piecewise-continuous, constant, quasi-polynomial, trigonometric-polynomial, and stopping regimes. We cover the cases of both distributed and boundary controls. A rigorous comparison of our approach with the well-known moments problem approach shows that, under proper restrictions, the known explicit solutions obtained by the moments problem approach coincide with ours. We test the technique on the examples of a finite elastic beam subjected to an external force with an uncertainty and the viscous Burgers' nonlinear equation. Both cases have diverse applications in modern engineering.
The paper is organized as follows. In Section 2 we outline the two most commonly used methods for the analysis of exact and approximate controllability of nonlinear dynamic systems. Then, in Section 3 we derive diverse hierarchies of resolving controls and the corresponding constraints on the free parameters for both exact and approximate controllability analysis. In Section 4 we perform a comparative analysis between the controls derived by our approach and those derived by the moments problem approach. Finally, we demonstrate how the derived explicit expressions of resolving controls can be used to ensure exact or approximate controllability of particular dynamic systems.

Existing Approaches
The verification of (2) or (3) suffices only to establish whether a particular system is exactly or approximately controllable or not. However, when dealing with real-life control systems, from the practical implementation point of view it is equally important to find admissible controls providing either type of controllability explicitly, which would significantly reduce controller performance costs. There exist several efficient techniques for analyzing particular systems for exact or approximate controllability. In this section we outline the two most commonly used ones.
The first technique involves a norm-minimizing algorithm for (1) (see [6]). In the general statement, the problem is formulated as a constrained minimization problem: minimize R_T(u) over u ∈ U, where w is subject to the state constraints. If the minimum is attained and equals zero, then the minimizer provides exact controllability of the system at T. If the minimum is not zero but remains smaller than the required precision, then the system is approximately controllable. Otherwise, we arrive at the lack of controllability. In other words, following this approach, a single numerical procedure identifies whether the system is exactly controllable, approximately controllable, or not controllable at all. The disadvantage of this approach is the burdensome computational cost (machine time) required to run the corresponding numerical scheme in the case of nonlinear state constraints.
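To make the idea concrete, here is a minimal sketch of the norm-minimizing approach for a toy scalar system w' = −w + u, w(0) = 1, with a constant control u ≡ c and designated terminal state w_T = 0. The closed-form terminal state and the ternary search are illustrative assumptions, not part of this paper.

```python
import math

T = 1.0

def terminal_residue(c):
    # For w' = -w + c, w(0) = 1, the explicit solution gives
    # w(T) = e^{-T} + c * (1 - e^{-T}); the residue is |w(T) - 0|.
    wT = math.exp(-T) + c * (1.0 - math.exp(-T))
    return abs(wT)

# the residue is unimodal (V-shaped) in c, so a simple ternary search suffices
lo, hi = -2.0, 2.0
for _ in range(200):
    m1 = lo + (hi - lo) / 3.0
    m2 = hi - (hi - lo) / 3.0
    if terminal_residue(m1) < terminal_residue(m2):
        hi = m2
    else:
        lo = m1
c_star = 0.5 * (lo + hi)
```

Since the attained minimum is zero, the constant control c* renders the toy system exactly controllable at T; a positive minimum below ε would instead indicate approximate controllability.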
The second approach (see [10]) is based on the system

w(u, x, T) − w_T(x) = 0 a.e. in Ω, (7)

which is equivalent to exact controllability of the system according to the definition of the norm. Here Ω is an open subset of R^n occupied by the system. Then the set of resolving controls is defined as the set of admissible controls satisfying (7). In some exceptional cases, w is found from the state constraints explicitly, so that (7) becomes an explicit constraint on u. Then, the Newton-Raphson iteration (or a similar scheme) can be used to determine resolving controls approximately with the required precision. The disadvantage of this approach is that the rigorous derivation of w from the state constraints is complicated and, in some cases, impossible. Another disadvantage may be the necessity of using derivatives of (7) in the numerical procedure.
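When the terminal state is known explicitly as a function of a scalar control parameter, constraint (7) becomes a root-finding problem. The following sketch runs the Newton-Raphson iteration on an assumed terminal mismatch G(u) = tanh(u) − 0.5, a hypothetical example not taken from the paper.

```python
import math

def G(u):
    # assumed explicit terminal mismatch w(u, T) - w_T for a scalar control u
    return math.tanh(u) - 0.5

def dG(u):
    # derivative of the mismatch, needed by the Newton-Raphson scheme
    return 1.0 / math.cosh(u) ** 2

u = 0.0                     # initial guess
for _ in range(50):
    step = G(u) / dG(u)     # Newton-Raphson update
    u -= step
    if abs(step) < 1e-14:
        break
```

The need for dG above illustrates the drawback mentioned in the text: the numerical procedure requires derivatives of (7).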

Heuristic Determination of Resolving Controls
In this section we describe a method of heuristic determination of resolving controls providing exact or approximate controllability of dynamic systems with nonlinear state constraints. Using the Green's function approach [10], the state constraints are reduced to nonlinear integral constraints on the admissible controls. Parametric solutions to the reduced constraints are constructed explicitly. Eventually, a system of nonlinear algebraic equations for the parameters is derived.
For the sake of simplicity, hereinafter we restrict ourselves to the one-dimensional case.All the derivations and transformations can be straightforwardly generalized to the case of higher dimensions.

Exact Controllability.
In the general statement, the exact controllability of a particular system in one space dimension is equivalent to an equality-type constraint of the form [10]

∫_0^T f_T(u(τ), x, τ) dτ = φ_T(x), x ∈ [a, b], (9)

where f_T and φ_T are given functions. The subscript T indicates that the corresponding quantity depends on T.
The explicit determination of resolving controls from (9) by direct methods can be quite complicated, since both of its sides depend on x, while the control depends only on t. This means that (9) cannot, in general, be considered as a Fredholm integral equation of the first kind. Nevertheless, in cases when

f_T(u, x, t) = u(t) K_T(x, t) in U × [a, b] × [0, T], (10)

for an integrable function K_T, (9) becomes a Fredholm integral equation of the first kind, which can be solved efficiently for generic kernels K_T (refer to [30] for details). An efficient way of solving (9) in general can be developed using the expansion of f_T and φ_T into series of orthogonal functions. Let {X_k}_{k=1}^∞ be a family of orthogonal functions in [a, b] (in some specific cases a family of functions orthogonal with respect to a weight can also be involved); that is,

∫_a^b X_k(x) X_j(x) dx = δ_kj,

where δ_kj is the Kronecker delta. Denote by f_k and φ_k the expansion coefficients of the functions f_T and φ_T, respectively, so that

f_T(u, x, t) = Σ_{k=1}^∞ f_k(u, t) X_k(x), φ_T(x) = Σ_{k=1}^∞ φ_k X_k(x)

(in real problems, f_T and φ_T are expressed in terms of the Green's function of the system under study and its integral [10], so their expansions are convergent). Then, since {X_k}_{k=1}^∞ are orthogonal, (9) is equivalent to the infinite system

∫_0^T f_k(u(τ), τ) dτ = φ_k, k ∈ N. (16)

Note that (16) can be treated in different ways. It is an infinite system of Fredholm integral equations of the first kind, which can be solved efficiently numerically [30, 31]. On the other hand, it can be treated as an infinite-dimensional problem of moments (see Section 4 below). In specific problems, depending, for example, on the required accuracy of computations, the infinite system (16) is truncated and considered for some finite k ≤ N.
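The expansion step can be sketched numerically. Below, a sample designated-state function φ(x) = x(1 − x) is expanded over the orthonormal sine family on [0, 1], and the L²-truncation error is estimated via Parseval's identity; the function and the family are illustrative choices, not those of the paper.

```python
import math

def inner(f, g, n=4000):
    # trapezoidal approximation of the inner product on [0, 1]
    h = 1.0 / n
    s = 0.5 * (f(0.0) * g(0.0) + f(1.0) * g(1.0))
    for i in range(1, n):
        x = i * h
        s += f(x) * g(x)
    return s * h

def X(k):
    # orthonormal family X_k(x) = sqrt(2) sin(k pi x) on [0, 1]
    return lambda x: math.sqrt(2.0) * math.sin(k * math.pi * x)

phi = lambda x: x * (1.0 - x)        # sample designated-state function
coeffs = {k: inner(phi, X(k)) for k in range(1, 41)}
norm2 = inner(phi, phi)

def truncation_error(N):
    # L2 error of the N-term expansion, by Parseval's identity
    return math.sqrt(max(norm2 - sum(coeffs[k] ** 2 for k in range(1, N + 1)), 0.0))
```

The rapid decay of the truncation error with N is what justifies replacing the infinite system (16) by a finite one in practice.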
System (16) can also be resolved heuristically. Namely, based on some considerations, say, the physical treatment of the problem, controller capability, and so forth, the control function is chosen to have a specific form containing a set of free parameters. Then, this function is substituted into system (16) or its truncated version, and a discrete system of, in general, nonlinear algebraic equations is derived with respect to the set of free parameters.
Consider some specific solutions. Let the infinite system (16) be truncated at some finite N ∈ N. Then, the control function can be sought, for instance, in the form of a trigonometric polynomial

u(t) = Σ_{m=1}^M C_m sin(ω_m t + ψ_m), (17)

where M ∈ N and C_m, ω_m, and ψ_m are free parameters determined appropriately to satisfy (16) exactly. Substituting (17) into the truncated part of system (16), a system of nonlinear equations (18) with respect to the vector of free parameters (C_1, ..., C_M, ω_1, ..., ω_M, ψ_1, ..., ψ_M)^T is obtained; here the superscript T means transposition.

Consider also the piecewise-constant regime

u(t) = Σ_{m=1}^M u_m [H(t − t_m) − H(t − t_{m+1})], (20)

corresponding to a finite jump |u_{m+1} − u_m| from the constant regime u_m to u_{m+1} when t switches from t_m to t_{m+1}. Here u_m and t_m are free parameters. The instants t_m satisfy the inequality-type constraints

0 ≤ t_1 < t_2 < ... < t_{M+1} ≤ T,

foreclosing the overlap of different regimes. Here H is the Heaviside function: H(t) = 1 for t ≥ 0 and H(t) = 0 for t < 0. In this case, (16) is reduced to the algebraic system (23), obtained by carrying out the integration in (16) over the intervals [t_m, t_{m+1}] with u = u_m.

Further, consider the impulsive regime

u(t) = Σ_{m=1}^M u_m δ(t − t_m), (25)

formally describing instantaneous impacts u_m at t = t_m. Here δ is the Dirac function. In this case also the free parameters are u_m and t_m. Substituting (25) into (16) formally leads to the discrete system

Σ_{m=1}^M f_k(u_m, t_m) = φ_k, k = 1, ..., N. (27)

Moreover, here also the instants t_m satisfy 0 ≤ t_1 < t_2 < ... < t_M ≤ T.

In many applications the piecewise-continuous control

u(t) = Σ_{m=1}^M u_m(t) χ_{[t_m, t_{m+1}]}(t)

is considered, where χ_{[t_0, t_1]} is the characteristic function of the interval [t_0, t_1]. This regime corresponds to switching between the time-dependent regimes u_m. Any of the regimes (17), (20), and (25) can be used as u_m. In this case the resolving equation is reduced accordingly, with the integration in (16) carried out piecewise.

In the case of boundary control, f_T is linear in u; that is,

f_T(u, x, t) = g_0(x, t) + g_1(x, t) u(t),

for given functions g_0 and g_1.
In addition,  satisfies the boundary conditions reduced from the compatibility of given boundary and initial data.
Then, (18), (23), and (27) are reduced to linear systems for the amplitudes. Indeed, in the case of, for example, (25), the resulting system for the free parameters is linear in u_m, with coefficients expressed through the values g_{1k}(t_m), where g_{0k} and g_{1k} are the expansion coefficients of g_0 and g_1 into series of {X_k}_{k=1}^∞, respectively. At this, the compatibility conditions (35) imply additional restrictions on u_m: when g_1 = 0 and t_M = T, the terminal impulse amplitude is restricted accordingly; otherwise, (25) is applied in the cases when the compatibility data vanish. Eventually, numerical methods of different orders can be involved to approximate the nonlinear systems derived above for the free parameters; see, for instance, [32, 33].
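In the linear case f_k(u, t) = u K_k(t), the impulsive regime with prescribed impulse instants turns the truncated system into a linear one for the amplitudes. A small sketch with hypothetical kernels and targets:

```python
import math

# truncated resolving system sum_m u_m * K_k(t_m) = phi_k, k = 1, 2,
# for the impulsive control u(t) = u1 * delta(t - t1) + u2 * delta(t - t2)
T = 1.0
K = [lambda t: math.sin(math.pi * t),
     lambda t: math.sin(2.0 * math.pi * t)]     # illustrative kernels K_k
phi = [0.3, -0.1]                               # illustrative targets phi_k
t_imp = [0.25, 0.6]                             # prescribed 0 <= t1 < t2 <= T

# 2x2 linear solve (Cramer's rule) for the impulse amplitudes
a11, a12 = K[0](t_imp[0]), K[0](t_imp[1])
a21, a22 = K[1](t_imp[0]), K[1](t_imp[1])
det = a11 * a22 - a12 * a21
u1 = (phi[0] * a22 - a12 * phi[1]) / det
u2 = (a11 * phi[1] - a21 * phi[0]) / det
```

When the instants t_m are also free, the same system becomes nonlinear in (u_m, t_m) and calls for the nonlinear programming techniques mentioned in the text.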
Remark 1. In general, the N-dimensional system (18) contains 3M unknowns; therefore it might be unsolvable. Nevertheless, if some of the free parameters, say ω_m and/or ψ_m, m = 1, ..., M, are prescribed, then (18) may become solvable. Moreover, in the case of boundary control, (18) is reduced to a linear system for C_m. Therefore, if ω_m and ψ_m are prescribed and M = N, the C_m are found straightforwardly. Otherwise, that is, when M ≠ N, techniques of nonlinear programming [34] must be involved for finding specific solutions.
Remark 2. Any solution derived from the truncated version of the infinite-dimensional system (16) is approximate, which means that (9), in general, is satisfied approximately.

Approximate Controllability.
Assume that the system under study is linear in control. Then, its approximate controllability is reduced to the evaluation of an integral constraint of the form [10]

|∫_0^T K_T(τ) u(τ) dτ − φ_T| ≤ ε, (39)

for φ_T ≥ 0, a bounded kernel K_T ≥ 0, and u ∈ U. In the case of boundary control, the control function is additionally constrained by the discrete compatibility constraints (35).
It is easy to see that one of the obvious solutions to (39) is the constant regime u(t) ≡ u_0, reducing (39) to an algebraic inequality for u_0. When the control is carried out by the boundary data, this regime is applicable only in the case when the compatibility values coincide, i.e., u(0) = u(T) = u_0 ≠ 0. In vibration control problems, time-harmonic controls of the form

u(t) = u_0 sin(ω t + ψ) (42)

are usually considered, leading (39) to a nonlinear constraint with respect to the free parameters u_0, ω, and ψ. In the case of boundary control, (35) provides an additional constraint on these parameters. A particular solution of (39), (35) can be constructed using the quasi-polynomial control

u(t) = Σ_{m=1}^M u_m t^{α_m} exp(β_m t), (45)

where u_m and α_m, β_m ∈ R_+ are free parameters satisfying (39) (note that, in a proper treatment, the exponents may also take negative values). Assume, for simplicity, that sign u ≥ 0. Then, (39) provides a nonlinear constraint (46) for the determination of u_m, α_m, and β_m. Evidently, when α_m and β_m are prescribed, (46) becomes a linear constraint for u_m.
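For the time-harmonic regime (42), constraint (39) is linear in the amplitude u_0 once ω and ψ are fixed, which the following sketch exploits; the kernel, horizon, and target value are illustrative assumptions.

```python
import math

T, phiT = 1.0, 0.4                   # horizon and scalar target (assumed)
K = lambda t: math.exp(-(T - t))     # hypothetical bounded kernel K_T >= 0

def integral(f, n=2000):
    # trapezoidal rule on [0, T]
    h = T / n
    s = 0.5 * (f(0.0) + f(T))
    for i in range(1, n):
        s += f(i * h)
    return s * h

# u(t) = u0 sin(w t + p): for fixed (w, p) with I = int K sin(w t + p) dt != 0,
# the amplitude u0 = phiT / I satisfies the reduced constraint exactly
w, p = 2.0, 0.3
I = integral(lambda t: K(t) * math.sin(w * t + p))
u0 = phiT / I
residual = abs(integral(lambda t: K(t) * u0 * math.sin(w * t + p)) - phiT)
```

Fixing the frequency and phase and solving only for the amplitude is exactly the "prescribe some free parameters" device of Remark 1, now applied to the approximate controllability constraint.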
In the case of boundary control, in order to satisfy (35) by the quasi-polynomial regime, a correction term must be added to (45); then (46) is modified accordingly. A fast verification of approximate controllability is provided by the trigonometric regime (50), whose amplitudes u_m, integer exponent n ∈ N, and frequency ω ∈ R_+ are the free parameters to satisfy (39). Assuming that sign u ≥ 0, a nonlinear equation (51) for the determination of the free parameters is derived. In the case of boundary control, the term u_0 above should be added to the control.
Other particular solutions appropriate to the physical treatment of the problem are also possible. In many applied problems it becomes necessary to involve sliding modes [35].
As an example, consider the switching (piecewise-constant) control regime (20), subject to the inequality-type constraints 0 ≤ t_1 < t_2 < ... < t_{M+1} ≤ T. Here u_m, t_m, and M are free parameters. Note that piecewise-continuous regimes with u_m = u_m(t) are also often considered. In such cases, any of the continuous regimes above can be considered as u_m.
Assuming that sign u ≥ 0, (39) can be reduced to a nonlinear equation with respect to the free parameters, obtained by carrying out the integration over the intervals [t_m, t_{m+1}]. Note that in the case of boundary control, additional constraints are derived when g_1 = 0; otherwise, u_0 = u(0) should be added to the right-hand side of (53).
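A minimal sketch of the switching regime for the reduced constraint, assuming a scalar target, an illustrative kernel, and the boundary-compatibility requirement u(T) = 0 enforced by switching off on the last interval:

```python
import math

T, phiT, eps = 1.0, 0.4, 1e-3        # assumed horizon, target, and precision
K = lambda t: math.exp(-(T - t))     # hypothetical kernel

def integral(f, a, b, n=2000):
    # trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# switching regime: u = u1 on [0, t1), u = 0 on [t1, T], so that u(T) = 0;
# with the switch instant t1 fixed, the constraint is linear in u1
t1 = 0.5                             # switch instant, must satisfy 0 < t1 < T
u1 = phiT / integral(K, 0.0, t1)
residual = abs(u1 * integral(K, 0.0, t1) - phiT)
```

With more switching intervals and free instants, the same computation yields the nonlinear system described in the text, with the ordering constraints on t_m enforced during the search.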
Another application of sliding-mode control is the so-called optimal stopping regime, in which the control acts only up to a stopping instant t_s and (39) must be satisfied by an appropriate choice of u_m and t_s. In addition, t_s satisfies the inequality-type constraint 0 < t_s ≤ T. In this case, (39) is reduced to a nonlinear equation with respect to u_m and t_s. Therefore, if u_m is prescribed and u_m = const, then t_s is determined directly from that equation. In the case of boundary control, if t_s < T, only the first condition in (35) can be satisfied by u_m, and necessarily u(T) should be zero; otherwise, both conditions must be satisfied by u_m. The set of reachable terminal states can be significantly extended by complementing U with impulsive actions of the form (25), subject to the inequality-type constraints 0 ≤ t_1 < t_2 < ... < t_M ≤ T. The free parameters are u_m, t_m, and M. If, in addition, all u_m ≥ 0, then (39) is reduced to the nonlinear constraint (63). In the case of boundary control, if g_1 = 0 and t_M = T, (35) is satisfied only under an additional restriction on u_M; otherwise, (63) is applicable if and only if the corresponding compatibility data vanish.
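For a prescribed constant amplitude, the stopping instant in the optimal stopping regime solves a scalar monotone equation, which can be found by bisection; the kernel and data below are illustrative.

```python
import math

T, phiT, u0 = 1.0, 0.3, 1.0          # assumed horizon, target, and amplitude
K = lambda t: math.exp(-(T - t))     # hypothetical kernel

def F(ts, n=2000):
    # F(ts) = u0 * int_0^{ts} K dtau - phiT, increasing in ts since K >= 0
    if ts <= 0.0:
        return -phiT
    h = ts / n
    s = 0.5 * (K(0.0) + K(ts))
    for i in range(1, n):
        s += K(i * h)
    return u0 * s * h - phiT

lo, hi = 0.0, T                      # stopping instant constrained to (0, T]
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if F(mid) < 0.0:
        lo = mid
    else:
        hi = mid
t_stop = 0.5 * (lo + hi)
```

For this kernel the closed form t_s = T + ln(φ_T/u_0 + e^{−T}) is available, which the bisection reproduces; in general the monotonicity of F in t_s is what makes the one-parameter stopping regime so convenient.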
Remark 3. In all the cases above, together with the constraints already listed, the free parameters must also satisfy the inequality-type constraints following from the definition of U (in particular, supp(u) ⊆ [0, T]). This means that, most likely, techniques of nonlinear programming must be involved.
Remark 4. As noted above, the derived systems of nonlinear algebraic equations can be efficiently approximated by existing powerful numerical techniques. In applied problems, the derivation of (9) and (39) remains the most difficult step towards the application of this method.

Comparison with the Moments Problem Approach
One of the well-developed and widely used approaches that allows finding resolving controls explicitly is Krasovskii's moments problem approach. It was initially developed in [36] for the linear finite-dimensional moments problem; see also [2] for more details. Later, the algorithm was extended to nonlinear finite-dimensional systems in [37], related to applications in mobile control problems [38]. Using an integral representation formula for the general solution of the state constraints (equivalent to the Green's function solution) and satisfying the terminal conditions, the explicit determination of the resolving controls is reduced to a moments problem. For concentrated-parameter systems, the resulting moments problem is finite-dimensional, while for distributed-parameter systems it is infinite-dimensional. When (16) is linear in u, its treatment as a problem of moments has several advantages, including the availability of explicit L^p-optimal solutions for 1 ≤ p ≤ ∞ and necessary and sufficient conditions for the existence of the optimal solution. Let us formulate the following theorem, giving the explicit form of the L^p-optimal, 1 ≤ p ≤ ∞, solution of (16) linear in u (see [39] for details). For the nonlinear case see [37].
Here the resolving control and the Lagrange multipliers {λ_k}_{k=1}^N are determined from the two equivalent problems of conditional extremum given in (71). In the special case when p = 1 (p = ∞), the resolving admissible controls are determined as a weak limit of (71), yielding (77). Thus, (71) and (77) are the solutions of (70) minimizing the corresponding L^p norm. It is easy to see that, under some restrictions on the parameters u_m and t_m, the control regime (25) obtained above heuristically coincides with the L^1-optimal solution (77). It is noteworthy that when p = 2 and {X_k} is a set of sine functions or quasi-polynomials, then, under proper constraints on the free parameters, the heuristically determined solutions (17) and (45) also coincide with the corresponding L^2-optimal solutions. Finally, when p = ∞, the solution of the moments problem (70) is represented in the form of a switching regime, which, under proper constraints, coincides with (20).
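In the L² case the optimal solution of a finite moments problem ∫_0^T g_k u dτ = c_k has the classical form u* = Σ_k λ_k g_k, with the Gram matrix of the moment functions determining λ. A two-moment sketch with illustrative data:

```python
import math

T = 1.0
g = [lambda t: 1.0, lambda t: t]     # moment functions (illustrative)
c = [1.0, 0.7]                       # prescribed moments (illustrative)

def integral(f, n=4000):
    # trapezoidal rule on [0, T]
    h = T / n
    s = 0.5 * (f(0.0) + f(T))
    for i in range(1, n):
        s += f(i * h)
    return s * h

# L2-optimal control u* = lam_1 g_1 + lam_2 g_2, where G lam = c
# and G_{kj} = <g_k, g_j> is the Gram matrix of the moment functions
G = [[integral(lambda t, a=a, b=b: a(t) * b(t)) for b in g] for a in g]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
lam = [(c[0] * G[1][1] - G[0][1] * c[1]) / det,
       (G[0][0] * c[1] - G[1][0] * c[0]) / det]
u_star = lambda t: lam[0] * g[0](t) + lam[1] * g[1](t)
moments = [integral(lambda t, gk=gk: gk(t) * u_star(t)) for gk in g]
```

With sine moment functions this construction produces exactly the kind of trigonometric-polynomial control (17) that the heuristic approach postulates, which is the coincidence noted in the text.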
The explicit solution of the general finite-dimensional nonlinear moments problem is not known. It is hard to obtain explicit solutions even for some simple forms of f_k = f_k(u, t) [40]. Therefore, the explicit expressions of controls heuristically determined above can be of direct use to applied mathematicians and engineers in identifying control regimes suitable for their studies of nonlinear control problems.

Examples
In this section we demonstrate how the heuristically determined controls can be used in practical cases.

Beam Subjected to Load with Uncertainty.
Consider an elastic beam of finite length l subjected to a concentrated dynamical load of intensity P. The exact location of the load application, denoted by x_0, is not known; however, it is given that x_0 ∈ (a_0, a_1) ⊂ (0, l) for given a_0 and a_1. Assume that the x = l end of the beam is simply supported, while at x = 0 only the bending moment is fixed and the transverse deflection is controlled. The aim is to find boundary controls providing controllability of the beam in a given finite time T > 0. Suppose that the load vanishes at some t_0, 0 < t_0 < T. Since there exists an uncertainty in the system, it is hardly possible to provide exact controllability in finite time [41]; therefore, the approximate controllability of the beam is studied.
Assume also that the beam is sufficiently thin and the load intensity varies in such a range that the beam undergoes merely infinitesimal strains. In that case, the Euler-Bernoulli assumptions can be invoked for deriving the mathematical model of the beam. Then, limiting the consideration to linear elasticity, the transverse deflection of the beam, denoted here by w, satisfies the fourth-order differential equation

ρA ∂²w/∂t² + EI ∂⁴w/∂x⁴ = P(t) δ(x − x_0),

subject to the boundary conditions stated above: the controlled deflection and a prescribed bending moment at x = 0, and the simple support conditions w = ∂²w/∂x² = 0 at x = l. Here E is Young's modulus, ρ is the density, I is the cross-sectional moment of inertia, and A is the cross-sectional area of the beam. The quantity EI measures the resistance of the beam against bending load and is referred to as the bending stiffness.
At the initial instant t = 0 the state of the beam is known: the initial deflection and velocity distributions are given. Mathematically, the problem is to find such a boundary control function u that provides the inequality R_T ≤ ε at a required finite T with desired precision ε > 0.
The given functions belong to L²[0, l]. As in many applications, it is assumed here that the designated terminal state is zero; i.e., it is required to suppress the vibrations of the beam. The initial and boundary data are supposed to be consistent, and the set of admissible controls is chosen accordingly. For further analysis, it is convenient to use dimensionless quantities for the coordinate, time, deflection, and load intensity, respectively; here c is the speed of elastic wave propagation in the beam, c² = E/ρ. New symbols for those quantities are not introduced, in order to keep the reading convenient. Then, the general solution of (80), (81), and (82) is given by the Green's representation formula [42]. Obviously, w ∈ L²([0, 1] × R_+), so that the residue (83) is well defined.
Evaluating expression (88) at t = T and substituting into the residue (83), by virtue of the triangle and Cauchy-Schwarz inequalities, an upper estimate for the residue is derived in terms of the boundary data and the load. Furthermore, direct integration shows that when x_0 ↘ 0 or x_0 ↗ 1, the corresponding coefficient becomes independent of x_0. It is also obvious that the "worst" influence of x_0 on the estimate occurs when x_0 → 1/2, a consequence which should be expected. However, computations show that when the load amplitude is of order 10^5, the elastic displacements of the beam are of order 10^-5 while the load is active and of order 10^-9 after it vanishes (see Figure 1).
Therefore, the required precision must be at least ε ∼ 10^-10. On the other hand, the residual axial stresses arising in the beam after the load vanishes are larger and can serve as a controllability criterion. Thus, as the residue we consider the quantity R_T evaluating how close the axial stress of the beam at t = T is to a prescribed threshold value σ_0.
For specific analysis, let us return to the dimensional quantities. Consider a beam of length l = 1 m and of square cross-section. In order to reduce the residual stress, the threshold value is set to σ_0 = 200 N/m², and the optimal stopping boundary control regime is chosen to ensure the inequality for (93) with the precision ε = 10^-5. The amplitudes and the stopping instant are free parameters; the stopping instant t_s is constrained by 0 < t_s ≤ T.
Then, inequality (97) holds with ε = 10^-5 for appropriately chosen values of the free parameters. Moreover, Figure 3 shows that, besides ensuring (97), the considered control regime reduces the total displacement of the beam almost by half.

Mathematical Problems in Engineering
Consider now the switching regime (20) with free amplitudes and switching instants. Let the threshold value in this case be σ_0 = 150 N/m². Then appropriately chosen parameter values provide (97) with ε = 10^-5. Note that the total displacement in this case is also reduced almost by half (see Figure 4).

Burgers' Equation.
Consider the nonlinear Burgers' equation

w_t + w w_x = ν w_xx,

where ν is a positive constant, arising in fluid mechanics, nonlinear acoustics, traffic flow, etc. Using the well-known Hopf-Cole transformation

w = −2ν φ_x / φ,

it is reduced to the linear heat equation φ_t = ν φ_xx. The controllability of the Burgers' equation has been studied separately by many authors, e.g., in [43-46]; see also the related references therein.
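The transformation can be checked numerically: starting from a positive solution φ of the heat equation, w = −2ν φ_x/φ should satisfy Burgers' equation. The finite-difference residual below confirms this for an illustrative choice of φ and ν.

```python
import math

nu = 0.1  # illustrative viscosity

def phi(x, t):
    # a positive solution of the heat equation phi_t = nu * phi_xx
    return 1.0 + 0.5 * math.exp(-nu * math.pi ** 2 * t) * math.cos(math.pi * x)

def phi_x(x, t):
    return -0.5 * math.pi * math.exp(-nu * math.pi ** 2 * t) * math.sin(math.pi * x)

def w(x, t):
    # Hopf-Cole transform
    return -2.0 * nu * phi_x(x, t) / phi(x, t)

def burgers_residual(x, t, h=1e-3):
    # central-difference residual of w_t + w w_x - nu w_xx
    w_t = (w(x, t + h) - w(x, t - h)) / (2.0 * h)
    w_x = (w(x + h, t) - w(x - h, t)) / (2.0 * h)
    w_xx = (w(x + h, t) - 2.0 * w(x, t) + w(x - h, t)) / h ** 2
    return w_t + w(x, t) * w_x - nu * w_xx

max_res = max(abs(burgers_residual(0.1 * i, 0.5)) for i in range(1, 10))
```

The residual is limited only by the finite-difference truncation error, reflecting the fact that the transformation is exact.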
Our aim is to determine a distributed control regime for the heat equation providing, at a given instant T, the terminal condition exactly or approximately for the viscous Burgers' flow (102).
Here V is a given nonnegative function, and w_T ∈ L^∞(R) is a given function.
In terms of the residue, the problem is to determine admissible controls u ∈ U ensuring at the given instant T either the equality (108) or the inequality (109) with a given accuracy ε > 0.
From (102) it becomes obvious that as soon as the corresponding equality (110) holds for the transformed (heat) state, then (108) holds as well. Similarly, it is shown that (108) implies (110) as a particular case. Therefore, in the case of exact controllability, (108) and (110) are equivalent. This means that the controls satisfying (110) are resolving for (108). Assume that V(x) = χ_{[−1,1]}(x). Then, on the basis of the Green's function method [42], (110) is equivalent to the equality (112). Therefore, (112) can be expanded into series of orthogonal functions, as was done above in Section 3.1. Consider the initial state determined by a given w_0 ∈ L^∞(R). Let w_0 and w_T be positive. Then, both sides of (112) have exponential decay in x, meaning that we can consider (112) on [−L, L], L > 1. Expanding both sides into Fourier series and taking into account the fact that both functions are even, we obtain a cosine series identity. The resolving system (116) is formed by equating the coefficients of the cosines for corresponding k on both sides. As in Section 3.1, we truncate system (116) at a finite N. In order to derive a consistent system of algebraic equations, consider the control (117), in which u_m are unknown constants that need to be determined. Substituting u into the resolving system, the system of linear algebraic equations (118) is derived.

Remark 6. The unknown coefficients can be found from the linear system (118) efficiently. Nevertheless, because of the truncation, the control (117) ensures only approximate controllability of the Burgers' equation.

Let, in particular, the flux be given by (120), (121). The problem is to find such an admissible control u ∈ U that ensures the required inequality with ε = 10^-4. Obviously, the desired T must satisfy T < T_0, where T_0 is the instant at which the uncontrolled flow itself reaches the required level. Restricting the consideration to L = 1, it is obtained that T_0 = 10.35 (see Figure 5). Involving the distributed control regime (42), it is possible to achieve the approximate null-controllability of the flow for T = 9.2 when u_0 = −1, ω = 2, and ψ = 0 (see Figure 6). Furthermore, it is evident from Figure 7 that when u_0 = 1, ω = π, and ψ = −π/2, the flow is approximately null-controllable at T = 7.15.

Conclusions
A systematic technique is proposed for the heuristic derivation of resolving controls for the exact and approximate controllability analysis of systems with nonlinear state constraints.
Representing the solution of the state constraints in terms of Green's function, the exact and approximate controllability conditions are reduced to nonlinear integral-type constraints. The exact solution of these constraints is highly complicated, so numerical schemes are usually applied. The proposed technique allows representing some of the resolving controls explicitly and provides nonlinear algebraic constraints on the unknown coefficients. The hierarchy of the heuristic controls includes (i) polynomial, (ii) time-harmonic, (iii) switching or piecewise-continuous, (iv) optimal stopping, and

Figure 3: Controlled total displacement (a) and axial stress of the beam at t = T: optimal stopping control.
Now let us consider the case of approximate controllability. Obviously, the equivalence established above for exact controllability does not occur here; therefore, we need to verify (109) directly. Consider a flow governed by (102) in a thin infinite layer. Assume that the flow source is located at the x = −1 section of the layer and generates a harmonic flux

q(t) = [H(t − 0) − H(t − 1)] exp(it), (120)

which vanishes when t > 1. Therefore, Burgers' equation must be complemented by the corresponding condition.

Figure 5: Uncontrolled flow in the layer at t = 10.35.
(v) impulsive regimes. The corresponding constraints are derived straightforwardly. The derived regimes show a good correspondence with the L^p-optimal solutions, p = 1, 2, ∞, of the linear moments problem derived rigorously earlier. The technique is checked on two specific examples: a finite elastic beam bent under an external force with uncertainty and a viscous flow governed by Burgers' nonlinear equation. The technique is efficiently applied in both cases. Numerical simulations confirm the theoretical predictions. New studies on the practical implementation of the technique in other problems of mechanical engineering have been initiated. In forthcoming papers we plan to report some possibilities for determining resolving controls extremizing cost functionals that have specific importance in the context of the considered problem.