Weak Subdifferential in Nonsmooth Analysis and Optimization

Some properties of the weak subdifferential are considered in this paper. Using the definition and properties of the weak subdifferential described in the papers [Azimov and Gasimov, 1999; Kasimbeyli and Mammadov, 2009; Kasimbeyli and Inceoglu, 2010], the author proves some theorems connecting the weak subdifferential with nonsmooth and nonconvex analysis. A necessary optimality condition is also obtained in this paper by means of the weak subdifferential.


Introduction
Nonsmooth analysis had its origins in the early 1970s, when control theorists and nonlinear programmers attempted to derive necessary optimality conditions for problems with nonsmooth data, or with nonsmooth functions such as convex functions, max-type functions, and the pointwise maximum of several smooth functions, which arise even in many problems with smooth data.
For this reason, it became necessary to extend the classical gradient of smooth functions to nonsmooth functions.
The first such canonical construction was the generalized gradient introduced by Clarke in his work [1]. He applied this generalized gradient systematically to a variety of nonsmooth problems. The nonconvex basic (or limiting) normal cone to closed sets and the corresponding subdifferential of lower semicontinuous extended-real-valued functions were introduced by Mordukhovich at the beginning of 1975; the corresponding subdifferential is called the Mordukhovich subdifferential. The initial motivation came from the intention to derive necessary optimality conditions for optimal control problems with endpoint geometric constraints by passing to the limit from free-endpoint control problems, which are much easier to handle. This was published in [2]. Let us also remark that Clarke's normal cone is the closed convex closure of the Mordukhovich normal cone [2].
Multifunctions (set-valued maps) naturally appear in various areas of nonlinear analysis, optimization, control theory, and mathematical economics. Aubin and Frankowska's book [3] and Mordukhovich's book provide excellent introductions to the theory of multifunctions. Coderivatives are convenient derivative-like objects for multifunctions; they were introduced by Mordukhovich [2], motivated by applications to optimal control (see [4] for more discussion of the motivations and of the relationships among coderivatives and other derivative-like objects for multifunctions). They are defined via normal cones to the graphs of the multifunctions. The approximate and geometric subdifferentials were introduced by Ioffe in [5]; these subdifferentials are infinite-dimensional extensions of the Mordukhovich subdifferential, which may differ from it only in non-Asplund spaces. Michel and Penot's derivatives are discussed in [6]. Rockafellar and Wets [7] provide a comprehensive overview of the field, and more information about subdifferentials and coderivatives in nonsmooth analysis can also be found in [8]. The notion of the weak subdifferential, which is a generalization of the classical subdifferential, was introduced in [9].
In this paper, we investigate the relationship between the Fréchet lower subdifferential and the weak subdifferential, and we prove some theorems related to the weak subdifferential.
The paper is organized as follows. The definitions of the weak subdifferential, of strict differentiability, and of the Fréchet lower subdifferential are provided in the following section. In Section 2, the principal theorems on the properties of the weak subdifferential are also proved. In the third section, the necessary optimality conditions are proved. The final section presents some conclusions.

Main Results
To start, we provide some definitions which will be useful for some parts of the current paper.
Let (X, ‖·‖_X) be a real normed space, and let X* be the topological dual of X.
Definition 2.1 (strictly differentiable functions). F is called strictly differentiable at x̄ with a strict derivative ΔF(x̄) if

lim_{x→x̄, u→x̄} [F(x) − F(u) − ⟨ΔF(x̄), x − u⟩] / ‖x − u‖ = 0.

Definition 2.2 (weak subdifferential). Let F : X → R be a single-valued function, and let x̄ ∈ X be a given point at which F is finite. A pair (x*, c) ∈ X* × R₊ is called a weak subgradient of F at x̄ if

F(x) − F(x̄) ≥ ⟨x*, x − x̄⟩ − c‖x − x̄‖ for all x ∈ X,    (2.2)

where R₊ is defined as the set of nonnegative real numbers.
The reader can find more information about strict differentiability and the weak subdifferential, respectively, in [10, page 19] and [11, 12].
The set

∂_w F(x̄) = {(x*, c) ∈ X* × R₊ : F(x) − F(x̄) ≥ ⟨x*, x − x̄⟩ − c‖x − x̄‖ for all x ∈ X}

is called the weak subdifferential of F at the point x̄ ∈ X.
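As a numerical sketch of Definition 2.2 (not part of the original argument; the function F(x) = −|x| and the candidate pairs below are illustrative choices), the weak-subgradient inequality can be tested on a one-dimensional grid:

```python
# Grid check of the weak-subgradient inequality from Definition 2.2:
#   F(x) - F(x_bar) >= x_star*(x - x_bar) - c*|x - x_bar|   for all x.
# F(x) = -|x| and the candidate pairs are illustrative, not from the paper.

def is_weak_subgradient(F, x_bar, x_star, c, grid, tol=1e-12):
    return all(
        F(x) - F(x_bar) >= x_star * (x - x_bar) - c * abs(x - x_bar) - tol
        for x in grid
    )

F = lambda x: -abs(x)
grid = [i / 100.0 for i in range(-500, 501)]

holds = is_weak_subgradient(F, 0.0, 0.0, 1.0, grid)   # pair (x*, c) = (0, 1)
fails = is_weak_subgradient(F, 0.0, 0.0, 0.5, grid)   # pair (0, 1/2): c too small
```

The pair (0, 1) satisfies the inequality with equality, while (0, 1/2) fails away from the origin; one can check by hand that for this F the weak subdifferential at 0 consists exactly of the pairs (x*, c) with c ≥ 1 + |x*|.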
It is noted by the authors in [11, Remark 2.3, page 844] that when F is subdifferentiable at x̄ in the classical sense (for instance, for convex functions), then F is also weakly subdifferentiable at x̄; that is, if x* ∈ ∂F(x̄), then by definition (x*, c) ∈ ∂_w F(x̄) for every c ≥ 0. It follows from the definition of the weak subdifferential that the pair (x*, c) ∈ ∂_w F(x̄) defines the function

g(x) = F(x̄) + ⟨x*, x − x̄⟩ − c‖x − x̄‖

such that g(x) ≤ F(x) for all x ∈ X and g(x̄) = F(x̄). But the authors do not note the boundedness of the gradient of the function g, which is useful in estimating subgradients when searching for extremum points of nonsmooth functions. The following argument shows that the gradient of the function g is also bounded. Let us prove this.
In fact, if we evaluate the gradient of the function g at a point x ≠ x̄ where the norm is differentiable, we get ∇g(x) = x* − c∇(‖x − x̄‖). Then, if we calculate the norm of the gradient ∇g(x), using the fact that the gradient of the norm function has norm at most one, we get ‖∇g(x)‖ ≤ ‖x*‖ + c. Let us note that the Fréchet subdifferential may be empty for some functions.
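The bound ‖∇g(x)‖ ≤ ‖x*‖ + c can be checked numerically in the one-dimensional case (a sketch with illustrative values of x̄, x*, c, and F(x̄); not from the paper):

```python
# Central-difference check (in R, illustrative values) that the minorant
#   g(x) = F(x_bar) + x_star*(x - x_bar) - c*|x - x_bar|
# has |g'(x)| <= |x_star| + c away from the kink at x_bar.
x_bar, x_star, c, F_at_xbar = 0.0, 2.0, 3.0, 1.0

def g(x):
    return F_at_xbar + x_star * (x - x_bar) - c * abs(x - x_bar)

def dg(x, h=1e-6):
    # central difference; accurate here since g is piecewise linear away from x_bar
    return (g(x + h) - g(x - h)) / (2.0 * h)

sample = [i / 10.0 for i in range(1, 50)] + [-i / 10.0 for i in range(1, 50)]
max_slope = max(abs(dg(x)) for x in sample)
bound = abs(x_star) + c          # equals 5.0 for these values
```

For these values the slope on the left branch is x* + c = 5 and on the right branch x* − c = −1, so the maximum slope attains the bound exactly.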
Example 2.5. Take F : R → R, F(x) = −|x|, x ∈ R. An easy calculation shows that the Fréchet subdifferential of this function at the point zero is empty, that is, ∂F(0) = ∅.
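A small numerical sketch (not in the original) supports Example 2.5: for any candidate x*, the difference quotient in the definition of the Fréchet lower subdifferential stays bounded away from zero from below.

```python
# Numerical evidence for Example 2.5: for F(x) = -|x| the Frechet lower
# subdifferential at 0 is empty.  For any candidate x*, the quotient
#   (F(x) - F(0) - x_star*x)/|x|  equals  -1 - x_star*sign(x),
# so its liminf as x -> 0 is -1 - |x_star| < 0, violating the definition.

def quotient(x_star, x):
    F = lambda t: -abs(t)
    return (F(x) - F(0.0) - x_star * x) / abs(x)

def liminf_estimate(x_star):
    # sample both sides of 0 at shrinking scales
    xs = [s * 10.0 ** (-k) for k in range(1, 10) for s in (1.0, -1.0)]
    return min(quotient(x_star, x) for x in xs)

# best (largest) liminf over several candidates is still negative
worst = max(liminf_estimate(x_star) for x_star in [-2.0, -1.0, 0.0, 1.0, 2.0])
```

The largest value, attained at x* = 0, is −1, so no candidate satisfies the liminf ≥ 0 requirement.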
Note that there is also a symmetric counterpart of the Fréchet lower subdifferential, namely the Fréchet upper subdifferential, described as

∂⁺F(x̄) = {x* ∈ X* : limsup_{x→x̄} [F(x) − F(x̄) − ⟨x*, x − x̄⟩] / ‖x − x̄‖ ≤ 0}.

The Fréchet lower subdifferential and the Fréchet upper subdifferential of F at x̄ are both nonempty if and only if F is Fréchet differentiable at x̄. For more information about the Fréchet subdifferentials (upper and lower), the reader can consult [10, page 90] and [13], and their applications to necessary optimality conditions in [14, Chapters 5 and 6].
Theorem 2.6. If x* is a Fréchet lower subgradient (Definition 2.3) of the function F : X → R at the point x̄, then the pair (x*, c) is a weak subgradient of F at x̄ for any nonnegative c ∈ R.
Proof. Let x* be a Fréchet lower subgradient of the function F : X → R at the point x̄, that is, x* ∈ ∂F(x̄). Then, by the definition of the Fréchet lower subdifferential (Definition 2.3),

liminf_{x→x̄} [F(x) − F(x̄) − ⟨x*, x − x̄⟩] / ‖x − x̄‖ ≥ 0,

which reduces easily to the inequality

F(x) − F(x̄) ≥ ⟨x*, x − x̄⟩ + o(‖x − x̄‖).

It is easy to show that the right-hand term o(‖x − x̄‖) is not less than −c‖x − x̄‖ for any nonnegative c when x is sufficiently close to x̄. It then follows that

F(x) − F(x̄) ≥ ⟨x*, x − x̄⟩ − c‖x − x̄‖.

By the definition of the weak subdifferential (Definition 2.2), we can say that (x*, c) is a weak subgradient of F at the point x̄.
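Since the o(·) estimate in the proof is local, the resulting weak-subgradient inequality is guaranteed only on a neighborhood of x̄. A numerical sketch (illustrative function and radius, not from the paper) shows both the local validity and the failure far from x̄:

```python
# Local check of Theorem 2.6 for the smooth nonconvex function F(x) = -x**2
# at x_bar = 0, whose Frechet lower subgradient there is x* = F'(0) = 0.
# The weak-subgradient inequality is verified on the neighborhood |x| <= c
# (an illustrative radius) and fails outside it.

def weak_ineq_on(F, x_bar, x_star, c, grid, tol=1e-12):
    return all(
        F(x) - F(x_bar) >= x_star * (x - x_bar) - c * abs(x - x_bar) - tol
        for x in grid
    )

F = lambda x: -x * x
c = 0.5
near = [i * c / 100.0 for i in range(-100, 101)]   # points with |x| <= c
far = [2.0 * c]                                    # a point outside the neighborhood

local_ok = weak_ineq_on(F, 0.0, 0.0, c, near)
global_ok = weak_ineq_on(F, 0.0, 0.0, c, near + far)
```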
Theorem 2.7. Let F be finite at x̄, and let h ∈ C¹ (a continuously differentiable function) in a neighborhood of x̄. If (x*, c) ∈ ∂_w(F + h)(x̄), then (x* − ∇h(x̄), 2c) ∈ ∂_w F(x̄); that is, (x* − ∇h(x̄), 2c) is a weak subgradient of the function F at the point x̄.
Proof. Inequality (2.2) applied to the function −h implies the existence of a constant c such that

−h(x) + h(x̄) ≥ ⟨−∇h(x̄), x − x̄⟩ − c‖x − x̄‖    (2.9)

for all x near x̄; indeed, it is easy to check that, for differentiable functions, the weak subgradient component coincides with the derivative, that is, x* = ∇h(x̄). Since (x*, c) ∈ ∂_w(F + h)(x̄), applying inequality (2.2) to the function F + h we obtain

F(x) + h(x) − F(x̄) − h(x̄) ≥ ⟨x*, x − x̄⟩ − c‖x − x̄‖    (2.10)

for all x ∈ X near x̄.

Adding inequalities (2.9) and (2.10) side by side, we arrive at

F(x) − F(x̄) ≥ ⟨x* − ∇h(x̄), x − x̄⟩ − 2c‖x − x̄‖.    (2.11)

The last inequality means that (x* − ∇h(x̄), 2c) ∈ ∂_w F(x̄); that is, the pair (x* − ∇h(x̄), 2c) is a weak subgradient of the function F at the point x̄.
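A one-dimensional sketch of Theorem 2.7 (the functions and the pair below are illustrative choices, not from the paper):

```python
# Check of Theorem 2.7 with F(x) = -|x|, h(x) = x**2, x_bar = 0.
# Here (x*, c) = (0, 1) is a weak subgradient of F + h at 0 and h'(0) = 0,
# so the theorem predicts that (x* - h'(0), 2c) = (0, 2) belongs to the
# weak subdifferential of F at 0.

def weak_ineq(G, x_bar, x_star, c, grid, tol=1e-12):
    return all(
        G(x) - G(x_bar) >= x_star * (x - x_bar) - c * abs(x - x_bar) - tol
        for x in grid
    )

F = lambda x: -abs(x)
h = lambda x: x * x
grid = [i / 50.0 for i in range(-200, 201)]

premise = weak_ineq(lambda x: F(x) + h(x), 0.0, 0.0, 1.0, grid)  # (0,1) for F+h
conclusion = weak_ineq(F, 0.0, 0.0 - 0.0, 2.0 * 1.0, grid)       # (0,2) for F
```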
Theorem 2.8. Let F be finite at x̄, and let (x*, c) be a weak subgradient of F at x̄ provided that ‖x*‖ ≥ c, and let one take any real number l satisfying c ≤ l ≤ ‖x*‖. Then, for the point x̃ = x̄ + x*/l, the inequality F(x̃) ≥ F(x̄) holds.
Proof. By using the definition of the weak subdifferential (Definition 2.2) with x = x̃, we can write

F(x̃) − F(x̄) ≥ ⟨x*, x̃ − x̄⟩ − c‖x̃ − x̄‖.    (2.12)

From the relation x̃ = x̄ + x*/l it is easy to see that x* = l(x̃ − x̄). If, on the right-hand side of the last inequality, we substitute x* by l(x̃ − x̄), then we get

F(x̃) − F(x̄) ≥ l‖x̃ − x̄‖² − c‖x̃ − x̄‖ = ‖x̃ − x̄‖ (l‖x̃ − x̄‖ − c).    (2.13)

Since c ≤ l ≤ ‖x*‖ and x̃ = x̄ + x*/l, we have ‖x̃ − x̄‖ = ‖x*‖/l ≥ 1. If we use the estimate ‖x̃ − x̄‖ ≥ 1 in inequality (2.13), we obtain l‖x̃ − x̄‖ − c ≥ l − c ≥ 0, and hence F(x̃) ≥ F(x̄).
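A one-dimensional illustration of Theorem 2.8 (all values are illustrative choices, not from the paper):

```python
# Illustration of Theorem 2.8 with F(x) = |x|, x_bar = 1, weak subgradient
# (x*, c) = (1, 0.5), and l = 0.75 chosen in [c, |x*|].  With
# x_tilde = x_bar + x*/l, the theorem predicts F(x_tilde) >= F(x_bar).

F = lambda x: abs(x)
x_bar, x_star, c, l = 1.0, 1.0, 0.5, 0.75

# first confirm (x*, c) is indeed a weak subgradient of F at x_bar
grid = [i / 50.0 for i in range(-200, 201)]
is_wsg = all(
    F(x) - F(x_bar) >= x_star * (x - x_bar) - c * abs(x - x_bar) - 1e-12
    for x in grid
)

x_tilde = x_bar + x_star / l
norm_gap = abs(x_tilde - x_bar)      # equals |x*|/l >= 1, as in the proof
```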
Theorem 2.9. If F is strictly differentiable at x̄ with a derivative ΔF(x̄), then, for any (x*, c) ∈ ∂_w F(u), there exists δ > 0 such that x* ∈ ΔF(x̄) + 2cB*, where u ∈ B_δ(x̄) (the ball of radius δ centered at x̄) and B* is the unit ball of X*, provided that ∂_w F(u) ≠ ∅ for any u ∈ B_δ(x̄).
Proof. It follows from the definition of strict differentiability (Definition 2.1) that for any ε > 0 there exists δ > 0 such that

|F(x) − F(u) − ⟨ΔF(x̄), x − u⟩| ≤ ε‖x − u‖ for all x, u ∈ B_δ(x̄).    (2.15)

Let us take u ∈ B_δ(x̄) and assume that (x*, c) ∈ ∂_w F(u). Then it follows from the definition of the weak subdifferential that

F(x) − F(u) ≥ ⟨x*, x − u⟩ − c‖x − u‖ for all x ∈ X.    (2.16)

Let us substitute ε by c in inequality (2.15), that is, put ε = c in (2.15). Then (2.15) can be reformulated as follows.

Journal of Applied Mathematics
For c > 0, there exists δ > 0 such that

|F(x) − F(u) − ⟨ΔF(x̄), x − u⟩| ≤ c‖x − u‖ for all x, u ∈ B_δ(x̄).    (2.17)

Estimating the absolute value in (2.17) from above, we get that for such c there exists δ > 0 which satisfies the inequality

F(x) − F(u) − ⟨ΔF(x̄), x − u⟩ ≤ c‖x − u‖.    (2.18)

Let us multiply both sides of inequality (2.16) by minus one. Then we obtain

⟨x*, x − u⟩ − F(x) + F(u) ≤ c‖x − u‖.    (2.19)

Adding up inequalities (2.18) and (2.19) side by side, we get the following estimate:

⟨x* − ΔF(x̄), x − u⟩ ≤ 2c‖x − u‖.    (2.20)

Dividing both sides of relation (2.20) by ‖x − u‖ (for x ≠ u), we get

⟨x* − ΔF(x̄), (x − u)/‖x − u‖⟩ ≤ 2c.    (2.21)

If we take the supremum with respect to the variables x and u in the last inequality, then (2.21) reduces to

sup_{x, u ∈ B_δ(x̄), x ≠ u} ⟨x* − ΔF(x̄), (x − u)/‖x − u‖⟩ ≤ 2c.    (2.22)

If we recall the definition of the norm of a functional (operator) [7], then we get ‖x* − ΔF(x̄)‖ ≤ 2c, which can be reduced to the following form:

x* ∈ ΔF(x̄) + 2cB*.    (2.24)

This completes the proof.
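The strict-differentiability estimate (2.15), the starting point of the proof, can be illustrated numerically (the function F(x) = x², the point x̄ = 1, and the radius δ are illustrative choices, not from the paper):

```python
# Grid estimate of the strict-differentiability bound used in (2.15)-(2.17):
# for F(x) = x**2 at x_bar = 1 (so the strict derivative is 2),
#   |F(x) - F(u) - 2*(x - u)| = |x + u - 2| * |x - u| <= 2*delta*|x - u|
# whenever x, u lie in the ball of radius delta around 1.

F = lambda x: x * x
dF = 2.0                # strict derivative of x**2 at x_bar = 1
x_bar, delta = 1.0, 0.1

pts = [x_bar - delta + i * (2.0 * delta) / 40.0 for i in range(41)]
ratios = [
    abs(F(x) - F(u) - dF * (x - u)) / abs(x - u)
    for x in pts for u in pts if x != u
]
worst_ratio = max(ratios)    # stays below 2*delta = 0.2
```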

Necessary Optimality Conditions via the Weak Subdifferential
In this section, we present the necessary optimality condition for a weakly subdifferentiable function.

Given a function F : X → R finite at the reference point and a nonempty subset S of the normed space X, we consider the following minimization problem:

minimize F(x) subject to x ∈ S ⊂ X.    (3.1)

The following is a well-known optimality condition in nonsmooth convex analysis (see [15, Proposition 1.8.1, page 168]): if F : Rⁿ → R is a convex function, then a vector x̄ minimizes F over a convex set S ⊂ Rⁿ if and only if there exists a subgradient x* ∈ ∂F(x̄) such that

⟨x*, x − x̄⟩ ≥ 0 for all x ∈ S.

But the optimality conditions in [16, Proposition 1.8.1, page 168] are proved for convex functions.
Let us formulate the necessary optimality conditions for problem (3.1) by using the weak subdifferential in the case where the minimized functional is nonconvex. In fact, the weak subdifferential is defined for nonconvex functions, while the classical subgradient does not enable us to find the minimum point in cases where the minimized function is nonconvex [11, 12].
The interested reader can find out more about convex functions and the subdifferential of convex functions in [1, 7].

Proof. Let the function F take its minimum value at the point x̄, and let F be weakly subdifferentiable at x̄, so that, by Definition 2.2, ∂_w F(x̄) ≠ ∅. Since F takes its minimum at x̄ over the set S ⊂ X, we can write

F(x) ≥ F(x̄), ∀x ∈ S.    (3.5)
We can reduce the last inequality to the following form:

F(x) − F(x̄) ≥ ⟨0, x − x̄⟩ − c‖x − x̄‖    (3.6)

for all x ∈ S and any real number c ≥ 0. Comparing the definition of the weak subdifferential (Definition 2.2) with inequality (3.6), we can say that

(0, c) ∈ ∂_w F(x̄).    (3.7)
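A numerical sketch of Theorem 3.1 (the nonconvex function, the feasible set, and the sample values of c are illustrative choices, not from the paper):

```python
# Check of Theorem 3.1 for the nonconvex F(x) = x**4 - x**2 minimized over
# S = [0, 2].  The minimizer is x_bar = 1/sqrt(2); since F(x) >= F(x_bar)
# on S, every pair (0, c) with c >= 0 satisfies the weak-subgradient
# inequality on S, as the theorem asserts.
import math

F = lambda x: x ** 4 - x ** 2
x_bar = 1.0 / math.sqrt(2.0)
S = [i * 2.0 / 400.0 for i in range(401)]          # grid over [0, 2]

holds = all(
    F(x) - F(x_bar) >= 0.0 * (x - x_bar) - c * abs(x - x_bar) - 1e-12
    for x in S
    for c in (0.0, 0.5, 1.0)
)
```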

Conclusion
Comparison of the current status of nonsmooth subdifferential theory with the corresponding smooth theory reveals a glaring lack of a second-order theory. In finite-dimensional spaces a beautiful sum rule for a second-order derivative-like object, close to the fuzzy sum rule, was derived in [17]. There are many other approaches and results in nonsmooth optimization and variational analysis in infinite-dimensional spaces, yet in infinite dimensions the field is little developed. Applications in optimal control, mathematical programming, and other related problems are critical for the healthy further development of nonsmooth analysis theory.
A further research topic is the development of methods for obtaining optimality conditions for nonsmooth optimal control problems by using the weak subdifferential. Open problems, including the existence of solutions, the derivation of necessary conditions in the nonsmooth case, the solution of the HJB (Hamilton-Jacobi-Bellman) equation, and the use of numerical methods, still present considerable challenges.

Theorem 3.1. Let the function F have a minimum at the point x̄ ∈ S in problem (3.1). If the function F is weakly subdifferentiable at x̄, that is, ∂_w F(x̄) ≠ ∅, then the pair (0, c) belongs to ∂_w F(x̄) for any nonnegative real number c.
Definition 2.3 (Fréchet lower subdifferential). The set

∂F(x̄) = {x* ∈ X* : liminf_{x→x̄} [F(x) − F(x̄) − ⟨x*, x − x̄⟩] / ‖x − x̄‖ ≥ 0}

is called the Fréchet lower subdifferential of the function F at x̄; any element x* ∈ ∂F(x̄) is called a Fréchet lower subgradient of F at x̄.

Remark 2.4. In different books, the Fréchet lower subdifferential is given different names by the authors, such as the presubdifferential or the Fréchet subdifferential in [10, volume I, page 90] and the Fréchet lower subdifferential in [7]. More information about the Fréchet upper and lower subdifferentials can be found in [10, volume 1]. Note also that, for x ≠ x̄, the gradient bound established after Remark 2.3 in [11, page 844] adds a useful and interesting property: the gradient of the function g is bounded by the nonnegative real number ‖x*‖ + c.