New Exact Penalty Functions for Nonlinear Constrained Optimization Problems

Introduction
The penalty function has long played an important role in solving constrained optimization problems in fields such as industrial design and management science. It is traditionally constructed to solve nonlinear programs by adding penalty or barrier terms, built from the constraints, to the objective function or to a corresponding Lagrange function. The resulting function can then be optimized by unconstrained or bound-constrained optimization software or by sequential quadratic programming (SQP) techniques. Whatever technique is employed, the penalty function always depends on a small parameter ε or a large parameter σ = ε^{-1}. As ε → 0, the minimizer of the penalty function, such as a barrier function or the quadratic penalty function [1], converges to a minimizer of the original problem. By using an exact penalty function such as the ℓ1 penalty function (see [2-7]), the minimizer of the corresponding penalty problem must be a minimizer of the original problem once ε is sufficiently small. There are also nonsmooth penalty functions for nonsmooth optimization problems, such as the exact penalty function built from the distance function for nonsmooth variational inequality problems in Hilbert spaces [8]. In [9], the convergence of lower-order exact penalization for a constrained scalar set-valued optimization problem is established under sufficient conditions that are easy to verify.
The traditional exact penalty functions [3] are typically nonsmooth. When such a function is used as a merit function to accept a new iterate in an SQP method, it may cause the Maratos effect [10]. On the other hand, a traditional smooth penalty function such as the quadratic penalty function cannot be exact, so one must solve a sequence of minimization subproblems as ε → 0. Ill-conditioning then occurs when the penalty parameter becomes too large or too small, which further complicates computation. In [11, 12], augmented Lagrangian penalty functions with improved exactness have been proposed under strong conditions. In [13], exact penalty functions via the regularized gap function for variational inequalities have also been given. In [14], the authors study the exactness of, and an algorithm for, an objective penalty function for inequality constrained optimization. All these functions enjoy some smoothness, but exploiting that smoothness requires second- or third-order derivative information of the problem functions, which is difficult to estimate in practice. Besides, all of the above kinds of penalty functions (see [11, 15-18] for a summary) may be unbounded below even when the constrained problem is bounded, which can make it difficult to locate a minimizer.
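The sequential nature of the quadratic penalty method described above can be illustrated on a hypothetical toy problem (not taken from the paper): each value of the penalty parameter σ yields only an approximate solution, and the constraint violation shrinks like O(1/σ), which is why σ must be driven to ∞ and ill-conditioning eventually appears.

```python
# Quadratic-penalty sketch on a hypothetical toy problem:
#   minimize f(x) = x1 + x2  subject to  h(x) = x1^2 + x2^2 - 2 = 0,
# whose constrained minimizer is (-1, -1).  The quadratic penalty
#   Q_sigma(x) = f(x) + sigma * h(x)^2
# is smooth but only asymptotically exact.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] + x[1]
h = lambda x: x[0]**2 + x[1]**2 - 2.0

def quad_penalty(x, sigma):
    return f(x) + sigma * h(x)**2

# asymmetric start, to avoid the symmetric saddle of the toy problem
x0 = np.array([0.0, -1.0])
for sigma in [1.0, 10.0, 100.0, 1000.0]:
    res = minimize(quad_penalty, x0, args=(sigma,), method="BFGS")
    x0 = res.x  # warm-start the next, increasingly ill-conditioned subproblem
    print(sigma, res.x, h(res.x))  # violation shrinks roughly like O(1/sigma)
```

Note how the iterates only approach (-1, -1) in the limit; no finite σ recovers the constrained solution exactly.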
Most results in the literature on exact penalization concern conditions under which a solution of the constrained optimization problem is a solution of an unconstrained penalized optimization problem; the reverse property is rarely studied. In [19], the author studies this reverse property. In this paper, two modified simple exact penalty functions are proposed for two kinds of constrained nonlinear programming problems, where the term "simple" means that the penalty function, constructed in the primal variable space, contains only the original information of the objective function and the constraint functions of the constrained optimization problem, and not the information of their differentials or multipliers. This kind of traditional exact penalty function can be expressed in the form F_σ(x) = f(x) + σP(x), where P(⋅) satisfies P(x) = 0 if x is feasible, and P(x) > 0 otherwise. A simple exact penalty function of this kind is the ℓ1 penalty function, and it is known that this penalty function is nonsmooth. Penalty functions without multipliers have been given in [12, 17, 20, 21]. Under mild conditions, these penalty functions have been proved exact and smooth; however, since they include the differentials of the objective function and the constraint functions, they are not simple ones by our definition. In [22], a new exact penalty function is constructed by adding a new finite-dimensional, or even one-dimensional, decision variable to control the penalty terms.
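The exactness of the ℓ1 penalty, and the nonsmoothness that comes with it, can be seen on the same kind of hypothetical toy problem as before (not from the paper): once σ exceeds the Lagrange multiplier norm (here 1/2), a single unconstrained minimization already recovers the constrained solution, but the penalty has a kink on the feasible set, so a derivative-free method is used.

```python
# l1 exact-penalty sketch on a hypothetical toy problem:
#   minimize x1 + x2  subject to  h(x) = x1^2 + x2^2 - 2 = 0,
# constrained minimizer (-1, -1), Lagrange multiplier 1/2.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] + x[1]
h = lambda x: x[0]**2 + x[1]**2 - 2.0
l1_penalty = lambda x, sigma: f(x) + sigma * abs(h(x))

# sigma = 2 > 1/2, so the penalty is exact; Nelder-Mead is used
# because |h(x)| is nonsmooth exactly at the feasible set.
res = minimize(l1_penalty, np.array([-0.5, -1.0]), args=(2.0,),
               method="Nelder-Mead")
print(res.x, h(res.x))  # near (-1, -1), violation near 0, for this one sigma
```

Contrast this with the quadratic penalty: no parameter sequence is needed, at the price of a nonsmooth subproblem.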
Under mild conditions, it is proved that for a sufficiently large penalty parameter σ > 0, every local minimizer (x*, ε*) of the above penalty problem with finite penalty function value F_σ(x*, ε*) has the form ε* = 0, where x* is a local minimizer of the original problem.
Inspired by this idea, in this paper, by augmenting the dimension of the program with one extra variable, we propose a simple exact penalty function for the equality constrained mathematical program and a simple exact barrier-penalty function for the inequality constrained mathematical program, respectively. Our new penalty function for the equality constrained program differs from the one in [22]: there, the variable ε is controlled by a function, say w, with the properties w(0) = 0 and w'(0) ≥ c1 > 0, which are not needed for our function. Moreover, in [22], to construct the penalty function for the inequality constrained mathematical program, the original optimization problem must first be converted to an equality constrained problem. For the inequality constrained mathematical program, we propose a new simple exact log-type barrier-penalty function, which differs from the classical log-barrier function and has a broader feasible region.

A Modified Simple Exact Penalty Function for Equality Constrained Optimization Problems
We are now ready to propose a simple exact penalty function for equality constrained mathematical programs.
We consider the following problem:

min f(x)  s.t.  g(x) = 0,  x ∈ Ω,  (2)

where Ω is a bounded open set in the n-dimensional Euclidean space R^n, and f: Ω → R and g: Ω → R^m are continuously differentiable on Ω. We assume that f is bounded below on Ω.
We then consider the new penalty function f_σ(x, ε) defined in (3), where ε is a new one-dimensional variable; Δ(x, ε) = ‖g(x) − ε^q a‖² is the constraint violation measure; p > q ≥ r ≥ 2 are three integers, with q and r both even; and σ > 0 is a penalty parameter. In particular, we can set p = 3 and q = r = 2. Here a ∈ R^m is a preset vector; for example, we can set a = (1, 1, . . ., 1). Compared with [22, 23], here we get rid of the restriction that a must be bounded and positive.
Based on the function (3), we establish the corresponding penalty problem (P_σ). Let ∇_{(x,ε)} f_σ(x, ε) denote the gradient of f_σ(x, ε) with respect to (x, ε); then it takes the form given in (5). We next consider the smoothness of the penalty function f_σ: for ε ≠ 0, the partial derivatives exist and are continuous, so f_σ(x, ε) is continuously differentiable on the set Ω × (R \ {0}).
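The idea of augmenting the program with a single variable ε can be sketched numerically. NOTE: the function below is not the paper's function (3), whose exact form involves Δ(x, ε) and the even powers q, r; it is a simplified variant in the spirit of [22], F_σ(x, ε) = f(x) + ‖g(x)‖²/ε + σε for ε > 0, on a hypothetical toy problem. Minimizing over ε alone gives ε = ‖g(x)‖/√σ, so the reduced function f(x) + 2√σ‖g(x)‖ is an exact ℓ2 penalty, yet F_σ itself is smooth wherever ε > 0.

```python
# Augmented-variable penalty sketch (assumed simplified form, not the
# paper's function (3)) on a hypothetical toy problem:
#   minimize x1 + x2  subject to  g(x) = x1^2 + x2^2 - 2 = 0,
# constrained minimizer (-1, -1).
import numpy as np
from scipy.optimize import minimize

def F(z, sigma=1.0):
    x, eps = z[:2], z[2]
    g = x[0]**2 + x[1]**2 - 2.0
    return x[0] + x[1] + g**2 / eps + sigma * eps

# keep eps strictly positive via a bound; at the solution eps is
# driven down to that bound while x becomes (nearly) feasible
res = minimize(F, x0=np.array([0.0, -1.0, 1.0]), method="L-BFGS-B",
               bounds=[(None, None), (None, None), (1e-4, None)])
print(res.x)  # x near (-1, -1), eps pushed toward its small lower bound
```

This matches the qualitative exactness statement above: a single smooth minimization in (x, ε), with ε* ending up at (essentially) zero.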
We now discuss the exactness of the penalty function f_σ(x, ε).
Proof. Since for each k, (x_k, ε_k) is a local minimizer of (P_{σ_k}) with finite f_{σ_k}(x_k, ε_k), and (x_k, ε_k) ∈ Ω × R, we have the inequality (9). Equation (9) is equivalent to (10). Let k → ∞; from the assumption we know x_k → x* ∈ Ω and ε_k → ε*. If ε* ≠ 0, then in the above equality the first and second terms on the left side remain finite while the third term tends to +∞. This is impossible, so ε* = 0.
where the set appearing above denotes the set of local minimizers of problem (2).
It is shown by Theorems 1 and 2 that, under a constraint qualification condition, each local minimizer of the penalty problem corresponds to a local minimizer of the original problem; thus our penalty function for the equality constrained mathematical program enjoys exactness.

A New Simple Exact Barrier-Penalty Function for Inequality Constrained Optimization Problems
We now construct a class of simple smooth exact penalty functions for the inequality constrained optimization problem

min f(x)  s.t.  x ∈ F,  (23)

where F = {x ∈ R^n | g_i(x) ≤ 0, i = 1, . . ., m}, and f, g_i: R^n → R are continuously differentiable functions. Throughout this section, we assume that F is a nonempty and bounded set. In [22], the authors transform the inequality constrained problem into an equality constrained optimization problem by adding parameters to control the constraints. In this section, we give a new smooth and exact barrier-penalty function.
For problem (23), the classical barrier function is

B(x; μ) = f(x) − μ Σ_{i=1}^m ln(−g_i(x)),

where μ > 0 is a parameter. The corresponding problem is min_{x ∈ R^n} B(x; μ).
Because B(x; μ) erects a barrier wall at the boundary points satisfying g_i(x) = 0, the above problem is equivalent to the problem min_{x ∈ F⁰} B(x; μ), so the operation set is the interior F⁰ := {x ∈ R^n | g_i(x) < 0, i = 1, . . ., m} of F. This implies that an interior point method needs a strictly interior point as its starting point.
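The behavior of the classical log-barrier can be made concrete on a hypothetical one-dimensional example (not from the paper): minimize f(x) = x subject to g(x) = 1 − x ≤ 0, with solution x* = 1. The barrier B(x; μ) = x − μ ln(x − 1) blows up as x → 1⁺, so every minimizer is strictly interior, and solving 1 − μ/(x − 1) = 0 gives the analytic barrier path x(μ) = 1 + μ, which approaches x* only as μ → 0.

```python
# Classical log-barrier sketch on a hypothetical 1-D problem:
#   minimize x  subject to  1 - x <= 0   (solution x* = 1).
import numpy as np
from scipy.optimize import minimize_scalar

def barrier(x, mu):
    # B(x; mu) = x - mu*ln(x - 1); infinite outside the interior x > 1
    return x - mu * np.log(x - 1.0) if x > 1.0 else np.inf

for mu in [1.0, 0.1, 0.01]:
    # bounded search keeps the iterates strictly interior
    res = minimize_scalar(barrier, bounds=(1.0001, 3.0), args=(mu,),
                          method="bounded")
    print(mu, res.x)  # numerically matches the analytic path x(mu) = 1 + mu
```

This is exactly the drawback noted above: the operation set is the interior F⁰, and a strictly interior starting point is required.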
In this section, our penalty function is constructed by augmenting problem (23) with a variable ε, as in (32). In the following, we discuss the exactness of the penalty function f_p(x, ε).
where the set appearing above denotes the set of local minimizers of problem (23).