This paper presents a general and comprehensive description of Optimization Methods and Algorithms from a novel viewpoint. It is shown, in particular, that Direct Methods, Iterative Methods, and Computer Science Algorithms belong to a well-defined general class of both Finite and Infinite Procedures, characterized by suitable descent directions.

The dichotomy between Computer Science and Numerical Analysis has been for many years the main obstacle to the development of

Since the formulation of a problem requires the preliminary definition of the variables and the functions involved in the model, the antithesis between finite and continuous applied mathematics is even stronger from a computational point of view.

In Computer Science, problems are typically defined on discrete sets (graphs, integer variables and so forth) and are characterized by procedures formalized in a

Direct Methods, which are classical tools of Numerical Analysis, can be considered, in fact,

Furthermore, Linear Programming, Convex Quadratic Programming, and the unconstrained minimization of a symmetric positive definite quadratic form are continuous problems that can be solved exactly in a finite number of steps. This proves that the distinction between algorithms and infinite iterative procedures is not always characterized by the discrete or the continuous range of the variables involved in the problem.
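
As a minimal illustration of finite termination on a continuous problem, the conjugate gradient method minimizes the convex quadratic q(x) = ½xᵀAx − bᵀx with A symmetric positive definite in at most n steps in exact arithmetic. The following sketch (pure Python, hypothetical 2×2 data) is illustrative, not an efficient implementation:

```python
# Conjugate Gradient on q(x) = 0.5 x^T A x - b^T x with SPD A:
# in exact arithmetic it terminates in at most n steps (here n = 2).
def cg(A, b, x, n):
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    p = r[:]
    for _ in range(n):
        if all(abs(ri) < 1e-12 for ri in r):
            break
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        rr = sum(ri * ri for ri in r)
        alpha = rr / sum(p[i] * Ap[i] for i in range(n))     # exact line search
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        beta = sum(ri * ri for ri in r) / rr                 # A-conjugate update
        p = [r[i] + beta * p[i] for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # hypothetical SPD matrix
b = [1.0, 2.0]
x = cg(A, b, [0.0, 0.0], 2)    # minimizer reached after at most 2 steps
```

The finite-step behavior stems from the A-conjugacy of the search directions, not from any discreteness of the variables.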

Most Numerical Analysis methods are based upon the application of the

Gradient methods are usually considered in the literature as particular procedures within the framework of optimization techniques, for classical unconstrained or constrained problems.
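
The basic iteration these procedures share can be sketched as follows (a minimal illustration with a fixed step size and a hypothetical quadratic objective, not a production method):

```python
# Steepest descent: x_{k+1} = x_k - t * grad f(x_k); the negative gradient
# is the canonical descent direction.
def gradient_descent(grad, x, t=0.1, iters=200):
    for _ in range(iters):
        g = grad(x)
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# Hypothetical example: f(x, y) = (x - 1)^2 + 2*(y + 3)^2
grad = lambda v: [2.0 * (v[0] - 1.0), 4.0 * (v[1] + 3.0)]
x_star = gradient_descent(grad, [0.0, 0.0])  # approaches (1, -3)
```

A fixed step size suffices here only because the example is a well-conditioned quadratic; the line-search rules discussed later in the paper remove that restriction.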

The main aim of the present paper is to show that

Moreover, some classical discrete optimization algorithms can also be viewed in the framework of Gradient-type methods.

Hence, the

It is essential to underline that ABS methods [

It is important to emphasize that the typical finiteness of Computer Science algorithms is characterized by classes of Gradient-type methods converging to an isolated point of a suitable sequence generated by the procedure.

Furthermore, the most recent algorithms for Local Optimization can be precisely described by Gradient-type methods in a general framework. As a matter of fact, Interior Points techniques [

Moreover, a fundamental role in this new approach is played by the properties of suitable

We point out, in particular, the techniques based on

The utilization of Advanced Linear Algebra Techniques in NonLinear Programming opens a new research field, leading in many cases to a significant improvement both in the efficiency and in the practical application of Gradient-type methods for problems of operational interest [

In Deterministic Global Optimization, structured matrices enable remarkable results in the frame of the

The novel results on

Therefore, this survey also has the aim of identifying in-depth general relationships between Local Optimization techniques and Deterministic Global Optimization algorithms in the framework of Advanced Linear Algebra Techniques.

Let

By assuming

The following theorem generalizes a well-known result shown in [

If

Moreover, the following property holds:

Particular cases of descent directions can be obtained by setting

It is useful to underline that the general theory of admissible directions for unconstrained optimization [

The iterative scheme described by Algorithm

(a) Given

The convergence of Algorithm

Let

if

every EP

Notice that Theorem

Let

It is well known that the problem

However, it can also be proved that the application of the procedure defined in (

Moreover, if

So, once again, the distinction between Numerical Analysis direct methods (or Computer Science algorithms) and infinite procedures cannot be considered as the fundamental classification rule in computational mathematics.

In the case of the Steepest Descent method, the truncation error is

In [

According to the classical definition, the function

Hence,

As a matter of fact, the following result holds.

Let

Let us now consider some generalizations of convexity, which play an important role in global optimization; see [

Let

A function

A function

Let

Definition

Definition

Definition

In [

Let

Theorem

By utilizing the Armijo-Goldstein-Wolfe method [
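
The Armijo backtracking rule underlying such line searches can be sketched as follows (a minimal illustration; the parameter names and the quadratic example are ours, not from the cited work):

```python
# Armijo backtracking: shrink t until
#   f(x + t*d) <= f(x) + c * t * <grad f(x), d>,
# which guarantees sufficient decrease along the descent direction d.
def armijo_step(f, grad, x, d, t=1.0, c=1e-4, shrink=0.5):
    fx, gx = f(x), grad(x)
    slope = sum(gi * di for gi, di in zip(gx, d))  # directional derivative (< 0)
    while f([xi + t * di for xi, di in zip(x, d)]) > fx + c * t * slope:
        t *= shrink
    return t

# Hypothetical example: f(x) = x1^2 + x2^2, with d = -grad f(x)
f = lambda v: v[0] ** 2 + v[1] ** 2
grad = lambda v: [2.0 * v[0], 2.0 * v[1]]
x = [1.0, 1.0]
d = [-g for g in grad(x)]
t = armijo_step(f, grad, x, d)  # an acceptable step size
```

The full Armijo-Goldstein-Wolfe conditions add a curvature requirement on top of this sufficient-decrease test.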

Quadratic Programming (QP) is defined in the following way:

Let

Let us consider, for instance, the following problems:

The optimal solution of (

The following question arises: does QP characterize

Given a convex function

Assuming

Let

Let

The importance of Definition

Let

Moreover, for any fixed

Given a suitable integer

By Theorem

Given the convex functions,

Letting

So, from Definition

A set

Let

Then the corresponding conic feasibility problem

The technique utilized to prove Theorem

Theorem

Theorem

Hence, explicit formulas for the projection operators for suitable classes of nonlinear convex feasibility problems, in terms of the corresponding conified sets, might allow one to solve CPLC problem (
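
For simple convex sets such explicit projection formulas are well known; the following hedged sketch shows two of them (a box and a Euclidean ball, chosen by us for illustration, not the conified sets of the text):

```python
import math

# Euclidean projection onto the box [lo, hi]^n: clamp each coordinate.
def project_box(x, lo, hi):
    return [min(max(xi, lo), hi) for xi in x]

# Euclidean projection onto the ball of radius r centered at the origin:
# rescale x only when it lies outside the ball.
def project_ball(x, r):
    norm = math.sqrt(sum(xi * xi for xi in x))
    return x[:] if norm <= r else [r * xi / norm for xi in x]

p = project_box([2.0, -0.5, 0.3], 0.0, 1.0)   # clamps to [1.0, 0.0, 0.3]
q = project_ball([3.0, 4.0], 1.0)             # rescales to [0.6, 0.8]
```

Alternating such closed-form projections is the basic ingredient of classical projection methods for convex feasibility problems.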

Consider the particular CPLC problem

Given the convex set of feasible solutions

by Theorem

by Theorem

One can prove the following global convergence theorem [

Consider Problem (

Let

Condition (

Condition (

Let us now consider the classical “box-constrained” problem:

Set

Consider Problem (

In fact, we have the following.

Given

We can fruitfully combine the results of Theorems

Consider Problem (

If in a BFGS-type iterative scheme

By the assumptions it follows:

Hence, by (

If

Although the local minimization phases are performed effectively by the iterative scheme (

More precisely, by the utilization of

By injecting suitable “tunneling phases” into the method, one can avoid entrapment in a “bad” local minimum, that is, when the condition

Let

A matrix

The structure in (

It is well known, in fact, that a rank-p matrix can be recovered from a cross of p linearly independent columns and rows. Therefore, an arbitrary matrix can be interpolated by a
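
The cross recovery can be sketched as a skeleton decomposition: if W is the nonsingular p×p block at the intersection of p chosen columns C and p chosen rows R of a rank-p matrix A, then A = C W⁻¹ R. A minimal rank-1 sketch (hypothetical data; for p = 1 the block W is a single nonzero pivot):

```python
# Skeleton (cross) reconstruction for a rank-1 matrix:
# A[i][j] = A[i][j0] * A[i0][j] / A[i0][j0], i.e. one column, one row,
# and their intersection (the pivot) determine the whole matrix.
def cross_reconstruct(A, i0, j0):
    n, m = len(A), len(A[0])
    w = A[i0][j0]  # pivot: must be nonzero
    return [[A[i][j0] * A[i0][j] / w for j in range(m)] for i in range(n)]

# Hypothetical rank-1 matrix A = u v^T
u, v = [1.0, 2.0, 3.0], [4.0, 5.0]
A = [[ui * vj for vj in v] for ui in u]
B = cross_reconstruct(A, 0, 0)  # recovers A from a single cross
```

Practical cross approximation methods choose the cross adaptively so that the pivot block is well conditioned.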

An operational cross approximation method, evaluating large close-to-rank-p matrices in

A well-known family of Computer Science methods is represented by the so-called

Given

The algorithm computes

Therefore, formula (

Integer Nonlinear Programming with Linear Constraints problems (INPLCs) can be transformed into continuous GO problems over the unit hypercube [
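
One classical route for such transformations (a hedged sketch, not necessarily the construction of the cited reference) relaxes x_i ∈ {0, 1} to x_i ∈ [0, 1] and adds the concave penalty μ Σ x_i(1 − x_i), which vanishes exactly at the binary points, so that for μ large enough the minimizers of the penalized continuous problem sit at vertices of the unit hypercube:

```python
# P(x) = sum x_i (1 - x_i) is zero iff every x_i is 0 or 1,
# and strictly positive in the interior of the unit hypercube.
def penalty(x):
    return sum(xi * (1.0 - xi) for xi in x)

# Penalized continuous objective for a hypothetical linear f(x) = c^T x
def penalized(c, x, mu):
    return sum(ci * xi for ci, xi in zip(c, x)) + mu * penalty(x)

binary = penalty([0.0, 1.0, 1.0])   # no penalty at a binary point
interior = penalty([0.5, 0.5])      # positive penalty in the interior
```

The penalized problem is a continuous (generally nonconvex) GO problem, which is what makes the Gradient-type machinery of the paper applicable.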

Hence, the Gradient-type methods for Global Optimization of Section

In this paper we have tried to demonstrate that Gradient or Gradient-type methods lead both to a general approach to optimization problems and to the construction of efficient algorithms.

In particular, we have shown that the class of problems for which the optimal solution can be obtained in a finite number of steps is larger than that of canonical unconstrained Convex Quadratic problems or Convex Quadratic Programming. Moreover, we have pointed out that the classical distinction between Direct Methods and Iterative Methods cannot be considered as a fundamental classification of techniques in Numerical Analysis. Many optimization problems can, in fact, be solved in a finite number of steps by suitable efficient hybrid algorithms (see [

Furthermore, if the matrices involved in the computation are well conditioned, the superiority of Iterative Methods with respect to Direct ones, which is a typical feature of

Several

It is also important to underline that many combinatorial problems, representing a remarkable benchmark set in Computer Science, can be recast in terms of Gradient-type methods in a general framework.

Once again, we stress that the Fixed Point theorem, which is considered a milestone in Numerical Analysis and guarantees the convergence of most classical Iterative Methods, represents the background for only a subset of Gradient-type methods.
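
The Fixed Point mechanism referred to here can be sketched as the Banach iteration x_{k+1} = g(x_k) for a contraction g (a minimal illustration with the hypothetical example g = cos):

```python
import math

# Banach fixed point iteration: for a contraction g, the sequence
# x_{k+1} = g(x_k) converges to the unique solution of x = g(x).
def fixed_point(g, x, iters=100):
    for _ in range(iters):
        x = g(x)
    return x

x_star = fixed_point(math.cos, 1.0)  # x satisfying x = cos(x)
```

Gradient-type methods whose finiteness rests on descent directions, rather than on a contraction argument, fall outside this convergence scheme.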

This paper was partially supported by PRIN 2008 N. 20083KLJEZ.