We consider the data fitting problem, that is, the problem of approximating a function of several variables given by tabulated data, and the corresponding problem for inconsistent (overdetermined) systems of linear algebraic equations. Such problems, connected with the measurement of physical quantities, arise, for example, in physics and engineering. A traditional approach to solving these two problems is the discrete least squares data fitting method, which is based on discrete
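As a concrete illustration of discrete least squares data fitting, the following sketch fits a quadratic polynomial to tabulated data; the data points, basis, and degree here are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative tabulated data (x_i, y_i); not from the paper.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([1.1, 1.6, 2.4, 3.1, 4.2, 5.0, 6.1])

# Discrete least squares fit by a degree-2 polynomial:
# minimize sum_i (p(x_i) - y_i)^2 over the coefficients of p.
A = np.vander(x, 3, increasing=True)      # columns 1, x, x^2
coef, residual, rank, sv = np.linalg.lstsq(A, y, rcond=None)
p = np.polynomial.Polynomial(coef)        # coefficients in increasing order
print(coef)        # fitted coefficients c0, c1, c2
print(p(1.0))      # value of the fitted polynomial at x = 1
```

The normal-equations route would solve (A^T A) c = A^T y directly; `lstsq` is used here because it handles rank deficiency more robustly.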
Let
In order to ensure uniqueness of the solution to the problems under consideration, it is known that the following condition must be satisfied:
Thus, the polynomial
Recall that the system of functions
For the problems under consideration, the system
It is proved that functions
Given an inconsistent (overdetermined) system of linear algebraic equations,
We can associate the following minimization problems with (
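For illustration, the objectives most commonly associated with an inconsistent system Ax = b (the sum of squared residuals, the sum of absolute residuals, and the maximal absolute residual) can be set up as follows; the system here is an invented example, not one from the paper.

```python
import numpy as np

# Illustrative inconsistent (overdetermined) system Ax = b, m > n.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.0, 2.2, 2.9, 4.3])

def r(x):                                 # residual vector Ax - b
    return A @ x - b

# The three residual-norm objectives associated with the system:
f2   = lambda x: float(np.sum(r(x) ** 2))       # least squares
f1   = lambda x: float(np.sum(np.abs(r(x))))    # least absolute deviations (l1)
finf = lambda x: float(np.max(np.abs(r(x))))    # Chebyshev (l-infinity)

x2, *_ = np.linalg.lstsq(A, b, rcond=None)      # l2 minimizer
print(x2, f2(x2), f1(x2), finf(x2))
```

The three minimizers generally differ: f2 is smooth, while f1 and finf are nonsmooth convex functions, which is what motivates the subgradient machinery below.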
Approximations with respect to
Problems like (
The problems discussed in this paper, and problems related to them, are considered in [
The
The papers of Andersen [
The books of Clarke [
Numerical methods for best Chebyshev approximation are suggested, for example, in the book of Remez [
A subgradient algorithm for certain minimax and minisum problems is suggested in the paper of Chatelon et al. [
The least squares approach is discussed by Bertsekas [
A quasi-Newton approach to nonsmooth convex optimization problems in machine learning is considered in Yu et al. [
Polynomial algorithms for projecting a point onto a region defined by a linear constraint and box constraints in
The rest of the paper is organized as follows. In Section
Some known results, stated as propositions and used in subsequent sections, are recalled without proofs in Appendix
Below we prove some results that guarantee the solvability of the problems under consideration (Theorem
If
Let
Since the generalized polynomial
The following two theorems give the rules for calculating subgradients for some types of functions.
Let
Since convex functions have derivatives on the right and on the left at each interior feasible point, we can assume that
According to Proposition
Since the subdifferential
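In standard notation, the fact invoked here is that for a convex function f of one variable the subdifferential at an interior point x is the closed interval bounded by the one-sided derivatives:

```latex
\partial f(x) = \bigl[\, f'_{-}(x),\; f'_{+}(x) \,\bigr].
```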
Let
Since
Functions
Using the same reasoning, we obtain that
According to Proposition
Function
Similarly,
Functions
Functions
Since (
Using the same reasoning, we can conclude that problems (
Since
Existence of solutions to these problems can also be proved by using some general results.
As is known,
Linear independence of
Furthermore, since
The
When
Solvability of problems (
In addition, using the same reasoning, the following problem
Propositions
Similarly,
Since
Let
The
The following theorem guarantees convergence of the subgradient method (
If
By the assumptions of Theorem
Choose some
Both inequalities imply
The subgradient method (
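As a minimal sketch of a subgradient iteration of this kind, the following applies normalized subgradient steps with the diminishing step sizes t_k = 1/(k+1) to the l1 residual objective; the step rule, starting point, and data are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def subgradient_method(A, b, steps=2000):
    """Minimize f(x) = ||Ax - b||_1 by x_{k+1} = x_k - t_k * g_k / ||g_k||,
    with diminishing steps t_k = 1/(k+1) and g_k a subgradient at x_k.
    The subgradient method need not descend at every step, so the best
    iterate found so far is tracked and returned."""
    x = np.zeros(A.shape[1])
    f = lambda z: float(np.sum(np.abs(A @ z - b)))
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        g = A.T @ np.sign(A @ x - b)      # a subgradient of ||Ax - b||_1
        norm = np.linalg.norm(g)
        if norm == 0.0:                   # 0 is a subgradient: x is optimal
            break
        x = x - g / (norm * (k + 1.0))
        if f(x) < best_f:
            best_f, best_x = f(x), x.copy()
    return best_x, best_f

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([1.0, 2.2, 2.9, 4.3])
x, fbest = subgradient_method(A, b)
print(x, fbest)
```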
In order to apply the subgradient method for solving the problems under consideration, we have to calculate the corresponding subgradients.
Using that
Let
Let
Obviously, elements of
We can choose, for example,
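For the Chebyshev (maximal absolute residual) objective, one concrete choice of subgradient is the signed row of A attaining the maximum, since the objective is a maximum of affine functions. The sketch below, on illustrative random data, also verifies the subgradient inequality f(y) >= f(x) + g.(y - x) numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # illustrative overdetermined system
b = rng.standard_normal(6)

def f_inf(x):
    """Chebyshev objective: the maximal absolute residual."""
    return float(np.max(np.abs(A @ x - b)))

def subgrad_inf(x):
    """A subgradient of f_inf at x: pick one row j attaining the maximum
    absolute residual and return sign(r_j) * A[j]."""
    r = A @ x - b
    j = int(np.argmax(np.abs(r)))
    return np.sign(r[j]) * A[j]

# Check the subgradient inequality f(y) >= f(x) + g.(y - x):
x = rng.standard_normal(3)
g = subgrad_inf(x)
for _ in range(100):
    y = rng.standard_normal(3)
    assert f_inf(y) >= f_inf(x) + g @ (y - x) - 1e-12
```

Any convex combination of such signed rows over the maximizing indices is also a valid subgradient; picking a single maximizing row is simply the cheapest choice.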
In this section, we present the results of computational experiments obtained by the subgradient method for problems (
For both methods (
Consider a problem with
Results: see Table
By method (…) for problem (…): Iterations 101, Run time 0.00045 s
By method (…) for problem (…): Iterations 98, Run time 0.00055 s
By method (…) for problem (…): Iterations 97, Run time 0.00035 s
Consider a problem with
Results: see Table
By method (…) for problem (…): Iterations 103, Run time 0.00037 s
By method (…) for problem (…): Iterations 103, Run time 0.000375 s
By method (…) for problem (…): Iterations 96, Run time 0.00038 s
Consider a problem with
Results: see Table
By method (…) for problem (…): Iterations 100, Run time 0.00015 s
By method (…) for problem (…): Iterations 105, Run time 0.00017 s
By method (…) for problem (…): Iterations 82, Run time 0.00006 s
Consider a problem with
Results: see Table
By method (…) for problem (…): Iterations 100, Run time 0.00065 s
By method (…) for problem (…): Iterations 104, Run time 0.0018 s
By method (…) for problem (…): Iterations 108, Run time 0.0019 s
Consider a problem with
Results: see Table
By method (…) for problem (…): Iterations 108, Run time 0.0048 s
By method (…) for problem (…): Iterations 118, Run time 0.0051 s
By method (…) for problem (…): Iterations 111, Run time 0.0049 s
Consider a problem with
Results: see Table
By method (…) for problem (…): Iterations 102, Run time 0.00375 s
By method (…) for problem (…): Iterations 119, Run time 0.0039 s
By method (…) for problem (…): Iterations 101, Run time 0.0037 s
Examples
Consider
Results: see Table
Therefore, the algebraic polynomials obtained by the two methods are
By method (…) for problem (…): Iterations 101, Run time 0.00135 s
By method (…) for problem (…): Iterations 106, Run time 0.0019 s
Consider the system of linear equations
Results: see Table
By method (…) for problem (…): Iterations 101, Run time 0.0011 s
By method (…) for problem (…): Iterations 84, Run time 0.0008 s
By method (…) for problem (…): Iterations 18, Run time 0.00165 s
The computational experiments presented above, as well as many other experiments, allow us to conclude that the subgradient method (
In this section, some known results used in this paper, stated as propositions, are recalled without proofs.
The following Weierstrass theorem and its corollary are useful for establishing the solvability of the problems under consideration.
A lower (upper) semicontinuous function
Let
Proposition
Since a continuous function is both lower and upper semicontinuous, Proposition
Let
If
Let
Recall that a vector
The set containing all subgradients of
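In standard notation, the objects recalled here are as follows: a vector g is a subgradient of a convex function f at x if the subgradient inequality holds, and the subdifferential of f at x is the set of all such vectors:

```latex
g \in \partial f(x)
\quad\Longleftrightarrow\quad
f(y) \;\ge\; f(x) + \langle g,\, y - x \rangle
\quad \text{for all } y.
```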
If
Let
Let
Let
Let
Let
Let
Let
If
In order to compare the results obtained by the subgradient method for nonsmooth optimization for problems (
The
We use, for example, a line search method for choosing the step size
An alternative way of choosing the step length
The gradient method (
Gradients of
Let
Then
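As a sketch of the smooth counterpart used for comparison, the following applies the gradient method with an Armijo backtracking line search to f(x) = ||Ax - b||^2, whose gradient is 2 A^T (Ax - b); the backtracking constants, iteration count, and data are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def gradient_descent_ls(A, b, iters=2000):
    """Gradient method for the smooth problem f(x) = ||Ax - b||^2, with
    gradient 2 A^T (Ax - b) and an Armijo backtracking line search:
    halve the trial step until a sufficient-decrease test holds."""
    x = np.zeros(A.shape[1])
    f = lambda z: float(np.sum((A @ z - b) ** 2))
    for _ in range(iters):
        g = 2.0 * A.T @ (A @ x - b)
        t = 1.0
        while f(x - t * g) > f(x) - 0.5 * t * float(g @ g):
            t *= 0.5                      # backtrack
        x = x - t * g
    return x

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([1.0, 2.2, 2.9, 4.3])
x = gradient_descent_ls(A, b)
print(x)   # agrees with the least squares solution of Ax = b
```

On this smooth problem the iterate converges to the least squares solution, which provides a baseline against which the nonsmooth subgradient runs can be compared.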
The author declares that there is no conflict of interest regarding the publication of this paper.