1. Introduction
Many neural network models have been presented and applied in pattern recognition, automatic control, signal processing, artificial life, and decision support. Among these models, BP neural networks and RBF neural networks are two classes that are widely used in control because of their ability to approximate any continuous nonlinear function. Up to now, these two classes of neural networks have been successfully applied to approximate nonlinear continuous functions [1–11].
Wang and Zhao [12] and Cao and Zhao [13] presented a class of neural networks, called neural networks with two weights, that combines the advantages of BP neural networks with those of RBF neural networks. This model can simulate not only BP and RBF neural networks but also higher-order neural networks. It contains both a direction weight, corresponding to the BP network, and a core weight, corresponding to the RBF network. The neuron function of this network has the following form:
(1) $y = f\left[\sum_{j=1}^{m}\left(\dfrac{\omega_j(x_j - z_j)}{|\omega_j(x_j - z_j)|}\right)^{s}|\omega_j(x_j - z_j)|^{p} - \theta\right]$,
where $y$ is the output of the neuron, $f$ is the activation function, $\theta$ is the threshold, $\omega_j$ is the direction weight, $z_j$ is the core weight, $x_j$ is the input, and $s$, $p$ are two parameters.
In (1), when $z_j = 0$, $s = 1$, and $p = 1$, (1) reduces to the mathematical model of the neurons of BP networks; when $\omega_j$ takes a fixed value, $s = 0$, and $p = 2$, (1) becomes the mathematical model of the neurons of RBF networks.
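The two reductions above are easy to check directly. The sketch below evaluates the neuron function (1) numerically; the activation $f=\tanh$ and the sample values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def two_weight_neuron(x, w, z, s, p, theta, f=np.tanh):
    """Neuron function (1): y = f[ sum_j (d_j/|d_j|)^s |d_j|^p - theta ],
    where d_j = w_j (x_j - z_j). The activation f is an assumed choice."""
    d = w * (x - z)
    net = np.sum(np.sign(d) ** s * np.abs(d) ** p) - theta
    return f(net)

x = np.array([0.5, -1.2])
w = np.array([2.0, 3.0])
theta = 0.1

# z_j = 0, s = 1, p = 1: reduces to the BP neuron f(sum_j w_j x_j - theta)
bp = two_weight_neuron(x, w, np.zeros(2), s=1, p=1, theta=theta)
assert abs(bp - np.tanh(w @ x - theta)) < 1e-12

# fixed w_j, s = 0, p = 2: reduces to the RBF-type neuron
# f(sum_j |w_j (x_j - z_j)|^2 - theta)
z = np.array([0.3, -0.7])
rbf = two_weight_neuron(x, w, z, s=0, p=2, theta=theta)
assert abs(rbf - np.tanh(np.sum((w * (x - z)) ** 2) - theta)) < 1e-12
```

Here the factor $(d/|d|)^s$ is implemented as $\operatorname{sign}(d)^s$, which is how it is used throughout the paper.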
Since Wang and Zhao [12] presented the neural network with two weights, the network has attracted wide attention both domestically and internationally. So far, it has been successfully applied in many fields, such as face recognition, voice recognition, and protein structure research [13]. However, the theoretical study of the ability of the neural network with two weights to approximate nonlinear functions is still in its initial stage. Because the neuron function has a very complicated form (see (1)), approximation results for nonlinear functions are rare in the present literature. This motivates us to study the ability of the neural network with two weights to approximate any nonlinear continuous function.
In [14], approximation operators with logarithmic sigmoidal function of a neural network with two weights (1), together with a class of quasi-interpolation operators, are investigated. Using these operators as approximation tools, upper bounds on the errors of approximating continuous functions are established.
In [3], Bochner-Riesz means operators of double Fourier series are used to construct network operators for approximating nonlinear functions, and the errors of approximation by the operators are estimated.
However, approximation results for two-weight network operators with sigmoidal functions obtained by means of Fourier series have not yet appeared. So, in this paper, our objective is to prove that, by adjusting the values of the parameters $w_i$, $z_i$, and $p$, the neural network with two weights and with sigmoidal functions can approximate any nonlinear continuous function arbitrarily well, and that it has better approximation ability than the BP neural networks constructed in [3].
To help readers with the mathematical symbols used in the paper, we introduce the following notation. $\|f\|_\infty$ denotes the uniform norm of $f$ on $\mathbb{R}$, $|\cdot|$ denotes the Euclidean norm of $x$ in $\mathbb{R}$, and $C([a,b],\mathbb{R})$ is the set of continuous functions defined on $[a,b]$ taking values in $\mathbb{R}$. $\omega(f,h)$ is the modulus of continuity of $f$ defined by
(2) $\omega(f,h) = \sup_{0 < t \le h}\, \max_{x + t \in [-1,1]} |f(x) - f(x+t)|$,
and $\operatorname{sign}(x)$ denotes the sign function, defined by
(3) $\operatorname{sign}(x) = \begin{cases} 1, & x > 0, \\ -1, & x < 0, \\ 0, & x = 0. \end{cases}$
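As a concrete illustration of definition (2), the modulus of continuity can be estimated by brute force on a grid; the grid resolutions below are implementation choices, not part of the definition.

```python
import numpy as np

def modulus_of_continuity(f, h, grid=2001, tsteps=50):
    """Numerical estimate of omega(f, h) from (2) on [-1, 1]."""
    xs = np.linspace(-1.0, 1.0, grid)
    best = 0.0
    for t in np.linspace(h / tsteps, h, tsteps):   # samples of 0 < t <= h
        x = xs[xs + t <= 1.0]                      # keep x + t inside [-1, 1]
        best = max(best, float(np.max(np.abs(f(x) - f(x + t)))))
    return best

# For f(x) = x (Lipschitz with constant 1), omega(f, h) = h exactly.
print(modulus_of_continuity(lambda x: x, 0.5))     # 0.5
```

By construction the estimate is monotone in $h$, matching the behavior of $\omega(f,\cdot)$.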
2. Construction and Approximation of the Network Operators with Sigmoidal Function
A function $\sigma : \mathbb{R} \to \mathbb{R}$ is called a sigmoidal function if $\lim_{x\to+\infty}\sigma(x) = A$ and $\lim_{x\to-\infty}\sigma(x) = B$, where $A$ and $B$ are constants. Sigmoidal functions are an important class of functions, which play an important role in the research of neural networks.
One of the most familiar sigmoidal functions is the logarithmic type function defined by
(4) $s(x) = \dfrac{1}{1 + e^{-x}}, \quad x \in \mathbb{R}$.
For the logarithmic type function, if we define
(5) $\phi(x) = \dfrac{1}{2}\left(s(x+1) - s(x-1)\right), \quad x \in \mathbb{R}$,
then several useful properties follow, such as $\int_{-\infty}^{+\infty}\phi(x)\,dx = 1$, the vanishing of the Fourier transform of $\phi$ at the nonzero integers, and $\sum_{k=-\infty}^{+\infty}\phi(x-k) = 1$ (see [6]).
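These properties of $\phi$ are easy to confirm numerically; the truncation ranges below are arbitrary choices that make the neglected tails negligible.

```python
import numpy as np

def s(x):                              # logarithmic (logistic) sigmoid of (4)
    return 1.0 / (1.0 + np.exp(-x))

def phi(x):                            # the kernel of (5)
    return 0.5 * (s(x + 1) - s(x - 1))

# integral over R, truncated to [-50, 50] where phi is already negligible
xs = np.linspace(-50.0, 50.0, 400001)
integral = float(np.sum(phi(xs)) * (xs[1] - xs[0]))

# partition of unity: sum of integer translates at an arbitrary point
x0 = 0.37
total = sum(phi(x0 - k) for k in range(-200, 201))

print(round(integral, 4), round(total, 6))   # both close to 1
```

The second check is the discrete counterpart of the Poisson summation argument used in Lemma 1 below.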
In this paper, we assume that the sigmoidal function is centrally symmetric with respect to the point $(0, \sigma(0))$. Let $\sigma$ be a sigmoidal function and
(6) $S(x) = \dfrac{1}{2}\left(\sigma(x+1) - \sigma(x-1)\right)$.
Then
(7) $\lim_{x\to+\infty} S(x) = \lim_{x\to-\infty} S(x) = 0$
and $S(x)$ is an even function. From the Poisson summation formula (see [15]), we can obtain the following lemma.
Lemma 1 (see [3]).
Assume that $\sigma$ is a sigmoidal function, centrally symmetric with respect to the point $(0, \sigma(0))$, and that $S(x)$ is given by (6). If there exist positive constants $C$ and $\delta$ such that
(8) $|S(x)| \le C(1+|x|)^{-1-\delta}, \quad |S^{*}(x)| \le C(1+|x|)^{-1-\delta}, \quad x \in \mathbb{R}, \qquad \int_{-\infty}^{+\infty} S(x)\,dx = 1$,
then
(9) $\sum_{k=-\infty}^{+\infty} S(x-k) = 1 + 2\sum_{k=1}^{+\infty} S^{*}(k)\cos 2k\pi x$,
where $S^{*}(k)$ denotes the Fourier transform of $S(x)$ evaluated at $k$ (see [15]).
If identity (9), that is,
(10) $\sum_{k=-\infty}^{+\infty} S(x-k) = 1 + 2\sum_{k=1}^{+\infty} S^{*}(k)\cos 2k\pi x$,
holds, then one has, for $w_i \in \mathbb{R}$,
(11) $\sum_{k=-\infty}^{+\infty} S(w_i x - k) = 1 + 2\sum_{k=1}^{\infty} S^{*}(k)\cos 2k\pi w_i x$.
Let $S_A(x) = (1/A)\,S(x/A)$ $(A > 0)$. Then, using the properties of the Fourier transform, it follows that
(12) $\sum_{k=-\infty}^{+\infty} S_A(w_i x - k) = 1 + 2\sum_{k=1}^{\infty} S^{*}(Ak)\cos 2k\pi w_i x$.
Thus one has
(13) $\sum_{k=-\infty}^{+\infty} S_A\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right] = 1 + 2\sum_{k=1}^{\infty} S^{*}(Ak)\cos 2k\pi\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p}\right]$.
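Identity (13) can be spot-checked numerically. For the logistic sigmoid the coefficients $S^{*}(k)$, $k \ge 1$, vanish (as noted after (5)), so with $A = 1$ the right-hand side collapses to 1; the parameter values below are arbitrary test choices.

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))       # logistic sigmoid, an assumed choice

def S(x):                                 # kernel (6)
    return 0.5 * (sigma(x + 1) - sigma(x - 1))

w, z, s, p = 3.0, 1.5, 1, 0.9             # arbitrary parameters
x = 0.4
d = w * (x - z)
t = np.sign(d) ** s * np.abs(d) ** p      # argument (w(x-z)/|w(x-z)|)^s |w(x-z)|^p

total = sum(S(t - k) for k in range(-500, 501))
print(round(total, 8))                     # close to 1, as (13) predicts
```

The truncation to $|k| \le 500$ is harmless because $S$ decays exponentially for the logistic choice.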
Lemma 2.
If $x > 0$ and $0 < a < 1$, then $x^{a} - ax \le 1 - a$.
Proof.
Let $f(x) = x^{a} - ax$; then $f'(x) = a(x^{a-1} - 1)$. Since $a < 1$, we have $f'(x) > 0$ for $0 < x < 1$ and $f'(x) < 0$ for $x > 1$. Hence $f(1) = 1 - a = \max_{x>0}\{f(x)\}$; namely, $f(x) \le 1 - a$ for $x > 0$.
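Lemma 2 is applied below both with $a = p$ and with $a = 3/(4p)$; a quick numerical check of the inequality:

```python
import numpy as np

# Check Lemma 2: for x > 0 and 0 < a < 1, x**a - a*x <= 1 - a,
# with the maximum attained at x = 1 as the proof shows.
xs = np.linspace(1e-6, 100.0, 100001)
for a in (0.1, 0.5, 0.75, 0.99):
    gap = xs ** a - a * xs
    assert gap.max() <= 1 - a + 1e-12
    assert abs((1.0 ** a - a * 1.0) - (1 - a)) < 1e-15
print("Lemma 2 holds on the test grid")
```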
For each $f \in C([-1,1], \mathbb{R})$ and every $n \in \mathbb{N}$, we construct the network operators
(14) $G_{n,A}(f,x) = \sum_{k=-2n}^{2n} f_{1,e}(g(n,x))\, S_A\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right]$,
where
$w_i = \dfrac{2n + p - p\,n^{3/(4p)}}{4p}, \qquad z_i = 1 + \dfrac{5p\,n^{3/(4p)}}{2n + p - p\,n^{3/(4p)}} \quad (i = 1, 2, \ldots, n),$
$g(n,x) = \dfrac{kx}{[w_i(x - z_i)]^{p}\,[\operatorname{sign}(w_i(x - z_i))]^{p-s}},$
and $f_{1,e}$ is the extension of $f$ defined by
(15) $f_{1,e}(x) = \begin{cases} f(x), & x \in [-1,1], \\ f(\operatorname{sign}(x)), & |x| > 1. \end{cases}$
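The operator (14) can be implemented directly. The sketch below uses the logistic sigmoid for $S$, reads the exponent in the weight formulas as $n^{3/(4p)}$, and takes $s = p = 1$ so that the sign-power factors simplify; all of these concrete choices are assumptions made for illustration.

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))          # logistic sigmoid, an assumed choice

def S_A(x, A):                                # S_A(x) = (1/A) S(x/A), S from (6)
    return 0.5 * (sigma(x / A + 1) - sigma(x / A - 1)) / A

def f_ext(f, x):                              # extension f_{1,e} of (15)
    return f(np.clip(x, -1.0, 1.0))

def G(f, x, n, A, s=1, p=1.0):
    """Sketch of the operator G_{n,A}(f, x) of (14) at one point x in [-1, 1]."""
    r = n ** (3.0 / (4.0 * p))
    w = (2 * n + p - p * r) / (4 * p)         # w_i below (14)
    z = 1 + 5 * p * r / (2 * n + p - p * r)   # z_i below (14)
    d = w * (x - z)
    u = np.sign(d) ** s * np.abs(d) ** p      # (w(x-z)/|w(x-z)|)^s |w(x-z)|^p
    total = 0.0
    for k in range(-2 * n, 2 * n + 1):
        g = k * x / u                         # node map g(n, x) for this k
        total += f_ext(f, g) * S_A(u - k, A)
    return total

# Even for simple f the operator should already track f closely:
print(G(lambda t: t, 0.3, n=200, A=2.0))      # roughly 0.3
```

The kernel mass concentrates at $k \approx u$, where $g(n,x) = kx/u \approx x$, which is what makes $G_{n,A}(f,x)$ track $f(x)$.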
We can now state the first main result.
Theorem 3.
Assume that $\sigma$ satisfies the conditions of Lemma 1 and that $p$ satisfies $3/4 < p \le 1$. If $f \in C([-1,1], \mathbb{R})$, then there exists a positive constant $C$ such that
(16) $\|f - G_{n,A}(f)\|_\infty \le 2CA^{\delta}\dfrac{1}{\delta}\left(n + \dfrac{p}{2}\right)^{-\delta}\|f\|_\infty + \dfrac{2C}{A^{1+\delta}}\|f\|_\infty + C\,\omega\!\left(f, \dfrac{1}{\sqrt[4]{n}}\right)A^{\delta} + 2CA^{\delta}\|f\|_\infty\left(\dfrac{4}{5}\right)^{p\delta}\dfrac{1}{n^{\delta/2}}$.
Proof.
From (13) and (14), we have
(17) $f(x) - G_{n,A}(f,x) = \sum_{k=-\infty}^{+\infty} f(x)\, S_A\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right] - G_{n,A}(f,x) - 2f(x)\sum_{k=1}^{\infty} S^{*}(Ak)\cos 2k\pi\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p}\right]$
$= \sum_{k=-\infty}^{-2n-1} f(x)\, S_A\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right] + \sum_{k=2n+1}^{+\infty} f(x)\, S_A\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right] - 2f(x)\sum_{k=1}^{+\infty} S^{*}(Ak)\cos 2k\pi\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p}\right] + \sum_{k=-2n}^{2n} \left(f(x) - f_{1,e}(g(n,x))\right) S_A\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right]$
$= I_1 + I_2 + I_3 + I_4$.
Since $w_i = \frac{2n + p - p\,n^{3/(4p)}}{4p}$ and $z_i = 1 + \frac{5p\,n^{3/(4p)}}{2n + p - p\,n^{3/(4p)}}$, from Lemma 2, for $k \in (-\infty, -2n-1]$ and $x \in [-1,1]$, we have
(18) $\left|\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right| \ge -k - \left|\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p}\right| \ge -k - |w_i(1 + |z_i|)|^{p} \ge -k - p\,|w_i(1 + |z_i|)| + p - 1 \ge -k + \dfrac{p}{2} - 1 - n - \dfrac{3p\,n^{3/(4p)}}{4} \ge n + \dfrac{p}{2} - \dfrac{3p\,n^{3/(4p)}}{4} \ge n + \dfrac{p}{2} - \dfrac{3p}{4}\left(\dfrac{3n}{4p} + 1 - \dfrac{3}{4p}\right) = \dfrac{7n}{16} - \dfrac{p}{4} + \dfrac{9}{16} \ge 1 - \dfrac{p}{4} \ge \dfrac{3}{4} > 0$.
Hence
(19) $|I_1| = \sum_{k=-\infty}^{-2n-1}\left|f(x)\, S_A\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right]\right| \le C\|f\|_\infty A^{\delta}\sum_{k=-\infty}^{-2n-1}\left|\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right|^{-1-\delta} \le C\|f\|_\infty A^{\delta}\sum_{k=-\infty}^{-2n-1}\left(-k + \dfrac{p}{2} - 1 - n - \dfrac{3p\,n^{3/(4p)}}{4}\right)^{-1-\delta} \le CA^{\delta}\|f\|_\infty\int_{-\infty}^{-2n-1}\left(-x + \dfrac{p}{2} - 1 - n - \dfrac{3p\,n^{3/(4p)}}{4}\right)^{-1-\delta} dx = CA^{\delta}\|f\|_\infty\dfrac{1}{\delta}\left(n + \dfrac{p}{2} - \dfrac{3p\,n^{3/(4p)}}{4}\right)^{-\delta}$.
Similarly, since $w_i = \frac{2n + p - p\,n^{3/(4p)}}{4p}$ and $z_i = 1 + \frac{5p\,n^{3/(4p)}}{2n + p - p\,n^{3/(4p)}}$, from Lemma 2, for $k \in [2n+1, +\infty)$ and $x \in [-1,1]$, we have
(20) $\left|\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right| \ge k - |w_i(1 + |z_i|)|^{p} \ge k - p\,|w_i(1 + |z_i|)| + p - 1 \ge k + \dfrac{p}{2} - 1 - n - \dfrac{3p\,n^{3/(4p)}}{4} \ge n + \dfrac{p}{2} - \dfrac{3p\,n^{3/(4p)}}{4} \ge n + \dfrac{p}{2} - \dfrac{3p}{4}\left(\dfrac{3n}{4p} + 1 - \dfrac{3}{4p}\right) = \dfrac{7n}{16} - \dfrac{p}{4} + \dfrac{9}{16} \ge 1 - \dfrac{p}{4} \ge \dfrac{3}{4} > 0$.
Hence
(21) $|I_2| \le \|f\|_\infty\sum_{k=2n+1}^{+\infty}\left|S_A\left[\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right]\right| \le C\|f\|_\infty\dfrac{1}{A}\sum_{k=2n+1}^{+\infty}\dfrac{A^{1+\delta}}{\left|\left(\dfrac{w_i(x-z_i)}{|w_i(x-z_i)|}\right)^{s}|w_i(x-z_i)|^{p} - k\right|^{1+\delta}} \le C\|f\|_\infty A^{\delta}\sum_{k=2n+1}^{+\infty}\left(k + \dfrac{p}{2} - 1 - n - \dfrac{3p\,n^{3/(4p)}}{4}\right)^{-1-\delta} \le C\|f\|_\infty A^{\delta}\int_{2n+1}^{+\infty}\left(x + \dfrac{p}{2} - 1 - n - \dfrac{3p\,n^{3/(4p)}}{4}\right)^{-1-\delta} dx = CA^{\delta}\|f\|_\infty\dfrac{1}{\delta}\left(n + \dfrac{p}{2} - \dfrac{3p\,n^{3/(4p)}}{4}\right)^{-\delta}$.
It is easy to see that
(22) $|I_3| \le 2\|f\|_\infty\sum_{k=1}^{+\infty}|S^{*}(Ak)| \le 2C\|f\|_\infty\sum_{k=1}^{+\infty}\dfrac{1}{(Ak)^{1+\delta}} \le \dfrac{2C}{A^{1+\delta}}\|f\|_\infty$.
Next we estimate $I_4$. Consider
(23) $|I_4| = \sum_{|x - g(n,x)| \le 1/\sqrt[4]{n}}\left|f(x) - f_{1,e}(g(n,x))\right|\cdot\left|S_A\left([w_i(x-z_i)]^{p}[\operatorname{sign}(w_i(x-z_i))]^{p-s} - k\right)\right| + \sum_{|x - g(n,x)| > 1/\sqrt[4]{n}}\left|f(x) - f_{1,e}(g(n,x))\right|\cdot\left|S_A\left([w_i(x-z_i)]^{p}[\operatorname{sign}(w_i(x-z_i))]^{p-s} - k\right)\right| \le \omega\!\left(f, \dfrac{1}{\sqrt[4]{n}}\right)\sum_{k=-\infty}^{+\infty}\left|S_A\left([w_i(x-z_i)]^{p}[\operatorname{sign}(w_i(x-z_i))]^{p-s} - k\right)\right| + \Delta \le C\,\omega\!\left(f, \dfrac{1}{\sqrt[4]{n}}\right)A^{\delta} + \Delta$,
where
(24) $\Delta \le 2\|f\|_\infty\sum_{|kx/g(n,x) - k| > |[w_i(x-z_i)]^{p}|/(|x|\sqrt[4]{n})}\left|S_A\left([w_i(x-z_i)]^{p}[\operatorname{sign}(w_i(x-z_i))]^{p-s} - k\right)\right| \le 2\|f\|_\infty\sum_{|kx/g(n,x) - k| > |[w_i(x-z_i)]^{p}|/(|x|\sqrt[4]{n})}\dfrac{CA^{\delta}}{\left|[w_i(x-z_i)]^{p}[\operatorname{sign}(w_i(x-z_i))]^{p-s} - k\right|^{1+\delta}}$.
Since
(25) $\Delta \le 2CA^{\delta}\|f\|_\infty\int_{|w_i^{p}|\,|x - z_i|^{p}/(|x|\sqrt[4]{n})}^{+\infty}\dfrac{dt}{t^{1+\delta}} = 2CA^{\delta}\|f\|_\infty\left(\dfrac{w_i^{p}|x - z_i|^{p}}{|x|\sqrt[4]{n}}\right)^{-\delta}$,
substituting (25) into (23) gives
(26) $|I_4| \le C\,\omega\!\left(f, \dfrac{1}{\sqrt[4]{n}}\right)A^{\delta} + 2CA^{\delta}\|f\|_\infty\left(\dfrac{w_i^{p}|x - z_i|^{p}}{|x|\sqrt[4]{n}}\right)^{-\delta} \le C\,\omega\!\left(f, \dfrac{1}{\sqrt[4]{n}}\right)A^{\delta} + 2CA^{\delta}\|f\|_\infty\left[\dfrac{\sqrt[4]{n}}{w_i^{p}(|z_i| - 1)^{p}}\right]^{\delta} \le C\,\omega\!\left(f, \dfrac{1}{\sqrt[4]{n}}\right)A^{\delta} + 2CA^{\delta}\|f\|_\infty\left(\dfrac{4}{5}\right)^{p\delta}\dfrac{1}{n^{\delta/2}}$.
Substituting (19)–(22) and (26) into (17) gives
(27) $|f(x) - G_{n,A}(f,x)| \le 2CA^{\delta}\dfrac{1}{\delta}\left(n + \dfrac{p}{2}\right)^{-\delta}\|f\|_\infty + \dfrac{2C}{A^{1+\delta}}\|f\|_\infty + C\,\omega\!\left(f, \dfrac{1}{\sqrt[4]{n}}\right)A^{\delta} + 2CA^{\delta}\|f\|_\infty\left(\dfrac{4}{5}\right)^{p\delta}\dfrac{1}{n^{\delta/2}}$.
This finishes the proof of Theorem 3.
Theorem 4.
Assume that $\sigma$ satisfies the conditions of Lemma 1 and that $p$ satisfies $2(4/5)^{p\delta} < 1 + 1/\delta$ and $3/4 < p \le 1$. If $f \in C([-1,1],\mathbb{R})$, then the neural network with two weights approximates any nonlinear continuous function more precisely than the BP neural networks constructed in [3].
Proof.
From Theorem 3, the error of approximation of the neural network with two weights to any nonlinear continuous function $f(x)$ is
(28) $2CA^{\delta}\dfrac{1}{\delta}\left(n + \dfrac{p}{2}\right)^{-\delta}\|f\|_\infty + \dfrac{2C}{A^{1+\delta}}\|f\|_\infty + C\,\omega\!\left(f, \dfrac{1}{\sqrt[4]{n}}\right)A^{\delta} + 2CA^{\delta}\|f\|_\infty\left(\dfrac{4}{5}\right)^{p\delta}\dfrac{1}{n^{\delta/2}}$.
By choosing $n$ and $A$ such that $An^{-1}\to 0$, $1/A \to 0$, $A/n^{1/2}\to 0$, and $\omega(f, 1/\sqrt[4]{n})A^{\delta}\to 0$, we guarantee that the above error tends to zero. From [3, Theorem 2.2], the error of approximation of BP neural networks to the same nonlinear continuous function is
(29) $C\left[\|f\|_\infty\left(\left(1 + \dfrac{1}{\delta}\right)\left(\dfrac{A}{n}\right)^{\delta} + \dfrac{2C}{A^{1+\delta}}\right) + \omega\!\left(f, \dfrac{1}{n}\right)A^{\delta}\right]$.
Since $2(4/5)^{p\delta} < 1 + 1/\delta$, we obtain, for $n$ sufficiently large,
(30) $2CA^{\delta}\dfrac{1}{\delta}\left(n + \dfrac{p}{2}\right)^{-\delta}\|f\|_\infty + \dfrac{2C}{A^{1+\delta}}\|f\|_\infty + C\,\omega\!\left(f, \dfrac{1}{n}\right)A^{\delta} + 2CA^{\delta}\|f\|_\infty\left(\dfrac{4}{5}\right)^{p\delta}\dfrac{1}{n^{\delta/2}} - C\left[\|f\|_\infty\left(\left(1 + \dfrac{1}{\delta}\right)\left(\dfrac{A}{n}\right)^{\delta} + \dfrac{2}{A^{1+\delta}}\right) + \omega\!\left(f, \dfrac{1}{n}\right)A^{\delta}\right] = CA^{\delta}n^{-\delta/2}\left(\dfrac{2}{\delta}\left[\dfrac{n}{n + p/2}\right]^{\delta} + 2\left(\dfrac{4}{5}\right)^{p\delta} - 1 - \dfrac{1}{\delta}\right)\|f\|_\infty < 0$,
which shows that the approximation error of the neural network with two weights is smaller than that of the BP neural networks constructed in [3]. Hence, by (30), the neural network with two weights approximates any nonlinear continuous function more precisely than those BP neural networks.
Remark 5.
We can choose $C$ and $\delta$ such that these two parameters satisfy all inequalities in [3, Theorem 2.1] and in Theorems 3 and 4. We now give an example to illustrate the result. Let
(31) $\sigma(x) = \dfrac{1}{\pi}\arctan x, \quad x \in \mathbb{R}$.
Since
(32) $S(x) = \dfrac{1}{2\pi}\left[\arctan(x+1) - \arctan(x-1)\right]$,
we have
(33) $\tan\left(\arctan(x+1) - \arctan(x-1)\right) = \dfrac{2}{x^{2}}, \quad x \ne 0$, so $S(x) = \dfrac{1}{2\pi}\arctan\dfrac{2}{x^{2}}$ for $x \ne 0$, and $S(0) = \dfrac{1}{2\pi}\left[\arctan 1 - \arctan(-1)\right] = \dfrac{1}{4} = \lim_{x\to 0}\dfrac{1}{2\pi}\arctan\dfrac{2}{x^{2}}$.
Hence
(34) $S(x) = \begin{cases}\dfrac{1}{2\pi}\arctan\dfrac{2}{x^{2}}, & x \ne 0, \\ \dfrac{1}{4}, & x = 0.\end{cases}$
Obviously, for some positive constant $C$,
(35) $|S(x)| \le C(1 + |x|)^{-2}, \quad x \in \mathbb{R}$.
Thus $\delta = 1$, which clearly satisfies all inequalities in [3, Theorem 2.1] and in Theorems 3 and 4.
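The closed form (34) and the decay bound (35) for this example can be verified numerically; the constant $C = 2$ below is one admissible choice.

```python
import numpy as np

def sigma(x):
    return np.arctan(x) / np.pi                 # the sigmoidal function (31)

def S(x):                                       # kernel (6) for this sigma
    return 0.5 * (sigma(x + 1) - sigma(x - 1))

def S_closed(x):                                # closed form (34)
    x = np.asarray(x, dtype=float)
    safe = np.where(x == 0.0, 1.0, x)           # placeholder avoids 0-division
    return np.where(x == 0.0, 0.25, np.arctan(2.0 / safe ** 2) / (2 * np.pi))

xs = np.linspace(-10.0, 10.0, 2001)
assert np.allclose(S(xs), S_closed(xs), atol=1e-10)

# decay bound (35): |S(x)| <= C (1 + |x|)^(-2) with C = 2, hence delta = 1
C = 2.0
assert np.all(np.abs(S(xs)) <= C * (1.0 + np.abs(xs)) ** (-2))
print("example of Remark 5 verified")
```

For large $|x|$, $\arctan(2/x^2) \approx 2/x^2$, so $S(x) \approx 1/(\pi x^2)$, which is what makes the quadratic decay bound hold.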
Remark 6.
The method used in this paper to obtain approximation errors for the neural network with two weights differs from that used for BP neural networks in [3]. We prove and apply the inequality of Lemma 2, together with other inequality techniques different from those in [3], to obtain sharper approximation error estimates than those for BP neural networks.
Remark 7.
Theorem 4 tells us, when parameters
z
i
,
w
i
take some values and
p
satisfies two inequalities conditions, the neural network with two weights is of better approximation ability than BP neural networks constructed in [3].