
A class of Sobolev-type multivariate functions is approximated by a feedforward network with one hidden layer of sigmoidal units and a linear output. By adopting a set of orthogonal polynomial bases and under certain assumptions on the activation functions of the neural network, an upper bound on the degree of approximation is obtained for this class of Sobolev functions. The results are helpful for understanding the approximation capability and topology construction of sigmoidal neural networks.

Artificial neural networks have been extensively applied in various fields of science and engineering. This is mainly because feedforward neural networks (FNNs) have the universal approximation capability [

Universal approximation capabilities for a broad range of neural network topologies have been established by researchers such as Cybenko [

For any approximation problem, the establishment of performance bounds is an inevitable but very difficult issue. As is well known, feedforward neural networks (FNNs) are capable of approximating general classes of functions, including continuous and integrable ones. Recently, several researchers have derived approximation error bounds for various functional classes (see, e.g., [

In this paper, using Chebyshev orthogonal series from approximation theory and moduli of continuity, we obtain upper bounds on the degree of approximation in
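As a concrete numerical illustration of approximation in the Chebyshev basis, the sketch below fits Chebyshev expansions of increasing degree to a smooth function and measures the uniform error. The test function and degrees are illustrative assumptions, not taken from the paper; for a smooth (analytic) function the error decays rapidly as the degree grows, which is the qualitative behavior the degree-of-approximation bounds quantify.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Smooth test function on [-1, 1] (an illustrative assumption)
f = lambda x: np.exp(x) * np.sin(3 * x)

# Chebyshev nodes give a well-conditioned least-squares fit
x = np.cos(np.pi * (np.arange(200) + 0.5) / 200)

errors = []
for deg in (4, 8, 16):
    coeffs = C.chebfit(x, f(x), deg)          # fit in the Chebyshev basis
    xx = np.linspace(-1, 1, 1000)
    err = np.max(np.abs(f(xx) - C.chebval(xx, coeffs)))
    errors.append(err)

# errors decreases rapidly with the degree for smooth f
```

For analytic functions the uniform error decays geometrically in the degree, which is far faster than the algebraic rates that hold for the Sobolev-type classes studied here.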

Before presenting the main results, we first review some basic facts about Chebyshev polynomials from approximation theory. For convenience, we introduce a weighted norm of a function
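With the Chebyshev weight (the standard choice for this basis; the paper's exact weight may differ), the weighted $L_p$ norm typically takes the form:

```latex
\|f\|_{p,w} \;=\; \left( \int_{-1}^{1} |f(x)|^{p}\, w(x)\, dx \right)^{1/p},
\qquad
w(x) \;=\; \frac{1}{\sqrt{1 - x^{2}}}, \quad 1 \le p < \infty .
```

This weight makes the Chebyshev polynomials of the first kind an orthogonal system on $[-1,1]$.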

For function

As is well known, the Chebyshev polynomial of a single real variable plays an important role in approximation theory. Using the above notation, we introduce the multivariate Chebyshev polynomials:
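For reference, the standard univariate Chebyshev polynomials of the first kind and their tensor-product multivariate extension (a common construction, stated here under the assumption that the paper uses the tensor-product basis) are:

```latex
% Univariate Chebyshev polynomial of the first kind, degree n
T_n(x) = \cos\bigl(n \arccos x\bigr), \qquad x \in [-1, 1],

% equivalently via the three-term recurrence
T_0(x) = 1, \quad T_1(x) = x, \quad T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x).

% Tensor-product multivariate Chebyshev polynomial on [-1, 1]^d
T_{\mathbf{n}}(\mathbf{x}) \;=\; \prod_{i=1}^{d} T_{n_i}(x_i),
\qquad \mathbf{n} = (n_1, \dots, n_d).
```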

For

For one-dimension degree of approximation of a function
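In the standard notation of approximation theory (an assumption about the paper's conventions), the degree of best approximation of $f$ by polynomials of degree at most $n$ is

```latex
E_n(f)_{p} \;=\; \inf_{P \in \Pi_n} \| f - P \|_{p,w},
```

where $\Pi_n$ denotes the set of algebraic polynomials of degree at most $n$.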

Let

Furthermore, we can simplify

A basic result concerning de la Vallée Poussin operators
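The de la Vallée Poussin operator is commonly built from the partial sums $S_k$ of the Chebyshev expansion (one standard form; the paper's exact normalization may differ):

```latex
V_n(f) \;=\; \frac{1}{n+1} \sum_{k=n}^{2n} S_k(f).
```

Its key properties are that $V_n(P) = P$ for every $P \in \Pi_n$ and $\| f - V_n(f) \| \le C\, E_n(f)$, so $V_n$ realizes near-best polynomial approximation with a linear operator.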

Now we consider a class of multivariate polynomials defined as follows:

Hence, we have the following theorem.

For

We consider the Chebyshev orthogonal polynomials

This theorem reveals two things: (i) for any multivariate functions

We consider the approximation of functions by feedforward neural networks with ridge activation functions. We define the approximating function class composed of single-hidden-layer feedforward neural networks with
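A single-hidden-layer network with $n$ sigmoidal ridge units and a linear output (the standard model class; the weight notation here is generic) computes

```latex
N_n(\mathbf{x}) \;=\; \sum_{i=1}^{n} c_i \,
\sigma\bigl(\langle \mathbf{a}_i, \mathbf{x} \rangle + b_i\bigr),
\qquad
\mathbf{a}_i \in \mathbb{R}^{d}, \;\; b_i, c_i \in \mathbb{R},
```

where each unit $\sigma(\langle \mathbf{a}_i, \mathbf{x} \rangle + b_i)$ is a ridge function: it varies only along the direction $\mathbf{a}_i$.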

There is a constant

For each finite

We define the distance from
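In the usual sense (notation assumed), the distance from a function $f$ to the network class $\mathcal{N}_n$ is

```latex
\operatorname{dist}(f, \mathcal{N}_n)_{p} \;=\; \inf_{N \in \mathcal{N}_n} \| f - N \|_{p,w},
```

and the approximation bounds of this paper are upper estimates on this quantity in terms of the number of hidden units.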

Let conditions (1) and (2) hold for the activation function

First, we consider the partial derivative

For any fixed

From the definition of

Using Theorems

For

This theorem reveals two things: (i) for any multivariate functions

In this work, the approximation order of feedforward neural networks of the form (

This research is supported by the Natural Science Foundation of China (no. 11001227), the Natural Science Foundation Project of CQ CSTC (no. CSTC, 2009BB2306), and the Fundamental Research Funds for the Central Universities (no. XDJK2010B005).
