We propose a biologically motivated, brain-inspired single neuron perceptron (SNP) with universal approximation and XOR computation properties. This computational model extends the input pattern and is based on excitatory and inhibitory learning rules inspired by the neural connections of the human brain's nervous system. The resulting SNP architecture can be trained by supervised excitatory and inhibitory online learning rules. The main features of the proposed single layer perceptron are its universal approximation property and low computational complexity. The method is tested on 6 UCI (University of California, Irvine) pattern recognition and classification datasets. Various comparisons with the multilayer perceptron (MLP) trained by the gradient descent backpropagation (GDBP) learning algorithm indicate the superiority of the approach in terms of higher accuracy, lower time and space complexity, and faster training. Hence, we believe the proposed approach is generally applicable to various problems such as pattern recognition and classification.
In various computer applications such as pattern recognition, classification, and prediction, a learning module can be implemented by various approaches, including statistical, structural, and neural ones. Among these methods, artificial neural networks (ANNs) are inspired by the physiological workings of the brain. They are based on a mathematical model of a single neural cell (neuron), named the single neuron perceptron (SNP), and try to resemble the actual networks of neurons in the brain. As a computational model, the SNP has particular characteristics such as the ability to learn and generalize. Although the multilayer perceptron (MLP) can approximate any function [
In contrast to the MLP, SNPs and FLNs do not impose high computational complexity and do not suffer from the curse of dimensionality. However, because they lack the universal approximation property, SNPs and FLNs are not very popular in applications. In contrast to previous knowledge about the SNP, this paper proposes a novel SNP model that can solve the XOR problem, and we show that it is a universal approximator. The proposed SNP can solve the XOR problem only if an additional nonlinear operator is used. As illustrated in the next section, the SNP universal approximation property can simply be achieved by extending the input patterns and using the nonlinear max operator. Like functional link networks (FLNs) [
The paper is organized as follows. The proposed SNP and its universal approximation theorem are presented in Section
Figure
Proposed SNP.
Actually,
So, the new input pattern has
It should be added that the max operation, applied to the input pattern and also used in the learning phase, is motivated by computational models of the limbic system in the brain [
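As a concrete illustration, the input-pattern extension can be sketched as below. This is a minimal sketch under an assumption: that the extension simply appends max(x1, ..., xn) to the original n-dimensional pattern as one extra, nonlinear input. The function name `extend_pattern` is ours, not the paper's.

```python
import numpy as np

def extend_pattern(x):
    """Extend an input pattern with the max operator.

    Illustrative sketch (assumed form): the n-dimensional pattern x is
    augmented with max(x1, ..., xn), giving the single neuron one extra
    nonlinear input.
    """
    x = np.asarray(x, dtype=float)
    return np.append(x, x.max())

# A 2-dimensional pattern becomes 3-dimensional:
print(extend_pattern([0.2, 0.7]))  # -> [0.2 0.7 0.7]
```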
In summary, the feedforward computation and backward learning algorithm of the proposed SNP, in an online form and with
[Equations (1)–(8): feedforward computation and online excitatory/inhibitory learning rules of the proposed SNP.]
In the algorithm,
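Because the algorithm's exact update equations are elided above, the overall flow can only be illustrated with a hedged sketch: a feedforward pass on the max-extended pattern, followed by a simple perceptron-style online update in which a positive error strengthens the weights (excitatory) and a negative error weakens them (inhibitory). The function names and the specific update rule are illustrative assumptions, not the paper's exact rules (1)–(8).

```python
import numpy as np

def hardlim(v):
    """Hard-limit activation: 1 if v >= 0, else 0."""
    return 1.0 if v >= 0 else 0.0

def snp_forward(w, b, x):
    """Feedforward pass of the SNP on a max-extended pattern."""
    x_ext = np.append(x, np.max(x))           # extend pattern with max
    return hardlim(np.dot(w, x_ext) + b), x_ext

def snp_train_online(X, T, eta=0.05, epochs=100, seed=0):
    """Illustrative online training loop (perceptron-style update);
    NOT the paper's exact excitatory/inhibitory rules."""
    rng = np.random.default_rng(seed)
    n = X.shape[1] + 1                        # +1 for the max input
    w, b = rng.normal(scale=0.1, size=n), 0.0
    for _ in range(epochs):
        for x, t in zip(X, T):
            y, x_ext = snp_forward(w, b, x)
            e = t - y                         # positive error: excitatory,
            w += eta * e * x_ext              # negative error: inhibitory
            b += eta * e
    return w, b
```

Because the max-extended XOR patterns become linearly separable, this online loop converges on XOR, which a plain perceptron on the raw inputs cannot do.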
The proposed SNP solves the XOR problem. Consider a 2-1 architecture with the hardlim activation function and the following weights:
Since
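Since the specific weight values are elided above, the following sketch uses one hypothetical set of weights that can be checked by enumeration to reproduce XOR with a single hardlim neuron on the max-extended pattern. The choices w1 = w2 = -1 for the two original inputs, w3 = 2 for the max input, and bias b = -1 are our illustrative values, not necessarily the paper's.

```python
import numpy as np

def hardlim(v):
    """Hard-limit activation: 1 if v >= 0, else 0."""
    return 1 if v >= 0 else 0

# Hypothetical weights (the paper's own values are not shown here):
w, b = np.array([-1.0, -1.0, 2.0]), -1.0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x_ext = np.array([x1, x2, max(x1, x2)])   # extend with max
    y = hardlim(np.dot(w, x_ext) + b)
    print((x1, x2), "->", y)                  # prints 0, 1, 1, 0: XOR
```

The key point is that max(x1, x2) separates the (1, 1) corner: it contributes +2 on every pattern with at least one active input, but the two -1 input weights cancel it only when both inputs are active.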
In the next section, we prove that the SNP is a universal approximator and can approximate any real continuous function.
Let us drop the activation function from the model and rewrite (
Consider
Next, we prove that
Thus,
Finally, we prove that
Since, for all
Therefore, the SNP is a universal approximator independently of the activation function.
One parameter related to the computational complexity of a learning method is the number of learning weights in each epoch. A lower number of learning weights implies a lower number of computations and hence lower computational complexity. To evaluate the number of learning weights of the proposed SNP with respect to the MLP, we propose a measure named the reducing ratio of the number of weights (Rw) as follows:
The Rw is a measure that can be used to compare the computational complexity of the proposed SNP and the MLP. A higher Rw indicates that the SNP has a lower number of learning weights, and thus a lower number of computations and lower computational complexity. Additionally, in classification problems, accuracy is a suitable performance measure for evaluating the algorithms. This measure is generally expressed as follows:
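The exact formulas are elided above. The following sketch assumes the natural form of Rw that is consistent with the values reported in the datasets table (e.g., the Diabetes row: 10 SNP weights versus 21 MLP weights gives Rw of 52%), with the reported percentages apparently truncated to whole percent rather than rounded. Both function names are ours.

```python
def reducing_ratio(w_snp, w_mlp):
    """Rw (assumed form): percentage reduction of the number of
    learning weights of the SNP relative to the MLP."""
    return 100.0 * (w_mlp - w_snp) / w_mlp

def accuracy(n_correct, n_total):
    """Classification accuracy as a percentage."""
    return 100.0 * n_correct / n_total

# Checked against the table rows (truncated to whole percent):
print(int(reducing_ratio(10, 21)))   # Diabetes: 52
print(int(reducing_ratio(15, 31)))   # Heart: 51
print(int(reducing_ratio(62, 125)))  # Sonar: 50
```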
For all learning scenarios listed below, the training set contained 70% of the data, the testing set contained 15%, and the remainder was used for the validation set. Input patterns have been normalized between
Before entering the comparative numerical studies, let us analyze the computational complexity. Regarding the proposed learning algorithm, the algorithm adjusts
To test and assess the SNP in classification, 6 binary classification datasets have been downloaded from the UCI (University of California, Irvine) Data Center. In all datasets, the target labeling was binary. Table
Datasets and related learning information.
ID  Name        Instances  Classes  Attributes  Learning rate  SNP architecture  SNP weights  MLP architecture  MLP weights  Rw
1   Diabetes    768        2        8           0.050          9-1               10           8-2-1             21           52%
2   Heart       270        2        13          0.050          14-1              15           13-2-1            31           51%
3   Pima        768        2        8           0.005          9-1               10           8-2-1             21           52%
4   Ionosphere  351        2        34          0.050          35-1              36           34-2-1            73           50%
5   Sonar       208        2        60          0.0005         61-1              62           60-2-1            125          50%
6   Tictac      958        2        9           0.0005         10-1              11           9-2-1             23           52%
In the proposed SNP algorithm, we consider
Comparisons between SNP and MLP.
Although, according to Figure
Comparison of the number of learning epochs.
Model       Proposed SNP  GDBP MLP
Diabetes
Heart
Pima
Ionosphere
Sonar
Tictac
Average     3392          9760
In this paper, we prove that a single neuron perceptron (SNP) can solve the XOR problem and can be a universal approximator. These features are achieved by extending the input pattern and by using the max operator. The SNP with this extension ability is a novel computational model of a neural cell that is trained by excitatory and inhibitory rules. This new SNP architecture works with a smaller number of learning weights. Specifically, it only generates
The authors declare that there is no conflict of interests regarding the publication of this paper.