In a previous work (S. Fiori, 2006), we proposed a random number generator based on a tunable nonlinear neural system, whose learning rule is designed on the basis of a cardinal equation from statistics and whose implementation is based on look-up tables (LUTs). The aim of the present manuscript is to improve the above-mentioned random number generation method by changing the learning principle while retaining the efficient LUT-based implementation. The new method proposed here proves easier to implement and relaxes some limitations of its predecessor.

Random numbers are currently used for a variety of purposes, such as
cryptographic key generation, games, some classes of scientific experiments,
as well as “Monte Carlo” methods in physics and computer science
[

The principal methods known in the literature for obtaining
a batch of samples endowed with an arbitrary distribution from a batch
of samples having a simple distribution are the “transformation method” and the
“rejection method” [
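
As a concrete reminder of the two classical techniques, the following sketch draws exponential samples by the transformation (inverse-CDF) method and samples from a polynomial density by the rejection method; the target densities and the bound M are illustrative choices, not taken from the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transformation (inverse-CDF) method: map uniform samples through the
# inverse cumulative distribution function of the target density.
# Illustrative target: exponential with rate lam (closed-form inverse CDF).
lam = 2.0
u = rng.uniform(size=100_000)
x_transform = -np.log(1.0 - u) / lam      # F^{-1}(u) for the exponential law

# Rejection method: draw candidates y from an easy proposal q and accept y
# with probability p(y) / (M * q(y)), where p <= M * q on the support.
# Illustrative target: p(y) = (3/2) y^2 on [-1, 1]; proposal q uniform there,
# q(y) = 1/2, so p/q = 3 y^2 <= 3 = M.
M = 3.0
y = rng.uniform(-1.0, 1.0, size=100_000)
accept = rng.uniform(size=y.size) < (1.5 * y**2) / (M * 0.5)
x_reject = y[accept]
```

The acceptance probability of the rejection step equals 1/M on average, which is the usual efficiency price paid for its generality.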

A well-known effect of nonlinear neural systems is that they
warp the statistical distribution of their input. In particular, we assume that
the system under consideration has a nonlinear adaptive structure described by
the transference
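
The warping effect can be checked numerically: pushing Gaussian samples through a fixed monotone nonlinearity (here tanh, chosen purely for illustration) reshapes the density according to the change-of-variables formula p_y(y) = p_x(g^{-1}(y)) / g'(g^{-1}(y)) for an increasing g.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(500_000)

# A monotone squashing nonlinearity warps the input density.
y = np.tanh(x)

# Empirical check at y0 = 0.5: compare a histogram-style estimate of p_y
# with the change-of-variables prediction; g'(x) = 1 - tanh(x)^2.
y0 = 0.5
x0 = np.arctanh(y0)
p_x = np.exp(-0.5 * x0**2) / np.sqrt(2.0 * np.pi)
p_y_theory = p_x / (1.0 - y0**2)
h = 0.01
p_y_empirical = np.mean(np.abs(y - y0) < h) / (2.0 * h)
```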

Neural system, neural dual system, input/output sample spaces and their statistical distributions.

In the recent contribution [

The resulting random-number generation method should
thus be read as a two-stage procedure. The first stage consists
in solving the cardinal differential equation
(

However, we recognized that the method presented in [

In the present paper, we consider the problem of extending the
previous method to the generation of asymmetric distributions by
removing the constraint of symmetry or skewness to the right.
Also, we propose a way to avoid explicit handling of the probability
density function. The solution of choice implies a change in the viewpoint
on the cardinal equation (

The effectiveness of the proposed approach is evaluated through numerical experiments. The experiments were designed to follow a logical succession, beginning with a basic assessment of the proposed method applied to a bi-Gaussian distribution, followed by comparably more difficult targets, namely a generalized Gaussian distribution and an asymmetric Gamma distribution.

The existing method presented in [

Most methods of random vector generation known
from the literature impose constraints on the size of the random
vectors, and many of them are applicable only to bivariate
distributions whose components are equidistributed.
Conversely, within the NORTA framework, the marginal probability distributions
of the vector components, as well as their correlation matrix, may be
specified. Obtaining the prescribed correlation matrix of the generated
random vectors requires solving an involved nonlinear system
of equations, which is the most serious problem with this kind
of approach. Paper [
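
A minimal sketch of the NORTA idea, under illustrative choices (exponential marginals, normal-level correlation 0.7): correlated standard normals are mapped to uniforms through the normal CDF and then through the inverse CDFs of the desired marginals. The normal-level correlation is not adjusted here, so the output correlation only approximates a prescribed one; the exact adjustment is the nonlinear system mentioned above.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(2)

# Step 1: correlated standard normal pairs with normal-level correlation rho.
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=100_000)

# Step 2: componentwise normal CDF gives uniform marginals (Gaussian copula).
Phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / np.sqrt(2.0))))
u = Phi(z)

# Step 3: inverse CDFs of the desired marginals (both exponential here,
# chosen because their inverse CDFs are available in closed form).
x1 = -2.0 * np.log1p(-u[:, 0])   # exponential marginal, mean 2
x2 = -0.5 * np.log1p(-u[:, 1])   # exponential marginal, mean 0.5
```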

The present section formalizes the learning problem at hand and illustrates a fixed-point-based numerical algorithm to solve the dual cardinal equation.

The key point of the new method consists in learning the inverse function

We denote by

In general, a closed-form solution to
(

After learning an inverse function

From an implementation viewpoint, the algorithm
(

We choose to represent function

In order to translate the learning rule
(

In order to describe the numerical learning algorithm,
the following operators are defined for a generic look-up table

Behavior of the “cumsum” operator for look-up tables.
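
A possible realization of such a cumulative-sum operator, under the assumption (illustrative, not the paper's exact convention) that a look-up table stores function samples on a uniform grid: the operator turns a tabulated density into a tabulated cumulative distribution by a running rectangle-rule quadrature.

```python
import numpy as np

# "cumsum"-style LUT operator: running rectangle-rule integral of the
# tabulated values, scaled by the grid spacing dx.
def lut_cumsum(values, dx):
    return np.cumsum(values) * dx

# Example: tabulate a standard Gaussian density on [-5, 5] and integrate it;
# the result approximates the Gaussian CDF on the same grid.
N, a, b = 1000, -5.0, 5.0
x = np.linspace(a, b, N)
dx = x[1] - x[0]
p = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
F = lut_cumsum(p, dx)
```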

In terms of look-up-tables entries, the learning relaxation index

When a suitable dual neural system described by the transference

In the following experiments, we consider generating univariate random
samples with a prescribed density function within a prescribed range of interest,
supposing that a prototype Gaussian random number generator is available.
The prototype Gaussian distribution has zero mean and unitary variance.
The parameter
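
Under the stated setup (standard Gaussian prototype, prescribed target density on a prescribed range), the overall generator can be sketched as the composition F_target^{-1}(Φ(z)), with both CDFs held as look-up tables and evaluated by interpolation. This is a hedged illustration of a LUT-based scheme: the function names, grid size, and the Laplacian example target are choices made here, not taken from the paper.

```python
import numpy as np
from math import erf

# Build a LUT-based generator for a target density on [lo, hi]:
# tabulate the target CDF by rectangle-rule quadrature, then invert it
# by linear interpolation.
def make_lut_generator(target_pdf, lo, hi, n=1000):
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    F = np.cumsum(target_pdf(x)) * dx
    F /= F[-1]                          # normalise the tabulated target CDF
    def generate(z):                    # z: prototype Gaussian samples
        u = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))   # Phi(z)
        return np.interp(u, F, x)       # inverse-CDF lookup on the grid
    return generate

# Example target: Laplacian density, truncated to [-8, 8].
gen = make_lut_generator(lambda t: 0.5 * np.exp(-np.abs(t)), -8.0, 8.0)
rng = np.random.default_rng(5)
y = gen(rng.standard_normal(100_000))
```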

The first case of generation of a random variable concerns a
“bi-Gaussian” distribution defined by
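
Since the definition is not reproduced here, the sketch below uses an illustrative stand-in for a bi-Gaussian target: an equal-weight mixture of two Gaussians centred at ±m with common standard deviation s, sampled directly by component selection. The parameter values are not those of the paper's experiments; such direct samples serve as a reference against which a learned generator's output can be checked.

```python
import numpy as np

rng = np.random.default_rng(3)

# Equal-weight two-component Gaussian mixture: pick a sign at random, then
# add Gaussian noise around the chosen centre.
m, s, n = 2.0, 0.5, 100_000
signs = rng.choice([-1.0, 1.0], size=n)
samples = signs * m + s * rng.standard_normal(n)
```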

The numerical results presented below pertain to values

Result of dual neural system adaptation with Gaussian input and bi-Gaussian output.

Cumulative results on repeated independent trials are
illustrated. The number of iterations of the algorithm
(

The average number of generated samples varies between
about 68250 and 68290. The obtained results are summarized in
Tables

Average results for the experiment on bi-Gaussian
random number generation; averages computed over 100
independent trials when the algorithm
(

POINTS | 200 | 400 | 600 | 800 | 1000 |
---|---|---|---|---|---|
AVG. LEARN. TIME | 0.0092 | 0.0090 | 0.0109 | 0.0117 | 0.0133 |
AVG. GEN. TIME | 0.0517 | 0.0514 | 0.0509 | 0.0508 | 0.0509 |
AVG. DSC | 0.0026 | 0.0018 | 0.0011 | 0.0009 | 0.0008 |

Average results for the experiment on bi-Gaussian
random number generation; averages computed over 100
independent trials when the
algorithm (

POINTS | 1200 | 1400 | 1600 | 1800 | 2000 |
---|---|---|---|---|---|
AVG. LEARN. TIME | 0.0145 | 0.0155 | 0.0176 | 0.0197 | 0.0209 |
AVG. GEN. TIME | 0.0527 | 0.0528 | 0.0527 | 0.0511 | 0.0523 |
AVG. DSC | 0.0007 | 0.0006 | 0.0005 | 0.0006 | 0.0005 |

The second example of random sample generation concerns a
generalized Gaussian distribution
[
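
For reference, the generalized Gaussian family has density proportional to exp(-|x/α|^β), with normalizing constant β / (2α Γ(1/β)); β = 2 recovers the Gaussian and β = 1 the Laplacian. The parameter values in the sketch below are illustrative, not those of the experiment.

```python
import numpy as np
from math import gamma

# Generalized Gaussian density with scale alpha and shape beta.
def gen_gaussian_pdf(x, alpha=1.0, beta=1.5):
    c = beta / (2.0 * alpha * gamma(1.0 / beta))
    return c * np.exp(-np.abs(x / alpha) ** beta)

# Numerical check that the density integrates to one on a wide grid.
x = np.linspace(-10.0, 10.0, 200_001)
p = gen_gaussian_pdf(x)
total = p.sum() * (x[1] - x[0])
```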

The numerical results presented below pertain to
values

Result of dual neural system adaptation with Gaussian input and generalized Gaussian output.

Cumulative results are illustrated as well. The number
of iterations of the algorithm
(

Average results for the experiment on generalized
Gaussian random number generation; averages computed
over 100 independent trials when the algorithm
(

POINTS | 200 | 400 | 600 | 800 | 1000 |
---|---|---|---|---|---|
AVG. LEARN. TIME | 0.0129 | 0.0159 | 0.0229 | 0.0281 | 0.0320 |
AVG. GEN. TIME | 0.0511 | 0.0517 | 0.0500 | 0.0502 | 0.0513 |
AVG. DSC | 0.0038 | 0.0018 | 0.0011 | 0.0007 | 0.0005 |

The third example is repeated from
[
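
For reference, the Gamma density with shape a and scale θ is x^{a-1} e^{-x/θ} / (Γ(a) θ^a) for x > 0; it is skewed to the right, which makes it a natural test for asymmetric-target generation. The sketch below tabulates this density and draws reference samples from NumPy's own Gamma generator; the parameter values are illustrative, not those of the experiment.

```python
import numpy as np
from math import gamma as gamma_fn

# Gamma density with shape a and scale theta (zero for x <= 0).
def gamma_pdf(x, a=2.0, theta=1.5):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = x[pos] ** (a - 1) * np.exp(-x[pos] / theta) / (gamma_fn(a) * theta**a)
    return out

# Reference samples from NumPy's built-in Gamma generator.
rng = np.random.default_rng(4)
samples = rng.gamma(shape=2.0, scale=1.5, size=200_000)
```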

The numerical results presented below pertain to
values

Result of dual neural system adaptation with Gaussian input and Gamma output.

Cumulative results were obtained by setting the number
of iterations of the algorithm
(

Average results for the experiment on Gamma random
number generation; averages computed over 100 independent
trials when the algorithm
(

POINTS N | 1000 | 1200 | 1400 | 1600 | 1800 |
---|---|---|---|---|---|
AVG. LEARN. TIME | 0.0165 | 0.0176 | 0.0220 | 0.0242 | 0.0261 |
AVG. GEN. TIME | 0.0516 | 0.0516 | 0.0504 | 0.0514 | 0.0513 |
AVG. DSC | 0.0137 | 0.0118 | 0.0118 | 0.0109 | 0.0101 |

The aim of the present manuscript was to present a novel
random number generation technique based on dual neural
system learning. We elaborated on our recent work
[

The presented numerical results confirmed the agreement between the desired and obtained distributions of the generated variate. The analysis of the computational burden, in terms of running times, shows that the proposed algorithm is not computationally demanding.