We consider the dynamics of a stochastic cobweb model with linear demand and a backward-bending supply curve. In our model, both forward-looking and backward-looking expectations are assumed: the representative agent chooses the backward predictor with probability

The cobweb model is a dynamical system that describes price fluctuations as the result of the interaction between a demand function, which depends on the current price, and a supply function, which depends on the expected price.

A classic definition of the cobweb model is the one
given by Ezekiel [

Research into the cobweb model has a long history, but previous papers have studied deterministic cobweb models. The dynamics of the cobweb model with a stochastic mechanism have not yet been studied.
In this paper, we consider a stochastic nonlinear cobweb
model that generalizes the model of Jensen and Urban [

backward predictor: the expectation of the future price is the weighted mean of past observations, with decreasing weights given by a (normalized) geometric progression of parameter

forward predictor: the formation mechanism of this expectation takes into account the market equilibrium price and assumes that, in the long run, the current price will converge to it.
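Since the explicit formula is elided in this copy, the forward predictor can be sketched as a convex combination of the current price and the equilibrium price. The names `mu` and `p_star` below are illustrative assumptions, not the paper's notation:

```python
# Hedged sketch of a forward-looking predictor: the expected price is
# pulled toward the (known) market equilibrium price p_star, reflecting
# the assumption that, in the long run, the current price converges to it.
# The weight `mu` and all names here are illustrative assumptions.
def forward_predictor(p_current, p_star, mu=0.5):
    """Convex combination of current price and equilibrium price."""
    assert 0.0 <= mu <= 1.0
    return (1.0 - mu) * p_current + mu * p_star
```

With `mu = 1` the predictor jumps straight to the equilibrium price; with `mu = 0` it degenerates to a naive (static) expectation.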

At each time, the representative entrepreneur chooses
the backward predictor with probability

In recent years, several models in which markets are
populated by heterogeneous agents have been proposed as an alternative to the
traditional approach in economics and finance, based on a representative (and
rational) agent. Kirman [

The present work represents a contribution to this
line of research: as in Brock and Hommes [

Moreover, even though our assumption is equivalent to considering (on average) fixed proportions of agents at each time, the fraction of agents employing trade rules based on past prices increases as

In the model herewith proposed, the time evolution of
the expected price is described by a stochastic dynamical system. (Recent works
in this direction are those by Evans and Honkapohja [

We note that the successful development of ad hoc stochastic cobweb models to describe the time evolution of commodity prices will make it possible to use these models to describe fluctuations in the prices of derivatives having the commodities as underlying assets. The stochastic cobweb model presented here can be considered a first step in the study of a more general class of models.

The paper is organized as follows. In Section

We consider a cobweb-type model with linear demand and
a backward-bending supply curve (i.e., a concave parabola).
(This formulation for the supply function was
proposed in Jensen and Urban [

According to the previous considerations, demand and
supply are given by

The market clearing equation

The map
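The explicit demand, supply, and market-clearing expressions are elided in this copy. A minimal sketch, under assumed parameter names `a`, `b`, `c`, `d` (chosen here so that the dynamics stay bounded; they are not the paper's values), of linear demand, a backward-bending (concave parabolic) supply, and the resulting market-clearing price:

```python
# Hedged sketch of the model's building blocks: linear demand in the
# current price and a concave parabola in the expected price
# (backward-bending supply), as described in the text. All parameter
# names and values are illustrative assumptions.
def demand(p, a=3.0, b=2.0):
    """Linear demand: decreasing in the current price p."""
    return a - b * p

def supply(p_exp, c=2.0, d=0.2):
    """Backward-bending supply: a concave parabola in the expected price."""
    return p_exp * (c - d * p_exp)

def clearing_price(p_exp, a=3.0, b=2.0, c=2.0, d=0.2):
    """Current price from market clearing: demand(p) == supply(p_exp)."""
    return (a - supply(p_exp, c, d)) / b
```

The map of the model sends the expected price into the next realized price through `clearing_price`; by construction, demand at the clearing price equals supply at the expected price.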

We use the backward-looking component to extract the expected future market price from the prices observed in the past through an infinite-memory process (this iterative scheme is known as Mann iteration; see Mann [
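The geometric-weight mean described above, and its Mann-style recursive update, can be sketched as follows. The ratio `rho` and the step `alpha` are assumed symbols, not the paper's notation:

```python
# Hedged sketch of the backward predictor: the expected price is the
# weighted mean of past prices (most recent first), with weights forming
# a normalized geometric progression of ratio rho. The same expectation
# can be maintained recursively (Mann-iteration form).
def backward_predictor(past_prices, rho):
    """Normalized geometric-weight mean of past prices, newest first."""
    weights = [rho ** k for k in range(len(past_prices))]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, past_prices)) / total

def backward_update(p_exp, p_new, alpha):
    """Mann-style recursive update toward the newest observed price."""
    return (1.0 - alpha) * p_exp + alpha * p_new
```

If all past prices coincide, both forms return that common price, as a sanity check on the normalization.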

We consider a forward-looking expectation creation
mechanism. In order to introduce a more sophisticated predictor, we assume that
the representative supplier of the goods knows the market equilibrium price,
namely

Our assumption is consistent with the conclusion
reached in many dynamical cobweb models (see, e.g., Hommes [

We assume that the representative agent chooses
between the two predictors (the backward and the forward predictors) as
follows:
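The random choice between the two predictors can be sketched as a single Bernoulli draw per time step. The probability symbol `q` and both arguments are illustrative assumptions:

```python
import random

# Hedged sketch of the predictor choice described above: at each time
# step, the representative agent uses the backward predictor with
# probability q and the forward predictor otherwise. The name q and the
# injectable rng are illustrative assumptions.
def choose_expectation(p_exp_backward, p_exp_forward, q, rng=random.random):
    """Return the backward expectation with probability q, else the forward one."""
    return p_exp_backward if rng() < q else p_exp_forward
```

Passing a deterministic `rng` (e.g. `lambda: 0.5`) makes the choice reproducible, which is convenient for testing.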

In this paper, we want to study the stochastic process

We consider the discrete time stochastic process

Let

According to the Markov property, sometimes called the memoryless property, the state of the system at the future time

Finally we associate each value of the price with a
state

Note that the corresponding process may be treated as
a discrete-time Markov chain, whose state space is

It is easy to see that the transition probabilities

We are interested in the probability that

Let the probability vector be denoted by
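The evolution of the probability vector of a discrete-time chain is the repeated product of a row vector with the transition matrix. A minimal sketch with an illustrative 2-state matrix (the matrix below is an assumption, not the paper's), showing convergence to an absorbing state:

```python
# Hedged sketch: one step of a discrete-time Markov chain,
# pi_{t+1} = pi_t P, with pi a row probability vector.
def step_distribution(pi, P):
    """Row vector times transition matrix."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Illustrative example: state 0 is absorbing, state 1 leaks into it.
P = [[1.0, 0.0],
     [0.3, 0.7]]
pi = [0.0, 1.0]
for _ in range(200):
    pi = step_distribution(pi, P)
# pi is now (numerically) concentrated on the absorbing state 0.
```

The example illustrates the role of an absorbing state discussed below: whatever the initial distribution, probability mass eventually accumulates there.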

Recall that a subset

We now assume

For all

Consider

We now fix the value of the parameter

In a first experiment, we assume

As previously proved

Assume

Assume

Let

First consider that with
probability

Second, with probability

Hence

As proved in Proposition

Let us recall some mathematical results concerning the
stability of Markov chains (for further details
see, among others,
Feller [

In our case, for

The asymptotic behaviour of the probability
distribution changes if we consider a different value of

Rearranging the order of the states in

First of all we consider the case

Probability
distributions after

In Figure

Probability distributions after

Moreover, considering simulations for different values
of

The diagram of Figure

Asymptotic
states versus

The situation is quite different for greater values of

Probability
distributions after

This study allows us to conclude that, in the case
with naive versus forward-looking expectations, there exists a unique
asymptotic distribution whose behaviour becomes more complicated as

Obviously, the quantitative results are not independent of the number of states considered; in any case, it is possible to verify that the qualitative results (i.e., the increase in complexity as
In this section we move on to the continuous-time Markov process: we calculate the probability distribution by solving the appropriate system of ordinary differential equations, and we compare it with the probability distribution obtained by statistical simulation of the appropriate continuous-time limit of the stochastic process defined by (

In particular, we start looking at the transition
matrix

Assuming

It is well known that the solution of (
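The solution referred to here is the matrix exponential: for forward (Kolmogorov) equations dπ/dt = πQ, one has π(t) = π(0)·exp(Qt). A self-contained sketch, computing exp(Qt) by a truncated Taylor series for an illustrative 2-state generator (the matrix and rates are assumptions, not the paper's):

```python
# Hedged sketch: solve d(pi)/dt = pi Q via pi(t) = pi(0) exp(Q t).
# exp(Q t) is computed with a truncated Taylor series; fine for small
# matrices and moderate t. Rows of a generator Q sum to zero.
def mat_mul(A, B):
    n, m, k = len(A), len(B[0]), len(B)
    return [[sum(A[i][l] * B[l][j] for l in range(k)) for j in range(m)]
            for i in range(n)]

def mat_exp(Q, t, terms=60):
    """exp(Q t) via the Taylor series sum_k (Q t)^k / k!."""
    n = len(Q)
    Qt = [[Q[i][j] * t for j in range(n)] for i in range(n)]
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, Qt)
        fact *= k
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / fact
    return result

# Illustrative generator: jumps 0 -> 1 at rate 2, 1 -> 0 at rate 1.
Q = [[-2.0, 2.0],
     [1.0, -1.0]]
pi0 = [1.0, 0.0]
Pt = mat_exp(Q, t=5.0)
pi_t = [sum(pi0[i] * Pt[i][j] for i in range(2)) for j in range(2)]
# For large t, pi_t approaches the stationary distribution (1/3, 2/3).
```

For larger state spaces a scaling-and-squaring routine (e.g. `scipy.linalg.expm`) would be preferable to the plain Taylor series used here.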

Since the interval

From (

We present some simulations to compare the numerical
solution obtained from computing (

In Figure

Solution obtained using finite differences and
solution obtained using statistical simulation at time

Figure

Solution obtained using finite differences and
solution obtained using statistical simulation for

We now come back to the initial model with discrete
time, continuous states,

Our simulations have been performed as follows:

we want to depict a trajectory starting at
time zero from an initial condition

we repeat the previous procedure several
times, that is, we extract a large number of vectors

This procedure is consistent with considering a market made up of a large number of agents, and with heterogeneity of beliefs, that is, with the assumption that, on average, agents distribute between the two predictors in the fractions
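The Monte Carlo procedure described above can be sketched as follows. All functional forms and parameters (`a`, `b`, `c`, `d`, `q`, `mu`, `p_star`) are illustrative assumptions chosen so the dynamics stay bounded; `p_star` is set close to the market-clearing fixed point for these parameters:

```python
import random

# Hedged sketch of the simulation procedure: many trajectories are
# generated from the same initial condition p0, and the price after
# n_steps is recorded for each. At every step the agent draws the
# backward (naive) predictor with probability q, else the forward one.
def simulate_price_paths(n_paths=2000, n_steps=50, p0=1.0, q=0.5,
                         a=3.0, b=2.0, c=2.0, d=0.2,
                         mu=0.5, p_star=0.78, seed=0):
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        p = p0
        for _ in range(n_steps):
            # random choice between naive and forward-looking expectations
            p_exp = p if rng.random() < q else (1 - mu) * p + mu * p_star
            # market clearing: linear demand meets parabolic supply
            s = p_exp * (c - d * p_exp)
            p = (a - s) / b
        finals.append(p)
    return finals

finals = simulate_price_paths()
```

The resulting empirical distribution of `finals` approximates the probability distribution of the price at the chosen time, in the spirit of the statistical simulations reported below.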

We first assume

We represent the evolution in time of

(a) Three trajectories produced by the stochastic
process defined by (

Completely different behaviours are observed if we
change the value of the parameter

(a) Three trajectories produced by the stochastic
process defined by (

When

This behaviour is closely related to that exhibited by
the deterministic cobweb model with naive expectations and with the dynamics of
the logistic map. In fact as

Trajectories produced by the stochastic process
defined by (

Similar results have been observed for several other parameter values. All the experiments confirm the well-known result obtained for the deterministic cobweb model with infinite-memory expectation, namely that the presence of resistant memory contributes to stabilizing the price dynamics.

We studied a nonlinear stochastic cobweb model with a linear demand function, a parabolic (backward-bending) supply function, and two price predictors, called backward-looking (based on a weighted mean of past prices) and forward-looking (based on a convex combination of the current and equilibrium prices), respectively. The representative agent chooses between the two expectations; this may be interpreted as a population of economic agents such that, on average, a fraction chooses one kind of expectation while the remaining fraction chooses the other. Since a random choice between the two price expectations is allowed (a possible motivation is the assumption of heterogeneity in beliefs among agents), we have introduced a new element, namely a stochastic term, into the well-known cobweb model. In fact, even though research into the cobweb model has a long history, the existing literature is limited to the deterministic context. As far as we know, the dynamics of a cobweb model with expectations decided on the basis of a stochastic mechanism have not been studied in the literature, so our paper may be seen as a first step in this direction, and our results may trigger further studies in this field.

In order to describe the features of the model, we have concentrated on the case in which the backward predictor is simply a static expectation, so that the stochastic dynamical system is a Markov process, considered both in discrete and continuous time.

By using an appropriate transformation, it has been
possible to study a new discrete time model with discrete states. In this way
we have been able to apply some well-known results regarding Markov chains with
finite states and to prove some general analytical results about the
reducibility of the chain and the existence of an absorbing state for some
values of the parameters. We have also presented numerical simulations
confirming our analytical results. From an empirical point of view, we have
observed that the absorbing state is also an asymptotic state if parameter

In a subsequent step, we moved to the continuous-time Markov process in order to use analytical tools for differential equations, which made it possible to obtain the exact probability distribution at any time

Finally, we came back to the case of backward expectations with memory. Since the model becomes quite difficult to treat analytically, we presented numerical simulations that enabled us to investigate the role of the memory rate in the stochastic cobweb model. We have found that the presence of resistant memory affects the asymptotic probability distribution: it contributes to reducing fluctuations and stabilizing the price dynamics, thus confirming the standard result in the economic literature.

The authors wish to thank all the anonymous referees for their helpful comments and suggestions.