RANDOM FIXED POINTS AND RANDOM DIFFERENTIAL INCLUSIONS

In this paper we first study random best approximations to random sets, using fixed point techniques, obtaining in this way stochastic analogues of earlier deterministic results by Browder-Petryshyn, Ky Fan and Reich. Then we prove two fixed point theorems for random multifunctions with stochastic domain that satisfy certain tangential conditions. Finally, we consider a random differential inclusion with an upper semicontinuous orientor field and establish the existence of random solutions.

Recently the interest in these problems was revived by the survey article of Bharucha-Reid [3].
Since then, there has been a lot of activity in this area and several interesting results have appeared.
In this paper, we will study random fixed points in connection with random approximations and will derive stochastic analogues of some results by Browder-Petryshyn [4], Ky Fan [5] and Reich [6]. We also extend a random fixed point theorem proved by Engl [7], and finally we prove the existence of a solution for a random differential inclusion with an upper semicontinuous orientor field, extending in this way an earlier result of the author [8] (Theorem 5.1).
For the corresponding deterministic theory, we refer to the recent books of Goebel-Reich [9] for fixed points (in connection with the study of the geometry of the underlying space) and of Aubin-Cellina [10] for differential inclusions. Another nice work, bringing together the two main mathematical branches considered in this note, namely fixed point theory and differential equations, is the paper of Reich [11], where an interesting approach to fixed point theory is presented through the existence theory of abstract differential equations. We will start with a random version of Proposition 2.3 of Reich [6], which in turn was an extension of an earlier very interesting result of Ky Fan [5] (Theorem 2).
In this section (Ω, Σ, μ) is a complete σ-finite measure space.
Also recall that a map f: X → X is nonexpansive if ‖f(x) − f(y)‖ ≤ ‖x − y‖ for all x, y ∈ X. It is well known (see for example Goebel-Reich [9]) that the metric projection on a closed, convex set in a Hilbert space is nonexpansive. That is why in Theorems 3.1, 3.2, 3.3 and 3.4, which follow and involve the metric projection (either in their statement or in their proof), we assume that the ambient space is a Hilbert space.
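As a finite-dimensional illustration of this nonexpansiveness property (a sketch only: ℝⁿ with the Euclidean norm playing the role of the Hilbert space, and the closed unit ball as the closed convex set; the function name and constants are ours, not the paper's), one can verify numerically that ‖Px − Py‖ ≤ ‖x − y‖ for the metric projection P:

```python
import numpy as np

def project_unit_ball(x):
    """Metric projection of x onto the closed Euclidean unit ball:
    the identity inside the ball, radial rescaling outside."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

# Empirical check of nonexpansiveness on random pairs of points.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = 3.0 * rng.normal(size=5)
    y = 3.0 * rng.normal(size=5)
    px, py = project_unit_ball(x), project_unit_ball(y)
    # nonexpansiveness: ||Px - Py|| <= ||x - y||
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
```

This is of course only a sanity check in ℝ⁵, not a proof; the general Hilbert-space statement is the one cited from Goebel-Reich [9].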
Since the best approximation is unique, by hypothesis x(ω) = f(ω, x(ω)).
Otherwise we must have that x(ω) …, ω ∈ Ω.
REMARK. In the previous theorem, we can instead assume that for all ω ∈ Ω, f(ω, ·) is condensing on K(ω).
Then in the proof we have to use Theorem 3.2. Now we pass to multifunctions and prove the following random fixed point theorem.

REMARK.
If there is no ω-dependence of the data in the previous theorem (deterministic case), then we can relax the hypotheses on F(·) and simply assume that F(·) is closed, γ-condensing and with bounded range. Also in the deterministic case, the theorem can be proved for general Banach spaces, if we assume that K is approximatively w-compact and F(·) is w-u.s.c., with w-compact range.
The proof is analogous to the random case; in the general Banach space setting, we have to use Proposition 2.1 of Reich [6], which tells us that the metric projection on K is w-u.s.c.

(*)  ẋ(ω, t) ∈ F(ω, t, x(ω, t)) a.e., for all ω ∈ Ω.

By a random solution of (*), we understand a stochastic process x: Ω × T → X, with absolutely continuous realizations, satisfying (*) a.e. in t, for all ω ∈ Ω.
Furthermore, a simple application of the Arzelà-Ascoli theorem tells us that for every ω ∈ Ω, W(ω) is a compact subset of C(T, X).
Clearly this is a random solution of (*) with orientor field F̂. But from the definition of F̂, we see that |F̂(ω, t, x)| ≤ a(ω, t) + b(ω, t)‖x‖ a.e., for all ω ∈ Ω, and as in the beginning of the proof, through Gronwall's inequality, we get that ‖x(ω, t)‖ ≤ M(ω). Hence F̂(ω, t, x(ω, t)) = F(ω, t, x(ω, t)), and so x(·, ·) is the desired random solution of (*).
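The Gronwall estimate invoked above can be illustrated in discrete form (a sketch with constants a, b, step size h chosen by us purely for illustration, with the ω-dependence frozen): if u_k ≤ a + b·h·Σ_{j<k} u_j, then u_k ≤ a·e^{b·t_k}. Taking the extremal sequence, where the inequality is an equality at every step, we verify the exponential bound numerically:

```python
import numpy as np

# Discrete Gronwall lemma: if u_k <= a + b*h*sum_{j<k} u_j,
# then u_k <= a*exp(b*k*h).  We build the extremal (equality)
# sequence, which satisfies u_k = a*(1 + b*h)^k, and check the bound.
a, b, h, N = 1.0, 2.0, 1e-3, 1000   # illustrative constants, not from the paper
u = np.empty(N)
s = 0.0                             # running sum h-weighted "integral"
for k in range(N):
    u[k] = a + b * h * s            # equality case of the integral inequality
    s += u[k]

t = h * np.arange(N)
# Gronwall bound: u(t) <= a * exp(b*t), since (1 + b*h)^k <= exp(b*h*k)
assert np.all(u <= a * np.exp(b * t) + 1e-9)
```

The bound holds because 1 + x ≤ eˣ, so the worst-case sequence a(1 + bh)^k stays below a·e^{bt}; in the proof this is exactly what yields the a priori bound ‖x(ω, t)‖ ≤ M(ω).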
ACKNOWLEDGEMENT. The author would like to express his gratitude to the referee for his helpful suggestions and remarks.
