MONOTONE METHOD FOR FIRST ORDER SINGULAR SYSTEMS WITH BOUNDARY CONDITIONS

Recently the result has been extended [6] to singular systems with initial conditions since singular systems do occur in many physical applications. In this paper, we extend this result to singular systems with boundary conditions. This is achieved by developing the necessary comparison result. The crucial part is the consistency condition. An example is given to illustrate that such a condition is attainable.


INTRODUCTION
It is well known [4] that by combining the method of upper and lower solutions with monotone iterative techniques, one can prove the existence of extremal solutions of nonlinear problems in a closed set, namely, the sector defined by means of the upper and lower solutions.
Recently the result has been extended [6] to singular systems with initial conditions since singular systems do occur in many physical applications. In this paper, we extend this result to singular systems with boundary conditions. This is achieved by developing the necessary comparison result. The crucial part is the consistency condition. An example is given to illustrate that such a condition is attainable.

PRELIMINARY RESULTS
Consider the boundary value problem (1.1), where A is a singular n × n matrix, E and F are real n × n nonsingular matrices, and f ∈ C[J × R^n, R^n], J = [0, T]. In this paper we combine the method of upper and lower solutions with the monotone iterative technique to prove the existence of extremal solutions of (1.1). For this purpose we need the existence and uniqueness of the solution of the corresponding linear boundary value problem of the form (1.2), where M is an n × n matrix and g is n-times differentiable on J. The results relative to (1.2) are well known [2, 3].
Here, we recall the results without proof. Here and throughout the paper, we assume that A is a singular matrix and that there exists a λ ∈ R for which (λA + M)^{-1} exists. Also, B^D shall denote the Drazin inverse of the matrix B, and B̂ denotes the quantity (λA + M)^{-1}B (Â, M̂, and ĝ will have the same relationship to A, M, and g).
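Since the displayed system is not reproduced here, the role of the hatted quantities can be sketched numerically. The snippet below is a made-up 2 × 2 illustration (the matrices A and M and the value of λ are not from the paper): it checks that λA + M can be nonsingular even though A is singular, and verifies the identities λÂ + M̂ = I and ÂM̂ = M̂Â on which the solution formula for (1.2) rests.

```python
import numpy as np

# Made-up 2x2 illustration (A, M, and lam are NOT from the paper):
# A is singular, yet lam*A + M is nonsingular, so the "hatted"
# quantities A_hat = (lam*A + M)^{-1} A and M_hat = (lam*A + M)^{-1} M
# are well defined.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # singular matrix
M = np.eye(2)                       # a convenient nonsingular M
lam = 1.0

LM = lam * A + M
assert abs(np.linalg.det(A)) < 1e-12    # A is singular
assert abs(np.linalg.det(LM)) > 1e-12   # lam*A + M is nonsingular

A_hat = np.linalg.solve(LM, A)
M_hat = np.linalg.solve(LM, M)

# Identities behind the hatted system: lam*A_hat + M_hat = I, and
# A_hat commutes with M_hat (since M_hat = I - lam*A_hat).
assert np.allclose(lam * A_hat + M_hat, np.eye(2))
assert np.allclose(A_hat @ M_hat, M_hat @ A_hat)
```

The commutativity of Â and M̂ is automatic because M̂ = I − λÂ, which is why the Drazin-inverse solution formula can be written in terms of the hatted quantities alone.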
Note 1.1: If the index of A is one, the third term of (1.4) simplifies to (I − ÂÂ^#)ĝ(t). In this case, differentiability of g is enough to show that (1.4) is in fact the solution of (1.2). Also, if the index of A is one and, further, (λA + M)^{-1} exists, assumption (a) of Theorem 1.2 is easily satisfied.
Next, we prove a comparison result which is needed in our main result.
For convenience we list the following assumptions:
(A1) There exist v_0(t), w_0(t) ∈ C^1[J, R^n] with v_0(t) ≤ w_0(t) for t ∈ J such that (1.5) holds; that is, v_0 and w_0 are lower and upper solutions of the boundary value problem (1.1).
(A3) Let A and M be n × n matrices such that (λA + M)^{-1} exists and is nonnegative for some λ ∈ R. Also, let there exist nonnegative matrices such that C is a diagonal square matrix with C_ii > λ if λ > 0, and C_ii > 0 if λ < 0.
Note that for any square matrix A there exist nonsingular matrices P and C and a nilpotent matrix N such that A = P diag(C, N) P^{-1}, where index of A = index of N. Thus N = 0 if and only if index of A = 1. This in turn implies that the index of Â is 1. In such a case the Drazin inverse of Â is called the group inverse and is denoted by Â^#. See [5] for details.
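The core-nilpotent decomposition above can be illustrated with a small numerical sketch (the blocks C and P below are made-up examples, not from the paper): for an index-one matrix A = P diag(C, 0) P^{-1}, the group inverse is A^# = P diag(C^{-1}, 0) P^{-1}, and it satisfies the defining identities of the Drazin inverse at index one.

```python
import numpy as np

# Made-up example of the core-nilpotent decomposition for an index-one
# matrix: A = P diag(C, N) P^{-1} with C nonsingular and N = 0, so the
# Drazin inverse reduces to the group inverse A# = P diag(C^{-1}, 0) P^{-1}.
C = np.array([[2.0]])               # nonsingular core block (illustrative)
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # any nonsingular similarity transform
Z = np.zeros((1, 1))

A = P @ np.block([[C, Z], [Z, Z]]) @ np.linalg.inv(P)
A_sharp = P @ np.block([[np.linalg.inv(C), Z], [Z, Z]]) @ np.linalg.inv(P)

# Defining identities of the group inverse:
assert np.allclose(A @ A_sharp @ A, A)              # A A# A = A
assert np.allclose(A_sharp @ A @ A_sharp, A_sharp)  # A# A A# = A#
assert np.allclose(A @ A_sharp, A_sharp @ A)        # A A# = A# A
```

The last assertion is what distinguishes the group inverse: the commutativity A A^# = A^# A holds exactly when the index of A is at most one.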
Proof: Set p = Pz. Then the inequality Ap' + Mp ≤ 0 reduces to APz' + MPz ≤ 0.

MAIN RESULT
Here we develop the monotone iterative technique for the singular boundary value problem (1.1), which yields monotone sequences that converge to the extremal solutions of (1.1) relative to the sector [v_0, w_0].
Proof: Consider the linear boundary value problem (2.2), where η belongs to the sector [v_0, w_0] = {u ∈ C^1[J, R^n] : v_0 ≤ u ≤ w_0}. Now (2.2) can be rewritten as (2.3). Choosing x(0) = μ(0), x(T) = μ(T), the boundary value problem (2.3) satisfies the consistency condition (2.1) and has a unique solution, since the associated homogeneous problem has only the zero solution; this is easy to see using (A3). Define a mapping R by Rη = x, where x is the unique solution of (2.3); this mapping defines the sequences {v_n} and {w_n}. First we prove that (a) v_0 ≤ Rv_0 and w_0 ≥ Rw_0, and (b) R is monotone nondecreasing on the sector [v_0, w_0]. To prove (a), set Rv_0 = v_1, where v_1 is the unique solution of (2.3) with η = v_0. Setting p = v_0 − v_1, one can see that p satisfies the hypotheses of Theorem 1.3, with p(0) = v_0(0) − v_1(0) ≤ 0 and p(T) ≤ 0, so that p(t) ≤ 0; that is, v_0(t) ≤ v_1(t) on J. Similarly, one can prove that w_0 ≥ Rw_0 on J.
To prove that ρ and r are the minimal and maximal solutions of (1.1), it is enough to show that if x(t) is any solution of (1.1) such that v_0 ≤ x ≤ w_0 on J, then v_0 ≤ ρ ≤ x ≤ r ≤ w_0 on J.
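The shape of the iteration can be seen in a toy scalar analogue (an illustration only, not the paper's singular system; the choices f(u) = 1 − u², the sector [0, 2], the periodic boundary condition, and λ = 4 are all made up, with λ picked so that f(u) + λu is nondecreasing on the sector). Each iterate solves the linear problem v' + λv = f(η) + λη, v(0) = v(T); for a constant η the periodic solution is the constant (f(η) + λη)/λ, so the scheme collapses to a scalar map:

```python
# Toy scalar analogue of the monotone iteration (illustration only; the
# data f(u) = 1 - u^2, sector [0, 2], and lam = 4 are made up, not the
# paper's system).  f(u) + lam*u is nondecreasing on [0, 2] since lam >= 2u.
def f(u):
    return 1.0 - u * u

lam = 4.0

def step(eta):
    # One iterate: solve v' + lam*v = f(eta) + lam*eta, v(0) = v(T).
    # For a constant eta the periodic solution is the constant below.
    return (f(eta) + lam * eta) / lam

v, w = 0.0, 2.0                      # lower and upper solutions
for _ in range(100):
    v_next, w_next = step(v), step(w)
    assert v <= v_next <= w_next <= w    # monotone, ordered sequences
    v, w = v_next, w_next

# Both sequences converge to the solution u = 1 of u' = 1 - u^2, u(0) = u(T)
assert abs(v - 1.0) < 1e-6 and abs(w - 1.0) < 1e-6
```

The assertions mirror the sandwich v_0 ≤ v_n ≤ u ≤ w_n ≤ w_0 established in the proof: the lower sequence increases, the upper sequence decreases, and both converge to the (here unique) solution in the sector.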
Below is an example to illustrate that the condition (2.1) of Theorem 2.1 is attainable. For this f it is easy to see that M can be chosen so that assumption (A2) is satisfied, and the corresponding linear problem then has the form (1.2).

Remark 2.1: In the case where a = 0 = b and T = 1, the solution of the linear problem (1.2) can be written in the form (see [2] for details)

x(t) = ∫_0^1 G(t, s) ĝ(s) ds + ∫_0^1 H(t, s) ĝ(s) ds,    (2.4)

where the matrix G is given by
