On the Solutions of Okubo-Type Systems

We show that for certain systems of Okubo-type, we can find a solution vector, all components of which are expressed in terms of the first one. This first component can be expressed in two ways. It solves a Volterra integral equation whose kernel is expressed in terms of the solutions of a reduced Okubo-type system of smaller dimension. It can also be expressed as a power series about the origin whose coefficients satisfy a certain recurrence relation. This extends the results in (W. Balser, C. Röscheisen, J. Differential Equations, 2009).


Introduction
Linear systems play an important role in modern mathematics, especially in the theory of special functions. In this paper, we are concerned with a special class of linear systems, the so-called Okubo-type systems.
Linear systems of Okubo-type have been studied extensively in the literature [1][2][3][4][5][6][7][8][9][10][11][12]. In [1], the authors study system [13] under the assumption that A_0 has all distinct eigenvalues and introduce one scalar function that allows representing all solutions of the system. This function is believed to be a new higher transcendental function, and it satisfies a Volterra integral equation. The coefficients of its power series about the origin can be given explicitly in terms of a matrix version of the Pochhammer symbol.
In this paper, we study more general systems than [13] and show that, under certain assumptions, a similar function can be introduced. However, in this case, the coefficients of its power series about the origin are determined by a certain recurrence relation. In particular, we study systems of differential equations which can be transformed to a system of the form [1], where A_0 = diag(0, λ_1, λ_2, ..., λ_m) is a diagonal matrix; moreover, we assume that λ_j ≠ 0 for every 1 ≤ j ≤ m. Here, n = m + 1. In other words, λ = 0 is an eigenvalue of A_0 of multiplicity one, and λ_1, λ_2, ..., λ_m are distinct eigenvalues of A_0. Let A_1(t) = (a_ij(t)), 1 ≤ i, j ≤ n, and assume that a_11(t) ≡ 0. For the sake of simplicity, we write the matrices involved in [1] in block form, where the off-diagonal blocks are vectors of polynomials of degree at most one. It is shown in [13] that every linear (scalar) differential equation with a finite number of distinct regular singularities and one irregular singularity can be reduced to a system of the form [1] with λ_i ≠ λ_j. For instance, let us consider system [1] with arbitrary a_12 ≠ 0 and parameters q, α, c, δ, ε. This system is equivalent to a confluent Heun equation for the first component of the vector y. Here, z = 0, 1 are both regular singularities, whilst z = ∞ is an irregular singularity. In the case c = 1, we get the required form of the matrix A_1(t). See also [14][15][16][17] for other examples of different Okubo-type systems.
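To make the normal form above concrete, the following sketch assembles the right-hand side of a system written (as is standard for Okubo-type systems) as (tI − A_0) y'(t) = A_1(t) y(t), with A_0 = diag(0, λ_1, ..., λ_m) and A_1(t) of degree at most one in t. The function names and the numerical set-up are our own illustration, not taken from the paper.

```python
import numpy as np

def okubo_system(lams, A1_const, A1_lin):
    """Build y'(t) for an Okubo-type system (t*I - A0) y'(t) = A1(t) y(t),
    where A0 = diag(0, lam_1, ..., lam_m) with nonzero, distinct lam_j,
    and A1(t) = A1_const + t*A1_lin has entries of degree at most one.
    Valid away from the singular points t in {0, lam_1, ..., lam_m}."""
    A0 = np.diag([0.0] + list(lams))
    n = A0.shape[0]

    def rhs(t, y):
        A1 = A1_const + t * A1_lin
        # Solve (t*I - A0) y' = A1(t) y for y'
        return np.linalg.solve(t * np.eye(n) - A0, A1 @ y)

    return rhs
```

Note that the constraint a_11(t) ≡ 0 from the text is a condition on `A1_const` and `A1_lin`, not enforced by the sketch.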
In this paper, we search for a solution y = y(t) = (y_1(t), ..., y_n(t))^T of [1] which is holomorphic at the origin, and we are interested in y_1(t). We first study the reduced Okubo-type system and find its fundamental solution (see Theorem 1). Then, we define a function of two variables which will be the kernel of the Volterra integral equation. Next, it will be shown that y_1(t) solves this integral equation. We also define the other components of y in terms of the fundamental solution of the reduced system and y_1(t). We give a recursion formula for the coefficients of the Taylor expansion of y at t = 0 (see Theorem 2). The next section is devoted to some generalizations of the problem under study. Namely, we deal with the case when the matrix A_0 in [1] is not diagonalizable, which is motivated by examples that appear in concrete applications (see, for instance, [17]). Finally, we give some further discussion of the problem and its applications.

Main Results
Let us first consider the reduced system [15]. We search for a power series expansion of its fundamental solution of the form [16], where the coefficients (h_k)_{k ≥ 0} satisfy a recurrence relation. Indeed, taking [15] into account, we obtain the recurrence [4] for every k ≥ 2.

Theorem 1. The series [16], with coefficients determined by [4] for k ≥ 2, is convergent in some neighborhood of the origin.
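The exact recurrence [4] is not reproduced in this extract, but the mechanism can be sketched under an assumed shape of the reduced system, (tI − Λ)Y'(t) = (B_0 + tB_1)Y(t) with Y(0) = I: matching powers of t gives Λ(k+1)h_{k+1} = (kI − B_0)h_k − B_1 h_{k−1}. The code below implements this assumed recurrence; the paper's actual [4] may differ in form.

```python
import numpy as np

def series_coefficients(Lam, B0, B1, K):
    """Taylor coefficients h_0, ..., h_K of Y(t) = sum_k h_k t^k solving the
    (assumed) reduced system (t*I - Lam) Y'(t) = (B0 + t*B1) Y(t), Y(0) = I.
    Matching powers of t yields
        Lam * (k+1) * h_{k+1} = (k*I - B0) @ h_k - B1 @ h_{k-1},
    which requires Lam invertible, i.e. all lambda_j != 0."""
    m = Lam.shape[0]
    h = [np.eye(m)]
    # k = 0 step (h_{-1} = 0): Lam * h_1 = -B0 * h_0
    h.append(np.linalg.solve(Lam, -B0 @ h[0]))
    for k in range(1, K):
        rhs = (k * np.eye(m) - B0) @ h[k] - B1 @ h[k - 1]
        h.append(np.linalg.solve(Lam, rhs) / (k + 1))
    return h
```

In the scalar case Λ = (1), B_0 = (1), B_1 = 0, the system (t − 1)y' = y has the solution y = 1 − t with y(0) = 1, and the recurrence reproduces its coefficients 1, −1, 0, 0, ....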
Proof. Let ‖ · ‖ be the matrix norm ‖(a_ij)‖ = sup_i Σ_j |a_ij|. We fix C = ‖h_0‖ = 1 and A > 0 satisfying [5]. Let us show, by induction on k ≥ 0, that [17] holds. Here, (x)_k stands for the Pochhammer symbol. Concerning k = 2, we apply the hypothesis in [5] to arrive at ‖h_2‖ ≤ CA^2(‖B‖)_2. We assume that [17] holds up to some index k ≥ 2. From [4] we obtain the bound for ‖h_{k+1}‖. In view of [17], we derive that the series [16] has a positive radius of convergence: it is majorized by a series converging for A|t| < 1, by the ratio test. □

Remark 1. A different approach to the previous result is to note that Y(t) solves the reduced system [15], and t = 0 is not a singularity of that system since λ_j ≠ 0 for all 1 ≤ j ≤ m. The Cauchy theorem then guarantees that the series [16] converges in a neighborhood of t = 0 extending to the nearest singularity of the system, at distance min_{1≤j≤m} |λ_j|.
Now, let us consider the system of first-order linear differential equations [1], with A_0 and A_1 as in [2], where Λ = diag(λ_1, ..., λ_m), with distinct λ_j and λ_j ≠ 0 for all 1 ≤ j ≤ m. We write y(t) = (f(t), g(t))^T, where f is a scalar function and g = (g_1, ..., g_m)^T is a vector function. Such y is a solution of [1] if and only if [8] and [9] hold. In view of [9], g is a solution of the nonhomogeneous system of differential equations [10]. Let Y = Y(t) be a fundamental solution of the homogeneous system, defined by the series [16]. By the method of variation of constants, a particular solution of [10] with g(0) = 0 is given by [11]. Plugging this expression into [8], one arrives at an equation for f alone. The solution of that equation under the initial data f(0) = 1 is determined by [12]. Now, we define the kernel k(t, u) for (t, u) ∈ G∖{(0, 0)}, with G being some neighborhood of the origin in C^2.
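The variation-of-constants formula for the particular solution with g(0) = 0 can be checked numerically. The sketch below approximates g(t) = Y(t) ∫_0^t Y(u)^{-1} c(u) f(u) du with a composite trapezoidal rule; here `Y`, `c`, and `f` are caller-supplied stand-ins for the objects appearing in [10] and [11], not the paper's concrete data.

```python
import numpy as np

def particular_solution(Y, c, f, t, n=400):
    """Variation-of-constants particular solution with g(0) = 0:
        g(t) = Y(t) @ integral_0^t Y(u)^{-1} c(u) f(u) du,
    approximated by the composite trapezoidal rule on n points.
    Y(u): fundamental matrix, c(u): forcing vector, f(u): scalar."""
    us = np.linspace(0.0, t, n)
    vals = np.array([np.linalg.solve(Y(u), c(u)) * f(u) for u in us])
    h = us[1] - us[0]
    # trapezoidal rule: full weights inside, half weights at the endpoints
    integral = h * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))
    return Y(t) @ integral
```

As a sanity check, for the scalar equation g' = g + 1 with g(0) = 0 (fundamental solution Y(u) = e^u, forcing c ≡ 1, f ≡ 1), the formula yields g(t) = e^t − 1.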
An analogous result to Lemma 2 of [1] can be stated in order to prove that k(t, u) is, in general, a multivalued function, which is singular as u or t tends to 0. It is straightforward to check that if |arg(t/u)| < π and t, u are close to 0, then the stated identity holds when the integration is performed along a straight line segment. This follows from termwise integration, which is valid since Y(v) is analytic on the path of integration.

Theorem 2. There exists a unique function f which is holomorphic in G and satisfies the integral equation (22).
Moreover, the power series expansions at 0 of f and g, defined by [12], are given by (23).

Proof.
The proof of existence and analyticity of f is similar to that of Theorem 4 in [1]. In order to prove that f is a solution of (22), we substitute the expression for k(t, u) into (22), where we have made use of the Fubini theorem. The recurrence for the coefficients appears when plugging the series (23) into [8, 9]. □

Remark 2. Similarly to [1], we can show that there exist functions r_kj(t) and vectors a_k(t) satisfying the stated relations, which easily follow from differentiating [8] and using [9].
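Theorem 2 characterizes y_1 = f through a Volterra integral equation. For a concrete feel of how such equations behave numerically, here is a standard forward-marching trapezoidal scheme for an equation of the (assumed) shape f(t) = 1 + ∫_0^t k(t, u) f(u) du. The paper's actual kernel from [12] is not reproduced here; any smooth kernel can be passed in.

```python
import numpy as np

def solve_volterra(kernel, ts, f0=1.0):
    """March the Volterra equation f(t) = f0 + int_0^t k(t, u) f(u) du
    forward in t on a uniform grid ts, using the trapezoidal rule.
    At each step f[i] appears on both sides, so it is solved for directly."""
    n = len(ts)
    h = ts[1] - ts[0]
    f = np.empty(n)
    f[0] = f0
    for i in range(1, n):
        s = 0.5 * h * kernel(ts[i], ts[0]) * f[0]
        for j in range(1, i):
            s += h * kernel(ts[i], ts[j]) * f[j]
        f[i] = (f0 + s) / (1.0 - 0.5 * h * kernel(ts[i], ts[i]))
    return f
```

With the trivial kernel k ≡ 1, the equation reduces to f' = f with f(0) = 1, so the scheme should reproduce f(t) = e^t up to quadrature error.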

Generalizations
In this section, we enlarge the family of systems under consideration for which the previous results can be straightforwardly generalised (the expansions of the solutions, the recurrences for the coefficients, and the initial data may change) and give some details on the derivations involved. It is clear from the previous section that similar reasoning applies when the eigenvalues of A_0 are not necessarily distinct. Now, let us assume that the linear system of Okubo-type [1] is such that the elements of A_1(t) are of the same nature as above, but the submatrix Λ is a block matrix with blocks B_ij ∈ C^{m_i × m_j}, where Σ_{i=1}^r m_i = m, and B_ij is the null matrix if i ≠ j. For all 1 ≤ i ≤ r, we assume B_ii is a Jordan block with eigenvalue λ_i ≠ 0. The main results of the paper described in Section 2 hold under minor modifications, namely in the idea of the proof of convergence of the formal solution of [1] at t = 0 and in the existence of a recurrence formula for the coefficients of the Taylor expansion of the solution of such a system at the origin.
Given an index 1 ≤ i ≤ r, the reduced system can be decomposed into blocks, where y_i(t) stands for a vector of m_i undetermined functions. More precisely, one may write y(t) = (y_0(t), ..., y_r(t)), where y_j is the vector of length m_j which corresponds to the components of y with indices related to the Jordan block B_jj. Observe that y_0 = y_1 is a scalar indeterminate function. The matrix A_ii(t) ∈ (C_1[t])^{m_i × m_i} consists of the elements of A(t) with the same indices as those corresponding to B_ii in Λ. The matrix H(t) ∈ (C_1[t])^{m_i × (n − m_i)} is multiplied by ŷ_i(t) = (y_0(t), ..., ŷ_i, ..., y_r(t)), the (n − m_i)-dimensional vector obtained by eliminating the components of y_i from y(t). Clearly, with this notation, the first component of the solution vector is given in terms of ŷ_0(t), the vector of m components obtained by eliminating the first component of y(t). The main difference with respect to the diagonalizable framework is that the recurrence for the coefficients of the Taylor expansion of y_i is obtained backwards with respect to the indices involved in the block. More precisely, let 1 ≤ i ≤ r, and assume that y_i = (y_i1, ..., y_im_i).
The element y_im_i is given in terms of known elements in the recursion formula (coming from known coefficients of the Taylor expansion); the element y_i(m_i − 1) depends also on y_im_i, and so on. The coefficients of the Taylor series expansion of Y(t), the solution of the reduced system, also satisfy a more complicated recurrence relation. Observe that Λ and Λ − tI are invertible matrices. Similarly to Section 2, let us write Y(t) in the form (33), with coefficients determined recursively for every k ≥ 1. The power series (33) is convergent in a neighborhood of the origin. This follows from reasoning analogous to that of Theorem 1.
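The backward order of the recursion within a Jordan block mirrors back-substitution against an upper-triangular Jordan block: the last component is found first, and each earlier component uses the one after it. A minimal sketch, assuming the convention of ones on the superdiagonal (the paper's convention for the blocks B_ii is not shown in this extract):

```python
import numpy as np

def jordan_block(lam, m):
    """m x m Jordan block with eigenvalue lam and ones on the superdiagonal."""
    return lam * np.eye(m) + np.diag(np.ones(m - 1), 1)

def solve_jordan(lam, m, b):
    """Solve J x = b for a single Jordan block by back-substitution:
    x_{m-1} is determined first, then each x_i uses x_{i+1}, mirroring
    the backward recursion for the coefficients within a block."""
    x = np.empty(m)
    x[m - 1] = b[m - 1] / lam          # last component: no coupling
    for i in range(m - 2, -1, -1):
        x[i] = (b[i] - x[i + 1]) / lam  # earlier components use the next one
    return x
```

This requires λ ≠ 0, which matches the standing assumption λ_i ≠ 0 on the eigenvalues of the blocks.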

Discussion and Conclusions
Linear systems play an important role in the theory of special functions. In some problems, the study of the scalar equation might give more information than the study of the system itself. In this paper, we exploit the second approach with linear systems and discuss how one can obtain the solution vector. We show that the whole solution vector can be expressed in terms of only the first component. We also show some connections of the problem to integral equations. The method presented in this paper can possibly be extended to some classes of more general systems with polynomial (resp. rational) entries in A(t) (see, for instance, formula (3.2) in [1], where A(t) is rational). It is also an open problem to consider the case in which 0 is an eigenvalue of multiplicity larger than one and the matrix A_0 is not necessarily diagonalizable. It may also happen that 0 is the only eigenvalue of the matrix A_0. This problem is motivated by examples appearing in [17] and the following observation that if, for instance,