CORRECT SELFADJOINT AND POSITIVE EXTENSIONS OF NONDENSELY DEFINED MINIMAL SYMMETRIC OPERATORS

Let A0 be a closed, minimal symmetric operator from a Hilbert space H into H with domain not dense in H. Let Â also be a correct selfadjoint extension of A0. The purpose of this paper is (1) to characterize, with the help of Â, all the correct selfadjoint extensions B of A0 with domain equal to D(Â), (2) to give the solution of the corresponding problems Bx = f, and (3) to find sufficient conditions for B to be positive (definite) when Â is positive (definite).


Introduction
Minimal symmetric operators arise naturally in boundary value problems, where they represent differential operators with all their defects; that is, their range is not the whole space and their domain need not be dense in the whole space. For example, the operator A0 defined by the problem A0y = iy′, y(0) = y(1) = 0, is a minimal symmetric nondensely defined operator. The problem of finding all correct selfadjoint extensions of a minimal symmetric operator is neither easy nor always solvable. The whole problem is easier when the domain of definition of the minimal symmetric operator is dense. Correct extensions of densely defined, minimal, not necessarily symmetric operators in Hilbert and Banach spaces have been investigated by Vishik [17], Dezin [3], Otelbaev et al. [10], Oȋnarov and Parasidi [14], and many others. Correct selfadjoint extensions of a densely defined minimal symmetric operator A0 have been studied by a number of authors, such as J. von Neumann [13], Kočubeȋ [7], Mikhaȋlets [12], and V. I. Gorbachuk and M. L. Gorbachuk [5]. They described the extensions as restrictions of some operators, usually of the adjoint operator A0* of A0. In this paper, we attack the above problem, developing a method which does not depend on maximal operators, but only on the existence of some correct selfadjoint extension of A0. The essential ingredient in our approach is the extension of the main idea in [14]. More precisely, we show (Theorem 3.2) that every correct selfadjoint extension of a minimal operator is uniquely determined by a vector and a Hermitian matrix (see the comments preceding Theorem 3.2).
In [1, 2, 8, 9], extensions of nondensely defined symmetric operators were studied by embedding H in a space X in which the operator A0 is densely defined. The class of extensions considered there is much wider than ours, but correct selfadjoint extensions are not considered. Our method does not require such an embedding and applies equally well to positive correct selfadjoint extensions. Positive selfadjoint extensions of densely defined positive symmetric operators have been considered by Friedrichs [4].
As a demonstration of the theory developed in this paper, we give here all the correct selfadjoint extensions of the minimal operator A0 in the example mentioned at the beginning of the introduction. These are the operators B : L²(0,1) → L²(0,1), Bu = iu′ − ct∫₀¹ tu(t)dt, with D(B) = {u ∈ H¹(0,1) : u(0) = −u(1)}, where c is any real number.
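As a quick check (a sketch, under the assumptions stated above that c is real and that u, v ∈ H¹(0,1) satisfy u(0) = −u(1), v(0) = −v(1)), integration by parts shows that these operators B are symmetric:
\begin{align*}
(Bu,v)-(u,Bv)
&=i\int_0^1 u'\overline{v}\,dt+i\int_0^1 u\,\overline{v}'\,dt
  -c\Big(\int_0^1 tu\,dt\Big)\Big(\int_0^1 t\overline{v}\,dt\Big)
  +c\Big(\int_0^1 tu\,dt\Big)\Big(\int_0^1 t\overline{v}\,dt\Big)\\
&=i\big[u(1)\overline{v}(1)-u(0)\overline{v}(0)\big]=0 ,
\end{align*}
since u(0)\overline{v}(0) = (−u(1))(−\overline{v}(1)) = u(1)\overline{v}(1).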
The paper is organized as follows. In Section 2, we recall some basic terminology and notation about operators. In Section 3, we prove the main general results. Finally, in Section 4, we discuss several examples of integrodifferential equations which show the usefulness of our results.

Terminology and notation
By H, we will always denote a complex Hilbert space with inner product (•,•). The (linear) operators from H into H that we refer to are not necessarily everywhere defined on H. We write D(A) and R(A) for the domain and the range of the operator A, respectively. Two operators A1 and A2 are said to be equal if D(A1) = D(A2) and A1x = A2x for all x ∈ D(A1). A2 is said to be an extension of A1, or A1 is a restriction of A2, in symbols A1 ⊂ A2, if D(A1) ⊆ D(A2) and A1x = A2x for all x ∈ D(A1). An operator A is called correct if R(A) = H and the inverse A⁻¹ exists and is continuous. An operator A is called a correct extension (resp., restriction) of the minimal (resp., maximal) operator A0 (resp., Â) if it is a correct operator and A0 ⊂ A (resp., A ⊂ Â).
Let A be an operator with domain D(A) dense in H. The adjoint operator A* : H → H of A, with domain D(A*), is defined by the equation (Ax, y) = (x, A*y) for every x ∈ D(A) and every y ∈ D(A*); the domain D(A*) consists of all y ∈ H for which such an element A*y exists. An operator A is called symmetric if (Ax, y) = (x, Ay) for all x, y ∈ D(A), and selfadjoint if A = A*. A symmetric operator A is said to be positive if (Ax, x) ≥ 0 for every x ∈ D(A), and positive definite if there exists a positive real number k such that (Ax, x) ≥ k‖x‖² for all x ∈ D(A).
The defect, def A0, of an operator A0 is the dimension of the orthogonal complement R(A0)⊥ of its range R(A0).
Let F = (F1,...,Fm) be a vector of Hᵐ and AF = (AF1,...,AFm). We write F^t and (Ax, F^t) for the column vectors col(F1,...,Fm) and col((Ax,F1),...,(Ax,Fm)), respectively. We denote by (AF^t, F) the m × m matrix whose (i, j)th entry is the inner product (AFi, Fj), and by M^t the transpose of the matrix M. We denote by I and 0 the identity and the zero matrix, respectively.
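For illustration (an added example of the notation just introduced), with m = 2 one has
\[
F^t=\begin{pmatrix}F_1\\ F_2\end{pmatrix},\qquad
(Ax,F^t)=\begin{pmatrix}(Ax,F_1)\\ (Ax,F_2)\end{pmatrix},\qquad
(AF^t,F)=\begin{pmatrix}(AF_1,F_1)&(AF_1,F_2)\\ (AF_2,F_1)&(AF_2,F_2)\end{pmatrix},
\]
so that, for an m × m matrix C = (cij), the product AFC(Ax, F^t) is the element Σ_{i,j} AFi cij (Ax, Fj) of H.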

Correct selfadjoint extensions of minimal symmetric operators
Throughout this paper, A0 will denote a nondensely defined symmetric minimal operator and Â a correct selfadjoint extension of A0. Let E_cs(A0, Â) denote the set of all correct selfadjoint extensions of A0 with domain D(Â), and let E^m_cs(A0, Â) denote the subset of those B ∈ E_cs(A0, Â) for which dim R(B − Â) = m. We begin with the following key lemma.
Lemma 3.1. For every B ∈ E^m_cs(A0, Â), there exists a vector F = (F1,...,Fm), where F1,...,Fm are linearly independent elements of D(Â) ∩ R(A0)⊥, and a Hermitian invertible matrix T = (tij), i, j = 1,...,m, such that (3.1) holds.
Proof. The main result of [10] implies that there exists a linear continuous operator K : H → H such that B⁻¹ = Â⁻¹ + K. Hence K = K*, since B⁻¹ and Â⁻¹ are selfadjoint operators. Since A0 is a minimal operator, it follows that R(A0) is a closed subspace of H, and so H = R(A0) ⊕ R(A0)⊥. We will show that dim R(K) = m. Indeed, from (3.2) it follows that K = −Â⁻¹(B − Â)B⁻¹, from which, since B⁻¹ maps H onto D(B) = D(Â) and the operator Â is invertible, we have dim R(K) = dim R(B − Â) = m. Therefore, the selfadjointness of K gives the decomposition H = ker K ⊕ R(K). From decompositions (3.3), (3.5), and the inclusion ker K ⊇ R(A0), we conclude that R(K) ⊆ D(Â) ∩ R(A0)⊥. Fix a basis {F1, F2,...,Fm} of R(K). Then, for every f in H, there are scalars αi such that Kf = Σᵢ αi Fi. Let {ψ1, ψ2,...,ψm} be the biorthogonal family of elements of H corresponding to the above basis of R(K), that is, (ψi, Fj) = δij, i, j = 1,...,m. From (3.7), we have (3.8). In particular, for f = ψj, we have (3.9). Replacing the above expression for Kψj in (3.8), we obtain (3.10). If T denotes the matrix ((Kψl, ψi)), l, i = 1,...,m, then (3.10) takes the form (3.11). Now, the reader can easily verify that each of the matrices T and (ÂF^t, F) is Hermitian. We claim that T is invertible. Let K̃ = K|_{R(K)} denote the restriction of K to its range. From (3.5), it follows that ker K ∩ R(K) = {0}; therefore, ker K̃ = {0}. Substituting f = Fj into (3.11), we obtain (3.12). The determinant det(F^t, F) is nonzero, being the determinant of the Gram matrix (F^t, F) of F. Since the vectors F1, F2,...,Fm of R(K) are linearly independent and ker K̃ = {0}, it follows that det T ≠ 0, which proves our claim. We now prove formula (3.1), which describes the action of the operator B on x. From (3.4) and (3.11), we have (3.13). Then, taking the inner product with F^t, we get (3.14). Let W denote the matrix I + (ÂF^t, F)T. We will show that det W ≠ 0. For if det W = 0, then det W^t = 0; hence, there exists a nonzero vector a = col(a1,...,am) such that W^t a = 0. We consider the linear combination f0 = Σᵢ ai ÂFi. Since the vectors F1,...,Fm are linearly independent and ker Â = {0}, their images ÂFi under Â are linearly independent as well. It follows that f0 ≠ 0. Combining (3.4) and (3.11), we get x = Â⁻¹f + FT(f, F^t), where x = B⁻¹f. In particular, for x = B⁻¹f0, we compute that B⁻¹f0 = 0. In the above chain of equalities, the last one follows from the definition of W and the fact that the matrices T and (ÂF^t, F) are Hermitian, together with W^t a = 0. This implies that the nonzero vector f0 is contained in the kernel ker B⁻¹ of B⁻¹, contradicting the correctness of B. So det W ≠ 0. Now (3.14) gives (f, F^t) = W⁻¹(x, ÂF^t), which together with (3.13) implies formula (3.1).
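A sketch of how formula (3.1) is obtained from the relations above (assuming, consistently with the form of the operator in Theorem 3.10 below, that (3.1) expresses Bx through Âx): applying Â to x = Â⁻¹f + FT(f, F^t) with f = Bx, and then eliminating (f, F^t) by means of (3.14), gives
\[
Bx \;=\; \hat{A}x-\hat{A}F\,T\,(Bx,F^t)
      \;=\; \hat{A}x-\hat{A}F\,T\,W^{-1}(\hat{A}x,F^t),
\qquad W=I+(\hat{A}F^t,F)\,T,
\]
where the last step also uses (x, ÂF^t) = (Âx, F^t), valid since Â is selfadjoint and x, Fi ∈ D(Â).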
We now prove our main theorem, which describes the set E^m_cs(A0, Â) of all correct selfadjoint extensions B of an operator A0 with D(B) = D(Â) and dim R(B − Â) = m, using one correct selfadjoint extension Â of a minimal symmetric operator A0 with def A0 ≤ ∞. Every operator B is uniquely determined by a vector F with components F1,...,Fm as in Lemma 3.1 and a Hermitian m × m matrix C satisfying condition (3.16), which is the solvability condition for the problem Bx = f (whose solution is also given in the following result).

Theorem 3.2. Suppose that A0, Â are as in Lemma 3.1. Then the following hold.
(i) For every B ∈ E^m_cs(A0, Â), there exists a vector F = (F1,...,Fm), with F1,...,Fm defined as above, and a Hermitian invertible m × m matrix C satisfying (3.16), such that (3.17) holds.
(ii) Conversely, for every vector F = (F1,...,Fm), with F1,...,Fm defined as above, and every Hermitian m × m matrix C which has rank C = n ≤ m and satisfies (3.16), the operator B defined by (3.17) belongs to E^n_cs(A0, Â). The unique solution of (3.17) is given by the formula (3.18).
Proof. (i) Let B ∈ E^m_cs(A0, Â). Then by Lemma 3.1, there exists a Hermitian, invertible m × m matrix T = (tij) and a vector F = (F1,...,Fm), where F1,...,Fm are linearly independent elements from D(Â) ∩ D(A0)⊥, such that det W ≠ 0 and (3.1) holds true. From (3.1), since B = B*, we obtain (3.20). We denote by C the matrix TW⁻¹. Since B = B*, relations (3.1) and (3.20) imply that C is Hermitian.
(ii) We will show that B ∈ E^n_cs(A0, Â). We first show that B is a correct extension of A0. Taking into account (3.17), we have (3.23). From (3.16), we have (3.24). Since Â is invertible, (3.17) implies that B⁻¹ exists and is continuous on H, and, because of (3.24), Bx = A0x for every x ∈ D(A0). So A0 ⊂ B, and since B⁻¹ exists and is continuous on H, B is a correct extension of A0. From (3.17), because rank C = n and ÂF1,..., ÂFm are linearly independent, it follows that dim R(B − Â) = n.
It remains to show that B = B*. Taking into account (3.17) and the fact that C is Hermitian, this follows by a direct computation.
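To make the roles of F, C, and conditions (3.16)–(3.18) concrete, here is a one-dimensional worked illustration (a sketch, assuming that (3.17) has the form Bx = Âx − ÂFC(Âx, F^t) appearing in Theorem 3.10 and that (3.16) reads det(I − (ÂF^t, F)C) ≠ 0). Take m = 1, F = (F1) with F1 ∈ D(Â), and C = (c) with c real. Then
\[
Bx=\hat{A}x-c\,(\hat{A}x,F_1)\,\hat{A}F_1,\qquad D(B)=D(\hat{A}),
\]
and (3.16) becomes 1 − c(ÂF1, F1) ≠ 0. Given f ∈ H, taking the inner product of Bx = f with F1 yields (Âx, F1)[1 − c(ÂF1, F1)] = (f, F1), so the unique solution is
\[
x=\hat{A}^{-1}f+\frac{c\,(f,F_1)}{1-c\,(\hat{A}F_1,F_1)}\,F_1 .
\]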
In the particular case considered next, the condition (3.16) is fulfilled automatically and the solution of Bx = f is simpler. For every vector F = (F1,...,Fm), where F1,...,Fm are linearly independent elements from D(A0) ∩ R(A0)⊥, and for every Hermitian m × m matrix C with rank C = n ≤ m, the operator B defined by (3.17) belongs to E^n_cs(A0, Â), and the unique solution of (3.17) is given explicitly. Let now the minimal operator A0 have finite defect def A0 = dim R(A0)⊥ = m. Then D(A0) can be defined by (3.32), where F1,...,Fm are linearly independent elements of R(A0)⊥ ∩ D(Â). So if we have chosen the elements F1,...,Fm so that (3.32) holds, then every B from E^m_cs(A0, Â) is defined only by the Hermitian matrix C, and we can restate Theorem 3.2 as follows.
Theorem 3.5. (i) For every B ∈ E^m_cs(A0, Â), where A0 is defined by (3.32), there exists a Hermitian invertible m × m matrix C which satisfies (3.16), such that (3.17) is fulfilled.
(ii) Conversely, for every Hermitian m × m matrix C which satisfies (3.16) and has rank C = n, the operator B defined by (3.17) belongs to E^n_cs(A0, Â). The unique solution of (3.17) is given by (3.18).
Proof. From (3.32), we have R(A0)⊥ = span{F1,...,Fm}, so as a basis of R(K) we can take F1,...,Fm. The rest is proved similarly.
Remark 3.8. Let A0 be defined by (3.32) or (3.33), and let F = (F1,...,Fm), where F1,...,Fm are linearly independent elements of R(A0)⊥ ∩ D(Â). Then (3.35) holds.
Let now the minimal symmetric operator A0 be defined by (3.36), where i = 1,...,m and F1,...,Fm are linearly independent elements of D(A0). Then from the above remark and Theorem 3.5 follows the next corollary, which describes the most "simple" extensions of A0.
Corollary 3.9. (i) For every B ∈ E^m_cs(A0, Â), where A0 satisfies (3.36), there exists a Hermitian m × m matrix C with det C ≠ 0, such that (3.17) is fulfilled.
(ii) Conversely, for every Hermitian m × m matrix C with rank C = n ≤ m, the operator B defined by (3.17) belongs to E^n_cs(A0, Â). The unique solution of (3.17) is given by (3.30).
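A sketch of why the solution simplifies here (a worked consequence, assuming that (3.17) has the form Bx = Âx − ÂFC(Âx, F^t) and that the hypotheses of the corollary give (ÂFi, Fj) = 0 for all i, j): taking the inner product of Bx = f with Fj kills the perturbation term, so
\[
(\hat{A}x,F^t)=(f,F^t),\qquad\text{hence}\qquad
x=\hat{A}^{-1}f+FC\,(f,F^t),
\]
and condition (3.16), det(I − (ÂF^t, F)C) ≠ 0, is fulfilled automatically, since (ÂF^t, F) = 0.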
The next theorem is useful for applications and gives a criterion for the correctness of the problems below, together with their solutions.
and the unique solution of (3.37) is then given explicitly.
Proof. Corresponding to this problem, we define the minimal operator A0 as a restriction of Â by (3.32).
If n = m, then the theorem is true by Theorem 3.5. While if n < m and B ∈ E^n_cs(A0, Â), then from (3.37) we have Bx = f and hence the system L(Âx, F^t) = (f, F^t). If we suppose that the first k rows of the matrix L are linearly independent, then for f = ψ_{k+1}, where (Fi, ψk) = δ_{i,k}, i, k = 1,...,m, the system L(Âx, F^t) = (f, F^t) has no solution, since the rank of the augmented matrix is k + 1 ≠ k. Then Bx = ψ_{k+1} has no solution and R(B) ≠ H. Consequently, B is not a correct operator. So (3.38) holds true. Conversely, let det L ≠ 0; then by Theorem 3.5, we have (3.42).
Theorem 3.11. If, in Theorem 3.2, Â is a positive operator and C is a negative semidefinite matrix, then B, defined by (3.17), is a positive operator.
Proof. We will show that (Bx, x) ≥ 0 for all x ∈ D(B).
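A sketch of the estimate behind this proof (assuming that (3.17) has the form Bx = Âx − ÂFC(Âx, F^t) of Theorem 3.10): with z = (Âx, F^t), and using that (ÂFi, x) is the complex conjugate of (Âx, Fi), one gets
\[
(Bx,x)=(\hat{A}x,x)-\sum_{i,j=1}^{m}c_{ij}\,(\hat{A}x,F_j)\,\overline{(\hat{A}x,F_i)}
       =(\hat{A}x,x)-z^{*}Cz\;\ge\;(\hat{A}x,x)\;\ge\;0,
\]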
since C is negative semidefinite and Â is positive. We recall that an operator A : H → H is called positive definite if there exists a positive real number k such that (Ax, x) ≥ k‖x‖² for all x ∈ D(A) (3.44).
Proof. For x ∈ D(B), we have (3.46). The theorem now easily follows.
We now state Theorem 3.2 in the following more general form, which is useful in the solution of differential equations.
Theorem 3.13. Suppose that A0, Â are as in Theorem 3.2. Then the following hold.
(i) For every B ∈ E^m_cs(A0, Â), there exists a vector Q = (q1,...,qm), where q1,...,qm are linearly independent elements from D(A0)⊥, and a Hermitian invertible m × m matrix C, such that (3.47) and (3.48) hold.
(ii) Conversely, for every vector Q = (q1,...,qm) defined as above and every Hermitian m × m matrix C which has rank C = n and satisfies (3.47), the operator B defined by (3.48) belongs to E^n_cs(A0, Â).
The unique solution of (3.48) is given by the formula (3.49).
The proof easily follows from Theorem 3.2 by substituting Q = ÂF, F = Â⁻¹Q, where Q = (q1,...,qm), qi ∈ D(A0)⊥, i = 1,...,m.
Corollary 3.14. For every vector Q = (q1,...,qm), where q1,...,qm are linearly independent elements of D(A0)⊥ ∩ R(A0), i = 1,...,m, and for every Hermitian m × m matrix C with rank C = n, the operator B defined by (3.48) belongs to E^n_cs(A0, Â). The unique solution of (3.48) is given by the formula (3.50).
Let now the minimal symmetric operator A0 have finite defect and be defined by the relations (3.51), where Q is defined as in Theorem 3.13. Then dim D(A0)⊥ = m and def A0 = dim R(A0)⊥ = m.
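For orientation (a sketch of the substitution just mentioned, assuming again the form Bx = Âx − ÂFC(Âx, F^t) for (3.17)): with Q = ÂF, that is, qi = ÂFi, one has (Âx, Fi) = (Âx, Â⁻¹qi) = (x, qi), so the operator can be written entirely in terms of Q:
\[
Bx=\hat{A}x-QC\,(x,Q^t),\qquad D(B)=D(\hat{A}),
\]
and the matrix (ÂF^t, F) of (3.16) becomes the matrix with entries (Â⁻¹qi, qj); this is why the quantity (Q, Â⁻¹Q) appears in the examples of Section 4.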
In this case, we restate Theorems 3.5, 3.11, and 3.12 in the following more general form, Theorem 3.15, in which the inequality (3.52) plays the role of (3.45).
Proof. Since Â is selfadjoint and R(Â) = H, for every x ∈ D(Â) we have (x, qi) = (Âx, Â⁻¹qi), i = 1,...,m; it follows that the elements Â⁻¹q1,...,Â⁻¹qm belong to D(Â) and that they are linearly independent. If we substitute Q = ÂF, Q^t = ÂF^t in (3.47), (3.48), and (3.49), then we obtain the relations (3.16), (3.17), and (3.18) of Theorem 3.2, which hold true. Because of Theorems 3.11, 3.12, and the relations qi = ÂFi, i = 1, 2,...,m, cases (c) and (d) of the present theorem are true.
Remark 3.16. Suppose that A0, Â are as in Theorem 3.15 and Q = (q1,...,qm), where q1,...,qm are linearly independent elements of D(A0)⊥. Then (3.53) holds.
Let now the minimal symmetric operator A0 be defined by the relation (3.54), and let Q = (q1,...,qm), where q1,...,qm are linearly independent elements of D(A0)⊥. By the above remark and Theorem 3.15, we obtain the following corollary, which describes the most "simple" extensions of A0.
Corollary 3.17. (i) For every B ∈ E^m_cs(A0, Â), where A0 satisfies (3.54), there exists a Hermitian m × m matrix C with det C ≠ 0, such that (3.48) is fulfilled.
(ii) Conversely, for every Hermitian m × m matrix C with rank C = n ≤ m, the operator B defined by (3.48) belongs to E^n_cs(A0, Â), where A0 satisfies (3.54). The unique solution of (3.48) is given by (3.50).
Let now G = (g1,...,gm), where g1,...,gm are arbitrary elements of H, let Â be as in Theorem 3.2, and let Q satisfy (3.51). Then the next corollary, which is useful for applications, holds.
Corollary 3.18. (i) If the operator B : H → H defined by Bx = Âx − G(x, Q^t), D(B) = D(Â), is correct and selfadjoint and dim R(B − Â) = m, then the elements q1,...,qm are linearly independent and there exists a Hermitian, invertible m × m matrix C such that G = QC, where C satisfies (3.47).
(ii) Conversely, if there exists a Hermitian m × m matrix C such that G = QC, where C satisfies (3.47), then B is correct and selfadjoint. If also det C ≠ 0, then dim R(B − Â) = m.
Proof. Follows easily from Theorem 3.15 by defining a minimal operator A0 by (3.51).
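As an illustration of Corollary 3.18 (a sketch, using the data of the example from the introduction, treated in Example 4.1 below): for Bu = iu′ − ct∫₀¹ tu(t)dt one may take
\[
Q=(q_1),\quad q_1(t)=t,\qquad G=(g_1),\quad g_1(t)=ct,\qquad G=QC\ \text{with}\ C=(c),\ c\in\mathbb{R},
\]
so, provided (3.47) is satisfied (in Example 4.1 this is automatic because (Q, Â⁻¹Q) = 0), Corollary 3.18(ii) gives that B is correct and selfadjoint for every real c.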
A generalization of Theorem 3.13 is the following theorem.
The solution of problem (3.55) is given by the formula (3.58), for all f ∈ H.
Proof. We have (3.59). If we substitute the term G(x, Q^t) in (3.55) with its equal from above, then we obtain (3.60).
In the following examples, H¹(0,1) (resp., H²(0,1)) denotes the Sobolev space of all complex functions of L²(0,1) which have generalized derivatives up to first (resp., second) order in L²(0,1).

Examples
Example 4.1. For every real number c, the operator B : L²(0,1) → L²(0,1) corresponding to the problem (4.1)-(4.2) is a correct selfadjoint extension of the minimal symmetric operator A0 defined by (4.3). The unique solution of (4.1)-(4.2) is given explicitly.
Proof. By comparing (4.1) with (3.48) and (4.3) with (3.54), we take Âu = iu′ with D(Â) = {u ∈ H¹(0,1) : u(0) = −u(1)}, Q = (t), and C = (c). It is evident that A0 is a minimal symmetric operator. From [6, page 272] (in our case θ = π), it follows that Â is selfadjoint, and Â⁻¹ is easily computed explicitly. Then the condition (Q, Â⁻¹Q) = 0 and Remark 3.16 imply that Q ∈ R(A0), and from Corollary 3.17 follows the validity of this example.
Example 4.2. The operator Â : L²(0,1) → L²(0,1) defined by (4.8)-(4.9) is a correct, selfadjoint, positive definite operator and satisfies the inequality (4.10). For every f ∈ L²(0,1), the unique solution u of the problem (4.8)-(4.9) is given by the formula (4.11).
Proof. Indeed, formula (4.11) is found by two direct integrations of (4.8), where (4.9) is taken into consideration. That Â⁻¹ is continuous is proved easily by showing, using Schwarz's inequality and formula (4.11), that Â⁻¹ is a bounded operator. Hence Â is a correct operator. We show that Â is selfadjoint. From formula (4.11), we obtain (4.12). So, the integral kernel of Â⁻¹ is the function (4.13), in which Heaviside's function (4.14) appears. Since this kernel is Hermitian, it follows from [16] that Â⁻¹ is selfadjoint. Then, from the equalities Â⁻¹ = (Â⁻¹)* = (Â*)⁻¹ [11], it follows that D(Â) = D(Â*). On the other hand, for all x, y ∈ D(Â) = D(Â*), we have (Âx, y) = (Âx, Â⁻¹Ây) = (x, Ây). The above two remarks imply that Â = Â*. Next, we prove inequality (4.10), showing at the same time that Â is positive definite. Let u(x) ∈ D(Â). Since u(0) = −u(1), we have the corresponding boundary identities, and from these equalities we obtain (4.10).
Proof. If we compare (4.22) with (3.37), it is natural to take Âu = −u″ with D(Â) = D(B), m = 1, and F(t) = 2t⁴ − 4t² + 1. We easily see that F ∈ D(Â) and ÂF = −8(3t² − 1). Then (4.22) can be written in the form (3.37). This inequality, which has been proved in [15, pages 194, 195] for real functions u ∈ C²(Ω̄) with u|γ = 0, holds true for all u ∈ H²(Ω) with u|γ = 0, since for all u ∈ D(Â) there exist real functions g, h ∈ D(Â) such that u = g + ih. The unique solution of (4.42)-(4.43) is given by the formula (4.56).
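As a quick check of the value ÂF = −8(3t² − 1) used in the comparison with (4.22) above (pure computation, with Âu = −u″ and F(t) = 2t⁴ − 4t² + 1):
\[
F'(t)=8t^3-8t,\qquad F''(t)=24t^2-8,\qquad \hat{A}F=-F''=-8(3t^2-1).
\]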

Proof. Since vi″ = −vi/λi, i = 1, 2, from (4.51) we obtain (4.56).
The results of this paper can be applied to Hermitian matrices, which are the matrices of Hermitian operators in unitary spaces with respect to any orthonormal basis of the space.
Example 4.8. Let A be a Hermitian operator in the n-dimensional unitary space E, λ1,...,λm its eigenvalues, which are real numbers different from zero, with multiplicities p1,...,pm, and E1,...,Em the corresponding eigenspaces. If we consider E endowed with an orthonormal basis D consisting of eigenvectors of A, such that the first p1 elements of D constitute a basis of E1, the next p2 elements constitute a basis of E2, and so on, then the matrix of A with respect to this basis is diagonal. Condition (3.16) here means that the number 1/λm is not an eigenvalue of C. The action of B on Em is found easily; it is described by an explicit formula in the coordinates x1,...,xp,
where x = x1ε1 + ••• + xpεp ∈ Em, and its matrix with respect to D is computed accordingly. If the entries cij of C are chosen as indicated below, we obtain, by Theorem 3.12, a positive definite matrix B.
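A small numerical sketch of this finite-dimensional construction (hypothetical values, not from the paper; it only assumes that B has the form Bx = Ax − AFC(Ax, F^t) of (4.60), with F an orthonormal basis of the eigenspace Em):

    import numpy as np

    # Hypothetical Hermitian A with eigenvalue 1 (multiplicity 2) and lambda_m = 2
    # (multiplicity p = 2); E_m is spanned by the last two basis vectors.
    A = np.diag([1.0, 1.0, 2.0, 2.0])
    lam_m, p = 2.0, 2

    # Columns of F are the orthonormal eigenvectors spanning E_m.
    F = np.zeros((4, p))
    F[2, 0] = 1.0
    F[3, 1] = 1.0

    # Hermitian C; condition (3.16) requires that 1/lam_m is not an eigenvalue of C.
    C = np.array([[0.3, 0.1],
                  [0.1, 0.2]])
    assert not np.any(np.isclose(np.linalg.eigvalsh(C), 1.0 / lam_m))

    # B x = A x - A F C (A x, F^t), i.e., as a matrix, B = A - (A F) C (F^T A).
    B = A - A @ F @ C @ F.T @ A

    print(np.allclose(B, B.T))                 # B is Hermitian (here real symmetric)
    print(np.linalg.matrix_rank(B - A) == p)   # dim R(B - A) = p
    print(np.all(np.linalg.eigvalsh(B) > 0))   # B is invertible, hence correct

For these particular (hypothetical) values, B turns out to be positive definite as well; Theorem 3.12 gives a sufficient condition on C guaranteeing this in general.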
Hence the matrix C is Hermitian, and so (3.1) implies (3.17). The invertibility of C is implied by the fact that T and W⁻¹ are invertible matrices. To show (3.16), we first recall that the m × m matrix (ÂF^t, F) = D = (dij) is Hermitian. From C = TW⁻¹, we take T = CW = C(I + DT), or C = (I − CD)T. Since C and T are invertible, it follows that det(I − CD) ≠ 0, and we finally have that det(I − DC) ≠ 0; that is, (3.16) is fulfilled.

Theorem 3.10. Let

Bx = Âx − ÂFC(Âx, F^t) = f,   D(B) = D(Â),   (3.37)

where Â is as in Lemma 3.1, C is a Hermitian m × m matrix with rank C = n, and F1,...,Fm are linearly independent elements of D(Â). Then B is a correct and selfadjoint operator with dim R(B − Â) = n if and only if condition (3.38) (det L ≠ 0) is fulfilled,

Theorem 3.12. If the operator Â in Theorem 3.2 is positive definite, then the operator B, which is defined by the relation (3.17), is positive definite whenever the matrix C is Hermitian and satisfies the inequality

k > Σ_{i=1}^{m} Σ_{j=1}^{m} ‖ÂFi‖ ‖ÂFj‖ |cij|,   (3.45)

and positive when k ≥ Σ_{i=1}^{m} Σ_{j=1}^{m} ‖ÂFi‖ ‖ÂFj‖ |cij|.

Theorem 3.15. (a) For every B ∈ E^m_cs(A0, Â), where A0 is defined by (3.51), there exists a Hermitian m × m matrix C with det C ≠ 0, such that (3.47) and (3.48) are fulfilled.
(b) Conversely, for every Hermitian m × m matrix C which satisfies (3.47) and has rank C = n, the operator B defined by (3.48) belongs to E^n_cs(A0, Â). The unique solution of (3.48) is given by (3.49).
(c) If the operator Â is positive and the matrix C is negative semidefinite, then B is positive.
(d) If Â is positive definite (so it satisfies a relation (3.44)) and if C is a Hermitian m × m matrix which satisfies the inequality (3.52), then B is positive definite.


Theorem 3.19. (i) If the operator B : H → H defined by (3.55) is correct and selfadjoint and dim R(B − Â) = m, then the elements of the matrix G_m + G_{n−m}M are linearly independent and there exists a Hermitian invertible m × m matrix C such that (3.56) and (3.57) hold.
Now the operator B in (3.60) has the form of the operator B in (3.55), where instead of G we have G_m + G_{n−m}M and instead of Q we have Q_m. So, according to Corollary 3.18, the relations (3.56) and (3.57) hold true; also, (3.49) implies (3.58).
all the other elements are zero. Let A0 be the restriction of A onto the subspace E1 ⊕ ••• ⊕ E_{m−1}, which is a symmetric operator on this subspace. Let also ε1,...,εp be an orthonormal basis of Em, where p_m = p. If we write F = (ε1 ••• εp), then Theorem 3.10 asserts that any invertible Hermitian extension B of A0 to the whole space E which takes different values on Em than A is given by the formula

Bx = Ax − AFC(Ax, F^t),   x ∈ E,   (4.60)

where C is an invertible Hermitian matrix which satisfies relation (3.16), that is, det(I − (AF^t, F)C) ≠ 0; the corresponding matrix, with entries such as λm − λ²m c22, ..., −λ²m c2p, is displayed in (4.64). The p × p submatrix at the bottom-right corner of that matrix represents an invertible Hermitian matrix which does not have the number λm as an eigenvalue, and this is to be expected. Now, if A is positive definite and we choose the elements cij of the matrix C to satisfy the inequality k > λ²m Σ_{i,j} |cij|, we obtain a positive definite B (cf. Theorem 3.12). Hence (AF^t, F) = 0. The rest easily follows from the above theorem.