Bounds for Total Antieigenvalue of a Normal Operator

We give an alternative proof of a theorem of Gustafson and Seddighin (1993) following the idea used by Das et al. in an earlier study of antieigenvectors (1998). The result proved here holds for certain classes of normal operators even if the space is infinite dimensional.


Introduction.
Let H be a complex Hilbert space and let B(H) denote the Banach algebra of all bounded linear operators on H. The concept of the angle of an operator T was introduced by Gustafson [4, 5, 6] while studying the perturbation theory of semigroup generators. From this has developed what we call operator trigonometry, whose theory and applications are still evolving. The properties are intimately associated with the numerical range W(T) of an operator T and with the numerical range W(TT*). Some relevant important results can be found in [3].
The cosine of an operator T in B(H) was originally defined, for arbitrary operators on a Banach space with a semi-inner product, as
\[ \operatorname{Cos} T = \inf_{f\neq 0,\; Tf\neq 0}\frac{\operatorname{Re}(Tf,f)}{\|Tf\|\,\|f\|}. \tag{1.1} \]
Here we will restrict attention primarily to the case of T ∈ B(H). Clearly, Cos T is a real cosine defined through the real part of the numerical range of T. The total cosine is defined analogously by
\[ |\operatorname{Cos}| T = \inf_{f\neq 0,\; Tf\neq 0}\frac{|(Tf,f)|}{\|Tf\|\,\|f\|}. \tag{1.2} \]
The expression (1.1), also described through the angle φ(T), measures the maximum (real) turning effect of T.
The quantity Cos T has another interpretation as the first antieigenvalue of T:
\[ \mu_1(T) = \inf_{f\neq 0,\; Tf\neq 0}\frac{\operatorname{Re}(Tf,f)}{\|Tf\|\,\|f\|}. \tag{1.3} \]
The terminology "antieigenvalue" and "antieigenvector" was introduced by Gustafson [7] in 1972. Krein [10] and Gustafson [7] studied µ1(T) and indicated how knowledge of µ1(T) can be useful in the study of certain integral operators, initial-value problems, and some other areas. An upper bound for µ1(T), for T a finite-dimensional, strongly accretive (i.e., Re T > 0) normal operator, was obtained by Davis [2] in 1980.
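As a concrete numerical illustration (our own, not from the paper), consider the positive operator T = diag(1, 4) on a two-dimensional space. For such an operator, the Davis/Kantorovich-type value 2√(λ1λ2)/(λ1+λ2) = 0.8 should be recovered as the infimum in (1.3). A minimal sketch in Python, with all names our own:

```python
import math

# Hypothetical illustration: T = diag(1, 4), a positive (hence normal,
# strongly accretive) operator on R^2.
l1, l2 = 1.0, 4.0

def cosine(theta):
    # f = (cos theta, sin theta) is a unit vector; return Re(Tf, f) / ||Tf||
    f = (math.cos(theta), math.sin(theta))
    Tf = (l1 * f[0], l2 * f[1])
    num = Tf[0] * f[0] + Tf[1] * f[1]      # (Tf, f)
    den = math.hypot(Tf[0], Tf[1])         # ||Tf||; ||f|| = 1
    return num / den

# Grid search for mu_1(T) = inf of the cosine over unit vectors.
mu1 = min(cosine(k * math.pi / 200000) for k in range(1, 100000))

print(round(mu1, 4))                        # close to 0.8
print(2 * math.sqrt(l1 * l2) / (l1 + l2))   # Kantorovich value: 0.8
```

The minimum is attained away from the eigenvectors (where the cosine equals 1), which is exactly the "antieigenvector" phenomenon.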
The exact value of µ1(T) can be found in [2, 7]. Mirman [11] gave a method of estimating µn(T), the higher antieigenvalues of T, defined by Gustafson as
\[ \mu_n(T) = \inf_{\substack{f\neq 0,\; Tf\neq 0\\ f\perp f_1,\dots,f_{n-1}}}\frac{\operatorname{Re}(Tf,f)}{\|Tf\|\,\|f\|}, \tag{1.4} \]
where f_k is the kth antieigenvector. In 1989 Gustafson and Seddighin [8] proved the following theorem.
Theorem 1.1. Let T be a normal accretive operator on a finite-dimensional Hilbert space H with eigenvalues λ_j = β_j + iδ_j, and let E and F be the sets of numbers defined by (1.5) and (1.6). Then µ1(T) is exactly equal to the smallest number in E ∪ F. Furthermore, if T is diagonal, then µ1(T) = (Tz, z)/‖Tz‖ for some unit vector z. Das et al. [1] also proved the above theorem in a different form which seems to be much simpler. They used the concept of stationary vectors, and their result holds even if the space is not finite dimensional, for operators with a complete orthonormal set of eigenvectors.
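Although the defining equations (1.5) and (1.6) for E and F are not reproduced above, one half of the statement can be motivated directly: at an eigenvector the cosine is explicitly computable, which is presumably how the set E arises. For Te_j = λ_j e_j with λ_j = β_j + iδ_j:

```latex
\frac{\operatorname{Re}(Te_j, e_j)}{\|Te_j\|\,\|e_j\|}
  = \frac{\operatorname{Re}\lambda_j}{|\lambda_j|}
  = \frac{\beta_j}{\sqrt{\beta_j^{2} + \delta_j^{2}}},
```

so each eigenvector contributes the candidate value β_j/|λ_j| to the infimum in (1.3); the set F collects the values contributed by pairs of eigenvalues.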
Gustafson and Seddighin [9] also obtained the bounds for total antieigenvalues of a normal operator on a finite-dimensional Hilbert space.
We prove the result following the idea used by Das et al. [1]. The result holds even if the space is infinite dimensional, for operators with a complete orthonormal set of eigenvectors.

Total antieigenvectors.
Let A be a strictly accretive operator on H, let C = A*A, and let |φ_A(f)| = |(Af, f)|/‖Af‖‖f‖ represent the modulus of the cosine of the largest angle through which an arbitrary nonzero vector f can be rotated by the action of A. Now |φ_A(f)| is said to have a stationary value at a vector f ≠ 0 if the function w_g(t) of a real variable t defined by
\[ w_g(t) = |\varphi_A(f + tg)| \tag{2.1} \]
has a stationary value at t = 0 for an arbitrary but fixed vector g ∈ H. In other words, we must have w_g′(0) = 0 for all g ∈ H.
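The stationarity computation can be sketched as follows (our reconstruction, assuming an inner product linear in its first argument). Write f_t = f + tg, n(t) = |(Af_t, f_t)|², and d(t) = ‖Af_t‖²‖f_t‖², so that w_g(t)² = n(t)/d(t). Then

```latex
w_g'(0)=0 \iff n'(0)\,d(0) = n(0)\,d'(0),
\qquad\text{where}\quad
\begin{aligned}
n'(0) &= 2\operatorname{Re}\bigl[\overline{(Af,f)}\,\bigl((Ag,f)+(Af,g)\bigr)\bigr],\\
d'(0) &= 2\|f\|^{2}\operatorname{Re}(Cf,g) + 2\|Af\|^{2}\operatorname{Re}(f,g).
\end{aligned}
```

Since g is arbitrary, collecting the coefficients of g turns this scalar identity into a vector equation for f, which is the content of the characterization below.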
With these notations, a direct computation of w_g′(0) yields the characterizing condition. Since g ∈ H is arbitrary, we have the following theorem.

Theorem 2.1. Let A and C be defined as above. A unit vector f is a stationary vector of |φ_A(f)| if and only if it satisfies (2.3).
The above equation obviously characterizes the vectors for which |φ_A(f)| is stationary, in particular, a minimum or a maximum.
We next prove the following theorem.

Theorem 2.2. If, for a stationary vector f, Af = A*f, then f is a linear combination of two eigenvectors of A*.
Proof. Suppose f is a stationary vector and Af = A*f. Then, by the necessary and sufficient condition for a vector to be stationary, we obtain (2.5). Setting (2.6), we see that f is a linear combination of the eigenvectors g_1 and g_2 of A* with corresponding eigenvalues β_1 and β_2.
If, further, A is normal, then proceeding as above we obtain the same representation as before. This completes the proof.

Theorem 2.3. A unit vector f is a total antieigenvector of a self-adjoint operator A if and only if there exist two eigenvectors whose appropriate linear combination (in the sense given below) yields f.
Proof. If f is a stationary vector, in particular, a total antieigenvector, then it satisfies (2.3).
Conversely, let f = α_i e_i + α_j e_j with |α_i|² = λ_j/(λ_i + λ_j) and |α_j|² = λ_i/(λ_i + λ_j), where e_i, e_j are any two eigenvectors of A corresponding to the eigenvalues λ_i, λ_j. With these values, one can check that (2.8) is satisfied by f. This completes the proof.
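For a positive operator A (so that λ_i, λ_j > 0), the value of |φ_A| at the vector f of Theorem 2.3 can be computed directly; this check is our own addition, not part of the original proof:

```latex
\begin{aligned}
(Af,f) &= \lambda_i|\alpha_i|^{2} + \lambda_j|\alpha_j|^{2}
        = \frac{2\lambda_i\lambda_j}{\lambda_i+\lambda_j},\\
\|Af\|^{2} &= \lambda_i^{2}|\alpha_i|^{2} + \lambda_j^{2}|\alpha_j|^{2}
            = \lambda_i\lambda_j,\\
|\varphi_A(f)| &= \frac{|(Af,f)|}{\|Af\|\,\|f\|}
              = \frac{2\sqrt{\lambda_i\lambda_j}}{\lambda_i+\lambda_j},
\end{aligned}
```

which is the familiar Kantorovich-type value for the cosine determined by the pair λ_i, λ_j.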
Before we discuss the structure of the stationary vectors in the normal-operator case, we give a few examples to show how the situation arises. In the examples, the e_k are eigenvectors of A corresponding to the eigenvalues λ_k, whose values are clear from the context. Here, neither (f, e_1) nor (f, e_2) is zero. Thus, an antieigenvector may be a linear combination of two eigenvectors.
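A concrete instance of this phenomenon (our own illustration, not one of the paper's examples): for the normal operator A = diag(1, 2i) on C², a one-parameter search over unit vectors f = (a, b) with a, b ≥ 0 (phases do not affect the moduli involved) shows that the minimum of |φ_A(f)| is 2/3, attained with |b|² = 1/3, i.e., at a genuine mixture of e_1 and e_2:

```python
import math

# Illustration (not from the paper): A = diag(1, 2i), normal on C^2.
# For a unit vector f = (a, b) put s = b**2, so a**2 = 1 - s.
def total_cosine(s):
    # |(Af, f)| = |a^2 + 2i*b^2| = sqrt((1 - s)^2 + (2s)^2)
    num = math.hypot(1 - s, 2 * s)
    # ||Af||^2 = a^2 + 4*b^2 = (1 - s) + 4*s;  ||f|| = 1
    den = math.sqrt((1 - s) + 4 * s)
    return num / den

# Grid search for the total antieigenvalue over s in [0, 1].
mu = min(total_cosine(k / 100000) for k in range(100001))
print(round(mu, 4))   # 0.6667, i.e., 2/3, attained near s = 1/3
```

At the endpoints s = 0 and s = 1 (the eigenvectors themselves) the total cosine equals 1, so the minimizing vector necessarily mixes the two eigenvectors.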
Example 2.5. This is the most important example in this section. It shows that a linear combination of more than two eigenvectors may be needed to attain the minimum of |φ_A(f)|. We consider the normal operator A defined by (2.10), with the quantities given in (2.11) and (2.12); the resulting combination involving e_2 is the required vector. We now prove the main theorem.

Theorem 2.6. Let A be a normal operator with a complete orthonormal set of eigenvectors e_k. If |φ_A(f)| is stationary at f and f is not an eigenvector of A, then either f is a linear combination of two eigenvectors, or there exists a suitable linear combination g of two eigenvectors corresponding to two distinct eigenvalues such that |φ_A(g)| = |φ_A(f)| holds, where λ_k = β_k + iδ_k and λ_j = β_j + iδ_j are the distinct eigenvalues referred to above.
Proof. If |φ_A(f)| is stationary at f, then (2.3) holds. If f is not an eigenvector, then f may be a linear combination of two eigenvectors, which satisfies (2.3). If, however, f is a linear combination of more than two eigenvectors, then we show that there always exists a linear combination g of two eigenvectors corresponding to two eigenvalues such that (2.16) holds; consequently g satisfies the necessary and sufficient condition, and so g is a stationary vector. Also we have |φ_A(g)| = |φ_A(f)|. Hence, the proof of the theorem is complete.
