OPTIMALLY ROTATED VECTORS
© Hindawi Publishing Corp.

We study vectors that undergo maximum or minimum rotation by a matrix over the field of real numbers. The cosine of the angle between a maximally rotated vector and its image under the matrix is called the cosine, or antieigenvalue, of the matrix and has important applications in numerical methods. Using the Lagrange multiplier technique, we obtain systems of nonlinear equations that represent these optimization problems. Furthermore, we solve these systems symbolically and numerically.


Introduction.
The concept of the cosine of an operator or a matrix was first introduced by Gustafson (see [2]). Given an operator T on a Hilbert space, the cosine of T is defined by

cos T = inf_{f ≠ 0} (Tf, f) / (‖Tf‖ ‖f‖),    (1.1)

where cos T is also denoted by µ(T). This parameter has important applications in numerical analysis as well as in pure matrix and operator theory. See [2] for more information on the applications of µ(T) to numerical techniques such as the conjugate gradient and steepest descent methods. In recent years, many attempts have been made to compute or approximate cos T for operators on complex Hilbert spaces. In particular, the computation and approximation of cos T for normal operators have been somewhat successful (see [2, 3, 4, 5]).
A vector f for which the inf in (1.1) is attained is called a maximally rotated vector for T (a maximally rotated vector is called an antieigenvector in our earlier papers). On the other hand, vectors for which the sup of (Tf, f)/(‖Tf‖ ‖f‖), denoted ν(T), is attained are called minimally rotated vectors for T. A maximally or minimally rotated vector is called an optimally rotated vector. In the past the focus has been on the computation of maximally rotated vectors and µ(T). In the present paper we are also concerned with minimally rotated vectors and ν(T).
Note that for a matrix over the real field, if the set of all negative eigenvalues of T is nonempty, then µ(T) = −1. In this case the set of all maximally rotated vectors of T is simply the union of all eigenspaces corresponding to negative eigenvalues of T. However, if the set of negative eigenvalues of T is empty, then we have µ(T) ≥ −1. Likewise, if the set of all positive eigenvalues of T is nonempty, then ν(T) = 1. In this case the set of all minimally rotated vectors of T is simply the union of all eigenspaces corresponding to positive eigenvalues of T. However, if the set of positive eigenvalues of T is empty, then we have ν(T) ≤ 1. Unfortunately, a variational approach analogous to the Rayleigh–Ritz variational theory of eigenvectors is not successful for computing optimally rotated vectors. Nevertheless, Gustafson has found an Euler equation that these vectors satisfy (see [2]). His approach is based on the direct computation of the gradient of the functional in (1.1), which yields the corresponding Euler equation. In this paper we use Lagrange multipliers to compute the set of optimally rotated vectors for matrices over the real field. For a matrix T defined over the real field, µ(T) and ν(T) can equivalently be computed by taking the inf and sup over unit vectors f with ‖f‖ = 1. The following are three simple properties that result directly from the definitions.
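The eigenvalue facts above are easy to check numerically. A minimal sketch in Python (NumPy assumed; `cos_angle` and the sample matrix are illustrative names and values, not from the paper):

```python
import numpy as np

def cos_angle(T, f):
    """Cosine of the angle between f and Tf: (Tf, f) / (||Tf|| ||f||)."""
    Tf = T @ f
    return float(Tf @ f) / (np.linalg.norm(Tf) * np.linalg.norm(f))

# A real matrix with a negative eigenvalue -2, eigenvector (1, 0).
T = np.array([[-2.0, 0.0],
              [0.0, 3.0]])

# At an eigenvector of the negative eigenvalue, Tf = -2 f, so the cosine
# is exactly -1: f is rotated through the maximal possible angle pi.
f = np.array([1.0, 0.0])
print(cos_angle(T, f))   # -1.0
```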

Main results.
If T = (a_ij) is an n × n matrix over the real field and f = (x_1, x_2, ..., x_n) is any vector in R^n, direct computations show that the functional (Tf, f)/(‖Tf‖ ‖f‖) takes the form (2.1). Therefore, in order to compute µ(T) and ν(T), we must find the optimum values of this expression on the unit sphere x_1^2 + x_2^2 + ··· + x_n^2 = 1. Making use of the Lagrange multiplier technique seems like a natural approach to computing optimally rotated unit vectors. However, as the next theorem illustrates, for a general matrix the resulting equations are nonlinear and hard to solve, even in the case of 2 × 2 matrices.
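The optimization itself can be carried out numerically in any dimension. A sketch (assuming NumPy and SciPy are available; `J` and `mu_nu` are illustrative names, not from the paper) that exploits the scale invariance of the functional to avoid an explicit sphere constraint:

```python
import numpy as np
from scipy.optimize import minimize

def J(x, T):
    """(Tf, f) / (||Tf|| ||f||); scale-invariant in f."""
    Tx = T @ x
    return float(Tx @ x) / (np.linalg.norm(Tx) * np.linalg.norm(x))

def mu_nu(T, trials=20, seed=0):
    """Estimate mu(T) = inf J and nu(T) = sup J by random restarts.

    Because J is scale-invariant, the unit-sphere constraint can be
    dropped and the search run over all of R^n.
    """
    rng = np.random.default_rng(seed)
    mins, maxs = [], []
    for _ in range(trials):
        x0 = rng.standard_normal(T.shape[0])
        r1 = minimize(J, x0, args=(T,))
        r2 = minimize(lambda x: -J(x, T), x0)
        if np.isfinite(r1.fun):
            mins.append(r1.fun)
        if np.isfinite(r2.fun):
            maxs.append(-r2.fun)
    return min(mins), max(maxs)

# Sanity check on a symmetric positive definite matrix: for extreme
# eigenvalues m = 1 and M = 4, Gustafson's formula gives
# mu(T) = 2*sqrt(mM)/(m + M) = 0.8, and nu(T) = 1 (positive eigenvalues).
print(mu_nu(np.diag([1.0, 4.0])))
```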
Theorem 2.1. Let T = [a b; c d] be a 2 × 2 matrix over the field of real numbers; then any optimally rotated unit vector f = (x, y) satisfies the following system of equations:

x (∂J/∂y) − y (∂J/∂x) = 0,    x^2 + y^2 = 1,    (2.4)

where J(x, y) is the functional defined in (2.7) below.

Proof.
Finding µ(T) and ν(T) is the same as finding the optimum values of the function

J(x, y) = (ax^2 + (b + c)xy + dy^2) / √((ax + by)^2 + (cx + dy)^2)    (2.7)

on the sphere x^2 + y^2 = 1. A necessary condition for f = (x, y) to be an optimizing vector for J(x, y) on the sphere is that the gradients of J(x, y) and of x^2 + y^2 be parallel. This means that we must have

∇J(x, y) = λ (2x, 2y)    (2.8)

for some nonzero constant λ. Eliminating λ from (2.8) yields system (2.4).
As we will see later, system (2.4) can be solved algebraically for some special matrices.Nevertheless, we can solve that system numerically in all cases, as the following example shows.
Example 2.2. For this matrix, the function J(x, y) defined by (2.7) takes the form (2.9). Substituting y = √(1 − x^2) in (2.9) gives a function f(x) whose graph is shown in Figure 2.1; the y-axis represents the cosine of the angle between the vector (x, √(1 − x^2)) and its image under T. On the other hand, substituting y = −√(1 − x^2) in (2.9) gives a function g(x) whose graph is shown in Figure 2.2; again the y-axis represents the cosine of the angle between the vector (x, −√(1 − x^2)) and its image under T. Notice that the graphs of f and g are symmetric with respect to the vertical axis.
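Since the matrix of this example appears only in the original, the substitution can be sketched with a hypothetical 2 × 2 matrix (the entries a, b, c, d below are stand-ins, not the paper's); the symmetry of the two graphs noted above is then visible directly:

```python
import numpy as np

# Hypothetical entries standing in for the (elided) matrix of the example.
a, b, c, d = 1.0, 2.0, -1.0, 1.0

def J(x, y):
    num = a*x*x + (b + c)*x*y + d*y*y        # (Tf, f) for a unit vector f
    den = np.hypot(a*x + b*y, c*x + d*y)     # ||Tf||, with ||f|| = 1
    return num / den

xs = np.linspace(-1.0, 1.0, 2001)
f = J(xs, np.sqrt(1.0 - xs**2))     # upper semicircle, y = sqrt(1 - x^2)
g = J(xs, -np.sqrt(1.0 - xs**2))    # lower semicircle, y = -sqrt(1 - x^2)

# The two branches are mirror images: g(x) = f(-x), since J(-f) = J(f).
print(np.allclose(g, f[::-1]))      # True
```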
System (2.4) can be converted to a polynomial system.We omit the proof of the next corollary.

Corollary 2.3. Let T = [a b; c d] be a 2 × 2 matrix over the field of real numbers; then an optimally rotated vector f = (x, y) with ‖f‖ = 1 satisfies the following system of equations:

px^4 + qx^3y + rx^2y^2 + sxy^3 + ty^4 = 0,    x^2 + y^2 = 1,    (2.15)

where the coefficients p, q, r, s, and t are functions of a, b, c, and d given in (2.16). System (2.15) can be solved easily in some special cases. For example, if a matrix is such that q = 0 and s = 0, then the optimally rotated unit vectors f = (x, y) satisfy (2.17) and (2.18).

Corollary 2.4. For the matrix T = [a b; −b a], every unit vector f = (x, y) is optimally rotated, and µ(T) = ν(T) = a/√(a^2 + b^2).

Proof. In this case all of the coefficients in the first equation of system (2.15) are zero. Also, direct substitution shows that for any unit vector f = (x, y), the value of J(x, y) defined by (2.7) is a/√(a^2 + b^2).
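The quartic of Corollary 2.3 can be reproduced symbolically. A SymPy sketch (the derivation follows the Lagrange condition of Theorem 2.1; the coefficients obtained this way should agree with those of (2.16) up to a common constant factor):

```python
import sympy as sp

a, b, c, d, x, y = sp.symbols('a b c d x y', real=True)

N = a*x**2 + (b + c)*x*y + d*y**2            # (Tf, f) for T = [a b; c d]
D = (a*x + b*y)**2 + (c*x + d*y)**2          # ||Tf||**2

# Lagrange condition: grad J parallel to grad(x^2 + y^2).  Eliminating
# the multiplier via x*J_y - y*J_x = 0 and clearing the positive factor
# D**(3/2) leaves the homogeneous quartic below.
P = sp.expand(2*D*(x*sp.diff(N, y) - y*sp.diff(N, x))
              - N*(x*sp.diff(D, y) - y*sp.diff(D, x)))

poly = sp.Poly(P, x, y)
print(all(i + j == 4 for i, j in poly.monoms()))   # True: total degree 4
print(sp.factor(poly.coeff_monomial(x**4)))        # the coefficient p
```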
Corollary 2.5. For the matrix T = [a b; b −a], the unit vectors f = (x, y) whose components satisfy x^2 = 1/2 + a/(2√(a^2 + b^2)) and y^2 = 1/2 − a/(2√(a^2 + b^2)) are minimally rotated vectors giving ν(T) = 1. Also, the unit vectors whose components satisfy x^2 = 1/2 − a/(2√(a^2 + b^2)) and y^2 = 1/2 + a/(2√(a^2 + b^2)) are maximally rotated vectors giving µ(T) = −1.

Proof. If we substitute the entries of this matrix in system (2.15) and eliminate y, we obtain a polynomial equation in x whose solutions fall into two sets. If we substitute any vector from the first set in (2.7), we obtain 1. If we substitute any vector from the second set in (2.7), we obtain (b^2 − a^2)/(b^2 + a^2).
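Corollary 2.5 is easy to verify numerically. A NumPy sketch (the entries a = 3, b = 4 are sample values, not from the paper; with this sign choice take b > 0):

```python
import numpy as np

a, b = 3.0, 4.0                      # sample entries; b > 0 for these signs
T = np.array([[a, b], [b, -a]])
r = np.sqrt(a*a + b*b)               # = 5, the positive eigenvalue of T

# Minimally rotated unit vector from Corollary 2.5.
f = np.array([np.sqrt(0.5 + a/(2*r)), np.sqrt(0.5 - a/(2*r))])

Tf = T @ f
cos = (Tf @ f) / (np.linalg.norm(Tf) * np.linalg.norm(f))
print(round(cos, 10))                # 1.0: f is an eigenvector for +r
```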

In many applications one needs only to find upper bounds for µ(T ) and lower bounds for υ(T ). If K is a reducing subspace for T , then µ(T |K) is an upper bound for µ(T ) and ν(T |K) is a lower bound for ν(T ).
In particular, if J is any elementary Jordan matrix in the elementary Jordan form of T, then µ(T) ≤ µ(J). It is in general easier to compute µ and ν for an elementary Jordan matrix than for a general matrix.
Corollary 2.6. Let J = [k 1; 0 k] be an elementary Jordan matrix with k > 0; then µ(J) = (4k^2 − 1)/(4k^2 + 1).

Proof. First note that k is an eigenvalue of J. One can also verify that in this case system (2.15) reduces to a simpler polynomial system that has two sets of solutions. Substituting the values of x and y from either set of solutions in (2.7) yields (4k^2 − 1)/(4k^2 + 1).
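The value (4k^2 − 1)/(4k^2 + 1) can be checked by sweeping the unit circle. A NumPy sketch (k = 1.5 is a sample value):

```python
import numpy as np

k = 1.5
J = np.array([[k, 1.0], [0.0, k]])            # elementary Jordan block

# Evaluate the cosine (Jf, f)/||Jf|| at unit vectors f = (cos t, sin t)
# on a dense grid and take the minimum.
theta = np.linspace(0.0, 2.0*np.pi, 200001)
f = np.vstack([np.cos(theta), np.sin(theta)])
Jf = J @ f
cosines = (Jf * f).sum(axis=0) / np.linalg.norm(Jf, axis=0)

print(cosines.min())                          # approximately 0.8
print((4*k*k - 1)/(4*k*k + 1))                # 0.8
```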
Although it becomes impossible to algebraically solve the system of equations resulting from the Lagrange multipliers for general matrices of dimension greater than two, numerical solutions of these systems are always accessible.
Example 2.7. Find µ(T) and ν(T) for the matrix given in (2.30). Notice that this matrix has only one real eigenvalue, which is negative, and the last two vectors are actually eigenvectors of T corresponding to this negative eigenvalue. This example underlines the very important fact that our techniques can be used to find eigenvectors that correspond to real eigenvalues, and hence the real eigenvalues themselves. To find the negative eigenvalue of the matrix T above, note that

Tf = (0.97286, −1.1873, 9.4474 × 10^−2) = −1.5379 (−0.63262, 0.77202, −6.1442 × 10^−2),

so the negative eigenvalue is approximately −1.5379.
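The eigenvalue-recovery idea of Example 2.7 can be sketched as follows. Since the matrix (2.30) appears only in the original, the 3 × 3 matrix below is a hypothetical stand-in with a single real (negative) eigenvalue (assuming NumPy and SciPy):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical matrix standing in for (2.30): its only real eigenvalue
# is -2; the lower block contributes the complex pair 1 +/- 2i.
T = np.array([[-2.0, 0.5, 0.0],
              [0.0, 1.0, 2.0],
              [0.0, -2.0, 1.0]])

def J(x):
    Tx = T @ x
    return float(Tx @ x) / (np.linalg.norm(Tx) * np.linalg.norm(x))

# A maximally rotated vector with J(f) = -1 is necessarily an
# eigenvector for a negative eigenvalue, so minimizing J recovers it.
res = minimize(J, np.array([1.0, 0.3, -0.2]))
f = res.x / np.linalg.norm(res.x)
lam = (T @ f) @ f                  # Rayleigh quotient at the minimizer
print(res.fun, lam)                # close to -1 and to -2
```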
Remark 2.8. In [1] Gustafson has developed an extended operator trigonometry by redefining µ(T) for invertible operators based on their polar decomposition T = U|T|. He has replaced the definition of µ(T) given by expression (1.1) with the analogous expression in which T is replaced by its positive polar factor |T|.
