A general class of matrices, covering, for instance, an important set of proper rotations, is considered. Several characteristics of the class are established, which deal with such notions and properties as determinant, eigenspaces, eigenvalues, idempotency, Moore-Penrose inverse, or orthogonality.
1. Introduction and Basic Properties
Let ℂm,n denote the set of m×n complex matrices. The symbols L′, L*, ℛ(L), and rk(L) will stand for the transpose, conjugate transpose, column space, and rank, respectively, of L∈ℂm,n. Further, L†∈ℂn,m will be the Moore-Penrose inverse of L∈ℂm,n, that is, the unique matrix satisfying the equations
LL†L=L,L†LL†=L†,(LL†)*=LL†,(L†L)*=L†L,
and In will mean the identity matrix of order n. The Moore-Penrose inverse L† is useful in representing the orthogonal (in the sense of the standard inner product) projectors onto ℛ(L) and ℛ(L*), denoted by PL and PL*, as well as the orthogonal projectors onto the orthogonal complements of these subspaces, denoted by QL and QL*. To be precise, for L∈ℂm,n,
PL=LL†,PL*=L†L,QL=Im-LL†,QL*=In-L†L.
With respect to a scalar, say α∈ℂ, the inverse α† is defined as: α†=0 when α=0 and α†=α-1 when α≠0.
The considerations of the present paper concern 3×3 matrices and 3×1 vectors having either complex or real entries; in the latter case these sets will be denoted by ℝ3,3 and ℝ3,1, respectively. Customarily, the symbol ∥l∥ will stand for the Euclidean norm of l∈ℂ3,1, that is, ∥l∥=√(l*l). Let
Ta = (  0   -a3   a2
        a3    0   -a1
       -a2    a1    0 )
be generated by a=(a1,a2,a3)′∈ℝ3,1. The matrix Ta can be used to define the vector cross product in ℝ3,1, with Tal=a×l for any l∈ℝ3,1; see [1]. Further properties of Ta are listed in the following lemma, whose proof is easy and thus omitted.
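As a quick numerical sanity check (a sketch of ours, not part of the original development; the function name `T` is an illustrative choice), the defining property Tal=a×l can be confirmed with NumPy:

```python
import numpy as np

def T(a):
    """Skew-symmetric matrix T_a of (1.3); T(a) @ l equals the cross product a x l."""
    a1, a2, a3 = a
    return np.array([[0.0, -a3,  a2],
                     [ a3, 0.0, -a1],
                     [-a2,  a1, 0.0]])

a = np.array([1.0, 2.0, 3.0])
l = np.array([-2.0, 0.5, 4.0])
print(np.allclose(T(a) @ l, np.cross(a, l)))  # True
```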
Lemma 1.1.
Let λ,μ∈ℝ and a,b,c,d∈ℝ3,1. Moreover, let Tl for some l∈ℝ3,1 be of the form (1.3). Then,
(i) Tλa+μb = λTa + μTb,
(ii) Tab = -Tba,
(iii) Ta = -Ta′,
(iv) Taa = 0,
(v) TaTb = ba′ - (a′b)I3,
(vi) (TaTb)′ = TbTa,
(vii) TaTbTc = (Tac)b′ - (b′c)Ta,
(viii) TaTbTa = -(a′b)Ta,
(ix) Ta³ = -∥a∥²Ta,
(x) c′Tab = a′Tbc = b′Tca,
(xi) TaTbc = (a′c)b - (a′b)c = a×(b×c),
(xii) (Tab)′Tcd = (a′c)(b′d) - (a′d)(b′c) = (a×b)′(c×d),
(xiii) TTab = ba′ - ab′ = TaTb - TbTa.
Relationships listed in Lemma 1.1 are available in the literature; see Trenkler [2, 3], Groß et al. [4], Bernstein [5, Ch. 3], and G. Trenkler and D. Trenkler [6]. It is noteworthy that the three scalars involved in point (x) represent scalar triple products, for example, c′Tab=(a×b)′c. Moreover, the right-hand side equalities in points (xi) and (xii) are known as the Grassmann and Lagrange identities, respectively. From the point of view of the present paper, conditions (iii)–(v) of the lemma are of particular importance and will be used extensively in the subsequent derivations.
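Several of these identities are easy to confirm numerically; the NumPy sketch below (our illustration, with randomly drawn vectors) checks points (v), (ix), (xi), and (xii) of Lemma 1.1:

```python
import numpy as np

def T(v):
    """Skew-symmetric matrix of (1.3) generated by v."""
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))
I3 = np.eye(3)

ok_v   = np.allclose(T(a) @ T(b), np.outer(b, a) - (a @ b) * I3)        # (v)
ok_ix  = np.allclose(np.linalg.matrix_power(T(a), 3), -(a @ a) * T(a))  # (ix)
ok_xi  = np.allclose(T(a) @ T(b) @ c, np.cross(a, np.cross(b, c)))      # (xi), Grassmann
ok_xii = np.isclose((T(a) @ b) @ (T(c) @ d),
                    (a @ c) * (b @ d) - (a @ d) * (b @ c))              # (xii), Lagrange
print(ok_v, ok_ix, ok_xi, ok_xii)  # True True True True
```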
It is known that if a∈ℝ3,1 generating Ta given in (1.3) is such that ∥a∥=1, then the matrices of the form
R = sin θ Ta + I3 + (1 - cos θ)Ta²
describe proper rotations in ℝ3,1 with θ being the angle of rotation about the axis given by a; see Noble [7, Ch. 12] or Murray et al. [8, Ch. 2]. In what follows we consider a more general class of matrices than the one spanned by matrices of the form (1.4), namely,
Υ = {T∈ℂ3,3 : T = αTa + βI3 + γaa′, α,β,γ∈ℂ, a∈ℝ3,1, a≠0}.
It can be verified that Υ covers all proper rotations of the form (1.4). Moreover, it comprises also improper rotations, that is, orthogonal matrices with determinant equal to −1 [9, Ch. VIII], symmetric elementary matrices [10, Sec. 1], and all matrices commuting with Ta [11].
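The inclusion of the rotations (1.4) in Υ follows by expanding Ta² = aa′ - I3 for ∥a∥=1, which yields the representation (1.5) with α=sin θ, β=cos θ, γ=1-cos θ. A short NumPy sketch (ours) confirms this, together with the rotation properties:

```python
import numpy as np

def T(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([2.0, -1.0, 2.0]); a /= np.linalg.norm(a)   # unit rotation axis
theta = 0.7
I3 = np.eye(3)
R = np.sin(theta) * T(a) + I3 + (1 - np.cos(theta)) * (T(a) @ T(a))

# R is a proper rotation: orthogonal with determinant 1
print(np.allclose(R.T @ R, I3), np.isclose(np.linalg.det(R), 1.0))

# R lies in the class (1.5) with alpha=sin(theta), beta=cos(theta), gamma=1-cos(theta)
U = np.sin(theta) * T(a) + np.cos(theta) * I3 + (1 - np.cos(theta)) * np.outer(a, a)
print(np.allclose(R, U))  # True
```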
The purpose of the present paper is to identify various properties of the class of matrices specified in (1.5). As a result, several characteristics of the class Υ are established, dealing with such notions and properties as idempotency, determinant, eigenvalues, Moore-Penrose inverse, orthogonality, or eigenspaces.
2. Results
Subsequently, the symbol i is interpreted as i=√(-1). The theorem below states that Υ is closed under multiplication.
Theorem 2.1.
Let T1,T2∈Υ with Tk=αkTa+βkI3+γkaa′, k=1,2. Then T1T2∈Υ.
Proof.
Direct calculations show that
T1T2 = (α1β2+β1α2)Ta + (β1β2-α1α2∥a∥²)I3 + (α1α2+β1γ2+β2γ1+γ1γ2∥a∥²)aa′,
establishing the assertion.
It is clear that, besides being closed, multiplication in Υ is associative. Furthermore, Υ contains the identity element I3. On the other hand, since Υ includes singular matrices, not every T∈Υ has an inverse element in Υ. Thus, the set Υ is a semigroup under (matrix) multiplication. (As will be seen subsequently, the Moore-Penrose inverse of every T∈Υ belongs to Υ; in particular, T⁻¹∈Υ for each nonsingular T∈Υ.)
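The closure property, with the explicit product coefficients appearing in the proof of Theorem 2.1, can be exercised numerically (an illustrative sketch; the helper `member` is our naming):

```python
import numpy as np

def T(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

def member(al, be, ga, a):
    """Element alpha*T_a + beta*I_3 + gamma*aa' of the class (1.5)."""
    return al * T(a) + be * np.eye(3) + ga * np.outer(a, a)

a = np.array([1.0, 2.0, -1.0]); n2 = a @ a
(a1, b1, g1), (a2, b2, g2) = (0.3, -1.2, 0.5), (2.0, 0.7, -0.4)

prod = member(a1, b1, g1, a) @ member(a2, b2, g2, a)
coef = (a1 * b2 + b1 * a2,                           # alpha of the product
        b1 * b2 - a1 * a2 * n2,                      # beta of the product
        a1 * a2 + b1 * g2 + b2 * g1 + g1 * g2 * n2)  # gamma of the product
print(np.allclose(prod, member(*coef, a)))  # True
```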
The next theorem provides necessary and sufficient conditions for T2=T.
Theorem 2.2.
Let T∈Υ. Then T is idempotent if and only if
2αβ = α,  β² - α²∥a∥² = β,  α² + 2βγ + γ²∥a∥² = γ.
Proof.
The equivalence established in the theorem follows straightforwardly from Theorem 2.1.
In view of Theorem 2.2, we can distinguish two subsets of idempotent matrices belonging to Υ corresponding to α≠0 and α=0. In the former of them, α=±i/(2∥a∥), β=1/2, and γ=±1/(2∥a∥²). Hence, since a†=(1/∥a∥²)a′, it follows that T∈{P1,P2,P3,P4}, with
P1 = (1/2)[(i/∥a∥)Ta + I3 + Pa],  P2 = (1/2)[(i/∥a∥)Ta + Qa],  P3 = (1/2)[-(i/∥a∥)Ta + I3 + Pa],  P4 = (1/2)[-(i/∥a∥)Ta + Qa].
(Note that matrices Pk, k=1,…,4, were considered by Trenkler [3] to characterize certain eigenspaces.) On the other hand, if α=0, then Theorem 2.2 entails four cases, namely, β=0, γ=0; β=0, γ=1/∥a∥2; β=1, γ=0; and β=1, γ=-1/∥a∥2, leading to P5=0, P6=Pa, P7=I3, and P8=Qa, respectively.
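The idempotency of these matrices admits a direct numerical check; the sketch below (ours, with an arbitrary nonzero a) verifies P1, P2 and, trivially, Pa and Qa:

```python
import numpy as np

def T(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([1.0, 2.0, -2.0])
na = np.linalg.norm(a)
I3 = np.eye(3)
Pa = np.outer(a, a) / na**2      # orthogonal projector onto R(a)
Qa = I3 - Pa

P1 = 0.5 * ((1j / na) * T(a) + I3 + Pa)
P2 = 0.5 * ((1j / na) * T(a) + Qa)
for P in (P1, P2, Pa, Qa):
    print(np.allclose(P @ P, P))  # True each time
```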
The next task is to characterize the eigenvalues of matrices belonging to Υ. The subsequent theorem expresses the determinant of T∈Υ in terms of the scalars α, β, and γ. Its proof is based on the so-called Leverrier–Souriau–Frame algorithm, which provides a useful tool for calculating the coefficients of a characteristic polynomial. Since the algorithm is not widely known, it is restated in the following lemma; see, for example, Meyer [12, page 504]. Customarily, tr(·) denotes the trace of a matrix argument.
Lemma 2.3.
Let L∈ℂn,n, and let μⁿ + c1μⁿ⁻¹ + c2μⁿ⁻² + ⋯ + cn = 0 be the characteristic equation of L. Then
c1 = -tr(L),  ck = -(1/k)tr(LBk-1),  k = 2,3,…,n,
where B1 = c1In + L and Bk = ckIn + LBk-1, k = 2,3,…,n-1.
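A direct implementation of the recursion in Lemma 2.3, checked against NumPy's characteristic-polynomial routine `np.poly`, may look as follows (an illustrative sketch of ours, not code from the paper):

```python
import numpy as np

def leverrier(L):
    """Coefficients c_1, ..., c_n of the characteristic equation via Lemma 2.3."""
    n = L.shape[0]
    coeffs, M = [], L.copy()          # M holds L @ B_{k-1} (M = L for k = 1)
    for k in range(1, n + 1):
        c = -np.trace(M) / k
        coeffs.append(c)
        M = L @ (c * np.eye(n) + M)   # L @ B_k, used in the next iteration
    return coeffs

L = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, -1.0], [3.0, 0.0, 1.0]])
# np.poly(L) returns [1, c1, c2, c3] for det(mu*I - L)
print(np.allclose(leverrier(L), np.poly(L)[1:]))  # True
```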
Theorem 2.4.
Let T∈Υ, and let τ = α²∥a∥² + β². Then det(T) = τ(β + γ∥a∥²).
Proof.
From Lemma 2.3 it follows that c1=-tr(T) is given by c1 = -(3β+γ∥a∥²), whence it is seen that B1=c1I3+T takes the form
B1 = αTa - (2β+γ∥a∥²)I3 + γaa′.
Further, if k=2, then c2=-(1/2)tr(TB1), where
TB1 = -(αβ+αγ∥a∥²)Ta - (α²∥a∥²+2β²+βγ∥a∥²)I3 + (α²-βγ)aa′.
Since tr(Ta)=0, in consequence we get c2 = α²∥a∥²+3β²+2βγ∥a∥², leading to
B2 = c2I3 + TB1 = -(αβ+αγ∥a∥²)Ta + (β²+βγ∥a∥²)I3 + (α²-βγ)aa′.
Finally, if k=3, then c3=-(1/3)tr(TB2), where
TB2 = (α²β∥a∥² + α²γ∥a∥⁴ + β³ + β²γ∥a∥²)I3.
Hence, straightforward calculations yield c3 = -τ(β+γ∥a∥²). Combining this result with the property det(T)=(-1)³c3 completes the proof.
By virtue of Theorem 2.4, it is easy to determine the spectrum of matrices belonging to Υ.
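Theorem 2.4 is straightforward to confirm numerically for arbitrary parameter choices (our sketch):

```python
import numpy as np

def T(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([0.5, -1.0, 2.0]); n2 = a @ a
al, be, ga = 1.3, -0.8, 0.25
Tm = al * T(a) + be * np.eye(3) + ga * np.outer(a, a)

tau = al**2 * n2 + be**2
print(np.isclose(np.linalg.det(Tm), tau * (be + ga * n2)))  # True
```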
Theorem 2.5.
Let T∈Υ. Then the eigenvalues of T are the solutions μ to the equation [α²∥a∥² + (β-μ)²][β - μ + γ∥a∥²] = 0.
Proof.
It is clear that
det(T-μI3)=det[αTa+(β-μ)I3+γaa′].
Hence, on account of Theorem 2.4, we get
det(T-μI3) = [α²∥a∥² + (β-μ)²][β - μ + γ∥a∥²],
establishing the assertion.
Theorem 2.5 leads to what follows.
Corollary 2.6.
Let T∈Υ. Then the eigenvalues of T are
μ1 = β + γ∥a∥²,  μ2 = β + iα∥a∥,  μ3 = β - iα∥a∥.
An expected result originating from Corollary 2.6 is that μ1μ2μ3=det(T), with det(T) given in Theorem 2.4. Furthermore, when α=1, β=0, and γ=0, then the eigenvalues given in (2.11) reduce to μ1=0, μ2=i∥a∥, and μ3=-i∥a∥, that is, to the eigenvalues of Ta; see [3, Theorem 2].
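Corollary 2.6 can likewise be checked against a numerical eigensolver (our sketch; eigenvalues are compared as multisets after sorting):

```python
import numpy as np

def T(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([1.0, 1.0, -2.0])
na = np.linalg.norm(a); n2 = a @ a
al, be, ga = 0.9, 0.4, -0.3
Tm = al * T(a) + be * np.eye(3) + ga * np.outer(a, a)

mus = np.array([be + ga * n2, be + 1j * al * na, be - 1j * al * na])
eig = np.linalg.eigvals(Tm)
# compare as multisets, sorting by (real, imaginary) part
key = lambda z: (np.round(z.real, 8), np.round(z.imag, 8))
print(np.allclose(sorted(mus, key=key), sorted(eig, key=key)))  # True
```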
The following theorem will be useful in the subsequent calculations of the Moore-Penrose inverses of T∈Υ.
Theorem 2.7.
Let A∈ℂ3,3 be such that A = αTa + βI3, with α,β∈ℂ and Ta of the form (1.3) generated by nonzero a∈ℝ3,1. Moreover, let λ = 1 + γa′A†a, with γ∈ℂ, and τ = α²∥a∥² + β². Then,
(i) det(A) = βτ,
(ii) the eigenvalues of A are σ1 = β, σ2 = β + iα∥a∥, σ3 = β - iα∥a∥,
(iii) if β=0, then A† = -(α†/∥a∥²)Ta and λ = 1,
(iv) if β≠0, τ=0, then A† = (1/(4β))[(α/β)Ta + I3 + 3Pa] and λ = 1 + (γ/β)∥a∥²,
(v) if β≠0, τ≠0, then A⁻¹ = (1/τ)[-αTa + βI3 + (α²/β)aa′] and λ = 1 + (γ/β)∥a∥².
Proof.
Assertion (i) follows from Theorem 2.4 by setting γ=0. Statement (ii) is a consequence of (i), whereas the validity of points (iii) and (v) can be confirmed by straightforward calculations; see [3, Theorem 1]. For the proof of statement (iv) note that β≠0, τ=0 imply α/β=±(i/∥a∥), that is, α/β is purely imaginary. Taking this fact into account, in view of a†=(1/∥a∥2)a′, the validity of the formula for A† given in point (iv) is seen by direct verification of conditions (1.1). Similarly, the formula for A-1 provided in point (v) can be confirmed by examining the condition AA-1=I3. The proof is thus complete, for the expressions for λ given in points (iv) and (v) are easily obtainable.
Note that regardless of whether β and/or τ in Theorem 2.7 are zero, the matrix A satisfies PA=PA*, or, in other words, ℛ(A)=ℛ(A*), that is, A is an EP matrix. Another observation is that setting α=1, β=1 in Theorem 2.7 leads to the relationship
(Ta+I3)⁻¹ = (1/(∥a∥²+1))(-Ta + I3 + aa′).
Hence, the so-called Cayley transform of Ta (see [13, p. 219]), being of the form S = (-Ta+I3)(Ta+I3)⁻¹, which is known to be orthogonal (see [9, Theorem 8.1.10]), takes the form
S = (1/(∥a∥²+1))[-2Ta + (1-∥a∥²)I3 + 2aa′].
Since det(S)=1, the matrix S represents in fact a proper rotation.
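Both the closed form of S and its rotation properties admit a quick numerical confirmation (our sketch):

```python
import numpy as np

def T(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([2.0, 0.0, -1.0]); n2 = a @ a
I3 = np.eye(3)

# Cayley transform computed from its definition ...
S_def = (-T(a) + I3) @ np.linalg.inv(T(a) + I3)
# ... and from the closed-form expression above
S = (-2 * T(a) + (1 - n2) * I3 + 2 * np.outer(a, a)) / (n2 + 1)
print(np.allclose(S_def, S),
      np.allclose(S.T @ S, I3),
      np.isclose(np.linalg.det(S), 1.0))  # True True True
```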
We now have the tools necessary to establish formulae for the Moore-Penrose inverses of T∈Υ.
Theorem 2.8.
Let T∈Υ be decomposed as T = A + γaa′, where A = αTa + βI3. Moreover, let λ = 1 + γa′A†a and τ = α²∥a∥² + β². Then,
(i) if β=0, then T† = (1/∥a∥²)(-α†Ta + γ†Pa),
(ii) if β≠0, τ=0, λ=0, then T† = (1/(4β))[(α/β)Ta + Qa],
(iii) if β≠0, τ=0, λ≠0, then T† = (1/(4β))[(α/β)Ta + I3 + ((3β-γ∥a∥²)/(β+γ∥a∥²))Pa],
(iv) if β≠0, τ≠0, λ=0, then T† = (1/τ)(-αTa + βQa),
(v) if β≠0, τ≠0, λ≠0, then T⁻¹ = (1/τ)[-αTa + βI3 + ((α²-βγ)/(β+γ∥a∥²))aa′].
Proof.
The first observation is that if β=0, then on account of Theorem 3.1.1 in [14], we have T†=(αTa)†+(γaa′)†. Hence, by utilizing Ta†=-(1/∥a∥2)Ta and (aa′)†=(1/∥a∥2)Pa, the formula for T† given in point (i) follows.
Assume now that β≠0, in which case we can still have τ=0 or τ≠0. In the former of these situations, point (iv) of Theorem 2.7 implies PA=(1/2)((α/β)Ta+I3+Pa), whence PAa=a, or, in other words, a∈ℛ(A). This inclusion is clearly satisfied also when τ≠0, for then det(A)≠0.
In order to apply the results of Baksalary et al. in [15], we introduce b=γa, c=a. Then, T=A+bc′=A+bc*, that is, T is a rank-one modification of A. As in [15], we define also the vectors d,e,f,g∈ℂ3,1 according to
d=A†b,e=(A†)*c,f=QAb,g=QA*c,
and denote the squares of the norms of the first two of them by δ and η, that is, δ=∥d∥², η=∥e∥². As is seen from Theorem 2.7, the scalar λ∈ℂ specified in [15] by λ=1+c*A†b now takes the form λ = 1 + γa′A†a = 1 + (γ/β)∥a∥².
Let us first consider case (ii) of the theorem, characterized, in addition to β≠0, by τ=0, λ=0. On account of Theorem 1.1 in [15] this case corresponds to rk(T)=rk(A)-1. As can be directly verified with the use of A† given in point (iv) of Theorem 2.7, d=(γ/β)a and e=(1/β̄)a, from where δ=|γ/β|²∥a∥² and η=(1/|β|²)∥a∥². Moreover, we get A†e=(1/|β|²)a, implying d*A†e=(γ̄/(|β|²β̄))∥a∥² and A†ee*=(1/(|β|²β))aa′. Furthermore, dd*=|γ/β|²aa′ and de*=(γ/β²)aa′. Thus,
δ⁻¹dd* = Pa,  η⁻¹A†ee* = (1/β)Pa,  δ⁻¹η⁻¹(d*A†e)de* = (1/β)Pa.
In consequence, from formula (2.1) in [15] we get T†=QaA†, whence the expression for T† claimed in point (ii) of the theorem follows.
According to Theorem 1.1 in [15], another case which corresponds to rk(T)=rk(A)-1 is given in point (iv) of the theorem, where τ≠0, λ=0. Direct calculations with the use of A-1 given in point (v) of Theorem 2.7 show that formulae (2.15) remain valid also in this case. Hence, from relationship (2.1) in [15] we obtain T†=QaA-1, leading to the expression claimed in point (iv) of the theorem.
Another conclusion originating from Theorem 1.1 in [15] is that case (iii) of the theorem, in which τ=0, λ≠0, corresponds to rk(T)=rk(A). Direct calculations with the use of A† given in point (iv) of Theorem 2.7 show that
λ⁻¹de* = (γ/(β(β+γ∥a∥²)))aa′,
and substituting this relationship into formula (2.2) in [15] leads to the expression for T† given in the theorem.
Case (v), in which τ≠0, λ≠0, is left to be considered. According to the remark on p. 210 in [15], in such a situation T is nonsingular. Hence, T†=T-1, and the validity of the formula given in the theorem can be confirmed by direct verifications of the condition TT-1=I3.
A conclusion originating from Theorem 2.8 is that T†∈Υ for every T∈Υ, a property already mentioned in the remark following Theorem 2.1. Further consequences of Theorem 2.8 are as follows.
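Cases (i) and (v) of Theorem 2.8 can be verified against NumPy's `pinv` and `inv` (our sketch, with arbitrary admissible parameters):

```python
import numpy as np

def T(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([1.0, -2.0, 2.0]); n2 = a @ a
Pa = np.outer(a, a) / n2

# case (i): beta = 0 with alpha and gamma nonzero
al, ga = 1.5, -0.7
Tm = al * T(a) + ga * np.outer(a, a)
T_dag = (-(1 / al) * T(a) + (1 / ga) * Pa) / n2
print(np.allclose(T_dag, np.linalg.pinv(Tm)))  # True

# case (v): beta, tau and lambda all nonzero, so T is nonsingular
be = 0.6
tau = al**2 * n2 + be**2
Tm2 = al * T(a) + be * np.eye(3) + ga * np.outer(a, a)
T_inv = (-al * T(a) + be * np.eye(3)
         + (al**2 - be * ga) / (be + ga * n2) * np.outer(a, a)) / tau
print(np.allclose(T_inv, np.linalg.inv(Tm2)))  # True
```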
Corollary 2.9.
Let T∈Υ, and let λ = 1 + γa′A†a, τ = α²∥a∥² + β². Then PT = PT*, where
(i) if β=0, then PT = αα†Qa + γγ†Pa,
(ii) if β≠0, τ=0, λ=0, then PT = P2 provided that α/β = i/∥a∥ and PT = P4 provided that α/β = -i/∥a∥,
(iii) if β≠0, τ=0, λ≠0, then PT = P1 provided that α/β = i/∥a∥ and PT = P3 provided that α/β = -i/∥a∥,
(iv) if β≠0, τ≠0, λ=0, then PT = Qa,
(v) if β≠0, τ≠0, λ≠0, then PT = I3,
with Pk, k=1,…,4, as specified in (2.3).
Proof.
The corollary is established by direct calculations.
Point (v) of Theorem 2.8 makes it possible to formulate necessary and sufficient conditions for T∈Υ to be orthogonal.
Theorem 2.10.
Let T∈Υ with nonzero α,β,γ∈ℝ. Moreover, let τ = α²∥a∥² + β². Then T∈Υ is orthogonal if and only if
τ = 1,  (β + γ∥a∥²)² = 1.
Proof.
The matrix T is orthogonal if and only if it is nonsingular and T-1=T′. On account of point (v) of Theorem 2.8, it is seen that T-1=T′ is equivalent to
α = ατ,  β = βτ,  γ = (α² - βγ)/(β + γ∥a∥²),
or, in other words, τ=1 and γ(β+γ∥a∥²) = α² - βγ. Taking into account that τ=1 implies α² = (1/∥a∥²)(1-β²), the assertion follows.
Observe that the right-hand side condition in (2.17) admits two possibilities, namely, either β+γ∥a∥2=1 or β+γ∥a∥2=-1. Since τ=1, Theorem 2.4 ensures that in the former situation T is a proper rotation, whereas in the latter one T is an improper rotation.
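The conditions of Theorem 2.10 are easy to exercise numerically: fix β∈(-1,1), solve τ=1 for α, and choose γ so that β+γ∥a∥²=1, which yields a proper rotation (our sketch):

```python
import numpy as np

def T(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([1.0, 0.0, -1.0]); n2 = a @ a
I3 = np.eye(3)

be = 0.6
al = np.sqrt((1 - be**2) / n2)   # makes tau = alpha^2*||a||^2 + beta^2 = 1
ga = (1 - be) / n2               # makes beta + gamma*||a||^2 = 1 (proper rotation)
Tm = al * T(a) + be * I3 + ga * np.outer(a, a)
print(np.allclose(Tm.T @ Tm, I3), np.isclose(np.linalg.det(Tm), 1.0))  # True True
```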
The next theorem concerns eigenspaces attributed to T∈Υ.
Theorem 2.11.
Let T∈Υ, and let ℰ(μj), j=1,2,3, be the eigenspaces of T associated with its eigenvalues μj given in (2.11). Then,
(i) if α=0, then ℰ(μ2)=ℛ(I3-γγ†Pa), ℰ(μ3)=ℛ(I3-γγ†Pa),
(ii) if α≠0, then ℰ(μ2)=ℛ(Q2) provided that γ/α=i/∥a∥ and ℰ(μ2)=ℛ(Q1) otherwise, and, simultaneously, ℰ(μ3)=ℛ(Q4) provided that γ/α=-i/∥a∥ and ℰ(μ3)=ℛ(Q3) otherwise,
(iii) if γ=0, then ℰ(μ1)=ℛ(I3-αα†Qa),
(iv) if γ≠0, then ℰ(μ1)=ℛ(Q2) provided that α/γ=-i∥a∥, ℰ(μ1)=ℛ(Q4) provided that α/γ=i∥a∥, and ℰ(μ1)=ℛ(a) otherwise,
where Qk=I3-Pk, k=1,…,4, with Pk as specified in (2.3).
Proof.
It is known that ℰ(μj)=ℛ[I3-T(μj)T(μj)†], where T(μj)=T-μjI3, j=1,2,3; see, for example, [3]. Clearly, for each μj, matrix T(μj) can be written as
T(μj)=αTa+β̃I3+γaa′,
where β̃ = β - μj. For μ1 = β + γ∥a∥², we have β̃ = -γ∥a∥². By virtue of the equivalence β̃=0 ⇔ γ=0, statement (i) of Corollary 2.9 leads to point (iii) of the theorem. If, however, β̃≠0, that is, γ≠0, then λ = 1 + (γ/β̃)∥a∥² = 0, which means that cases (iii) and (v) of Corollary 2.9, characterized by λ≠0, are not attainable in the present situation. Further observations are that τ = α²∥a∥² + β̃² = (α² + γ²∥a∥²)∥a∥² and α/β̃ = -(α/γ)(1/∥a∥²). In view of these facts, it is seen that statements (ii) and (iv) of Corollary 2.9 lead to the characterizations of ℰ(μ1) given in point (iv) of the theorem.
Next we consider the eigenvalue μ2 = β + iα∥a∥, which ensures that β̃ occurring in (2.19) is given by β̃ = -iα∥a∥. Since β̃=0 is equivalent to α=0, on account of statement (i) of Corollary 2.9 we arrive at the characterization of ℰ(μ2) given in point (i) of the theorem. On the other hand, if β̃≠0, that is, α≠0, then τ is necessarily equal to zero, which means that cases (iv) and (v) of Corollary 2.9 are to be excluded from the present considerations. Furthermore, it is seen that λ = 1 + i(γ/α)∥a∥ and α/β̃ = i/∥a∥. With these facts taken into account, we conclude that statements (ii) and (iii) of Corollary 2.9 lead to the characterizations of ℰ(μ2) provided in point (ii) of the theorem.
The last eigenvalue to be considered is μ3=β-iα∥a∥, for which β̃=iα∥a∥. In this case, analogous arguments to the ones used with respect to μ2 lead to the eigenspaces ℰ(μ3) in points (i) and (ii) of the theorem. The proof is complete.
Observe that if α=1, β=0, and γ=0, then from Theorem 2.11 we get ℰ(μ1)=ℛ(a), ℰ(μ2)=ℛ(Q1), and ℰ(μ3)=ℛ(Q3), that is, the eigenspaces of Ta identified by Trenkler [3, Sec. 3].
Theorem 2.11 is supplemented with examples demonstrating its applicability. Let a=(1,0,-1)′. Then,
Ta = (  0   1   0
       -1   0  -1
        0   1   0 ),
aa′ = (  1   0  -1
         0   0   0
        -1   0   1 ),
leading to
T = ( β+γ    α   -γ
      -α     β   -α
      -γ     α   β+γ )
of eigenvalues μ1 = β+2γ, μ2 = β+i√2α, μ3 = β-i√2α. From the right-hand side formula in (2.20) we get
Pa = (  1/2   0  -1/2
         0    0    0
       -1/2   0   1/2 ),
and the projectors Qk, k=1,…,4, involved in Theorem 2.11 are of the forms
Q1 = (  1/4    -i√2/4   1/4
        i√2/4   1/2     i√2/4
        1/4    -i√2/4   1/4 ),
Q2 = (  3/4    -i√2/4  -1/4
        i√2/4   1/2     i√2/4
       -1/4    -i√2/4   3/4 ),
Q3 = (  1/4     i√2/4   1/4
       -i√2/4   1/2    -i√2/4
        1/4     i√2/4   1/4 ),
Q4 = (  3/4     i√2/4  -1/4
       -i√2/4   1/2    -i√2/4
       -1/4     i√2/4   3/4 ).
Hence, from Theorem 2.11 we obtain what follows.
If α=0, then ℰ(μ2)=ℰ(μ3)=ℂ3,1 provided that γ=0, and
ℰ(μ2) = ℰ(μ3) = span{(1,0,1)′, (0,1,0)′}
otherwise.
Next, if α≠0, then
ℰ(μ2) = span{(1,0,-1)′, (0,1,-i√2)′}
provided that γ/α = i√2/2, and
ℰ(μ2) = span{(1,i√2,1)′}
otherwise; simultaneously,
ℰ(μ3) = span{(1,0,-1)′, (0,1,i√2)′}
provided that γ/α = -i√2/2, and
ℰ(μ3) = span{(1,-i√2,1)′}
otherwise.
Further, if γ=0, then ℰ(μ1)=ℂ3,1 provided that α=0 and ℰ(μ1)=ℛ(a) otherwise.
Finally, if γ≠0, then
ℰ(μ1) = span{(1,0,-1)′, (0,1,-i√2)′}
provided that α/γ = -i√2,
ℰ(μ1) = span{(1,0,-1)′, (0,1,i√2)′}
provided that α/γ = i√2, and ℰ(μ1)=ℛ(a) otherwise.
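For the generic subcase (real parameters, so γ/α ≠ ±i√2/2), the one-dimensional eigenspace characterizations above can be confirmed directly (our sketch):

```python
import numpy as np

a = np.array([1.0, 0.0, -1.0])
Ta = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, -1.0], [0.0, 1.0, 0.0]])

al, be, ga = 1.0, 0.5, 0.3          # generic case: gamma/alpha is not i*sqrt(2)/2
Tm = al * Ta + be * np.eye(3) + ga * np.outer(a, a)

mu2 = be + 1j * np.sqrt(2) * al
v = np.array([1.0, 1j * np.sqrt(2), 1.0])   # spans E(mu2) in the generic case
print(np.allclose(Tm @ v, mu2 * v))  # True
```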
Further consequences of Theorem 2.11 deal with the proper rotation matrices; for a more detailed discussion see [16]. G. Trenkler and D. Trenkler [6, Sec. 3] identified three types of proper rotations: Type I, covering matrices of the form R=I3; Type II, covering matrices of the form R=2aa′-I3, where ∥a∥=1; and Type III, covering matrices of the form R=I3+fa(Ta+Ta²), where fa=2/(∥a∥²+1). Moreover, it was pointed out in [6] that each proper rotation can be attributed to one of these types. Direct calculations show that rotations of Type I are obtained from the representation (1.5) by taking α=0, β=1, γ=0; those of Type II by taking α=0, β=-1, γ=2, and ∥a∥=1; and those of Type III by taking α=fa, β=1-fa∥a∥², γ=fa, where fa=2/(∥a∥²+1). Combining these observations with Corollary 2.6 leads to the conclusion that rotations of Type I have eigenvalues μ1=1, μ2=1, μ3=1; those of Type II have eigenvalues μ1=1, μ2=-1, μ3=-1; and those of Type III have eigenvalues μ1=1, μ2=1-fa∥a∥(∥a∥-i), μ3=1-fa∥a∥(∥a∥+i). Furthermore, from Theorem 2.11 we obtain the following characterizations of the eigenspaces.
Corollary 2.12.
Let R∈Υ be a proper rotation, and let ℰ(μj), j=1,2,3, be the eigenspaces of R associated with its eigenvalues μj. Then,
(i) if R is of Type I, then ℰ(μ1)=ℂ3,1, ℰ(μ2)=ℂ3,1, ℰ(μ3)=ℂ3,1,
(ii) if R is of Type II, then ℰ(μ1)=ℛ(a), ℰ(μ2)=ℛ(Qa), ℰ(μ3)=ℛ(Qa),
(iii) if R is of Type III, then ℰ(μ1)=ℛ(a), ℰ(μ2)=ℛ(Q1), ℰ(μ3)=ℛ(Q3), where Qk=I3-Pk, k=1,3, with Pk as specified in (2.3).
References
[1] T. G. Room, The composition of rotations in euclidean three-space, American Mathematical Monthly 59 (1952), 688–692. doi:10.2307/2307548
[2] G. Trenkler, Vector equations and their solutions, International Journal of Mathematical Education in Science and Technology 29 (1998), 455–459.
[3] G. Trenkler, The vector cross product from an algebraic point of view, Discussiones Mathematicae – General Algebra and Applications 21 (2001), 67–82.
[4] J. Groß, G. Trenkler, S.-O. Troschke, The vector cross product in ℂ³, International Journal of Mathematical Education in Science and Technology 30 (1999), 549–555. doi:10.1080/002073999287815
[5] D. S. Bernstein, Matrix Mathematics, 2nd ed., Princeton University Press, Princeton, NJ, 2009.
[6] G. Trenkler, D. Trenkler, On the product of rotations, International Journal of Mathematical Education in Science and Technology 39 (2008), 94–104. doi:10.1080/00207390601115054
[7] B. Noble, Applied Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1969.
[8] R. M. Murray, Z. X. Li, S. S. Sastry, A Mathematical Introduction to Robotic Manipulation, CRC Press, Boca Raton, FL, 1994.
[9] L. Mirsky, An Introduction to Linear Algebra, Dover, New York, NY, 1990.
[10] A. S. Householder, The Theory of Matrices in Numerical Analysis, Dover, New York, NY, 1964.
[11] D. Trenkler, G. Trenkler, Problem 29-12: Matrices commuting with the vector cross product, IMAGE: The Bulletin of the International Linear Algebra Society 29 (2002), 35.
[12] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, PA, 2000.
[13] P. Lancaster, M. Tismenetsky, The Theory of Matrices, 2nd ed., Academic Press, Orlando, FL, 1985.
[14] S. L. Campbell, C. D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman, London, 1979.
[15] J. K. Baksalary, O. M. Baksalary, G. Trenkler, A revisitation of formulae for the Moore-Penrose inverse of modified matrices, Linear Algebra and its Applications 372 (2003), 207–224. doi:10.1016/S0024-3795(03)00508-1
[16] O. M. Baksalary, G. Trenkler, Eigenspaces of the proper rotation matrices, International Journal of Mathematical Education in Science and Technology 41 (2010), 827–829. doi:10.1080/00207391003675174