Relativistic wave equations with fractional derivatives and pseudo-differential operators

The class of free relativistic covariant equations generated by the fractional powers of the d'Alembertian operator $(\square^{1/n})$ is studied. While the equations corresponding to n = 1 and 2 (the Klein-Gordon and Dirac equations) are local in their nature, the multicomponent equations for arbitrary n > 2 are non-local. It is shown what the representations of the generalized algebras of the Pauli and Dirac matrices look like and how these matrices are related to the algebra of the SU(n) group. The corresponding representations of the Poincar\'e group and further symmetry transformations of the obtained equations are discussed. The construction of the related Green functions is suggested.


Introduction
The relativistic covariant wave equations represent an intersection of the ideas of the theory of relativity and of quantum mechanics. The first and best known relativistic equations, the Klein-Gordon and particularly the Dirac equation, belong to the essentials on which our present understanding of the microworld is based. In this sense it is quite natural that the search for and the study of further types of such equations represent a field of stable interest; for a review see e.g. [1] and the citations therein. In fact, attention has been paid first of all to the study of equations corresponding to the higher spins (s ≥ 1) and to attempts to solve the problems which have been revealed in connection with these equations, e.g. the acausality due to external fields introduced in the minimal way.
In this paper we study the class of equations obtained by the 'factorization' of the d'Alembertian operator, i.e. by a generalization of the procedure by which the Dirac equation is obtained. As a result, for each degree n of the root we get a multicomponent equation, whereat the case n = 2 corresponds to the Dirac equation. However, the equations for n > 2 differ substantially from the cases n = 1, 2, since they contain fractional derivatives (or pseudo-differential operators), so that in effect their nature is non-local.
In the first part (Sec. 2), the generalized algebras of the Pauli and Dirac matrices are considered and their properties are discussed, in particular their relation to the algebra of the SU(n) group. The second, main part (Sec. 3) deals with the covariant wave equations generated by the roots of the d'Alembertian operator; these roots are defined with the use of the generalized Dirac matrices. In this section we show the explicit form of the equations, their symmetries and the corresponding transformation laws. We also define the scalar product and construct the corresponding Green functions. The last section (Sec. 4) is devoted to the summary and concluding remarks.
Let us remark that the application of pseudo-differential operators in relativistic equations is nothing new. Very interesting aspects of the scalar relativistic equations based on the square root of the Klein-Gordon equation are pointed out e.g. in the papers [2]-[4]. Recently, an interesting approach to the scalar relativistic equations based on the pseudo-differential operators of the type $f(\square)$ has been proposed in the paper [5]. One can mention also the papers [6], [7], in which the square and cubic roots of the Dirac equation were studied in the context of supersymmetry. The cubic roots of the Klein-Gordon equation were discussed in the recent papers [8], [9].
It should be observed that our considerations concerning the generalized Pauli and Dirac matrices (Sec. 2) have much in common with the earlier studies related to the generalized Clifford algebras (see e.g. [10]-[12] and the citations therein) and with the paper [13], even if our starting motivation is rather different.

Generalized algebras of Pauli and Dirac matrices
Everywhere in what follows, by the term matrix we mean a square n × n matrix, unless stated otherwise. The considerations of this section are based on the matrix pair introduced as follows.
Definition 1 For any n ≥ 2 we define the matrices
$$ S=\begin{pmatrix}0& & &1\\ 1&0& & \\ &\ddots&\ddots& \\ & &1&0\end{pmatrix},\qquad T=\begin{pmatrix}1& & & \\ &\alpha& & \\ & &\ddots& \\ & & &\alpha^{n-1}\end{pmatrix}, $$
where α = exp(2πi/n) and the remaining empty positions are zeros.

Lemma 2 The matrices S, T satisfy the relations
$$ S^{n} = T^{n} = I,\qquad TS = \alpha\, ST, $$
where I denotes the unit matrix.
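Up to conventions, this is the standard clock-and-shift pair. A minimal numerical sketch (assuming T diagonal in powers of α and S the cyclic shift, which is one possible convention for Definition 1) verifies the relations S^n = T^n = I and TS = αST:

```python
import numpy as np

n = 5
alpha = np.exp(2j * np.pi / n)

# Cyclic shift matrix: S maps the basis vector e_j to e_{(j+1) mod n}
S = np.roll(np.eye(n), 1, axis=0)
# Clock matrix: T = diag(1, alpha, alpha^2, ..., alpha^{n-1})
T = np.diag(alpha ** np.arange(n))

I = np.eye(n)
# S^n = T^n = I
assert np.allclose(np.linalg.matrix_power(S, n), I)
assert np.allclose(np.linalg.matrix_power(T, n), I)
# TS = alpha * ST  (the generalized commutation relation)
assert np.allclose(T @ S, alpha * (S @ T))
```

For n = 2 this pair reduces to σ_x and σ_z, anticipating the role of the Pauli matrices below.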
Proof: All the relations easily follow from Definition 1.

Definition 3 Let A be some algebra over the field of complex numbers, let (p, m) be a pair of natural numbers, X_1, X_2, ..., X_m ∈ A and a_1, a_2, ..., a_m ∈ C. The p-th power of the linear combination of the elements X_k can be expanded as
$$ (a_1X_1 + a_2X_2 + \cdots + a_mX_m)^p = \sum_{p_1+p_2+\cdots+p_m=p} a_1^{p_1}a_2^{p_2}\cdots a_m^{p_m}\,\{X_1^{p_1}, X_2^{p_2}, \ldots, X_m^{p_m}\}, $$
where the symbol $\{X_1^{p_1}, X_2^{p_2}, \ldots, X_m^{p_m}\}$ represents the sum of all possible products created from the elements X_k in such a way that each product contains the element X_k exactly p_k times. We shall call this symbol the combinator.
3) The relation (2.18) can be proved by induction. First let us assume p = 1; then its l.h.s. reads
$$ \sum_{k_1=0}^{k_2} z^{k_1} = \frac{1-z^{k_2+1}}{1-z} $$
and the r.h.s. gives the same, so for p = 1 the relation is valid. Now let us suppose the relation holds for p and calculate the case p + 1; since $z^{r\cdot k} = z^{-p\cdot k}$, the sum can be rewritten so that it coincides with G_p(z), which is zero according to the already proven identity (2.16). Let us remark that the last lemma implies also the known formula
$$ \prod_{k=1}^{n}\left(x - \alpha^{k}y\right) = x^{n} - y^{n}. \quad (2.27) $$
The product can be expanded as $\sum_j c_j x^{n-j}(-y)^j$ and one can easily check that the boundary terms j = 0 and j = n have the required coefficients. For the remaining j, 0 < j < n, the coefficient c_j is given by a multiple sum which is a special case of the formula (2.12), and since α^n = 1, the identity (2.19) is satisfied. Therefore for 0 < j < n we get c_j = 0 and the formula (2.27) is proved.
Definition 6 Let us have a matrix product created from some string of the matrices X, Y in such a way that the matrix X is involved in total p times and Y r times. By the symbol P_j^+ (P_j^-) we denote the permutation which shifts the leftmost (rightmost) matrix to the right (left), to the position in which the shifted matrix has j matrices of the other kind on its left (right). (The range of j is restricted by p or r, depending on whether the shifted matrix is Y or X.)

Example 7
Now we can prove the following theorem. Obviously this equation is valid irrespective of the assumption α^{p+r} = 1, i.e. it holds for any n and α = exp(2πi/n). It follows that Eq. (2.33) is satisfied for any α.

Lemma 10 The product of two matrices (2.34) satisfies
$$ Q_{rs} Q_{pq} = \alpha^{s\cdot p}\, Q_{kl};\qquad k = \mathrm{mod}(r + p - 1, n) + 1,\quad l = \mathrm{mod}(s + q - 1, n) + 1. \quad (2.35) $$

Theorem 11 The matrices Q_{pr} are linearly independent and any matrix A (of the same dimension) can be expressed as their linear combination.

Proof: Let us suppose that $\sum_{kl} a_{kl} Q_{kl} = 0$ with some $a_{rs} \neq 0$. Multiplying the sum by $Q_{rs}^{\dagger}$ and taking the trace gives
$$ \sum_{kl} a_{kl}\, \mathrm{Tr}\left( Q_{rs}^{\dagger} Q_{kl} \right) = a_{rs}\, n = 0. $$
This equation contradicts our assumption; therefore the matrices are independent and obviously represent a basis of the linear space of n × n matrices, which, with the use of the previous lemma, implies the relations (2.42).
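Theorem 11 can be illustrated numerically. A minimal sketch, assuming the clock-and-shift convention for the S, T of Definition 1, checks both the linear independence of the n² matrices Q_pr = S^p T^r and the trace orthogonality used in the proof:

```python
import numpy as np

n = 4
alpha = np.exp(2j * np.pi / n)
S = np.roll(np.eye(n), 1, axis=0)      # cyclic shift matrix
T = np.diag(alpha ** np.arange(n))     # clock matrix

# Build all Q_pr = S^p T^r and flatten each into a row vector
Q = [np.linalg.matrix_power(S, p) @ np.linalg.matrix_power(T, r)
     for p in range(n) for r in range(n)]
M = np.array([q.ravel() for q in Q])

# Full rank n^2  <=>  the Q_pr are linearly independent
assert np.linalg.matrix_rank(M) == n * n

# Trace orthogonality: Tr(Q_rs^dagger Q_kl) = n * delta_rk * delta_sl
G = np.array([[np.trace(a.conj().T @ b) for b in Q] for a in Q])
assert np.allclose(G, n * np.eye(n * n))
```

The Gram matrix G being n times the identity is exactly the relation used to extract the coefficients a_rs in the proof.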
Theorem 12 For any n ≥ 2, among the n² matrices (2.34) there exists a triad satisfying the relations (2.43), and moreover, if n ≥ 3, then also the relation (2.44).

Proof:
We shall show the relations hold e.g. for the indices λ = 1n, µ = 11, ν = n1. Let us denote the corresponding matrices X, Y, Z. Actually, the relation {X^p, Z^r} = 0 is already proven in Theorem 8; obviously the remaining relations (2.43) can be proved in exactly the same way. The combinator (2.44) can be expressed similarly as in the proof of Theorem 8, which for matrices obeying the relations (2.46) gives an expression whose first multiple sum (with indices j) coincides with Eq. (2.12) and satisfies the condition for Eq. (2.19); therefore the r.h.s. is zero and the theorem is proved. Now let us make a few remarks to illuminate the content of the last theorem and the meaning of the matrices Q_λ. Obviously, the relations (2.43), (2.44) are equivalent to the statement: any three complex numbers a, b, c satisfy
$$ (aQ_\lambda + bQ_\mu + cQ_\nu)^n = (a^n + b^n + c^n)I. \quad (2.48) $$
Further, the theorem speaks about the existence of the triad, but not about the number of such triads. Generally, for n > 2 there is more than one triad defined by the theorem, but on the other hand not every three distinct matrices from the set Q_rs comply with the theorem. A simple example is a triple containing some X, Y with e.g. XY = YX, which happens for Y ∼ X^p, 2 ≤ p < n; obviously in this case at least the relation (2.43) is surely not satisfied. A computer check of the relation (2.47), which has been done with all possible triads from Q_rs for 2 ≤ n ≤ 20, suggests that a triad X, Y, Z for which there exist numbers p, r, s ≥ 1 with p + r + s ≤ n such that X^p Y^r Z^s ∼ I also does not comply with the theorem. Further, the result on the r.h.s. of Eq. (2.47) generally depends on the factors β_k in the relations (2.46), and the computer check suggests that the sets in which β_k^p = 1 for some β_k and p < n also contradict the theorem.
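The statement (2.48) can be spot-checked numerically for n = 3. Assuming the clock-and-shift convention, the triad with indices λ = 1n, µ = 11, ν = n1 corresponds to X = S, Y = ST, Z = T (since S^n = T^n = I); a sketch with random complex coefficients:

```python
import numpy as np

n = 3
alpha = np.exp(2j * np.pi / n)
S = np.roll(np.eye(n), 1, axis=0)      # shift matrix
T = np.diag(alpha ** np.arange(n))     # clock matrix

# Triad corresponding to lambda = 1n, mu = 11, nu = n1:
# Q_1n = S T^n = S,  Q_11 = S T,  Q_n1 = S^n T = T
X, Y, Z = S, S @ T, T

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=3) + 1j * rng.normal(size=3)

lhs = np.linalg.matrix_power(a * X + b * Y + c * Z, n)
rhs = (a**n + b**n + c**n) * np.eye(n)
# Eq. (2.48): (a Q_lambda + b Q_mu + c Q_nu)^n = (a^n + b^n + c^n) I
assert np.allclose(lhs, rhs)
```

All mixed terms in the expansion cancel through sums of the roots of unity (1 + α + α² = 0), exactly as in the combinator argument of the proof.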
In this way the number of different triads obeying the relations (2.43), (2.44) is a rather complicated function of n, as shown in the table

n  :  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20
#3 :  1  1  1  4  1  9  4  9   4  25   4  36   9  16  16  64   9  81  16

Here the statement that a triad X, Y, Z is different from X′, Y′, Z′ means that after any rearrangement of the symbols X, Y, Z for the marking of the matrices in the given set there is always at least one pair β_k ≠ β′_k. Naturally, one can ask if there exists also a set of four or, generally, N matrices which satisfy a relation similar to Eq. (2.48). For 2 ≤ n ≤ 10 and N = 4 the computer suggests a negative answer, at least in the case of the matrices generated according to Definition 9. However, one can verify: if U_l, l = 1, 2, 3 is a triad complying with the theorem (or, equivalently, with the relation (2.48)), then the n² × n² matrices constructed from them satisfy the analogous relation, where the last multiple sum equals zero according to the relations (2.12) and (2.19). Obviously, for n = 2 the matrices (2.45) and the matrices (2.51), (2.52) created from them correspond, up to some phase factors, to the Pauli matrices σ_j and the Dirac matrices γ_µ. Obviously, from the set of the matrices Q_rs (with the exception of Q_nn = I) one can easily make the n² − 1 generators of the fundamental representation of the SU(n) group, where a_rs are suitable factors, and a suitable choice gives the standard commutation relations.

Wave equations generated by the roots of the d'Alembertian operator

The generalized Dirac matrices allow us to write down a set of algebraic equations in which the variables µ, π_λ represent the fractional powers of the mass and the momentum components; after an (n − 1)-times repeated application of the operator Γ to this equation the Klein-Gordon relation p² = m² is recovered. For n > 2 the Eq. (3.2) is new and more complicated, immediately invoking some questions; in the present paper we shall attempt to answer at least some of them. One can check that the solution of the set (3.2) can be expressed in terms of the matrices Q_l (where U_l is the triad from which the matrices Q_l are constructed in accordance with Eqs. (2.51), (2.52)) and of h_1, h_2, ..., h_n, which are arbitrary functions of p.
At the same time, the π_λ satisfy the corresponding constraint. First of all, one should notice that in Eq. (3.2) the fractional powers of the momentum components appear, which means that the equation in the x-representation will contain the fractional derivatives. Our primary considerations will concern the p-representation, but afterwards we shall show how the transition to the x-representation can be realized by means of the Fourier transformation, in accordance with the approach suggested in [14]. A further question concerns the relativistic covariance of Eq. (3.2): how does one transform simultaneously the operator
$$ \Gamma(p) \rightarrow \Gamma(p') = \Lambda\Gamma(p)\Lambda^{-1} \quad (3.10) $$
and the solution, so as to preserve the equal form of the operator Γ for the initial variables p_λ and the boosted ones p′_λ?

Infinitesimal transformations
First let us consider the infinitesimal transformations, where dω represents the infinitesimal values of the six parameters of the Lorentz group corresponding to the space rotations and the Lorentz boosts, and where tanh ψ_i = v_i/c ≡ β_i defines the corresponding velocity. Here and everywhere in what follows we use the convention that in expressions involving the antisymmetric tensor ε_{ijk} the summation over indices appearing twice is understood. From the infinitesimal transformations (3.13), (3.14) one can obtain the finite ones.
For the three space rotations we obtain the transformations (3.15), (3.16) and for the Lorentz boosts similarly (3.17), (3.18). The definition of the six parameters implies that the corresponding infinitesimal transformation of the reference frame p → p′ changes a function f(p) through the derivative d/dω defined in (3.21). Obviously, this equation combined with Eq. (3.21) is identical to Eqs. (3.13), (3.14). Further, with the use of the formulas (3.12) and (3.21), the relations (3.10), (3.11) can be rewritten in the infinitesimal form. If we define the corresponding operators L_ω, these six operators are the generators of the corresponding representation of the Lorentz group, so they have to satisfy the commutation relations (3.29), (3.30).

Definition 13 Let Γ_1(p), Γ_2(p) and X be square matrices of the same dimension; then for any matrix X we define the form (3.31). One can easily check that the matrix Z satisfies e.g.
where Q_0 is the matrix (2.51), i.e. there exists a set of transformations Y such that the relation (3.37) holds, a particular form of which reads (3.38). The last sum can be rearranged: instead of the summation index j we use a new one, k = i − j for i ≥ j and k = i − j + n for i < j, with k = 0, ..., n − 1; then Eq. (3.40) takes a simpler form, and if we take into account that Γ_0^n = Γ^n = p², the sum can be simplified further. For the term k = 0 we get the corresponding contribution, and for k > 0, using Eqs.
For p < n − k ≡ l the last sum can be modified with the use of the relation (2.3); therefore only the term p = n − k contributes, i.e. the sequence of non-zero components can appear only in one block, whose location depends on the choice of the phase of the power (p²)^{1/n}. The g_j are arbitrary functions of p, and simultaneously the constraint p² = m² is required. Now we shall try to find the generators satisfying the covariance condition for Eq.
where the bold 0, 1 stand for the zero and unit 2 × 2 matrices. The Dirac equation is covariant under the transformations generated by the standard generators, where j, k, l = 1, 2, 3. Obviously, to preserve the covariance, one has to check that the generators (3.58), where κ is any complex constant, satisfy the commutation relations (3.29), (3.30).
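For n = 2 the construction reduces, up to phase factors, to the ordinary Dirac matrices. As a consistency sketch (in the standard Dirac representation, which is an assumed convention here), one can verify the Clifford relation {γ^µ, γ^ν} = 2g^{µν}I numerically:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
Z2 = np.zeros((2, 2))

# Dirac representation: gamma^0 block-diagonal, gamma^k block-off-diagonal
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric g^{mu nu}
for mu in range(4):
    for nu in range(4):
        anti = gam[mu] @ gam[nu] + gam[nu] @ gam[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```

This is the n = 2 special case of the general anticommutation pattern encoded in the combinator relations of Sec. 2.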

Proof:
After inserting the generators (3.58) into the relations (3.29), (3.30) one can check that the commutation relations are satisfied. In fact, it is sufficient to verify e.g. the commutators [L_ϕ1, L_ψ2], [L_ϕ1, L_ψ1] and [L_ψ1, L_ψ3]; the remaining ones follow from the cyclic symmetry. Let us note that the formula (3.58) covers also the limit case |κ| → ∞. Further, the generators M_ω can be rewritten in the covariant notation (3.63). Now the Pauli-Lubanski vector can be constructed, which has to satisfy the relations (3.64), (3.65), where s is the corresponding spin number. One can check that after inserting the generators (3.63) into the relations (3.64), (3.65), the result does not depend on κ. So the generators of the Lorentz group which satisfy Eq. (3.49) can have the form (3.66), where M_ω are the n × n matrices defined in accordance with Lemma 15.
There are n such matrices on the diagonal and these matrices need not be identical. Finally, it is obvious that Eq. (3.36) is covariant also under any infinitesimal transformation where the generators K_ξ have a similar form as the generators (3.66). Do they satisfy the commutation relations for the generators of the Lorentz transformations? In this paper we shall not discuss this more general task; for our present purpose it is sufficient that we have proved the existence of the generators of the infinitesimal Lorentz transformations under which Eq. (3.36) is covariant.

Finite transformations
Now, having the infinitesimal transformations, one can proceed to the finite ones, corresponding to the parameters ω and ξ, where p → p′ is one of the transformations (3.15)-(3.18). The matrices Λ satisfy equations which for the parameters ϕ (space rotations only) and ξ imply the relations (3.72). Assuming constant elements of the matrices R_ϕj and K_ξ, the solutions of the last equations can be written in the usual exponential form. The space rotation by an angle ϕ about the axis with direction u, |u| = 1, is represented by the formula (3.74). For the Lorentz transformations we get, instead of Eq. (3.72), the equation (3.75), whose solution reads (3.79). The Lorentz boost in a general direction u with the velocity β is represented accordingly; the corresponding integrals can be found e.g. in the handbook [16]. Let us note that, from the technical point of view, the solution of an equation of the type dΛ(ω)/dω = Ω(ω)Λ(ω), where Λ, Ω are some square matrices, can be written in the exponential form only if the matrix Ω satisfies the condition (3.85); this condition is necessary for the differentiation of the exponential form. Obviously the condition (3.85) is satisfied for the generators of all the considered transformations, including the Lorentz ones in Eq. (3.79), since the matrix N does not depend on ψ. (N depends only on the momentum components perpendicular to the direction of the Lorentz boost.)
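The exponential form of the finite rotations can be sketched in the simplest case n = 2: for the spin-1/2 generators, the exponential of the infinitesimal generator reproduces the familiar closed form cos(ϕ/2)I − i sin(ϕ/2)(u·σ). This is only an illustration of the exponential solution, not the general n-component formula:

```python
import numpy as np

def expm(A):
    # Matrix exponential via eigendecomposition (valid for diagonalizable A)
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

# Pauli matrices
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

phi = 0.7                                   # rotation angle
u = np.array([1.0, 2.0, 2.0]) / 3.0         # unit axis, |u| = 1
u_sigma = sum(ui * si for ui, si in zip(u, sig))

# Finite rotation as the exponential of the infinitesimal generator
R = expm(-0.5j * phi * u_sigma)

# Closed form: cos(phi/2) I - i sin(phi/2) (u.sigma)
R_closed = np.cos(phi / 2) * np.eye(2) - 1j * np.sin(phi / 2) * u_sigma
assert np.allclose(R, R_closed)
```

The closed form exists because (u·σ)² = I, i.e. the series for the exponential truncates to two terms; for the boosts the analogous hyperbolic functions of ψ appear.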

Equivalent transformations
Now, from the symmetry of the Eq.
where R_ω(Γ_0) are the generators (3.66) and Y(p) is the transformation (3.38); these will satisfy the same conditions, but with the relation (3.26) instead of the relation (3.49). Similarly, the generators K_ξ(Γ_0) in the relation (3.68) will be replaced, for Eq. (3.2), by the correspondingly transformed ones. The finite transformations of Eq. (3.2) and of its solutions can be obtained as follows. First let us consider the transformations Λ(Γ_0, ω, u) given by Eqs.
(3.74) and (3.79). In accordance with Eq. (3.37), the corresponding relations hold for the solutions of Eqs. (3.2), (3.36). In the same way, the sets of equivalent generators and transformations can be obtained for the diagonalized equation (3.36). Let us remark that, according to Lemma 14, there exists a set of transformations Γ(p) ↔ Γ_0(p) given by the relation (3.37). We used its particular form (3.38), but how will the generators differ for two different matrices X_1 and X_2? The last relation, together with the relation (3.32), implies that there must exist a matrix X_3 [e.g. according to the implication (3.35)]; then the relation (3.97) can be rewritten so that the generators R_ω(Γ, X_1), R_ω(Γ, X_2) are equivalent in the sense of the relation (3.94).

Scalar product and unitary representations
Definition 16 The scalar product of two functions satisfying Eq. (3.2) or (3.36) is defined by means of the metric W, a matrix which satisfies the conditions (3.104), (3.105). The conditions (3.104), (3.105) in the above definition imply that the scalar product is invariant under the corresponding infinitesimal transformations. For example, for the Lorentz group the transformed scalar product, with the use of the condition (3.104), reduces to the original one. According to the general definition, the transformations conserving the scalar product are unitary. In this way the Eqs.
Then also for the Lorentz transformations one gets the corresponding result, provided that the constant κ in Eq. (3.58) is real and |κ| ≤ m. The generators K_ξ can be chosen in the same way. The structure of the generators R_ω(Γ_0), K_ξ(Γ_0) given by Eqs. (3.66), (3.68) suggests that the metric W satisfying the condition (3.111) can have a similar structure, in which the corresponding blocks on the diagonal are occupied by unit matrices multiplied by some constants. Nevertheless, let us note that the condition (3.111) can in general be satisfied also by some other structures of W(Γ_0). From W(Γ_0) we can obtain the matrix W(Γ), the metric for the scalar product of two solutions of Eq. (3.2). One can check that after the transformations the unitarity in the sense of the conditions (3.104), (3.105) is conserved, in spite of the fact that the equalities (3.108)-(3.110) may not hold for R_ω(Γ, X), K_ξ(Γ, X).
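In the familiar Dirac case (n = 2) the metric of the scalar product is W = γ^0, and the invariance condition under a boost takes the form γ^0 Λ† γ^0 = Λ^{-1}. A sketch, assuming the Dirac representation and the boost Λ = exp((ψ/2) γ^0 γ^3) along the third axis (sign conventions vary):

```python
import numpy as np

def expm(A):
    # Matrix exponential via eigendecomposition (valid for diagonalizable A)
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

# Dirac representation gamma^0 and gamma^3
I2, Z2 = np.eye(2), np.zeros((2, 2))
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
g3 = np.block([[Z2, s3], [-s3, Z2]])

psi = 0.9                                 # rapidity of the boost
Lam = expm(0.5 * psi * g0 @ g3)           # spinor boost along the third axis

# Invariance of the scalar product psi^dagger gamma^0 psi:
# gamma^0 Lambda^dagger gamma^0 = Lambda^{-1}
lhs = g0 @ Lam.conj().T @ g0
assert np.allclose(lhs, np.linalg.inv(Lam))
```

The boost Λ itself is not unitary; it is the metric γ^0 that restores the invariance, which is the n = 2 instance of the role played by W above.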

Space-time representation and Green functions
If we take the solutions of the wave equation (3.2) or (3.36) in the form of functions Ψ(p) for which there exists the Fourier picture Ψ̃(x), then we obtain the x-representation of our operators. Apparently, similar relations are valid also for the remaining operators K_ξ, W, Z, Z⁻¹ and for the finite transformations Λ in the x-representation. Concerning the translations, the usual correspondence p_α → i∂_α is valid. Further, the solutions of the inhomogeneous versions of the Eqs. (3.36), (3.2) can be obtained with the use of the formula (2.27). The last equation contains the fractional derivatives defined in [14]. Obviously, the functions G̃_0, G̃ can be identified with the Green functions related to the x-representation of Eqs. (3.36), (3.2). With the exception of the operators R̃_ϕj(Γ_0), W̃(Γ_0) and i∂_α, all the remaining operators considered above are pseudo-differential ones, which are in general non-local. Ways how to deal with such operators are suggested in [3], [5], [14]; a more general treatment of the pseudo-differential operators can be found e.g. in [17]-[19]. In our case it is significant that the corresponding integrals will depend on the choice of passing around the singularities and on the choice of the cuts of the power functions p^{2j/n}. This choice should reflect the physics contained; however, the corresponding discussion would exceed the scope of this paper.
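The fractional powers of derivatives appearing in the x-representation can be realized numerically through the Fourier transformation. A sketch: a pseudo-differential operator with the symbol (ik)^{1/2} (principal branch; as noted above, the choice of the cuts of the power functions matters), applied twice to a periodic function, reproduces the ordinary derivative:

```python
import numpy as np

N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sin(3 * x)

k = np.fft.fftfreq(N, d=x[1] - x[0]) * 2 * np.pi   # angular wave numbers
symbol = (1j * k) ** 0.5                            # principal branch of (ik)^{1/2}

def half_derivative(g):
    # Pseudo-differential operator: multiply by (ik)^{1/2} in Fourier space
    return np.fft.ifft(symbol * np.fft.fft(g))

# Applying the half-derivative twice gives the ordinary derivative,
# since ((ik)^{1/2})^2 = ik for the principal branch
df = half_derivative(half_derivative(f)).real
assert np.allclose(df, 3 * np.cos(3 * x), atol=1e-10)
```

For the operators of this paper the relevant symbols are the powers p^{2j/n}; the same Fourier recipe applies, with the branch chosen according to the discussion above.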

Summary and concluding remarks
In this paper we have first studied the algebra of the matrices Q_pr = S^p T^r generated by the pair of matrices S, T with the structure given by Definition 1. We have proved that for a given n ≥ 2 one can always find in the corresponding set {Q_pr} a triad for which Eq. (2.48) is satisfied, whereat the Pauli matrices represent its particular case n = 2. On this basis we have obtained the rule for the construction of the generalized Dirac matrices [Eqs. (2.51), (2.52)]. In the further part, using the generalized Dirac matrices, we have demonstrated how one can generate from the roots of the d'Alembertian operator a class of relativistic equations containing the Dirac equation as a particular case. In this context we have shown how the corresponding representations of the Lorentz group, which guarantee the covariance of these equations, can be found. At the same time we have found additional symmetry transformations of these equations. Further, we have suggested how one can define the scalar product in the space of the corresponding wave functions and obtain a unitary representation of the whole group of symmetry. Finally, we have suggested how to construct the corresponding Green functions. In the x-representation the equations themselves and all the mentioned transformations are in general non-local, being represented by the fractional derivatives and pseudo-differential operators in the four space-time dimensions.
In line with the choice of the representation of the rotation group used for the construction of the unitary representation of the Lorentz group according to which the equations transform, one can ascribe to the related wave functions the corresponding spin, and further quantum numbers connected with the additional symmetries. Nevertheless it is obvious that, before more serious physical speculation, one should answer some further questions requiring additional study. Perhaps the first could be the problem of how to introduce the interaction. The usual direct replacement ∂_λ → ∂_λ + igA_λ(x) would lead to difficulties, first of all with the rigorous definition of terms like (∂_λ + igA_λ(x))^{2/n}.
In the end one should answer the more general question: is it possible, on the basis of the discussed wave equations, to build up a meaningful quantum field theory?