Within the theory of Quantum Chromodynamics (QCD), the rich structure of hadrons can be quantitatively characterized using, among other tools, a basis of universal nonperturbative functions: parton distribution functions (PDFs), generalized parton distributions (GPDs), transverse momentum dependent parton distributions (TMDs), and distribution amplitudes (DAs). For more than half a century, there has been a joint experimental and theoretical effort to obtain these partonic functions. However, the complexity of the strong interactions has placed severe limitations, and first-principles information on these distributions was extracted mostly from their moments computed in Lattice QCD. Recently, breakthrough ideas changed the landscape, and several approaches were proposed to access the distributions themselves on the lattice. In this paper, we review in considerable detail approaches directly related to partonic distributions. We highlight a recent idea proposed by X. Ji on extracting quasidistributions, which spawned renewed interest in the whole field and sparked the largest number of numerical studies within Lattice QCD. We discuss theoretical and practical developments, including challenges that had to be overcome, with some yet to be handled. We also review numerical results, including a discussion based on the evolving understanding of the underlying concepts and the theoretical and practical progress. Particular attention is given to important aspects that validated the quasidistribution approach, such as renormalization, matching to light-cone distributions, and lattice techniques. In addition to a thorough discussion of quasidistributions, we consider other approaches: hadronic tensor, auxiliary quark methods, pseudodistributions, OPE without OPE, and good lattice cross-sections. In the last part of the paper, we provide a summary and prospects of the field, with emphasis on the necessary conditions to obtain results with controlled uncertainties.
Among the frontiers of nuclear and particle physics is the investigation of the structure of hadrons, the building blocks of visible matter. Hadrons consist of quarks and gluons (together called partons), which are governed by one of the four fundamental forces of nature, the strong force. The latter is described by the theory of Quantum Chromodynamics (QCD). Understanding QCD can have great impact on many aspects of science, from subnuclear interactions to astrophysics, and, thus, a quantitative description is imperative. However, this is a very challenging task, as QCD is a highly nonlinear theory. This has led to the development of phenomenological tools, such as models, which have provided important input on hadron structure. However, studies from first principles are desirable. An ideal
Despite the extensive experimental program that was developed and evolved since the first exploration of the structure of the proton [
Understanding the internal properties of hadrons requires the development of a set of appropriate quantities that can be accessed both experimentally and theoretically. QCD factorization provides such a formalism and can relate measurements from different processes to parton distributions. These are nonperturbative quantities describing the parton dynamics within a hadron and have the advantage of being universal; that is, they do not depend on the process used for their extraction. The comprehensive study of parton distributions can provide a wealth of information on the hadrons, in terms of variables defined in the longitudinal direction (with respect to the hadron momentum) in momentum space, and two transverse directions. The latter can be defined either in position or in momentum space. These variables are as follows:
As is clear from the above classification, PDFs, GPDs, and TMDs provide complementary information on parton distributions, and all of them are necessary to map out the three-dimensional structure of hadrons in spatial and momentum coordinates. Experimentally, these are accessed from different processes, with PDFs being measured in inclusive or semi-inclusive processes such as deep-inelastic scattering (DIS) and semi-inclusive DIS (SIDIS); see e.g., [
Despite the tremendous progress in both the global analyses and the models of QCD, parton distributions are not fully known, due to several limitations: global analysis techniques are not uniquely defined [
Lattice QCD provides an ideal formulation to study hadron structure and originates from the full QCD Lagrangian by defining the continuous equations on a discrete Euclidean four-dimensional lattice. This leads to equations with billions of degrees of freedom, and numerical simulations on supercomputers are carried out to obtain physical results. A nonperturbative tool, such as Lattice QCD, is particularly valuable at hadronic energy scales, where perturbative methods are less reliable, or even fail altogether. Promising calculations from Lattice QCD have been reported for many years, with the calculation of the low-lying hadron spectrum being a prime example. More recently, Lattice QCD has provided pioneering results related to hadron structure, addressing, for instance, open questions, such as the spin decomposition [
Recent pioneering work of X. Ji [
The first studies on Ji’s proposal have appeared for the quark quasi-PDFs of the proton (see Sections
The central focus of the review is the studies of the
The rest of the paper is organized as follows. In Section
In this section, we briefly outline different approaches for obtaining the
The common feature of all the approaches is that they rely to some extent on the factorization framework. The employed lattice observables fall into two broad classes: observables that are generalizations of light-cone functions, such that they can be accessed on the lattice (such generalized functions have a direct factorized relation to the light-cone distributions), and observables in terms of which the hadronic tensor can be written; the hadronic tensor is then decomposed into structure functions like
Below, we provide the general idea for several proposals that were introduced in recent years.
All the information about a DIS cross-section is contained in the hadronic tensor [
A crucial aspect for the implementation in Lattice QCD is the fact that the hadronic tensor
In 1998, a new method was proposed to calculate light-cone wave functions (LCWFs) on the lattice [
The essence of the idea is to “observe” and study on the lattice the partonic constituents of hadrons instead of the hadrons themselves [
Schematic representation of the three-point function that needs to be computed to extract the pion light-cone wave function [
In 2005, another method was proposed [
The considered heavy-light current is defined as follows:
Numerical exploration, in the quenched approximation, is in progress [
Another possibility for extraction of light-cone distribution functions appeared in 2007 by V. Braun and D. Müller [
The first numerical investigation of this approach is under way by the Regensburg group [
In 2013, X. Ji proposed a new approach to extracting the
We illustrate the idea using the example of PDFs, while analogous formulations can be used to define DAs, GPDs, etc. It is instructive to see the direct correspondence between the light-cone definition (Equation (
The quasidistribution differs from the light-cone one by higher-twist corrections suppressed with
The quasidistribution approach received a lot of interest in the community and sparked most of the numerical work among all the direct
The approach of quasidistributions was thoroughly analyzed by A. Radyushkin [
We briefly mention here the issue of power divergences induced by the Wilson line, to be discussed more extensively in Sections
Numerical investigation of the pseudodistribution approach has proceeded in parallel with the theoretical developments and promising results are being reported [
Yet another recent proposal to compute hadronic structure functions was suggested in [
The starting point is the forward Compton amplitude of the nucleon, defined similarly as in Equation (
Another important ingredient of the method proposed in [
A novel approach to extracting PDFs or other partonic correlation functions from
Good LCSs, i.e., ones that can be included in such a global fit, are those with the following properties: they are calculable in Euclidean Lattice QCD; they have a well-defined continuum limit; and they have the same, factorizable, logarithmic collinear divergences as PDFs.
All of these properties are crucial and nontrivial. The first one excludes the direct use of observables defined on the light cone. In practice, the second one requires the observables to be renormalizable. Finally, the third property implies that the analogy with global fits to HCSs is even more appropriate; both strategies need to rely on the factorization framework: LCSs and HCSs are then written as a convolution of a perturbatively computable hard coefficient with a PDF.
Ma and Qiu constructed also a class of good LCSs in coordinate space that have the potential of being used in the proposed global fits, demonstrating that the three defining properties of LCSs are satisfied [
An explicit numerical investigation of the current-current correlators is in progress by the theory group of Jefferson National Laboratory (JLab) and first promising results for pion PDFs, using around 10 different currents, have been presented. For more details see Section
We discuss now, in more detail, the quasidistribution approach, which is the main topic of this review. The focus of this section is on the theoretical principles of this method, and we closely follow the original discussion in Ji’s first papers. Since these were soon followed by numerical calculations within Lattice QCD exploring the feasibility of the approach, we also summarize the progress on this side. We also identify the missing ingredients in these early studies and aspects that need significant improvement.
Ji’s idea of quasi-PDFs [
Consider a local twist-2 operator
In the original paper that introduced the quasidistribution approach [
Schematic illustration of the relation between a finite momentum frame, with the Wilson line in a spatial direction and the light-cone frame of a hadron at rest. Due to Lorentz contraction, going to the light-cone frame increases the length by a boost factor
We turn now to discussing how to match results obtained on the lattice, with a hadron momentum that is finite and relatively small, to the IMF. This subtlety results from the fact that regularizing the UV divergences does not commute with taking the infinite momentum limit. When defining PDFs, the latter has to be taken first, i.e., before removing the UV cutoff, whereas on the lattice one is bound to take all scales, including the momentum boost of the nucleon, much smaller than the cutoff, whose role is played by the inverse lattice spacing. To overcome this difficulty, one needs to formulate an effective field theory, termed Large Momentum Effective Theory (LaMET) [
The parallels of LaMET with HQET are more than superficial. We again follow Ji’s discussion [
Using the same ideas, one can write the relation between an observable in the lattice theory,
To summarize, the need for LaMET when transcribing the finite-boost results to light-cone parton distributions is a consequence of the importance of the order of limits. Parton physics corresponds to taking the infinite momentum limit before removing the UV regulator. The procedure consists of three steps: construction of a Euclidean version of the light-cone definition, where the Euclidean observable needs to approach its light-cone counterpart in the limit of infinite momentum; computation of the appropriate matrix elements on the lattice and their renormalization; and calculation of the matching coefficient in perturbation theory and use of LaMET, Equation (
There is complete analogy also with accessing parton physics from scattering experiments, using factorization theorems and, thus, separating the nonperturbative (low-energy) and perturbative (high-energy) scales. To have similar access to partonic observables from lattice computations, LaMET plays the role of a tool for scale separation. Moreover, just as parton distributions can be extracted from a variety of different scattering processes, they can also be approached with distinct lattice operators.
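Schematically, the LaMET matching described above can be written as a convolution (shown here in one common convention; normalizations and the treatment of scales vary between papers):

```latex
\tilde{q}\!\left(x, P_3\right) \;=\;
\int_{-1}^{1} \frac{d\xi}{|\xi|}\;
C\!\left(\frac{x}{\xi}, \frac{\mu}{P_3}\right) q\!\left(\xi, \mu\right)
\;+\; \mathcal{O}\!\left(\frac{M^2}{P_3^2}, \frac{\Lambda_{\rm QCD}^2}{P_3^2}\right),
```

where C is the perturbatively computable matching kernel, μ the factorization scale, M the nucleon mass, and the power-suppressed terms are the higher-twist corrections mentioned above.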
We continue the discussion of LaMET by considering now the matching process in more detail. In the first paper devoted to the matching in the framework of LaMET, the nonsinglet PDF case was discussed [
The tree level of both the quasi- and the light-cone distributions is the same, i.e., a Dirac delta. At one loop, one encounters: linear (UV) divergences due to the Wilson line; collinear (IR) divergences; soft (IR) divergences; and logarithmic (UV) divergences in self-energy corrections, regulated with another cutoff
One-loop diagrams entering the calculation of quasidistributions: self-energy corrections (left) and vertex corrections (right). Source: [
We turn now to the light-cone distribution. It can be calculated in the same transverse momentum cutoff scheme by taking the limit
Having computed the one-loop diagrams, one is ready to calculate the matching coefficient
The early papers of [
Ji’s proposal for a novel approach of extracting partonic quantities on the lattice, in particular PDFs, sparked an enormous wave of interest, including numerical implementation and model investigations (see Section
The first lattice results were presented in 2014 in [
Final isovector unpolarized PDFs (shaded bands) at the largest employed nucleon boost, left: Lin et al., 1.29 GeV, right: ETMC, 1.42 GeV. The right plot also shows the quasi-PDF and the matched PDF before nucleon mass corrections. For illustration purposes, selected phenomenological parametrizations are plotted (dashed/dotted lines, no uncertainties shown) [
The two earliest numerical investigations of Ji’s approach showed the feasibility of a lattice extraction of PDFs. However, they also identified the challenges and difficulties. On one side, these were theoretical, like the necessity of developing the missing renormalization programme and the matching from the adopted renormalization scheme to the desired
Further progress was reported in the next two papers by the same groups (with new members), early in 2016 by Chen et al. [
As an illustration, we show the final helicity PDFs in Figure
Final isovector helicity PDFs (shaded bands;
This concludes our discussion of the early explorations of the quasi-PDF approach. References [
Apart from theoretical analyses and numerical investigations in Lattice QCD, insights about the quasidistribution approach were obtained also from considerations in the framework of phenomenological or large-
The aim of the early (2014) work of L. Gamberg et al. [
With such setup, one can derive the model expressions for all kinds of collinear quasi-PDFs, combining the expressions for scalar and axial-vector diquarks. The obtained relations can be used to study the approach to the light-cone PDFs, also calculated in the DSM. Gamberg et al. [
Further model study of quasi-PDFs was presented in [
The DSM (with scalar diquarks) was also employed as a framework for studying quasi-GPDs [
A model investigation of quasi-PDFs was performed also by A. Radyushkin in 2016-17 [
Numerical interest in these papers was in the investigation of the nonperturbative evolution generated by the soft part of the VDF or, equivalently, the soft part of the primordial TMD. Radyushkin considers two models thereof, with a Gaussian-type dependence on the transverse momentum (“Gaussian model”) and a simple non-Gaussian model (“
In the work of [
Further model studies of the quasidistribution approach were performed in 2017 by W. Broniowski and E. Ruiz-Arriola [
In the first paper [
In the second paper [
Meson DAs were first considered in the quasidistribution formalism in 2015 by Y. Jia and X. Xiong [
Following the NRQCD investigation, Y. Jia and X. Xiong continued their work related to model quasidistributions of mesons. In 2018, together with S. Liang and R. Yu [
Additionally, Jia et al. studied both types of distributions in perturbation theory, thus being able to consider the matching between quasi- and light-cone PDFs/DAs. The very important aspect of this part is that they were able to verify one of the crucial features underlying LaMET, that quasi- and light-cone distributions share the same infrared properties at leading order in
As such, this work in two-dimensional QCD provides a benchmark for lattice studies of quasidistributions in four-dimensional QCD. It is expected that many of the obtained conclusions regarding the ’t Hooft model hold also in standard QCD. Moreover, the setup can also be used to study other proposals for obtaining the
In this section, we summarize the main theoretical challenges related to quasi-PDFs that were identified early on. Addressing and understanding these challenges was critical in order to establish sound foundations for the quasidistribution method. We concentrate on two of them: the role of the Euclidean signature (whether an equal-time correlator in Euclidean spacetime can be related to light-cone parton physics in Minkowski spacetime) and renormalizability. The latter is not trivial due to the power-law divergence inherited from the Wilson line included in the nonlocal operator. It is clear that problems related to either challenge could lead to abandoning the whole programme for quasi-PDFs. Therefore, it was absolutely crucial to prove that neither of these aspects hides insurmountable difficulties.
One of the crucial assumptions of the quasidistribution approach is that these distributions computed on the lattice with Euclidean spacetime signature are the same as their Minkowski counterparts. In particular, they should share the collinear divergences, such that the UV differences can be matched using LaMET. In [
However, in [
The serious doubts about the importance of spacetime signature were addressed in [
Thus, the apparent contradiction pointed out in [
One of the indispensable components of the quasi-PDFs approach is the ability to match equal-time correlation functions (calculable on the lattice) to the light-cone PDFs using LaMET. For this approach to be successful, it is crucial that the quasi-PDFs can be factorized to normal PDFs to all orders in QCD perturbation theory, and this requires that quasi-PDFs can be multiplicatively renormalized [
One of the main concerns is whether the nonlocal operators are renormalizable. The nonlocality of the operators does not guarantee that all divergences can be removed, due to additional singularity structures compared to local operators, as well as divergences with nonpolynomial coefficients. Due to the different UV behavior of quasi-PDFs and light-cone PDFs, the usual renormalization procedure is not guaranteed to apply. Based on the work of [
One-loop diagrams entering the quasiquark PDFs in Feynman gauge. Self-energy diagrams are shown in the first row and vertex correction diagrams in the second row. Source: [
Thus, it is of utmost importance for the renormalizability to be confirmed to all orders in perturbation theory. This issue has been addressed independently by two groups [
X. Ji and J.-H. Zhang in one of their early works [
The renormalizability of quasi-PDFs to all orders in perturbation theory was first proven by T. Ishikawa et al. in [
More interestingly, the Authors have studied all sources of UV divergences for the nonlocal operators that enter the quasi-PDFs calculation using a primitive basis of diagrams (see Figures 3-6 in [
Topologies that may lead to UV divergent contributions to the quark quasi-PDFs. Source: [
Diagrams of the topology shown in Figure
The study of the renormalizability of quark quasi-PDFs has been complemented with the work of Ji et al. in [
The introduction of a heavy quark auxiliary field,
For completeness, we also address the renormalizability of the gluon quasi-PDFs, which are more complicated to study compared to nonsinglet quark PDFs due to the presence of mixing. Their renormalizability was implied using arguments based on the quark quasi-PDFs [
The first investigation appeared in 2017 by W. Wang and S. Zhao [
Reference [
One-loop corrections to a gluon quasidistribution, without the Wilson line. The symbol “
One-loop corrections to a gluon quasidistribution, which involve the Wilson line (double line). The symbol “
One approach to study the renormalization of the quasigluon PDFs is to introduce an auxiliary heavy quark field, as adopted in the renormalization of the quark distributions. This auxiliary field is in the adjoint representation of
The renormalizability of both the unpolarized and the helicity gluon PDFs has been studied by J.-H. Zhang et al. in [
In the auxiliary field formalism, the operator presented in Equation (
Z.-Y. Li, Y.-Q. Ma, and J.-W. Qiu have studied renormalizability of gluon quasi-PDFs in [
The procedure followed in this work is based on a one-loop calculation of the Green’s functions
Apart from the theoretical challenges of the quasidistribution approach, discussed in the previous section, the lattice implementation and efficiency of computations are also a major issue for the feasibility of the whole programme. In this section, we discuss these aspects in some detail, showing that tremendous progress has been achieved on this side as well. In addition, we discuss challenges for the lattice that need to be overcome for a fully reliable extraction of PDFs.
To access quasi-PDFs of the quarks in the nucleon, one needs to compute the following matrix elements:
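In a common notation (conventions for normalization and the Dirac structure vary between groups), these matrix elements take the form:

```latex
h_\Gamma(P_3, z) \;=\; \left\langle P \right|
\bar{\psi}(0)\,\Gamma\; W(0, z\hat{e}_3)\; \psi(z\hat{e}_3)
\left| P \right\rangle ,
```

where W(0, zê₃) is a straight Wilson line along the direction of the boost and Γ is a Dirac structure selecting the unpolarized, helicity, or transversity case.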
The Wick contractions for the three-point function lead, in general, to a quark-connected and a quark-disconnected diagram. Since the evaluation of the latter is far more demanding than that of the former, the numerical efforts were so far restricted to connected diagrams only. One uses the fact that disconnected diagrams cancel when considering the flavor nonsinglet combination
Diagram representing the three-point correlation function that needs to be evaluated to calculate quasi-PDFs. Source: arXiv version of [
Special attention has to be paid to the Dirac structure of the insertion operator, because mixing appears among certain structures, as discovered in [
We now turn to describing the lattice computation in more detail. For the two-point function, Wick contractions lead to standard point-to-all propagators that can be obtained from inversions of the Dirac operator matrix on a point source. The computation of the three-point function is more complicated. Apart from the point-to-all propagator, it requires the knowledge of the all-to-all propagator. Two main techniques exist to evaluate this object: the sequential method [
In the early studies, both approaches were tested by ETMC [
Having computed the three-point and two-point functions, the relevant matrix elements can be obtained. The crucial issue that has to be paid special attention to is the contamination of the desired ground-state matrix elements by excited states. Three major techniques are available: single-state (plateau), multistate, and summation fits. We briefly describe all of them below. In the relevant fits, the source and sink timeslices are excluded to avoid contact terms, and the amplitudes contain matrix elements of the suitable operator
In principle, the multistate method (realistically
Having extracted the relevant matrix elements, one is finally ready to calculate the quasi-PDF. We rewrite here the definition of quasi-PDFs with a discretized form of the Fourier transform:
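In one common convention (the overall normalization varies between groups), the discretized transform reads \(\tilde{q}(x, P_3) = \frac{P_3}{2\pi}\,\Delta z \sum_{z} e^{-i x P_3 z}\, h(P_3, z)\). A minimal numerical sketch of this step, with a made-up Gaussian matrix element standing in for lattice data:

```python
import numpy as np

def quasi_pdf(h, z, P3, xs):
    """Discretized Fourier transform of matrix elements h(z) to a quasi-PDF.

    One common convention: q(x) = (P3 / 2pi) * dz * sum_z exp(-i x P3 z) h(z).
    """
    dz = z[1] - z[0]
    phases = np.exp(-1j * np.outer(xs, z) * P3)  # shape (len(xs), len(z))
    return (P3 / (2 * np.pi)) * dz * (phases @ h).real

# Toy matrix element (stand-in for lattice data): a real, even function of z,
# decayed to zero well before the ends of the summation range.
z = np.arange(-8.0, 8.0 + 0.25, 0.25)
h = np.exp(-z**2)
xs = np.linspace(-2.0, 2.0, 161)
q = quasi_pdf(h, z, P3=2.0, xs=xs)
```

For a real, even h(z), the result is real and even in x, and its area approximates h(0); with actual lattice matrix elements, the finite range of z is one source of the systematic effects discussed later in this review.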
In the previous subsection, we have established the framework for the computation of quasi-PDF matrix elements on the lattice. Now, we describe some more techniques that are usually used to perform the calculation as effectively as possible.
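Before turning to these techniques, the ground-state extraction methods of the previous subsection can be illustrated with a toy model. In the sketch below, the ratio of three- to two-point functions is modeled with a single excited state (all parameters are invented for illustration); both the plateau average and the slope of the summed ratio estimate the ground-state matrix element g, with the summation method showing smaller contamination:

```python
import numpy as np

# Toy ratio of three- to two-point functions with a single excited state:
# R(ts, tau) ~ g + c * (exp(-dE*tau) + exp(-dE*(ts - tau))).
# g is the ground-state matrix element; dE, c parametrize the contamination.
G, DE, C = 1.0, 0.4, 0.5

def ratio(ts, tau):
    tau = np.asarray(tau, dtype=float)
    return G + C * (np.exp(-DE * tau) + np.exp(-DE * (ts - tau)))

def plateau(ts, skip=3):
    # single-state fit: average the central insertion times at fixed ts
    taus = np.arange(skip, ts - skip + 1)
    return ratio(ts, taus).mean()

def summation(ts_list):
    # summation method: S(ts) = sum_tau R(ts, tau) -> const + g*ts,
    # so the slope in ts estimates g with faster-decaying contamination
    S = [ratio(ts, np.arange(1, ts)).sum() for ts in ts_list]
    return np.polyfit(ts_list, S, 1)[0]

p = plateau(12)          # visibly contaminated at moderate ts
s = summation([10, 12, 14, 16])  # much closer to G
```

In real calculations, the choice between these methods is a trade-off between statistical noise (growing with ts) and excited-state contamination (shrinking with ts).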
The first technique, commonly employed in lattice hadron structure computations, serves the purpose of optimizing the overlap of the interpolating operator that creates and annihilates the nucleon with the ground state. This can be achieved by employing Gaussian smearing [
Smearing techniques are used also to decrease UV fluctuations in gauge links entering the Wilson line in the operator insertion. In principle, any kind of smearing can be used for this purpose, with practical choices employed so far of HYP smearing [
All the above techniques are rather standard and have been employed in the quasi-PDFs computations already in the very first exploratory studies. However, the recent progress that we review in Section
To finalize this subsection, we mention one more useful technique that is applied nowadays to decrease statistical uncertainties at fixed computing time. The most expensive part of the calculation of the correlation functions is the computation of the quark propagators, i.e., the inversion of the Dirac operator matrix on specified sources. This is typically done using specialized iterative algorithms, often tailored to the fermion discretization in use. The iterative algorithm is run until the residual, quantifying the distance of the current solution from the true solution, falls below some tolerance level,
In this section, we discuss the challenges for lattice computations of quasi-PDFs. On the one side, these include “standard” lattice challenges, like control over different kinds of systematic effects, some of them enhanced by the specifics of the involved observables. On the other side, the calculation of quasi-PDFs offered new challenges that had to, or still have to, be overcome for the final reliable extraction of light-cone distributions. Below, we discuss these issues in considerable detail, starting with the “standard” ones and going towards the more specific ones.

Lattice simulations are, necessarily, performed at finite lattice spacings. Nevertheless, the goal is to extract properties or observables of continuum QCD. At finite lattice spacing, these are contaminated by discretization (cutoff) effects, which need to be removed by a suitable continuum limit extrapolation. Obviously, prior to taking the continuum limit, the observables need to be renormalized, and we discuss this issue in Section

Apart from the finite lattice spacing, the volume of a numerical simulation is also necessarily finite. Thus, another lattice systematic uncertainty may stem from finite volume effects (FVE). FVE become important if the hadron size becomes significant in comparison with the box size. The hadron size is to a large extent dictated by the inverse mass of the lightest particle in the theory. Hence, leading-order FVE are related to the pion mass of the simulation, and smaller pion masses require larger lattice sizes in physical units to suppress FVE. Usually, FVE are exponentially suppressed as

Above, the main source of FVE that we considered was related to the size of hadrons. However, it was pointed out in [

It is also important to mention that finite lattice extent in the direction of the boost,

The computational cost of Lattice QCD calculations depends on the pion mass.
Hence, exploratory studies are usually performed with heavier-than-physical pions, as was also the case for quasi-PDFs (see Section

QCD encompasses six flavors of quarks. However, due to the orders of magnitude difference between their masses, only the lightest two, three, or four flavors are included in lattice simulations. Moreover, the up and down quarks are often taken to be degenerate; i.e., one assumes exact isospin symmetry. One then speaks of a

As already discussed in Section

Contact with the IMF via LaMET is established at large nucleon momenta. Hence, it is desirable to use large nucleon boosts on the lattice. However, this is highly nontrivial for several reasons. First, the signal-to-noise ratio decays exponentially with increasing hadron momentum, necessitating an increase of statistics to keep similar statistical precision at larger boosts. Second, excited states contamination increases considerably at larger momenta, calling for an increase of the source-sink separation to maintain suppression of excited states at the same level. As argued in the previous point, the increase of

We now consider effects that may appear if the nucleon momentum is too small. Looking at the formulation of LaMET, it is clear that higher-twist effects (HTE), suppressed as

Another type of HTE is nucleon mass corrections (NMCs). These, in turn, can be exactly corrected for by using the formulae derived by Chen et al. [

At the level of matrix elements, the momentum dependence is manifested, inter alia, by the physical distance at which they decay to zero. This distance, entering the limits of summation for the discretized Fourier transform in Equation (

A method to remove the nonphysical oscillations was proposed in [

where the derivative of the matrix elements with respect to the Wilson line length gives the name to the method.
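Schematically (in our notation, for a symmetric range [−z_max, z_max]), integrating the Fourier transform by parts gives:

```latex
\int_{-z_{\max}}^{z_{\max}} dz\; e^{-i x P_3 z}\, h(z)
\;=\; -\frac{1}{i x P_3} \left[\, e^{-i x P_3 z}\, h(z) \,\right]_{-z_{\max}}^{z_{\max}}
\;+\; \frac{1}{i x P_3} \int_{-z_{\max}}^{z_{\max}} dz\; e^{-i x P_3 z}\, \partial_z h(z) .
```

The surface term vanishes when the matrix elements have decayed to zero at the ends of the range, leaving only the integral of the derivative.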
The integration by parts is exact, and this definition of the Fourier transform is equivalent to the standard one if the matrix elements have decayed to zero at

For the sake of completeness, we mention other effects that can undermine the precision of lattice extraction of PDFs, although they are not challenges for the lattice

In the previous point, we have already mentioned uncertainties related to renormalization. In RI/MOM-type schemes, they manifest themselves in the dependence of

Another renormalization-related issue is the perturbative conversion from the intermediate lattice scheme to the

Similarly, truncation effects emerge also in the matching of quasi-PDFs to light-cone PDFs, currently done at one-loop level; see Section

If the dimension of the operator with the same symmetries is lower, then the mixing will be power divergent in the lattice spacing; i.e., it will contribute a term

However, it was counterargued in three papers [

It was also argued by Rossi and Testa that divergent moments of quasi-PDFs,

It is exactly the matching function that makes the moments of standard PDFs finite after the subtraction of the UV differences between the two types of distributions. In other words, the divergence in the moments

Further explanations were provided in [

Finally, J. Karpie, K. Orginos, and S. Zafeiropoulos demonstrated [

With all these developments, it has been convincingly established that the problem advocated by Rossi and Testa does not hinder the lattice extraction of light-cone PDFs. Thus, power-divergent mixings only manifest themselves in certain quantities, like moments of quasi-PDFs, which are
We finalize this section with a schematic flowchart (Figure
Schematic representation of different steps needed to extract light-cone PDFs from quasi-PDFs and of the challenges encountered at these steps. Source: [
The renormalization of nonlocal operators that include a Wilson line is a central component of lattice calculations of quasi-PDFs. Lattice results from numerical simulations can be related to physical quantities only upon appropriate renormalization; only then does comparison with experimental and phenomenological estimates become a real possibility. As discussed in Section
Since the proposal of Ji in 2013, several aspects of quasi-PDFs have been investigated, such as the feasibility of a calculation from Lattice QCD. This includes algorithmic developments [
One of the first attempts to understand the renormalization of nonlocal operators was to address the power divergence inherited from the Wilson line within the static potential approach, as described in this subsection. Eliminating the power divergence results in a well-defined matching between the quasi-PDFs and the light-cone PDFs.
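Schematically (notation ours, not tied to any specific paper), the linear divergence exponentiates along the Wilson line of length |z|: the bare matrix elements contain a factor e^{−δm|z|} with δm ∼ 1/a, so a power-divergence-free matrix element can be defined by compensating this factor:

```latex
h^{\rm ren}(P_3, z) \;=\; Z^{-1}\, e^{\,\delta m\, |z|}\; h^{\rm bare}(P_3, z),
\qquad \delta m \;\sim\; \frac{c}{a},
```

where δm can be determined, e.g., from the exponential falloff of Wilson loops in the static potential approach, and Z collects the remaining logarithmic divergences.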
The renormalization of nonlocal operators in gauge theories was investigated long ago [
Following the notation of [
A proper determination of
The promising results from the first exploratory studies of the quasi-PDFs [
X. Xiong et al. have computed in [
One-loop diagrams for the calculation of Green’s function of nonlocal operators. The double line represents the gauge link in the operator. Source: [
Here, we do not provide any technical details and focus only on the qualitative conclusions, but we encourage the interested Reader to consult [
M. Constantinou and H. Panagopoulos have calculated in [
The operator under study includes a straight Wilson line in the direction
One of the main findings of this work is the difference between the bare lattice Green’s functions and the
The conclusion from Equation (
The work of [
Real (left) and imaginary (right) parts of the conversion factors for the vector (
G. Spanoudes and H. Panagopoulos [
Including massive quarks requires a proper modification of the RI-type renormalization conditions, as developed in [
As a consequence of the additional mixing, the conversion factors are
Real (left) and imaginary (right) parts of the conversion factor for the mixing coefficient for the operator pair (
The progress in the renormalization of the nonlocal operators from lattice perturbation theory has encouraged investigations of nonperturbative calculations. This was supported by theoretical developments proving the renormalizability of the operators under study to all orders in perturbation theory (see Section
C. Alexandrou et al. [
In Figure
Left: the
In the aforementioned work, the Authors used several values of the
A modification of the RI-type prescription that was first proposed by Constantinou and Panagopoulos [
Based on the RI/MOM prescription, the vertex function of the operator under study was projected by
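A typical RI$'$-type condition for such a nonlocal operator takes the following schematic form (our notation; conventions, projectors, and the treatment of mixing differ between papers):

```latex
Z_O^{-1}(z)\, Z_q\, \frac{1}{12}\,
  \mathrm{Tr}\!\left[ \mathcal{V}(p,z)
  \left( \mathcal{V}^{\rm Born}(p,z) \right)^{-1} \right]
  \Big|_{p^2=\bar\mu_0^2} = 1\,,
\qquad
Z_q = \frac{1}{12}\,
  \mathrm{Tr}\!\left[ S^{-1}(p)\, S^{\rm Born}(p) \right]
  \Big|_{p^2=\bar\mu_0^2}\,,
```

where $\mathcal{V}(p,z)$ is the amputated vertex function of the operator at Wilson-line length $z$, $S(p)$ is the quark propagator, and $\bar\mu_0$ is the RI renormalization scale; the condition is imposed separately for each $z$.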
In the calculation of the renormalization factors, the Authors used an
In Figure
The renormalization function and mixing between vector and scalar nonlocal operators with a straight Wilson line. Source: [
Another investigation of [
The study of the symmetries was extended to include
In closing, let us add that a proper determination of the renormalization functions computed nonperturbatively in an RI-type scheme (e.g., the works presented in Sections
An alternative proposal for the renormalization of nonlocal operators is based on an auxiliary field method, a formulation also adopted to prove the renormalizability of the operators under study [
The auxiliary scalar color triplet field (
In the work of [
Here, we present selected results from [
Left: Equation (
In this subsection, we review some other developments related to the renormalization of PDF-related operators, in particular the Wilson-line-induced power divergence.
In 2016, the idea of removing such divergence by smearing was proposed by C. Monahan and K. Orginos [
Smeared operators are the foundation of another method, introduced in 2012 by Z. Davoudi and M. Savage [
We conclude by briefly discussing one more method of dealing with the power divergence related to the Wilson line in quasidistributions. In 2016, H.-n. Li proposed [
In this section, we focus on the matching from quasi-PDFs to light-cone PDFs. Since the inception of LaMET, there has been a lot of effort devoted to understanding many aspects of this procedure. In particular, the first matching paper [
For convenience, we repeat here the general factorization formula for the matching:
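In schematic form (our notation), the factorization reads:

```latex
\tilde q(x, P_z, \mu) \;=\;
  \int_{-1}^{1} \frac{dy}{|y|}\;
  C\!\left( \frac{x}{y},\, \frac{\mu}{|y|\,P_z} \right) q(y,\mu)
  \;+\; \mathcal{O}\!\left( \frac{m^2}{P_z^2},\,
  \frac{\Lambda_{\rm QCD}^2}{x^2 P_z^2} \right),
```

where $C$ is the perturbative matching kernel, computable order by order in $\alpha_s$, and the power corrections comprise hadron-mass and higher-twist effects that are suppressed at large boost $P_z$.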
Let us first briefly revisit the early attempt to remove the Wilson-line-related power divergence, discussed in Section
One of the possibilities of renormalizing the quasi-PDF is to obtain it in the
The first paper that considered the matching from
However, one more issue remained unresolved for the
An alternative procedure was used in [
The matching kernel for transversity PDFs (
An alternative way of bringing the results from the intermediate RI renormalization scheme to the
In this section, we review other developments in the matching of quasidistributions to their light-cone counterparts. We also briefly discuss the matching procedure for the pseudo-PDFs/ITDs.
In a follow-up work [
The proof of renormalizability to all orders was indeed provided a few months later by the same Authors, together with J.-H. Zhang, X. Ji, and A. Schäfer [
Early in 2018, X. Ji et al. reinvestigated quasi-TMDs [
Very recently, a third paper considering quasi-TMDs appeared by M. Ebert, I.W. Stewart, and Y. Zhao [
Further, (pseudoscalar) meson mass corrections were calculated analytically in [
The heavy quarkonium case was considered in [
The matching for vector meson DAs was also considered [
Recently, the matching for meson DAs was also obtained for the case of RI-renormalized quasi-DAs to bring them into
The matching of pseudo-PDFs is, to some extent, simpler than that for quasi-PDFs, since there are no complications related to the nonperturbative renormalization of the pseudo-PDF when taking the ratio of matrix elements to construct the reduced ITD. Crucially, taking the ratio does not alter the IR properties and the factorization framework can be applied, as in the case of matching quasidistributions. We write here the final matching formula in the notation of [
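Schematically, in our own notation (constants and scheme-dependent terms vary between references and are absorbed into $L(u)$ here), the one-loop relation between the light-cone ITD and the reduced pseudo-ITD has the structure:

```latex
Q(\nu,\mu^2) \;=\; \mathfrak{M}(\nu,z^2)
  \;+\; \frac{\alpha_s C_F}{2\pi} \int_0^1 du\;
  \mathfrak{M}(u\nu, z^2)
  \left[ \ln\!\left(z^2\mu^2\right) B(u) \;+\; L(u) \right],
```

where $B(u)$ is the leading-order DGLAP evolution kernel; the logarithm trades the $z^2$ dependence of the reduced ITD for the $\overline{\rm MS}$ scale $\mu$.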
In [
The preliminary studies presented in Section
Once the nonperturbative renormalization of the nonlocal operators with straight Wilson line has been developed and presented to the community [
In the original proposal for the nonperturbative renormalization [
A Fourier transform is applied on renormalized matrix elements leading to
Comparison of lattice estimates of the ETMC’s helicity PDF, properly renormalized (blue band) or renormalized using the local axial current renormalization factor
Despite the improvement from previous works on quasi-PDFs, a number of further improvements were still necessary at this point, as described in Section
In the work of J.-W. Chen et al. (
The renormalization function of this work was used on the results obtained in [
Real (left) and imaginary (right) part of
A recent effort to quantify systematic uncertainties was presented by Y.-S. Liu et al. (
Possibly the largest systematic effect comes from the excited states contamination, which is sensitive to the pion mass (worsens for simulations at the physical point) [
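A minimal numerical sketch (synthetic data with invented parameters, not lattice results) of why excited-state contamination matters: the effective mass of a two-point correlator approaches the ground-state energy only once the excited-state term has decayed, which is the basis of the plateau method and of multi-state fits.

```python
import numpy as np

def correlator(t, A0=1.0, E0=0.45, A1=0.6, E1=1.2):
    """Toy two-point function: ground state plus one excited state (invented values)."""
    return A0 * np.exp(-E0 * t) + A1 * np.exp(-E1 * t)

t = np.arange(1, 20, dtype=float)
C = correlator(t)

# Effective mass: approaches the ground-state energy E0 only once the
# excited-state term ~ exp(-(E1 - E0) t) has died out -- the "plateau method".
m_eff = np.log(C[:-1] / C[1:])
print(m_eff[0], m_eff[-1])  # contaminated at small t, close to E0 = 0.45 at large t
```

In practice the statistical noise grows with $t$ (and, for boosted nucleons, with momentum), so one cannot simply go to arbitrarily large source-sink separations; this tension is what makes excited-state effects a dominant systematic.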
The work of [
One of the highlights of the current year is the appearance of lattice results on quasi-PDFs using simulations at the physical point
The work by C. Alexandrou et al. (ETMC) presented in [
A large number of configurations is necessary to keep the statistical uncertainties under control, in particular, as the nucleon momentum increases. The work of [
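The statistical errors quoted in such studies are typically estimated with resampling over gauge configurations; a minimal jackknife sketch (toy data, our naming) is:

```python
import numpy as np

def jackknife_error(samples, estimator=np.mean):
    """Leave-one-out jackknife error of an estimator over configurations."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    # Estimator evaluated on each leave-one-out subsample
    theta = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((theta - theta.mean()) ** 2))

# Toy per-configuration measurements of a matrix element
rng = np.random.default_rng(7)
data = rng.normal(1.0, 0.2, size=200)

# For the plain mean, the jackknife error reproduces the naive standard error
print(jackknife_error(data), data.std(ddof=1) / np.sqrt(len(data)))
```

The advantage of the jackknife (or bootstrap) is that it applies unchanged to nonlinear secondary quantities, such as ratios of correlators or fitted parameters, where no closed-form error propagation exists; autocorrelations between configurations additionally require binning, which we omit here.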
The renormalization was performed according to the procedure outlined in Section
In Figure
Comparison of ETMC’s unpolarized (left) and helicity (right) PDF for momenta 0.83 GeV (green band), 1.11 GeV (orange band), and 1.38 GeV (blue band). The results from the phenomenological analysis of ABMP16 [
An interesting discussion presented in [
Comparison of ETMC’s unpolarized PDF using the ensemble at the physical point [
A follow-up study by ETMC was presented recently [
A comparison of the three methods for the helicity is presented in Figure
Real (left) and imaginary (right) part of the matrix element for the ETMC’s helicity PDF from the plateau method (
We now continue the discussion with a presentation of the work of
In this work, the Gaussian momentum smearing [
The lattice data were properly renormalized using an RI-type scheme [
The final result for the unpolarized PDF is shown in the left plot of Figure
Left:
Extracting the transversity PDF is a powerful demonstration of the advances in the quasi-PDFs approach using Lattice QCD simulations. Preliminary studies can be found in the literature as early as 2016 [
The main motivation for first-principle calculations of the transversity PDF is the fact that it is less well known experimentally [
The ETM Collaboration presented the first computation of the
The final lattice data for the transversity isovector PDF,
ETMC’s transversity PDF with momentum 1.38 GeV (blue) as a function of Bjorken-
The latest work of
The real (top panel) and imaginary (bottom panel) parts of the matrix elements extracted from a two-state fit at momentum 3 GeV. The data are renormalized in the RI scheme and normalized with the matrix element of the local operator at the same momentum. Source: [
Final estimates for the transversity PDF are given in Figure
In the previous section, we have concentrated on numerical results for the isovector quark PDFs in the nucleon. Now, we review other results obtained with the quasidistribution method, for mesonic DAs and PDFs, as well as first exploratory results for gluon PDFs.
Arguably the simplest partonic functions are distribution amplitudes (DAs) of mesons. They are of interest for at least two reasons. First, being very simple, they can be used for investigating and comparing different techniques; many exploratory studies have focused, or are focusing, on the pion DA. Second, mesonic DAs are of considerable physical interest in their own right. They represent probability amplitudes of finding a
The first lattice computation of the pion quasi-DA was presented early in 2017 by J.-H. Zhang et al. [
The final result for the improved DA, after matching and mass corrections, is shown in Figure
Improved pion DA obtained in the first lattice study [
The above study was extended by the
Technically, the computation of the kaon DA amounts to changing the mass of one valence quark to represent the strange quark mass. For the
A comparison of the pion and kaon DAs (at the largest meson boost) with models and parametrizations is shown in Figure
Improved pion (left) and kaon (right) DAs obtained in [
Apart from the DAs of mesons, interest naturally extends to their PDFs as well, particularly for the pion. Phenomenological extractions of the pion PDF predominantly use experimental data from the Drell-Yan process in pion-nucleon scattering. This established that the large-
The first lattice extraction of the pion PDF based on LaMET was shown in [
For renormalization,
The final results for the
Pion PDF obtained in [
Very recently, the first investigation of quasi-gluon PDFs appeared [
Fan et al. employed the following definition of gluon quasi-PDF:
In their numerical investigation, Fan et al. compared the
“Ratio-renormalized” matrix elements of the operator
The Authors concluded that, at the present level of precision, their study could not constrain gluon PDFs, which would require taking the Fourier transform and performing the matching to the light-cone PDF. Because the magnitude of the gluon PDF is significant predominantly at small
The last two sections were devoted to reviewing results obtained for the
Despite being proposed in the early 1990s, the hadronic tensor approach [
The preliminary results are shown in Figure
Euclidean (left) and Minkowski (right) hadronic tensor obtained in the study of [
The investigations are continued and further results were presented in the Lattice 2018 Symposium, using other reconstruction methods and an ensemble with much finer lattice spacing,
The approach with auxiliary heavy quark [
The calculation proceeds by evaluating the vacuum-to-pion matrix elements of the product of two heavy-light currents separated in spacetime. The spatial Fourier transform of such matrix elements, for large enough temporal separation of the three points in the correlator, gives a quantity called
Left: integrand of
Instead of an auxiliary heavy quark, one can also use an auxiliary light quark [
As in the auxiliary heavy quark approach, the lattice part consists in calculating the vacuum-to-pion matrix element of two currents, separated spatially by
The follow-up work of [
Example results for the Ioffe-time dependence of the pion DA are shown in Figure
Left: Ioffe-time dependence of the pion DA extracted from two linear combinations: VV+AA (blue) and SP+PS (green), at
The first numerical investigation of the pseudodistribution approach [
Left: real part of reduced matrix elements with all points evolved to
As argued by Radyushkin in [
Left: real part of light-cone ITD (real part), matched from pseudo-ITD via Equation (
The final result that we report from the pseudodistribution approach is the computation of the two lowest moments of the isovector unpolarized PDF, which had erroneously been claimed to be impossible due to alleged fatal flaws of the approach in [
The first (blue) and second (red) lowest moments of the isovector unpolarized PDF obtained from a quenched ensemble with
Further progress was reported in the Lattice 2018 Symposium, including first calculations with dynamical flavors [
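The extraction of low moments from an Ioffe-time distribution, $\mathcal{M}(\nu)=\int dx\, e^{i\nu x} q(x)$, rests on its small-$\nu$ expansion; a toy sketch (invented distribution, not lattice data) is:

```python
import numpy as np

# Toy distribution on [0, 1] (not a physical PDF): q(x) = 6 x (1 - x),
# with exact moments <x> = 0.5 and <x^2> = 0.3
x = np.linspace(0.0, 1.0, 2001)
q = 6.0 * x * (1.0 - x)

def itd(nu):
    """Ioffe-time distribution M(nu) = int dx e^{i nu x} q(x) (q vanishes at both ends,
    so the simple Riemann sum coincides with the trapezoid rule)."""
    return np.sum(np.exp(1j * nu * x) * q) * (x[1] - x[0])

# Small-nu expansion: Im M = <x> nu - <x^3> nu^3/6 + ...,  Re M = 1 - <x^2> nu^2/2 + ...
nus = np.linspace(0.05, 0.5, 10)
M = np.array([itd(v) for v in nus])

# Least-squares fits of the leading small-nu behavior
a1, a3 = np.linalg.lstsq(np.column_stack([nus, nus**3]), M.imag, rcond=None)[0]
c2, c4 = np.linalg.lstsq(np.column_stack([nus**2, nus**4]), M.real - 1.0, rcond=None)[0]

print(a1, -2.0 * c2)  # estimates of <x> and <x^2>
```

On the lattice the accessible range of $\nu = z P_z$ is limited and the data are noisy, so in practice such fits are performed on the matched ITD with the truncation of the expansion treated as a systematic effect.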
The approach dubbed “OPE without OPE” was first investigated numerically in [
Left: exemplary parametrized PDF (blue) and its reconstruction (red) using the method proposed in [
This approach, suggested in [
The vector-vector (
In this paper, we give an overview of several approaches to obtain the Bjorken-
As a summary, we would like to offer the Reader a flowchart (Figure
Starting with the proposed theoretical idea (e.g., quasidistributions, good lattice cross-sections, or pseudodistributions), several theoretical and technical challenges must be studied and overcome to achieve a successful implementation of the method. Theoretical analyses of the idea may lead to additional challenges on the lattice. The second stage consists of exploratory studies aimed at demonstrating the feasibility of the method. During this stage, further technical difficulties can be revealed, as well as possible additional theoretical challenges. The next stage consists of more advanced studies, focusing on a more thorough investigation of the method and a first estimation of certain systematic effects. Before precision calculations with full control of systematics can be carried out, usually further technical difficulties must be overcome. As understanding evolves, additional theoretical challenges may arise, along with subleading systematic uncertainties. The final desired outcome is an accurate and reliable Lattice QCD estimate of the observable of interest. For this to be achieved, the various sources of uncertainty must be quantified and brought under control.
Flowchart of different methods of accessing partonic distributions considered in this review. Four main stages of every calculation are presented in blue boxes, connected with red/green boxes representing the theoretical and lattice challenges that need to be overcome to go to the next stage. Solid arrows indicate that given types of challenges emerge as a general rule, while dashed arrows signify that a given type of challenge does not have to appear for every method. The red text corresponds to different approaches and their current status. The symbol in parentheses indicates the hadron to which a given type of distribution pertains (
Based on Figure
The quasidistribution approach has also been applied to other kinds of distributions (besides the isovector flavor combination) and notable progress has recently been achieved. We discussed the exploratory studies concerning quark DAs/PDFs for mesons and gluonic PDFs (Section
Even though quasidistributions are currently the most explored, other approaches are beginning to yield very interesting results as well. Several exploratory studies have been reported for quark PDFs and DAs of nucleons and pions (Section
1-Particle irreducible
Array processor experiment
Covariant Approximation Averaging
Chiral perturbation theory
Distribution amplitude
Deep-inelastic scattering
Dimensional regularization
Diquark spectator model
Deeply Virtual Compton Scattering
Deeply Virtual Meson Production
Electron-Ion Collider
Extended Twisted Mass Collaboration
Finite volume effects
Generalized parton distribution
Hadronic cross-section
Highly improved staggered quarks
High precision
Heavy Quark Effective Theory
Hypercubic
Higher-twist effects
Infinite momentum frame
Infrared
Ioffe-time distribution
Jefferson Laboratory
Large Momentum Effective Theory
Lattice cross-section
Light-cone wave function
Leading logarithmic approximation
Low precision
Lattice Parton Physics Project
Lattice regularization
Nambu-Jona-Lasinio
Next-to-leading order
Nucleon mass correction
Nonrelativistic Quantum Chromodynamics
Operator product expansion
Parton distribution function
Regularization independent
Regularization-independent momentum subtraction
Root mean square
Quantum Chromodynamics
Quantum Electrodynamics
Semi-inclusive deep-inelastic scattering
Spectral quark model
Target mass correction
Transverse momentum dependent parton distribution function
Ultraviolet
Virtuality distribution function.
The authors declare that they have no conflicts of interest.
We first want to thank the editors of the special issue “Transverse Momentum Dependent Observables from Low to High Energy: Factorization, Evolution, and Global Analyses” for the invitation to prepare this review and the guidance they provided throughout the process. We are also grateful to all Authors for giving permission for the figures we used to illustrate the progress of the field. We are indebted to several people with whom we had discussions over the years and who helped to shape our view on different aspects discussed in this review. Their names are, in alphabetical order, C. Alexandrou, G. Bali, R. Briceño, W. Broniowski, J.-W. Chen, I. C. Cloet, W. Detmold, V. Drach, M. Engelhardt, L. Gamberg, E. García-Ramos, K. Golec-Biernat, J. Green, K. Hadjiyiannakou, K. Jansen, X. Ji, P. Korcyl, G. Koutsou, P. Kotko, K. Kutak, C.-J.D. Lin, K.-F. Liu, S. Liuti, W. Melnitchouk, A. Metz, Z.-E. Meziani, C. Monahan, K. Orginos, H. Panagopoulos, A. Prokudin, J.-W. Qiu, A. Radyushkin, G. C. Rossi, N. Sato, M. Savage, A. Scapellato, R. Sommer, F. Steffens, I. W. Stewart, R. Sufian, J. Wagner, Ch. Wiese, J. Wosiek, Y.-B. Yang, F. Yuan, S. Zafeiropoulos, J.-H. Zhang, and Y. Zhao. We also thank all members of the TMD Topical Collaboration for enlightening discussions. Krzysztof Cichy is supported by the National Science Centre (Poland) Grant SONATA BIS no. 2016/22/E/ST2/00013. Martha Constantinou acknowledges financial support by the U.S. Department of Energy, Office of Nuclear Physics, within the framework of the TMD Topical Collaboration, as well as by the National Science Foundation under Grant no. PHY-1714407.
In the remainder of the paper, the standard relativistic normalization is always assumed and the states are labeled with the hadron momentum and other labels, if necessary.
For simplicity, we neglect possible mixings under factorization.
The Dirac structure was, in the original papers, also in the same direction, i.e.
The term “factorizable matrix elements” is also employed [
For proper treatment thereof, see Section
Effective from this year, the European Twisted Mass Collaboration has officially changed its name to Extended Twisted Mass Collaboration, as it comprises now members also from non-European institutions. Along with the name change, there is a new logo.
Note that the same abbreviation is used in phenomenological analyses for the corrections due to a non-zero mass of the target in scattering experiments.
not to be confused with the symbol
We remind the Reader that prior to 2018 all available lattice data in the literature corresponded to the “
Preliminary results have been presented last year [
After the submission of this manuscript, a complete calculation was presented in Ref. [
The complete work appeared after the submission of this manuscript, in Ref. [