Multiparticle Production in High Energy Collisions

1 Institute of Theoretical Physics, Shanxi University, Taiyuan, Shanxi 030006, China
2 Department of Chemistry, Stony Brook University, Stony Brook, NY 11794-3400, USA
3 Physics Department, Faculty of Science, Sana'a University, P.O. Box 1247, Sana'a, Yemen
4 Physics Department, Science & Arts Faculty (Girls Division), Najran University, P.O. Box 1988, Najran, Saudi Arabia
5 Physics Department, Banaras Hindu University, Varanasi 221 005, India

Nuclear collisions at relativistic energies offer the right kind of environment to explore a variety of phase transitions of hot and dense nuclear matter and to enhance our knowledge of the formation and decay of highly excited nuclear matter. The compression of nuclear matter and its subsequent expansion result in the production of particles, along with the disassembly of the expanded nuclear system through multiparticle production. Multiparticle production is a "first-day" research topic in such collisions and is related to the state of deconfined quarks and gluons (the quark gluon plasma (QGP)) predicted by quantum chromodynamics (QCD). Multiparticle production is especially concerned with the statistical properties of global observables, the dynamical evolution of the interacting system, and various distributions and correlations.
From fixed target experiments to collider experiments, multiparticle production research covers various collisions over an energy range from GeV to TeV. Previously, a few accelerators provided hadron and heavy ion beams for studies of multiparticle production. Presently, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) and the Large Hadron Collider (LHC) at the European Laboratory for Particle Physics (CERN) provide proton and heavy ion beams for such studies.
This special issue concerns many topics in the multiparticle production in high energy collisions, for example, multiplicity distributions and correlations, rapidity or pseudorapidity distributions and correlations, transverse momentum distributions and correlations, anisotropic flow effects and correlations, statistical and dynamical fluctuations, final-state distributions and dynamical evolution, final-state distributions and statistical behaviors, and others.
The paper "Charged hadron multiplicity distribution at relativistic heavy ion colliders" reviews the facts and problems concerning charged hadron production in high energy collisions. The main emphasis is on the qualitative and quantitative description of the general characteristics and properties observed for charged hadrons produced in such collisions. Various features of the available experimental data, including the variation of the charged hadron multiplicity and pseudorapidity density with the mass number of the colliding nuclei, the center-of-mass energy, and the collision centrality obtained from heavy ion collider experiments, are interpreted in the context of various theoretical concepts and their implications. Several important scaling features observed in the measurements, mainly at the RHIC and LHC experiments, are highlighted in view of these models to draw some insight into the particle production mechanism in heavy ion collisions.
The paper "Particle production in ultrarelativistic heavy-ion collisions: a statistical-thermal model review" presents the current status of various thermal and statistical descriptions of particle production in ultrarelativistic heavy ion collision experiments. The formulation of various types of thermal models of a hot and dense hadron gas, and the methods used to implement the interactions between hadrons, are discussed. Meanwhile, the authors' new, thermodynamically consistent excluded-volume model is presented. The modeling results are compared with the experimental data on various ratios of the produced hadrons. Some new universal conditions, various transport properties, and different particle spectra are obtained.
The paper "Meson production in high energy p + p collisions at the RHIC energies" studies the transverse momentum spectra of mesons produced in proton-proton collisions in the framework of a thermalized cylinder model, now renamed the multisource thermal model. It is shown that, in the region of high transverse momentum, the considered distributions have a tail at the maximum RHIC energy. A two-component distribution based on the improved cylinder model is used to fit the experimental data of the PHENIX Collaboration. The improved approach describes the meson production well over a wider range of transverse momenta.
In the paper "Charged-hadron pseudorapidity distributions in p-p and Pb-Pb collisions at LHC energies," the authors study the pseudorapidity distributions of charged hadrons produced in proton-proton and lead-lead collisions measured by the CMS and ALICE Collaborations at LHC energies. An improved Tsallis distribution in the two-cylinder model is used to describe the pseudorapidity spectra. In the study, the rapidity shift in the longitudinal direction in the geometrical picture of the collisions is considered. The calculated results are shown to be in agreement with the experimental data. The gap between the projectile cylinder and the target cylinder increases with centrality; meanwhile, the rapidity shifts in the cylinders also increase with centrality.
In the paper "Wavelet analysis of shower track distribution in high-energy nucleus-nucleus collisions," the authors perform a continuous wavelet analysis for pattern recognition of charged particles produced in high energy silicon- and sulphur-induced heavy ion interactions in nuclear emulsion and try to identify collective behavior in multiparticle production. The wavelet results are compared with predictions of the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model, in which a charge reassignment algorithm is adopted to modify the UrQMD events so as to mimic the Bose-Einstein type of correlation among identical mesons. Statistically significant deviations between experiment and simulation are interpreted in terms of nontrivial dynamics of multiparticle production.
In the paper "Entropy analysis in relativistic heavy-ion collisions," the authors study entropy creation in multiparticle systems by analyzing the experimental data on ion-ion collisions at AGS and SPS energies. Their results are compared with those predicted by multiphase transport and correlation-free Monte Carlo models. Some interesting results are obtained. Entropies produced in limited- and full-phase space are observed to increase with increasing beam energy. The entropy values, normalized to the maximum rapidity and plotted against pseudorapidity (with the bin width also normalized to the maximum rapidity), are found to be energy independent, exhibiting a kind of entropy scaling. Such scaling is observed in the full phase space as well as in regions confined to the forward or backward hemispheres.
The paper "On current conversion between particle rapidity and pseudorapidity distributions in high energy collisions" discusses the conversion between particle rapidity and pseudorapidity distributions. It is shown that the two equivalent conversion formulas currently used in experimental and theoretical analyses are incomplete. A revision of the current conversion between particle rapidity and pseudorapidity distributions is given.
The paper "On antiproton production in 158 GeV/c proton-carbon collisions and nuclear temperature of interacting system" analyzes the antiproton production process in high energy proton-carbon collisions by using the multisource thermal model. The transverse momentum, Feynman variable, and rapidity distributions of antiprotons in the nucleon-nucleon center-of-mass system are calculated. The modeling results are compared with, and found to be in agreement with, the experimental data measured by the NA49 Collaboration at 158 GeV/c beam momentum.
This issue brings together a collection of research papers on multiparticle production in high energy collisions. We hope it will be a useful issue for researchers working in related areas. Meanwhile, we regret that some manuscripts submitted for publication in this issue could not be accepted on the basis of the reviewers' reports.

Introduction
Quantum chromodynamics (QCD), the basic theory describing the interactions of quarks and gluons, is a firmly established microscopic theory of high energy collision physics. Heavy-ion collision experiments provide us with a unique opportunity to test the predictions of QCD and simultaneously to understand the two facets of the high energy collision process: hard processes (i.e., small cross-section physics) and soft processes (i.e., large cross-section physics) [1][2][3]. Nuclear collisions at very high energies, such as collider energies, enable us to study a novel regime of QCD in which the parton densities are high and the strong coupling between partons is small, decreasing further as the distance between the partons decreases. The parton densities in the initial stage of the collision can be related to the density of charged hadrons produced in the final state. With increasing collision energy, the role of hard processes (minijet and jet production) in final-state particle production rapidly increases and offers a unique opportunity to investigate the interplay between various effects. In this scenario, perturbative QCD (pQCD) provides a good basis for high energy dynamics and has achieved significant success in describing hard processes occurring in high energy collisions, such as scaling violation in deep inelastic scattering (DIS) [4], hadronic-jet production in e+e- annihilation [5,6], and large-pT jet production in hadron-hadron collisions [7][8][9][10][11]. On the other hand, in soft processes, such as hadron production at sufficiently small transverse momentum in hadronic and nuclear collisions, the interactions become so strong that pQCD is no longer applicable. There is as yet no workable theory for the nonperturbative QCD regime which can successfully describe these soft processes.
Due to the inapplicability of pQCD in this regime, phenomenological models based on experimental input have proven to be an alternative tool for increasing our knowledge of the basic dynamics involved in such collision processes. Furthermore, the soft hadrons which decouple from the collision zone in the late, hadronic freeze-out stage of the evolution are quite useful in providing indirect information about the early stage of the collision. A wealth of experimental information on multiparticle production in lepton-hadron, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions has been accumulated in the recent past over a wide range of energies. In this context, the bulk features of multiparticle production, such as the average charged particle multiplicity and the particle densities, are of fundamental interest, as their variations with the collision energy, impact parameter, and collision geometry are very sensitive to the underlying mechanism of the nuclear collisions.

These can also throw more light on the partonic structure of the colliding nuclei. In order to understand the available experimental data, much effort has been put into theoretical and phenomenological models. However, in the absence of any well-established alternative, the open problem of the production mechanism of charged hadrons continues to facilitate the proliferation of models. Development in this direction is still in a state of flux, with the same physical phenomenon described using different concepts and modes of operation. Most of these theoretical models are based on geometrical, hydrodynamical, and statistical approaches. However, the diverse nature of the experimental data poses a major challenge to the physics community: to uncover systematics or scaling relations common to all types of reactions. Thus, the search for a universal mechanism of charged hadron production common to hadron-hadron, hadron-nucleus, and nucleus-nucleus interactions continues, and a profound effort is still needed before any firm conclusion can be drawn. Moreover, the complicated many-body interactions occurring in these collision processes make it quite difficult to obtain a clear understanding of the phenomena from the experimental data on multiparticle production in the final state. In this regard, ongoing efforts toward extensive analysis of the available experimental data on charged hadron production in view of some successful phenomenological models can provide the much needed insight for developing a better understanding of the mechanism involved in particle production. These can also be useful in revealing the properties of the nuclear matter formed at extreme conditions of energy and matter density.
In this review, we attempt to give a succinct description of most of the progress made in this field to date, even though that is not an easy task. Furthermore, we believe that the references mentioned in this review will surely guide the readers, but we cannot claim that they are complete. We apologize to those authors whose valuable contributions to this field have not been properly mentioned.
The structure of this paper is as follows. In Section 2, we begin with a brief, systematic description of the different models used in this review for the study of charged hadron production. In Section 3, the experimental results on charged hadron production at collider energies are presented, along with a comparison of the different model results. Further, in Section 4, we provide some scaling relations for charged hadron production and evaluate them on the basis of their universality in different collisions.

2.1. Wounded Nucleon Model.
In 1958, Glauber presented his first collection of various papers and unpublished work [12]. Before that, there were no systematic calculations treating the many-body nuclear system as either projectile or target. Glauber's work put the quantum theory of collisions of composite objects on a firm basis. In 1969 Czyz and Maximon [13] applied Glauber's theory in its most complete form to proton-nucleus and nucleus-nucleus collisions. Using Glauber's theory, Bialas et al. [14] first proposed the wounded nucleon model, based on the assumption that the inelastic collision of two nuclei can be described as an incoherent composition of the collisions of the individual nucleons of the colliding nuclei. In this approach, the collective effects which may occur in nuclei are neglected. According to this assumption, in nucleon-nucleus collisions a fundamental role is played by the mean number of collisions ν suffered by the incident nucleon with the nucleons of the target nucleus [15]. Similarly, a nucleus-nucleus collision is described in terms of the number of wounded nucleons (W). For nucleon-nucleus collisions there is a simple relation between ν and the number of wounded nucleons w,

w = ν + 1, (1)

but no such relation exists for nucleus-nucleus collisions. Motivated by the data available on nucleon-nucleus (N-A) interactions [16], the average multiplicity approximately follows the formula

⟨n⟩_hA = (1/2) w ⟨n⟩_NN, (2)

where ⟨n⟩_NN is the average multiplicity in nucleon-nucleon collisions. For nucleus-nucleus (A-B) collisions the generalization of this picture implies that the average multiplicity is

⟨n⟩_AB = (1/2) W ⟨n⟩_NN, (3)

where the number of wounded nucleons (participants) in the collision of A and B is the sum of the wounded nucleons in nucleus A and nucleus B; that is, W = W_A + W_B, with W_A = A(σ_NB/σ_AB) and W_B = B(σ_NA/σ_AB). Extensions of the Glauber model were used to describe elastic, quasi-elastic, and total cross-sections [13,[17][18][19][20][21]].
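The counting in relations (1)-(3) can be sketched numerically; this is a minimal illustration, and the input multiplicity and wounded-nucleon numbers below are hypothetical, chosen only for demonstration:

```python
# Illustrative evaluation of the wounded-nucleon multiplicity relations.
# All input numbers here are assumed values, not fitted ones.

def multiplicity_hA(nu, n_NN):
    """<n>_hA = (1/2) * w * <n>_NN, with w = nu + 1 wounded nucleons, Eq. (1)-(2)."""
    w = nu + 1
    return 0.5 * w * n_NN

def multiplicity_AB(W_A, W_B, n_NN):
    """<n>_AB = (1/2) * (W_A + W_B) * <n>_NN, Eq. (3)."""
    return 0.5 * (W_A + W_B) * n_NN

n_NN = 7.7  # assumed mean charged multiplicity in a nucleon-nucleon collision
print(multiplicity_hA(nu=3, n_NN=n_NN))      # w = 4 wounded nucleons
print(multiplicity_AB(100.0, 100.0, n_NN))   # W = 200 participants
```

Note that a nucleon-nucleon collision (ν = 1, w = 2) correctly reproduces ⟨n⟩_NN itself.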
In the wounded nucleon model, the basic entity is the nucleon-nucleon collision profile t(b), defined as the probability of an inelastic nucleon-nucleon collision at impact parameter b. Once the probability of a given nucleon-nucleon interaction is known, the probability of having n such interactions in a collision of nuclei A and B is given as [22]

P(n, b) = C(AB, n) [T_AB(b) σ_inel]^n [1 − T_AB(b) σ_inel]^(AB−n), (4)

where T_AB(b) is the thickness (overlap) function and σ_inel is the inelastic nucleon-nucleon cross-section. The total cross-section is given by

σ_AB = ∫ d²b {1 − [1 − T_AB(b) σ_inel]^(AB)}. (5)

The total number of nucleon-nucleon collisions is given by

N_coll(b) = A B T_AB(b) σ_inel. (6)

The number of participants (wounded nucleons) at a given impact parameter b is given by

N_part(b) = A ∫ d²s T_A(s) {1 − [1 − T_B(s − b) σ_inel]^B} + B ∫ d²s T_B(s − b) {1 − [1 − T_A(s) σ_inel]^A}, (7)

with T_A(s) = ∫ dz ρ_A(s, z) being the probability per unit transverse area of a given nucleon being located in the target flux tube of A or B, and ρ_A(s, z) being the probability per unit volume, normalized to unity, of finding the nucleon at location (s, z).
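The optical-Glauber expressions for N_coll and N_part above can be evaluated in a toy setup. The sketch below assumes Gaussian thickness functions (for which the overlap T_AB is analytic) and uses round, illustrative values for the width and cross-section rather than realistic nuclear profiles:

```python
import math

# Toy optical-Glauber evaluation of N_coll(b) and N_part(b) for two
# identical Gaussian nuclei; width and cross-section are illustrative only.

A = 197       # mass number (Au-like, for illustration)
a = 3.0       # Gaussian width of the normalized thickness function [fm]
sigma = 4.2   # inelastic NN cross-section [fm^2] (~42 mb)

def t(s2):
    """Normalized single-nucleus thickness function T(s), per fm^2 (s2 = |s|^2)."""
    return math.exp(-s2 / (2 * a * a)) / (2 * math.pi * a * a)

def n_coll(b):
    """N_coll(b) = A*B*sigma*T_AB(b); the Gaussian-Gaussian overlap is analytic."""
    t_ab = math.exp(-b * b / (4 * a * a)) / (4 * math.pi * a * a)
    return A * A * sigma * t_ab

def n_part(b, step=0.2, lim=12.0):
    """N_part(b) by brute-force integration over the transverse plane."""
    total, n = 0.0, int(2 * lim / step)
    for i in range(n):
        x = -lim + (i + 0.5) * step
        for j in range(n):
            y = -lim + (j + 0.5) * step
            ta = t(x * x + y * y)
            tb = t((x - b) ** 2 + y * y)
            # a nucleon of A is wounded if it interacts at least once with B
            wounded_a = ta * (1 - (1 - sigma * tb) ** A)
            wounded_b = tb * (1 - (1 - sigma * ta) ** A)
            total += (wounded_a + wounded_b) * step * step
    return A * total

print(n_coll(0.0), n_part(0.0))   # central collision
print(n_coll(8.0), n_part(8.0))   # more peripheral collision
```

As expected, N_part saturates below 2A for central collisions while N_coll keeps growing with A and σ.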
The rapidity density of particles in a nucleus-nucleus (A-B) collision is given by [23]

dN/dy = W_A F(y) + W_B F(−y), (8)

where y is the rapidity in the c.m. system and F(y) and F(−y) are the contributions from a single wounded nucleon in A and in B, respectively. Thus, consistency with (3) requires the normalization

∫ F(y) dy = ⟨n⟩_NN / 2. (9)

The model gives a good description of the data, with condition (9) being well satisfied, except at rapidities close to the maximal values [23]. It can be seen from (8) that for a nucleon-nucleon collision one has

(dN/dy)_NN = F(y) + F(−y), (10)

and thus for the ratio

R(y) = (dN/dy)_AB / (dN/dy)_NN = [W_A F(y) + W_B F(−y)] / [F(y) + F(−y)]; (11)

consequently, one has, at y = 0,

R(0) = (W_A + W_B)/2, (12)

which implies that the value of the ratio at midrapidity is fully determined by the number of wounded nucleons and is independent of the function F(y). Unlike the N_part scaling observed in the total charged hadron multiplicity, the pseudorapidity density at midrapidity does not scale linearly with N_part/2 [22]. It was conjectured [24][25][26][27][28] that at sufficiently high energy the particle production in nucleus-nucleus collisions would be dominated by hard processes. However, the gross features of particle production at CERN SPS energies were found to be approximately consistent [29] with the N_part scaling as accommodated by the wounded nucleon model. A better agreement with the data is found in a two-component model for estimating the pseudorapidity density, as shown in [30]. In A-A collisions hadron production from the two processes scales with N_part/2 (the number of nucleon participant pairs) and N_coll (the number of binary N-N collisions), respectively [30]. According to this assumption the pseudorapidity density of charged hadrons is given as

dN_ch/dη = n_pp [(1 − x) ⟨N_part⟩/2 + x ⟨N_coll⟩], (14)

where n_pp is the multiplicity density in nucleon-nucleon collisions and x quantifies the relative contributions of the two components arising from hard and soft processes. The fraction x corresponds to the contribution from hard processes, and the remaining fraction (1 − x) describes the contribution arising from soft processes.
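The two-component parameterization of Eq. (14) is simple enough to evaluate directly; in this sketch, the values of n_pp, x, N_part, and N_coll are assumed round numbers for a central heavy-ion event, not fitted ones:

```python
# Hedged sketch of the two-component form
# dN_ch/deta = n_pp * [ (1 - x) * Npart/2 + x * Ncoll ]   (Eq. (14)).
# All inputs below are illustrative assumptions, not fit results.

def dn_deta(n_pp, x, n_part, n_coll):
    soft = (1.0 - x) * n_part / 2.0   # scales with participant pairs
    hard = x * n_coll                 # scales with binary collisions
    return n_pp * (soft + hard)

# Central Au+Au-like numbers (illustrative): Npart = 350, Ncoll = 1000
print(dn_deta(n_pp=2.5, x=0.1, n_part=350, n_coll=1000))
```

Because N_coll grows much faster than N_part with centrality, even a small hard fraction x makes the centrality dependence of dN_ch/dη deviate from pure N_part/2 scaling.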

2.2. Wounded Quark Model.
The concept of constituent, or wounded, quarks has been widely used for many years to describe charged hadron production [31][32][33][34]. In the wounded quark picture, nucleus-nucleus collisions are effectively described in terms of the effective number of constituent quarks participating in the collision process, along with the effective number of collisions suffered by each of them.
Recently, the idea of wounded quarks was resurrected by Eremin and Voloshin [35], who modified the overlap function by tripling the nucleon density and introduced one additional parameter, the quark-quark interaction cross-section, which reproduced the data well. They further showed that the charged hadron density at midrapidity can be described well by the wounded quark model. The problem was further investigated in several papers [36][37][38][39][40] analyzing various spectra, SPS data, total multiplicities, and the energy deposition. De and Bhattacharyya [37] showed that the data on some identified hadrons favor the quark-participant scaling over the N_part scaling, whereas the pions do not agree well with such a scaling law. Recently, we proposed a wounded quark model [41] which is primarily based on the previous work of Singh et al. [42][43][44]. In this picture, during the collision a gluon is exchanged between a quark of the projectile (or first nucleus) and a quark of the target (or other colliding nucleus). The resulting color force is then stretched between them and the other constituent quarks as they try to restore the color-singlet behaviour. When two quarks separate, the color force builds up between them and the energy in the color field increases; the color tubes thus formed finally break up into new hadrons and/or quark-antiquark pairs. We consider a multiple-collision scheme in which a valence quark of the incident nucleon suffers one or more inelastic collisions with quarks of the target nucleons. In a nucleon-nucleon collision only one valence quark of each nucleon (i.e., of target and projectile) interacts, while the other quarks remain spectators [40]. Only part of the entire nucleon energy is spent on secondary production at midrapidity; the spectator quarks are responsible for forming hadrons in the nucleon fragmentation region.
In the case of nucleus-nucleus collisions, more than one quark per nucleon interacts, and each quark suffers more than one collision, since the large nuclear size makes a long travel path inside the nucleus available. If we search for a universal mechanism of charged particle production in hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions, it must be driven by the amount of energy available for secondary production, and it also depends on the mean number of participant quarks. The main ingredients of our model are taken from the papers by Singh et al. [42][43][44]. Charged hadrons produced in A-A collisions are assumed to result from a somewhat unified production mechanism common to p-p collisions at various energies. Based on the experimental findings of the PHOBOS Collaboration, Jeon and collaborators have shown that the total multiplicity obtained at RHIC can be bounded by a cubic logarithmic term in energy [45]. Therefore, we propose here a new parameterization involving a cubic logarithmic term, so that the entire p-p experimental data from low energies (i.e., from 6.15 GeV) up to the highest LHC energy (i.e., 7 TeV) can suitably be described as [41]

⟨n_ch⟩_pp = (a + b ln √s_a + c ln² √s_a + d ln³ √s_a) − α. (15)

In (15), α is the leading particle effect and √s_a is the available center-of-mass energy (i.e., √s_a = √s − m_p − m_t, where m_p is the mass of the projectile and m_t is the mass of the target nucleon, resp.); a, b, c, and d are constants derived from the best fit to the p-p data, and the value of α is taken here as 0.85 [41].
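A cubic-logarithmic fit of the type in Eq. (15) can be sketched as follows; since the fitted values of a, b, c, and d are not quoted here, the coefficients in this example are placeholders, with only α = 0.85 taken from the text:

```python
import math

# Evaluating the cubic-logarithmic parameterization of Eq. (15) for pp
# collisions. Coefficients a, b, c, d are hypothetical placeholders;
# alpha = 0.85 as stated in the text; m is the proton mass in GeV.

def n_ch_pp(sqrt_s, a, b, c, d, alpha=0.85, m=0.938):
    sqrt_sa = sqrt_s - 2 * m          # available c.m. energy for pp
    L = math.log(sqrt_sa)
    return (a + b * L + c * L**2 + d * L**3) - alpha

# Hypothetical coefficients, for demonstration only:
for s in (6.15, 200.0, 7000.0):
    print(s, n_ch_pp(s, a=1.8, b=0.4, c=0.15, d=0.02))
```

Any positive choice of b, c, d reproduces the qualitative feature the text relies on: a multiplicity growing faster than ln s but still bounded by a cubic logarithm.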
We can extrapolate the validity of this parametrization to the charged particles produced in hadron-nucleus interactions by considering the multiple collisions suffered by the quarks of the hadron in the nucleus. The constituent quarks which participate in a hadron-nucleus (h-A) collision share the total available center-of-mass energy √s_hA, and thus the energy available to each interacting quark becomes √s_hA / N_q^hA, where N_q^hA is the mean number of participating constituent quarks in h-A collisions. The total available squared center-of-mass energy s_hA in h-A collisions is related to s as s_hA = ν_q^hA s, with ν_q^hA the mean number of inelastic collisions of a quark with the target nucleus of atomic mass A. Within the framework of the Additive Quark Model [31,32,[46][47][48][49][50]], the mean number of collisions in hadron-nucleus interactions is determined by the ability of the constituent quarks of the projectile hadron to interact repeatedly inside the nucleus. Finally, the expression for the average charged hadron multiplicity in h-A collisions is [41][42][43][44]

⟨n_ch⟩_hA = N_q^hA [a + b ln(√s_hA / N_q^hA) + c ln²(√s_hA / N_q^hA) + d ln³(√s_hA / N_q^hA)] − α. (16)

The generalization of the above picture to nucleus-nucleus collisions is finally achieved as [41]

⟨n_ch⟩_AB = N_q^AB [a + b ln(√s_AB / N_q^AB) + c ln²(√s_AB / N_q^AB) + d ln³(√s_AB / N_q^AB)] − α. (17)

The parametrization in (17) thus relates nucleus-nucleus collisions to hadron-nucleus and further to hadron-proton collisions, and the values of the parameters a, b, c, and d remain unaltered, which shows its universality for all these processes.
In creating quark gluon plasma (QGP), greater emphasis is laid on central, or head-on, collisions of two nuclei. The mean multiplicity in central collisions can straightforwardly be generalized [41] from (17) by evaluating the mean numbers of participating quarks and of quark collisions at zero impact parameter (Eq. (18)). The pseudorapidity distribution of charged particles is another important quantity in the study of the particle production mechanism in high energy h-h and A-A collisions, which, however, is not yet properly understood. It has been pointed out that (dN_ch/dη) can be used to obtain information on the temperature (T) as well as the energy density (ε) of the QGP [51][52][53]. For the pseudorapidity density of charged hadrons, we first fit the experimental data on (dn_ch/dy)_{y=0} for collision energies ranging from low to very high. One should use a parameterization up to the squared logarithmic term, in accordance with [45]. Hence, using a parameterization for the central rapidity density,

(dn_ch/dy)_{y=0}^{pp} = a₁ + b₁ ln √s_a + c₁ ln² √s_a, (19)

we obtain the values of the parameters a₁ = 1.24, b₁ = 0.18, and c₁ = 0.044 from a reasonable fit to the p-p data [41]. Earlier, many authors attempted to calculate the pseudorapidity density of charged hadrons in a two-component model of parton fragmentation [54,55]. Its physical interpretation is based on a simple picture of hadron production: longitudinal projectile nucleon dissociation (soft) and transverse large-angle scattered parton fragmentation (hard). However, this assumption, which is based on nucleon-nucleon collisions in the Glauber model, is crude, and it looks unrealistic to relate participating nucleons and nucleon-nucleon binary collisions to the soft and hard components at the partonic level.
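With the quoted fit values a₁ = 1.24, b₁ = 0.18, and c₁ = 0.044, the quadratic-logarithmic central rapidity density can be evaluated directly; the identification of the available energy as √s − 2m_p for p-p follows the definition given earlier in the text:

```python
import math

# Central p-p rapidity density with the quoted fit constants
# a1 = 1.24, b1 = 0.18, c1 = 0.044; sqrt(s_a) = sqrt(s) - 2*m_p.

def dn_dy_pp(sqrt_s, a1=1.24, b1=0.18, c1=0.044, m=0.938):
    L = math.log(sqrt_s - 2 * m)
    return a1 + b1 * L + c1 * L * L

print(round(dn_dy_pp(200.0), 2))    # top RHIC energy, 200 GeV
print(round(dn_dy_pp(7000.0), 2))   # LHC, 7 TeV
```

The quadratic term dominates the growth at TeV energies, which is what distinguishes this form from a plain ln s fit.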
Here we modify the two-component model of pseudorapidity distributions in A-A collisions given in (14) for the wounded quark scenario, and we assume that the hard component, which basically arises due to multiple parton interactions [56], scales with the number of quark-quark collisions (i.e., with N_q ν_q) and the soft component scales with the number of participating quarks (i.e., with N_q). Thus, the expression for (dn_ch/dη)_{η=0} in A-A collisions can be parameterized, in analogy with (14), in terms of the p-p rapidity density as follows [41]:

(dn_ch/dη)_{η=0}^{AA} = (dn_ch/dη)_{η=0}^{pp} [(1 − x) N_q/2 + x N_q ν_q/2]. (20)

In order to incorporate the η dependence in central A-A collisions, we further extend the model by using a functional form in η with three fitting parameters a, b, and c (Eq. (21)), normalized such that (dn_ch/dη)_{η=0} is the central pseudorapidity density in A-A collisions obtained from (20).

2.3. Dual Parton Model.
The dual parton model was introduced at Orsay in 1979 by incorporating partonic ideas into the dual topological unitarization (DTU) scheme [57][58][59][60][61]. The dual parton model (DPM) and the quark gluon string model (QGSM) are multiple-scattering models in which each inelastic collision results in the superposition of two strings, and the weights of the various multiple-scattering contributions are given by a perturbative Reggeon field theory. One assumes a Poisson distribution for each string at fixed values of the string ends. The broadening of the distribution arises from fluctuations in the number of strings and fluctuations of the string ends. When the effect of the fluctuations of the string ends is negligibly small, DPM reduces to an ordinary multiple-scattering model with identical multiplicities in each individual scattering [62]. The inclusive spectrum of charged particles in p-p collisions is given by a sum over the multiple-scattering (string) contributions [63] (Eq. (22)); when all string contributions are identical, which means all individual scatterings are the same, this expression reduces to the simple form

dN/dη = ⟨n⟩ dN₀/dη, (23)

where ⟨n⟩ is the average number of inelastic collisions and dN₀/dη is the charged multiplicity per unit pseudorapidity in an individual N-N collision.
To calculate the weights for the occurrence of k inelastic collisions, a quasi-eikonal model has been used. The cross-section σ_k is given as follows [62]:

σ_k = (σ_P / kZ) [1 − e^(−Z) Σ_{i=0}^{k−1} Z^i / i!]. (24)

Here ξ = ln(s/s₀), σ_P = 8πγ exp(Δξ), and Z = 2Cγ exp(Δξ)/(R² + α′ξ). In (24), σ_P is the Born term given by Pomeron exchange with intercept α_P(0) = 1 + Δ. According to a well-known identity, known as the AGK cancellation [64], all multiple-scattering contributions vanish identically in the single-particle inclusive distribution dσ/dη, and only the Born term (σ_P) contribution is left. The parameters R² and α′ control the t-dependence of the elastic peak, and C contains the contribution of diffractive intermediate states. The total p-p cross-section in this prescription is [65]

σ_pp = Σ_k σ_k. (25)

Thus one can calculate the pseudorapidity distribution of charged hadrons from (22), using the expectation value

⟨n⟩ = Σ_k k σ_k / Σ_k σ_k. (26)

Further, the cross-section for ν inelastic collisions in p-A scattering, after considering the AGK cancellation, is as follows [62]:

σ_ν = C(A, ν) ∫ d²b [σ_pp T_A(b)]^ν [1 − σ_pp T_A(b)]^(A−ν). (27)

Here T_A(b) represents the nucleon profile function as defined in the wounded nucleon model. The first factor on the R.H.S. of (27) yields the number of ways in which ν interacting nucleons can be chosen out of A. The second factor is the probability that ν nucleons interact at fixed impact parameter b. The third factor is the probability of no interaction of the remaining A − ν nucleons. The multiplicity for p-A scattering in DPM is then given by

dN^{pA}/dη = ν̄ dN^{pp}/dη, (28)

with ν̄ = A σ_pp/σ_pA. Thus dN^{pA}/dη scales with the number of binary collisions. Further, the AGK cancellation implies that at midrapidity dσ^{AA}/dη = A² dσ^{pp}/dη, which implies that

dN^{AA}/dη = (A² σ_pp/σ_AA) dN^{pp}/dη. (29)

As a function of the impact parameter, the average number of binary nucleon-nucleon collisions N_coll(b) can be expressed as follows [62]:

N_coll(b) = A B σ_pp T_AB(b). (30)

Capella and Ferreiro [62] have also incorporated the corrections arising from shadowing effects by including the contribution of triple-Pomeron graphs.
The suppression in multiplicity from shadowing in A-A collisions for a particle produced at midrapidity can be obtained by replacing the nuclear profile function by the overlap T_AB(b) = ∫ d²s T_A(s) T_B(b − s) and multiplying the contribution of each binary collision by a shadowing-suppression factor; with this replacement, shadowed analogs of (29) and (30) can be obtained.

2.4. The Color Glass Condensate Approach.
The Color Glass Condensate [66][67][68][69][70][71][72][73][74][75][76][77][78][79][80][81][82] (CGC) is an effective theory that describes the gluon content of a high energy hadron or nucleus in the saturation regime. At high energies the nuclei are Lorentz-contracted and the gluon density inside the hadron wave functions increases; at small x, the gluon density is very large in comparison with all other parton species: the valence quarks and the sea quarks are suppressed by the coupling α_s, since they can be produced from the gluons by the splitting g → q q̄. Systems that evolve slowly compared to their natural time scales are generally glasses. The word "Color" is used because the CGC is composed of colored gluons. The word "Glass" is used because the classical gluon field is produced by fast-moving, essentially static, sources whose distribution is effectively frozen. The word "Condensate" is used because the gluon distribution has maximal phase-space density for momentum modes and the strong gluon fields are self-generated by the hadron. This effective theory approximates the description of the fast partons in the wave function of a hadron. The framework has been applied to a range of experiments, from DIS to proton-proton, proton-nucleus, and nucleus-nucleus collisions. One of the early successes of the CGC was the description of multiplicity distributions in DIS experiments [53]. When the nucleus is boosted to a large momentum, then, due to Lorentz contraction, the partons have to live on a thin sheet in the transverse plane. Each parton occupies the transverse area ∼ 1/Q² and can be probed with the cross-section σ ∼ α_s(Q²)/Q².
On the other side, the total transverse area of the nucleus is ∼ πR_A². Therefore, if the number of partons exceeds N ∼ Q²R_A²/α_s, they start to overlap in the transverse plane and to interact with each other, which prevents further growth of the parton densities. In this situation the transverse momenta of the partons are of the order of the "saturation scale" [30],

Q_s² ∼ α_s(Q_s²) (N/R_A²) ∼ A^(1/3).
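The A^(1/3) growth of the saturation scale follows purely from geometry, since R_A² ∼ A^(2/3); the sketch below makes that scaling explicit, with all prefactors absorbed into an arbitrary normalization Q0sq (a hypothetical constant, not a measured value):

```python
# Geometric illustration of Q_s^2 ~ A / R_A^2 ~ A^(1/3):
# with R_A^2 ~ A^(2/3), the parton density per unit transverse area,
# and hence the saturation scale, grows like A^(1/3).
# Q0sq is an arbitrary normalization absorbing all prefactors.

def q_sat_sq(A, Q0sq=0.5):
    return Q0sq * A ** (1.0 / 3.0)

for A in (1, 16, 197, 208):
    print(A, q_sat_sq(A))
```

For a heavy nucleus (A ≈ 200) the saturation scale squared is thus roughly six times larger than for a proton, which is why saturation effects are expected to set in earlier in nuclear collisions.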
The multiplicity of the produced partons should be proportional to the saturation scale [30]. In the first approximation, the multiplicity in this high density regime therefore scales with the number of participants. However, there is an important logarithmic correction to this scaling coming from the evolution of the parton structure functions with Q². The saturation scale is determined self-consistently through a relation of the form Q_s² ∝ α_s(Q_s²) xG(x, Q_s²) ρ_part [30], where N_c = 3 is the number of colors, xG(x, Q_s²) is the gluon structure function in the nucleus, and ρ_part is the density of participants in the transverse plane.
The number of produced partons is given as follows [30] in terms of c, the "parton liberation" coefficient accounting for the transformation of the virtual partons of the initial state into the on-shell partons of the final state. Integrating over the transverse coordinate and using (35), one obtains [30] N = c N_part xG(x, Q_s²).
Here c is assumed to be close to unity in the context of the local "parton-hadron duality" hypothesis [83], according to which, at k_⊥ ∼ Q_s, the distribution of produced hadrons mirrors that of the produced gluons. Using (35) and (37), one can evaluate the centrality dependence as follows [30], where Λ_QCD is the QCD scale parameter. The rapidity density can be evaluated in the CGC effective field theory (EFT) by using the following expression [84], where σ_in is the inelastic cross-section of the nucleus-nucleus interaction. Further, the differential cross-section of gluon production in an A-A collision is written as [84] E dσ/d³p ∝ (α_s/p_t²) ∫ dk_t² φ_A(x_1, k_t²) φ_B(x_2, (p − k)_t²), where x_{1,2} = (p_t/√s) exp(±η), with η the pseudorapidity of the produced gluon, and α_s is the running coupling evaluated at the scale Q² = max{k_t², (p − k)_t²}. φ is the unintegrated gluon distribution, which describes the probability to find a gluon with a given x and transverse momentum inside the nucleus. When k_t² > Q_s², φ corresponds to the bremsstrahlung radiation spectrum, falling like 1/k_t². In the saturation regime, φ ∼ S_A/α_s, where S_A is the nuclear overlap area.

Model Based on the Percolation of Strings

The physical situation described by the Glasma, a system of purely longitudinal fields in the region between the parting hadrons, is analogous to the one underlying the string percolation model (SPM). In the SPM one considers Schwinger strings, which can fuse and percolate [85][86][87][88], as the fundamental degrees of freedom. The effective number of strings, including percolation effects, is directly related to the rapidity density of produced particles. The SPM [89] for the distribution of rapidity-extended objects created in heavy-ion collisions combines the generation of lower center-of-mass-rapidity objects from higher-rapidity ones with asymptotic saturation, in the form of the well-known logistic equation of population dynamics, dρ/d(−Δ) = λρ(1 − ερ), where ρ ≡ ρ(Δ, Y) is the particle density, Y is the beam rapidity, and Δ ≡ |y| − Y. The variable −Δ plays the role of the evolution time, the parameter λ controls the low density evolution, and the parameter ε is responsible for saturation. The Y-dependent limiting value of ρ, determined by the saturation condition dρ/d(−Δ) = 0, is given by ρ_Y = 1/ε, whereas the separation between the region Δ > Δ₀ (i.e., low density and positive curvature) and the region Δ < Δ₀ (i.e., high density and negative curvature) is defined by d²ρ/d(−Δ)²|_{Δ₀} = 0, which gives ρ₀ ≡ ρ(Δ₀, Y) = ρ_Y/2. Integrating (46) between ρ₀ and some ρ(Δ), one obtains [89] ρ(Δ, Y) = ρ_Y/(e^{λ(Δ−Δ₀)} + 1).
The particle density in the SPM is proportional to ρ (i.e., dN/dy ∝ ρ); therefore we can write (48). To be more specific about the quantities λ, ρ_Y, and Δ₀: λ does not strongly depend on Y; the parameter ρ_Y is the normalized particle density at midrapidity and is related to the gluon distribution at small x, that is, ρ_Y ∝ e^{λY}. As ρ_Y increases with the beam rapidity Y, energy conservation arguments give Δ₀ = −γY with 0 < γ < 1, and (48) can be rewritten accordingly. The particle density is a function of both Δ and the beam rapidity Y, whereas, according to the hypothesis of limiting fragmentation, for Δ larger than some Y-dependent threshold the density remains a function of Δ only. In the Glauber model, energy-momentum conservation constrains the combinatorial factors at low energy, but in the framework of the SPM energy-momentum conservation is accounted for by reducing the effective number of sea strings, which yields (51). In the string percolation model, the charged particle multiplicity at midrapidity in proton-proton interactions, (dN_ch/dη)_{η=0}, is given by a power law in the collision energy [90]. Equation (54) involves three parameters: the normalization constant κ, the threshold scale √s₀, and the power λ. The values of the fitting parameters obtained from the best fit to the pp and p̄p data are κ = 0.63 ± 0.01, λ = 0.201 ± 0.003, and the threshold scale √s₀ = 245 ± 29 GeV. In the SPM, the power law dependence of the multiplicity on the collision energy is the same in pp and A-A collisions. The pseudorapidity dependence in pp collisions is given by [91] the rapidity density multiplied by the Jacobian J = cosh η/√(μ² + 1 + sinh²η) of the rapidity-pseudorapidity transformation, with μ an effective mass-to-transverse-momentum ratio, and by the density normalized to its midrapidity value, ρ/ρ(η = 0).
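The logistic evolution described above is easy to check numerically. The sketch below uses illustrative parameter values (not SPM fits): it integrates dρ/d(−Δ) = λρ(1 − ερ) with simple Euler steps, starting from the inflection point, and compares the result with the closed-form solution quoted in the text.

```python
import math

# Illustrative (not fitted) SPM-style parameters -- assumptions for this sketch.
lam, eps = 1.2, 0.05      # low-density growth rate and saturation parameter
rho_Y = 1.0 / eps         # limiting density, where d(rho)/d(-Delta) = 0
delta0 = -2.0             # inflection point, rho(delta0) = rho_Y / 2

def rho_closed(delta):
    """Closed-form logistic solution rho = rho_Y / (exp(lam*(Delta - Delta0)) + 1)."""
    return rho_Y / (math.exp(lam * (delta - delta0)) + 1.0)

def rho_numeric(delta, dt=1e-4):
    """Euler integration of d(rho)/dt = lam*rho*(1 - eps*rho), with t = -Delta,
    starting from rho0 = rho_Y/2 at Delta = Delta0."""
    t, t_end = -delta0, -delta
    r = 0.5 * rho_Y
    n = int(abs(t_end - t) / dt)
    h = dt if t_end > t else -dt
    for _ in range(n):
        r += h * lam * r * (1.0 - eps * r)
        t += h
    return r

for d in (-6.0, -2.0, 0.0, 2.0):
    print(d, rho_closed(d), rho_numeric(d))
```

The density saturates at ρ_Y deep in the central region (Δ ≪ Δ₀) and falls off towards the fragmentation region, with the crossover at Δ₀.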

Charged Hadron Multiplicity Distributions
Here, we discuss various observed features of the measured charged hadron multiplicity in hadron-hadron, hadron-nucleus, and nucleus-nucleus interactions at different energies and compare them in view of the above-mentioned models.

Mean Multiplicity as a Function of √s

Feynman was the first to point out that the multiplicity spectrum observed in proton-proton collisions at asymptotically large energies becomes independent of the center-of-mass energy (√s) as √s → ∞ [92][93][94]. This assumption is known as Feynman scaling. He reached this conclusion primarily from phenomenological arguments about the exchange of quantum numbers between the colliding particles. This energy-independent behaviour of the height of the rapidity distribution around midrapidity naturally implies that the total multiplicity after integration over rapidity grows like ln √s, since y_max = ln(√s/m), where m is the nucleon mass. However, up to √s = 1800 GeV the experimental data do not indicate that the height of the rapidity distribution around midrapidity (i.e., (dN/dy)_{y=0}) saturates. Thus, Feynman scaling is violated by the continuous increase of the central rapidity density. Later on, it was realized that the gluons arising from gluon-bremsstrahlung processes give QCD-radiative corrections [95]. More recently, it was noticed by the PHOBOS collaboration from the pp and p̄p data that the central plateau height, (dN/dη)_{η=0}, grows as ln²√s, which in turn gives a ln³√s type behaviour of the total multiplicity [45,96,97]. The violation of Feynman scaling thus clearly shows that this type of scaling behaviour is not supported by experiment. Based on the above findings, we have included the ln³√s behaviour in (15) and have shown that it describes the pp and p̄p data quite successfully over the entire energy range available to date. In Figure 1, the inelastic (filled symbols) and nonsingle diffractive (NSD; open symbols) data on the charged hadron multiplicity in full phase-space for pp and p̄p collisions at various center-of-mass energies from different experiments, for example, ISR, UA5, and E735, are shown.
Inelastic data at very low energies (filled symbols) are used because NSD data are not available at these energies; moreover, the trend shows that the difference between inelastic and NSD data is very small at lower energies. This data set is fitted with three different functional forms in order to make a simultaneous comparison with our parameterization given in (15). The short-dashed line has the functional form a₂ + b₂ s^{1/4}, which is inspired by the Fermi-Landau model. It provides a reasonable fit to the data at higher √s with a₂ = 5.774 and b₂ = 0.948 [98]. However, since a₂ also summarizes the leading particle effect, its value should not be much larger than two. The dotted line represents the functional form a₁ + b₁ ln s + c₁ ln² s; it fits the data well at higher √s but shows a large disagreement with the experimental data at lower center-of-mass energies, with a₁ = 16.65, b₁ = −3.147, and c₁ = 0.334, respectively. The dashed-dotted line represents the form a₃ + b₃ s^{ε} and provides a qualitative description of the data with a₃ = 0, b₃ = 3.102, and ε = 0.178 [98]. The solid line represents the parameterization given by (15) according to [41]. In Figure 2, we present the inelastic (filled symbols) and nonsingle diffractive (NSD; open symbols) data on dN_ch/dη at midrapidity for pp and p̄p collisions at various center-of-mass energies from different experiments, for example, ISR, UA5, E735, RHIC, and LHC [99][100][101][102][103][104][105][106][107][108][109][110][111][112]. The solid line represents the parameterization used in (19) according to [41]. DPM results [62] for the charged hadron pseudorapidity density are shown by the dashed-dotted line for pp collisions.
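For orientation, the three fitted forms quoted above can be evaluated side by side. The sketch below assumes s is measured in GeV², as is usual for such fits, and uses the parameter values quoted in the text [98]; it is an illustration, not a refit.

```python
import math

# Fit parameters quoted in the text [98]; s is assumed to be in GeV^2.
def fermi_landau(sqrt_s):
    """a2 + b2 * s^(1/4), Fermi-Landau inspired form."""
    s = sqrt_s ** 2
    return 5.774 + 0.948 * s ** 0.25

def quadratic_log(sqrt_s):
    """a1 + b1*ln(s) + c1*ln^2(s)."""
    s = sqrt_s ** 2
    return 16.65 - 3.147 * math.log(s) + 0.334 * math.log(s) ** 2

def power_law(sqrt_s):
    """a3 + b3 * s^eps, with a3 = 0."""
    s = sqrt_s ** 2
    return 3.102 * s ** 0.178

for sqrt_s in (53.0, 546.0, 1800.0):   # ISR, SppS, Tevatron energies in GeV
    print(sqrt_s, fermi_landau(sqrt_s), quadratic_log(sqrt_s), power_law(sqrt_s))
```

At √s = 546 GeV all three forms give a full phase-space mean multiplicity of roughly 28-30 charged particles; the forms differ from one another mainly at low √s, which is where the quadratic-log fit degrades.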
In the DPM, larger multiplicity values are obtained at lower energies, which is not supported by the experimental data; the data instead show that an s^{0.11}-type energy dependence (dashed line) fits well over the entire energy range, including LHC energies. The results on hadron production in pp collisions using a saturation approach based on the CGC formalism are shown by the dotted line with error bands, while the KLN model results are shown by the dashed-dotted line with similar error bands [113]. Both approaches are based on k_T-factorization; the main difference lies in how the saturation scale of the nucleus is determined [113]. Figure 3 shows the variation of the mean multiplicity of charged hadrons produced in central Au-Au collisions with respect to √s_NN. We also compare our model results (solid line) with the standard Glauber model (dashed line) and the modified Glauber model (dash-dotted line) [114], along with the experimental data from AGS and RHIC [115][116][117]. We would like to mention here the main difference between the modified Glauber model and the standard Glauber calculation. In the standard Glauber model, a participating nucleon can have one or more collisions at the same √s, and a constant value of σ_inel(√s) is used in the calculation. In the modified Glauber model, however, a participating nucleon loses a fraction of its momentum after the first collision and participates in subsequent collisions with somewhat lower energy; a lower value of σ_inel(√s) is also used. This modification suppresses the number of collisions significantly in comparison to the standard Glauber model. We find that the results obtained in our model give a better description of the experimental data than both the standard and the modified Glauber model predictions [41]. This clearly shows the significance of the role played by the quark degrees of freedom used in our picture [41].
In Figure 4, the variation of the mean multiplicity of charged hadrons produced in Au-Au, Cu-Cu, and Pb-Pb collisions as a function of √s_NN in the wounded quark model is shown, along with a comparison with the experimental data [115][116][117][118].

Charged Hadron Multiplicity as a Function of Centrality.
The total charged hadron multiplicity in hadronic collisions is of great significance as it provides a direct measure of the degrees of freedom released in the collision process. The role of the system-size dependence of particle production in high energy nuclear collisions is of prime interest (the experimental data shown are taken from [118,119]) and is mainly studied either by varying the size of the colliding nuclei or by varying the centrality of the collision event. The collision centrality directly controls the volume of the collision zone formed in the initial stage, as well as the number of binary collisions suffered by each nucleon. Furthermore, studying the centrality dependence at varying collision energy also reflects the contributions of the soft and hard processes involved in the particle production mechanism. In Figure 5, a compilation of the measured total charged hadron multiplicity as a function of centrality (⟨N_part⟩) in nucleus-nucleus (A-A) collisions at various energies is shown. The experimental data clearly show a nontrivial growth of the multiplicity with the atomic number of the colliding nuclei, and the dependence of the multiplicity on collision centrality and energy is also clearly visible. The main feature of total particle production in Au-Au, Cu-Cu, and Pb-Pb collisions is the direct proportionality of the total charged hadron multiplicity to the number of participant nucleon pairs, N_part/2. This proportionality becomes stronger with increasing collision energy (observed as increasing slopes of the curves) for each colliding system. The total multiplicity of charged hadrons per participant pair as a function of centrality in d-Au, Au-Au, and Cu-Cu collisions at RHIC energies is shown in Figure 6. The total charged hadron multiplicity scales with ⟨N_part⟩ in d-Au, Au-Au, and Cu-Cu collisions, indicating that the transition between inelastic nucleon-nucleon and A-A collisions is not controlled simply by the number of participants, as even very central d-Au collisions do not show any sign of trending up towards the level of the Au-Au data [125].
For a suitable comparison of the model results in the most central (0-6%) case in Au-Au and Cu-Cu collisions, we have taken the average of the RHIC multiplicity data [120] in the first two centrality bins (0-3% and 3-6%). In Table 1, we present the wounded quark model results for the total charged hadron multiplicities as a function of centrality in Au-Au collisions at three RHIC energies (√s_NN = 62.4, 130, and 200 GeV). We have also made a comparison with the experimental data [120] at the respective RHIC energies and found reasonable agreement within the experimental errors. Similarly, in Table 2, the wounded quark model results for the total charged hadron multiplicities as a function of centrality in Cu-Cu collisions at three RHIC energies (√s_NN = 22.4, 62.4, and 200 GeV) are shown and compared with the experimental data [120] at the respective RHIC energies. Thus, the centrality dependence of charged hadron production for different colliding nuclei (Au-Au and Cu-Cu) at RHIC energies is well described by the wounded quark model, which clearly suggests that it can be used reliably to explain the multiplicity data for other colliding nuclei too.
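The centrality variable ⟨N_part⟩ used throughout this section is itself model-dependent and is usually obtained from a Monte Carlo Glauber calculation. The following is a minimal sketch of such a calculation; the Woods-Saxon parameters for Au and the nucleon-nucleon cross-section at √s_NN = 200 GeV are typical literature values used for illustration, not those of any specific experiment.

```python
import math
import random

rng = random.Random(7)

# Typical Woods-Saxon parameters for Au-197 (fm) -- illustrative values.
A, R0, a0 = 197, 6.38, 0.535
SIG_NN = 4.2               # sigma_NN^inel ~ 42 mb = 4.2 fm^2 near 200 GeV
D2_MAX = SIG_NN / math.pi  # nucleons collide if transverse distance^2 < sigma/pi

def sample_nucleus():
    """Sample A nucleon positions from a Woods-Saxon density by rejection."""
    pos = []
    while len(pos) < A:
        r = 12.0 * rng.random()
        # rejection weight r^2 * rho(r); R0^2 bounds it from above here
        w = r * r / (1.0 + math.exp((r - R0) / a0))
        if rng.random() * (R0 * R0) < w:
            cost = 2.0 * rng.random() - 1.0
            phi = 2.0 * math.pi * rng.random()
            sint = math.sqrt(1.0 - cost * cost)
            pos.append((r * sint * math.cos(phi), r * sint * math.sin(phi)))
    return pos  # only transverse (x, y) coordinates are needed

def n_part(b):
    """Number of participants in one Au-Au event at impact parameter b (fm)."""
    nucl_a = [(x - b / 2.0, y) for x, y in sample_nucleus()]
    nucl_b = [(x + b / 2.0, y) for x, y in sample_nucleus()]
    hit_a, hit_b = set(), set()
    for i, (xa, ya) in enumerate(nucl_a):
        for j, (xb, yb) in enumerate(nucl_b):
            if (xa - xb) ** 2 + (ya - yb) ** 2 < D2_MAX:
                hit_a.add(i)
                hit_b.add(j)
    return len(hit_a) + len(hit_b)

mean_npart = sum(n_part(2.0) for _ in range(5)) / 5.0  # near-central events
print("mean N_part at b = 2 fm:", mean_npart)
```

Near-central Au-Au events (b ≈ 2 fm) give ⟨N_part⟩ in the low-to-mid 300s with these inputs, while peripheral events give far fewer participants; experiments map such calculations onto measured centrality percentiles.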

Charged Hadron Pseudorapidity Distributions

The pseudorapidity distribution of charged hadrons is another key quantity in studies of the particle production mechanism in high energy hadron-hadron and nucleus-nucleus collisions. The pseudorapidity density is a well defined quantity that is sensitive to the initial conditions of the system, that is, to parton shadowing, and to the effects of rescattering and hadronic final state interactions. The figures compare the model results with the data on the pseudorapidity distribution for all three colliding systems (Au-Au, Cu-Cu, and Pb-Pb) at RHIC and LHC energies.

Central Pseudorapidity Density as a Function of √s and of the Colliding Nuclei

The pseudorapidity density of charged hadrons provides relevant information on the temperature (T) as well as the energy density of the QGP. (Figure 9: variation of the pseudorapidity density of charged hadrons at midrapidity as a function of √s_NN for different colliding nuclei, along with a comparison of different model results [41,130].) Furthermore, the study of the dependence of the charged hadron density
at midrapidity on the center-of-mass energy and the system size can provide relevant information on the interplay between hard parton-parton scattering and soft processes. At first sight, it looks logical to consider a nucleus-nucleus collision as an incoherent superposition of nucleon-nucleon collisions, as in the wounded nucleon model approach. However, recent multiplicity data at RHIC and LHC energies for proton-proton and nucleus-nucleus collisions indicate that nucleus-nucleus collisions are not simply an incoherent superposition of the collisions of the participating nucleons, since (dN_ch/dη)^{AA}_{η=0} > (N_part/2) · (dN_ch/dη)^{pp}_{η=0}. This hints at the role of multiple scattering in nucleon-nucleon collisions. Scaling with the number of binary collisions does not hold either, since (dN_ch/dη)^{AA}_{η=0} ≪ N_coll · (dN_ch/dη)^{pp}_{η=0}, which indicates the coherence effects involved in these collision processes [131]. At higher energies, the role of multiple scattering [132] becomes more significant in describing nucleus-nucleus collisions due to the contribution of hard processes. In Figure 9, the pseudorapidity densities of charged hadrons at midrapidity are shown as a function of the c.m. energy (√s_NN) for different colliding nuclei. The wounded quark model results (obtained from (20)), which are based on the two-component model, are shown along with the experimental data and are in quite good agreement, which clearly signifies the role of the hard and soft processes involved. The CGC model result for Au-Au collisions is shown with a dotted line [130].
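The interplay between participant and binary-collision scaling described above is often encoded in a two-component parameterization, dN_ch/dη = n_pp[(1 − x)·N_part/2 + x·N_coll], where x is the fraction of the hard, binary-scaling contribution. The numbers below are illustrative values for central Au-Au collisions at √s_NN = 200 GeV, assumed for this sketch rather than taken from the fits in the text.

```python
# Two-component (soft + hard) parameterization of the midrapidity density.
def dn_deta_two_component(n_pp, x, n_part, n_coll):
    """dN_ch/deta = n_pp * [(1 - x) * N_part / 2 + x * N_coll]."""
    return n_pp * ((1.0 - x) * n_part / 2.0 + x * n_coll)

# Illustrative inputs for central Au-Au at sqrt(s_NN) = 200 GeV (assumed values):
n_pp = 2.25        # pp dN_ch/deta at midrapidity
x = 0.11           # hard-scattering fraction
n_part, n_coll = 350, 1050

dn = dn_deta_two_component(n_pp, x, n_part, n_coll)
print("central Au-Au dN_ch/deta ~", round(dn))
```

With these inputs one obtains dN_ch/dη of roughly six hundred; the point of the exercise is that the result lies between pure N_part/2 scaling (which undershoots) and pure N_coll scaling (which vastly overshoots), as the inequalities in the text require.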

N_part and N_q Scaling Observed in Charged Hadron Multiplicity

The energy dependence of the charged hadron pseudorapidity density at midrapidity, normalized by the number of participant pairs, for the most central collision events is shown in Figure 10 for colliding systems of varying sizes (SPM [90], CGC, and KLN model [113] results are also shown there). The energy dependence of the normalized pseudorapidity density is found to follow
a logarithmic trend for A-A collisions up to the highest RHIC energy of 200 GeV, which can be well described by a logarithmic function of √s_NN [118]. However, the measurements in Pb-Pb collisions at 2.76 TeV at the LHC exceed this logarithmic dependence. The logarithmic behaviour describes the Au-Au and Pb-Pb data quite successfully, with a little disagreement for the Cu-Cu data. A better description of the energy dependence in A-A collisions is given by a power law of the form s_NN^{0.15} [118,133], which is quite successful in describing the data at RHIC and the LHC but fails at lower energies, overestimating the data below 17.3 GeV [118]. SPM results are shown for Au-Au and Cu-Cu with dashed and dotted lines, respectively. CGC and KLN model results are shown by dashed-dotted and dotted lines with error bands [113]. In Figure 11, the scaling behaviour of the charged hadron pseudorapidity density at midrapidity is studied in the wounded quark scenario over the entire range of energies, from the lowest AGS energy √s_NN = 2.4 GeV to the largest LHC energy √s_NN = 2.76 TeV. Solid points in the figure are the experimental pseudorapidity density data at midrapidity normalized by the number of participating quarks calculated in the wounded quark model. Open points are the wounded quark model results, which are well described by an energy-dependent functional form [41]. (Figure 11: energy dependence of the charged hadron mid-pseudorapidity density per participating quark. Open triangles are wounded quark model results; the solid line is the energy-dependent functional form of the wounded quark model [41].)
A small deviation from this functional form can be clearly seen for Cu-Cu collisions at 19.6 and 62.4 GeV, which arises from the difference in the number of quark collisions with respect to Au-Au collisions at the same energy and signifies the role of the hard processes involved.

Scaled Pseudorapidity Density in A-A Collisions as a Function of Centrality

In heavy-ion collisions, the nuclei, being extended objects, collide at various impact parameters, depending on the extent of the overlap region of the two colliding nuclei. The extent of this overlap region formed during the collision is generally referred to as the centrality of the interaction event. It serves as an appropriate tool for comparing the measurements performed in collider experiments with the theoretical calculations available to date. The midrapidity density of charged hadrons normalized to the number of participant pairs (⟨N_part⟩/2) for a variety of colliding systems (Au-Au, Cu-Cu, and Pb-Pb) at different energies is shown as a function of centrality (⟨N_part⟩) in Figure 12. The geometrical scaling model of Armesto et al. [134], with a strong dependence of the saturation scale on the nuclear mass and collision energy, explains the Au-Au multiplicity data quite well and establishes a factorization of the energy and centrality dependences in agreement with the data, but at the LHC energy (2.76 TeV) it predicts a rather weak variation with centrality. On the other hand, the string percolation model (SPM) [89,90], driven by the same power law behaviour in both pp and A-A collisions, provides a suitable description of the existing multiplicity data at both RHIC (for Cu-Cu and Au-Au) and LHC (for Pb-Pb) energies in a consistent manner. CGC model results at RHIC energies (130 and 200 GeV) and the LHC energy (2.76 TeV) are shown in Figure 12 by the dashed-dotted lines. Geometrical scaling (GS) model results [134][135][136] at RHIC energies and the LHC energy (2.76 TeV) are shown by the dotted lines. (Figure 12: centrality dependence of the scaled mid-pseudorapidity density of charged hadrons for different colliding nuclei at RHIC and LHC energies [118,137]. Solid lines are the SPM results [90], dash-dotted lines are CGC model results, and dotted lines are GS results [134][135][136].)

Scaling Features of Charged Hadron Multiplicity in High Energy Collisions

Scaling laws in high energy collisions are of great importance, as they describe cases where a physical observable depends solely on a combination of certain physical parameters. Such a scaling law is often useful because it provides information about the underlying dynamics of the observed phenomenon, whereas a violation of the scaling law for certain values of the scaling parameters is a strong indicator of a new physical phenomenon occurring in these collisions.

KNO Scaling.
The energy dependence of the multiplicity distribution observed in high energy collisions for a variety of colliding systems is one of the primary issues in the search for systematics or scaling in multiparticle production. Based on the assumption of Feynman scaling for the inclusive particle production cross-section at asymptotic energies, Koba, Nielsen, and Olesen (KNO) proposed in 1972 a scaling property of the multiplicity distribution [138], according to which the normalized multiplicity distribution of charged particles should become energy independent at asymptotic energies. According to KNO scaling, the probability P(n) of producing n charged particles in the final state is related to a scaling function ψ as [138] P(n) = σ_n/σ_tot = (1/⟨n⟩) ψ(z), where ψ(z) is a universal function independent of energy and the variable z = n/⟨n⟩ is the normalized multiplicity; σ_n and σ_tot correspond to the n-particle production and total cross-sections, respectively. Thus, rescaling P(n) measured at different energies, that is, stretching or shrinking the axes by the average multiplicity ⟨n⟩, should make the rescaled curves coincide with one another. However, the experimental data show only an approximate KNO scaling behaviour of the multiplicity distributions in the different energy regions, and hence Buras et al. [139] proposed a modified KNO scaling with the variable z = (n − α)/⟨n − α⟩, where the parameter α depends solely on the reaction. In order to describe the properties of the multiplicity distributions of final state particles at different energies, it is more convenient to study their moments. KNO scaling also implies that the reduced multiplicity moments C_q = ⟨n^q⟩/⟨n⟩^q are energy independent, where q is the order of the moment. Indeed, KNO scaling was found to be roughly valid up to the highest ISR energies [99,140], but a violation of KNO scaling was first observed in the UA5 data on p̄p collisions at √s = 546 GeV [141] at the CERN collider, which provoked a great deal of discussion in the physics community.
A high multiplicity tail and a change of slope [102,142] observed in the distribution were interpreted as evidence for a multicomponent structure of the final state [143,144]. Later on, a comparative study of charged particle multiplicity distributions [142] from nonsingle diffractive inelastic hadronic collisions at √s = 30 GeV to 1800 GeV, including the UA5 (SPS) and E735 (Tevatron) data, again confirmed the deviations from the so-called KNO scaling behaviour, in the form of a shoulder-like structure that clearly appears in the collider data. This shoulder-like structure arises from the superposition of the distribution of particles from some process different from the KNO-producing process, incoherently superimposed on top of it [145]. A strong linear increase of the moments with energy and a strong KNO scaling violation at √s = 7 TeV, in the form of an observed change of slope in ψ(z), confirm these earlier measurements. Recently, Capella and Ferreiro [65] computed pp multiplicity distributions at the LHC in the framework of a multiple-scattering model (DPM), as shown in Figures 13 and 14. Multiple-scattering models do not obey KNO scaling. Indeed, the multiple scattering contributions, which give rise to long-range rapidity correlations, become increasingly important as √s increases, and since they contribute mostly to high multiplicities, they lead to KNO multiplicity distributions that get broader with increasing √s. On the other hand, the Poisson distributions of the individual scatterings lead to short-range rapidity correlations and give rise to KNO multiplicity distributions that get narrower with increasing √s. Due to the interplay of these two components, the energy dependence of the KNO multiplicity distribution (or of its normalized moments) depends crucially on the size of the rapidity interval [146].
For large rapidity intervals the multiple-scattering effect dominates and the KNO multiplicity distributions get broader with increasing √s. For small intervals the effect of the short-range component increases, leading to approximate KNO scaling up to z ∼ 6. We have shown that the above features are maintained up to the highest LHC energy and that, for a given pseudorapidity interval (η₀ = 2.4), the rise of the KNO tail starts at a value of z that increases with energy.
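The (approximate) KNO collapse is easy to illustrate numerically. The sketch below is an illustration rather than a fit to any data set: it uses negative binomial multiplicity distributions (which describe hadronic multiplicities reasonably well, as discussed later in this review) with a fixed shape parameter k and different mean multiplicities standing in for different energies; the reduced moments C_q = ⟨n^q⟩/⟨n⟩^q then come out nearly independent of ⟨n⟩.

```python
import math

def nbd_pmf(n, mean, k):
    """Negative binomial P(n) with mean <n> and shape parameter k."""
    lg = (math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
          + n * math.log(mean / (mean + k)) + k * math.log(k / (mean + k)))
    return math.exp(lg)

def c_moment(q, mean, k, nmax=2000):
    """Reduced moment C_q = <n^q>/<n>^q of the NBD, summed numerically."""
    num = sum((n ** q) * nbd_pmf(n, mean, k) for n in range(nmax))
    den = sum(n * nbd_pmf(n, mean, k) for n in range(nmax)) ** q
    return num / den

# Same shape k, different "energies" (mean multiplicities):
for mean in (10.0, 20.0, 40.0):
    print(mean, c_moment(2, mean, 3.0), c_moment(3, mean, 3.0))
```

For the NBD, C₂ = 1 + 1/⟨n⟩ + 1/k, so the residual ⟨n⟩ dependence dies away as ⟨n⟩ grows: KNO scaling holds asymptotically. An energy-dependent k, by contrast, breaks the collapse, which is how the observed violations are usually parameterized.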
All these observations, together with the sizable growth of the measured nondiffractive inelastic cross-sections with increasing energy, clearly indicate the importance of multiple hard, semihard, and soft partonic subprocesses in high energy inelastic hadronic collisions [147][148][149][150].

Intermittency.
Remarkably intense experimental and theoretical activity has gone into the search for scale invariance and fractality in soft hadron production processes. In this search, many efforts have investigated all types of reactions, ranging from e⁺e⁻ annihilation to nucleus-nucleus collisions at different energies. In this context, self-similarity (or intermittency) is a power law behaviour of a function, f(x) ∼ x^a, which implies the scaling property f(λx) ∼ λ^a f(x). Since scaling is the underlying property of critical phenomena, it is natural to think of a phase transition wherever scaling (or self-similarity) is found. Self-similarity is also connected with a fractal pattern of the structure. Bialas and Peschanski [151,152] introduced a formalism to study nonstatistical fluctuations as a function of the size of the rapidity interval by using normalized factorial moments of order q. Scale invariance or fractality would then manifest itself as a power law behaviour of the scaled factorial moments of the multiplicity distribution in such a domain. With the rapidity range divided into M bins of width δy, the scaled factorial moments are defined as F_q(δy) = (1/M) Σ_{m=1}^{M} ⟨n_m(n_m − 1) ⋯ (n_m − q + 1)⟩/⟨n_m⟩^q, where n_m is the multiplicity in bin m and ⟨n⟩ = Σ_{n=1}^{∞} n P_n. Intermittency is defined as the scale invariance of the factorial moments with respect to changes in the size of the phase-space cells or bins: for small enough δy, F_q(δy) ∝ (δy)^{−φ_q}. This corresponds to wide, nonstatistical fluctuations of the unaveraged rapidity distribution at all scales; such behaviour is a manifestation of the scale invariance of the physical process involved. In Figures 15 and 16, the intermittent behaviour of charged hadrons produced in nucleus-nucleus interactions is shown in rapidity- and azimuthal-space [153]. The slope φ_q of ln F_q versus −ln δy at a given positive integer q is called the intermittency index or intermittency slope; it provides an opportunity to characterize the apparently irregular fluctuations of the particle density. The term intermittency was chosen in analogy with intermittent temporal and spatial fluctuations in turbulence. If there is no dynamical contribution to the multiplicity fluctuations, such as one due to a phase transition or any other mechanism, the bin multiplicities should exhibit Poisson fluctuations, reflecting only statistical noise. The strength of the dynamical fluctuations varies from event to event due to differences in the initial conditions; in the purely statistical case, F_q would show no dependence on ⟨n⟩ and therefore none on the phase-space bin size δy. The scaled factorial moments (SFMs) filter out the statistical noise present in events with finite multiplicity. Moreover, the method of SFMs is well suited to investigating multiparticle correlations on small scales.
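The noise-filtering property of the SFMs can be demonstrated with synthetic events (an illustration, not experimental data): for independently emitted particles the moments stay near unity at every resolution, while short-range clustering makes F₂ grow as the bins get finer, which is the intermittency signal.

```python
import math
import random

rng = random.Random(11)

def poisson(mean):
    """Knuth's Poisson sampler (stdlib only)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def make_event(clustered):
    """Particle 'rapidities' in [0, 1): uniform, or grouped in narrow clusters."""
    if not clustered:
        return [rng.random() for _ in range(poisson(20))]
    ys = []
    for _ in range(poisson(5)):                # ~5 clusters of 4 particles each
        c = rng.random()
        ys += [min(0.999, max(0.0, rng.gauss(c, 0.01))) for _ in range(4)]
    return ys

def f2(events, M):
    """Vertically averaged second scaled factorial moment for M bins."""
    fact, mean = [0.0] * M, [0.0] * M
    for ys in events:
        n = [0] * M
        for y in ys:
            n[int(y * M)] += 1
        for m in range(M):
            fact[m] += n[m] * (n[m] - 1)
            mean[m] += n[m]
    N = len(events)
    return sum((fact[m] / N) / ((mean[m] / N) ** 2) for m in range(M)) / M

flat = [make_event(False) for _ in range(3000)]
clus = [make_event(True) for _ in range(3000)]
for M in (2, 8, 32):
    print(M, f2(flat, M), f2(clus, M))
```

For the uniform (Poisson) events F₂ ≈ 1 at every M, while for the clustered events F₂ rises steadily with the number of bins; a fit of ln F₂ versus ln M would give the intermittency index φ₂.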
A clear signal of intermittency in e⁺e⁻ data was observed for the first time in the HRS results [154,155], shortly followed by the TASSO collaboration [156] at a center-of-mass energy of about 35 GeV, with analyses in 1D (rapidity space) and 2D (rapidity-azimuthal space). At LEP energies, the DELPHI [157], ALEPH [158], and OPAL [159] collaborations performed intermittency analyses of 1D and 2D distributions. The LUND parton model (JETSET PS) predictions were also found to be consistent with the data. The CELLO collaboration analysed the 3D intermittency signal in e⁺e⁻ annihilation and found good agreement with the LUND model. Hadron-proton (π⁺p and K⁺p) collisions were analyzed by the NA22 collaboration. The data for p̄p collisions at 630 GeV were analyzed by the UA1 collaboration, and an indication of an increase of the intermittency signal in the low-multiplicity sample was found. A multichain version of the DPM including minijet production was compared with the NA22 and UA1 data, but the predicted slopes were found to be too small [160][161][162]. The NA22 group also performed the analysis for hadron-nucleus interactions and found a weaker intermittency signal for larger targets. Hadron-nucleus and nucleus-nucleus interactions were also studied by the KLM collaboration in 1D and 2D phase-space. A decrease of the intermittency signal was observed for larger projectile nuclei, and this decrease is smaller than expected from the increase of the mean multiplicity in the collision. The EMU-01 collaboration performed the intermittency analysis for different colliding systems and found a similar dependence. Further, the results for the SFMs in 3D spectra for e⁺e⁻, hadron-proton, hadron-nucleus, and nucleus-nucleus collisions have shown that the dependence of the SFMs on the resolution is stronger than a power law.
The failure of Monte Carlo calculations to reproduce the observed strength of the intermittency signal is quite evident in nucleus-nucleus interactions [163]. A deeper investigation of 1D, 2D, and 3D phase-space in different kinematic variables was performed by the OPAL collaboration in a high statistics study at LEP, up to fifth order and down to very small bin sizes [164,165]. Later on, it was shown that the negative binomial distribution (NBD) has difficulties describing the high statistics genuine multiparticle correlations, and that none of the conventional multiplicity distributions, including the NBD, can describe the high statistics OPAL intermittency measurements [166,167].
Here, we discuss some important systematics and regularities observed in recent years in these intermittency studies. The phase-space density of multiparticle production is anisotropic, and the upward bending seen in SFM plots is a direct consequence of this anisotropy [168,169]. It has therefore been suggested that in 2D SFM analyses the phase-space should be partitioned asymmetrically [170], taking the anisotropy into account through a "roughness" parameter called the Hurst exponent (H). The Hurst exponent is obtained by fitting the 1D SFMs with the Ochs formula [171,172]. For H < 1 the rapidity direction is partitioned into finer intervals than the azimuthal direction; for H > 1 the reverse holds; and H = 1 means that the phase-space is divided equally finely in both directions. Ochs and Wosiek [173] observed that the 1D factorial moments, even if they do not strictly obey the intermittency power law over the full rapidity range, still obey a generalized power law F_q(δy) ∝ [g(δy)]^{c_q}, where the function g depends on the energy and the bin width δy. Eliminating g yields the linear relation ln F_q = (c_q/c_2) ln F_2 + const. This relation is found to be well satisfied by the experimental data for various reactions. The dependence of φ_q on q can be examined by establishing a connection between intermittency and multifractality [34]. The generalized Renyi dimensions D_q are related to φ_q as D_q = D − d_q, where D is the topological dimension of the supporting space and the anomalous dimensions are defined as d_q = φ_q/(q − 1). If the ratio of anomalous dimensions d_q/d_2 is plotted as a function of the order q for various reactions, its q-dependence is claimed to be indicative of the mechanism causing the intermittent behaviour, and all points fall on a universal curve characterized by a single exponent γ, a parameterization derived by Brax and Peschanski [174] for a self-similar parton-branching process.
A value μ = 0 implies an order-independent anomalous dimension, which is expected if intermittency is due to a second-order phase transition [161]. Consequently, monofractal behaviour may be a signal of a quark-gluon plasma phase transition. The noticeable experimental fact that the factorial moments of different orders follow the simple relation (67) means that the correlation functions of different orders are not completely independent but are interconnected in some way. Intermittency can be further studied in the framework of the Ginzburg-Landau (GL) model, as used to describe the confinement of magnetic fields into fluxoids in a type II superconductor, according to which the ratio φ_q/φ_2 should respect the relation φ_q/φ_2 = (q − 1)^ν, with ν being a universal quantity independent of the underlying dimension. The value ν ≃ 1.304 was first derived analytically by Hwa and Nazirov [179] in the Ginzburg-Landau model and was confirmed later in a laser experiment [180]. If there is any phase transition, it may not necessarily be a thermal one, as the new phase formed is not essentially characterized by thermodynamical parameters. The possibility of the simultaneous existence of two nonthermal phases (in analogy to the different phases of a spin-glass) can be investigated through the intermittency parameter [175] λ_q = (φ_q + 1)/q.
If such different phases exist in a self-similar cascade mechanism, λ_q should have a minimum at some value q = q_c. The region q < q_c resembles a liquid phase with a large number of small fluctuations, and q > q_c resembles a dust phase with a small number of large fluctuations. Another way to measure the intermittency strength in terms of the intermittency index is the framework of a random cascading model like the α-model [181], according to which the strength parameter α is related to the Rényi dimensions as [182] α = √(6 ln 2 (D − D_q)).
A thermodynamic interpretation of multifractality can also be given in terms of a constant specific heat C, provided that the transition from monofractal to multifractal behaviour is governed by a Bernoulli type of fluctuation. Bershadskii proposed a phenomenological relation among the Rényi dimensions and C as [182] D_q = D_∞ + C ln q/(q − 1); C = 0 in the monofractal phase and becomes finite and nonzero in the multifractal phase.
The experimental data on intermittency do not satisfy the criterion for the phase transition, and thus one can say that the observed intermittency patterns are not a suitable probe of QGP formation [163].

Negative Binomial Multiplicity Distribution.
Another interesting and well-known feature observed in the multiplicity distribution of produced charged particles in high energy collisions is the occurrence of the negative binomial distribution (NBD), which holds good for a variety of collision processes. It has been observed experimentally for hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions over a wide range of energy, in finite rapidity intervals as well as in the full phase space. It has further been found that the NBD is valid not only for hadronic but also for semileptonic and leptonic processes, which emphasizes that it is a general property of the multiparticle production process regardless of the type of colliding particles. To be more specific, the NBD qualitatively describes the multiplicity distribution well in almost all inelastic high energy collision processes, except for the data at the highest available collider energies, where some deviations seem apparent. The negative binomial probability for obtaining n charged particles in the final state is given as follows:

P(n; ⟨n⟩, k) = [Γ(n + k)/(Γ(k) n!)] (⟨n⟩/k)^n / (1 + ⟨n⟩/k)^(n+k),

where ⟨n⟩ and k are two parameters varying with energy and determined from the experimental data. The parameter ⟨n⟩ has the interpretation of the average multiplicity, and k is the parameter influencing the width of the distribution. These two parameters are related to the dispersion D = √(⟨n²⟩ − ⟨n⟩²) as follows:

D/⟨n⟩ = √(1/⟨n⟩ + 1/k).

It is observed that the average multiplicity ⟨n⟩ increases with energy (√s) while k decreases with energy. Thus, the negative binomial distribution provides a convenient framework for analyzing the energy variation of the shape of the multiplicity distribution in terms of only two energy-dependent parameters (i.e., ⟨n⟩ and k). In spite of the wide range of applicability of the NBD in high energy physics, it is still not a very well understood phenomenon.
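The NBD just written down is easy to verify numerically. The sketch below (Python; the function name is ours) evaluates the probability in log-space to avoid overflow at large multiplicities, and can be used to check the normalization, the mean ⟨n⟩, and the dispersion relation D/⟨n⟩ = √(1/⟨n⟩ + 1/k):

```python
import math

def nbd_pmf(n, nbar, k):
    """Negative binomial probability P(n; <n>, k).

    nbar = <n> is the average multiplicity; k > 0 controls the width.
    Evaluated via log-gamma to stay finite for large n or k.
    """
    log_p = (math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
             + n * math.log(nbar / k)
             - (n + k) * math.log(1.0 + nbar / k))
    return math.exp(log_p)
```

With k = 1 the expression reduces to the geometric (Bose-Einstein) form (⟨n⟩/(1+⟨n⟩))^n /(1+⟨n⟩), and for k → ∞ it approaches a Poisson distribution, consistent with the limits discussed in the text.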
The parameter k used in the NBD is quite an interesting quantity: for increasing k (k → ∞), the probability distribution P_NBD gets narrower, tending to the Poisson distribution, while for decreasing values of k the probability distribution becomes broader and broader than the Poisson distribution. For k = 1, the NBD reduces to the Bose-Einstein (or geometric) distribution. In the limit of large multiplicity (n_ch → large), the NBD goes over to a gamma distribution. The negative binomial distribution with fixed parameter k exhibits scaling and asymptotic KNO scaling [3]. Interpretation of the empirical relation (76) in terms of an underlying production mechanism common to hadronic, leptonic, and semileptonic processes remains a challenging problem. Some attempts have been made to derive the NBD from general principles using a stochastic model [183], a string model [184], a cluster model [185], a stationary branching process [186], and a two-step model of binomial cluster production and decay [187]. However, it is still difficult to understand why the same distribution fits such diverse reactions. Moreover, we still lack a microscopic model explaining the behaviour of the parameter k of the NBD, which according to the fits decreases with energy. Recently, the NBD form has been derived theoretically in a simplified description of the QCD parton shower, with or without hadronization.
In particular, the NBD qualitatively describes the multiplicity distributions in almost all inelastic high energy collisions, except for the data recently obtained in collider experiments at much higher energies, which show deviations from the NBD. For the charged particle multiplicity distribution at √s = 900 GeV, a single NBD function could suitably describe the data only for small pseudorapidity intervals in the midrapidity region, whereas for large intervals a shoulder-like structure appeared in the multiplicity distribution. The appearance of substructures in multiplicity distributions at higher energies and in larger pseudorapidity intervals has been attributed to the weighted superposition or convolution of more than one function, due to the contribution of more than one source or process of particle production. The two-component model of Giovannini and Ugoccioni [145,188] is quite useful in explaining the multiplicity distribution data, at the cost of an increased number of adjustable parameters; they used a weighted superposition of two NBDs representing two classes of events, one arising from semihard events with minijets or jets and the other from soft events without minijets or jets:

P(n) = α_soft P_NBD(n; ⟨n⟩_soft, k_soft) + (1 − α_soft) P_NBD(n; ⟨n⟩_semihard, k_semihard),

where α_soft signifies the fraction of soft events and is a function of √s; the other parameters are functions of both √s and the pseudorapidity interval, having their usual meanings as in (76) for the respective components.
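The weighted superposition above can be sketched numerically as follows (Python; the soft/semihard parameter values used in the test are arbitrary illustrations, not fitted experimental values). By construction the mixture stays normalized and its mean is the weighted mean of the two components:

```python
import math

def nbd_pmf(n, nbar, k):
    """Single NBD evaluated in log-space to avoid overflow."""
    log_p = (math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
             + n * math.log(nbar / k)
             - (n + k) * math.log(1.0 + nbar / k))
    return math.exp(log_p)

def two_component_pmf(n, alpha_soft, soft, semihard):
    """Weighted superposition of a soft and a semihard NBD component.

    soft and semihard are (nbar, k) pairs for the two event classes;
    alpha_soft is the soft-event fraction.
    """
    return (alpha_soft * nbd_pmf(n, *soft)
            + (1.0 - alpha_soft) * nbd_pmf(n, *semihard))
```

A broad semihard component added to a narrower soft one reproduces the shoulder-like structure seen in the wide-interval data.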
For small pseudorapidity intervals, a single NBD function explains the distribution data at √s = 0.9 and 2.36 TeV reasonably well, but at √s = 7 TeV a single NBD function appears inadequate, while the weighted superposition of two NBDs explains the data satisfactorily, as shown in Figure 17. For large pseudorapidity intervals (|η| < 2.4), the weighted superposition of two NBDs agrees with the distribution data better than a single NBD function at all available LHC energies, clearly indicating the role of soft and semihard processes in the TeV range.

Limiting Fragmentation or Longitudinal Scaling Observed for Charged Hadrons.
In order to understand the collision dynamics completely, the study of particle production away from the midrapidity region provides further insight. The properties of charged hadron production in the fragmentation region can be studied well by viewing the data at different energies in the rest frame of one of the colliding nuclei, as the particles near beam and target rapidity are supposed to be governed by the hypothesis of limiting fragmentation [93]. The hypothesis of limiting fragmentation for inclusive particle distributions in high energy hadron-hadron collisions was first proposed by Benecke et al. [93] in 1969. According to this hypothesis, the produced particles in the rest frame of the projectile or target approach a limiting distribution; for example, the pseudorapidity density (dN/dη) as a function of (η − y_beam) approaches a limiting curve in the fragmentation region. This picture is based on the geometrical picture of scattering as considered by Yang and coworkers [190][191][192].
According to their assumption, in the laboratory frame the projectile nucleus undergoes Lorentz contraction into a thin disk during the collision with the target nucleus, which gets further and further compressed with increasing energy. However, the momentum and quantum number transfer process between the projectile and target does not change appreciably during this compression. This behaviour leads to a limiting distribution of the produced particles in the fragmentation region, independent of the center-of-mass energy. Central to the limiting fragmentation hypothesis was the assumption of the constancy of the total hadronic cross-sections at asymptotically large center-of-mass energies. In other words, the probability of interaction does not change rapidly with further increase of the energy of the incident colliding nuclei. It is expected that the excitation and breakup of a hadron would become independent of the center-of-mass energy and the produced particles in the fragmentation region would approach a limiting distribution. Later on, it was realized that the total hadronic cross-sections are not constant; they grow slowly with increasing center-of-mass energy (√s) [193,194]. This slow increase in the cross-sections continues up to the LHC energy. In spite of this fact, limiting fragmentation is found to remain valid over a wide range of energy. This asymptotic property has been observed experimentally in a variety of collision processes, such as hadron-hadron [29,195], hadron-nucleus [29,196], and nucleus-nucleus [123-125, 153, 197-205], for produced charged hadrons in the fragmentation region at different energies. This scaling feature covers a more extended range of η than expected from the original hypothesis of limiting fragmentation, due to which the term extended longitudinal scaling is often used to describe this phenomenon.
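Viewing the data in the rest frame of one of the colliding nuclei only requires the beam rapidity. A small sketch (Python; the constant and function names are ours) computes y_beam from √s_NN and applies the shift η′ = η − y_beam in which the limiting curve appears:

```python
import math

M_NUCLEON = 0.938272  # proton mass in GeV

def beam_rapidity(sqrt_s_nn):
    """Beam rapidity in the centre-of-mass frame for sqrt(s_NN) in GeV."""
    e = sqrt_s_nn / 2.0                         # energy per nucleon in c.m.
    p = math.sqrt(e * e - M_NUCLEON ** 2)       # longitudinal momentum
    return 0.5 * math.log((e + p) / (e - p))

def shifted_eta(eta, sqrt_s_nn):
    """eta' = eta - y_beam, the variable in which limiting fragmentation shows up."""
    return eta - beam_rapidity(sqrt_s_nn)
```

At RHIC top energy, √s_NN = 200 GeV, this gives y_beam ≈ 5.36, so the beam sits at η′ ≈ 0 and the fragmentation region corresponds to η′ near zero for every energy.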
For most central collisions, the distributions in the fragmentation region are indeed observed to be independent of the collision energy over a substantial range of shifted pseudorapidity (η′ = η − y_beam). In Figure 18, a compilation of experimental data used to study the limiting fragmentation behaviour in pp(p̄) collisions is shown along with the CGC model results [206]. The CGC calculations for pp(p̄) collisions, with the parameters λ₀ = 0.15 and λ = 0.69, are shown with dashed lines at 53, 200, 546, and 900 GeV. A slight discrepancy between the calculations and the experimental data in the midrapidity region is seen, which arises mainly from the violation of k⊥-factorization in that regime: the k⊥ formalism becomes less reliable the further one is from the dilute-dense kinematics of the fragmentation regions [207][208][209][210]. In Figure 19, the pseudorapidity distributions at different energies and with different colliding nuclei lie on a common curve in the fragmentation region over a broad range of η, indicating that limiting fragmentation holds good for nucleus-nucleus collisions over a wide range of energy and is independent of the size of the colliding nuclei. This gives a clear indication that this universal curve is an important feature of the overall interaction and not simply a breakup effect [200]. In the CGC approach, the physical picture behind limiting fragmentation is based on the black disk limit, according to which, in the rest frame of the target nucleus, the projectile nucleus is highly contracted and, due to its large number of gluons, looks black to the partons in the target nucleus, which interact with the projectile nucleus with unit probability [211]. The CGC model calculations [206], with parameters λ₀ = 0.0 and λ = 0.46, at √s_NN = 19.6, 130, and 200 GeV for central Au-Au collisions are shown with solid lines.
Again, the discrepancies between the CGC calculations and the experimental data are quite evident; they arise from the violation of k⊥-factorization, due to which a decrease in the multiplicity is expected in this regime [207][208][209][210]. SPM results [212] are shown by the dashed line in the figure. Similarly, the scaled pseudorapidity density in noncentral collisions at different energies is also found to exhibit limiting fragmentation over a broad range of η. Thus, we see that the hypothesis of limiting fragmentation is energy-independent for a fixed centrality. Similar to the pp(p̄) collisions, the fragmentation region for Au-Au colliding nuclei grows in pseudorapidity extent with the colliding beam energy. The longitudinal scaling observed in hadronic collisions is in strong disagreement with the boost-invariant scenario, which predicts a fixed fragmentation region and a broad central plateau growing with energy. Recently, Bialas and Jezabek [213] argued that some qualitative features of limiting fragmentation in hadronic collisions can be explained in a two-step model involving multiple gluon exchange between partons of the two colliding hadrons. This mechanism provides a natural explanation of the observed rapidity spectra, in particular their linear increase with increasing rapidity distance from the maximal rapidity and the short plateau in the central rapidity region. (Data in Figure 18 are taken from [118,125]. Figure 19: charged hadron pseudorapidity density per participant pair as a function of η − y_beam at RHIC and LHC energies for different colliding systems, Cu-Cu, Au-Au, and Pb-Pb; data are taken from [122,128].)
In order to understand the nature of the hadronic interactions that lead to limiting fragmentation, a new mechanism is required which can give a better understanding of the subject. Although the limiting fragmentation behavior in the small-x region is quite successfully described by the Color Glass Condensate (CGC) approach, the procedure employed in this approach needs further improvement. One needed improvement is the impact parameter dependence of the unintegrated gluon distribution functions; in the large-x region, a phenomenological extrapolation is used, which also needs to be better constrained and made consistent.

Summary and Conclusions
In this review, we have attempted to draw certain systematics and scaling laws for hadron-hadron, hadron-nucleus, and nucleus-nucleus interactions. Any distinct deviation from these relations observed in ultrarelativistic nuclear collisions would be an indicator of a new and exotic phenomenon occurring there. However, there are other effects which cause deviations in the scaling features, such as the mixing of contributions from soft and hard processes. Present experimental results show that the scaling features of charged hadrons previously observed at lower energies do not hold good at higher energies. Thus, we still lack a scaling law which is universal to all types of reactions and can give some basic understanding of the mechanism involved in particle production. The observed deviations from the well-established scaling laws are of great interest, as they clearly hint at the increasing role of hard processes contributing to the production of final-state particles at higher energies. These hard processes of the primordial stage of the collision contribute significantly to the hadronization process at the late stage of the evolution. The applicability of the two-component approach, in the nucleon-nucleon interaction picture and in the wounded quark picture, to evaluate the midrapidity density of charged hadrons is quite promising, as the weighted superposition of hard and soft processes comes into the picture and successfully describes the available experimental data over a large range of energy. Our study of charged hadron production using phenomenological models along with the experimental data essentially highlights some kinds of scaling relations for these high energy collision processes. The N_part scaling of the pseudorapidity density at midrapidity holds only for a limited range of energy, while over the entire range of energy (including the LHC) it does not seem to hold good, which clearly hints at the contribution of some other processes.
On the other hand, the wounded quark model provides a more realistic explanation of charged hadron production in terms of the basic parton structure involved in these processes. However, the charged hadron pseudorapidity density at midrapidity in the wounded quark model also does not show an exact scaling-type behaviour, and deviations are clearly observed, which arise due to the difference in the number of quark collisions for different colliding nuclei, as involved in the two-component model approach used for calculating the pseudorapidity density. Thus, in both the wounded nucleon picture and the wounded quark picture, participant scaling does not hold good over the entire energy range. However, the two-component picture, which involves the relative contribution of both the number of participants and the number of collisions, provides a suitable description of the available experimental data over the entire energy range. It clearly indicates the contributions of both the hard and soft processes involved in charged hadron production. Further, the deviations from the KNO scaling behaviour of the charged hadron multiplicity distribution give a clear indication of the role of multiple hard, semihard, and soft partonic subprocesses involved in hadronic collisions. Similarly, the failure of a single NBD function to describe the experimental data is clearly visible in the TeV energy range as well, while the success of the weighted superposition of two NBDs in explaining the multiplicity distribution clearly reflects the contribution of more than one source or process involved in these collisions. The well-known limiting fragmentation behaviour of charged hadrons in the fragmentation region seems to hold good, since the scaling is compatible with the experimental results at LHC energy (2.76 TeV). This universal limiting fragmentation behaviour appears to be a dominant scaling feature of the pseudorapidity distribution of charged hadrons in high energy collisions.
Another such phenomenon which is universal to all types of reactions is the scaling property of multiplicity fluctuations (intermittency). The picture in this sector is still not very clear or complete and requires further scrutiny.
In conclusion, multiparticle production in ultrarelativistic hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions is still a burning topic in high energy physics because it throws light on the production mechanism. Since we still lack an explanation of these processes within QCD, we hope that our investigations regarding universal systematics and scaling relations for charged hadron production will lead us to a correct phenomenological model of the production mechanism. In addition, recent studies of fluctuations and correlations are expected to provide deeper insight into the different stages of the evolution of the system produced in heavy-ion collisions. One of the key observables in fluctuation-correlation studies is elliptic flow (v₂). Elliptic flow measures the strength of interactions among the particles at the early stages of heavy-ion collisions. A consistent and unified description of the elliptic flow measured in Cu-Cu and Au-Au collisions at midrapidity can be obtained after scaling v₂ with the participant nucleon eccentricity (ε_part) [214,215]. Further, elliptic flow scaled with ε_part for the same number of participating nucleons in the Cu-Cu and Au-Au systems shows a scaling pattern up to 3.5 GeV/c in p_T and across ±5 units of pseudorapidity [214,215]. Thus, it will definitely be interesting to investigate these fluctuations and correlations in the future to draw a more consistent and unified picture of particle production and evolution in heavy-ion collisions.

[128] K. Gulbrandsen, "Charged-particle pseudorapidity density and anisotropic flow over a wide pseudorapidity range using ALICE at the LHC," Journal of Physics: Conference Series, vol. 446, Article ID 012027, 2013.

Introduction
The primary objective of studying high-energy heavy-ion interactions is to compress and heat up the nuclear matter beyond the critical values of certain thermodynamic parameters in such a way that the boundaries of individual nucleons melt down to form a thermally and chemically equilibrated color deconfined state of quark-gluon plasma (QGP) [1][2][3].
As the collision process evolves in space-time, such an exotic state, if formed, subsequently expands and cools down to undergo a reverse transition to the usual hadronic state. In high-energy physics, the process is known as multiparticle production. Each nucleus-nucleus (AB) event has its own collision history that ultimately leads to large local fluctuations in the final-state particle densities, apparently lacking any definite pattern. In different events, dense clusters of particles are formed at different locations and at different scales of the phase-space variables. It is, therefore, necessary to formulate a technique that can examine these clusters on an event-by-event basis. Often, the fluctuations are so large that they cannot be explained in terms of statistical reasons alone. It is very likely that nonstatistical (dynamical) components are present as well, but they are contaminated with trivial noise. With the help of suitable data analysis techniques, it is possible to filter out the genuine clusters of produced particles, which in many high-energy interactions are found to scale self-similarly with the phase-space resolution size, approximately following a power law [4]. Global analysis techniques such as the scaled factorial moment method [5,6], the frequency moment method [7,8], and the "-parameter" method [9] have been used extensively to characterize particle correlations, and efforts have been made to interpret the results in the framework of various mechanisms that are mostly speculative in nature. Formation of the QCD parton shower cascade [10], formation of the disoriented chiral condensate [11,12], and collective phenomena like the emission of Cherenkov gluons and/or Mach shock waves in the nuclear/partonic medium [13,14] are examples of such speculative measures. The wavelet analysis technique has found application in many branches of physics [15,16].
It is capable of revealing the local properties of particle distributions in individual events and at different scales. The technique is, therefore, very appropriate for pattern recognition in multiparticle distributions. In the present paper, we report some results on the wavelet analysis of the angular distribution of shower tracks, which are caused by the singly charged produced particles moving with relativistic speed. Our data samples comprise 28Si-Ag/Br events at an incident energy of 14.5A GeV and 32S-Ag/Br events at an incident energy of 200A GeV. The nuclear emulsion technique has been used to collect the experimental data. Several works on the wavelet analysis of multiparticle production at E_lab = 10-10^3 GeV/nucleon have been reported so far [17][18][19][20][21]. These works suffer from a common drawback, in the sense that there has hardly been any comparison between the experiment and a proper simulation of the interaction. It is, therefore, difficult to conclude whether the experimental observations are significant or merely consequences of statistical artifacts. We compare our results with the predictions of a microscopic transport model based on ultrarelativistic quantum molecular dynamics (UrQMD) [22,23]. It may also be noted that the UrQMD code does not incorporate the Bose-Einstein correlation (BEC) among identical mesons, a phenomenon considered to be the most dominant factor behind particle cluster formation. Therefore, keeping the phase-space distribution of the produced particles (mostly π-mesons) unaltered, we implement a charge reassignment algorithm [24][25][26] on each UrQMD-generated event and thereby try to mimic the BEC in the simulation. Any discrepancy between the experiment and the simulation should then be recognized as a genuine collective behavior of the final-state particle emission, which has to be interpreted in terms of nontrivial dynamics.
Thus, the present analysis on the one hand allows us to compare experiments induced by very close projectile masses whose E_lab values differ by an order of magnitude; on the other hand, it provides an opportunity to compare the experiment with simulated data in which the known dominant source(s) of cluster formation is taken into account. Our paper is organized in the following sequence: in Section 2, we briefly describe the experiment and the gross characteristics of the data samples used in the paper; in Section 3, we summarily discuss the basic aspects of the UrQMD model and explain the charge reassignment algorithm; in Section 4, without claiming any originality, we outline the method of wavelet analysis; in Section 5, we discuss our results, experimental as well as simulated; and in Section 6, we conclude with a critical assessment of these results.

Experiment
Ilford G5 nuclear photoemulsion pellicles of size 16 cm × 10 cm × 600 μm were horizontally irradiated with a 28Si beam at an incident energy of 14.5A GeV from the Alternating Gradient Synchrotron (AGS) of the Brookhaven National Laboratory (BNL). Similarly, pellicles of size 18 cm × 7 cm × 600 μm were irradiated with a 32S beam at an incident energy of 200A GeV from the Super Proton Synchrotron (SPS) at CERN. The primary interactions (also called events/stars) within the emulsion plates are found by following individual projectile tracks, that is, tracks caused by the 28Si and 32S nuclei, along the forward as well as the backward direction. The process, known as line scanning, was performed with Leitz microscopes under a total magnification of 300x. On the other hand, Koristka microscopes were utilized for track counting and angle measurement, for which a total magnification of 1500x was used. The secondary charged particles coming out of an event are categorized in the following way.
(i) The shower tracks, caused by the singly charged produced particles, most of which are mesons. In an event, their number is denoted by n_s.
(ii) The grey and black tracks, resulting from the fragments of the target (Ag/Br) nuclei. Their numbers are denoted, respectively, by n_g and n_b, and the total n_h (= n_g + n_b) denotes the number of target fragments in an event.
(iii) The projectile fragments, caused by the spectator parts of the incident projectile (Si/S) nuclei. In an event, their number is denoted by n_pf.
The details of emulsion experiments, track selection criteria, and data acquisition techniques are nicely elaborated in [27,28]. To ensure that an interaction involves either an Ag or a Br nucleus as the target, in each event we impose a cut n_h > 8. Thus, altogether 331 28Si-Ag/Br events and 200 32S-Ag/Br events are selected for further analysis, which is confined only to the angular distribution of the shower tracks. The average shower track multiplicity is ⟨n_s⟩ = 52.67 ± 1.33 for the 28Si sample and ⟨n_s⟩ = 217.19 ± 6.16 for the 32S sample. The pseudorapidity (η) variable is an approximation of the dimensionless boost parameter rapidity, and it is related to the emission angle (θ) of a track as η = −ln tan(θ/2). An accuracy of δη = 0.1 unit is achieved through the reference primary method of angle measurement. For each data set, the η distribution can be crudely approximated by a Gaussian function, whereas the azimuthal angle (φ) distributions are in both cases more or less uniform between 0 and 2π. The Gaussian fit parameters for the η-distribution in the 28Si sample are the peak density ρ₀ = 17.88, the centroid η₀ = 1.90, and the width σ = 2.17. For the 32S sample they are ρ₀ = 56.34, η₀ = 3.37, and σ = 1.55. Due to event averaging, the statistical noise as well as the nonstatistical components of the fluctuations present in individual events are simultaneously smoothed out in the overall distributions. Our basic task is, therefore, (i) to look for statistically significant unusual local structures in the particle distributions of individual events and (ii) to study systematic collective behaviour in large samples of events, if there is any.
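The angle-to-pseudorapidity map used above is straightforward to implement; the sketch below (Python; the function names are ours) converts a measured emission angle to η and inverts the relation:

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta/2), for an emission angle theta in radians (0 < theta < pi)."""
    return -math.log(math.tan(theta / 2.0))

def emission_angle(eta):
    """Inverse map: theta = 2 arctan(exp(-eta))."""
    return 2.0 * math.atan(math.exp(-eta))
```

A track emitted at 90 degrees has η = 0, and smaller (more forward) emission angles give larger η, which is why the 200A GeV sample is centred at a larger η₀ than the 14.5A GeV one.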

Simulation
To eliminate the background noise, we compare the experiment with the UrQMD (version 3.3p1) model [22,23]. UrQMD itself does not incorporate any kind of particle correlation, and therefore, in this regard, it can be utilized only to generate the statistical background. The rationale behind using a transport model like UrQMD is that it treats the final freeze-out stage dynamically. It does not make any equilibrium assumption and describes the dynamics of a hadron gas system very well, in and out of chemical and/or thermal equilibrium. In the present case, neither are the incident nuclei too large nor are the collision energies extremely high. Hence, one cannot be sure whether local thermal and/or chemical equilibrium is achieved.
To describe such nonequilibrium many-body dynamics, a transport model is a natural choice. The UrQMD model is applicable over a wide range of energies, starting from √s_NN ∼ 5 GeV and extending to √s_NN > 200 GeV. In this scheme, particle production in high-energy interactions is implemented by the color string fragmentation mechanism, similar to that of the Lund model. The UrQMD code has been successfully used to reproduce the particle density distributions and the transverse momentum (p_T) spectra of various particle species in proton-proton, proton-nucleus, and nucleus-nucleus collisions. However, as mentioned above, the model does not incorporate the symmetry aspects of the fields associated with the produced particles.
It is well known that the Bose-Einstein correlation (BEC), an identical-particle effect, dominates the origin of cluster formation. Due to the correlated emission of like-sign and/or opposite-sign mesons, the particle yield at small relative momenta may be enhanced, which is one of the reasons for large local densities in the final-state particles in any high-energy interaction. The effect is quantum statistical in nature, and it is not incorporated in the framework of a transport model like UrQMD. Recently, a new algorithm has been developed [24,25] in which the BEC is introduced by reassigning the charges of the produced mesons in such a way that the overall phase-space distribution in each simulated event remains unaltered. The event-wise particle multiplicities are not changed, and it looks as if the particles (mesons) satisfy BE statistics. The method of numerically modeling the BEC at the level of a so-called "afterburner," where the output of the UrQMD code is used, is briefly described below. The UrQMD code provides the four-coordinates and four-momenta of all particles. The particle information is contained in an ASCII file written in the OSCAR format. Each particle entry in an event contains a serial number, a particle ID, the particle four-momentum, the particle mass, and the final freeze-out four-coordinates.
(i) In the first step, we arbitrarily choose a meson from an event, and irrespective of its original charge, assign a charge sign s = +, − or 0 to it with weight factor w_s = N_s/N. Here, N_+, N_−, N_0 are the numbers, respectively, of the positive, negative, and neutral mesons, and N (= N_+ + N_− + N_0) is the total number of mesons in the event. The chosen meson, say the ith one, defines a phase-space cell.
(ii) In the next step, we calculate the distances in the four momenta, (Δp)_ij = |p_i − p_j|, and in the four coordinates, (Δx)_ij = |x_i − x_j|, between the already chosen meson (i.e., the ith one) and all other mesons (indexed by j) that are not yet assigned any charge sign. Each jth meson is associated with a weight factor w_ij [24] which characterizes the bunching probability of the particles in a given cell.
(iii) Then, we start to generate uniformly distributed random numbers r ∈ (0, 1). If r < w_ij, we reassign the same charge sign to the jth meson and put it in the same phase-space cell as the ith one. We continue the process until either (a) r exceeds w_ij or (b) all mesons in the event having the same charge sign as the ith one are exhausted.
(iv) Now, we go back to our first step and again randomly choose a meson from the pool of leftover mesons for which the charge reassignment has not yet been done. Obviously, the weight factors w_{±,0} will now be updated, as some of the particles present in the event are already used up.
(v) The algorithm is then repeated until all mesons belonging to each charge variety in the event are used up, and then we move to the next event.
Only the meson pairs with space-like separation are accepted, and appropriate checks are imposed so that w_ij does not exceed unity [26]. Without changing the overall set of four momenta, four coordinates, or the total mesonic charge of the system, we can in this way generate clusters of closely spaced identical charge states of mesons.
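The reassignment steps (i)-(v) can be sketched as follows. This is only a toy implementation: the weight w = exp(−|Δp||Δx|) is a placeholder for the bunching probability of [24, 25], and scalar momenta/coordinates stand in for the four-vectors; only the bookkeeping (multiplicities of each charge state preserved, phase-space entries untouched) follows the text.

```python
import math
import random

def separation(mesons, i, j):
    """Combined momentum-coordinate distance of mesons i and j
    (1-d stand-ins for |p_i - p_j| * |x_i - x_j|)."""
    _, pi, xi = mesons[i]
    _, pj, xj = mesons[j]
    return abs(pi - pj) * abs(xi - xj)

def reassign_charges(mesons, seed=0):
    """Toy BEC afterburner: reassign the charge signs of the mesons so that
    like-sign mesons bunch in phase space, while momenta, coordinates, and
    the multiplicity of every charge state stay unchanged.
    mesons: list of (charge, momentum, coordinate) tuples."""
    rng = random.Random(seed)
    pool = list(range(len(mesons)))          # indices awaiting reassignment
    counts = {}                              # remaining quota per charge sign
    for q, _, _ in mesons:
        counts[q] = counts.get(q, 0) + 1
    new_charge = [None] * len(mesons)
    while pool:
        # step (i): pick a cell-defining meson, draw its sign with weight N_s/N
        i = pool.pop(rng.randrange(len(pool)))
        signs, weights = zip(*counts.items())
        s = rng.choices(signs, weights=weights)[0]
        new_charge[i] = s
        counts[s] -= 1
        # steps (ii)-(iii): attach nearby mesons to the cell with probability w
        for j in sorted(pool, key=lambda j: separation(mesons, i, j)):
            if counts[s] == 0:               # this charge variety is exhausted
                break
            w = math.exp(-separation(mesons, i, j))   # placeholder weight
            if rng.random() < w:
                new_charge[j] = s
                counts[s] -= 1
                pool.remove(j)
        # steps (iv)-(v): update the quotas and repeat with the leftover pool
        counts = {q: n for q, n in counts.items() if n > 0}
    return new_charge
```

By construction, the returned charge list is a permutation-preserving reshuffle: the number of mesons of each charge state is exactly that of the input event.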
We use the UrQMD code in its default settings and generate minimum bias event samples in the laboratory frame, separately for the Ag and Br targets and, respectively, for the 28 Si and 32 S projectiles. For each projectile, the Ag and Br event samples are then mixed. While doing so, the proportional abundances of these nuclei in the G5 emulsion [28] are maintained. Only the produced charged mesons are retained for the subsequent analysis. From the minimum bias samples, we select subsamples in such a way as to match the respective experimental multiplicity distributions. For each projectile, the final sample of simulated events is five times as large as the corresponding experimental one, and the corresponding normalized multiplicity and/or pseudorapidity distributions can be approximately described by more or less the same set of parameters as quoted above for the respective experiment. The UrQMD events are then passed through the charge reassignment algorithm described above, and from now on these modified samples will be referred to as the UrQMD + BEC samples.

Wavelet Analysis
The wavelet method is used to analyze nonstationary as well as inhomogeneous signals, which can be any ordered set of numerically recorded information on some process, object, function, and so forth. A wavelet construction is based on a dilation (a) and a translation (b) parameter. By changing a, the local characteristics of a signal are distinguished, while by varying b the whole range of a spectrum is analyzed. Unlike the Fourier transformation method, which uses only two basis functions, the wavelet transformation method can in principle use an infinite set of discrete or continuous functions as the basis. However, a suitable choice of the basic wavelet is made only after looking at the basic features of the signal to be processed.
In the present case, we use a continuous wavelet to find out the strongest fluctuations on an event-by-event basis that may exceed the expected statistical noise at a particular scale and at a particular point of the underlying phase-space variable, say x. The wavelet transform of a function f(x) is its decomposition into an orthogonal functional family (Ψ),

W_Ψ(a, b) = ∫ f(x) Ψ_{a,b}(x) dx,

where Ψ_{a,b}(x) ≡ (1/√a) Ψ((x − b)/a) is the wavelet characterized by a and b as mentioned above. Derivatives of the Gaussian are often used as mother wavelets. In particular, the second derivative,

g_2(x) = (1 − x²) exp(−x²/2),

popularly known as the Mexican hat (MHAT) wavelet, is customarily used to analyze multiparticle emission data.
In Figure 1, we show the plots of g_1(x) and g_2(x). In the present case, the phase-space variable is the pseudorapidity η, and the signal to be analyzed is the density function

f(η) = (1/N) Σ_{i=1}^{N} δ(η − η_i),

where N is the number of shower tracks in the event sample considered and η_i is the pseudorapidity of the ith particle. N may either be the value for a single event, or it may be the total number of shower tracks present in the entire event sample/subsample within the η interval considered. The wavelet transform of f(η) therefore becomes

W_Ψ(a, b) = (1/(N√a)) Σ_{i=1}^{N} Ψ((η_i − b)/a).

W_Ψ(a, b) is the contribution of Ψ_{a,b} to the spectrum f(η), in the sense that it represents the probability to find a particle at some η = b at the scale a. A wavelet image at a large scale a shows only the coarse features, while the same at small a reveals the more detailed and finer structures of the underlying distribution.
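The transform of the discrete η density described above takes only a few lines; a toy 1-d sketch (the normalization 1/(N√a) follows the text; the sample of pseudorapidities used below is hypothetical):

```python
import math

def mhat(x):
    """Mexican hat (MHAT) wavelet: second Gaussian derivative,
    g2(x) = (1 - x^2) * exp(-x^2/2)."""
    return (1.0 - x * x) * math.exp(-x * x / 2.0)

def wavelet_transform(etas, a, b):
    """W(a, b) = (1/(N*sqrt(a))) * sum_i mhat((eta_i - b)/a), the MHAT
    transform of the density f(eta) = (1/N) * sum_i delta(eta - eta_i)."""
    n = len(etas)
    return sum(mhat((eta - b) / a) for eta in etas) / (n * math.sqrt(a))
```

A cluster of tracks near η ≈ 2 produces a positive W at b ≈ 2 for scales comparable to the cluster width, and essentially zero far away from the cluster.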

Results
In Figure 2, we present the g_2 pseudorapidity spectra of the shower tracks coming out of all 331 28 Si-Ag/Br events at an incident energy of 14.5A GeV, at four different scales (a values). For comparison, the UrQMD and the UrQMD + BEC graphs are plotted along with those of the experiment. Though the overall multiplicity and the distributions of the simulated and experimental event samples are identical, we observe that the g_2 spectra of the experiment are quite different from those of the simulations. The fluctuations are more rapid in the experiment. In Figure 2(a), we can see peaks at η ≈ 1.0, 2.0, and 3.0 in the experimental distribution. These are the preferred η values where particle clusters are formed, and one can relate them, respectively, to the target fragmentation, the central particle-producing, and the projectile fragmentation regions. However, we also notice that the central particle-producing peak around η ≈ 2.0 is well reproduced by the UrQMD + BEC plot. As expected, with increasing a the fluctuations are smoothed out, and the distributions gradually converge to the mother wavelet g_2. Needless to say, such plots do not reflect any unique structure of particle distribution in individual events. They would rather correspond to a systematic collective behaviour of the particle emission of the entire sample, if there is any. Similar plots for the entire 32 S-Ag/Br event sample at 200A GeV/c are presented in Figure 3. While the general features of Figures 2 and 3 are more or less similar, we notice that more peaks are present in the 32 S sample than in the 28 Si sample. There are at least six prominent peaks within η ≈ 1.0-5.0 in the experiment, out of which two very prominent peaks lie within the central particle-producing region (η ≈ 3.0-4.0), and the simulations cannot replicate them.
Even at a large scale (a = 0.5), we find a hump to the left of the peak of the experimental distribution that refuses to be smoothed out, a feature that is absent in the 28 Si case. The other peaks, one to the right and three to the left of the central region, can be related, respectively, to the projectile and the target fragmentations. The wavelet spectra can be generated for individual events at many different scales, which can be used to simultaneously study the location and scale dependence of W_Ψ(a, b). We have obtained such distributions for two high-multiplicity events, one a 28 Si-Ag/Br event (n_s = 146) and the other a 32 S-Ag/Br one (n_s = 379). We have schematically presented the W_Ψ(a, b) distributions, respectively, in Figures 4(a) and 4(b). The dark (white) regions in the graphs correspond to the low (high) W_Ψ(a, b) values. As mentioned before, at the finest scales (a < 0.05) we only get information about individual particles, while at large a particles lose their individual identities and coalesce into one big cluster. It is, therefore, pointless to study any event at these two extreme limits of scale. Identification of the peculiarities in the particle distribution of individual events from the 2-d energy spectrum {W_Ψ(a, b)}² is a difficult proposition. Instead, we may concentrate on the scalogram E_W(a), defined as

E_W(a) = ∫ {W_Ψ(a, b)}² db,

which represents the 1-d energy distribution with respect to the scale a. A scalogram reflects some of the characteristic features of an event. For example, a minimum on it represents the average distance between the particle clusters, while a maximum represents the most compact groups of particles present. Two such scalograms, one for the 28 Si-Ag/Br event and the other for the 32 S-Ag/Br event considered above, are plotted in Figure 5. In each diagram, a peak or a small rise seen at the lowest scale represents individual particles, and these are of little significance. At larger scales, several such maxima (minima) are found in the scalogram of each event.
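The scalogram can be approximated by a simple rectangle sum over b; a self-contained sketch (the b range and grid are illustrative choices, and small scales need a fine grid):

```python
import math

def mhat(x):
    """Mexican hat wavelet g2(x) = (1 - x^2) * exp(-x^2/2)."""
    return (1.0 - x * x) * math.exp(-x * x / 2.0)

def wavelet_transform(etas, a, b):
    """MHAT transform of the discrete eta density, W(a, b)."""
    return sum(mhat((eta - b) / a) for eta in etas) / (len(etas) * math.sqrt(a))

def scalogram(etas, a, b_lo=-1.0, b_hi=9.0, nb=400):
    """E_W(a): 1-d energy of the wavelet spectrum at scale a, i.e. the
    integral of W(a, b)^2 over b (rectangle rule with nb steps)."""
    db = (b_hi - b_lo) / nb
    return sum(wavelet_transform(etas, a, b_lo + k * db) ** 2
               for k in range(nb + 1)) * db
```

For a single tight cluster, E_W(a) grows as a approaches the cluster width, since the particles then act coherently rather than as independent spikes.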
To check whether there exists any systematic behaviour of particle emission, or whether the clusters occur at random, we investigate the distributions of the extremum points over our entire event sample(s). In Figure 6, we plot the frequency distributions of the scales a_max and a_min at which, respectively, the maxima and the minima of the scalograms belonging to individual 28 Si-Ag/Br events are found. As usual, the experiments are plotted together with the simulations. Except in Figure 6(b), where the experiment slightly exceeds the simulation at the characteristic scale a_max ≈ 0.2, no significant difference between experiment and simulation is observed. In Figure 7, similar histograms for the 32 S-Ag/Br events are plotted. In this case also, no significant difference between the experiment and the corresponding simulation is seen. The wavelet analysis is not complete unless we study the distributions of the locations b_max at which the local maxima of W_Ψ(a, b) are observed. We do this with different choices of scale intervals, cumulative as well as differential. In Figure 8, such distributions for the 28 Si-Ag/Br sample (both experiment and simulation) are presented for different cumulative scale windows. The common feature of these diagrams is that in the lowest scale window the distributions fluctuate rapidly and, as expected, with increasing scale-window size the fluctuations get reduced. In comparison with the experiment, the UrQMD distributions vary more smoothly. However, when the BEC is incorporated into the UrQMD, the fluctuating patterns are to some extent retrieved. The 32 S-Ag/Br sample, on the other hand, behaves a little differently. The distributions are shown in Figure 9. In this case, the experiment is still more fluctuating than both the UrQMD and the UrQMD + BEC plots. However, incorporating BEC into UrQMD apparently has little effect on the respective distributions.
In Figures 10 and 11, the b_max distributions, respectively, for the 28 Si-Ag/Br and 32 S-Ag/Br samples are once again shown, where we choose differential scale intervals to draw the histograms. For both sets of data, the basic features are more or less the same. As expected, the most rapid fluctuations are seen in the smallest scale interval, 0.05 ≤ a_max ≤ 0.1, and they are systematically smoothed out with increasing a_max. The distributions for the 32 S-Ag/Br interaction are slightly wider than those for the 28 Si-Ag/Br interaction. It seems that the inclusion of BEC into the UrQMD in both interactions increases the heights of the local peaks to a small extent.

Conclusion
Pseudorapidity distributions of singly charged particles emitted with relativistic speeds from the 28 Si-Ag/Br and 32 S-Ag/Br interactions, respectively, at 14.5A GeV and 200A GeV/c, are analyzed by using the continuous wavelet transform technique. Compared with other similar emulsion investigations [17-21], the target nuclei in the present case carry fewer uncertainties.
For background noise elimination, the experiments are compared with a set of ordinary UrQMD simulated data, and also with the same UrQMD output modified by a mimicry of Bose-Einstein type correlations. The observed discrepancies between the experiment and the corresponding simulation should, therefore, result from nontrivial dynamics such as a collective flow of hadronic matter.
Irregularities in the wavelet pseudorapidity spectra of individual events are observed that the simulations cannot reproduce, and the cluster characteristics are likewise not reproduced. As far as systematic behaviour over many events is concerned, we observed certain differences between experiment and simulation in the 28 Si event sample under consideration. In all probability, the differences are not a result of ordinary correlations among identical bosons. They should be interpreted in terms of certain nontrivial dynamical reason(s), which are not clear from the present analysis.
The present study can be extended to the azimuthal angle distribution of the irregularities and to the 2-d wavelet analysis with larger statistics, so that the impact parameter dependence of the observed irregularities can also be investigated.

Introduction
One of the main purposes of the various heavy-ion collision programmes running at facilities such as the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) and the Large Hadron Collider (LHC) at CERN is to understand the properties of strongly interacting matter and to study the possibility of a phase transition from a confined hot, dense hadron gas (HG) phase to a deconfined and/or chirally symmetric phase of quark matter called quark-gluon plasma (QGP) [1-9]. By colliding heavy ions at ultrarelativistic energies, such a phase transition is expected to materialize, and QGP can be formed in the laboratory. Unfortunately, the detection of the QGP phase is still regarded as an uphill task. However, the existence of a new form of matter called strongly interacting QGP (sQGP) has been demonstrated experimentally [10]. There is much compelling evidence, for example, elliptic flow, high energy densities, and very low viscosity [11]. However, we do not have supportive evidence that this fluid is associated with the properties of quark deconfinement and/or chiral symmetry restoration, which are considered direct indications of QGP formation [11]. Although various experimental probes have been devised, a clean, unambiguous signal has not yet been outlined in the laboratory. So our prime need at present is to propose some signals to be used for the detection of QGP. For this purpose, however, understanding the behaviour and the properties of the background HG is quite essential, because if QGP is not formed, matter will continue to exist in the hot and dense HG phase. In ultrarelativistic nucleus-nucleus collisions, a hot and dense matter is formed over an extended region for a very brief time, and it is often called a "fireball". The quark matter in the fireball, after subsequent expansion and cooling, will finally be converted into the HG phase.
Recently, studies of the transverse momentum spectra of dileptons [12-21] and hadrons [22-27] have been used to deduce valuable information regarding the temperature and energy density of the fireball. The schematic diagram for the conjectured space-time evolution of the fireball formed in heavy-ion collisions is shown in Figure 1. (i) In the initial stage, labeled "Preequilibrium" in Figure 1, processes of parton-parton hard scatterings may predominantly occur in the overlap region of the two colliding nuclei, thus depositing a large amount of energy in the medium. The matter is still not in thermal equilibrium, and perturbative QCD models can describe the underlying dynamics as a cascade of freely colliding partons. The duration of the preequilibrium state is predicted to be about 1 fm/c or less. (ii) After the short preequilibrium stage, the QGP phase would be formed, in which parton-parton and/or string-string interactions quickly contribute to attaining thermal equilibrium in the medium. The energy density of this state is expected to reach above 3-5 GeV/fm³, equivalent to a temperature of 200-300 MeV. The volume then rapidly expands, and the matter cools down. (iii) If a first-order phase transition is assumed, a "mixed phase" is expected to exist between the QGP and hadron phases, in which quarks and gluons are again confined into hadrons at the critical temperature T_c. In the mixed phase, the entropy density is transferred into lower degrees of freedom, and therefore the system is prevented from expanding quickly. This leads to a maximum in the lifetime of the mixed phase, which is expected to last for a relatively long time (> 10 fm/c). (iv) In the hadronic phase, the system keeps expanding collectively via hadron-hadron interactions, decreasing its temperature. The hadronic interactions then freeze after the system reaches a certain size and temperature, and the hadrons that freely stream out from the medium are detected.
There are two types of freeze-out stages. When inelastic collisions between constituents of the fireball do not occur any longer, we call this a chemical freeze-out stage. Later when the elastic collisions also cease to happen in the fireball, this stage specifies the thermal freeze-out.
Since many experiments running at various places measure the multiplicities, ratios, and so forth of various hadrons, it is necessary to know to what extent the measured hadron yields indicate equilibration. The level of equilibration of particles produced in these experiments is tested by analyzing the particle abundances [22, 23] or their momentum spectra [22-27, 37, 38, 46, 47, 58] using thermal models. In the first case, one establishes the chemical composition of the system, while in the second case additional information on the dynamical evolution and collective flow can be extracted. Furthermore, studies of the variation of the multiplicity of produced particles with collision energy, of the momentum spectra of particles, and of the ratios of various particles have led to perhaps some of the most remarkable results in high energy strong interaction physics [6].
Recently, various approaches have been proposed for the precise formulation of a proper equation of state (EOS) for the hot and dense HG. Lacking lattice QCD results for the EOS at finite baryon density, a common approach is to construct a phenomenological EOS for both phases of strongly interacting matter. Among these approaches, thermal models are widely used and are indeed very successful in describing various features of the HG. These models are based on the assumption of thermal equilibrium reached in the HG. A simple form of the thermal model of hadrons is the ideal hadron gas (IHG) model, in which hadrons and resonances are treated as pointlike and noninteracting particles. The introduction of resonances in the system is expected to account for the existence of attractive interactions among hadrons [59]. But in order to account for the realistic behaviour of the HG, a short-range repulsion must also be introduced. The importance of such a correction becomes obvious when we calculate the phase transition in the IHG picture, which shows the reappearance of the hadronic phase as a stable configuration in the simple Gibbs construction of the phase equilibrium between the HG and QGP phases at very high baryon densities or chemical potentials. This anomalous situation [60-62] cannot be explained, because we know that the stable phase at any given (T, μ) is the phase with the larger pressure. Once the system makes a transition to the QGP phase, it is expected to remain in that phase even at extremely large T and μ due to the property of asymptotic freedom of QCD. Moreover, the hadronic interactions are expected to become significant when hadrons are densely packed in a hot and dense hadron gas.
One significant way to handle the repulsive interaction between hadrons is based on a macroscopic description in which the hadrons are given a geometrical size; hence they experience a hardcore repulsive interaction when they touch each other, and consequently a van-der-Waals excluded-volume effect becomes visible. As a result, the particle yields are substantially reduced in comparison with those of the IHG model, and the anomalous situation in the phase transition mentioned above also disappears. Recently, many phenomenological models incorporating the excluded-volume effect have been widely used to account for the properties of the hot and dense HG [63-76]. However, these descriptions usually suffer from some serious shortcomings. First, the descriptions are mostly thermodynamically inconsistent, because one does not have a well-defined partition function or thermodynamical potential (Ω) from which the other thermodynamical quantities can be derived; for example, the baryon density does not satisfy n_B = −(1/V) ∂Ω/∂μ_B. Secondly, for a dense hadron gas, the excluded-volume model violates causality (i.e., the velocity of sound in the medium becomes greater than the velocity of light). So, although some of the models explain the data very well, such shortcomings make them mostly unattractive. Sun et al. [76] have incorporated the effect of relativistic corrections in the formulation of an EOS for the HG. However, such an effect is expected to be very small, because the masses of almost all the hadrons present in the hot and dense HG are larger than the temperature of the system, so they are usually treated as nonrelativistic particles; the exception is the pion, whose mass is comparable to the temperature, but most of the pions come from resonances whose masses are again larger than the temperature of the HG [77]. In [78], a two-source thermal model of an ideal hadron gas is used to analyze the experimental data on hadron yields and ratios.
In this model, the two sources, a central core and a surrounding halo, are in local equilibrium at chemical freeze-out. It has been claimed that the excluded-volume effect becomes less important in the formulation of the EOS for a hadron gas in the two-source model.
Another important approach used in the formulation of an EOS for the HG phase is that of mean-field theoretical models [79-82] and their phenomenological generalizations [83-85]. These models use local renormalizable Lagrangian densities with baryonic and mesonic degrees of freedom for the description of the HG phase. These models rigorously respect causality. Most importantly, they also reproduce the ground-state properties of nuclear matter in the low-density limit. The short-range repulsive interaction in these models arises mainly from ω-exchange between a pair of baryons. It leads to a Yukawa potential U(r) = (g²/4πr) exp(−m_ω r), which further gives the mean potential energy as U_B = g² n_B/m_ω². This means that U_B is proportional to the net baryon density n_B, so U_B vanishes in the n_B → 0 limit. In the baryonless limit, hadrons (mesons) can still approach pointlike behaviour due to the vanishing of the repulsive interactions between them. This means that, in principle, one can excite a large number of hadronic resonances at large T. This will again make the pressure in the HG phase larger than the pressure in the QGP phase, the hadronic phase would again become stable at sufficiently high temperature, and the Gibbs construction could again yield an HG phase at large T. In some recent approaches this problem has been cured by considering another temperature-dependent mean field V_VDW(n, T), where n is the sum of the particle and antiparticle number densities. Here V_VDW(n, T) represents a van-der-Waals hardcore repulsive interaction between two particles and depends on the total number density n, which is nonzero even when the net baryon density is zero in the large-temperature limit [72, 73, 75]. However, in the high-density limit, the presence of a large number of hyperons and their unknown couplings to the mesonic fields generate a large amount of uncertainty in the mean-field EOS of the HG.
Moreover, the assumption about how many particles and resonances should be incorporated in the system is a crucial one in the formulation of an EOS in this approach. The mean-field models can usually handle only very few resonances in the description of the HG and hence are not, as such, reliable [75].
In this review, we discuss in detail the formulation of the various thermal models existing in the literature and their applications to the analysis of particle production in ultrarelativistic heavy-ion collisions. We show that it is important to incorporate the interactions between hadrons in a consistent manner while formulating the EOS for the hot, dense HG. For the repulsive interactions, a van-der-Waals type of excluded-volume effect is often used in thermal models, while resonances are included in the system to account for the attractive interactions. We specifically demand that such interactions be incorporated in the models in a thermodynamically consistent way; there are still some thermal models in the literature which lack thermodynamical self-consistency. We have proposed a new excluded-volume model in which an equal hardcore size is assigned to each type of baryon in the HG, while the mesons are treated as pointlike particles. We have successfully used this model in calculating various properties of the HG, such as the number density and the energy density, and we have compared our results with those of the other models. Further, we have extracted the chemical freeze-out parameters in various thermal models by analyzing the particle ratios over a broad energy range and have parameterized them with the center-of-mass energy. We use these parameterizations to calculate the particle ratios at various center-of-mass energies and compare them with the experimental data. We further provide a proposal in the form of freeze-out conditions for a unified description of the chemical freeze-out of hadrons in various thermal models. An extension of the thermal model to the study of the various transport properties of hadrons will also be discussed. We also analyze the rapidity as well as the transverse mass spectra of hadrons using our thermal model and examine the role of any flow existing in the medium by matching the theoretical results with the experimental data.
Thus the thermal approach indeed provides a very satisfactory description of various features of the HG by reproducing a large number of experimental results over a wide energy range, from the Alternating Gradient Synchrotron (AGS) to the Large Hadron Collider (LHC).

Formulation of Thermal Models
Various types of thermal models for the HG using an excluded-volume correction based on a van-der-Waals type of effect have been proposed. Thermal models have often used the grand canonical ensemble to write the partition function for the system, because it is well suited to systems with a large number of produced hadrons [30] and/or a large volume. However, for nonrelativistic statistical mechanics, the use of a grand canonical ensemble is usually just a matter of convenience [86]. Furthermore, the canonical ensemble can be used in the case of small systems (e.g., peripheral nucleus-nucleus collisions) and at low energies in the case of strangeness production [87, 88], due to the canonical suppression of the phase space. Similarly, some descriptions also employ the isobaric partition function in the derivation of their HG model. We succinctly summarize the features of some models as follows.

Hagedorn Model.
In the Hagedorn model [67, 68], it is assumed that the excluded-volume correction is proportional to the pointlike energy density ε^0. It is also assumed that the density of states of the finite-size particles in the total volume V can be taken to be precisely the same as that of pointlike particles in the available volume Δ = V − Σ_i V_i^0, where V_i^0 is the eigenvolume of the ith particle in the HG. Thus, the grand canonical partition function satisfies the relation

ln Z(T, V, λ) = ln Z^0(T, Δ, λ). (1)

The sum of the eigenvolumes, Σ_i V_i^0, is given by the ratio of the invariant cluster mass to 4B, and λ is the fugacity, that is, λ = exp(μ/T). Hence Σ_i V_i^0 = E/4B = εV/4B, and the energy density is ε = Δε^0/V, where ε^0 is the energy density when particles are treated as pointlike. Now, using the expression for Δ, one finally gets

ε = ε^0 / (1 + ε^0/4B).

When ε^0/4B ≫ 1, ε = Σ_i ε_i^ex = 4B, which is obviously the upper limit of ε, since it gives the energy density existing inside a nucleon and is usually regarded as the latent heat density required for the phase transition. Here B represents the bag constant. The expressions for the number density and pressure can similarly be written as

n = n^0 / (1 + ε^0/4B),    P = P^0 / (1 + ε^0/4B).

Here, n^0 and P^0 are the number density and pressure of pointlike particles, respectively.
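In this model every pointlike quantity is thus suppressed by the common factor 1/(1 + ε^0/4B); a minimal numerical sketch (the input densities and the value of the bag constant B, in GeV/fm³, are illustrative only):

```python
def hagedorn_correction(eps0, n0, p0, B=0.25):
    """Hagedorn excluded-volume correction: the pointlike energy density,
    number density, and pressure are all divided by (1 + eps0/(4B)),
    so the corrected energy density saturates at 4B when eps0/(4B) >> 1."""
    f = 1.0 / (1.0 + eps0 / (4.0 * B))
    return eps0 * f, n0 * f, p0 * f
```

The saturation of the energy density at 4B is what the text identifies with the latent heat density of the phase transition.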

Cleymans-Suhonen Model.
In order to include a van-der-Waals type of repulsion between baryons, Cleymans and Suhonen [63] assigned an equal hardcore radius to each baryon. Consequently, the available volume for baryons is V − Σ_i N_i V_i^0, where V_i^0 is the eigenvolume of the ith baryon and N_i is its total number. As a result, the net excluded number density, pressure, and energy density of a multicomponent HG are given as

n^ex = n^0 / (1 + Σ_i n_i^0 V_i^0), (4)
P^ex = P^0 / (1 + Σ_i n_i^0 V_i^0), (5)
ε^ex = ε^0 / (1 + Σ_i n_i^0 V_i^0), (6)

where n^0, P^0, and ε^0 are the net baryon density, pressure, and energy density of pointlike particles, respectively, and Σ_i n_i^0 V_i^0 is the fraction of occupied volume. Kouno and Takagi [66] modified these expressions by considering the existence of a repulsive interaction either between a pair of baryons or between a pair of antibaryons only. The expressions (4), (5), and (6) then take the forms

n^ex = n_B^0 / (1 + n_B^0 V^0) + n_B̄^0 / (1 + n_B̄^0 V^0) + n_m^0,
P^ex = P_B^0 / (1 + n_B^0 V^0) + P_B̄^0 / (1 + n_B̄^0 V^0) + P_m^0,
ε^ex = ε_B^0 / (1 + n_B^0 V^0) + ε_B̄^0 / (1 + n_B̄^0 V^0) + ε_m^0,

where n_B^0 and n_B̄^0 are the number densities of the pointlike baryons and antibaryons, respectively, P_B^0 (P_B̄^0) and ε_B^0 (ε_B̄^0) are the corresponding pressures and energy densities, and n_m^0, P_m^0, and ε_m^0 are the number density, pressure, and energy density of pointlike mesons, respectively.
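The Cleymans-Suhonen suppression by the occupied-volume fraction can be sketched as follows (the per-species densities and eigenvolumes passed in are illustrative values):

```python
def cleymans_suhonen(n0, p0, eps0, v0):
    """Cleymans-Suhonen excluded-volume correction for a multicomponent HG:
    all pointlike quantities are suppressed by 1/(1 + sum_i n0_i * v0_i),
    where sum_i n0_i * v0_i is the fraction of occupied volume.
    n0, v0: per-species pointlike densities and eigenvolumes (lists)."""
    occupied = sum(n * v for n, v in zip(n0, v0))
    f = 1.0 / (1.0 + occupied)
    return [n * f for n in n0], p0 * f, eps0 * f
```

Note that the same factor multiplies every species, which is exactly the feature that makes this prescription thermodynamically inconsistent, as discussed below.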

Rischke-Gorenstein-Stöcker-Greiner (RGSG) Model.
The models discussed above possess a shortcoming: they are thermodynamically inconsistent, because thermodynamical variables like the baryon density cannot be derived from a partition function or a thermodynamical potential (Ω); for example, n_B ≠ −(1/V) ∂Ω/∂μ_B. Several proposals have come up to correct such inconsistencies. Rischke et al. [69] have attempted to obtain a thermodynamically consistent formulation. In this model, the grand canonical partition function for pointlike baryons is written in terms of the canonical partition function as

Z^0(T, μ, V) = Σ_{N=0}^{∞} exp(μN/T) Z(T, N, V). (8)

They further modified the canonical partition function by introducing a step function in the volume, so as to incorporate the excluded-volume correction into the formalism. The grand canonical partition function (8) then finally takes the form

Z^ex(T, μ, V) = Σ_{N=0}^{∞} exp(μN/T) Z(T, N, V − NV^0) θ(V − NV^0). (9)

The above ansatz is motivated by considering particles with eigenvolume V^0 in a volume V as pointlike particles in the available volume V − NV^0 [69]. But the problem in the calculation of (9) is the dependence of the available volume on the varying number of particles N [69]. To overcome this difficulty, one should use the Laplace transformation of (9), which gives the isobaric partition function

Ẑ(s, T, μ) = ∫_0^∞ dV exp(−sV) Z^ex(T, μ, V) = ∫_0^∞ dṼ exp(−sṼ) Z^0(T, μ̃, Ṽ),

where Ṽ = V − NV^0 and μ̃ = μ − sTV^0. One finally gets a transcendental type of equation,

p^ex(T, μ) = p^0(T, μ̃),    μ̃ = μ − V^0 p^ex(T, μ).

The expressions for the number density, entropy density, and energy density in this model thus take the familiar forms

n^ex(T, μ) = n^0(T, μ̃) / (1 + V^0 n^0(T, μ̃)),
s^ex(T, μ) = s^0(T, μ̃) / (1 + V^0 n^0(T, μ̃)),
ε^ex(T, μ) = ε^0(T, μ̃) / (1 + V^0 n^0(T, μ̃)).

These equations resemble (4) and (6) of the Cleymans-Suhonen model [63], with μ replaced by μ̃. The above model can be extended to a hadron gas involving several baryonic species:

p^ex(T, μ_1, …, μ_h) = Σ_{i=1}^{h} p_i^0(T, μ̃_i),    μ̃_i = μ_i − V_i^0 p^ex(T, μ_1, …, μ_h),

with i = 1, …, h. The particle number density for the ith species can be calculated from

n_i^ex = n_i^0(T, μ̃_i) / (1 + Σ_j V_j^0 n_j^0(T, μ̃_j)).

Unfortunately, the above model involves cumbersome, transcendental expressions which are usually not easy to calculate.
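The RGSG transcendental equation, p^ex(T, μ) = p^0(T, μ − V^0 p^ex), can be solved numerically by fixed-point iteration; a sketch in which the pointlike pressure p0 is supplied by the caller (the toy Boltzmann form used in the test is an illustrative assumption, not the full hadron-gas EOS):

```python
def rgsg_pressure(p0, T, mu, v0, tol=1e-12, itmax=1000):
    """Fixed-point solution of the RGSG excluded-volume equation
    p = p0(T, mu - v0*p), where p0(T, mu) is any pointlike EOS.
    Converges when the map is contracting near the solution."""
    p = p0(T, mu)                       # start from the pointlike pressure
    for _ in range(itmax):
        p_new = p0(T, mu - v0 * p)      # shift mu by -v0*p and re-evaluate
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p
```

With any reasonable pointlike EOS, the excluded volume shifts the effective chemical potential downward and hence lowers the pressure below its pointlike value.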
Furthermore, this model fails in the μ = 0 limit, because μ̃ becomes negative in this limit.

New Excluded-Volume Model.
Singh et al. [70] have proposed a thermodynamically consistent excluded-volume model in which the grand canonical partition function, using the Boltzmann approximation, can be written as in (18). Here g_i and λ_i = exp(μ_i/T) are the degeneracy factor and the fugacity of the ith baryonic species, respectively, k is the magnitude of the baryon momentum, and V_i⁰ is the eigenvolume assigned to each baryon of the ith species; hence ∑_i N_i V_i⁰ is the total occupied volume, where N_i is the total number of baryons of the ith species. We can rewrite (18) in the form of (19), where the integral I_i is given by (20). Thus we have obtained the grand canonical partition function (19) by incorporating the excluded-volume effect explicitly into the partition function. The number density of baryons after the excluded-volume correction, n_i^ex, follows from (21). Our prescription is therefore thermodynamically consistent, and it leads to the transcendental equation (22), where R = ∑_i n_i^ex V_i⁰ is the fractional occupied volume. It is clear that if we set the factor ∂R/∂λ_i = 0 and consider only one type of baryon in the system, then (22) reduces to the thermodynamically inconsistent expression (4); the presence of ∂R/∂λ_i in (22) thus removes the thermodynamical inconsistency. For a single-component HG, the solution of (22) can be taken as in (23). For a multicomponent hadron gas, (22) takes the form of (24). Using the method of parametric space [89], the fugacities are expressed in terms of a parameter, and the solution of (24) is finally obtained as (26), with the parameter constrained by (27). If the λ_i's are known, the solution can be determined; one parameter is fixed by setting the first component to zero, where the subscript 1 denotes the nucleon degree of freedom and h is the total number of baryonic species. It is obvious that this solution is not unique, since it contains parameters, one of which has been fixed to zero arbitrarily.
Alternatively, one can assume that the number density of the ith baryon depends only on the fugacity of that same baryon [70]. Then (24) reduces to the simple form (29), whose solution can be obtained in a straightforward manner [70] as (30). The ratio of the occupied volume to the available volume can then be obtained from (31), and n_i^ex finally follows from (32). The solution obtained in this model is remarkably simple and easy to use.
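The self-consistent role of the fractional occupied volume in such solutions can be illustrated with a schematic iteration. The pointlike densities below are made-up inputs rather than values computed from a real hadron spectrum, and the simple suppression n_i^ex = (1 − R) n_i⁰ is a caricature of the full equations above, shown only to exhibit the feedback between R and the corrected densities.

```python
# Hedged sketch: self-consistent fractional occupied volume R for a
# multicomponent hadron gas, iterating the schematic relation
#   n_i_ex = (1 - R) * n_i_0,   R = sum_i V0_i * n_i_ex.
import math

def solve_R(n0, V0, tol=1e-12):
    """Return (R, n_ex): fractional occupied volume and corrected densities."""
    R = 0.0
    while True:
        n_ex = [(1.0 - R) * n for n in n0]
        R_new = sum(v * n for v, n in zip(V0, n_ex))
        if abs(R_new - R) < tol:
            return R_new, n_ex
        R = R_new

# two illustrative baryon species (densities in fm^-3, volumes in fm^3)
n0 = [0.10, 0.05]
V0 = [4 / 3 * math.pi * 0.8**3] * 2   # common hardcore radius r = 0.8 fm

R, n_ex = solve_R(n0, V0)
R0 = sum(v * n for v, n in zip(V0, n0))
print(R, R0 / (1 + R0))   # the fixed point equals R0/(1 + R0)
```

The fixed point R = R⁰/(1 + R⁰), with R⁰ the pointlike occupied fraction, is the same leading structure that reappears in the operator solution discussed below.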
There is no arbitrary parameter in this formalism, so its solution can be regarded as unique. However, the theory still depends crucially on the assumption that the number density of the ith species is a function of its own fugacity λ_i alone, independent of the fugacities of the other kinds of baryons. As the interactions between different species become significant in a hot and dense HG, this assumption is no longer valid. Moreover, a serious problem crops up: calculations in this model are not possible for T > 185 MeV (and μ_B > 450 MeV). This particular limiting value of the temperature and baryon chemical potential depends significantly on the masses and degeneracy factors of the baryonic resonances included in the calculation.
In order to remove the above discrepancies, Mishra and Singh [32, 33] have proposed a thermodynamically consistent EOS for a hot and dense HG using Boltzmann statistics, providing an alternative way to solve the transcendental equation (22). We have extended this model by using quantum statistics in the grand canonical partition function, so that our model works even for extreme values of the temperature and baryon chemical potential. Thus (20) can be rewritten as in (33) [31], and (22) takes the form of (34) after quantum statistics are used in the partition function, with the term involving the partial derivative of R with respect to λ_i. We can write R in operator-equation form as (35) [32, 33, 90, 91], where R₁ = R⁰/(1 + R⁰); here R⁰ is built from the densities n_i⁰ of pointlike baryons of the ith species, and the operator Ω̂ has the form given in (36). Using the Neumann iteration method and retaining the series up to the Ω̂² term, we get (39). After solving (39), we finally obtain the expression (40) for the total pressure of the hadron gas [70], where P_i^meson is the pressure due to the ith type of meson.
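The structure of the truncated Neumann iteration can be seen in a scalar toy version, where the operator Ω̂ is replaced by multiplication by a constant a. This stand-in is purely illustrative (the true Ω̂ is an integral operator over the hadron spectrum); it only shows how truncating the series at the second-order term approximates the exact solution.

```python
# Hedged sketch of the Neumann iteration for an operator equation of the
# form R = R1 + Omega(R), with the scalar stand-in Omega(R) = a * R.
def neumann(R1, a, order=2):
    """Partial sum R1 * (1 + a + a^2 + ... + a^order)."""
    return R1 * sum(a**k for k in range(order + 1))

R1, a = 0.24, 0.15            # illustrative numbers, |a| < 1 for convergence
exact = R1 / (1.0 - a)        # closed form for the scalar stand-in
approx = neumann(R1, a, order=2)
print(approx, exact)          # truncation error is O(a^3)
```

For |a| < 1 the series converges geometrically, which is why retaining terms only up to Ω̂² already gives a good approximation when the occupied fraction is moderate.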
Here we emphasize that we consider repulsion arising only between a pair of baryons and/or antibaryons, because we assign a hardcore volume exclusively to each of them. To keep the calculation simple, we take an equal eigenvolume V⁰ = 4πr³/3 for every baryon species, with a hardcore radius r = 0.8 fm. We include in our calculation all baryons and mesons and their resonances lying in the HG spectrum with masses up to a cutoff of 2 GeV/c²; only those resonances with well-defined masses and widths are incorporated. Branching ratios for sequential decays are suitably accounted for, and where several decay channels exist, only the dominant mode is included. We also strictly impose the condition of strangeness neutrality by putting ∑_i S_i (n_i^s − n_i^s̄) = 0, where S_i is the strangeness quantum number of the ith hadron and n_i^s (n_i^s̄) is the strange (antistrange) hadron density. Using this constraint, we obtain the strange chemical potential in terms of μ_B. With all this in place, we calculate the energy density of each baryon species using (41); similarly, the entropy density follows from (42). This approach is evidently simpler than other thermodynamically consistent excluded-volume approaches, which often end in transcendental expressions [69, 86]. Our approach involves no arbitrary parameter in the calculation. Moreover, it can be used for extremely small as well as extremely large values of T and μ_B, where the other approaches fail to give satisfactory results, since we do not use the Boltzmann approximation in our calculation.
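The strangeness-neutrality constraint mentioned above can be sketched numerically. The reduced spectrum (K, K̄, Λ, Λ̄), the pointlike Boltzmann densities, and the function names below are illustrative assumptions of this sketch; the actual calculation sums over the full strange-hadron spectrum with excluded-volume corrections.

```python
# Hedged sketch: fixing the strange chemical potential mu_S from the
# strangeness-neutrality condition  sum_i S_i * n_i = 0,
# using pointlike Boltzmann densities and a reduced spectrum.
import math
from scipy.special import kn

def n_boltz(T, m, g, mu):
    """Pointlike Boltzmann number density (GeV^3; overall scale cancels
    in the neutrality condition)."""
    return g * m**2 * T / (2 * math.pi**2) * kn(2, m / T) * math.exp(mu / T)

def net_strangeness(T, mu_B, mu_S):
    # (mass [GeV], degeneracy, baryon number B, strangeness S)
    spectrum = [(0.494, 2,  0, +1),   # K
                (0.494, 2,  0, -1),   # Kbar
                (1.116, 2, +1, -1),   # Lambda
                (1.116, 2, -1, +1)]   # anti-Lambda
    return sum(S * n_boltz(T, m, g, B * mu_B + S * mu_S)
               for m, g, B, S in spectrum)

def solve_mu_S(T, mu_B, lo=0.0, hi=None):
    """Bisection for mu_S such that the net strangeness vanishes."""
    hi = mu_B if hi is None else hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if net_strangeness(T, mu_B, mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu_S = solve_mu_S(T=0.160, mu_B=0.300)
print(mu_S)   # lies between 0 and mu_B
```

At μ_S = 0 the Λ excess makes the net strangeness negative, while at μ_S = μ_B the kaon terms dominate and it is positive, so a root is bracketed in (0, μ_B).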

Statistical and Thermodynamical Consistency
Recently, the question of statistical and thermodynamical consistency in excluded-volume models for the HG has been widely discussed [77]. In this section, we reexamine this issue. In the RGSG model [69], the single-particle grand canonical partition function (9) can be rewritten as (43), with the quantity defined in (44). In this model, N, appearing in the available volume (V − V⁰N), is independent of μ. Differentiating (43) with respect to μ gives (45); multiplying both sides of (45) by T/Z^ex gives (46). The statistical and thermodynamical averages of the number of baryons are given by the standard definitions (47). Using (47) in (46), one obtains [77] N̄ = ⟨N⟩.
Thus, we see that in the RGSG model the thermodynamical average of the number of baryons is exactly equal to its statistical average. Similarly, in this model one can show the corresponding equality for the energy. Now, we calculate the statistical and thermodynamical averages of the number of baryons in our excluded-volume model. The grand canonical partition function in our model, that is, (18), can be written in the form (50), with the integral given by (44). We use Boltzmann statistics for the sake of convenience and consider only one species of baryons. In our model, N̄, appearing in the available volume (V − V⁰N̄), is μ dependent. For a multicomponent system, one cannot use a "fixed N̄", because in that case the van der Waals approximation is not uniquely defined [92, 93]; we therefore use the average N̄ in our multicomponent grand partition function. However, at high temperatures it is not possible to use a one-component van der Waals description for a system of various species with different masses [92, 93]. Now, differentiating (50) with respect to μ, we get (51).

Multiplying both sides of (51) by T/Z^ex, we get (52); using the definitions (47), (52) takes the form (53). Here the first term is the thermal average of the number density of baryons, n⁰ is the number density of pointlike baryons, and the remaining quantity is defined in (54). The second term in (54) is a redundant one; it arises because N̄, appearing in the available volume (V − V⁰N̄), is a function of μ. We call this term the "correction term". In Figure 2, we show the variation of the thermodynamical average of the number density of baryons and of the "correction term" with respect to T at μ_B = 400 MeV. We see that the contribution of this "correction term" to the thermodynamical average of the number density of baryons is almost negligible. Although, because of this "correction term", the statistical average of the number density of baryons is not exactly equal to its thermodynamical average, the difference is so small that it can safely be neglected. Similarly, we can show that such redundant terms appear in the statistical average of the energy density of baryons, arising from the temperature dependence of N̄; these again give a negligible contribution to the thermodynamical average of the energy density. Thus the statistical and thermodynamical averages of physical quantities such as the number density and the energy density are approximately equal to each other in our model as well. Our excluded-volume model is therefore not exactly thermodynamically consistent, but it can safely be taken as consistent, because the correction term in the averaging procedure is negligibly small.

Comparisons between Model Results and Experimental Data
In this section, we review various features of the hadron gas and present comparisons between the results of various HG models and the experimental data.

Chemical Freeze-Out Criteria.
Thermal models provide a systematic study of many important properties of the hot and dense HG at chemical freeze-out (where inelastic interactions cease). To establish a relation between the chemical freeze-out parameters (T, μ_B) and √s_NN, a common method is to fit the experimental hadron ratios. Many papers have appeared [30, 49, 50, 94–96] in which T and μ_B are extracted in terms of √s_NN. In [30], various hadron ratios were analyzed from √s_NN = 2.7 GeV to 200 GeV, and the chemical freeze-out parameters were parameterized in terms of √s_NN by expressions whose coefficients, together with T_lim (the limiting temperature), are fitting parameters. Various authors [94, 95] have included a strangeness suppression factor γ_s in their models while extracting the chemical freeze-out parameters. In the thermal model, γ_s is used to account for the partial equilibration of the strange particles. Such a situation may arise in elementary p-p collisions and/or peripheral A-A collisions, and it is mostly in such cases that the use of γ_s is warranted [94, 97]. Moreover, γ_s ≈ 1 has been found in central collisions at RHIC [98, 99]. We do not require γ_s as an additional fitting parameter in our model, because we assume that strangeness is also fully equilibrated in the HG. It has also been pointed out that the inclusion of γ_s in a thermal-model analysis does not much affect the fitted values of T and μ_B [30]. Dumitru et al. [96] have used an inhomogeneous freeze-out scenario for the extraction of T and μ_B at various √s_NN. In a recent paper [100], the condition of a vanishing value of the kurtosis, κ = 0 (or, equivalently, m₄ = 3σ⁴), is used to describe the chemical freeze-out line, where κ, σ, m₄, and χ are the kurtosis, the standard deviation, the fourth-order moment, and the susceptibility, respectively. In [101], experimental data on σ² and the skewness S were compared for the first time with lattice QCD calculations and the hadron resonance gas model to determine the critical temperature T_c for the QCD phase transition.
Recently, it has been shown that the freeze-out parameters in heavy-ion collisions can be determined by comparing lattice QCD results for the first three cumulants of net electric charge fluctuations with the experimental data [102]. In Figure 3, we show the energy dependence of the thermal parameters T and μ_B extracted by various authors. All the studies find similar behaviour, except that of Letessier and Rafelski [95], which may be due to their use of many additional free parameters, such as the light-quark occupancy factor γ_q and an isospin fugacity. We have also extracted the freeze-out parameters by fitting the experimental particle ratios from the lowest SIS energy to the highest RHIC energy using our model [31]. For comparison, the values obtained in other models, for example, the IHG model, the Cleymans-Suhonen model, and the RGSG model, are shown in Table 1. We then parameterize the variables T and μ_B in terms of √s_NN as follows [103]: μ_B = a/(1 + b√s_NN), T = c − d μ_B² − e μ_B⁴, where the best fits give a = 1.482 ± 0.0037 GeV, b = 0.3517 ± 0.009 GeV⁻¹, c = 0.163 ± 0.0021 GeV, d = 0.170 ± 0.02 GeV⁻¹, and e = 0.015 ± 0.01 GeV⁻³. The systematic error of the fits can be estimated via the quadratic deviation δ² [30], defined as δ² = ∑ (R_exp − R_therm)²/(R_therm)², where R_exp and R_therm are the experimental and thermal-model values, respectively, of either a hadron yield or a ratio of hadron yields.
In this analysis, we have used full phase space (4π) data at all center-of-mass energies, except at RHIC energies, where only midrapidity data are available for all the ratios. Moreover, the midrapidity and full phase space data at these energies differ only slightly, as pointed out by Alt and the NA49 Collaboration for the K⁺/π⁺ and K⁻/π⁻ ratios [104]. In Figure 4, we show the parametrization of the freeze-out values of the baryon chemical potential with respect to √s_NN, and similarly in Figure 5, the chemical freeze-out curve in the temperature-baryon chemical potential plane [31].
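The parametrization quoted above can be evaluated directly. The sketch below assumes the functional form μ_B = a/(1 + b√s_NN), T = c − dμ_B² − eμ_B⁴ following [103], with the central best-fit values from the text; the parameter uncertainties are ignored, and the chosen sample energies are merely illustrative.

```python
# Hedged sketch of the chemical freeze-out parametrization:
#   mu_B = a / (1 + b * sqrt(s_NN)),   T = c - d*mu_B^2 - e*mu_B^4.
A, B = 1.482, 0.3517            # GeV, GeV^-1
C, D, E = 0.163, 0.170, 0.015   # GeV, GeV^-1, GeV^-3

def freezeout(sqrt_s):
    """Chemical freeze-out (T, mu_B) in GeV at collision energy sqrt_s (GeV)."""
    mu_B = A / (1.0 + B * sqrt_s)
    T = C - D * mu_B**2 - E * mu_B**4
    return T, mu_B

for s in (2.7, 17.3, 200.0):    # low SIS/AGS, top SPS, top RHIC (GeV)
    T, mu = freezeout(s)
    print(f"sqrt(s)={s:6.1f} GeV  T={T*1000:5.1f} MeV  mu_B={mu*1000:6.1f} MeV")
```

As expected from Figures 4 and 5, μ_B falls monotonically with energy while T rises toward the limiting value c.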

Hadron Ratios.
Experimental measurements of various particle ratios at various centre-of-mass energies [105–111] reveal an unusually sharp variation in the Λ/π⁻ ratio, which increases up to a peak value. This strong variation of the ratio with energy indicates the critical temperature of the QCD phase transition [112] between the HG and the QGP [113, 114], and the nontrivial value T_c ≈ 176 MeV has been extracted for the critical temperature [114]. Figure 6 shows the variation of Λ/π⁻ with √s_NN. We compare the experimental data with various thermal models [31, 63, 69] and find that our model calculation gives a much better fit to the experimental data than the other models. We obtain a sharp peak around a centre-of-mass energy of 5 GeV, and our results thus reproduce almost all the features of the experimental data.
In Figure 7, we show the variation of the ϕ/π ratio with √s_NN. The ϕ yields in the thermal models are often much higher than the data. We notice that no thermal model can suitably account for the multiplicity ratio of this multistrange particle, since the ϕ is a hidden-strange (ss̄) combination. However, a quark coalescence model assuming QGP formation has been claimed to explain the results successfully [115, 116]. In the thermal models, the results for the multistrange particles raise doubt over the degree of chemical equilibration for strangeness reached in the HG fireball; one can resort to an arbitrary parameter γ_s, as used in several models. The failure of the thermal models in these cases may indicate QGP formation, but this is still not clear. In Figure 8, we show the energy dependence of antiparticle-to-particle ratios, for example, π⁻/π⁺, p̄/p, Λ̄/Λ, and Ξ̄⁺/Ξ⁻. These ratios increase sharply with √s_NN and then almost saturate at higher energies, reaching a value equal to 1.0 at the LHC energy. On comparison with the experimental data, we find that almost all the thermal models describe these data successfully at all center-of-mass energies; however, the RGSG model [69] fails to describe the data at SPS and RHIC energies as well as the other models do [31].

Thermodynamical Properties.
We present thermal-model calculations of various thermodynamical properties of the HG, such as the entropy per baryon (s/n_B) and the energy density, and compare the results with the predictions of URASiMA, a microscopic event generator developed by Sasaki [122]. URASiMA (ultrarelativistic AA collision simulator based on multiple scattering algorithm) is a microscopic model that includes realistic interactions between hadrons: molecular-dynamics simulations are performed for a system of a HG. URASiMA includes multibody absorptions, which are the reverse processes of multiparticle production and are not included in any other model. Although URASiMA gives a realistic EOS for the hot and dense HG, it does not include antibaryons and strange particles in the simulation, which is a crucial limitation. In Figure 9, we plot the variation of s/n_B with respect to the temperature T at fixed net baryon density n_B. The s/n_B calculated in our model shows better agreement with the results of Sasaki [122] than the other excluded-volume models do. Thus a thermal-model approach incorporating macroscopic geometrical features gives results close to those of a simulation involving microscopic interactions between hadrons. Various parameters, such as hadronic coupling constants, appear in the URASiMA model through the interactions between hadrons; it is certainly encouraging to find an excellent agreement between the results obtained with two such widely different approaches. Figure 10 shows the variation of the energy density of the HG with respect to T at constant n_B. Again, our model calculation is closer to the URASiMA result than the other excluded-volume models are. The energy density increases very slowly with the temperature initially and then rises rapidly at higher temperatures.

Causality.
One of the deficiencies of excluded-volume models is the violation of causality in the hot and dense hadron gas; that is, the sound velocity c_s can exceed the velocity of light in the medium. In other words, c_s > 1 (in units of c = 1) would mean that the medium transmits information at a speed faster than light [74]. Since we are discussing the results of various excluded-volume models in this paper, it is interesting to check whether these models respect causality. In Figure 11, we plot the total hadronic pressure as a function of the energy density of the HG at fixed entropy per particle, using our model calculation [31]; we find that at fixed s/n the pressure varies linearly with the energy density. In Figure 12, we show the variation of c_s (with c_s² = ∂p/∂ε at fixed s/n) with respect to s/n. We find c_s ≤ 0.58 in our model with interacting particles, and c_s = 0.58 (i.e., 1/√3) for an ideal gas of ultrarelativistic particles. This feature endorses our viewpoint that our model is not only thermodynamically consistent but also free of any violation of causality, even at large density. Similarly, in the RGSG model [69] we do not find the value of c_s exceeding 1, as shown in Figure 12. It should be mentioned that we use full quantum statistics in all the models taken for comparison here. However, we find that c_s cannot be extracted in the RGSG model when the temperature of the HG exceeds 250 MeV; no such restriction applies to our model.
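The causal bound quoted above can be checked numerically for a simple limiting case. The sketch below assumes an ideal Boltzmann gas of massive particles at μ = 0, evaluating c_s² = dp/dε along the temperature axis; this is an illustrative check of the ideal-gas limit, not the interacting-HG calculation of Figure 12.

```python
# Hedged sketch: c_s^2 = dp/deps for an ideal Boltzmann gas at mu = 0.
# In the ultrarelativistic limit m/T -> 0 this approaches 1/3
# (c_s = 1/sqrt(3) ~ 0.58), the limiting value quoted above.
import math
from scipy.special import kn

def p_and_eps(T, m, g=3.0):
    """Boltzmann pressure and energy density (overall scale arbitrary)."""
    n = g * m**2 * T / (2 * math.pi**2) * kn(2, m / T)
    p = n * T
    eps = n * (3 * T + m * kn(1, m / T) / kn(2, m / T))  # <E> = 3T + m K1/K2
    return p, eps

def cs2(T, m, dT=1e-4):
    """Central finite difference for dp/deps along the T axis."""
    p1, e1 = p_and_eps(T - dT, m)
    p2, e2 = p_and_eps(T + dT, m)
    return (p2 - p1) / (e2 - e1)

for m in (0.140, 0.938):        # pion-like and nucleon-like masses (GeV)
    print(m, cs2(0.170, m))     # both below 1/3, i.e. causal
```

Finite masses push c_s² below 1/3, consistent with the statement that the interacting HG stays below c_s = 0.58.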

Universal Freeze-Out Criteria.
One of the most remarkable successes of thermal models is in explaining the multiplicities and particle ratios of various particles produced in heavy-ion experiments from the lowest SIS energy to the maximum LHC energy. Some properties of the thermal fireball are found to be common to all collision energies, which suggests universal freeze-out conditions in heavy-ion collisions. We now review the applicability of thermal models in deriving useful chemical freeze-out criteria for the fireball. Recent studies [39, 48, 103, 123–125] predict that the following empirical conditions hold on the entire freeze-out hypersurface of the fireball: (i) the energy per hadron always has a fixed value, E/N ≈ 1.08 GeV; (ii) the sum of the baryon and antibaryon densities is n_B + n_B̄ ≈ 0.12 fm⁻³; (iii) the normalized entropy density is s/T³ ≈ 7. Further, Cleymans et al. [103] have found that each of the above conditions separately gives a satisfactory description of the chemical freeze-out parameters T and μ_B in an IHG picture only; moreover, these conditions were found to be independent of the collision energy and of the geometry of the colliding nuclei. Furthermore, Cleymans et al. [103] have hinted that incorporating the excluded-volume correction has wild, even disastrous, effects on these conditions. Our purpose in this section is to reinvestigate the validity of these freeze-out criteria in excluded-volume models. Along with these conditions, a condition formulated using percolation theory has also been proposed as a chemical freeze-out criterion [124]. The assumption is made that in the baryonless region the hadronic matter freezes out due to hadron-resonance and vacuum percolation, while in the baryon-rich region the freeze-out takes place due to baryon percolation; the resulting equation for the chemical freeze-out line involves V_h, the volume of a hadron, and the numbers 1.24 and 0.34 obtained within percolation theory [126].
In Figure 13, we show the variation of E/N with respect to √s_NN at the chemical freeze-out point of the fireball. The ratio E/N has a constant value of about 1.0 GeV in our model and shows a remarkable energy independence. Similarly, the curve in the IHG model gives a value of E/N slightly larger than that reported in [103]; however, the results support the finding that E/N is almost independent of the energy and of the geometry of the nuclei. Most importantly, we notice that the inclusion of the excluded-volume correction does not change the result much, contrary to the claim of Cleymans et al. [103]. The condition E/N ≈ 1.0 GeV was successfully used in the literature to predict [45] the freeze-out parameters at the SPS energies of 40 and 80 A GeV for Pb-Pb collisions long before the data were taken [31]. Moreover, Figure 13 also shows the curves in the Cleymans-Suhonen model [63] and the RGSG model [69], where we notice a small variation with √s_NN, particularly at the lower energies. In Figure 14, we study a possible new freeze-out criterion that was not proposed earlier: the entropy per particle, s/n. This quantity yields a remarkable energy independence in our model calculation, and s/n ≈ 7.0 describes the chemical freeze-out criterion almost independently of the centre-of-mass energy. However, the results below √s_NN = 6 GeV do not give promising support to our criterion and also reveal some energy dependence.
This indicates that the use of excluded-volume models and thermal descriptions at very low energies is not valid for the HG. Similar results were obtained in the RGSG, Cleymans-Suhonen, and IHG models as well [31]. The conditions E/N ≈ 1.0 GeV and s/n ≈ 7.0 at chemical freeze-out define a constant hypersurface from which all the particles freeze out, all kinds of inelastic collisions cease simultaneously, and the particles fly towards the detectors. Thus all particles attain thermal equilibrium at the line of chemical freeze-out, and when they come out of the fireball they have an almost constant energy per particle (≈1.0 GeV) and entropy per particle (≈7.0). Moreover, these values are independent of the initial collision energy as well as of the geometry of the colliding nuclei.
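The E/N criterion can be illustrated with a toy calculation. The reduced spectrum below (pions plus nucleons and antinucleons, pointlike, Boltzmann) is an illustrative assumption; the full hadron spectrum would shift the extracted temperature, so the number printed here should not be read as the paper's freeze-out value.

```python
# Hedged sketch: locating a freeze-out temperature from the criterion
# E/N = 1.08 GeV at fixed mu_B, for a reduced toy spectrum.
import math
from scipy.special import kn

SPECTRUM = [(0.140, 3, 0), (0.938, 4, +1), (0.938, 4, -1)]  # (m, g, B)

def E_per_N(T, mu_B):
    """Average energy per particle (GeV) of the toy Boltzmann mixture."""
    num = den = 0.0
    for m, g, B in SPECTRUM:
        n = g * m**2 * T / (2 * math.pi**2) * kn(2, m / T) \
            * math.exp(B * mu_B / T)
        num += n * (3 * T + m * kn(1, m / T) / kn(2, m / T))  # <E> per species
        den += n
    return num / den

def freezeout_T(mu_B, target=1.08, lo=0.05, hi=0.30):
    """Bisection in T for E/N = target (GeV); assumes E/N rises with T."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if E_per_N(mid, mu_B) < target else (lo, mid)
    return 0.5 * (lo + hi)

print(freezeout_T(mu_B=0.600))   # freeze-out T (GeV) for this toy spectrum
```

The same bisection applied along a grid of μ_B values traces out a freeze-out line in the (T, μ_B) plane, which is how a condition like E/N ≈ 1.08 GeV defines a hypersurface.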
Our findings lend support to the crucial assumption that the HG fireball achieves chemical equilibrium in heavy-ion collisions from the lowest SIS to RHIC energies, and that the EOS of the HG developed by us indeed gives a proper description of the hot and dense fireball and its subsequent expansion. However, these studies still give no information regarding QGP formation: once the hot and dense HG attains chemical equilibrium, it removes any memory of a QGP existing in the fireball before the HG phase. Furthermore, in a heavy-ion collision a large amount of kinetic energy becomes available.

Transport Properties of HG.

Transport coefficients are very important tools for quantifying the properties of a strongly interacting relativistic fluid and its critical phenomena, that is, the phase transition and the critical point [127–129]. Fluctuations cause the system to depart from equilibrium, and a nonequilibrated system is created for a brief time. The response of the system to such fluctuations is essentially described by the transport coefficients, for example, the shear viscosity, the bulk viscosity, the speed of sound, and so forth. Recently, the collective-flow data obtained from the RHIC and LHC experiments indicate that the system created in these experiments behaves as a strongly interacting, nearly perfect fluid [130, 131], whereas the QGP created in these experiments had been expected to behave like a perfect gas. The perfect fluid created after the phase transition implies a very low value of the shear viscosity to entropy density ratio, so that dissipative effects are negligible and the collective flow is large, as observed in the heavy-ion collision experiments [10, 132, 133]. There have been several analytic calculations of η and η/s for simple hadronic systems [134–140], along with some sophisticated microscopic transport-model calculations [141–143] in the literature.
Furthermore, some calculations predict that the minimum of the shear viscosity to entropy density ratio is related to the QCD phase transition [144–149]. Similarly, the sound velocity is an important property of the matter created in heavy-ion collision experiments, because the hydrodynamic evolution of this matter depends strongly on it. A minimum in the sound velocity has also been interpreted in terms of a phase transition [29, 145, 150–155], and, further, the presence of a shallow minimum corresponds to a crossover transition [156]. In view of the above, it is worthwhile to study in detail the transport properties of the HG in order to fully comprehend the nature of the matter created in heavy-ion collisions as well as the phase transition involved. In this section, we use thermal models to calculate transport properties of the HG such as the shear viscosity to entropy ratio [31].
We calculate the shear viscosity in our thermal model as was done previously by Gorenstein et al. [157] using the RGSG model. According to molecular kinetic theory, the shear viscosity behaves as η ≅ (1/3) n ⟨|p|⟩ λ [158], where n is the particle density, λ is the mean free path, and ⟨|p|⟩ is the average thermal momentum of the baryons or antibaryons. For a mixture of particle species with different masses but the same hardcore radius r, the shear viscosity can be calculated from the corresponding mixture formula [157], in which n_i is the number density of the ith species of baryons (antibaryons) and n is the total baryon density. In Figure 15, we show the variation of η/s with respect to temperature as obtained in our model for a HG with a baryonic hardcore radius r = 0.5 fm, and compare our results with those of Gorenstein et al. [157]. We find that near the expected QCD phase transition temperature (T_c = 170-180 MeV), η/s is lower in our HG model than in the other models; in fact, η/s in our model lies close to the lower bound 1/4π suggested by AdS/QCD theories [159, 160]. Recent measurements in Pb-Pb collisions at the Large Hadron Collider (LHC) support the value η/s ≈ 1/4π when compared with viscous-fluid hydrodynamic flow [161]. Figure 16: Variation of η/s with respect to the baryon chemical potential (μ_B) at a very low temperature of 10 MeV; the solid line represents our calculation [31], and the dotted curve the calculation of Itakura et al. [139].
In Figure 16, we show the variation of η/s with respect to μ_B at a very low temperature (≈10 MeV) [31]. We find that η/s remains constant as μ_B increases up to 700 MeV and then decreases sharply. This kind of valley-like structure at low temperature, at around μ_B = 950 MeV, was also obtained by Chen et al. [137] and Itakura et al. [139], who related it to the liquid-gas phase transition of nuclear matter. As the temperature is raised above 20 MeV, this valley-like structure disappears. They further suspect that the observation of a discontinuity at the bottom of the η/s valley may correspond to the location of the critical point. Our HG model yields a curve in complete agreement with these results. Figure 17 shows the variation of η and η/s with respect to temperature at fixed μ_B (= 300 MeV), for a HG with a baryonic hardcore radius r = 0.8 fm. We compare our result with that obtained in [139]. We find that η increases with temperature in our HG model, as it does in the simple phenomenological calculation [139], but it decreases with increasing temperature in low-temperature effective field theory (EFT) calculations [137, 139]. However, η/s decreases with increasing temperature in all three calculations, and η/s in our model gives the lowest value at all temperatures. In Figure 18, we compare η calculated in our HG model with the results obtained in a microscopic pion-gas model [141]. Our model shows fair agreement with the microscopic results for temperatures above 160 MeV, while at lower temperatures the microscopic calculation predicts lower values of η than ours. The most probable reason is that the microscopic calculation includes only the pion gas, while at low temperatures the inclusion of baryons in the HG is very important for extracting a correct value of the shear viscosity [31].
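The kinetic-theory dependence quoted above can be sketched numerically. The sketch assumes the schematic relation η ≅ (1/3) n ⟨|p|⟩ λ with a hard-sphere mean free path; this is not the full multicomponent formula of [157], and the input numbers are illustrative.

```python
# Hedged sketch of the kinetic-theory estimate eta ~ (1/3) n <|p|> lambda,
# with mean free path lambda = 1/(sqrt(2) n sigma) and hard-sphere
# cross-section sigma = 4*pi*r^2.
import math

def avg_momentum(T, m):
    """Mean thermal momentum <|p|> (GeV) of a Boltzmann gas, from
    <|p|> = int p^3 exp(-E/T) dp / int p^2 exp(-E/T) dp (simple grid)."""
    num = den = 0.0
    dp = 0.001
    for i in range(1, 20000):
        p = i * dp
        w = p * p * math.exp(-math.hypot(p, m) / T)
        num += p * w
        den += w
    return num / den

def eta_hard_sphere(T, m, n, r):
    """n in fm^-3, r in fm; returns the estimate in GeV/fm^2."""
    sigma = 4.0 * math.pi * r**2                # fm^2
    lam = 1.0 / (math.sqrt(2.0) * n * sigma)    # fm
    return n * avg_momentum(T, m) * lam / 3.0

print(eta_hard_sphere(T=0.170, m=0.938, n=0.15, r=0.5))
```

Note that in this simplest estimate the density cancels between n and λ, so η depends only on the thermal momentum and the hardcore radius; the mixture formula of [157] refines this by weighting each species with n_i/n.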
Figure 19 shows the variation of η/s with respect to √s_NN in our model calculation [31], compared with that calculated in [157]. The two results are similar at lower energies but differ significantly at higher energies; both calculations, however, show that η/s decreases with increasing √s_NN. The study of the transport properties of nonequilibrium systems not far from an equilibrium state has yielded valuable results in the recent past. The large values of the elliptic flow observed at RHIC indicate that the matter in the fireball behaves as a nearly perfect liquid with a small value of the η/s ratio. After evaluating η/s in strongly coupled theories using the AdS/CFT duality conjecture, a lower bound was reported as η/s = 1/4π. We notice, remarkably, that the fireball with a hot, dense HG as described in our model gives transport coefficients that agree with those obtained in quite different approaches. The temperature and baryon chemical potential dependence of η/s is analyzed and compared with the results obtained in other models. Our results lend support to the claim that knowledge of the EOS and of the transport coefficients of the HG is essential for a better understanding of the dynamics of the medium formed in heavy-ion collisions [31].

Rapidity and Transverse Mass Spectra.
In order to identify any unambiguous signal of the QGP, the dynamics of the collisions should be properly understood. Such information can be obtained by analyzing the properties of the various particles emitted at various stages of the collisions. Hadrons are produced at the end of the hot and dense QGP phase, but they subsequently scatter in the confined hadronic phase prior to decoupling (or "freeze-out") from the collision system; finally, a collective evolution of the hot and dense matter occurs in the form of transverse, radial, or elliptic flow, which is instrumental in shaping the important features of the particle spectra. The global properties and dynamics of freeze-out can best be studied via hadronic observables such as rapidity distributions and transverse mass spectra [162]. There are various approaches to the study of the rapidity and transverse mass spectra of the HG [28, 37, 38]. Hadronic spectra from purely thermal models usually reveal an isotropic distribution of particles [186], and hence the rapidity spectra obtained with purely thermal models do not reproduce the features of the experimental data satisfactorily; similarly, the transverse mass spectra from thermal models are steeper than those observed experimentally. These comparisons illustrate that the fireball formed in heavy-ion collisions does not expand isotropically, and that there is a prominent contribution of collective flow in the longitudinal and transverse directions, which finally causes the anisotropy of the rapidity and transverse mass distributions of the hadrons after freeze-out. Here we mention some models of thermal and collective flow used in the literature. The hydrodynamical properties of the expanding fireball were initially discussed by Bjorken and Landau for the central-rapidity and stopping regimes, respectively [28, 163]. However, the collisions even at RHIC energies reveal that they are neither fully stopped nor fully transparent.
As the collision energy increases, the longitudinal flow grows stronger and leads to a cylindrical geometry, as postulated in [37,38,164,165]. These models assume that the fireballs are distributed uniformly in the longitudinal direction and demonstrate that the available data can consistently be described in a thermal model with inputs of chemical equilibrium and flow, although they have used the experimental data for small systems only. They use two simple parameters: the transverse flow velocity (β_T) and the temperature (T). In [166,167], a nonuniform flow model is used to analyze the spectra, especially to reproduce the dip at midrapidity in the rapidity spectra of baryons, by assuming that the fireballs are distributed nonuniformly in the longitudinal phase space. In [175][176][177][178][179], a rapidity-dependent baryon chemical potential has been invoked to study the rapidity spectra of hadrons. In certain hydrodynamical models [180], the measured transverse momentum (p_T) distributions in Au-Au collisions at √s_NN = 130 GeV [181][182][183] have been described successfully by incorporating a radial flow. In [184], the rapidity spectra of mesons have been studied using viscous relativistic hydrodynamics in 1+1 dimensions, assuming a non-boost-invariant Bjorken flow in the longitudinal direction. The authors have also analyzed the effect of the shear viscosity on the longitudinal expansion of the matter: shear viscosity counteracts the gradients of the velocity field and, as a consequence, slows down the longitudinal expansion. Ivanov [185] has employed the three-fluid dynamics (3FD) model [187] for the study of rapidity distributions of hadrons in the energy range from 2.7 GeV to 62.4 GeV. In the 3FD model, three different equations of state (EOS) are used: (i) a purely hadronic EOS, (ii) an EOS involving a first-order phase transition from hot, dense HG to QGP, and (iii) an EOS with a smooth crossover transition. All three scenarios reproduce the data to almost the same extent.
In [188], rapidity distributions of various hadrons in central nucleus-nucleus collisions have been studied in Landau's and Bjorken's hydrodynamical models. The effect of the speed of sound (c_s) on the hadronic spectra and the correlation of c_s with the freeze-out parameters are indicated.
In this section, we study the rapidity and transverse mass spectra of hadrons using the thermal approach. We can rewrite (36) in the following manner [58]. If we use Boltzmann's approximation, (63) differs from the one used in the paper of Schnedermann et al. [164] by the presence of a prefactor [(1 − R) e^(−μ_i/T)]. However, we measure all these quantities precisely at the chemical freeze-out using our model, and hence quantitatively we do not require any normalizing factor as required in [164]. We use the following expression to calculate the rapidity distributions of baryons in the thermal model [58]:

(dN_i/dy)_th = (g_i V λ_i / 4π²) ∫ m_T² cosh y exp(−m_T cosh y / T) dm_T.   (64)

Here y is the rapidity variable and m_T is the transverse mass (m_T = √(p_T² + m²)). Also, V is the total volume of the fireball formed at chemical freeze-out, and N_i is the total number of ith baryons. We assume that the freeze-out volume of the fireball for all types of hadrons at the time of the homogeneous emission of hadrons remains the same. It can be mentioned here that in the above equation there occurs no free parameter, because all the quantities V, T, μ, λ, and so forth are determined in the model. However, (64) describes the experimental data only at midrapidity, while it fails at forward and backward rapidities, so we need to modify it by incorporating a flow factor in the longitudinal direction. Thus the resulting rapidity spectrum of the ith hadron is [37,38,58,164]

dN_i/dy = ∫ from −η_max to η_max (dN_i/dy)_th (y − η) dη,   (65)

where (dN_i/dy)_th can be calculated by using (64). The expression for the average longitudinal velocity is [166,167,189]

⟨β_L⟩ = tanh(η_max/2).   (66)

Here η_max is a free parameter which provides the upper rapidity limit for the longitudinal flow velocity at a particular √s_NN, and its value is determined by the best fit to the experimental data. The value of η_max increases with increasing √s_NN, and hence ⟨β_L⟩ also increases. Cleymans et al. [179] have extended the thermal model [175,176], in which the chemical freeze-out parameters are rapidity-dependent, to calculate the rapidity spectra of hadrons.
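As an illustrative numerical sketch of the boost-averaging described above (not the fitted calculation of [58]; the temperature, particle mass, and quadrature step sizes below are placeholder assumptions), one can convolve a Boltzmann thermal rapidity density with a uniform distribution of fireball rapidities:

```python
import math

def dndy_thermal(y, T=0.160, m=0.494):
    """Rapidity density of a static thermal (Boltzmann) source, up to an
    overall normalization: integrate m_T^2 * cosh(y) * exp(-m_T cosh(y)/T)
    over m_T by a simple midpoint rule (T, m in GeV)."""
    total, dm, steps = 0.0, 0.005, 800
    for i in range(steps):
        mt = m + (i + 0.5) * dm          # m_T from m up to m + 4 GeV
        total += mt * mt * math.cosh(y) * math.exp(-mt * math.cosh(y) / T) * dm
    return total

def dndy_with_flow(y, eta_max=3.2, **kw):
    """Boost-average the thermal density over fireball rapidities spread
    uniformly in [-eta_max, eta_max] (Schnedermann-type ansatz)."""
    steps, total = 200, 0.0
    deta = 2.0 * eta_max / steps
    for i in range(steps):
        eta = -eta_max + (i + 0.5) * deta
        total += dndy_thermal(y - eta, **kw) * deta
    return total
```

Note that with η_max = 3.2, the average longitudinal velocity tanh(3.2/2) ≈ 0.92, consistent with the value quoted later in this section for √s_NN = 200 GeV; the flow-averaged spectrum is much flatter in y than the purely thermal one.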
They use the following expression for the rapidity spectra:

dN/dy = ∫ ρ(y_FB) (dN₁/dy)(y − y_FB) dy_FB,   (67)

where dN₁/dy is the thermal rapidity distribution of particles calculated by using (64) and ρ(y_FB) is a Gaussian distribution of fireballs centered at zero:

ρ(y_FB) ∝ exp(−y_FB²/2σ²).   (68)

Similarly, we calculate the transverse mass spectra of hadrons by using the following expression [58]:

dN_i/(m_T dm_T) = (g_i V λ_i / 2π²) m_T K₁(m_T/T),   (69)

where K₁(m_T/T) is the modified Bessel function

K₁(m_T/T) = ∫ from 0 to ∞ exp(−(m_T/T) cosh t) cosh t dt.   (70)

The above expression for the transverse mass spectra arises from a stationary thermal source alone, which is not capable of describing the experimental data successfully. So, we incorporate a flow velocity in both directions, longitudinal as well as transverse, in (69) in order to describe the experimental data satisfactorily. After defining the flow velocity field, we can calculate the invariant momentum spectrum by using the following formula [164,190]:

E dN_i/d³p = (g_i λ_i / (2π)³) ∫ p^μ dΣ_μ exp(−p^μ u_μ / T).   (71)

While deriving (71), we assume that the local fluid velocity gives a boost to an isotropic thermal distribution of hadrons. The final expression of the transverse mass spectra of hadrons after incorporation of the flow velocity in our model is [58]

dN_i/(m_T dm_T) ∝ ∫ from 0 to R_0 r dr m_T K₁(m_T cosh ρ / T) I₀(p_T sinh ρ / T).   (72)

Here I₀(p_T sinh ρ / T) is the modified Bessel function

I₀(x) = (1/π) ∫ from 0 to π exp(x cos φ) dφ,

where ρ is given by ρ = tanh⁻¹ β_r, β_s is the surface velocity, which is treated as a free parameter, and β_r = β_s (r/R_0)^n. The average transverse velocity can be evaluated as [182]

⟨β_r⟩ = ∫ β_s (r/R_0)^n r dr / ∫ r dr = (2/(2 + n)) β_s.

Figure 20: Energy dependence of the freeze-out volume for central nucleus-nucleus collisions. The symbols are the HBT data for the freeze-out volume V_HBT for π⁺ [191]. The lines depict the total freeze-out volumes as found in our model for π⁺, K⁺, and K⁻, the total freeze-out volume for π⁺ calculated in the ideal HG (IHG) model, and the total freeze-out volume for π⁺ in our model calculation using Boltzmann's statistics [58].
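A minimal numerical sketch of the flow-boosted transverse mass spectrum described above, implementing the Bessel functions via their integral representations; the parameter values (temperature, surface velocity, pion mass) are illustrative assumptions, not the fitted values of [58], and the midpoint quadratures are chosen for clarity rather than precision:

```python
import math

def bessel_i0(x):
    """I0(x) = (1/pi) * integral over [0, pi] of exp(x cos t) dt (midpoint rule)."""
    n, s = 400, 0.0
    for i in range(n):
        t = (i + 0.5) * math.pi / n
        s += math.exp(x * math.cos(t))
    return s / n

def bessel_k1(x):
    """K1(x) = integral over [0, inf) of exp(-x cosh t) cosh t dt
    (midpoint rule, truncated at t = 10)."""
    n, tmax, s = 800, 10.0, 0.0
    dt = tmax / n
    for i in range(n):
        t = (i + 0.5) * dt
        s += math.exp(-x * math.cosh(t)) * math.cosh(t) * dt
    return s

def blast_wave(mt, m=0.140, T=0.160, beta_s=0.50, n_prof=1):
    """dN/(m_T dm_T) up to normalization: boosted thermal source integrated
    over the transverse radius, with beta_r = beta_s * (r/R0)**n_prof."""
    pt = math.sqrt(max(mt * mt - m * m, 0.0))
    steps, total = 100, 0.0
    for i in range(steps):
        xi = (i + 0.5) / steps                    # xi = r / R0
        rho = math.atanh(beta_s * xi ** n_prof)   # transverse flow rapidity
        total += (xi * mt * bessel_k1(mt * math.cosh(rho) / T)
                  * bessel_i0(pt * math.sinh(rho) / T)) / steps
    return total
```

For the linear profile (n = 1) the average transverse velocity is ⟨β_r⟩ = 2β_s/3, so β_s = 0.80 corresponds to ⟨β_r⟩ ≈ 0.53, as quoted later for the LHC fit.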
In our calculation, we use a linear velocity profile (n = 1), and R_0 is the maximum radius of the expanding source at freeze-out (0 ≤ r ≤ R_0) [182].
In Figure 20, we have shown the variation of the freeze-out volume with √s_NN calculated in our excluded-volume model and compared it with the results of various thermal models. We show the total freeze-out volume for π⁺ calculated in our model using Boltzmann's statistics; we see that there is a significant difference between the results arising from quantum statistics and Boltzmann's statistics [58]. We also show the total freeze-out volume for π⁺ in the IHG model calculation by a dash-dotted line, and we clearly notice a remarkable difference between the results of our excluded-volume model and those of the IHG model. We have also compared the predictions of our model with the data obtained from pion interferometry (HBT) [191], which in fact reveal thermal (kinetic) freeze-out volumes. The results of the thermal models support the finding that the decoupling of strange mesons from the fireball takes place earlier than that of the π-mesons. Moreover, a flat minimum occurs in the curves around the center-of-mass energy ≈ 8 GeV, and this feature is well supported by the HBT data.

Figure 21: Rapidity distribution of π⁺ at √s_NN = 200 GeV. The dotted line shows the rapidity distribution calculated in our thermal model [58]. The solid line and dashed line show the results obtained after incorporating longitudinal flow in our thermal model. Symbols are the experimental data [192].

In Figure 21, the dotted line shows the distribution of π⁺ due to the purely thermal model. The solid line shows the rapidity distribution of π⁺ after the incorporation of longitudinal flow in our thermal model, and the results give a good agreement with the experimental data [192]. In fitting the experimental data, we use the value η_max = 3.2 and hence the longitudinal flow velocity ⟨β_L⟩ = 0.92 at √s_NN = 200 GeV. For comparison, and to test the sensitivity to this parameter, we also show the rapidity distribution at a different value, η_max = 2.8 (or ⟨β_L⟩ = 0.88), by a dashed line in the figure. We find that the results differ only slightly, showing a small dependence on η_max [58].
Figure 22 represents the rapidity distributions of pions at various √s_NN calculated by using (67); Figure 22 is taken from [179]. There is a good agreement between the model results and the experimental data at all √s_NN. In Figure 23, we show the transverse mass spectra for π⁺ and protons for the most central collisions of Au+Au at √s_NN = 200 GeV. We have neglected the contributions from resonance decays in our calculations, since these contributions affect the transverse mass spectra only towards the lower transverse mass side, that is, m_T < 0.3 GeV. We find a good agreement between our calculations and the experimental results for all m_T except m_T < 0.3 GeV after incorporating the flow velocity in the purely thermal model. This again shows the importance of collective flow in the description of the experimental data [193]. At this energy, the value of β_s is taken as 0.50.
We notice that the transverse flow velocity slowly increases with increasing √s_NN; if we take β_s = 0.60, we find that the results differ from the data, as shown in Figure 23. In Figure 24, we show the transverse momentum (p_T) spectra for π⁺, K⁺, and protons in the most central collisions of Au-Au at √s_NN = 200 GeV. Our model calculations reveal a close agreement with the experimental data [193]. In Figure 25, we show the spectra of π⁻, K⁻, and antiprotons for the Pb-Pb collisions at √s_NN = 2.76 TeV at the LHC. Our calculations again give a good fit to the experimental results [194]. We also compare our results for the antiproton spectrum with the hydrodynamical model of Shen et al. [195], which successfully explains the π⁻ and K⁻ spectra but strongly fails in the case of the antiproton spectrum [58]. In comparison, our model results show closer agreement with the experimental data. Shen et al. [195] have employed (2+1)-dimensional viscous hydrodynamics with a lattice-QCD-based EOS. They use the Cooper-Frye prescription to implement kinetic freeze-out in converting the hydrodynamic output into particle spectra. Due to the lack of proper theoretical and phenomenological knowledge, they use for Pb-Pb collisions at the LHC energy the same parameters which were used for Au-Au collisions at √s_NN = 200 GeV. Furthermore, they use a temperature-independent η/s ratio in their calculation. After fitting the experimental data, we get β_s = 0.80 (⟨β_r⟩ = 0.53) at this energy, which indicates that the collective flow becomes stronger at the LHC energy than that observed at RHIC energies. In this plot, we also attempt to show how the spectra change at a slightly different value of the parameter, β_s = 0.88 [58].

Figure 23: Transverse mass spectra for π⁺ and protons for the most central Au-Au collisions at √s_NN = 200 GeV. Dashed and dotted lines are the transverse mass spectra due to a purely thermal source for π⁺ and protons, respectively. Solid and dash-dotted lines are the results for π⁺ and protons, respectively, obtained after incorporation of flow in the thermal model [58]. Symbols are the experimental data [193].

Summary and Conclusions
The main aim of this paper is to emphasize the use of the thermal approach in describing the yields of different particle species that have been measured in various experiments. We have discussed various types of thermal approaches for the formulation of an EOS for the HG. We have argued that the incorporation of interactions between hadrons in a thermodynamically consistent way is important for a realistic formulation of the HG, from both a qualitative and a quantitative point of view. We have presented a systematic study of particle production in heavy-ion collisions from AGS to LHC energies. We have observed from this analysis that the production of particles seems to occur according to the principle of equilibrium: the yields of hadrons and their ratios measured in heavy-ion collisions match the predictions of thermal models, which supports the thermalization of the collision fireball formed in heavy-ion collisions. Furthermore, various experimental observables such as transverse momentum spectra and elliptic flow indicate the presence of thermodynamical pressure, developed in the early stage, and of correlations, which are expected in a thermalized medium.
We have discussed a detailed formulation of various excluded-volume models and their shortcomings. Some excluded-volume models are not thermodynamically consistent, because they do not possess a well-defined partition function from which various thermodynamical quantities such as the number density can be calculated. Others are thermodynamically consistent but suffer from unphysical situations cropping up in the calculations. We have proposed a new, approximately thermodynamically consistent excluded-volume model for a hot and dense HG. We have used quantum statistics in the grand canonical partition function of our model, so that it works even at extreme values of T and μ_B where all other models fail. Moreover, our model respects causality. We have presented calculations of various thermodynamical quantities, such as the entropy per baryon and the energy density, in various excluded-volume models and compared the results with those of a microscopic approach, URASiMA. We find that our model results are in close agreement with those of the entirely different URASiMA approach. We have calculated various particle ratios at various √s_NN, confronted the results of various thermal models with the experimental data, and found that they are indeed successful in describing the particle ratios. However, we find that our model results are closer to the experimental data in comparison to those of other excluded-volume models. We have calculated conditions such as E/N and S/N at the chemical freeze-out points and attempted to test whether these conditions are independent of the energy as well as of the structure of the nuclei involved in the collisions.

Figure 24: Transverse momentum spectra for π⁺, protons, and K⁺ for the most central Au-Au collisions at √s_NN = 200 GeV [58]. Lines are the results of our model calculation, and symbols are the experimental results [193].
We find that E/N ≈ 1.0 GeV and S/N ≈ 7.0 are two robust freeze-out criteria which are independent of the energy and of the structure of the colliding nuclei. Moreover, the calculations of transport properties in our model match well with the results obtained in other, widely different approaches. Further, we have presented an analysis of the rapidity distributions and transverse mass spectra of hadrons in central nucleus-nucleus collisions at various √s_NN using our EOS for the HG. We see that a stationary thermal source alone cannot describe the experimental data fully unless we incorporate flow velocities in the longitudinal as well as in the transverse direction; as a result, our modified model predictions show a good agreement with the experimental data. Our analysis shows that a collective flow develops at each √s_NN and increases further with increasing √s_NN. The description of the rapidity distributions and transverse mass spectra of hadrons at each √s_NN matches the experimental data very well. Thus, we emphasize that thermal models are indeed an important tool for describing the various features of the hot and dense HG. Although these models are not capable of telling whether QGP was formed before the HG phase, they can give an indirect indication of it through any anomalous feature observed in the experimental data.

Figure 25: Transverse momentum spectra for the most central Pb-Pb collisions at √s_NN = 2.76 TeV [58]. Lines are the results of model calculations, and symbols are the experimental results [194]. The thick-dashed line is the prediction of the viscous-hydrodynamical model [195] for antiprotons.
In conclusion, the net outcome of this review is indeed a surprising one. The excluded-volume HG models are really successful in describing all kinds of features of the HG formed in ultrarelativistic heavy-ion collisions. The most important property indicated by such a description is the chemical equilibrium reached in such collisions. However, the description is still a geometrical one and does not involve any microscopic picture of interactions. Moreover, its relativistic and field-theoretic generalizations are still needed in order to make the picture more realistic. But it is amazing to find that these models still work much better than expected. Notwithstanding these remarks, we should add that lattice QCD results are now available for the pressure, entropy density, energy density, and so forth, for the entire temperature range from T = 0 to higher values at μ_B = 0. Here the low-temperature phase of QCD is the HG, and recently our excluded-volume model has reproduced these properties in good agreement with the lattice results [196]. We have also used these calculations in the precise determination of the QCD critical end point [197,198]. Thus we conclude that the excluded-volume models are successful in reproducing numerical results obtained in various experiments, and, therefore, further research is required to show how these descriptions are connected with the microscopic interactions.

Introduction
High energy collisions are an important research field in particle and nuclear physics. In such collisions, many particles are produced, and their rapidity and/or pseudorapidity distributions can be obtained and studied [1][2][3]. Usually, one needs to convert between the rapidity and pseudorapidity distributions when only one of the two distributions has been obtained. There are two equivalent conversion formulas used in the current literature [4][5][6][7][8][9][10][11]. Naturally, one assumes that the two conversions are exact in investigations of the rapidity and pseudorapidity distributions.
However, our incidental finding shows that the two conversions are incomplete. In obtaining the Jacobian in the current literature [4][5][6][7][8][9][10][11], a nongiven quantity, namely, the transverse momentum, is erroneously treated as a given one, which renders the conversion incomplete. In this paper, we give a reanalysis of the Jacobian. A revised conversion between the rapidity and pseudorapidity distributions is presented.

General Definition
We consider a system of high energy projectile-target collisions. The incident projectile direction is defined as the z axis, and the reaction plane is defined as the xz plane. Let E, p, p_z, p_T, m_0, and θ denote, respectively, the energy, momentum, longitudinal momentum, transverse momentum, rest mass, and emission angle of the concerned particle. According to general textbooks on particle physics [12,13], the rapidity (which is in fact the longitudinal rapidity) is defined by

y = (1/2) ln[(E + p_z)/(E − p_z)].

In the case of p ≫ m_0, we have

y ≈ −ln tan(θ/2) ≡ η,

where η is the pseudorapidity.
Because the condition p ≫ m_0 is not always satisfied, the pseudorapidity distribution (density function) ρ(η) = (1/N)(dN/dη) and the rapidity distribution (density function) ρ(y) = (1/N)(dN/dy) are not approximately equal to each other, where dN denotes the particle number in the pseudorapidity or rapidity bin and N denotes the total number of considered particles.

Current Conversion
To convert between dN/dy and dN/dη when only one of them has been obtained, one has two equivalent methods which are currently used in the literature [4][5][6][7][8][9][10][11]. According to [4,5,11], the first conversion relation between dN/dy and dN/dη can be given by dN/dη = (p/E) dN/dy. Then, the first conversion is written as [4][5][6]

dN/dη = √(1 − m_0²/(m_T² cosh² y)) dN/dy,

where m_T ≡ √(p_T² + m_0²) is the transverse mass. We see that the first conversion is related to p_T.
The second conversion is given in [5][7][8][9][10][11]. We have

dN/dη = [cosh η / √(cosh² η + m_0² p_T^(−2))] dN/dy,   (7)

which is also related to p_T. In [11], a similar conversion which uses ⟨m⟩² ⟨p_T⟩^(−2) instead of m_0² p_T^(−2) in (7) is given, where ⟨m⟩ = 350 MeV, ⟨p_T⟩ = 0.13 GeV + 0.32 GeV (√s/1 TeV)^0.115, and √s denotes the center-of-mass energy. The conversion used in [11] is a variant of the second conversion. We now give the derivation of the current conversion. According to [12], E = m_T cosh y, p_z = m_T sinh y, p = p_T cosh η, and p_z = p_T sinh η. In the case of p_T being a given quantity, we have dy/dη = p/E = β, where β denotes the velocity of the concerned particle. Then, we obtain the current conversion. However, we would like to point out that the previous conversion is incomplete due to the fact that p_T = p/cosh η is also a function of η, which should be considered in doing the differential treatment. Instead, p and m_0 can be regarded as given quantities.
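The equivalence of the two current conversion factors can be checked numerically. The sketch below is illustrative (function names are ours, not code from [4-11]); it verifies that, for a particle of given p_T and m_0, both factors reduce to the velocity p/E:

```python
import math

def jacobian_first(y, pt, m0):
    """First conversion factor: sqrt(1 - m0**2 / (mT**2 * cosh(y)**2)),
    with mT = sqrt(pt**2 + m0**2)."""
    mt = math.sqrt(pt * pt + m0 * m0)
    return math.sqrt(1.0 - (m0 / (mt * math.cosh(y))) ** 2)

def jacobian_second(eta, pt, m0):
    """Second conversion factor: cosh(eta) / sqrt(cosh(eta)**2 + m0**2/pt**2)."""
    return math.cosh(eta) / math.sqrt(math.cosh(eta) ** 2 + (m0 / pt) ** 2)
```

Evaluating both for the same particle (same p_T, m_0, with η computed from y) gives identical values, since each is just p/E expressed in different variables.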

Revised Conversion
In the differential treatment, we consider that both p_T = p/cosh η and p_z = p tanh η are functions of η. Contrarily, p and m_0 have fixed values for a given particle. Then, from

y = tanh⁻¹(β tanh η),   (10)

we obtain

dy/dη = β(1 − tanh² η)/(1 − β² tanh² η) = β / [1 + (1 − β²) sinh² η].   (11)

It is different from the first conversion, which gives dy/dη = β. Correspondingly, from

η = tanh⁻¹(tanh y / β),   (12)

we obtain

dη/dy = (1/β)(1 − tanh² y)/(1 − tanh² y / β²) = β / [1 − (1 − β²) cosh² y].   (13)

The expressions after the last equals signs in (11) and (13) are obtained from the expressions before the last equals signs in (13) and (11), respectively. It is obvious that the derivation of the revised conversion is simpler than that of the current conversion.
To use (10)-(13), we have the relations E = √(p² + m_0²), p = p_T cosh η, and β = p/E. Then, we further have

ρ(η) = {β / [1 + (1 − β²) sinh² η]} ρ(y),   (15)
ρ(y) = {β / [1 − (1 − β²) cosh² y]} ρ(η).   (16)

Equations (15) and (16) translate the rapidity distribution into the pseudorapidity one and the pseudorapidity distribution into the rapidity one, respectively. In the previous discussions,

β = p/√(p² + m_0²) = p_T cosh η/√(p_T² cosh² η + m_0²),   (17)

which can be used in the conversion. Then, the conversion is related to p and m_0. To do a conversion, we need to know p and m_0 for each particle.
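The revised Jacobian can be verified against a direct numerical derivative of y(η) at fixed p and m_0. This is an illustrative sketch of our rederivation (function names are ours); it also shows that the revised and current factors agree at midrapidity but diverge at forward η:

```python
import math

def y_of_eta(eta, beta):
    """Rapidity at fixed particle speed beta: y = artanh(beta * tanh(eta))."""
    return math.atanh(beta * math.tanh(eta))

def jacobian_current(eta, beta):
    """Current conversion: dy/deta = beta, obtained with p_T held fixed."""
    return beta

def jacobian_revised(eta, beta):
    """Revised conversion with p and m0 held fixed:
    dy/deta = beta / (1 + (1 - beta**2) * sinh(eta)**2)."""
    return beta / (1.0 + (1.0 - beta * beta) * math.sinh(eta) ** 2)
```

At η = 0 the extra term vanishes and both factors equal β; at large |η| the revised factor is strictly smaller, which is the forward-rapidity difference noted in the conclusion.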

Conclusion and Discussion
We have given a revision of the current conversion between the particle rapidity and pseudorapidity distributions. It is shown that, compared with the current first conversion, the revised one ((15) or (16)) has an additional term (1 − β²) sinh² η or −(1 − β²) cosh² y in the denominator. In the central rapidity region, sinh η ≈ 0 and cosh y ≈ 1; then, (15) and (16) reduce to the current conversion. However, in the forward rapidity region, the difference between the revised conversion and the current one is obvious.
Our conclusion does not mean that the current conversion between the unit-density functions d²N/(dy dp_T) and d²N/(dη dp_T), that is,

d²N/(dη dp_T) = √(1 − m_0²/(m_T² cosh² y)) d²N/(dy dp_T),   (18)

is also erroneous or incomplete [12]. In fact, the conversion between the two unit-density functions is correct, because p_T takes a series of fixed values in (18). To use (18), we also need to know p_T and m_0 for each particle. Because the conversion between rapidity and pseudorapidity distributions is not simpler than a direct calculation based on the definitions of rapidity and pseudorapidity, we would rather use the direct calculation in modeling analyses. In fact, in the era of high energy colliders, the difference between rapidity and pseudorapidity distributions is small [7]. This means that we may also choose not to distinguish strictly between rapidity and pseudorapidity distributions in general modeling analyses.

Introduction
One of the main goals of studying nucleus-nucleus (AA) collisions at relativistic energies is to investigate the properties of strongly interacting matter under extreme conditions of initial energy density and temperature, where the formation of the quark-gluon plasma (QGP) is envisaged to take place [1,2]. Fluctuations in the physical observables in relativistic AA collisions are regarded as one of the important signals of QGP formation, because in many-body systems a phase transition may cause significant changes in the fluctuations of an observable about its average behaviour [2,3]. For example, when a system undergoes a phase transition, the heat capacity changes abruptly, whereas the energy density remains a smooth function of temperature [3]. Entropy is regarded as the most significant characteristic of a system with many degrees of freedom [4][5][6][7]. Processes in which particles are produced may be regarded as so-called dynamical systems [4][5][6][7][8] in which entropy is generally produced. Investigations of the local entropy produced in relativistic AA collisions are expected to provide direct information about the internal degrees of freedom of the QGP medium and its evolution [9]. It has been suggested [4][5][6][7][9][10][11][12] that the event coincidence probability method of measuring entropy proposed by Ma [13][14][15] is well suited for the analysis of local properties of multiparticle systems produced in high energy collisions. This method is applicable to both hadron-hadron (hh) and AA collisions [9,11]. In AA collisions, entropy measurement can be used not only to search for QGP formation but also as an additional tool to investigate correlations and event-by-event fluctuations [4][5][6][7][9]. Analysis of the experimental data on hh collisions over a wide range of incident energies (up to √s = 900 GeV) carried out by Simak et al.
[16] indicates that entropy increases with beam energy, while the entropy per unit rapidity seems to be an energy-independent quantity. These findings indicate the presence of an ultimate scaling over an energy range extending up to a few TeV. Such scaling has also been observed in pp collisions at LHC energies [17]. For AA collisions, however, only a few attempts have been made [18][19][20][21] to study entropy production in multiparticle systems. It was, therefore, considered worthwhile to carry out a well-focused study of entropy production and the subsequent scaling in AA collisions by analysing experimental data over a wide range of incident energies. The findings are compared with the predictions of a multiphase transport (AMPT) model and an independent emission hypothesis (IEH) model.

Method of Analysis
In high energy AA collisions, as the two colliding nuclei interpenetrate, the collisions between the participating nucleons cause the combined system to fly apart. Initially, there would be a large random component to the particles' velocities, which is described by a temperature [22]. As the particles move outwards, their velocity vectors become more and more oriented in the radial direction, with a reduced random component [22]. In thermodynamic terms, the nuclear gas cools, with thermal energy converted into collective flow energy in the radial direction. As time elapses, the volume of the system, V, increases, whereas the temperature, T, decreases. At a certain point in time, collisions between participating nucleons cease, and the measurable characteristics of the system freeze out [22]; but a thermodynamic analysis of the system in terms of V and T is very difficult, as both quantities change rapidly with time. It has, however, been suggested by Siemens and Kapusta [23] that instead of studying V and T, the entropy may easily be studied, as it grows rapidly only during the initial stage of the collision and does not change significantly in the later stage, when the system expands and cools. Since, in heavy-ion collision experiments, measurements are confined only to the final-state particles, which are mostly hadrons, the net entropy is expected to provide invaluable insight into the state of matter in the early stage of the collision, as it is nearly conserved between initial thermalization and freeze-out [24]. After freeze-out, when particles stream out freely, the entropy remains essentially constant. Entropy may increase only because of viscous effects, shock waves, and decoupling processes. The entropy of the final state thus provides an upper bound for the entropy of the initial state.
The entropy of the produced particles is calculated from their multiplicity distribution using [20]

S = −Σ_n P_n ln P_n,   (1)

where P_n is the probability of n relativistic charged particles being produced in an interaction. If there are ν independent sources which contribute to the particle production, the entropy, being an additive quantity, may be expressed as [20]

S = ν S₁,   (2)

where S₁ is the entropy of a single source. The invariance of entropy under an arbitrary change of multiplicity scale allows one to choose a subsample of particles, such as the charged particles. The entropy of the emitted relativistic charged particles is, therefore, calculated for the experimental and simulated data sets selected in the present study.
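A direct implementation of (1) can be sketched as follows; the P_n are estimated as event fractions, and the toy event samples in the usage note are fabricated purely for illustration, not experimental data:

```python
import math
from collections import Counter

def entropy_from_multiplicities(multiplicities):
    """S = -sum_n P_n ln P_n, with P_n estimated as the fraction of events
    in which exactly n charged particles were produced."""
    counts = Counter(multiplicities)
    total = sum(counts.values())
    s = 0.0
    for c in counts.values():
        p = c / total
        s -= p * math.log(p)
    return s
```

For example, a sample in which two multiplicity values occur equally often gives S = ln 2, while a sample with a single multiplicity value gives S = 0.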

Details of the Data
A stack of G5 emulsion, horizontally exposed to 14.5A GeV/c 28Si ions from the AGS at BNL, is used. The events were searched for by the along-the-track method, and those satisfying the following criteria were selected for carrying out the various measurements and analyses: (i) the tracks of the incident particles producing events should not be inclined by more than 3° with respect to the mean beam direction, and (ii) the events should lie at a depth ≥ 20 μm from the top or bottom surface of the emulsion pellicle.
Adopting the above criteria, a sample consisting of 1039 interactions characterized by N_h ≥ 0 was collected; N_h denotes the number of tracks with ionization g ≥ 1.4 g_0, where g_0 is the minimum ionization produced by a singly charged relativistic particle. From this sample, events produced on the AgBr group of targets were sorted out on the basis of their N_h values. Events with N_h ≥ 8 are envisaged to be produced exclusively in interactions with AgBr targets, whereas those having N_h ≤ 7 are produced either in interactions with H or the CNO group of nuclei or in peripheral collisions with AgBr nuclei [25][26][27][28][29][30][31]. Following these criteria, 561 events due to AgBr targets were selected for the present analysis. Furthermore, two other sets of data, on the interactions of 16O ions with AgBr targets at 60 and 200A GeV/c from the emulsion experiment performed by the EMU01 collaboration [25][26][27][28], are also analysed; the numbers of events in these data sets are 422 and 223, respectively. It should be emphasized that the conventional emulsion technique has two main advantages over other detectors: (i) its 4π solid-angle coverage and (ii) emulsion data are free from biases, owing to the full phase space coverage. In the case of other detectors, only a fraction of the charged particles is recorded due to limited acceptance cones. This not only reduces the charged particle multiplicity but also distorts some of the event characteristics, such as the particle density fluctuations [32]. The emission angle, θ, and azimuthal angle, φ, were measured for each track produced by relativistic charged particles with respect to the mean beam direction. Using the measured values of θ, the pseudorapidity, η, of each of the produced relativistic charged particles is estimated using the relation η = −ln tan(θ/2). The number of relativistic charged particles in a selected window is, thus, counted to estimate the probability, P_n(Δη), of producing n charged particles for the entire data sample.
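The angle-to-pseudorapidity mapping and the window counting described above can be sketched as follows (the function names are ours, not taken from the experiment's software, and the event lists in the usage note are illustrative):

```python
import math
from collections import Counter

def pseudorapidity(theta):
    """eta = -ln tan(theta/2); theta is the emission angle (radians)
    measured from the mean beam direction."""
    return -math.log(math.tan(theta / 2.0))

def window_probabilities(event_etas, center, width):
    """P_n(delta_eta): fraction of events having exactly n particles with
    eta inside the window center +/- width/2."""
    counts = Counter(
        sum(1 for e in etas if abs(e - center) <= width / 2.0)
        for etas in event_etas
    )
    total = len(event_etas)
    return {n: c / total for n, c in counts.items()}
```

A track emitted at 90° to the beam has η = 0, and smaller emission angles map to larger η; the returned dictionary of P_n values can be fed directly into the entropy formula (1).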
The P_n(Δη) values thus obtained were used to determine the entropy produced in a particular η-window using (1), as described in the next section. In order to compare the findings of the present study with the predictions of the Monte Carlo model AMPT [33], event samples matching the real data were simulated using the code ampt-v-1.2.21. The number of events in each simulated data set is equal to that in the real data sample. The events are simulated by taking into account the percentage of interactions which occur between the projectile and the various target nuclei in emulsion [29]. The impact parameter for each data set is set such that the mean multiplicities of relativistic charged particles become nearly equal to those obtained for the real data sets.

Results and Discussion
The probability, P_n(Δη), of producing n relativistic charged particles is calculated by first selecting a pseudorapidity window of fixed width, Δη = 0.5. This window is chosen such that its midposition coincides with the centre of symmetry of the η distribution, η_c. Thus, all the relativistic charged particles having their η values in the interval η_c − (Δη/2) ≤ η ≤ η_c + (Δη/2) are counted to evaluate P_n. The window width is then increased in steps of 0.5 until the region η_c ± 3.0 is covered. The values of the entropy for the different windows, with Δη ranging from 0.5 to 6.0, are then calculated using (1). Variations of the entropy, S, with Δη for the real and AMPT data samples are displayed in Figure 1. It may be noted from the figure that with increasing Δη, the entropy first increases rather quickly, then slows down, and thereafter saturates beyond Δη ∼ 4.0. Such a trend is expected because, with increasing Δη, the particle multiplicity increases, first rapidly and then slowly, causing the entropy to increase in a similar fashion. However, beyond Δη ∼ 4.0, the multiplicity increases only nominally, yielding essentially the same value of the entropy. It may also be noted from the figure that for a given Δη, the entropy increases with increasing beam energy. Furthermore, a comparison of the plots for the real and AMPT-simulated events, shown in Figure 1, shows that the entropy values for the experimental data are close to those predicted by the AMPT model. Studying the multiplicity dependence of the average phase space densities and entropies of thermal pions, Sinyukov and Akkelin [34] have reported the presence of deconfinement and chiral phase transitions in AA collisions at relativistic energies.
The energy dependence of the entropy per unit rapidity observed by these workers has been interpreted in terms of a chemically equilibrated pion number which is frozen at an initially very high temperature that increases with collision energy [34]. With regard to the experimental scenario, entropy evolution has been investigated by Simak et al. [16] in hh collisions in the energy range √s = 22 to 900 GeV. The entropy has been observed [16] to increase with increasing projectile energy over the entire energy range considered. A similar energy dependence of the entropy has also been observed by Mizoguchi and Biyajima [17] at LHC energies (√s = 0.2 to 7 TeV).
To check whether the observed entropy behaviour is a distinct feature of the data or arises solely from fluctuations in the event multiplicities, correlation-free Monte Carlo events are generated and analysed. These events are generated in the framework of the independent emission hypothesis (IEH) by adopting the following criteria [30,31,35,36].
(1) The multiplicity distribution of the produced particles should be similar to that obtained for the experimental data.
(2) There should be no correlation between the produced particles.
(3) For each event, the single-particle inclusive distribution in η space is taken to be Gaussian, with its mean value, ⟨η⟩, and dispersion, D, equal to the corresponding values for the real event.
By applying the above criteria, three sets of correlation-free Monte Carlo (IEH) events corresponding to the three real data samples are simulated and analysed. The number of events in each sample is equal to that in the real data.
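A minimal sketch of the IEH generation under the criteria above, with hypothetical toy events standing in for the real samples:

```python
import random

def generate_ieh_events(real_events):
    """Correlation-free (IEH) events: each simulated event keeps the
    multiplicity of its real counterpart, but particles are drawn
    independently from a Gaussian in eta with the real event's mean
    and dispersion (no inter-particle correlations by construction)."""
    ieh = []
    for ev in real_events:
        n = len(ev)
        mean = sum(ev) / n
        disp = (sum((x - mean) ** 2 for x in ev) / n) ** 0.5
        ieh.append([random.gauss(mean, disp) for _ in range(n)])
    return ieh

# toy stand-ins for real events (lists of pseudorapidities)
random.seed(7)
real_events = [[0.1, -0.2, 0.5], [1.0, 1.2, 0.8, 0.9]]
ieh_events = generate_ieh_events(real_events)
```

Because the multiplicity of each IEH event is copied from the real event, criterion (1) is satisfied trivially in this sketch; a production analysis would instead sample from the measured multiplicity distribution.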
Variations of S with Δη for these events at the three incident energies considered are shown in Figure 1 (bottom panel). It is evident from the figure that the trend of variation of S with Δη for the IEH events is markedly different from those observed for the real and the AMPT data. This indicates that the observed dependence of the entropy on the width of the η window is a distinct feature of the data and not a manifestation of statistics. These findings are thus in fine agreement with those reported earlier [16] for hh collisions in the energy range from ∼2 GeV to a few TeV. In AA collisions, too, the entropy has been observed by Khan et al. [21] to increase with increasing beam energy. It has been reported [16] that for hh collisions in the energy range ∼22 to 900 GeV, the total entropy produced in a limited pseudorapidity bin, when normalized to the maximum rapidity in the centre-of-mass frame, is essentially independent of the energy and identity of the colliding hadrons, indicating the presence of a kind of entropy scaling. Such a scaling of entropy has also been observed by Mizoguchi and Biyajima [17] in pp collisions in the centre-of-mass energy range 0.2-7 TeV. In the case of AA collisions, too, a similar scaling behaviour has been observed by Khan et al. [21] in 4.5 and 14.5A GeV/c 28Si-nucleus collisions and by Ghosh et al. [20] in 60A GeV/c 16O-AgBr and 200A GeV/c 32S-AgBr collisions. An attempt is, therefore, made to examine the occurrence of entropy scaling with the present data, which cover a significant energy range (AGS and SPS). For this purpose, the values of the maximum rapidity in the centre-of-mass frame, y_max, are calculated using the relation [20] y_max = ln(√s/m_π), where √s denotes the centre-of-mass energy of the participating system and m_π represents the rest mass of a pion.
The value of √s is calculated from [37] as √s = [(N_p E_beam + N_t m_N)² − N_p²(E_beam² − m_N²)]^{1/2}, where E_beam is the energy of the projectile nucleus in the laboratory frame, N_p and N_t, respectively, denote the numbers of participating projectile and target nucleons, and m_N is the nucleon mass. The values of N_p and N_t used for calculating √s are the event-averaged values estimated from the AMPT event samples.
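As a worked example, y_max can be computed once √s is known; the invariant-mass formula used below (N_p beam nucleons of lab energy E_beam each, striking N_t nucleons at rest) is an assumed form of [37], not a quotation of it:

```python
import math

M_N = 0.938   # nucleon mass, GeV
M_PI = 0.140  # pion mass, GeV

def sqrt_s(e_beam, n_p, n_t):
    """Invariant mass of n_p beam nucleons (lab energy e_beam each)
    plus n_t target nucleons at rest -- an assumed form of [37]."""
    p_beam = math.sqrt(e_beam ** 2 - M_N ** 2)
    e_tot = n_p * e_beam + n_t * M_N
    p_tot = n_p * p_beam
    return math.sqrt(e_tot ** 2 - p_tot ** 2)

def y_max(s_sqrt):
    """Maximum rapidity in the cm frame, y_max = ln(sqrt(s)/m_pi) [20]."""
    return math.log(s_sqrt / M_PI)
```

For a single nucleon-nucleon collision at a 14.5 GeV beam energy (an AGS-like value), `sqrt_s(14.5, 1, 1)` gives √s ≈ 5.4 GeV and y_max ≈ 3.6.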
Variations of the entropy normalized to the maximum rapidity, S/y_max, with Δη/y_max for the experimental, AMPT, and IEH data are exhibited in Figure 2. It may be seen in the figure that, for the experimental and AMPT data, S/y_max first increases up to Δη/y_max ∼ 0.5 and thereafter tends to acquire an almost constant value. It is interesting to point out that the data points corresponding to the various energies overlap and fall on a single curve, indicating the presence of entropy scaling, which is well supported by AMPT. No such scaling is, however, observed in the case of the IEH data. This suggests that the observed scaling in the real data is truly dynamical in nature, and the findings are in fine agreement with the AMPT predictions.
Investigations involving forward-backward (F-B) multiplicity fluctuations and correlations [30, 38-44] suggest that the event-by-event (ebe) multiplicities of relativistic charged particles in the forward (F) and backward (B) hemispheres are not equal. This prompts one to study multiplicity fluctuations by comparing the ebe multiplicity, n_F, in a pseudorapidity window of width Δη placed in the F region with the multiplicity n_B observed in an identical window in the B region; the two windows are chosen so as to be symmetric about η₀. Using these definitions, a number of attempts have been made to investigate F-B multiplicity correlations and fluctuations, either by examining the dependence of ⟨n_B⟩ on n_F [42-46] or by considering [30, 38-41] a multiplicity asymmetry variable C = (n_F − n_B)/√(n_F + n_B). These investigations observe the presence of strong F-B correlations. Such correlations are believed to arise from the isotropic decays of cluster-like objects in the F or B region. The presence of an ebe multiplicity asymmetry thus indicates that the values of the entropy in the two regions are different. In order to test this, we have studied entropy production and its scaling separately in the F and B regions. Variations of S with Δη for the real and AMPT data are displayed in Figure 3. Similar plots for the IEH events are shown in Figure 4. The following observations may be made from these figures.
(i) In the case of the experimental and AMPT data, the entropy increases with increasing Δη up to Δη ∼ 1.5 and thereafter acquires a nearly constant value in both the F and B hemispheres.
(ii) For a given Δη, the entropy increases with increasing beam energy.
(iii) For the IEH events, on the other hand, with increasing Δη, the value of S first increases quickly and then rather slowly, but does not saturate. Thus, the trends of variation of S with Δη for the real and AMPT data are quite different from those observed for the IEH data.
(iv) It is interesting to note that, for the IEH event samples, the values of S against Δη in the F and B regions are nearly the same. For the real and AMPT data, on the other hand, S acquires a rather larger value in the F hemisphere than in the B hemisphere. Such a difference in the S values is noticed over the entire Δη region and for all three energies considered.
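The F-B asymmetry variable C = (n_F − n_B)/√(n_F + n_B) introduced above can be sketched as follows; the window placement and the toy event are illustrative assumptions:

```python
import math

def fb_asymmetry(n_f, n_b):
    """Event-by-event F-B multiplicity asymmetry
    C = (n_F - n_B) / sqrt(n_F + n_B)."""
    total = n_f + n_b
    return 0.0 if total == 0 else (n_f - n_b) / math.sqrt(total)

def count_in_window(etas, lo, hi):
    """Multiplicity of one event inside the window [lo, hi]."""
    return sum(1 for eta in etas if lo <= eta <= hi)

# symmetric windows of width 0.5 on either side of eta0 = 0 (toy event)
event = [0.2, 0.4, 0.1, -0.3, 1.1]
n_f = count_in_window(event, 0.0, 0.5)    # forward window
n_b = count_in_window(event, -0.5, 0.0)   # backward window
c = fb_asymmetry(n_f, n_b)
```

Accumulating `c` over many events gives the event-by-event asymmetry distribution studied in [30, 38-41].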
The observed difference in the entropy values in the two regions for the experimental and AMPT events (and not for the IEH events) may arise from the strong correlations existing between the particles belonging to the adjacent F and B regions around midrapidity [9, 39-41]. It is commonly believed [38-46] that the observed forward-backward correlations are of short-range type, with almost no or only a very small contribution from long-range correlations, particularly at lower energies. In order to check whether the observed entropy difference arises from particle correlations of short-range type, the entropy values are calculated in the F and B regions after leaving a gap of 2 units between them. It is interesting to note in Figure 5 that the entropy difference between the two regions is much smaller than that reflected in Figure 3. Moreover, for F and B regions separated by 3 units, this difference almost vanishes, and the entropy values against Δη in the two regions are nearly the same (Figure 5). Results from the analysis of the AMPT events also support the experimental observations. These findings thus suggest that strong correlations exist amongst the particles belonging to adjacent F and B regions and that they are mostly of short-range type. On introducing a separation between the two regions, the F-B correlations become weaker and finally vanish for larger separations, giving nearly the same entropy values for the two hemispheres.
In order to test whether the relativistic charged particles emitted in the F and B hemispheres exhibit the same kind of entropy scaling as observed when the particles of the two regions are considered collectively, the variations of S/y_max with Δη/y_max for the charged particles emitted in the F and B regions are examined for the real, AMPT, and IEH data sets. These variations are displayed in Figure 6. It is clear from the figure that the data points corresponding to the three energies overlap, indicating the presence of entropy scaling in both the F and B regions. The observed scaling behaviour is nicely reproduced by the AMPT data sets. Moreover, the results for the IEH events confirm that the observed entropy scaling is a distinct feature of the data rather than a manifestation of statistics.

Conclusions
Based on the findings of the present work, the following significant conclusions may be reached.
(1) With widening of the η window, the entropy first increases and thereafter acquires a nearly constant value.
(2) For a given Δη, the entropy increases with increasing beam energy.
(3) The dependence of S/y_max on Δη/y_max shows the presence of entropy scaling in AA collisions at AGS and SPS energies. Such a scaling is observed to hold even when the particles emitted in the forward and backward regions are considered separately.
(4) The dependence of the entropy on the pseudorapidity bin width, examined separately in the forward and backward hemispheres, indicates the presence of strong correlations amongst the particles emitted in the two hemispheres around mid-pseudorapidity. On introducing a separation between the two hemispheres, these correlations become weaker and finally vanish when the separation becomes relatively large. This suggests that the observed correlations are of short-range nature, arising from "clusters" and high-mass states produced during the initial stage of the collisions, which finally decay isotropically in their centre-of-mass frames into real physical hadrons, while contributions from long-range correlations appear to be absent.
The findings of the present study reveal that the entropy per unit rapidity increases with collision energy, whereas, when normalized to the maximum rapidity, it becomes energy independent. These results are in fair agreement with those obtained from theoretical calculations of the average phase space density and entropy in a thermal hadronic system at the final (freeze-out) stage of AA collisions, carried out by Sinyukov and Akkelin [34]. A theoretical investigation [24] of the entropy per unit rapidity at freeze-out, carried out with minimal model dependence from the available measurements of particle yields, spectra, and source sizes, indicates that, at the same energy density, QGP would be a higher-entropy state than a pion gas. It may be stressed that an increase in the entropy density, if observed at RHIC or LHC energies, might be taken as a signal of QGP formation.

Introduction
The Large Hadron Collider (LHC) at CERN has been built to study the properties of matter produced in high-energy collisions [1,2]. It will study proton-proton collisions at a center-of-mass energy of 14 TeV and heavy-ion collisions at a center-of-mass energy of 5.5 TeV, which are much higher than the maximum collision energy at RHIC. An environment of high temperature and density is formed in such high-energy collisions, and the higher energies lead to a significant extension of the kinematic range in longitudinal rapidity and transverse momentum [3-5]. The investigation of particle production in proton-proton and nucleus-nucleus collisions at LHC energies is helpful for understanding the statistical behavior of the particles and the production mechanism. The multiplicity and pseudorapidity distributions of final-state particles can be used to test different theoretical models and ideas. In the central rapidity region, the multiplicity of charged particles produced in high-energy collisions is an important observable and reflects basic attributes of the quark-gluon plasma (QGP) produced in the collisions. Proton-proton collisions at 7 TeV produce about 70 charged hadrons integrated over the full rapidity space, including the unmeasured region. The study of the pseudorapidity distribution of charged hadrons, dN_ch/dη, helps us to understand the production mechanism of the dominant charged hadrons. The relativistic diffusion model (RDM) [6] has made valuable attempts at describing and predicting the pseudorapidity distributions of charged hadrons produced in heavy-ion collisions at SPS, RHIC, and LHC energies. Different phenomenological models of initial coherent multiple interactions and particle transport have been introduced to describe the production of final-state particles in Au-Au collisions [7,8].
In the analysis of the experimental data, one statistical distribution gained prominence with very good fits to the data measured by the STAR [9] and PHENIX [10] Collaborations at RHIC and by the CMS [11] and ALICE [12] Collaborations at LHC. With the development and success of Tsallis statistics in dealing with nonequilibrated complex systems in condensed matter research, it has been utilized to understand particle production in high-energy collisions. Recently, in order to describe transverse momentum spectra, an improved Tsallis distribution which better satisfies thermodynamic consistency was proposed [13]. As the collision energy increases to LHC energies, which are much higher than the maximum collision energy at RHIC, the kinematic range in the longitudinal direction increases. In this work, we consider the rapidity shift of the interacting system and use the improved Tsallis distribution to analyze the pseudorapidity distributions in p-p and Pb-Pb collisions at LHC energies as measured by the CMS and ALICE Collaborations.

The Improved Tsallis Distribution and the Rapidity Distribution
In the framework of Tsallis statistics, more than one version of the Tsallis distribution is used to discuss the transverse momentum distribution of final-state particles produced in high-energy collisions. The improved form of the Tsallis distribution naturally meets the requirement of thermodynamic consistency, and its quantum form succeeded in describing the transverse momentum distributions measured by the ALICE and CMS Collaborations. In this framework, the momentum distribution is given by

E d³N/dp³ = gVE/(2π)³ [1 + (q − 1)(E − μ)/T]^(−q/(q−1)),

where p, E, T, μ, V, and g are the momentum, the energy, the temperature, the chemical potential, the volume, and the degeneracy factor, respectively, and q is a parameter characterizing the degree of nonequilibrium. For zero chemical potential, the rapidity distribution is

dN/dy = gV/(2π)² ∫ p_T m_T cosh y [1 + (q − 1) m_T cosh y / T]^(−q/(q−1)) dp_T,

where p_T is the transverse momentum and m_T = √(p_T² + m₀²) is the transverse mass. This distribution function is only the rapidity distribution of particles emitted from a single emission source at fixed rapidity. In rapidity space, the longitudinal location of the source needs to be taken into account. Therefore, for a fixed emission source with rapidity y_x, the rapidity distribution of the produced particles is given by the above expression evaluated at y − y_x. Generally speaking, the parameters T and q are obtained by fitting the transverse momentum spectra measured in the collisions. In [13], the values of T and q taken for the calculations are about 0.07 GeV and 1.1, respectively. In order to describe the rapidity shift, we introduce the geometrical picture of the two-cylinder model [16]. In the laboratory reference system, the projectile cylinder is in the positive rapidity direction and the target cylinder is in the negative one, with rapidity ranges [y_min, y_max] and [−y_max, −y_min], respectively. On both sides of the two cylinders there are leading particles, appearing as two isotropic emission sources with their own rapidity shifts.
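The single-source rapidity distribution can be evaluated by numerical integration over p_T. The sketch below assumes the thermodynamically consistent Tsallis form with T ≈ 0.07 GeV and q ≈ 1.1 as quoted above; the overall factor gV/(2π)² is lumped into one assumed constant:

```python
import math

def tsallis_dndy(y, T=0.07, q=1.1, m0=0.140, gV=1.0, pt_max=5.0, n=2000):
    """dN/dy for one source at rest: midpoint-rule p_T integral of
    p_T * m_T cosh(y) * [1 + (q-1) m_T cosh(y) / T]^(-q/(q-1)).
    gV lumps the degeneracy and volume factors (assumed here)."""
    dpt = pt_max / n
    total = 0.0
    for i in range(n):
        pt = (i + 0.5) * dpt
        mt = math.sqrt(pt ** 2 + m0 ** 2)
        e = mt * math.cosh(y)
        total += pt * e * (1.0 + (q - 1.0) * e / T) ** (-q / (q - 1.0)) * dpt
    return gV / (2.0 * math.pi) ** 2 * total

def shifted_dndy(y, y_x, **kw):
    """A source at rapidity y_x simply shifts the distribution."""
    return tsallis_dndy(y - y_x, **kw)
```

Summing `shifted_dndy` over sources spread along the two cylinders (plus the leading-particle sources) reproduces the construction described in the text.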
So, in the final state, the normalized rapidity distribution is the weighted sum of the contributions of the target leading particles, the target cylinder, the projectile cylinder, and the projectile leading particles. For the symmetric collisions p-p and Pb-Pb, the two leading-particle contributions are equal, as are the two cylinder contributions, so a single parameter k fixes their relative weights. The pseudorapidity distribution can then be calculated by a conversion between the pseudorapidity and rapidity distributions. In the case of p ≫ m₀, the rapidity and pseudorapidity are approximately equal to each other; however, this condition is not always satisfied. The conversion between the pseudorapidity distribution dN/dη and the rapidity distribution dN/dy is

dN/dη = J dN/dy,

where the Jacobian of the transformation is

J = dy/dη = √(1 − m₀²/(m_T² cosh² y)).

Figure 1: (Color online) The charged-particle multiplicity dN_ch/dη in p-p inelastic collisions at √s = 2.36 TeV and 7 TeV. The symbols represent the experimental data measured by the CMS Collaboration [5]; the curves are our calculated results.

The calculated results are in agreement with the experimental data. The rapidity shifts y_max − y_min in the cylinders for 7 TeV are larger than those for 2.36 TeV, and so is the gap between the projectile and target cylinders, 2y_min. Figure 2 shows the pseudorapidity distributions of charged particles produced in Pb-Pb collisions at different centralities at √s_NN = 2.76 TeV. The values of the centralities are shown in the figure. The symbols represent the experimental data of the ALICE Collaboration [14,15], and the curves are our calculated results. The value of k is 0.421 ± 0.010.
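The y-to-η conversion uses the Jacobian dy/dη = √(1 − m₀²/(m_T² cosh² y)); a minimal sketch:

```python
import math

def jacobian(y, pt, m0):
    """dy/deta = sqrt(1 - m0^2 / (m_T^2 cosh^2 y))."""
    mt2 = pt ** 2 + m0 ** 2
    return math.sqrt(1.0 - m0 ** 2 / (mt2 * math.cosh(y) ** 2))

def dnde_from_dndy(dndy_value, y, pt, m0):
    """Convert a rapidity density value to a pseudorapidity density."""
    return jacobian(y, pt, m0) * dndy_value
```

For p_T ≫ m₀ the Jacobian tends to 1, recovering y ≈ η; at midrapidity and low p_T it dips well below 1, which is why the conversion matters for soft particles.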

Comparison with Experimental Results
The other parameters, y_max, y_min, T, and q, obtained by fitting the experimental data are given in Table 1 together with χ²/dof. From these values, we find that y_max and y_min increase with increasing centrality percentage. In other words, the length of the double cylinder and the distance between the two cylinders increase with the centrality percentage. The maximum value of χ²/dof is 1.156. One sees that the calculated results approximately agree with the experimental data for all the centralities considered. The dependences of the different parameters on the centrality class are given in Figure 3.
where c and N denote the centrality and a normalization constant, respectively. The values of χ²/dof of the fits are 0.819, 0.652, 0.736, and 0.441, respectively.

Introduction
High-energy collisions are an important experimental phenomenon in modern physics. From fixed-target experiments at accelerators to collider experiments, a lot of experimental results have been reported. In these collisions, the incident projectile and the fixed target (or the second incident beam at colliders) can be particles, ions, or nuclei. Generally, the products in nucleus-nucleus collisions are more abundant than those in particle-particle collisions, and the analysis of the former is also more complex. As a transition stage from particle-particle to nucleus-nucleus collisions, particle-nucleus collisions offer not only abundant experimental results but also a simpler physical process. In fact, in proton-induced nuclear collisions, the projectile is simple and makes no spectator contribution, while the target is complex and makes a spectator contribution to the final state.
Many models have been proposed in the field of high-energy collisions, for example, the equivalent quark-gluon string model [1], the hadron resonance gas model [2,3], the statistical multifragmentation model [4], the expanding and emitting source model [5] or the expanding-evaporating source model [6], the nonequilibrium-statistical relativistic diffusion model [7], the dual parton model [8,9], the relativistic or ultrarelativistic quantum molecular dynamics model [10-13], and so forth. In a workshop [14] held a few years ago at the CERN Theory Institute, more models reported their predictions for the collision program at LHC energies. Most of the mentioned models are microscopic models based on quantum chromodynamics (QCD) and concern the system evolution and dynamical process. Others are thermal and statistical models and focus on the global properties of the interacting system and the final-state products.
In past years, we proposed a multisource thermal model [15,16] and extended it to the relativistic situation [17] for the description of particle production in high-energy collisions. Some experimental results are described by the model. Recently, the NA49 Collaboration reported inclusive particle production in proton-carbon (p-C) collisions at 158 GeV/c beam momentum [18]. We are interested in the description of antiproton production and give such a description in this paper. Because there is no leading-particle effect, the distribution law of antiprotons is simpler to analyse than that of protons.

The Model
According to the multisource thermal model [15-17], many emission sources are formed in high-energy collisions. In rapidity space, most of these sources are distributed homogeneously in a projectile cylinder and a target cylinder owing to the penetration of the projectile and the target. The projectile and target cylinders are located in the rapidity intervals [y_min^P, y_max^P] and [y_min^T, y_max^T], respectively. For antiproton production in proton-carbon collisions, the leading nucleons make no contribution, but the target spectator contributes a cylinder in the rapidity interval [y_min^TS, y_max^TS], because the produced particles cause cascade collisions in the spectator. Let k_TS denote the weight of the target spectator cylinder; the weights of the projectile and target cylinders are then the same, (1 − k_TS)/2 each. We define the beam direction to be the z axis and the reaction plane to be the xOz plane. In the source rest frame, let T and p denote the source temperature and the particle momentum, respectively. Considering the relativistic effect [19,20], the momentum distribution in the relativistic ideal gas model is

f(p) = C p² exp(−√(p² + m₀²)/(kT)),

where k denotes the Boltzmann constant, m₀ denotes the rest mass of the considered particle, C = 1/(m₀² kT) · 1/K₂(m₀/(kT)) is the normalization constant, and K₂(m₀/(kT)) is the modified Bessel function of order 2. In the above equation, we have taken the natural system of units in which the speed of light in vacuum is 1.
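The ideal-gas distribution and its K₂ normalization can be checked numerically; the sketch below sets k = 1 (natural units) and computes K₂ from its integral representation so that only the standard library is needed:

```python
import math

def bessel_k2(z, n=4000, t_max=20.0):
    """K_2(z) via the integral representation
    K_2(z) = int_0^inf exp(-z cosh t) cosh(2t) dt (trapezoid rule)."""
    dt = t_max / n
    total = 0.5 * (math.exp(-z)
                   + math.exp(-z * math.cosh(t_max)) * math.cosh(2 * t_max))
    for i in range(1, n):
        t = i * dt
        total += math.exp(-z * math.cosh(t)) * math.cosh(2 * t)
    return total * dt

def make_f_momentum(T, m0):
    """Relativistic ideal-gas momentum distribution
    f(p) = C p^2 exp(-sqrt(p^2 + m0^2)/T),
    with C = 1 / (m0^2 T K_2(m0/T)) computed once."""
    c = 1.0 / (m0 ** 2 * T * bessel_k2(m0 / T))
    def f(p):
        return c * p ** 2 * math.exp(-math.sqrt(p ** 2 + m0 ** 2) / T)
    return f

# antiproton-like parameters (illustrative): T = 150 MeV, m0 = 0.938 GeV
f = make_f_momentum(0.15, 0.938)
```

Integrating `f` over p should give 1, which verifies the closed-form normalization ∫ p² e^(−E/T) dp = m₀² T K₂(m₀/T).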
In the Monte Carlo method, p can be obtained by solving the inequality |∫₀^p f(p′) dp′ − r₁| ≤ ε, where r₁ is a random number distributed evenly in the range from 0 to 1, with both endpoints included (i.e., in [0, 1]), and ε is a small quantity. Let θ denote the emission angle of the considered particle. An isotropic emission gives θ = arccos(1 − 2r₂), where r₂ is another random number in [0, 1]. Then, the particle transverse momentum p_T ≡ p sin θ, longitudinal momentum p_z ≡ p cos θ, transverse mass m_T ≡ √(p_T² + m₀²), energy E ≡ √(p² + m₀²), kinetic energy E_k ≡ E − m₀, and rapidity y ≡ 0.5 ln[(E + p_z)/(E − p_z)] can be obtained. In particular [17], the distributions of p_T, p_z, and y follow from these definitions, with C₁ and C₂ denoting the normalization constants. We would like to point out that (2) is in fact the Boltzmann distribution used in the literature (e.g., [21]). Both (2) and (3) are valid because they agree with the Monte Carlo calculations based on the definitions. The distributions of E and E_k, and the distribution of the velocity v, follow similarly. In the nucleon-nucleon center-of-mass system, let y_x denote the source rapidity. According to the different weights, we have y_x = (y_max^P − y_min^P) r₃ + y_min^P for the projectile cylinder, y_x = (y_max^T − y_min^T) r₄ + y_min^T for the target cylinder, or y_x = (y_max^TS − y_min^TS) r₅ + y_min^TS for the target spectator cylinder, where r₃, r₄, and r₅ are random numbers in [0, 1]. Then, the particle rapidity y = y_x + y′, transverse momentum p_T = p_T′, longitudinal momentum p_z = p_z′ cosh y_x + E′ sinh y_x, momentum p = √(p_T² + p_z²), energy E = √(p² + m₀²), transverse mass m_T = m_T′, and Feynman variable x_F = 2p_z/√s can be obtained, where the primed quantities refer to the source rest frame and √s is the energy in the center-of-mass system. In this way, all of the concerned kinematic quantities are transformed from the rest frame of a single source to the nucleon-nucleon center-of-mass system; the effects of the multiple sources are obtained through the different y_x. On the other hand, from the definition of the Feynman variable, we have p_z = 0.5 x_F √s, so that the other longitudinal quantities are functions of x_F.
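The sampling chain described above (inverse-CDF momentum, isotropic angle, uniform source rapidity, longitudinal boost) can be sketched as follows; the tabulated CDF and the parameter values are illustrative assumptions:

```python
import math
import random

def sample_p(cdf_grid, r):
    """Invert a tabulated CDF: cdf_grid is a list of (p, F(p)) pairs
    with F nondecreasing; return the first p with F(p) >= r."""
    for p, F in cdf_grid:
        if F >= r:
            return p
    return cdf_grid[-1][0]

def sample_particle(cdf_grid, y_lo, y_hi, m0, sqrt_s):
    """One particle: thermal |p| from the CDF, isotropic angle,
    uniform source rapidity in [y_lo, y_hi], then a longitudinal
    boost of the source-frame four-momentum to the cm frame."""
    p = sample_p(cdf_grid, random.random())
    theta = math.acos(1.0 - 2.0 * random.random())
    pt, pz = p * math.sin(theta), p * math.cos(theta)
    e = math.sqrt(p ** 2 + m0 ** 2)
    y_src = y_lo + (y_hi - y_lo) * random.random()
    pz_cm = pz * math.cosh(y_src) + e * math.sinh(y_src)
    e_cm = e * math.cosh(y_src) + pz * math.sinh(y_src)
    y = 0.5 * math.log((e_cm + pz_cm) / (e_cm - pz_cm))
    xf = 2.0 * pz_cm / sqrt_s
    return pt, pz_cm, y, xf

# illustrative exponential CDF for |p| and made-up cylinder limits
random.seed(3)
cdf_grid = [(0.01 * i, 1.0 - math.exp(-0.01 * i / 0.3)) for i in range(1, 2001)]
pt, pz, y, xf = sample_particle(cdf_grid, 0.5, 1.5, 0.938, 17.3)
```

A useful internal check is that p_z = m_T sinh y holds after the boost, since m_T is invariant under longitudinal boosts.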
In particular, the relationship between y and x_F at a given p_T is thus built. The distributions of x_F and y at different p_T for antiprotons produced in p-C collisions at 158 GeV/c are presented in Figures 2 and 3, respectively. The symbols represent the experimental data of the NA49 Collaboration [18], and the curves are our calculated results. The value of T is the same as that for Figure 1, and k_TS = 0.15 ± 0.01. The values of the other parameters (rapidity shifts), which have a relative error of 6%, are given in Table 1 with the values of χ²/dof. Once again the model describes the experimental data well.

Comparison with Experimental Data
To see the dependences of the different rapidity shifts on p_T, the relations of y_max^P, y_min^P, y_max^T, y_min^T, y_max^TS, and y_min^TS with p_T are given in Figure 4. The symbols represent the parameter values used in Figures 2 and 3. The lines are our fitted results described by the equations y = a p_T + b, where y denotes the different rapidity shifts and the values of a and b are given in Table 2 with the values of χ²/dof. One can see that y_max^P decreases and y_min^T and y_min^TS increase with increasing p_T. A very slight decrease in y_max^T and very slight increases in y_min^P and y_max^TS are also observed with increasing p_T. If we define y_max^P − y_min^T as the total length of the projectile and target cylinders, this length decreases with increasing p_T. The condition y_max^P − y_min^T = 0 results in a zero-length cylinder, which gives approximately the maximum p_T to be 3.82 GeV/c. From p_T = 2 GeV/c to 3.82 GeV/c, the relations of y_min^P, y_max^T, and y_max^TS with p_T will not show the same linear behaviour as in the region p_T < 2 GeV/c. The projectile and target cylinders will go from partly overlapping to totally overlapping; the length will in fact become shorter and shorter, and the system will finally reduce to a single source.
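Given linear fits y = a p_T + b for two rapidity shifts, the zero-length condition (the upper edge of the projectile cylinder meeting the lower edge of the target cylinder) can be solved for p_T; the coefficients below are hypothetical, chosen only to illustrate the arithmetic, not the fitted values of Table 2:

```python
def pt_at_zero_length(a_max_p, b_max_p, a_min_t, b_min_t):
    """p_T at which y_max^P(p_T) = y_min^T(p_T), given linear fits
    y = a * p_T + b for each shift; requires distinct slopes."""
    return (b_min_t - b_max_p) / (a_max_p - a_min_t)

# hypothetical coefficients: y_max^P = -1.0 * p_T + 3.82, y_min^T = 0
pt_zero = pt_at_zero_length(-1.0, 3.82, 0.0, 0.0)
```

With these illustrative numbers the cylinders meet at p_T = 3.82 GeV/c, mirroring the kind of extrapolation described in the text.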

Discussions
In the previous discussion, we have in fact assumed that a thermal equilibrium (or a local thermal equilibrium) is achieved in the collisions when the thermal source freezes out the final-state particles; thus, the concept of temperature can be used in our calculation. We have not given special attention to the possible presence of early emitting sources before equilibrium is achieved. In fact, when studying antiprotons instead of protons with their leading-particle contributions, the pre-equilibrium emissions are naturally excluded. This is the reason that we have studied antiprotons rather than protons.
In a previous work [21], the transverse momentum and rapidity distributions of mesons produced in Pb-Pb collisions at 20A, 30A, 40A, 80A, and 158A GeV were studied by using the multisource thermal model (or multisource ideal gas model). In another previous work [22], the transverse mass distributions of protons produced in Au-Au collisions at 8A GeV and Pb-Pb collisions at 158A GeV were studied by using the same or a similar model. Besides, the transverse mass spectra of protons, pions, kaons, Lambdas, and anti-Lambdas produced in Au-Au collisions at 2A, 4A, 6A, and 8A GeV and Pb-Pb collisions at 20A, 30A, 40A, 80A, and 158A GeV were studied [23]. The pseudorapidity distributions of charged particles produced in p-p̄ collisions at 0.053, 0.2, 0.54, 0.546, 0.63, 0.9, and 1.8 TeV; the pseudorapidity and multiplicity distributions of charged particles produced in p-p collisions at 0.9, 2.36, and 7 TeV; and the pseudorapidity distributions of charged particles produced in Pb-Pb collisions at 2.76A TeV were studied, too [24-27]. The multisource thermal model used in the present work thus describes different particles produced in different collisions at different energies; however, the present work is the first to describe antiproton production with the model.
We would like to point out that the temperature extracted in the present work reflects the excitation degree of the thermal source when it freezes out antiprotons. According to the production process, the source temperature extracted from the proton spectrum is on average less than 150 MeV, because some protons are leading particles, while that from the meson spectrum is higher than 150 MeV owing to violent collisions. If we exclude the contributions of leading particles, the temperature extracted from the proton spectrum is approximately equal to 150 MeV. These results are consistent with other measurements in the field [18] and with other model expectations [14].
The target spectator weighting factor is found to be about 15%. This reflects the contribution of the cascade collisions in the target spectator caused by the produced particles. In high-energy collisions, the spectator effect cannot be neglected in most cases. In particular, in asymmetric collisions such as proton-carbon collisions, one obtains asymmetric rapidity spectra, and the contribution of the target spectator has to be considered. In nucleus-nucleus collisions, the contributions of both the projectile and target spectators have to be considered.

Conclusions
We have analyzed the transverse momentum, Feynman variable, and rapidity distributions of antiprotons produced in proton-carbon collisions at high energy by using the multisource thermal model. This model assumes that many sources are formed in high-energy collisions. Each source is treated as a relativistic ideal gas. The distributions of momenta, transverse momenta, longitudinal momenta, energies, kinetic energies, transverse masses, velocities, and other related quantities for a given kind of particle at a given temperature in the source rest frame can be obtained.
In the comparisons with the experimental data, the transverse momentum distribution is not related to the source positions and arrangements but only to the temperature of the source at rest. A single-temperature distribution describes the transverse momentum distributions of antiprotons produced in proton-carbon collisions at 158 GeV/c measured by the NA49 Collaboration [18]. From the transverse momentum distributions of antiprotons, we determine the source temperature to be approximately 150 MeV. For the longitudinal observables, the distributions of the Feynman variable and rapidity are mainly related to the source positions and arrangements. In rapidity space, these sources form a projectile cylinder and a target cylinder. In particular, in proton-carbon collisions, a target spectator cylinder consisting of a series of sources carries a weight in the production of antiprotons. Using the three cylinders, the model describes the distributions of the Feynman variable and rapidity for antiprotons in proton-carbon collisions at high energy. Our analyses show that the weight of the target spectator cylinder is 15%; the weights of the projectile cylinder and the target cylinder are the same.
With increasing transverse momentum, the maximum rapidity shift of the projectile cylinder decreases, the minimum rapidity shifts of the target and target spectator cylinders increase, the maximum rapidity shift of the target cylinder decreases very slightly, and the minimum rapidity shift of the projectile cylinder and the maximum rapidity shift of the target spectator cylinder increase very slightly. The total length of the projectile and target cylinders decreases with increasing transverse momentum.
Combined with the previous works [15,16,21-27] analyzed with the multisource thermal model, we conjecture the energy behaviour of some parameters of the model. For a given projectile-target system, the rapidity shifts of the cylinders increase slowly with the logarithm of the center-of-mass energy and do not depend obviously on the centrality. The temperatures extracted from the hadron spectra increase slowly with the energy and centrality. The relative contributions of the cylinders, leading nucleons, and spectators have no obvious relation to the energy but do depend on the centrality. In central collisions, the leading nucleons and spectators make very limited or no contribution to the final-state distributions, while in peripheral collisions they make large contributions.

Introduction
Relativistic heavy ion collisions are performed to study nuclear matter under extreme temperature and density conditions [1]. Proton-proton collisions are conventionally used as a reference to compare with nuclear collisions and to understand the observed collective effects. The Large Hadron Collider (LHC) [2] was originally designed to raise the center-of-mass (cm) energy to 14 TeV, which is almost 8 times the record of 1.8 TeV achieved at Fermilab in the United States. After the LHC data obtained in p+p interactions at √s = 900 GeV and 7 TeV [3, 4], renewed interest appeared in the general features of p+p collisions at ultrarelativistic energies. In p+p collisions, the meson spectra provide insight into the particle production mechanisms and interactions in the hadronic and quark gluon plasma (QGP) phases. Furthermore, the detailed study of meson spectra is important because it is an ingredient for estimating the hadronic decay backgrounds in the photon, single lepton, and dilepton spectra, which are the penetrating probes of the QGP. To isolate phenomena related to the dense and hot medium created in such collisions, it is also important to measure particle production in smaller collision systems like p+p and d+Au. Measurements of transverse momentum spectra for particles produced in p+p collisions are used as a baseline to which similar measurements for heavy ion collisions are compared. In addition, the nuclear modification factor R_AA of several identified hadrons with high transverse momentum is used to probe jet quenching.
The information about the production process is retained by the final-state particle distributions in the collisions [5-9]. Single-hadron production at large transverse momenta in high-energy hadronic and nuclear collisions results from the fragmentation of quarks and gluons issuing from parton-parton scatterings with large momentum transfer. To explain the abundant experimental data, different phenomenological mechanisms of initial coherent multiple interactions and particle transport were proposed and extended in recent years [10-15]. The comparison with model calculations can provide valuable information about the collision evolution and help better understand the properties of the QGP. In particular, several theoretical models of high energy collisions were reported at a workshop held at the CERN Theory Institute [16]. Recently, systematic studies on the production of final-state particles performed by the NA49 Collaboration were discussed [17]. Hadronic transport models fail to describe the production of final-state particles, while the results of statistical models are generally in good agreement with the measured particle yields at the available energies.
Based on the one-dimensional string model [19] and the fireball model [20], we have developed a thermalized cylinder model, which successfully describes particle production in heavy ion collisions over an energy range from the Alternating Gradient Synchrotron (AGS) to the Relativistic Heavy Ion Collider (RHIC) [21-25]. In our previous work [25], the transverse momentum distributions of strange hadrons produced in Cu+Cu and Au+Au collisions at RHIC energies were explained by a single-component distribution. The excitation degree of the emission source can be determined by studying the transverse momentum spectra, anisotropic flow effects, and their correlations. We found that the single-component distribution describes only a narrow transverse momentum range. Recently, the invariant differential cross-sections for the production of neutral mesons in p+p collisions at √s = 200 GeV were published by the PHENIX Collaboration [18]. The large transverse momentum reach is helpful for characterizing the mechanisms of truly perturbative parton-parton scatterings and parton fragmentation in different QCD environments. The particle yields observed in these experiments inspired our work, and it is interesting for us to analyze the results of p+p collisions at the RHIC energies. In order to verify the thermalized cylinder model and describe the broad distribution range, in this paper we develop the single-component spectra into a two-component distribution.

The Formula
According to the cylinder model [25], we assume that a projectile cylinder and a target cylinder are formed in A+A, d+A, and p+p collisions at high energies. The cylinders are wounded sources of particles. The idea comes from the observation that the process of particle production is not instantaneous, as noted in [26, 27]. In the reference frame where the longitudinal momentum of a produced particle vanishes, the minimal time necessary for its creation is τ_0 ⩾ 1/μ, where μ = √(m² + p_T²) is the transverse mass. In the laboratory frame, the particle in question acquires some longitudinal momentum, and the creation time becomes t = γτ_0 ⩾ E/μ², where γ = E/μ and E are the Lorentz factor and the energy of the particle, respectively. So, the resolving power of the longitudinal distance (the uncertainty of the distance from the collision point to the point at which the particle is created) is Δ ≈ E/μ². When the rapidity of the produced particle is large enough that Δ > d(b), where d(b) is the size of the nucleus at a given impact parameter b, the particle cannot resolve individual collisions. It is natural to suggest that its creation may be insensitive to the number of collisions in the source. This is the origin of the idea of wounded sources. The concept of the wounded source may practically be applied in the whole rapidity region. In A+A collisions the cylinder is thick, and in p+p collisions the cylinder is thin. Final-state particles are randomly produced from emission sources in the cylinder(s). The excitation degree of the side-surface region is naturally lower than that of the central axis region of the cylinders. From the central axis region to the side-surface region of the concerned emission source, the excitation degree is assumed to decrease linearly along the transverse direction [25]. When the excitation degree increases, the width of the momentum distribution increases as well, so the excitation degree can be characterized by the momentum distribution width σ.
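The formation-time estimate above can be illustrated numerically. A minimal sketch (the function name and the example values are ours; ħc ≈ 0.197 GeV·fm converts natural units to fm):

```python
import math

HBARC = 0.197  # GeV*fm; converts 1/GeV to fm in natural units

def formation_length_fm(m, p_t, y):
    """Longitudinal resolving power Delta ~ E/mu^2 (in fm) for a particle
    of mass m (GeV), transverse momentum p_t (GeV/c), and rapidity y.
    mu = sqrt(m^2 + p_t^2) is the transverse mass and E = mu*cosh(y)."""
    mu = math.sqrt(m * m + p_t * p_t)
    energy = mu * math.cosh(y)
    return (energy / mu ** 2) * HBARC

# For a pion (m = 0.14 GeV) with p_t = 0.3 GeV/c, Delta is about 0.6 fm
# at y = 0 and grows like cosh(y), exceeding a nuclear radius of a few fm
# already at y ~ 3, so such particles cannot resolve individual collisions.
```

The exponential growth of Δ with rapidity is what makes the wounded-source picture applicable over essentially the whole rapidity region.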
The emission points with the same excitation degree form an emission circle in the transverse momentum space. Therefore, a given σ corresponds to wounded sources which emit a fixed density of particles. Let σ_S and σ_C denote the widths of the transverse momentum distribution of particles produced in the side-surface and central axis regions, respectively. The distribution of σ over [σ_S, σ_C] can then be written down together with its normalization constant. According to this single-component distribution, in the Monte Carlo calculation the transverse momentum of a final-state particle is obtained from R_1 and R_2, two random variables distributed in [0, 1].
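Since the explicit sampling formulas are damaged in this copy, the following Python sketch shows one plausible Monte Carlo reading of the prescription. Both the uniform sampling of the width between the side-surface and central-axis values (denoted here σ_S and σ_C) and the Rayleigh form σ√(−2 ln R₁) are our assumptions, not the paper's exact Eq. (5):

```python
import math
import random

def sample_pt_single(sigma_s, sigma_c, rng=random.random):
    """One Monte Carlo p_T sample from a single-component cylinder source.
    R2 selects an emission circle, i.e. a width between the side-surface
    value sigma_s and the central-axis value sigma_c; R1 then draws a
    Rayleigh-distributed p_T with that width."""
    r1, r2 = 1.0 - rng(), rng()          # r1 in (0, 1] so the log is defined
    sigma = sigma_s + (sigma_c - sigma_s) * r2
    return sigma * math.sqrt(-2.0 * math.log(r1))

# With sigma uniform on [sigma_s, sigma_c], the mean p_T is
# (sigma_s + sigma_c)/2 * sqrt(pi/2), between the two pure-width means.
```

The Rayleigh form arises when the two transverse momentum components are Gaussian with a common width σ, which is the usual choice in this family of thermal models.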
In our previous work [25], the transverse momentum spectra of strange particles produced in high-energy collisions were investigated with the above thermalized cylinder model. Considering the simple cylinder shape, we obtained specifically the dependence of the excitation degree on the emission source location. The single-component distribution, that is, (5), is successful in describing the experimental results measured by the STAR and PHOBOS Collaborations. However, we found that the single-component distribution can describe only a narrow transverse momentum range. To explain the wider transverse momentum spectra of identified particles produced in p+p collisions, we need to consider the relative importance of hard versus soft processes in the particle production mechanisms at different energies. Hard parton-parton scatterings with large momentum transfer occur on a short time scale and are governed by perturbative QCD. The bulk of particle production occurs via soft processes (with low momentum transfer and consequently longer time scales), which are described with phenomenological models.
In order to describe the invariant cross-section of hadrons as a function of p_T over a wide range, Hagedorn proposed an empirical formula [28], described as "inspired by QCD", E d³σ/dp³ = A (1 + p_T/p_0)^(−n), where A, p_0, and n are fit parameters. The limiting cases show that the distribution has an exponential form at low transverse momenta but a power-law form at large transverse momenta, which comes from the "QCD-inspired" quark interchange model [29]. The dominant contribution to the inclusive cross-section is from low-p_T particles. However, with increasing center-of-mass energy, the hardening of the distribution implies an increase of the mean transverse momentum. To cover both low and high p_T, the UA1 Collaboration [30] gave a hybrid form with an additional free parameter. In order to give a good description of pion spectra over a wide p_T range, the PHENIX Collaboration [18, 31, 32] found a single form referred to as the modified Hagedorn formula. The modification better describes the π⁰ spectrum over a wider p_T range, in particular at high p_T where the spectrum behaves close to a simple power-law function. This single formula has been used successfully to describe the hadron spectra measured in p+p collisions at different energies [18, 31, 32]; it is close to an exponential form at low p_T and to a pure power-law form at high p_T. Similarly, our model should be improved to a two-component distribution, in which R_3 and R_4 are random variables distributed in [0, 1], k (1 − k) and C_1 (C_2) are the contribution coefficient and normalization constant of the first (second) component, respectively, and σ_S1 (σ_C1) and σ_S2 (σ_C2) are the momentum distribution widths of particles produced in the side-surface region and the central axis region of the first (second) component, respectively. Our calculation shows that the widths of the two components differ.
The first and second terms in (10) correspond to the contributions of soft production and hard emission, respectively.
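The limiting behaviour of the Hagedorn form introduced above can be checked in a few lines. A small sketch (the parameter values A = 1, p₀ = 1 GeV/c, n = 8 are illustrative only):

```python
import math

def hagedorn(p_t, amp, p0, n):
    """Hagedorn's 'QCD-inspired' form amp * (1 + p_t/p0)**(-n)."""
    return amp * (1.0 + p_t / p0) ** (-n)

# Low p_t:  (1 + p_t/p0)^(-n) ~ exp(-n*p_t/p0), an exponential spectrum.
# High p_t: (1 + p_t/p0)^(-n) -> (p_t/p0)^(-n), a pure power law.
low, high = 0.01, 100.0
exp_ratio = hagedorn(low, 1.0, 1.0, 8) / math.exp(-8 * low)
pow_ratio = hagedorn(high, 1.0, 1.0, 8) / (high ** -8)
```

The two ratios approach unity in their respective limits, which is exactly the exponential-to-power-law interpolation that motivates a two-component description.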
The basic model assumes cylinders whose emitted momentum distribution width changes from the side-surface to the central axis. In fact, except for the most central collisions, the active (overlap) region is certainly not cylindrical.
We should therefore understand the "cylinder" as a nonperfect cylinder along the beam direction in momentum space rather than a perfect cylinder in coordinate space. The emission sources with the same excitation degree stay on the same (sub)surface. This situation is similar to an equipotential surface in electromagnetism.
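A Monte Carlo sketch of the two-component distribution (10) follows. The component selection by the coefficient k, the Rayleigh-like sampling, and the width names (σ_S, σ_C per component) are our assumed reading of the damaged formula, not the paper's exact expression:

```python
import math
import random

def sample_pt_two(k, soft_widths, hard_widths, rng=random.random):
    """One p_T sample from a two-component source: with probability k the
    soft (first) component is used, otherwise the hard (second) one.
    Each component interpolates its width between its side-surface and
    central-axis values (sigma_S, sigma_C) and draws a Rayleigh p_T."""
    sigma_s, sigma_c = soft_widths if rng() < k else hard_widths
    sigma = sigma_s + (sigma_c - sigma_s) * rng()
    return sigma * math.sqrt(-2.0 * math.log(1.0 - rng()))

# With k close to 1, the hard component with its much larger widths
# shows up only in the high-p_T tail of the spectrum.
```

With a soft fraction k near unity, almost all particles come from the narrow soft component, while the rare hard draws populate the power-law-like tail.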

Comparison with PHENIX Results
The invariant differential cross-sections of neutral mesons produced in p+p collisions at the nucleon-nucleon center-of-mass energy √s = 200 GeV in various decay modes are presented in Figure 1. The symbols represent the experimental data of the PHENIX Collaboration [18], and the curves are our results calculated by using (10). The χ² testing provides a statistical indication of the most probable values of the corresponding parameters. The values of k, σ_S1-σ_C1, and σ_S2-σ_C2 obtained by fitting the experimental data are given in Table 1 with χ² per degree of freedom (dof). The maximum value of the observed p_T reaches about 13.0 GeV/c. We see that the calculated results approximately agree with the experimental data for neutral mesons in this p_T region. The values of k, σ_C1, and σ_C2 are taken to be (99.98 ± 0.02)%, 1.70 ± 0.12 GeV/c, and 4.64 ± 0.35 GeV/c, respectively. Neither the σ_C1 nor the σ_C2 values change obviously.
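The χ²/dof criterion used in these fits can be sketched as follows. This toy reproduces only the criterion, with a one-parameter exponential on synthetic data; the paper's fits involve the full multi-parameter model:

```python
import math

def chi2_per_dof(data, model, errors, n_params):
    """chi^2 per degree of freedom between measured and model values."""
    chi2 = sum(((d - m) / e) ** 2 for d, m, e in zip(data, model, errors))
    return chi2 / (len(data) - n_params)

# Toy fit: recover the slope T of an exponential spectrum by a grid scan.
pts = [0.5, 1.0, 1.5, 2.0]                      # p_T points, GeV/c
data = [math.exp(-p / 0.6) for p in pts]        # synthetic "measurement"
errs = [0.05 * d for d in data]                 # assumed 5% errors
best = min((chi2_per_dof(data, [math.exp(-p / t) for p in pts], errs, 1), t)
           for t in (0.4, 0.5, 0.6, 0.7))
# best[1] is the generating slope 0.6; best[0] is 0 there by construction.
```

In practice, a minimizer rather than a grid scan is used, but the quoted χ²/dof values have exactly this meaning: a value near 1 indicates a fit consistent with the quoted errors.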
To compare these results with other particles, we show the invariant differential cross-sections of different particles produced in p+p collisions at √s = 200 GeV in various decay modes in Figure 2. The different symbols are the experimental data of the PHENIX Collaboration [18], and the solid curves are our results calculated with the model. The parameter values obtained by fitting the experimental data are given in Table 1 with the values of χ²/dof. Similar to Figure 1, the values of σ_C1 (σ_C2) do not change significantly. We see again that the model approximately describes the invariant differential cross-sections of final-state mesons produced in p+p collisions at the highest RHIC energy. Compared with the case of Figure 1, the parameter values do not change significantly.
In Figure 3, we present the invariant cross-section at midrapidity in p+p collisions at √s = 200 GeV. The symbols are the experimental data of the PHENIX Collaboration [33, 34], and the solid curves are our results calculated with the model. The dotted and dashed curves are the contributions of the first and second terms in (10), respectively. The parameter values and the corresponding χ²/dof are given in Table 1. The soft and hard contributions to particle production can be seen intuitively in the figure.
In order to test the validity of the model, Figure 4 shows the invariant π⁰ cross-section, E d³σ/dp³, in p+p collisions at different RHIC energies. The symbols are the experimental data of the PHENIX Collaboration [35], and the solid curves are our results calculated with the model. The dotted and dashed curves are the contributions of the first and second terms in (10), respectively. The values of k, σ_S1-σ_C1, and σ_S2-σ_C2 are given in Table 2 with the corresponding χ²/dof. The calculated results are in agreement with the experimental data in p+p collisions at the RHIC energies. The k values decrease with the increase of the collision energy, while the side-surface widths show no obvious change and equal only about 0.60 GeV/c. The values of σ_C1 (0.99-1.81 GeV/c) and σ_C2 (3.94-4.83 GeV/c) increase with increasing collision energy. As discussed in detail in [25], the excitation degree of the emission source on the cylinder central axis becomes higher at higher collision energy.

Table 1: Values of the parameters k, σ_S1-σ_C1, and σ_S2-σ_C2 obtained by fitting the experimental spectra in Figures 1, 2, and 3. The units of σ_S1, σ_C1, σ_S2, and σ_C2 are GeV/c.
We would like to point out that the calculated results in Figures 1-4 are obtained by (10), which is a general representation in the Monte Carlo method. Equations (6)-(9) are only used to explain why the two components need to be considered in (10). The curves in the figures are numerical results from (10); some of them are not smooth due to low statistics.
Advances in High Energy Physics

Discussion
The above comparison shows that the improved cylinder model can be used to describe meson production over a wider range of transverse momenta. The values of χ²/dof for all fits are shown in Tables 1 and 2; the maximum value is 1.420, and the minimum value is 0.246. We demonstrate that the calculated results are in good agreement with the available experimental data. In our calculation, k is the most important parameter, determining the slope and range of the distribution. In most cases, the second term cannot be neglected due to the contribution of the hard process. A large k (≃1) gives a steep and narrow distribution, whereas a smaller k (<1) gives a flatter and wider distribution. The second term in (10) is very small (1 − k < 0.01) and does not contribute to the spectra at the lowest RHIC energy. In p+p collisions at different energies, we see that 1 − k increases slightly with increasing collision energy, which means that the contribution of hard emission increases slightly. For the asymmetric system of d+Au collisions, it is predicted that the values of k do not change significantly due to the spatial asymmetry of the colliding nucleons. Scattering processes at high energy hadron colliders can be classified as either hard or soft. Quantum chromodynamics (QCD) is the underlying theory for all such processes, but the approach and level of understanding are very different for the two cases. For hard processes (e.g., Higgs boson or high-p_T jet production), the rates and event properties can be predicted by using perturbation theory. For soft processes (e.g., the total cross-section, the underlying event, etc.), in contrast, the rates and properties are dominated by nonperturbative QCD effects, which are less well understood. For many hard processes, soft interactions occur alongside the hard interactions, and their effects must be understood for comparisons to be made with perturbative predictions.
An understanding of the rates and characteristics of predictions for hard processes, both signals and backgrounds, using perturbative QCD (pQCD) is crucial for both the Tevatron and the LHC. We are in a position to evaluate the soft and hard contributions to the observed spectra by using the statistical model.
The particles produced in high-energy nuclear collisions have attracted much attention since people are trying to understand the properties of the strongly interacting quark-gluon plasma by studying the possible production mechanisms [36, 37]. The final-state particles are emitted isotropically in the rest frames of emission sources with different excitation degrees. If we further let the local emission sources move in the transverse direction because of the interactions among them, then the transverse flow (directed flow and elliptic flow) can also be explained by the model. Thermal-statistical models have been successful in describing particle yields in various systems at different energies. The cylinder model is developed from the fireball model, which was proposed for heavy-ion collisions [20]. The excitation degree varies with location in the cylinder. In the previous work [25], we obtained specifically the dependence of the excitation degree on the emission source location. From the central axis to the side-surface of the cylinder, the excitation degree of the emission source decreases linearly along the radial direction. The excitation degree can easily be characterized by the corresponding distribution width, which is written as (4) by using the Monte Carlo calculation. In the model, because of the influence of the emission source temperature, the values of σ_S are much smaller than those of σ_C. This is consistent with the conclusion of [15], where the temperature of the fireball decreases linearly with time.
In the present work, we improve the method by considering the difference between soft and hard emission. The parameters σ_S1 (σ_S2) and σ_C1 (σ_C2) are used to describe the excitation degrees of emission sources close to the side-surface and the central axis of the cylinders, respectively; the subscripts 1 and 2 refer to the soft and hard emission processes, respectively. Our results agree well with the considered distributions of mesons with high transverse momenta produced in nuclear collisions at RHIC energies. At finite temperature, the stronger the collisions, the larger the excitation degree; therefore, on the central axis of the cylinder, the interaction between particles is strongest and the excitation degree is highest. For different collision energies, our results in the improved cylinder model show that the parameters σ_C1 and σ_C2 slowly increase with increasing energy, while no obvious change can be observed in the values of σ_S1 (σ_S2). This energy dependence of the parameters indicates that intranuclear cascade collisions in the central axis region play a more important role at higher energy. In the side-surface regions, a high multiplicity at high energy does not lead to a high excitation degree because the intranuclear cascade collisions there are weak. In our opinion, high-p_T hadrons come primarily from the central axis region due to their large emission angles, while high-momentum hadrons come primarily from the side-surface region due to their weak cascade collisions.

Conclusions
In summary, the transverse momentum distributions of mesons produced in p+p collisions at the RHIC energies have been studied in the framework of the improved cylinder model. The model is successful in describing the production of high transverse momentum mesons. Based on our phenomenological approach, we evaluated the soft and hard contributions to the observed spectra, which also offers some information about soft and hard interactions in the collisions. In our previous work, a rudimentary investigation of the azimuthal anisotropy was carried out based on the cylinder model. Combined with the present work, the model can be used to uniformly describe the momentum distributions of final-state particles produced in high energy collisions.