Classic (Non-Quantic) Algorithm for Observations and Measurements Based on Statistical Strategies of Particle Fields

Our knowledge of our surroundings is achieved through observations and measurements, both of which are influenced by errors (noise). One of the first tasks is therefore to try to eliminate the noise by constructing instruments with high accuracy. However, any real observed and measured system is characterized by natural limits due to the deterministic nature of the measured information. The present work is dedicated to the identification of these limits. We have analyzed some selection and estimation algorithms based on statistical hypotheses, and we have developed a theoretical method for their validation. A classic (non-quantic) algorithm for observations and measurements based on statistical strategies for an optical field is presented in detail. A generalized statistical strategy for observations and measurements on nuclear particles is based on these results, also taking into account the particular type of statistics resulting from the measuring process.


Introduction
The methods of testing statistical hypotheses and estimating parameters, built up within the framework of mathematical statistics, represent algorithms that confirm the "functionality" of experimental systems [1][2][3][4]. The aim of this paper is to identify natural limits by building up "observation" and "estimation" algorithms based on "statistical strategies" for the "assessment and control" of these limits. In experimental systems such as optical communications, a large interest is focused on the observation and measurement of signals with entropy greater than the noise level. Thus, the signal/noise ratio is used as the main observable for validating the correct operation of a communication system [5][6][7][8][9][10][11][12].
A classic (non-quantic) algorithm based on statistical strategies for an optical field is presented in detail. A generalized statistical strategy based on observations and measurements on nuclear particles such as neutrinos can also be developed [13,14]. Neutrino physics and engineering are very closely related to the physics of stars. The chemical composition of the solar interior is one of the frontiers of solar neutrino spectroscopy. Neutrinos also play a decisive role as an energy-loss channel in the understanding of stellar evolution. The observed astrophysical neutrino sources other than the Sun help us understand supernova physics, such as stellar core collapse, as well as the dynamics of supernova explosions and nucleosynthesis [15][16][17][18][19][20][21][22][23][24].
The methods of statistical physics we discuss in this paper are inseparably intertwined with the strategy for observations and measurements on nuclear particles such as neutrinos.
High-statistics neutrino observations provide us with very important data about other low-mass particles and motivate large-scale experiments in which new types of particle detectors will be developed and built. Alongside the neutrino observations, a great deal of theoretical and numerical work remains to be done, based on the methods of statistical physics, giving us crucial information about the accuracy of the experiments to be developed and built.
Two application examples are given: one based on the bilateral test for the validation of a statistical hypothesis (validation of the mean value for a given dispersion) and another for the validation of the mean value for an unknown dispersion [6-12, 15, 25].
Regarding the elimination of noise by constructing instruments with high accuracy, it is very important to mention that the statistical validation of communication systems based on control statistical strategies points out that the signal/noise ratio is not the essential parameter characterizing such a system; the essential parameter is the structure of the statistical strategy. Therefore, a high signal/noise ratio alone will not ensure the validation (the good working) of such a system from the point of view of the multistochastic processes that generate the noise.

Theoretical Considerations
We define the set of measurements for the considered signal as the vector $\vec{v} = \{v_1, v_2, \ldots, v_n\}$. Starting from this, we intend to calculate the false alarm probability ($P_0$), the detection probability ($P_D$), and the physical system state [7]. We assume that the physical system can be found in $n$ statistical states, described by the statistical hypotheses $\{\hat{H}_k\}$. If the physical system is characterized by the statistical state $\{\hat{H}_k\}$, the detected signal will be $v(t) = S_k(t) + u(t)$ [8], where $S_k(t)$ represents the useful signal and $u(t)$ the random signal (noise).
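The quantities $P_0$ and $P_D$ defined above can be illustrated with a minimal stdlib-only sketch. It assumes, purely for illustration, a constant signal in zero-mean Gaussian noise and a simple threshold decision; all function names and numbers are ours, not the paper's:

```python
# Illustrative sketch (not the paper's code): threshold detection of a
# constant signal S in zero-mean Gaussian noise of dispersion sigma.
# H0: v = u (noise only); H1: v = S + u (signal plus noise).
import math

def gauss_tail(x):
    """P(X > x) for a standard normal variable X."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def false_alarm(threshold, sigma):
    # P0: probability of deciding H1 when H0 actually holds
    return gauss_tail(threshold / sigma)

def detection(threshold, S, sigma):
    # PD: probability of deciding H1 when H1 actually holds
    return gauss_tail((threshold - S) / sigma)

P0 = false_alarm(1.645, 1.0)   # about 0.05 for unit-dispersion noise
PD = detection(1.645, 3.0, 1.0)
```

Raising the threshold lowers both probabilities at once, which is the trade-off the statistical strategy has to manage.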

$\hat{V}$ (the classic measurement operator).
The functions $\{\pi_\mu(\vec{v})\}_{\mu=1,2,\ldots}$ represent a "random strategy" for choosing the best statistical hypothesis. We define the probability $P\{\mu/k\}$ of choosing the statistical hypothesis $\{\hat{H}_\mu\}$ when the physical system is characterized by the statistical hypothesis $\{\hat{H}_k\}$ [9]. The statistical event (given by the probability $P\{\mu/k\}$) is described by the risk $\{C_{\mu k}\}$. Using the prior probabilities $\{p_k\}$, the value of the average risk for the immediate strategy is obtained. We define the risk function $r_k(\vec{v})$ of choosing the statistical hypothesis $\{\hat{H}_k\}$; using (6), the value of the average risk then follows. The problem consists in finding the $n$ functions $\{\pi_\mu(\vec{v})\}$ which minimize this average risk; this defines the value of the minimum risk. The risk functions $\{r_k(\vec{v})\}$ are directly proportional to the risks defined "a posteriori", which are expressed through the a posteriori probability $P(\hat{H}_k/\vec{v})$ of the statistical hypothesis $\{\hat{H}_k\}$. The a posteriori probability can also be defined in terms of the verisimilitude (likelihood) ratio.
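The minimum-risk choice described above can be sketched concretely, under the simplifying assumptions of two hypotheses with explicit likelihoods and a 0-1 risk matrix (all names and numbers below are illustrative, not the paper's):

```python
# Illustrative Bayes decision sketch: given priors p[k], likelihoods f[k](v)
# under each hypothesis H_k, and a risk matrix C[mu][k] (cost of choosing
# H_mu when H_k is true), pick the hypothesis with smallest posterior risk.
import math

def bayes_decision(priors, likelihoods, costs, v):
    # a posteriori probabilities P(H_k | v), normalized
    post = [p * f(v) for p, f in zip(priors, likelihoods)]
    norm = sum(post)
    post = [q / norm for q in post]
    # a posteriori risk of choosing H_mu
    risks = [sum(costs[mu][k] * post[k] for k in range(len(post)))
             for mu in range(len(priors))]
    return min(range(len(risks)), key=risks.__getitem__)

f0 = lambda v: math.exp(-v * v / 2.0)           # H0: noise only, mean 0
f1 = lambda v: math.exp(-(v - 2.0) ** 2 / 2.0)  # H1: signal with mean 2
zero_one = [[0, 1], [1, 0]]                     # 0-1 risk matrix
choice = bayes_decision([0.5, 0.5], [f0, f1], zero_one, 1.8)
```

With the 0-1 risk matrix the minimum-risk rule reduces to choosing the hypothesis with the largest a posteriori probability.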

Structure of Statistical Strategy for Two Statistical Hypotheses
Let us consider two statistical hypotheses $\{\hat{H}_0\}$ and $\{\hat{H}_1\}$. From the measurements we obtain the probability densities $p_0(\vec{v})$ and $p_1(\vec{v})$ associated with the two statistical hypotheses [10]. The risk functions are calculated accordingly. Bayes' strategy consists in choosing the statistical hypothesis $\{\hat{H}_1\}$ if the likelihood-ratio relation is valid (see [10]). From (18) and (19) one obtains the a posteriori probability. We calculate the probability $P_0$ of choosing the $\{\hat{H}_1\}$ hypothesis for a system that is actually in the $\{\hat{H}_0\}$ hypothesis, and also the probability $P_D$ of choosing the $\{\hat{H}_1\}$ hypothesis for a system that is indeed in the $\{\hat{H}_1\}$ hypothesis. Actually, the strategy consists in maximizing $P_D$ for a certain given value of $P_0$ (the Neyman-Pearson criterion).
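For equal-dispersion Gaussian hypotheses the Neyman-Pearson step (maximize $P_D$ at fixed $P_0$) reduces to a threshold test on the observation; a minimal sketch under that assumption (our illustration, stdlib only):

```python
# Illustrative Neyman-Pearson sketch: H0 is Gaussian with mean 0, H1 with
# mean S, both with dispersion sigma.  Fixing the false-alarm probability
# P0 fixes the decision threshold; PD then follows from the same tail.
import math

def gauss_tail(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def gauss_tail_inv(p, lo=-10.0, hi=10.0):
    # invert the (decreasing) tail function by bisection
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if gauss_tail(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def neyman_pearson_pd(P0, S, sigma):
    threshold = sigma * gauss_tail_inv(P0)   # fixes the false-alarm rate
    return gauss_tail((threshold - S) / sigma)

PD = neyman_pearson_pd(0.05, 2.0, 1.0)
```

When $S = 0$ the two hypotheses coincide and $P_D$ collapses to $P_0$, as it must.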
In this case, if the verisimilitude ratio has the stated form, then from (22) and (23) we obtain the expressions of [11]. Let us define a phase space $(P_0, P_D)$ with the parameter function $\{\pi(\vec{v})\}$. This is a simple convex space: $D$ (a simple convex region) is the field of possible values of $P_0$ and $P_D$, as shown in Figure 1.
A reliability assessment of the compliance indicators of the equipment is performed on the basis of these calculations. Standardized values (standardized coefficients calculated in one step) are chosen so as to use the test plan for testing.
At the limits of compliance, the corresponding ratio is calculated, the number of failures at the standardized value is determined, and, in a rectangular coordinate system, a trace line through the two points is obtained (Figure 2(a)).
For the areas of inadequacy, the number of failures is determined, the ratio at the standardized value is calculated, and a trace line through the two points is obtained (Figure 2(b)).
The achieved line is represented as a process that begins at the origin and coincides with the horizontal axis.
Let the equation of the curve $\Gamma_1$ which bounds the region $D$ from above be given. Since the region $D$ is convex, no tangent to the curve $\Gamma_1$ crosses the region $D$.
Let $M$ be the point of tangency, of coordinates $\{P_0, P_D\}$. The equation of the line with slope $\lambda$ is then written down. Whatever the values of $\{\pi(\vec{v})\}$ are, the points belonging to the region $D$ fulfill this condition, which can be rewritten in the form (30). In this case, the statistical strategy consists in the maximization of the integral in (30), where the integration domain includes $D$, the uncertainty region. The values of $P_0$ and $P_D$ then have the corresponding expressions. For a continuous structure on the observable space $[\vec{v}]$, (34) and (35) become integrals over the observable space, where the region $[\Delta]$ has null probability ($P = 0$) for any hypothesis $\{\hat{H}_0, \hat{H}_1\}$.
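The shape of the region $D$ in the $(P_0, P_D)$ phase space can be probed numerically for the Gaussian example. The sketch below (our illustration, not the paper's computation) sweeps the decision threshold and checks two properties of the upper boundary $\Gamma_1$: it dominates the diagonal $P_D = P_0$, and $P_D$ falls monotonically as the threshold is raised:

```python
# Illustrative sweep of the (P0, PD) boundary for Gaussian hypotheses with
# mean separation S = 1.5 and unit dispersion: each threshold value gives
# one point of the upper boundary Gamma_1 of the region D.
import math

def gauss_tail(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def roc_point(threshold, S=1.5, sigma=1.0):
    return gauss_tail(threshold / sigma), gauss_tail((threshold - S) / sigma)

points = [roc_point(t / 10.0) for t in range(-40, 41)]
# As the threshold rises, PD decreases together with P0 ...
monotone = all(p[1] >= q[1] for p, q in zip(points, points[1:]))
# ... and every boundary point beats random guessing (PD >= P0).
above_diagonal = all(pd >= p0 for p0, pd in points)
```

The convexity of $D$ stated above corresponds to this boundary being concave, which is what makes the tangent-line argument with slope $\lambda$ work.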

Calculation Algorithm of Statistical Strategy (Classic Case) for Observation and Measurement of an Optical Signal in Presence of Gaussian Fluctuations
Let us suppose that we consider the case of two statistical hypotheses $\{\hat{H}_0, \hat{H}_1\}$, where $\vec{u}(t)$ represents the Gaussian randomly distributed signal (the Gaussian noise) and $\vec{S}(t)$ the detectable useful signal. We introduce an averaging operator $\hat{E}$ for the statistical hypotheses $\{\hat{H}_0, \hat{H}_1\}$, with the corresponding calculation rules, where $B(t_1, t_2)$ represents the correlation function of the Gaussian noise. The signal $\vec{S}(t)$, defined as $\vec{S}(t) = \{S_1(t), S_2(t), S_3(t), \ldots, S_n(t)\}$, must be "observed" and "measured" in the period $(0, T)$.
Let an observable space be defined accordingly. We also build the correlation matrices (for the Gaussian noise); the matrix $(B_{jk})$ is defined through the correlation function. Let the correlation operator and its eigenvalue equation be introduced. We then obtain the scalar (diagonal) matrix, where $(\sigma_k)$ represents the measure of the dispersion for every measurement. The distribution functions $F_0(\vec{v})$ and $F_1(\vec{v})$ acquire a factorized expression (as if every sampled quantity were Gaussian distributed).
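The diagonalization step, in which the eigenvalues of the correlation matrix play the role of the per-coordinate dispersions $\sigma_k^2$, can be illustrated with a closed-form $2\times 2$ example (the matrix entries below are our own illustrative numbers):

```python
# Illustrative sketch: eigenvalues of a 2x2 symmetric noise-correlation
# matrix.  In the eigenbasis the Gaussian samples decouple, so the
# distribution function factorizes, with the eigenvalues as dispersions.
import math

def eig_sym_2x2(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]] (closed form)."""
    mean = 0.5 * (a + c)
    delta = math.sqrt((0.5 * (a - c)) ** 2 + b * b)
    return mean + delta, mean - delta

# correlation matrix [[2, 1], [1, 2]]: correlated noise in two samples
lam1, lam2 = eig_sym_2x2(2.0, 1.0, 2.0)
# in the rotated (eigen) coordinates the dispersions are 3 and 1
```

For larger matrices the same decomposition is done numerically, but the interpretation is identical: each eigenvalue is the dispersion of one decoupled coordinate.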
Thus, the verisimilitude ratio $\Lambda(\vec{v})$ can be written down, and $P_0$ and $P_D$ then have the corresponding expressions, where $\lambda$ is the significance threshold of the statistical strategy.
If we accomplish only one measurement, the signal/noise ratio can be considered as in [12], but only under the stated condition; the corresponding result and the resulting form of the signal/noise ratio then follow. If the probability distributions for the two statistical hypotheses are characterized by a parameter, then we can write the corresponding expressions. We define the probability $P(\hat{H}_0)$ over the critical domain $D(\lambda)$ in the observable space.
In the end, we can write the equations in their final form. In this case, the statistical strategy consists in determining the optimum critical domain $\{D^*\}$ in the observable space so that, for any other critical domain, the stated relation holds. We specify (by estimation) the parameters of the probability distribution. If the alternative hypothesis $(\hat{H}_1)$ is not only a simple hypothesis, then the most powerful test will exist.
If for the alternative hypothesis $(\hat{H}_1)$ there is a value $S_1$ which satisfies the stated condition, then it results that the best test is the one for which the values $(v_k)$ (which determine the critical range) satisfy the inequality of the Neyman-Pearson auxiliary theorem. For Gaussian noise, it results from (69) that $\Lambda(v_k)$ increases as $\{v_k\}$ increases. Thus, the highest value of $\{v_k\}$ that fulfills (68) will be $(v_0)$, which satisfies the corresponding equality. By expanding (70), the best critical domain (critical value) $\{v^*\}$ results. Using special functions, we can calculate, for instance, the intermediate expressions; from (76) and (77) we obtain the result, which can also be written in another form. Similarly, further expressions can be written, and from (80) we can determine the signal/noise ratio, also written in a different form. If $S_0 = 0$, a simplified form follows.
Let the empiric average of $n$ measurements be defined. The statistical hypothesis has the corresponding structure, where $\{v_k\}$ represents the variable of the distribution function. The distribution functions are now written with the parameters $S_0$, $S_1$, and $\sigma/\sqrt{n}$. Therefore, we can conclude the probability equation and its consequences; from (88) and (89), the expressions for the empirical average are obtained.
4.1. Application for a Particular Case with $S_0 = 0$ and $\sigma^2 \neq 0$. We consider the stated expressions as known, and the parameters $\lambda$ and $n$ need to be determined. The signal/noise ratio is calculated accordingly, along with the detection probabilities, and we then look for the best characteristic region $(D^*)$. The graph in Figure 3 indicates that for the detection of low-amplitude signals more measurements (larger values of $n$) are necessary than for the detection of high-amplitude signals, where the number of measurements ($n$) can be low.
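The behavior shown in Figure 3 can be reproduced qualitatively under the Gaussian model: averaging $n$ measurements scales the signal/noise ratio by $\sqrt{n}$, so the number of measurements needed for fixed $(P_0, P_D)$ grows like $(\sigma/S)^2$ as the signal amplitude $S$ decreases. A stdlib-only sketch (our illustrative formula, not the paper's code):

```python
# Illustrative sketch: number of averaged measurements n needed to reach a
# given false-alarm probability P0 and detection probability PD when
# detecting a constant amplitude S in Gaussian noise of dispersion sigma.
import math

def gauss_tail_inv(p, lo=-10.0, hi=10.0):
    # invert the standard normal tail P(X > x) = p by bisection
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def measurements_needed(S, sigma, P0=0.05, PD=0.95):
    z = gauss_tail_inv(P0) + gauss_tail_inv(1.0 - PD)
    return math.ceil((z * sigma / S) ** 2)   # n grows like (sigma/S)^2

n_small = measurements_needed(0.5, 1.0)  # low-amplitude signal
n_large = measurements_needed(2.0, 1.0)  # high-amplitude signal
```

Halving the amplitude roughly quadruples the required number of measurements, which matches the qualitative trend read off Figure 3.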

Algorithm Regarding the Bilateral Test for Validation of a Statistical Hypothesis (Validation of the Mean Value for a Given Value of the Dispersion $\sigma^2$)

Let us put the matrix in diagonal form, so that the repartition functions acquire the corresponding expression. We make the stated hypothesis; in this case, we define a verisimilitude function for the correspondence, which can also be written in another form. For the mean $m$, the maximum verisimilitude estimation is obtained. In the following we calculate the verisimilitude functions (104), and the verisimilitude ratio is then calculated. The critical region is given by the inequality (106); the limit of the critical region then follows, and the final result is obtained. The test significance threshold equation is then written; with the stated approximation, the results follow, ending with (117).
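In the Gaussian case, the bilateral test with a given dispersion reduces to comparing a standardized statistic with a two-sided critical value; a minimal sketch (the critical value 1.96, for significance $\approx 0.05$, is a standard tabulated value, not derived here, and all names are ours):

```python
# Illustrative bilateral (two-sided) test with known dispersion sigma^2:
# under H0 the statistic z = sqrt(n) * (vbar - m0) / sigma is standard
# normal, so H0 is rejected when |z| exceeds the two-sided critical value.
import math

def z_test_two_sided(samples, m0, sigma, z_crit=1.96):
    n = len(samples)
    vbar = sum(samples) / n                  # empiric average
    z = math.sqrt(n) * (vbar - m0) / sigma
    return abs(z) > z_crit                   # True -> reject H0: m = m0

reject = z_test_two_sided([2.9, 3.1, 3.0, 2.8, 3.2], m0=3.0, sigma=0.5)
```

Here the sample mean equals the hypothesized mean, so the null hypothesis is retained; shifting `m0` away from the data makes the test reject.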

Algorithm Regarding the Bilateral Test for Validation of a Statistical Hypothesis (Validation of the Mean Value for an Unknown Value of the Dispersion $\sigma^2$)

Let us consider the stated equation. We define the null hypothesis $m = m_0$ and the alternative hypothesis $m \neq m_0$. The general form of the verisimilitude function is written down; from the system of equations (122), the result (123) follows, so that (121) takes the corresponding form. Therefore, the maximum likelihood estimation for $\sigma^2$ in the case of the null hypothesis (i.e., $m = m_0$) is obtained from (122) and (123). The estimated value of the dispersion $\hat{\sigma}^2$ is then obtained (in the limit case $m = m_0$), and by definition the corresponding result follows. The verisimilitude ratio then takes a form which, with a suitable definition of the test statistic, can be rewritten in terms of $s$, the dispersion determined experimentally from these values. Thus, the verisimilitude ratio takes its final form, and the stated limit is verified. The best critical region is given by the corresponding inequality.
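The corresponding test with an unknown dispersion replaces $\sigma^2$ by its experimental estimate and uses a Student statistic; a minimal sketch (the critical value 2.776 is the tabulated two-sided 5% Student value for $n = 5$, an assumption of this sketch rather than something computed here):

```python
# Illustrative bilateral test with unknown dispersion: the statistic
# t = sqrt(n) * (vbar - m0) / s follows a Student distribution with n - 1
# degrees of freedom under H0, where s^2 is the unbiased dispersion estimate.
import math

def t_test_two_sided(samples, m0, t_crit):
    n = len(samples)
    vbar = sum(samples) / n
    s2 = sum((v - vbar) ** 2 for v in samples) / (n - 1)  # unbiased estimate
    t = math.sqrt(n) * (vbar - m0) / math.sqrt(s2)
    return abs(t) > t_crit                   # True -> reject H0: m = m0

reject = t_test_two_sided([2.9, 3.1, 3.0, 2.8, 3.2], m0=3.5, t_crit=2.776)
```

With the hypothesized mean far from the sample mean the test rejects; with `m0` equal to the sample mean it does not.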

Conclusions
Our knowledge is achieved by the observation and measurement of systems, operations which are affected by errors. The aim of this paper has been to identify the corresponding natural limits by developing observation and assessment algorithms based on statistical strategies of control and checking.
It is very important to mention that the statistical validation of communication systems based on control statistical strategies points out that the signal/noise ratio is not the essential parameter characterizing such a system; the essential parameter is the structure of the statistical strategy, which in its simplest and most relevant form contains the false alarm probability $P_0$ and the detection probability $P_D$. Therefore, a high signal/noise ratio alone will not ensure the validation (the good working) of such a system from the point of view of the multistochastic processes that generate the noise.
Using the algorithms described in this paper, an algorithm based on the bilateral test for the description of the unknown dispersion can be further developed.
A generalized statistical strategy for observations and measurements on nuclear particles is based on these results, also taking into account the particular type of statistics resulting from the measuring process.

Figure 3: For the empirical average of $n$ measurements the verisimilitude ratio can be written as $\Lambda_n(\vec{v})$.

Figure 2: The limits of a rectangular coordinate system.