Statistics provides mathematical tools for scientific investigation in fields such as engineering, medicine, and biology, and statistical methods are continually being improved. One persistent problem is parameter estimation, which can be expressed as an inverse problem when the independent variables are highly correlated. The principal goal of this paper is to estimate and interpret the parameters of the double generalized Rayleigh distribution in a regression model using a wavelet basis. The standard likelihood-based regression methods are difficult to use in practice: noise makes the estimates unstable, and multicollinearity leads to widely varying estimates, so that recovering features of the truth becomes complicated. It is therefore reasonable to use a mixed method that combines a fully Bayesian approach with a wavelet basis. The usual wavelet procedure is to choose a basis, compute the wavelet coefficients, use these coefficients to remove Gaussian noise, and then recover the data by inverting the wavelet coefficients. We consider wavelet bases that provide a shift-invariant wavelet transform, which simultaneously improves smoothness, recovery, and squared-error performance. The proposed method combines a penalized maximum likelihood approach, a penalty term, and wavelet tools. In this paper, real data are modeled using double generalized Rayleigh distributions, which are used to estimate the wavelet coefficients of the sample numerically. Wavelet approaches are recommended in practical applications because they reduce the noise level; this is useful since real data are often corrupted by noise, a significant cause of most numerical estimation problems.
A simulation study is carried out using an MCMC tool to estimate the underlying features, an essential task in statistics.
1. Introduction
Parameter estimation that yields an interpretable model is often the biggest challenge in statistics, since data may contain noise, blur, or both. Such problems arise in science, geophysics, engineering, and medicine, and they have received much attention from researchers over the past decade. In practical applications, the main difficulty in estimating the unknown parameters is that real data usually contain white noise; hence, a pretreatment step that reduces the noise may provide a more suitable fit. More precisely, pretreatment is used in statistical approaches when data are corrupted with white noise arising from the collection equipment. Two types of statistical tools are usually involved in processing the data. The first is data pretreatment, which is applied to reduce the correlation among the independent variables or the noise level. The second is model calibration, which here is related to the use of Bayesian and wavelet methods. The key issues are that there are many unknown features relative to the number of observations and that the model is ill-posed or ill-conditioned, so that maximum likelihood estimation is unsuitable for estimating the underlying parameters. A widespread problem is the study of real data collected by magnetometer or voltage readings, which are usually highly correlated; processing is needed because the measured spectral characteristics of a sample may suffer from noise and blur. Statistically, several established methods can be applied, such as classical thresholding approaches. Early work on this procedure can be found in [1, 2], which introduced new tools for removing noise (see [3, 4] for explicit motivation). Bayesian approaches have been studied using different probability distributions in many fields; a common practice is to use the exponential distribution [5], and various other density distributions have also been applied.
For example, the authors in [6] studied the exponential distribution and estimated its parameters, whereas those in [7] employed the Weibull distribution to estimate parameters from censored data. Also, the authors in [8] studied the Rayleigh distribution using censored data. Hence, the idea of this article is to combine Bayesian and wavelet methods for estimating the underlying parameters. Wavelets are powerful mathematical tools that can be applied to reduce the impact of multicollinearity. A wavelet basis can be viewed as a more elaborate relative of the Fourier transform, and a practical advantage of wavelet approaches is that it is easy to choose among different wavelet bases. Many summaries of this topic have been written. For example, Mallat [9] states that the probability density function of wavelet coefficients is markedly peaked and centered at zero. The algorithm of the discrete wavelet transform can be found in [10]. In wavelet analysis, the stationary basis is recommended for reconstruction (see [11] for more details). Wavelets have since received much attention from scientists, and several authors have analyzed real statistical applications (see [12] for a direct result). Different approaches to the use of wavelets can be found in [13], and considerable detail about wavelets can be found in [14]. The central concept of the Bayesian approach is the careful construction of prior knowledge; when the rules are built carefully, the model provides a good fit after estimation. There are several papers on Bayesian methods (see [15], which studies Bayesian approaches in the wavelet domain). Wavelet estimation via Bayesian approaches is treated in many articles, such as [16], and more details about combining Bayesian and wavelet methods can be found in [17]. Moreover, an MCMC algorithm extracts a sample from the posterior rule at each run of the simulation, since the posterior rule is too complicated for an analytic solution.
A simple type of MCMC is given in [18], which can be implemented to extract samples; more details about MCMC tools can be found in [19–21]. Moreover, estimation of the unknown parameters of the double generalized Rayleigh DGRay(γ_j, κ_j, λ_j) distributions is proposed as a new tool, where j = 0, 1, …, J − 1 for some index J. In practical terms, this type of investigation is sometimes called a "level-dependent" model, since the distribution parameters are estimated at each level j, especially when the measured characteristics are assumed to arise under two or more different conditions. For example, wavelet coefficients associated with defects may lie close to zero, whereas wavelet coefficients without defects may lie far from zero. Consider the linear inverse problem defined by

(1) x = θ + ɛ,

with observed measurements x_{n×1} = (x_i : i = 1, …, n), the vector of unknown parameters θ_{n×1} = (θ_i : i = 1, …, n), and errors ɛ_{n×1}. Furthermore, ɛ ∼ N_n(0, σ²I_n); that is, the noise is assumed to be independent and identically normally distributed, and n = 2^J. Consider the unknown parameters Θ defined by

(2) Θ^{C,D} = KΘ,

where K is an orthonormal matrix containing the wavelet basis. Hence, the unknown parameters Θ can be represented by their discrete wavelet transform Θ^{C,D} = (θ^C_{0,0}, θ^D_{j,l} : j = 0, 1, …, J − 1, l = 0, 1, …, n − 1); the stationary transform is used in this article, so the numbers of wavelet coefficients and observations are equal. Also, the wavelet coefficients of the observed data x are defined by

(3) x^{C,D} = Θ^{C,D} + ϱ,

where x^{C,D} is the set of wavelet coefficients of x, Θ^{C,D} ⊂ ℝ is the corresponding set for Θ, and ϱ ∼ N_n(0, σ²I_n). Level-dependent models play a significant role in wavelet applications: this procedure allows us to investigate the values of the unknown parameters at each resolution level j of the wavelet coefficients. There are numerous methods for specifying the values of the unknown parameters of the double generalized Rayleigh distributions.
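As an illustration of the model in (1)–(3), the following numpy sketch (not the paper's code) builds the orthonormal Haar matrix K for n = 4, forms x = θ + ɛ, and checks that the transform is orthonormal, so inverting the wavelet coefficients recovers the data exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal Haar matrix K for n = 4
# (rows: scaling coefficient, coarse detail, two fine details).
s = np.sqrt(2) / 2
K = np.array([[0.5, 0.5, 0.5, 0.5],
              [0.5, 0.5, -0.5, -0.5],
              [s, -s, 0.0, 0.0],
              [0.0, 0.0, s, -s]])
assert np.allclose(K @ K.T, np.eye(4))   # K is orthonormal

theta = np.array([2.0, 2.0, -1.0, 3.0])  # hypothetical underlying parameters
x = theta + rng.normal(0.0, 0.1, 4)      # model (1): x = theta + eps

# Model (2)-(3): wavelet coefficients of the data equal the wavelet
# coefficients of theta plus noise with the same distribution.
x_cd = K @ x                             # x^{C,D}
theta_cd = K @ theta                     # Theta^{C,D}

print(np.allclose(K.T @ x_cd, x))        # True: inverting the transform recovers x
```

Because K is orthonormal, the Gaussian noise keeps the same covariance σ²I_n in the wavelet domain, which is what (3) asserts.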
Moreover, MCMC algorithms are implemented to investigate the unknown parameters from complicated or nonstandard posterior distributions [22]. In statistics, many tools can be applied to estimate parameters, such as the EM and MCMC algorithms. In this article, two types of estimators are proposed: the first is the posterior mean (PM), and the second is the maximum a posteriori (MAP) estimator.
Figure 1 illustrates the shape of the double Rayleigh distribution for different values of γ. It can be seen that as γ → 0, the double Rayleigh density approaches infinity, and this type of distribution can be used to fit the density of the empirical wavelet coefficients. More precisely, wavelet coefficients lying near zero are well described by the double generalized Rayleigh distribution with γ = 0 and 0 < κ ≤ 0.5. In other words, the double Rayleigh density approaches infinity as x approaches zero when κ ∈ (0, 0.5) and γ = 0, which is consistent with Mallat's observation. This article is structured as follows. The double generalized Rayleigh distribution is introduced in Section 2. All technical arguments are given in Sections 3 and 4. Numerical work confirming their features and a simulation study investigating the estimation properties are provided in Sections 5 and 6. Section 7 applies the proposed rule to real data. The final summary and conclusions are presented in Section 8.
Typical data (points) drawn from the generalized Rayleigh distribution (dashed line) for different values of γ, with κ = 0.5 and λ = 10.
2. Double Generalized Rayleigh Distribution
The generalized Rayleigh distribution was proposed by Aykroyd et al. [23] as a generalization of the Rayleigh distribution. They derived properties of the model, such as the cumulative and survivor functions, showed that the generalized Rayleigh distribution fits data well, and used Bayesian approaches to estimate its unknown parameters. In this paper, a double generalized Rayleigh distribution DGRay(γ_j, κ_j, λ_j) is used to model the wavelet coefficients, that is, the density of the wavelet coefficients. Let a single wavelet coefficient θ^D_{j,l} at level j have the probability density function (pdf)

(4) f(θ^D_{j,l} | λ_j, κ_j, γ_j) = (λ_j κ_j / 2) |θ^D_{j,l}| ((θ^D_{j,l})² − γ_j)^{κ_j − 1} exp(−λ_j ((θ^D_{j,l})² − γ_j)^{κ_j} / 2) for |θ^D_{j,l}| > γ_j, and 0 otherwise,

where |·| denotes the absolute value. The cumulative distribution function (cdf) is

(5) F(θ^D_{j,l} | λ_j, κ_j, γ_j) = 1 − exp(−λ_j ((θ^D_{j,l})² − γ_j)^{κ_j} / 2) for θ^D_{j,l} > γ_j; exp(−λ_j ((θ^D_{j,l})² − γ_j)^{κ_j} / 2) for θ^D_{j,l} < −γ_j; and 0 otherwise.

The survivor function (sf) is

(6) S(θ^D_{j,l} | λ_j, κ_j, γ_j) = exp(−λ_j ((θ^D_{j,l})² − γ_j)^{κ_j} / 2) for θ^D_{j,l} > γ_j; 1 − exp(−λ_j ((θ^D_{j,l})² − γ_j)^{κ_j} / 2) for θ^D_{j,l} < −γ_j; and 0 otherwise,

and the hazard (failure) function (hrf) is

(7) h(θ^D_{j,l} | λ_j, κ_j, γ_j) = λ_j κ_j |θ^D_{j,l}| ((θ^D_{j,l})² − γ_j)^{κ_j − 1} for θ^D_{j,l} > γ_j; (λ_j κ_j / 2) |θ^D_{j,l}| ((θ^D_{j,l})² − γ_j)^{κ_j − 1} exp(−λ_j ((θ^D_{j,l})² − γ_j)^{κ_j} / 2) / (1 − exp(−λ_j ((θ^D_{j,l})² − γ_j)^{κ_j} / 2)) for θ^D_{j,l} < −γ_j; and 0 otherwise,

where γ_j > 0, κ_j > 0, λ_j > 0, and J = log₂ n. The parameters λ_j and κ_j are shape parameters, and γ_j is a location parameter. Setting γ_j = 0 and κ_j = 1 in (4)–(6) recovers the standard double Rayleigh distribution with parameter λ_j.
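A quick numerical sanity check of the density in (4), reading the support as |θ| > γ to match the branches of (5): with γ_j = 0 and κ_j = 0.5, the density reduces to (λ/4) exp(−λ|θ|/2), a double-exponential shape, so it should integrate to one. The helper name `dgray_pdf` is illustrative, not from the paper:

```python
import numpy as np

def dgray_pdf(t, lam, kappa, gamma):
    """Density (4) of the double generalized Rayleigh distribution
    (zero inside |t| <= gamma)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    m = np.abs(t) > gamma          # support of the double distribution
    u = t[m] ** 2 - gamma
    out[m] = (0.5 * lam * kappa * np.abs(t[m]) * u ** (kappa - 1.0)
              * np.exp(-0.5 * lam * u ** kappa))
    return out

# With gamma = 0 and kappa = 0.5:
# f(t) = (lam*0.5/2)*|t|*(t^2)^(-1/2)*exp(-lam*|t|/2) = (lam/4)*exp(-lam*|t|/2),
# a Laplace-type density (the point t = 0 itself is excluded by the mask).
grid = np.linspace(-5.0, 5.0, 200001)
f = dgray_pdf(grid, lam=10.0, kappa=0.5, gamma=0.0)
integral = f.sum() * (grid[1] - grid[0])
print(integral)                    # close to 1.0
```

For κ < 0.5 the factor u^(κ−1) makes the density blow up at the origin, which is the behavior the paper exploits to mimic the sharply peaked distribution of wavelet coefficients.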
3. Bayesian Approach
In statistics, Bayesian tools play important roles; the approach has two key ingredients. The first is the likelihood, connecting the observations and the unknown parameters, say p(x | ζ), where ζ and x are the sets of underlying parameters and observations, respectively. The second is the prior distribution, say p(ζ); combining the two yields the posterior distribution. Assuming the link between the model of x and the unknown wavelet coefficients K^T Θ^D,

(8) p(x | Θ^D, σ²) = (2πσ²)^{−n/2} exp(−(1/(2σ²)) Σ_{i=1}^n (x_i − (K^T Θ^D)_i)²), x, Θ^D ⊂ ℝ^n; σ > 0,

where σ² is the variance of the data, which is assigned the exponential prior

(9) p(σ² | τ) = τ exp(−τσ²), τ > 0.

Using equation (2), the marginal likelihood for a single observation is given by

(10) p(x | θ, τ) = ∫₀^∞ p(x | θ, σ²) p(σ² | τ) dσ² = (√(2τ)/2) exp(−√(2τ) |x − θ|).
The result of the previous integration can be found in [24]. The equivalent likelihood is defined by

(11) p(x | Θ^D, τ) = (√(2τ)/2)^n exp(−√(2τ) Σ_{i=1}^n |x_i − (K^T Θ^D)_i|), x, Θ^D ⊂ ℝ^n; τ > 0.
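Equation (10) is the scale-mixture identity of Andrews and Mallows [24]: averaging the normal likelihood over an exponential prior on σ² yields a double-exponential (Laplace) density. A numerical quadrature check (illustrative code, not from the paper; function names are ours):

```python
import numpy as np

def marginal_numeric(z, tau, vmax=200.0, nv=400001):
    """Integrate N(z; 0, v) * tau * exp(-tau*v) over the variance v numerically."""
    v = np.linspace(1e-8, vmax, nv)
    integrand = (np.exp(-z**2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)
                 * tau * np.exp(-tau * v))
    return integrand.sum() * (v[1] - v[0])

def marginal_closed(z, tau):
    """Closed form (10): (sqrt(2*tau)/2) * exp(-sqrt(2*tau)*|z|)."""
    return np.sqrt(2.0 * tau) / 2.0 * np.exp(-np.sqrt(2.0 * tau) * np.abs(z))

for z in [0.3, 1.0, 2.5]:
    assert abs(marginal_numeric(z, 1.5) - marginal_closed(z, 1.5)) < 1e-3
print("scale-mixture identity verified")
```

This is why the Gaussian likelihood (8) with the exponential prior (9) collapses to the Laplace-type likelihood (11).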
In addition, the posterior distribution for Θ^D given x is

(12) p(Θ^D | x, λ_{J−1}, …, λ_0, κ_{J−1}, …, κ_0) ∝ p(x | K^T Θ^D) p(θ^D_{J−1} | λ_{J−1}, κ_{J−1}) ⋯ p(θ^D_0 | λ_0, κ_0)
∝ exp(−√(2τ) Σ_{i=1}^n |x_i − (K^T Θ^D)_i|) × (2τ)^{n/2} (λ_{J−1} κ_{J−1}/2)^{n_{J−1}} ∏_{l=0}^{n_{J−1}−1} |θ^D_{J−1,l}| ((θ^D_{J−1,l})²)^{κ_{J−1}−1} exp(−λ_{J−1} ((θ^D_{J−1,l})²)^{κ_{J−1}}/2) × ⋯ × λ_0 κ_0 |θ^D_{0,0}| ((θ^D_{0,0})²)^{κ_0−1} exp(−λ_0 ((θ^D_{0,0})²)^{κ_0}/2),
x_i, θ^D_{j,l} ∈ ℝ; λ_{J−1}, …, λ_0, κ_{J−1}, …, κ_0, τ > 0,

where n_{J−1}, …, n_0 are the numbers of coefficients at levels J−1, …, 0 and the priors are taken with γ_j = 0. The value of κ_j is restricted to 0 < κ_j ≤ 0.5. The main reason for choosing the double generalized Rayleigh prior is that, as γ_j → 0 and θ^D_{j,l} → 0, the density approaches infinity, in line with Mallat's description of the distribution of wavelet coefficients. Clearly, equation (12) can be used to estimate the unknown parameters Θ^D given x, λ_{J−1}, …, λ_0, κ_{J−1}, …, κ_0, and these estimates can then be employed to form the reconstruction. The unknown parameters are collected into one set, say ζ = (θ^C_{0,0}, θ^D_{j,l} : j = 0, …, J−1, l = 0, …, n−1; λ_{J−1}, …, λ_0; κ_{J−1}, …, κ_0; τ), and the posterior (12) becomes

(13) p(ζ | x) ∝ p(x | ζ) p(ζ) = p(x | K^T Θ^D, ω) p(Θ^D) p(ω) p(τ),

where Θ^{C,D} = (θ^C_{0,0}, θ^D_{j,l} : j = 0, 1, …, J−1, l = 0, 1, …, n−1) and ω = (τ, λ, κ) at level j. Aykroyd et al. considered gamma prior densities for λ and κ with hyperprior parameters (α_1, β_1) and (α_2, β_2); a gamma prior is likewise proposed for τ, with hyperparameters (α_3, β_3) and density function

(14) p(ω_i | α_i, β_i) = (β_i^{α_i}/Γ(α_i)) ω_i^{α_i−1} exp(−β_i ω_i), α_i, β_i > 0, i = 1, 2, 3.
Then, the posterior density of a single value θ^D_{j,l} with parameters τ, λ_j, and κ_j at level j, given the data x, is

(15) p(θ^D_{j,l}, τ, λ_j, κ_j | x_i) = p(x_i | (K^T Θ^D)_i, τ) p(τ) p(λ_j) p(κ_j) / ∫∫∫∫ p(x_i | (K^T Θ^D)_i, τ) p(τ) p(λ_j) p(κ_j) dτ dκ_j dλ_j dθ^D_{j,l},

so that p(Θ^D, τ, λ_{J−1}, …, λ_0, κ_{J−1}, …, κ_0 | x) ∝ p(x | K^T Θ^D, τ) p(τ) p(θ^D_{J−1}) ⋯ p(θ^D_0) p(λ_{J−1}) ⋯ p(λ_0) p(κ_{J−1}) ⋯ p(κ_0),

and the joint posterior density given the data x can be written, up to constant factors, as

(16) p(Θ^D, τ, λ_{J−1}, …, λ_0, κ_{J−1}, …, κ_0 | x) ∝ τ^{n/2+α_1−1} λ_{J−1}^{n_{J−1}+α_{2,J−1}−1} ⋯ λ_0^{n_0+α_{2,0}−1} κ_{J−1}^{n_{J−1}+α_{3,J−1}−1} ⋯ κ_0^{n_0+α_{3,0}−1} × exp(−(τβ_1 + λ_{J−1}β_{2,J−1} + ⋯ + λ_0β_{2,0} + κ_{J−1}β_{3,J−1} + ⋯ + κ_0β_{3,0})) × ∏_{l=0}^{n_{J−1}−1} |θ^D_{J−1,l}| ((θ^D_{J−1,l})²)^{κ_{J−1}−1} exp(−λ_{J−1}((θ^D_{J−1,l})²)^{κ_{J−1}}/2) × ⋯ × |θ^D_{0,0}| ((θ^D_{0,0})²)^{κ_0−1} exp(−λ_0((θ^D_{0,0})²)^{κ_0}/2) × exp(−√(2τ) Σ_{i=1}^n |x_i − (K^T Θ^D)_i|),
x_i, θ^D_{j,l} ∈ ℝ; λ_{J−1}, …, λ_0, κ_{J−1}, …, κ_0, τ > 0.
The hyperprior parameters α = (α_1, α_{2,J−1}, …, α_{2,0}, α_{3,J−1}, …, α_{3,0}) and β = (β_1, β_{2,J−1}, …, β_{2,0}, β_{3,J−1}, …, β_{3,0}) can be fixed as follows. Let t_{i,j} and r_{i,j} denote the desired expectation and variance of ω_j at resolution j, where i = 1, 2, 3. Solving the equations

(17) E(ω_j) = α_{i,j}/β_{i,j} = t_{i,j}, Var(ω_j) = α_{i,j}/β_{i,j}² = r_{i,j}, i = 1, 2, 3,

gives the corresponding hyperprior parameters α_{i,j} = t_{i,j}²/r_{i,j} and β_{i,j} = t_{i,j}/r_{i,j}.
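With the Gamma(α, rate β) parameterization (E = α/β, Var = α/β²), the moment-matching step is a one-liner; `gamma_hyperparams` is an illustrative name, not from the paper:

```python
def gamma_hyperparams(t, r):
    """Match a Gamma(alpha, rate beta) prior to a target mean t and variance r:
    E = alpha/beta = t and Var = alpha/beta**2 = r give
    alpha = t**2/r and beta = t/r."""
    return t**2 / r, t / r

alpha, beta = gamma_hyperparams(t=2.0, r=0.5)
print(alpha, beta)   # 8.0 4.0: mean 8/4 = 2, variance 8/16 = 0.5 as requested
```

Choosing t_{i,j} and r_{i,j} per resolution level j then fixes all the level-dependent hyperprior parameters at once.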
4. Stationary Approaches
The vital task in wavelet approaches is to choose a basis. A wavelet basis is built from two functions. The first is the scaling or father function ϕ, whose main task is to compute the scaling coefficients. The other is the wavelet or mother function ψ, which is used to calculate the wavelet coefficients. Several wavelet bases with different degrees of smoothness are now available; the Haar basis is the simplest version of the wavelet transform, and several established wavelet families are demonstrated in [25–29]. Stationary wavelet transforms (SWTs) have attracted much attention in many applications over the last few years. In particular, the classical stationary wavelet transform was introduced in [30], while the authors in [31, 32] applied it at that time as the maximal-overlap discrete wavelet transform.
In 1995, Nason extended the discrete wavelet transform under the name "stationary" transform. In the same year, Ronald and David [33] proposed a related tool for stationary wavelet coefficients, sometimes referred to as "cycle spinning." In general, the SWT can be described as "filling in the gaps" between the decimated wavelet coefficients; that is, no computation is skipped between two different shifts of the wavelet coefficients. Nason noted that this leads to an over-determined, redundant representation of the original data (see the example below). The procedure gives a shift-invariant noise-removal tool, which simultaneously shows improvements in reconstruction quality (see Ronald and David). As an example of the SWT, the Haar wavelet is applied to the data x = (x_1, x_2, x_3, x_4). The first- and second-level scaling and detail coefficients can be computed as

(18)
θ^C_{1,l} = (√2/2) [1 1 0 0; 0 1 1 0; 0 0 1 1; 1 0 0 1] (x_1, x_2, x_3, x_4)^T,
θ^D_{1,l} = (√2/2) [1 −1 0 0; 0 1 −1 0; 0 0 1 −1; −1 0 0 1] (x_1, x_2, x_3, x_4)^T,
θ^C_{0,l} = (√2/2) [1 0 1 0; 0 1 0 1; 1 0 1 0; 0 1 0 1] (θ^C_{1,0}, θ^C_{1,1}, θ^C_{1,2}, θ^C_{1,3})^T,

where θ^C_{1,l}, θ^D_{1,l}, and θ^C_{0,l} are the transform coefficients at levels j = 1, 0. As the number N of vanishing moments decreases, the smoothness of the corresponding wavelet decreases. In this paper, the Daubechies father function ϕ and mother function ψ with N = 8 vanishing moments are used to provide a smooth reconstruction.
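The n = 4 Haar example in (18) can be checked directly with circulant filter matrices (a numpy sketch, not the paper's code; the data values are hypothetical). Because the matrices are circulant, circularly shifting the data circularly shifts the coefficients, which is the shift-invariance property of the SWT:

```python
import numpy as np

s = np.sqrt(2) / 2
x = np.array([1.0, 3.0, 2.0, 5.0])        # hypothetical data x1..x4

# Level-1 stationary Haar filters: every circular shift is kept, so each
# level has as many coefficients as observations (no decimation).
H1 = s * np.array([[1, 1, 0, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 1],
                   [1, 0, 0, 1]])          # scaling (low-pass)
G1 = s * np.array([[1, -1, 0, 0],
                   [0, 1, -1, 0],
                   [0, 0, 1, -1],
                   [-1, 0, 0, 1]])         # wavelet (high-pass)

c1 = H1 @ x                                # theta^C_{1,l}
d1 = G1 @ x                                # theta^D_{1,l}

# Level 0 applies the filter to the level-1 scaling coefficients with the
# filter dilated by 2 ("filling in the gaps").
H0 = s * np.array([[1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1]])
c0 = H0 @ c1

print(len(c1), len(d1), len(c0))           # 4 4 4: same length at every level

# Shift invariance: shifting x by one position shifts every coefficient vector.
print(np.allclose(G1 @ np.roll(x, 1), np.roll(d1, 1)))   # True
```

Compare this with the decimated transform, which would keep only two level-1 coefficients and one level-0 coefficient for n = 4.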
The plotting procedure for the stationary wavelet transform is shown in Figure 2; it can be seen that each level j has the same number of wavelet coefficients. Figure 3 shows the scaling and wavelet functions for Daubechies wavelets with N = 8 vanishing moments. Table 1 shows the filter coefficients for the compactly supported Daubechies wavelet with N = 8. Here, we present the idea of Daubechies, omitting some technical details.
Graphical depiction of the stationary wavelet transform. The first row depicts the data, the second row indicates the wavelet coefficients, and the third shows the correspondence to the detail wavelets. h_l and g_l are the low- and high-pass quadrature mirror filters.
Plots of the father ϕ and mother ψ wavelets with N=8 vanishing moments.
Orthogonal Daubechies coefficients for filter number 8.
l	h_l		l	h_l
0	0.0544158422	10	0.0139810279
1	0.3128715909	11	0.0087460940
2	0.6756307363	12	−0.0048703530
3	0.5853546837	13	−0.0003917404
4	−0.0158291053	14	0.0006754494
5	−0.2840155430	15	−0.0001174768
6	0.0004724846	16	0
7	0.1287474266	17	0
8	−0.0173693010	18	0
9	−0.0440882539	19	0
5. Numerical Methods
The goal of Bayesian computation is to extract a posterior sample for the unknown parameters ζ. Computational problems of this kind can be viewed as inverse problems, and several tools can be used to make the estimation more efficient. They include the standard version of the MCMC algorithms, the Metropolis-Hastings tool, used to extract a random sample from the posterior rule p(ζ | x) in (16). The procedure for parameter estimation through the MCMC approach can be found in [34]; for more information, see [35, 36] and more recent works such as [37].
Figure 4 shows a diagram of the proposed procedure. It starts with data corrupted by noise; the data are transformed to wavelet coefficients, which are used to estimate the unknown wavelet coefficients by the suggested method; the underlying signal is then calculated by inverting the estimated wavelet coefficients. The main idea of the MCMC algorithms is that a parameter can take any value in the parameter space Ω, say ζ_i. At each step, MCMC creates values ζ_i^{(1)}, ζ_i^{(2)}, …, ζ_i^{(r)}. Each parameter is updated separately, so the MCMC algorithm amounts to a random walk. More precisely, the general framework of the tool is defined as follows:
Start with an initial value Θ^{C,D(0)} = 0, and for each level j = 0, 1, …, J − 1 set initial prior parameters ω^{(0)} = (κ_0^{(0)}, κ_1^{(0)}, …, κ_{J−1}^{(0)}, β_0^{(0)}, β_1^{(0)}, …, β_{J−1}^{(0)}).
For iterations k = 1, …, K:
Diagram showing the structure of the suggested methods.
Generate a new value ω_j* = ω_j^{(k−1)} + ɛ, where ɛ ∼ N(0, ς_{ω,j}^{2(k−1)}). Hence, a proposal is made around the current value of the prior parameters, with a variance parameter for each resolution j chosen to obtain an acceptable convergence rate.
Compute the posterior distribution in (16).
For s = 1, 2, …, n, that is, for each wavelet coefficient θ^D_{j,l}:
Generate a new wavelet coefficient θ^D_{j,l}* = θ^{D(s−1)}_{j,l} + ɛ, where ɛ ∼ N(0, ς_{θ,j}^{2(s−1)}).
Again, compute the posterior distribution in (16).
Generate u ∼ U(0, 1).
When α(ζ* | ζ^{(s,k)}) = min{1, p(θ^D_{0,0}, θ^D_{1,0}, …, θ^D_{j,l}*, …, θ^D_{J−1,n−1}, λ_{J−1}, …, λ_j*, …, λ_0, κ_{J−1}, …, κ_j*, …, κ_0 | x) / p(θ^D_{0,0}, θ^D_{1,0}, …, θ^{D(s−1)}_{j,l}, …, θ^D_{J−1,n−1}, λ_{J−1}, …, λ_j^{(k−1)}, …, λ_0, κ_{J−1}, …, κ_j^{(k−1)}, …, κ_0 | x)} > u, accept the proposal and set θ^{D(s)}_{j,l} = θ^D_{j,l}* and ω_j^{(k)} = ω_j*; otherwise, set θ^{D(s)}_{j,l} = θ^{D(s−1)}_{j,l} and ω_j^{(k)} = ω_j^{(k−1)}.
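The update above is a standard random-walk Metropolis step. As an illustration only (a minimal one-parameter version, not the paper's full level-dependent sampler), the following sketch uses a toy standard-normal log-density in place of log p(ζ | x); the names `metropolis` and `log_post` are ours:

```python
import numpy as np

def metropolis(log_post, zeta0, step, K, rng):
    """Random-walk Metropolis: propose zeta* = zeta + N(0, step^2) and accept
    with probability min(1, p(zeta*|x)/p(zeta|x))."""
    zeta = float(zeta0)
    lp = log_post(zeta)
    chain = np.empty(K)
    accepted = 0
    for k in range(K):
        prop = zeta + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        # Comparing log(u) with the log-ratio is the min(1, ratio) > u rule.
        if np.log(rng.uniform()) < lp_prop - lp:
            zeta, lp = prop, lp_prop
            accepted += 1
        chain[k] = zeta
    return chain, accepted / K

# Toy target: standard normal log-density (illustrative only).
rng = np.random.default_rng(1)
chain, rate = metropolis(lambda z: -0.5 * z**2, 0.0, step=2.4, K=20000, rng=rng)
print(round(rate, 2))   # step size tuned so the acceptance rate is moderate
```

In the paper's scheme this update is applied coefficient by coefficient and level by level, with the proposal variances ς² tuned toward the 20-30% acceptance rate recommended in [38].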
Hence, all proposals are generated from a Gaussian distribution whose mean is the current value of the parameter and whose variance is updated based on the acceptance rate. It is essential to propose random values around the current value, both below and above it, with the variances ς_{ζ,j}² chosen according to the acceptance rate; the authors in [38] state that an acceptance rate between 20% and 30% is desirable. We considered a gamma prior density for the variance of the noise σ², where the starting point is computed from the finest level of wavelet coefficients, θ^D_{J−1} (see Nason).
Once the sample has been collected from the posterior rule, the posterior mean of ζ can be calculated as

(19) ζ̂ = ζ̄ = (1/(K − M)) Σ_{k=M+1}^K ζ^{(k)},

and the posterior variance as

(20) σ̂² = (1/(K − M)) Σ_{k=M+1}^K (ζ^{(k)} − ζ̄)²,

where K and M are the numbers of runs and burn-in iterations, respectively; from these, point and interval estimates can be computed. For the MAP rule, the previous procedure is changed into a simulated annealing process of Geman and Geman, which can give an answer more quickly than the posterior mean. More precisely, the MAP estimate is taken as the final iteration, θ̂_MAP = θ̂^{(K)}; in other words, the sample mean and variance cannot be computed. The maximum a posteriori (MAP) estimator is defined as

(21) ζ̂_MAP = argmax_ζ p(ζ | x)^{(K)},

where K indicates the final iteration of the run of the MCMC algorithm.
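Equations (19) and (20) can be sketched directly; `posterior_summaries` is an illustrative helper name, and the toy chain below is hypothetical, not the paper's output:

```python
import numpy as np

def posterior_summaries(chain, M):
    """Posterior mean and variance, equations (19)-(20):
    discard the first M burn-in draws of a chain of length K."""
    chain = np.asarray(chain, dtype=float)
    K = len(chain)
    kept = chain[M:]
    pm = kept.sum() / (K - M)                 # (19)
    var = ((kept - pm) ** 2).sum() / (K - M)  # (20)
    return pm, var

chain = [0.0, 0.0, 1.0, 3.0, 2.0, 2.0]  # toy draws; the first two are burn-in
pm, var = posterior_summaries(chain, M=2)
print(pm, var)   # 2.0 0.5
```

The MAP variant keeps no such summaries: under simulated annealing only the final state of the chain is retained.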
6. Simulation
We now investigate the proposed rule and compare the results with some established wavelet-based methods. The authors in [39] introduced four test signals: bumps, Doppler, heavisine, and blocks. These functions were corrupted by independent Gaussian noise N_n(0, σ²I_n). Different sample sizes, n = 64 and 128, were studied to investigate the proposed method's performance on the four simulated test functions. Various wavelet bases were used: Daubechies with N = 8 for heavisine, Doppler, and bumps, and the Haar basis for blocks. The starting level was j_0 = 3, as recommended in [40]. The results of the estimation were evaluated by the average mean squared error (AMSE), defined as

(22) AMSE = (1/(KN)) Σ_{k=1}^K Σ_{i=1}^N (θ̂_{k,i} − θ_i)²,

where N and K are the number of data points and the number of runs of the MCMC algorithm, respectively, and θ̂_{k,i}, i = 1, …, N, denotes the proposed estimate at the k-th run.
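The AMSE criterion in (22) is straightforward to compute; the following sketch uses hypothetical estimates, not the paper's simulation output, and `amse` is our name:

```python
import numpy as np

def amse(theta_hat, theta):
    """Average mean squared error, equation (22):
    theta_hat has one row of length N per MCMC run (K rows in total)."""
    theta_hat = np.atleast_2d(np.asarray(theta_hat, dtype=float))
    K, N = theta_hat.shape
    return ((theta_hat - np.asarray(theta, dtype=float)) ** 2).sum() / (K * N)

theta = np.array([1.0, 2.0])            # true values, N = 2
theta_hat = np.array([[1.1, 2.0],
                      [0.9, 1.8]])      # two runs, K = 2
print(amse(theta_hat, theta))           # (0.01 + 0 + 0.01 + 0.04) / 4 ≈ 0.015
```

Averaging over both the runs and the data points makes AMSE comparable across the different sample sizes n used in the simulation.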
The proposed estimators were compared with various methods: the Bayesian wavelet thresholding (BAYES.THR) method of Abramovich and Silverman, the ABWS rule of Chipman, Kolaczyk, and McCulloch, and the BAMS rule of Vidakovic and Ruggeri. Table 2 shows the AMSE results of the simulation for the competing decimated-wavelet methods and the proposed stationary-wavelet estimators. Two bases were used: the basis with N = 0 vanishing moments (Haar) and the Daubechies wavelets with N = 8 vanishing moments. The proposed technique consistently gives the best reconstructions. The improvement is clearest when the sample size is large, because larger samples contain more information about the features of the signal. In general, the MAP method provides a fair reconstruction of the test functions; even its worst results are better than those of the competing wavelet rules. The main drawback of the MAP estimate is that confidence intervals cannot be computed, because only the last posterior sample is kept.
The results of the simulation based on different methods.
n	Signal	σ	BAYES.THR	ABWS	BAMS	SWTMAP	SWTPM
64	Block	0.1	7.8638	0.0168	0.0144	0.0183	0.0124
		0.4	8.8873	0.1615	0.0756	0.0470	0.0446
		0.8	9.9290	0.6249	0.3282	0.0498	0.0512
	Doppler	0.1	4.0397	0.0142	0.0467	0.0202	0.0144
		0.4	4.0528	0.1600	0.2326	0.0612	0.0581
		0.8	4.4217	0.6146	0.6219	0.0834	0.0707
	Heavisine	0.1	0.1215	0.0192	0.0197	0.0146	0.0115
		0.4	0.2981	0.1541	0.3688	0.0344	0.0375
		0.8	0.5775	0.6227	0.6369	0.0642	0.0718
	Bumps	0.1	10.6644	0.0135	0.0110	0.0018	0.0026
		0.4	10.8529	0.3605	0.0865	0.0210	0.0281
		0.8	11.5967	0.6120	0.3033	0.0684	0.0674
128	Block	0.1	7.874	0.0151	0.0134	0.0092	0.0120
		0.4	8.8243	0.1585	0.1615	0.0427	0.0482
		0.8	9.9474	0.6196	0.5358	0.0447	0.0594
	Doppler	0.1	1.2554	0.0105	0.0218	0.0198	0.0113
		0.4	1.3098	0.1575	0.1440	0.0420	0.0431
		0.8	1.7197	0.6317	0.4826	0.0861	0.0682
	Heavisine	0.1	0.0541	0.0097	0.0191	0.0128	0.0146
		0.4	0.1404	0.1572	0.3324	0.0317	0.0370
		0.8	0.3421	0.6324	0.6257	0.0506	0.0500
	Bumps	0.1	16.9746	0.0102	0.0159	0.0014	0.0084
		0.4	17.2119	0.1592	0.0989	0.0207	0.0276
		0.8	17.6878	0.6350	0.3713	0.0674	0.0650
7. Application to Medical Data
The suggested method is applied to real-world inductance plethysmography data to evaluate its performance against the state-of-the-art methods. The Department of Anaesthesia at the Bristol Royal Infirmary collected these observations, which comprise 2048 equally spaced points. Readers can obtain the data within WaveThresh using data(BabyECG); the associated sleep-state record can be loaded using data(BabySS). Figure 5 shows plots of BabyECG and the sleep state. The aim of the investigation of BabyECG was to identify the sleep state successfully from the observations. These data have been studied by other authors (for example, [41]). The reconstruction using the unbalanced Haar approach (red line) is illustrated in Figure 6; it is not easy to describe every moment, or the sleep state of the babies in general, using the unbalanced Haar method. Figures 7 and 8 show the reconstructions of the underlying feature with the MAP method using the Haar wavelet basis and the Daubechies wavelet with N = 8 vanishing moments. In our reconstruction, the value of the shape parameter κ_j is restricted to the interval (0, 0.4). Table 3 shows the results using the MAP and PM estimators: as the level j decreases, the value of κ increases, whereas the value of the parameter λ changes only slightly.
Plots of BabyECG data (solid line) and sleep state (dashed line).
Plots of the reconstructions using the unbalanced Haar estimator (red line) and BabyECG data (black line).
Plots of the reconstructions using the MAP estimator with a Haar basis.
Plots of the reconstructions using the MAP estimator with N=8 vanishing moments.
Estimates of the parameters κ_j and λ_j for the BabyECG data using the MAP and PM estimators.
Estimator	N	κ4	κ5	κ6	κ7	κ8	κ9	κ10	κ11
MAP	0	0.3811	0.2259	0.2307	0.2249	0.2259	0.2287	0.0103	0.0121
MAP	8	0.3478	0.2230	0.2188	0.2243	0.2172	0.2101	0.0003	0.0001
PM	0	0.3992	0.2220	0.2211	0.2243	0.2201	0.2322	0.0002	0.0013
PM	8	0.2718	0.2150	0.2347	0.2215	0.2131	0.2236	0.0003	0.0001

Estimator	N	λ4	λ5	λ6	λ7	λ8	λ9	λ10	λ11
MAP	0	0.0686	0.0411	0.1228	0.0943	0.1230	0.1798	0.1321	0.0614
MAP	8	0.1053	0.0648	0.1016	0.0519	0.1430	0.0548	0.0229	0.0941
PM	0	0.0232	0.0966	0.0178	0.0646	0.0214	0.1528	0.1007	0.0232
PM	8	0.1600	0.1376	0.1534	0.0647	0.0257	0.1622	0.0351	0.0476
8. Conclusion
In this article, we have shown various ways in which Bayesian rules and wavelet methods can be used successfully in a practical problem. A procedure for estimating the parameters κ and λ of the double generalized Rayleigh distribution was applied to the BabyECG sample. The approach combines a wavelet method, treating each level j independently, with Bayesian estimation; gamma prior distributions were assumed for the parameters. Bayesian point estimates were obtained for artificial samples under squared-error loss. The simulation studies show that the proposed rules work well and that the proposed Bayesian estimates outperform existing state-of-the-art methods on the test signal functions by reducing the AMSE. Numerical results were obtained to compare the performance of the methods, and the main observations are summarized as follows:
From the results in Table 2, the suggested method provides excellent results for artificial data.
Estimates under the PM and MAP methods are better than those of the other established wavelet denoising methods according to the AMSE.
The suggested method allows the main features of real data to be described, especially when the number of observations is large.
This paper has confirmed that the proposed wavelet approach provides an attractive alternative to other established wavelet methods, especially when the underlying signals are inhomogeneous.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The author declares that there are no conflicts of interest.
Acknowledgments
This study was funded by Taif University Researchers Supporting Project number TURSP-2020/279, Taif University, Taif, Saudi Arabia.
References
[1] H. A. Chipman, E. D. Kolaczyk, and R. E. McCulloch, "Adaptive Bayesian wavelet shrinkage," 1997, vol. 92, no. 440, pp. 1413–1421.
[2] M. A. Clyde and E. I. George, "Empirical Bayes estimation in wavelet nonparametric regression," 1999, pp. 309–322.
[3] F. Abramovich, T. C. Bailey, and T. Sapatinas, "Wavelet analysis and its statistical applications," 2000, vol. 49, no. 1.
[4] F. Abramovich, P. Besbeas, and T. Sapatinas, "Empirical Bayes approach to block wavelet function estimation," 2002, vol. 39, pp. 435–451.
[5] J. F. Lawless, John Wiley & Sons, Hoboken, NJ, USA, 2011.
[6] D. Kundu and B. Pradhan, "Estimating the parameters of the generalized exponential distribution in presence of hybrid censoring," 2009, vol. 38, no. 12, pp. 2030–2041.
[7] N. Balakrishnan and M. Kateri, "On the maximum likelihood estimation of parameters of Weibull distribution based on complete and censored data," 2008, vol. 78, no. 17, pp. 2971–2975.
[8] J. Arturo, "Bayesian inference from type II doubly censored Rayleigh data," 2000, vol. 48, no. 4, pp. 393–399.
[9] S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," 1989, vol. 11, no. 7, pp. 674–693.
[10] G. P. Nason and B. W. Silverman, "The discrete wavelet transform in S," 1994, vol. 3, no. 2, pp. 163–191.
[11] G. P. Nason and B. W. Silverman, "The stationary wavelet transform and some statistical applications," 1995, vol. 103, pp. 281–299.
[12] J. Klapper and S. Barber, "Wavelet analysis of high performance liquid chromatography data," in Proceedings of the 58th WSC of the ISI, Dublin, Ireland, August 2011.
[13] F. Abramovich and B. W. Silverman, "Wavelet decomposition approaches to statistical inverse problems," 1998, vol. 85, no. 1, pp. 115–129.
[14] G. Nason, Springer, New York, NY, USA, 2010.
[15] F. Ruggeri and B. Vidakovic, "Bayesian modeling in the wavelet domain," 2005, vol. 25, pp. 315–338.
[16] B. Vidakovic, "Nonlinear wavelet shrinkage with Bayes rules and Bayes factors," 1998, vol. 93, no. 441, pp. 173–179.
[17] B. Vidakovic and F. Ruggeri, "BAMS method: theory and simulations," 2001, vol. 63, no. 2, pp. 234–249.
[18] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of state calculations by fast computing machines," 1953, vol. 21, no. 6, pp. 1087–1092.
[19] S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," 1984, vol. 6, no. 6, pp. 721–741.
[20] A. Gelman and D. B. Rubin, "Inference from iterative simulation using multiple sequences," 1992, pp. 457–472.
[21] R. G. Aykroyd and M. Wang, Statistical Image Reconstruction, Woodhead Publishing, Cambridge, UK, 2015.
[22] S. Chib and E. Greenberg, "Understanding the Metropolis-Hastings algorithm," 1995, vol. 49, no. 4, pp. 327–335.
[23] R. G. Aykroyd, M. A. W. Mahmoud, and H. M. Aljohani, "Bayesian analysis using MCMC methods of record values based on a new generalised Rayleigh distribution," 2017, vol. 21, no. 2, pp. 49–66.
[24] D. F. Andrews and C. Mallows, "Scale mixtures of normal distributions," 1974, pp. 99–102.
[25] I. Daubechies, Ten Lectures on Wavelets, 1992.
[26] R. Tibshirani, "Regression shrinkage and selection via the lasso," 1996, vol. 58, no. 1, pp. 267–288.
[27] S. Barber and G. P. Nason, "Real nonparametric regression using complex wavelets," 2004, vol. 66, no. 4, pp. 927–939.
[28] L. Cutillo, Y. Y. Jung, F. Ruggeri, and B. Vidakovic, "Larger posterior mode wavelet thresholding and applications," 2008, vol. 138, no. 12, pp. 3758–3773.
[29] N. Reményi and B. Vidakovic, "Wavelet shrinkage with double Weibull prior," 2013, vol. 44, no. 1, pp. 88–104.
[30] M. Holschneider, R. Kronland-Martinet, J. Morlet, and P. Tchamitchian, "A real-time algorithm for signal analysis with the help of the wavelet transform," in J. M. Combes, A. Grossmann, and P. Tchamitchian (eds.), Springer, Heidelberg, Germany, 1989, pp. 286–297.
[31] D. P. Percival, "On estimation of the wavelet variance," 1995, vol. 82, no. 3, pp. 619–631.
[32] J.-C. Pesquet, H. Krim, and H. Carfantan, "Time-invariant orthonormal wavelet representations," 1996, vol. 44, no. 8, pp. 1964–1970.
[33] R. R. Coifman and D. L. Donoho, "Translation-invariant de-noising," in A. Antoniadis and G. Oppenheim (eds.), Springer, Heidelberg, Germany, 1995, pp. 125–150.
[34] C. Robert and G. Casella, "A short history of Markov chain Monte Carlo: subjective recollections from incomplete data," 2011, vol. 26, no. 1, pp. 102–115.
[35] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, "Introducing Markov chain Monte Carlo," 1996, pp. 1–19.
[36] J. S. Liu, Springer-Verlag, Berlin, Heidelberg, Germany, 2001.
[37] S. Brooks, A. Gelman, G. Jones, and X.-L. Meng, Chapman and Hall/CRC Press, London, UK, 2011.
[38] G. O. Roberts, A. Gelman, and W. R. Gilks, "Weak convergence and optimal scaling of random walk Metropolis algorithms," 1997, vol. 7, no. 1, pp. 110–120.
[39] D. L. Donoho and I. M. Johnstone, "Ideal spatial adaptation by wavelet shrinkage," 1994, vol. 81, no. 3, pp. 425–455.
[40] A. Antoniadis, J. Bigot, and T. Sapatinas, "Wavelet estimators in nonparametric regression: a comparative simulation study," 2001, vol. 6, pp. 1–83.
[41] G. P. Nason, "Wavelet shrinkage using cross-validation," 1996, pp. 463–479.