A Seismic Blind Deconvolution Algorithm Based on Bayesian Compressive Sensing

Compressive sensing in seismic signal processing is the reconstruction of the unknown reflectivity sequence from incoherent measurements of the seismic records. Blind seismic deconvolution is the recovery of the reflectivity sequence from the seismic records when the seismic wavelet is unknown. In this paper, a seismic blind deconvolution algorithm based on Bayesian compressive sensing is proposed. The proposed algorithm combines compressive sensing and blind seismic deconvolution to obtain the reflectivity sequence and the unknown seismic wavelet from compressive sensing measurements of the seismic records. A hierarchical Bayesian model and an optimization method are used to estimate the unknown reflectivity sequence, the seismic wavelet, and the unknown parameters (hyperparameters). In both simulation and field-data experiments, the estimates produced by the proposed algorithm show better agreement with the true values.


Introduction
Digital seismic signal receiving devices have been developed for many years. In a traditional seismic signal receiving system, an accurate signal is obtained under the Nyquist sampling theorem, which requires the seismic signal sampling rate to be at least twice the signal's highest frequency. Measurement reduction, such as compressive sensing (CS), can make seismic recording devices simpler, smaller, and cheaper. CS is a signal processing method that can reconstruct the original signal from a small number of random incoherent linear measurements [1, 2]. CS exploits the sparseness or compressibility of the signal in some domain, recovering the entire signal from relatively few measurements [3]. CS has become very popular in recent years because it reduces the amount of data to be transmitted and offers potential benefits in many practical applications [4, 5].
The CS problem is to compress the original data, while the inverse CS problem is to reconstruct the original data from the compressed data. There exist many effective methods for solving the CS problem. Some of these methods are based on energy constraints [6, 7], and others are based on the Bayesian framework [8, 9]. Bayesian compressive sensing provides a Bayesian framework for solving the inverse problem of compressive sensing. The basic Bayesian CS algorithm exploits the relevance vector machine and then marginalizes the noise variance for improved robustness [10, 11].
Deconvolution, as a seismic data processing technique, has been widely used to recover the reflectivity sequence from seismic records [12-15]. The Bernoulli-Gaussian (BG) model introduced in [16] is a popular model for the reflectivity sequence. It is very intuitive, and its statistical properties are easily exploited in Bayesian methods [17-19] such as Gibbs sampling and MCMC (Markov chain Monte Carlo) techniques. In statistical models, the expectation-maximization (EM) algorithm is an iterative method for finding maximum likelihood (ML) or maximum a posteriori (MAP) estimates of parameters when the model depends on unobserved latent variables [20]. The EM algorithm is therefore commonly used to estimate the unknown parameters (the wavelet, the parameters of the BG model, and the additive noise variance) in blind seismic deconvolution. Each EM iteration alternates between an expectation (E) step and a maximization (M) step.
The E step forms the expectation of the log-likelihood, while the M step computes the parameters that maximize the expected log-likelihood found in the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.
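The E/M alternation described above can be made concrete with a toy example. The following sketch (an illustration on assumed synthetic data, not the paper's estimator) runs EM on a two-component Gaussian mixture with known unit variances, where the latent variable is the unobserved component label of each sample:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Toy EM for a 2-component Gaussian mixture with known unit variances.

    E step: posterior responsibilities of the latent component labels.
    M step: re-estimate the two means and the mixing weight.
    """
    rng = np.random.default_rng(0)
    mu = rng.normal(size=2)   # initial (arbitrary) component means
    pi = 0.5                  # initial mixing weight
    for _ in range(n_iter):
        # E step: responsibilities r_i = P(z_i = 1 | x_i, theta);
        # the shared Gaussian normalizing constant cancels in the ratio
        p1 = pi * np.exp(-0.5 * (x - mu[0]) ** 2)
        p2 = (1 - pi) * np.exp(-0.5 * (x - mu[1]) ** 2)
        r = p1 / (p1 + p2)
        # M step: maximize the expected complete-data log-likelihood
        mu[0] = np.sum(r * x) / np.sum(r)
        mu[1] = np.sum((1 - r) * x) / np.sum(1 - r)
        pi = np.mean(r)
    return mu, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
mu, pi = em_gaussian_mixture(x)
```

In blind deconvolution the same structure applies, with the BG reflectivity labels playing the role of the latent variables and the wavelet and noise variance playing the role of the mixture parameters.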
The Bayesian method is a model-based approach that estimates unknown parameters using prior information, so it can handle observation noise and missing-data problems [21]. Moreover, sparsity, an important characteristic of many signals, has been exploited in many application areas, including seismic deconvolution; see [22-24]. An alternative approach to the deconvolution of sparse signals models a sparse signal with a Gaussian random variable whose variance is an inverse-Gamma random variable [25-29]. It exploits variational Bayesian techniques for closed-form approximate posterior inference [30].
In this paper, the reflectivity sequence and the seismic wavelet are recovered from CS measurements of the seismic records. The Bayesian framework is applied to the CS problem in seismic signal processing. Blind seismic deconvolution and an optimization method are combined to estimate the unknown reflectivity sequence, the seismic wavelet, and the hyperparameters. The organization of this paper is as follows. In Section 2, the Bayesian model with CS measurement of the blind seismic restoration problem is given. In Section 3, the observation model and the prior model on the signal coefficients are presented. In Section 4, the Bayesian inference is presented, and an optimization method is used to solve for the unknown reflectivity sequence and the parameters. In Section 5, ML and Bayesian deconvolution with CS measurement are compared on both simulated and field data. In the last section, we draw conclusions.

Problem Formulation
A seismic record is the convolution of the seismic wavelet with the reflectivity sequence. A standard seismic record model is given in matrix-vector form by

y = Sr + n,  (1)

where y = [y(1), ..., y(N)]^T denotes the seismic observation vector, r = [r(1), ..., r(N)]^T denotes the unknown seismic reflectivity sequence vector, and n = [n(1), ..., n(N)]^T is an additive white Gaussian noise vector. S, which represents the seismic wavelet, is an N × N block Toeplitz matrix obtained from a block-circulant matrix. The seismic wavelet and the reflectivity vector are both unknown, so the deconvolution is a blind problem.
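As an illustration of the convolutional model, the following sketch builds a Toeplitz wavelet matrix S from an assumed Ricker-type wavelet and generates a synthetic trace y = Sr + n; the wavelet shape, lengths, and noise level are assumptions for illustration, not values from the paper:

```python
import numpy as np

def convolution_matrix(wavelet, n):
    """Build the n x n Toeplitz matrix S so that S @ r equals the first n
    samples of np.convolve(wavelet, r)."""
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - len(wavelet) + 1), i + 1):
            S[i, j] = wavelet[i - j]
    return S

def ricker(length=21, width=0.08, dt=0.004):
    """A simple Ricker-type wavelet (assumed wavelet shape for illustration)."""
    t = (np.arange(length) - length // 2) * dt
    a = (np.pi * t / width) ** 2
    return (1 - 2 * a) * np.exp(-a)

rng = np.random.default_rng(0)
n = 100
r = np.zeros(n)
spikes = rng.choice(n, size=8, replace=False)   # sparse BG-like reflectivity
r[spikes] = rng.normal(0, 1, size=8)
S = convolution_matrix(ricker(), n)
y = S @ r + 0.01 * rng.normal(size=n)           # y = S r + n
```

The matrix-vector product S @ r reproduces the (truncated) linear convolution, which is what makes the deconvolution problem expressible as a linear inverse problem.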
Considering the sparsity of the seismic reflectivity vector, r is compressible in a linear wavelet basis Ψ; that is, r = Ψw, where w is an N × 1 sparse vector. Reference [18] proposed sparse schemes that capture the convolution structure of pulse streams by defining

g = Φy,  (2)

which resorts to an M × N CS matrix Φ to obtain M ≪ N measurements.
Equation (1) can then be rewritten in the following form:

y = Φ̃w + n,  (3)

where Φ̃ = SΨ. Based on the sparsity of w, the ℓ1 norm of w is used in the inverse problem. The estimate of the original signal can be obtained by solving the following optimization problem [31]:

ŵ = arg min_w ‖w‖₁  subject to  ‖y − Φ̃w‖₂ ≤ ε,  (4)

where ‖·‖₁ represents the ℓ1 norm and ε bounds the noise level.
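For reference, this kind of ℓ1-regularized recovery can be solved, for example, by iterative soft thresholding (ISTA) applied to the equivalent Lagrangian form min_w ½‖y − Aw‖₂² + τ‖w‖₁. This is a standard sketch with assumed problem sizes, not the Bayesian solver used in this paper:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, tau, n_iter=3000):
    """ISTA for min_w 0.5*||y - A w||_2^2 + tau*||w||_1."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - y)
        w = soft_threshold(w - grad / L, tau / L)
    return w

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random CS-style measurement matrix
w_true = np.zeros(n)
w_true[rng.choice(n, k, replace=False)] = rng.normal(0, 2, k)
y = A @ w_true                              # noiseless measurements
w_hat = ista(A, y, tau=0.01)
```

With M ≪ N measurements, the ℓ1 penalty is what makes recovery of the k-sparse vector possible at all; a least-squares solution alone would be underdetermined.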

Bayesian Modeling
The prior distribution of the unknown signal w is assumed to be p(w | α). The observation y is a random process with distribution p(y | w, β), where β⁻¹ denotes the unknown noise variance. The seismic record and the reflectivity thus depend on the unknown hyperparameters α and β. Following the Bayesian framework for the CS inverse problem, the joint posterior distribution can be expressed as

p(w, α, β | y) ∝ p(y | w, β) p(w | α) p(α) p(β).

In Bayesian estimation, all unknown parameters are assumed to be random variables with specific distributions. Our goal is to estimate the parameters S, w, α, and β from y. The parameter S can be estimated by the stochastic expectation-maximization algorithm of Rosec et al. [32]. The following shows how to obtain the other parameters.

Seismic Observation Modeling.
By assuming that the seismic data noise in (2) is Gaussian with zero mean and variance β⁻¹, the conditional distribution of the seismic record is

p(y | w, β) = N(y | Φ̃w, β⁻¹I),  (5)

with a Gamma prior distribution on β:

p(β) = Γ(β | a_β, b_β).  (6)

The Gamma distribution for a hyperprior x is defined as

Γ(x | a, b) = (b^a / Γ(a)) x^{a−1} exp(−bx),  x > 0,  (7)

where a is the shape parameter and b is the inverse scale (rate) parameter. These parameters are also assumed to be random variables; we show how they are calculated in the following subsection. The Gamma distribution has the following mean and variance:

E[x] = a/b,  var[x] = a/b².  (8)

Signal Modeling.

In [33], since the reflectivity sequence is also sparse, w can be given the following prior model:

p(w | α) = ∏_{i=1}^N N(w_i | 0, α_i),  (9)

where α = (α₁, α₂, ..., α_N). The hyperprior of the hyperparameter α_i is

p(α_i | λ) = Γ(α_i | 1, λ/2).  (10)

Based on (9) and (10), marginalizing the α_i gives a Laplace-type prior:

p(w | λ) = ∏_{i=1}^N (√λ/2) exp(−√λ |w_i|).  (11)

Finally, the hyperparameter λ is modeled with the following Gamma hyperprior:

p(λ | ν) = Γ(λ | ν/2, ν/2).  (12)
The model has a three-stage form. The first stage is the prior distribution of w in (9). The second stage is the prior distribution of α in (10). Finally, (12) gives the prior on λ. So we can first sample a Γ(ν/2, ν/2) distribution to get λ, then sample a Γ(1, λ/2) distribution N times to get α_i, i = 1, ..., N, and at last sample a N(0, α_i) distribution to get each w_i.
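The three-stage sampling procedure just described can be sketched directly. The value of ν and the sample size are assumptions for illustration; note that numpy's Gamma sampler is parametrized by a scale, so a rate b corresponds to scale 1/b:

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n = 4.0, 10000

# Stage 3: lambda ~ Gamma(shape=nu/2, rate=nu/2)  -> scale = 2/nu
lam = rng.gamma(shape=nu / 2, scale=2 / nu)

# Stage 2: alpha_i ~ Gamma(shape=1, rate=lam/2), i.e. exponential with mean 2/lam
alpha = rng.gamma(shape=1.0, scale=2 / lam, size=n)

# Stage 1: w_i ~ N(0, alpha_i), with alpha_i acting as the variance
w = rng.normal(0.0, np.sqrt(alpha))
```

Because the variance of each w_i is itself exponentially distributed, the marginal of w_i is Laplace-like: zero-mean, heavy-tailed, and therefore sparsity-promoting, which is exactly the behavior wanted for a reflectivity prior.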
Based on the above, the joint distribution can be written as

p(y, w, α, β, λ) = p(y | w, β) p(β) p(w | α) p(α | λ) p(λ),  (13)

where p(y | w, β), p(β), p(w | α), p(α | λ), and p(λ) are defined in (5), (6), (9), (10), and (12), respectively. The seismic record variables of the proposed model are {y_i}. The parameters are {w_i}, while the hyperparameters are {α_i} and β. The parameter of the hyperparameter model is λ. The dependencies among the random variables that define the proposed Bayesian compressive sensing model are shown in Figure 1.
Given the initial estimates q₁(w), q₁(α), q₁(β), and q₁(λ), we proceed to calculate the distributions q_k(w), q_{k+1}(α), q_{k+1}(β), and q_{k+1}(λ) (k = 1, 2, ...) until a convergence criterion is met. The means of these distributions are then used to recalculate the distributions of the reflectivity. The performance of the algorithm depends heavily on the noise level and on the initial distributions used in the iterative procedure.

Bayesian Inference
The Bayesian inference posterior distribution is well known; that is,

p(w, α, β, λ | y) = p(y, w, α, β, λ) / p(y).  (14)

However, the posterior p(w, α, β, λ | y) cannot be obtained in closed form because the normalizing constant p(y) = ∫ p(y, w, α, β, λ) dw dα dβ dλ is not easy to calculate directly. A suitable approximate method is therefore needed to perform the estimation. The evidence procedure is exploited to complete the Bayesian inference in this paper.
Bayesian inference via the evidence procedure is based on the following decomposition:

p(w, α, β, λ | y) = p(w | y, α, β) p(α, β, λ | y),  (15)

with

p(w | y, α, β) = N(w | μ, Σ),  (16)

Σ = (βΦ̃ᵀΦ̃ + Λ⁻¹)⁻¹,  μ = βΣΦ̃ᵀy,  (17)

where Λ = diag(α₁, ..., α_N). Since the distribution p(α, β, λ | y) = p(y, α, β, λ)/p(y) is proportional to p(y, α, β, λ), the hyperparameters can be obtained by maximizing the joint distribution p(y, α, β, λ). Marginalizing w gives

p(y | α, β) = N(y | 0, C),  (19)

where I is the N × N identity matrix and C = β⁻¹I + Φ̃ΛΦ̃ᵀ.
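The marginal likelihood maximized by the evidence procedure can be sketched as follows. This assumes the variance parametrization w_i ~ N(0, α_i), under which marginalizing w gives y ~ N(0, C) with C = β⁻¹I + Φ̃ diag(α) Φ̃ᵀ; it is an illustrative sketch, and the symbol names are assumptions where the source is ambiguous:

```python
import numpy as np

def log_evidence(y, Phi, alpha, beta):
    """Type-II log-marginal-likelihood log p(y | alpha, beta) for the
    linear Gaussian model y = Phi w + n, with w_i ~ N(0, alpha_i) and
    n ~ N(0, beta^-1 I).

    Marginalizing w yields y ~ N(0, C), C = beta^-1 I + Phi diag(alpha) Phi^T.
    The evidence procedure chooses the hyperparameters maximizing this value.
    """
    m = len(y)
    C = np.eye(m) / beta + Phi @ np.diag(alpha) @ Phi.T
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (m * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))
```

In practice one would alternate between updating the posterior moments (μ, Σ) of w and re-estimating (α, β, λ) to increase this quantity, rather than optimizing it directly.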
Taking the logarithm of (19) and then maximizing the result, we obtain the objective function in (20). Taking the logarithm of (21) gives (22), with the quantities in (23) defined from the posterior statistics. Combining (23) and (17) yields (24). Therefore, differentiating (24) with respect to α_i, we obtain (25), where ⟨w_i²⟩ = μ_i² + Σ_ii and Σ_ii denotes the i-th diagonal element of Σ. Setting (25) to zero, we obtain the solution for α_i in (26). Similarly, the other hyperparameters β and λ are updated in the manner given by (27). The hyperparameter ν satisfies (28), where ψ(x) = d ln Γ(x)/dx denotes the digamma function.

Experimental Results
In order to demonstrate the performance of the proposed Bayesian compressive sensing algorithm, we present experimental results on both simulated and field data in this section.

Simulation Data.
The proposed Bayesian compressive sensing algorithm can improve estimation quality compared with the ML method and the standard Bayesian method in situations with noise and incomplete data. In this section, we support these theoretical arguments with simulation experiments on one-dimensional synthetic signals.
The proposed Bayesian compressive sensing algorithm is terminated when the criterion ‖θ(k) − θ(k+1)‖₂ < 10⁻⁴ is satisfied, where θ collects the estimated parameters. The estimation quality of the three algorithms is compared in the following three parts: reflectivity sequence recovery, parameter estimation, and the performance indices nMSE. Let us define the signal-to-noise ratio (SNR) as

SNR = 10 log₁₀(P_s / σ_n²),

where P_s = L⁻¹ Σ_{i=1}^L s_i² is the mean seismic wavelet energy, L is the seismic wavelet length, and σ_n² is the noise variance. A reflectivity sequence with 100 samples generated by the BG model is selected randomly, as shown in Figure 2. The impulses are well detected and localized. It can be remarked that the three algorithms show good robustness with respect to the SNR decrease. Furthermore, to study the improvements brought by the proposed Bayesian compressive sensing algorithm, we consider the following performance indices:

nMSE₁ = ‖ŝ − s‖² / ‖s‖²,  nMSE₂ = ‖ŷ − y‖² / ‖y‖²,

where nMSE₁ represents the normalized error energy of the estimated wavelet, while nMSE₂ represents the normalized error of the noiseless data reconstruction. A comparison of nMSE₁ and nMSE₂ for the three deconvolution algorithms is shown in Figure 7. The results show the improvement in the performance indices for the Bayesian compressive sensing algorithm compared with the standard Bayesian algorithm and the ML algorithm. Although all algorithms show larger errors as the SNR decreases, the proposed Bayesian compressive sensing algorithm, while also affected by noise, remains more reliable than the other algorithms. Moreover, Bayesian compressive sensing behaves much better when there is almost no noise, because it uses the signal's prior knowledge and the linear wavelet transform.
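The SNR and nMSE criteria used above can be computed as in the following sketch; the definitions follow the textual descriptions in this section, and any normalization details of the original figures that differ are not captured here:

```python
import numpy as np

def snr_db(wavelet, noise_var):
    """SNR = 10 log10(P_s / sigma_n^2), with P_s the mean wavelet power."""
    p_s = np.mean(np.asarray(wavelet) ** 2)
    return 10.0 * np.log10(p_s / noise_var)

def nmse(estimate, truth):
    """Normalized MSE: ||estimate - truth||^2 / ||truth||^2.

    Used as nMSE_1 on the estimated wavelet and nMSE_2 on the
    reconstructed noiseless data.
    """
    estimate, truth = np.asarray(estimate), np.asarray(truth)
    return np.sum((estimate - truth) ** 2) / np.sum(truth ** 2)
```

A perfect estimate gives nMSE = 0, and a unit-power wavelet with unit noise variance gives 0 dB, which makes the indices easy to sanity-check.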

Figure 1: Graphical model describing the data-generation process for the blind deconvolution problem.