WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

To address the low compression ratio and high communication energy consumption of wireless-network microseismic monitoring, this paper proposes a subsection compression algorithm based on the characteristics of microseismic signals and on compressed sensing (CS) theory applied in the transmission process. The algorithm segments the collected data according to the number of nonzero elements they contain; by reducing the number of possible combinations of nonzero-element positions within each segment, it improves the accuracy of signal reconstruction, while exploiting compressed sensing to achieve a high compression ratio. Experimental results show that, with the quantum chaos immune clone reconstruction algorithm (Q-CSDR) as the reconstruction algorithm, signals with sparsity higher than 40 can be compressed at a compression ratio above 0.4 with a mean square error below 0.01, prolonging the network lifetime by a factor of 2.


Introduction
Many compression algorithms exist for wireless sensor networks. The distributed wavelet compression (DWC) algorithm [1,2] adapts wavelet-based data compression to the sensor setting. Its advantages are good compression performance and small compression error, but the algorithm is complex, the computational energy cost is large, and the heavy communication between neighboring nodes causes excessive power consumption at the sensor nodes, shortening the network lifetime. Swinging Door Trending (SDT) [3,4] is simple, computationally cheap, and fast, so it runs well on wireless sensor nodes, but its compression ratio is relatively low. Because the monitored data change slowly and by small amounts, SDT performs well in traditional cultural-relic monitoring systems; however, the performance of such linear compression algorithms is greatly degraded during intrusion detection, where the signal amplitude changes quickly and by large amounts.
Compressed sensing theory [5-7] has been one of the hot topics in data and signal processing in recent years and is widely used in wireless sensor networks. By breaking through the restrictions of the Nyquist sampling theorem and Shannon theory, compressed sensing can recover a high-quality original signal from far less data than classical sampling requires. Data reconstruction is an important part of the process; the key question is how to recover the high-dimensional data from the known low-dimensional data to the greatest possible degree. Current compressed sensing reconstruction algorithms include the gradient projection algorithm [8], the orthogonal matching pursuit (OMP) algorithm [9], the regularized orthogonal matching pursuit algorithm [10], pursuit-based methods [11], and the quantum-chaos immune-clone data reconstruction algorithm Q-CSDR [12].
Current reconstruction algorithms perform well at low sparsity, but for signals of higher sparsity the reconstruction accuracy and algorithm performance drop sharply. To solve these problems, by analyzing compressed sensing reconstruction algorithms and the characteristics of microseismic signals, the author proposes a subsection compression algorithm for microseismic signals based on compressed sensing theory.

Algorithms
The goal of reconstruction is to recover the original information as faithfully as possible from the sparse linear measurements; this is one of the most critical operations in the compressed sensing framework. The original signal can be reconstructed by solving the inverse problem y = Θc (1) for the sparse vector c and then obtaining the reconstructed signal x̂ = Ψc. Existing reconstruction methods fall into three categories.
ℓ2-Norm Minimization Reconstruction.
For a vector c, the ℓ2-norm is defined as ‖c‖2 = (Σi ci²)^(1/2) (2). The ℓ2-norm minimization problem ĉ = arg min ‖c‖2, s.t. y = Θc (3) has the easily obtained closed-form solution ĉ = ΘT(ΘΘT)−1 y (4). In theory this approach involves only matrix multiplication and is very simple, but it does not yield a sparse solution in the calculation process: the analytical solution ĉ contains many nonzero elements. Therefore, this method has little practical value.
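The dense behavior of the minimum-ℓ2-norm solution can be seen numerically; a minimal NumPy sketch (sizes, seed, and test signal are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined system y = Theta c with M < N (illustrative sizes).
M, N = 20, 50
Theta = rng.standard_normal((M, N))
c_true = np.zeros(N)
c_true[[3, 11, 27, 44]] = [1.0, -2.0, 0.5, 1.5]   # 4-sparse original
y = Theta @ c_true

# Minimum-l2-norm solution: c_hat = Theta^T (Theta Theta^T)^{-1} y.
c_hat = Theta.T @ np.linalg.solve(Theta @ Theta.T, y)

print(np.allclose(Theta @ c_hat, y))            # measurements are satisfied
print(np.count_nonzero(np.abs(c_hat) > 1e-8))   # but the solution is dense
```

The second printed count is far above 4: the least-norm solution fits the measurements exactly but spreads energy over almost every coordinate, which is why this method is dismissed in the text.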

ℓ0-Norm Minimization Reconstruction.
The ℓ0-norm addresses the problem that the analytical ℓ2-norm minimization solution contains too many nonzero elements. The ℓ0-norm differs from conventional norms in that its value equals the number of nonzero elements of the vector; for a K-sparse signal x, its ℓ0-norm is K. Minimizing the number of nonzero elements while solving the underdetermined linear equations gives the optimization problem ĉ = arg min ‖c‖0, s.t. y = Θc, Θ = ΦΨ (5).
In the actual reconstruction process a certain error is produced, so formula (5) can also be expressed as ĉ = arg min ‖c‖0, s.t. ‖y − Θc‖2 ≤ ε (6), where ε is a small constant. This kind of problem is NP-complete and numerically unstable: it requires exhaustively searching the C(N, K) possible combinations of positions of the nonzero elements of the sparse vector c.
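The exhaustive ℓ0 search can be sketched directly for a small instance where the C(N, K) enumeration is still feasible (dimensions, seed, and test signal are illustrative):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Small instance so the C(N, K) exhaustive search stays tractable.
N, K, M = 12, 2, 6
Theta = rng.standard_normal((M, N))
c_true = np.zeros(N)
c_true[[3, 7]] = [1.5, -2.0]
y = Theta @ c_true

# l0 reconstruction by brute force: try every K-element support,
# solve least squares on it, keep the support with smallest residual.
best_support, best_err = None, np.inf
for support in combinations(range(N), K):
    sub = Theta[:, support]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    err = np.linalg.norm(y - sub @ coef)
    if err < best_err:
        best_support, best_err = support, err

print(best_support)  # recovers the true support (3, 7)
```

Even at N = 12, K = 2 this loop visits 66 candidate supports; the combinatorial blow-up for realistic N is exactly the NP-completeness the text refers to.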

ℓ1-Norm Minimization Reconstruction.
Candès pointed out that, provided the condition M ≥ cK log2(N/K + 1) is met and the observation matrix has independent identically distributed Gaussian entries, the ℓ0-norm problem can be transformed into the ℓ1-norm problem ĉ = arg min ‖c‖1, s.t. y = Θc (7), which exactly reconstructs sparse signals and approximates compressible signals with high probability. Allowing for a certain error, the problem to be solved becomes ĉ = arg min ‖c‖1, s.t. ‖y − Θc‖2 ≤ ε (8). ℓ1-norm minimization translates the nonconvex problem into a convex programming problem whose solution process is simple: finding the minimum-ℓ1 solution can be expressed as a linear programming problem. However, ℓ1-norm reconstruction has high computational complexity, on the order of O(N³).
Most existing compressed sensing reconstruction algorithms are based on the three problems above. OMP solves the ℓ0-norm problem; its core combines a greedy strategy with iteration over the column vectors of the sensing matrix Θ: at each step the column with maximum correlation to the current residual is selected, the correlated component is subtracted from the observation, and the process repeats until the known sparsity K is reached. Q-CSDR solves the ℓ1-norm problem; in essence it is still a greedy algorithm, whose core is to exploit the optimization capability of a quantum immune clone algorithm with formula (8) as the objective function, generating the population with quantum theory, evolving it by immune cloning, and continually searching for the best individual in the population.
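The greedy OMP loop just described can be sketched as follows; this is a generic textbook OMP, not the paper's Q-CSDR implementation, and the dimensions, seed, and test signal are illustrative:

```python
import numpy as np

def omp(Theta, y, K):
    """Orthogonal Matching Pursuit: at each step pick the column of
    Theta most correlated with the residual, least-squares re-fit on
    the selected support, and repeat until K atoms are chosen."""
    M, N = Theta.shape
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(K):
        idx = int(np.argmax(np.abs(Theta.T @ residual)))  # max correlation
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        residual = y - Theta[:, support] @ coef           # project out support
    c_hat = np.zeros(N)
    c_hat[support] = coef
    return c_hat

rng = np.random.default_rng(0)
M, N, K = 30, 60, 3
Theta = rng.standard_normal((M, N))
Theta /= np.linalg.norm(Theta, axis=0)        # unit-norm columns
c_true = np.zeros(N)
c_true[[5, 17, 40]] = [2.0, -1.0, 1.5]
y = Theta @ c_true

c_hat = omp(Theta, y, K)
print(sorted(np.nonzero(c_hat)[0]))  # selected support
```

Note that OMP needs the sparsity K as an input, which is exactly why the segmentation scheme later in the paper cares about making per-segment sparsity known.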

Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

Microseismic Signal Characteristics and Thinning Methods.
Sparsity is the precondition of compressed sensing theory, so before thinning the data we first analyze the characteristics of the microseismic data. A conventional microseismic signal from the sensor is shown in Figure 1.
As can be seen, the microseismic signal contains a number of peak signals together with many lower-amplitude vibration signals. The peaks are the microseismic signals of activity in the area monitored by the sensor and are the signals needed for analysis and detection; the low-amplitude vibrations come from activity far from the monitored area, are not needed for analysis and detection, and constitute redundant data mixed into the detected signal, which can be removed during preprocessing. Using this feature, we can eliminate the redundant data from the original seismic signal by replacing those time-domain samples with amplitude 0, completing the thinning of the original signal. This not only sparsifies the signal but also retains as much of the original signal information as possible. When there is no target activity around the sensor, the collected data, shown in Figure 2, are invalid. It can be seen that when there is no activity around the sensor the amplitude of the signal is generally around 45. To identify and remove the invalid data, we set the redundancy threshold to 2-3 times the average of the invalid data, about 140.
After thinning, the signal is shown in Figure 3.
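The thinning step can be sketched as follows; `invalid_mean` and `factor` are illustrative parameters standing in for the measured background level (~45) and the 2-3x redundancy threshold described above:

```python
import numpy as np

def thin_signal(samples, invalid_mean, factor=3.0):
    """Time-domain thinning: samples whose amplitude stays below the
    redundancy threshold (a small multiple of the mean amplitude
    recorded with no activity) are treated as background vibration
    and zeroed, keeping only the peak (event) samples."""
    threshold = factor * invalid_mean
    thinned = np.where(np.abs(samples) >= threshold, samples, 0.0)
    return thinned, threshold

# Background level ~45 as in the text; threshold = 3 * 45 = 135.
signal = np.array([40.0, 52.0, 300.0, 41.0, 250.0, 38.0, 47.0])
thinned, thr = thin_signal(signal, invalid_mean=45.0, factor=3.0)
print(thr)      # 135.0
print(thinned)  # only the two peaks survive; the rest is zeroed
```

The zeroed background is what makes the signal sparse enough for the compressed sensing stage that follows.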

Improved Method.
As described in Section 2, a compressed-sensing reconstruction algorithm in essence searches a population of candidate solutions for the original signal; its role is to reconstruct the data quickly and accurately, and the problem is NP-hard by nature. When the number of possible position combinations of the nonzero elements is too large, the solution space is huge, the complexity of the algorithm becomes too high, and the efficiency of the optimization drops. The theoretical analysis is as follows.
Suppose the original signal x has length N and its sparsity gives K nonzero elements; reconstruction requires finding the positions of the K nonzero elements, assumed uniformly distributed over the signal. By elementary probability, the probability that an exhaustive guess hits the correct support is P = 1 / C(N, K) (9). As formula (9) shows, this probability grows as the signal length N shrinks. Suppose the original signal is divided into m sections of length n, with ki nonzero elements in the i-th section; the probability of reconstructing the original signal becomes P′ = 1 / ∏i C(n, ki) (10). Comparing the two formulas, since ∏i C(n, ki) ≤ C(N, K), the reconstruction probability after segmented compression is significantly higher than before segmentation. Therefore, the original data can be compressed section by section to improve the probability of data reconstruction. Considering the characteristics of compression and reconstruction under both blind sparsity and known sparsity, this paper presents a microseismic signal subsection compression algorithm based on compressed sensing theory.
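The support-count comparison behind this argument is easy to check numerically; the segment sizes below are illustrative (six equal segments with two nonzeros each):

```python
from math import comb, prod

# Whole signal: N samples with K nonzero entries -> C(N, K) candidate supports.
N, K = 30, 12
whole = comb(N, K)

# Segmented: m segments of length n with k nonzeros each -> product of C(n, k).
m, n, k = 6, 5, 2
segmented = prod(comb(n, k) for _ in range(m))  # = C(5, 2)**6

print(whole)      # 86493225
print(segmented)  # 1000000
```

The per-segment search space is almost two orders of magnitude smaller here, which is the source of the higher reconstruction probability claimed in the text.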

Algorithm Steps.
This algorithm consists of a signal segmentation process and a data compression process. The purpose of segmentation is to reduce the exhaustive search performed by the reconstruction algorithm and improve the probability of reconstructing the original signal. The compression process applies compressed sensing to each segment, thereby reducing the communication energy consumption of the network and achieving a high compression ratio for the microseismic signal.
The algorithm steps are as follows:
(1) Parameter initialization: this step sets the maximum number of samples max, the number of nonzero elements in the current segment temp, the number of segments m, and the segmentation threshold s.
(2) Data thinning: according to the redundancy threshold, thin the max collected samples in the time domain.
(3) Signal segmentation: when temp = s, the next nonzero data point starts a new segment; go to step (4). At the same time the current segment count m is increased by 1 and temp is reset to 0.
(4) Data compression: according to the data length before compression, the observation matrix is selected from the dictionary stored on the sensor node and the segment is compressed.
(5) Packing and sending: each segment is packed into a data frame and sent to the remote server for data reconstruction; go to step (2).
For example, assume the original signal x is 0 0 0 1 1 1 0 1 0 1 0 1 0 0 1 0 0 0 1 0 1 1 0 1 0 0 1 0 0 0, with length 30. When s = 2, the data are divided into six segments. The feature of this method is that, except for the first subsignal, every segment starts with a nonzero element. Moreover, whether or not the sparsity of the original signal is known in advance, the sparsity of each segmented signal is known. Therefore, the probability of reconstructing the signal increases substantially. Table 1 shows the number of combinations of nonzero-element positions before and after segmentation.
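The segmentation rule of the worked example can be sketched as follows (a minimal reading of step (3): a segment is closed just before the nonzero element that would exceed the threshold s, so every later segment begins on a nonzero):

```python
def segment_by_nonzeros(signal, s):
    """Split `signal` so each segment holds s nonzero elements,
    closing the segment just before the next nonzero once the count
    reaches s -- hence every segment after the first starts on a nonzero."""
    segments, current, count = [], [], 0
    for v in signal:
        if v != 0 and count == s:      # s nonzeros seen: new segment starts here
            segments.append(current)
            current, count = [], 0
        current.append(v)
        if v != 0:
            count += 1
    if current:
        segments.append(current)
    return segments

x = [0,0,0,1,1,1,0,1,0,1,0,1,0,0,1,0,0,0,1,0,1,1,0,1,0,0,1,0,0,0]
segs = segment_by_nonzeros(x, s=2)
print(len(segs))  # 6 segments, matching the worked example
```

Each resulting segment carries exactly s = 2 nonzero elements, so the per-segment sparsity is known to the reconstruction algorithm even when the whole-signal sparsity is blind.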
As the data in Table 1 show, before segmentation the number of combined positions of the nonzero elements is huge, especially under blind sparsity: the number of combinations for the original signal can reach as much as 10^9. This also explains the low reconstruction probability and poor reconstruction accuracy under the original conditions. Table 1 also shows that after segmented compression the number of nonzero-position combinations decreases greatly, so the reconstruction probability increases substantially. According to the segment lengths, the remote server reconstructs each data segment.
The frame packet format is shown in Table 2. The data segment format is shown in Table 3.

Energy Model.
To further analyze the impact of data compression on the energy consumption of wireless sensor networks, the communication and computation energy of the network is compared under the same conditions with and without the compression algorithm. A first-order radio energy model is used here. For sending l bits of data over distance d to a node or the base station, the transmitting and receiving energies are defined as E_trans(l, d) = E_elec · l + ε_amp · l · d² (11) and E_recv(l) = E_elec · l (12), where E_trans and E_recv are the node's sending and receiving energy consumption, ε_amp is the channel transmission (amplifier) energy, and E_elec is the electronics energy for sending a single bit. Typically, E_elec = 50 nJ/bit and ε_amp = 100 pJ/(bit·m²).
Assume each node collects l bits of data and the distance from the node to the base station is d. Taking the communication overhead into account, the average data compression ratio δ is defined by formulas (13) and (14): δi = 1 − (Li,com + L_cost) / Li,incom (13) and δ = (1/n) Σi δi (14), where Li,com and Li,incom are the data lengths after and before compression, respectively, and L_cost is the communication overhead.
When the network does not use data compression, the total energy consumption is E_incom = n (E_elec · l + ε_amp · l · d²) (15). Assuming a compression algorithm with average compression ratio δ is used, the compressed data size is (1 − δ) times the original size, and the total energy consumption of the link is E_com = n (1 − δ)(E_elec · l + ε_amp · l · d²) (16). From formulas (15) and (16), the ratio η of the energy consumption with and without compression is η = E_com / E_incom = 1 − δ. That is, when computation energy and other factors are ignored, the larger the average compression ratio, the less data are communicated and the smaller the communication energy consumption. Therefore, the compression ratio has a significant impact on energy consumption.
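A quick numerical check of the first-order radio model and the 1 − δ energy ratio from formulas (15) and (16); node count, packet size, and distance are illustrative:

```python
# First-order radio model; constants from the text.
E_ELEC = 50e-9        # J/bit, electronics energy per bit
EPS_AMP = 100e-12     # J/(bit*m^2), amplifier energy

def e_trans(l_bits, d_m):
    """Energy to transmit l bits over distance d: E_elec*l + eps*l*d^2."""
    return E_ELEC * l_bits + EPS_AMP * l_bits * d_m ** 2

def e_recv(l_bits):
    """Energy to receive l bits: E_elec*l."""
    return E_ELEC * l_bits

# Each node sends l bits directly to the base station at distance d.
l, d, n_nodes = 4000, 80, 10
e_uncompressed = n_nodes * e_trans(l, d)

# With average compression ratio delta (fraction of traffic saved),
# only (1 - delta) of the bits are sent, so energy scales by 1 - delta.
delta = 0.5
e_compressed = n_nodes * e_trans((1 - delta) * l, d)

print(e_compressed / e_uncompressed)  # 0.5, i.e. exactly 1 - delta
```

Because E_trans is linear in the bit count, the energy ratio equals 1 − δ regardless of the distance or node count, which is the point the derivation makes.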
With the average compression ratio fixed at 0.5 and microseismic signals collected by the sensor as the data source, the performance of the proposed segmentation compression algorithm was compared with the ISDT (improved segmented linearization) algorithm and the distributed wavelet compression algorithm. The three algorithms were compiled and run on Crossbow nodes; the results are shown in Table 5.
As Table 5 shows, under the same conditions the performance of the proposed algorithm is similar to that of ISDT, with an instruction cycle count that is 77.7% of ISDT's. The DWC instruction cycle count is far higher than the other two, about 3.9 times that of the compressed-sensing-based algorithm. Under the same compression ratio, the computation energy of the proposed algorithm is 22.3% less than ISDT's and 75.8% less than DWC's.

Algorithm Performance Analysis.
The reconstruction mean square error σ is defined as σ = (1/N) Σi (x̂i − xi)² (17). The results are shown in Figure 4. The first row of the figure is the original microseismic signal from the sensor; the second row is the signal reconstructed by the Q-CSDR algorithm after segmented compressed sensing compression; the third row is the reconstruction error, the difference between the original and reconstructed signals. As Figure 4 shows, the recovery accuracy of the signal is strongly affected by the effective compression ratio and the number of nonzero elements s per data segment, but less affected by the segment length.
When s = 2, the original data are divided into 22 segments, with a signal compression ratio of 200 : 51, almost 4 : 1. Considering the communication overhead, each data packet needs an additional 2 bytes per segment for the original length, sparsity, total number of segments, and serial number information, so the actual data compression ratio is 190 : 400 = 0.475. The reconstruction MSE is less than 0.01.
As shown in Figure 5, when s = 3 the original data are divided into 15 segments, with a signal compression ratio of 200 : 59, about 3 : 1. Considering the communication overhead of 2 additional bytes per segment for the original length and sparsity information, the actual packet-length compression ratio is 178 : 400 = 0.445. The reconstruction MSE is 0.02.
As shown in Figure 6, when s = 4 the original data are divided into 12 segments, with a signal compression ratio of 200 : 66, about 3 : 1. Considering the communication overhead of 2 additional bytes per segment for the original length and sparsity information, the actual packet-length compression ratio is 180 : 400 = 0.45. The reconstruction MSE is 0.2.
These results show that a larger segmentation threshold s leads to lower recovery accuracy. From the transmission point of view, the larger the value of s, the fewer segments there are and the lower the communication overhead, but the probability and accuracy of data reconstruction also decrease; the smaller the value of s, the higher the reconstruction probability and recovery precision, at the cost of more communication overhead.

Wireless Sensor Network Data Compression Algorithm Comparison.
This section compares the data recovery performance of the proposed algorithm with the ISDT algorithm. SDT is a linear-trend compression algorithm: given a threshold E, it finds the longest straight-line trend in the data and replaces a series of consecutive collected points with the line determined by the trend's start and end points. SDT places two "pivots" at a vertical distance E above and below the starting data point; the lines between the pivots and subsequent data points are called "doors." When the algorithm is initialized the two doors are closed; as more data are collected into the series, the doors open according to the actual situation. The width of a door is not fixed, and once a door has opened it cannot close. When the two doors become parallel, the current compression interval ends and a new round of compression begins. The performance of SDT is governed solely by the threshold E, which is chosen from experience and experimentation and cannot be changed during the compression process; once the threshold is fixed, compressing volatile signal data gives unsatisfactory results. ISDT improves on this shortcoming of SDT: it adjusts the threshold within a range in real time, continuously and adaptively, according to the fluctuation of the data, so that good compression is maintained throughout. ISDT judges the data fluctuation from the ratio between neighboring data points and updates the threshold value continuously according to a formula. From this description, ISDT's compression performance is good, but the compression process is tightly coupled to data collection: the algorithm must not only compute the door threshold but also adjust it. Both algorithms compress well when the data fluctuate little in amplitude, but their compression is poor on volatile data, and because the threshold must be updated in real time their computation cost increases instead. Concrete comparative experimental results are shown in Figure 7.
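A minimal sketch of the SDT "doors" mechanism described above (the threshold and test data are illustrative, and ISDT's adaptive threshold update is omitted):

```python
def sdt_compress(points, E):
    """Minimal Swinging Door Trending sketch. `points` is a list of
    (t, y) samples, E the door threshold. Keeps the point that ends
    each linear trend; a trend closes when the two doors (slopes from
    the pivots at y0 +/- E) become parallel or cross."""
    kept = [points[0]]
    t0, y0 = points[0]
    slope_up, slope_low = float("-inf"), float("inf")
    prev = points[0]
    for t, y in points[1:]:
        slope_up = max(slope_up, (y - (y0 + E)) / (t - t0))
        slope_low = min(slope_low, (y - (y0 - E)) / (t - t0))
        if slope_up > slope_low:            # doors opened past parallel
            kept.append(prev)               # archive last point of the trend
            t0, y0 = prev                   # restart the doors from it
            slope_up = (y - (y0 + E)) / (t - t0)
            slope_low = (y - (y0 - E)) / (t - t0)
        prev = (t, y)
    kept.append(points[-1])
    return kept

# A ramp followed by a flat run compresses to a few trend endpoints.
data = [(t, float(t)) for t in range(10)] + [(t, 9.0) for t in range(10, 20)]
print(len(sdt_compress(data, E=0.5)))  # far fewer than the 20 input points
```

The sketch shows the two properties the text emphasizes: the doors only ever open (the slope bounds are monotone within a trend), and a fixed E cannot adapt when the signal's volatility changes.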
We can conclude that the performance of the proposed algorithm is superior to the comparison algorithms: the mean square error remains stable across various compression ratios, because segmentation improves the probability of signal reconstruction. When s = 3 or s = 4 and the compression ratio is greater than 0.8, the proposed algorithm performs worse than ISDT; this is due to the characteristics of the data reconstruction algorithm, since the quantum clone immune algorithm easily falls into local optima during optimization. When the compression ratio is less than 0.2, the performance of the algorithm declines sharply, mainly because too much raw information is lost in the data compression process. When the compression ratio is between 0.2 and 0.8, the performance of the proposed algorithm is better than the other two algorithms overall.

Figure 6: The result of data reconstruction when s = 4.

Figure 7: Comparison of the performance of the algorithms.

Figure 8: Comparison of experimental results of similar algorithms.
Figure 8 shows the comparison of performance with different compressed sensing data reconstruction algorithms.

Table 1: The number of combinations of nonzero-element positions in the signal segments before and after segmentation.

Table 5: Instruction cycle statistics for the three compression algorithms.