Asymptotic Theory in Model Diagnostic for General Multivariate Spatial Regression

We establish an asymptotic approach for checking the appropriateness of an assumed multivariate spatial regression model by considering the set-indexed partial sums process of the least squares residuals of the vector of observations. In this work we assume that the components of the observation, whose mean is generated by a certain basis, are correlated; for this reason the derivation of the results requires additional effort. To obtain the limit process we apply a multivariate analog of the well-known Prohorov theorem. To test the hypothesis we define tests given by the Kolmogorov-Smirnov (KS) and Cramér-von Mises (CvM) functionals of the partial sums processes. The probability distribution of the test statistics is calibrated by a bootstrap resampling technique based on the residuals. We study the finite sample performance of the KS and CvM tests by simulation and discuss the application of the proposed test procedure to real data.


Introduction
As mentioned in the literature on model checks for multivariate regression, the appropriateness of an assumed model is mostly verified by analyzing the least squares residuals of the observations; see, for example, Zellner [1], Christensen [2], pp. 1-22, Anderson [3], pp. 187-191, and Johnson and Wichern [4], pp. 395-398. A common feature of these works is the comparison between the length of the matrix of the residuals under the null hypothesis and that of the residuals under a proposed alternative.
Instead of considering the residuals directly, MacNeill [5, 6] proposed a method for model checks in univariate polynomial regression based on the partial sums process of the residuals. These approaches were generalized to the spatial case by MacNeill and Jandhyala [7] for ordinary partial sums and by Xie and MacNeill [8] for the set-indexed partial sums process of the residuals. Bischoff and Somayasa [9] and Somayasa et al. [10] derived the limit process in the spatial case by a geometric method generalizing a univariate approach due to Bischoff [11, 12]. These results can be used to establish asymptotic tests of Cramér-von Mises and Kolmogorov-Smirnov type for model checks and change-point problems. Model checks for univariate regression with random design, using the empirical process of the explanatory variable marked by the residuals, were established in Stute [13] and Stute et al. [14]. In the papers mentioned above the limit processes were explicitly expressed as complicated functions of the univariate Brownian motion (sheet).
The purpose of the present article is to study the application of the set-indexed partial sums technique to simultaneously check the goodness-of-fit of a multivariate spatial linear regression model defined on a high-dimensional compact rectangle. In contrast to the normal multivariate model studied in the standard literature, such as Christensen [2], Anderson [3], and Johnson and Wichern [4], or in references on model selection such as Bedrick and Tsai [15] and Fujikoshi and Satoh [16], in this paper we consider a multivariate regression model in which the components of the mean of the response vector are assumed to lie in different spaces and the underlying distribution of the vector of random errors is unknown.

International Journal of Mathematics and Mathematical Sciences
To see the problem in more detail, let n_1 ≥ 1, …, n_d ≥ 1 be fixed. Let a p-dimensional random vector Z := (Z_i)_{i=1}^p be observed independently over an experimental design given by a regular lattice {(j_1/n_1, …, j_d/n_d) : 1 ≤ j_k ≤ n_k, k = 1, …, d}, where I^d := ∏_{k=1}^d [0, 1] is the d-dimensional unit cube. Let g := (g_i)_{i=1}^p be the true but unknown R^p-valued regression function on I^d which represents the mean function of the observations. Let Z_{j_1⋯j_d} := (Z_{i,j_1⋯j_d})_{i=1}^p and g_{j_1⋯j_d} := (g_{i,j_1⋯j_d})_{i=1}^p be the observation and the corresponding mean at the experimental condition (j_1/n_1, …, j_d/n_d). Under the null hypothesis H_0 we assume that Z_{j_1⋯j_d} follows a multivariate linear model. That is, we assume the model (6), where, for i = 1, …, p, f_i := (f_{ij})_{j=1}^{d_i} is a d_i-dimensional vector of known regression functions whose components are square integrable with respect to the Lebesgue measure λ_{I^d} on I^d, and where (6) holds for the arrays A_{n_1⋯n_d} and B_{n_1⋯n_d} ∈ R^{n_1×⋯×n_d}. Next we define the set-indexed partial sums operator. Let A be the family of convex subsets of I^d, and let ρ_{λ_{I^d}} be the Lebesgue pseudometric on A defined by ρ_{λ_{I^d}}(A_1, A_2) := λ_{I^d}(A_1 Δ A_2), for A_1, A_2 ∈ A. Let C(A) be the set of continuous functions on A under ρ_{λ_{I^d}}. We embed the array of residuals R_{n_1⋯n_d} into a p-dimensional stochastic process indexed by A by using the component-wise set-indexed partial sums operator, so that (8) holds for any A ∈ A, where, for 1 ≤ j_1 ≤ n_1, …, 1 ≤ j_d ≤ n_d, C_{j_1⋯j_d} := ∏_{k=1}^d ((j_k − 1)/n_k, j_k/n_k]. We call this process the p-dimensional set-indexed least squares residual partial sums process. The space C^p(A) is furnished with the uniform topology induced by the metric ρ defined for u := (u_i)_{i=1}^p and w := (w_i)_{i=1}^p ∈ C^p(A). We notice that, in the works of Bischoff and Somayasa [9], Bischoff and Gegg [17], and Somayasa and Adhi Wibawa [18], the limit process of the partial sums process of the least squares residuals has been investigated by applying the existing geometric method of Bischoff [11, 12]. However, that method is no longer applicable for deriving the limit process of V_{n_1⋯n_d}(R_{n_1⋯n_d}) when the dimension of W_{i,n_1⋯n_d} varies. Therefore, in this work, we adopt the vectorial analog of Prohorov's theorem (see, e.g., Theorem 6.1 in Billingsley [19]) to obtain the limit process. For our result we need to extend the ordinary partial sums formula to the p-dimensional case defined on I^d as follows. Let S := {1, 2, …, d} and S_m be the set of all m-combinations of the set S, for m = 1, …, d. For a chosen value of m, we denote the ℓth element of S_m by an m-tuple. For example, let S = {1, 2, 3}. Then, for m = 1, we denote the elements of S_1 by 1(s_1) := 1, 2(s_1) := 2, and 3(s_1) := 3. In a similar way, we denote the elements of S_2, which consists of {(1, 2), (1, 3), (2, 3)}, respectively, by (1(s_1), 1(s_2)) := (1, 2), (2(s_1), 2(s_2)) := (1, 3), and (3(s_1), 3(s_2)) := (2, 3). Finally, the single element of S_3 = {(1, 2, 3)} is written as (1(s_1), 1(s_2), 1(s_3)). Hence the d-dimensional ordinary partial sums operator transforms any d-dimensional array as in (11), for every t := (t_1, …, t_d)^⊤ ∈ I^d. It is clear that the partial sums process of the residuals obtained using (11) is a special case of (8), since for every t := (t_1, …, t_d)^⊤ ∈ I^d a corresponding identity holds.

It is worth noting that the extension of the study from the univariate to the multivariate model, and also the expansion of the dimension of the lattice points, is strongly motivated by prediction problems in the mining industry and the geosciences. As an example, Tahir [20] recently presented data provided by PT Antam Tbk (a mining company in Southeast Sulawesi). The data consist of joint measurements of the percentages of several chemical elements and substances, such as Ni, Co, Fe, MgO, SiO2, and CaO, recorded at every point of a three-dimensional lattice defined over the exploration region of the company. Hence, because of the inherent correlation among the variables, the statistical analysis of the involved variables must be conducted simultaneously.
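For the two-dimensional case (d = 2) and the rectangle index family {[0, t1] × [0, t2]} used later in the paper for computation, the component-wise residual partial sums process can be sketched numerically as follows. This is a minimal illustration of the construction under stated assumptions, not the paper's implementation; the scaling 1/sqrt(n1 n2) and all names are illustrative.

```python
import numpy as np

def partial_sums(R):
    """Rectangle-indexed partial sums of a p-variate residual array.

    R has shape (n1, n2, p); entry [j, k, :] is the residual vector at the
    lattice point ((j + 1) / n1, (k + 1) / n2).  The process evaluated at
    the rectangle [0, (j + 1) / n1] x [0, (k + 1) / n2] is the cumulative
    sum of all residuals inside that rectangle, scaled by 1 / sqrt(n1 * n2).
    """
    n1, n2, _ = R.shape
    S = R.cumsum(axis=0).cumsum(axis=1)   # double cumulative sum over the grid
    return S / np.sqrt(n1 * n2)

# toy example: 3-variate residuals on a 10 x 12 regular lattice
rng = np.random.default_rng(0)
R = rng.standard_normal((10, 12, 3))
V = partial_sums(R)
```

At the full rectangle I^2 the process value is simply the scaled grand sum of all residuals, which is the anchor point for the KS and CvM computations below.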
Many methods have been proposed in the literature for testing H_0. Most of them have been derived for the case W_1 = ⋯ = W_p under normally distributed random errors. Generalized likelihood ratio tests, leading to Wilks' lambda statistic or variants of it, can be found in Zellner [1], Christensen [2], pp. 1-22, Anderson [3], pp. 187-191, and Johnson and Wichern [4], pp. 395-398. Mardia and Goodall [21] derived the maximum likelihood estimation procedure for the parameters of the general normal spatial multivariate model with stationary observations. This approach can be straightforwardly extended to obtain the associated likelihood ratio test for model checks. Unfortunately, in practice, especially when dealing with mining data, the normal distribution is sometimes found to be unsuitable for describing the distribution of the observations, so that the test procedures mentioned above are no longer applicable. The asymptotic method established in Arnold [22] for multivariate regression with W_1 = ⋯ = W_p can be generalized so that it is valid for the general model. Although this seems a natural topic in statistics, we could not find literature in which it has been studied.
The rest of the paper is organized as follows. In Section 2 we show that when H_0 is true, Σ^{−1/2} V_{n_1⋯n_d}(R_{n_1⋯n_d}) converges weakly to a projection of the p-dimensional set-indexed Brownian sheet. The limit process is shown to be useful for testing H_0 asymptotically based on the Kolmogorov-Smirnov (KS-test) and Cramér-von Mises (CvM-test) functionals of the set-indexed p-dimensional least squares residual partial sums process. For both tests, H_0 is rejected for large values of the KS and CvM statistics, respectively. Under a localized alternative the above sequence of random processes converges weakly to the same limit process with an additional deterministic trend (see Section 3). In Section 4 we define a consistent estimator for Σ. In Section 5 we investigate a residual-based bootstrap method for the calibration of the tests. A Monte Carlo simulation studying the finite sample behavior of the KS and CvM tests is reported in Section 6. An application of the test procedure to real data is presented in Section 7. The paper closes in Section 8 with conclusions and some remarks on future research. Auxiliary results needed for deriving the limit process are presented in the Appendix. We note that all convergence results derived throughout this paper hold as n_1, …, n_d simultaneously go to infinity, that is, as n_k → ∞ for all k = 1, …, d; otherwise this will be stated explicitly. Convergence in distribution and convergence in probability are denoted by →_D and →_P, respectively.
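Given a partial sums field of the residuals, KS- and CvM-type statistics of the kind used for testing H_0 can be sketched as the supremum, respectively the average, of the (squared) Euclidean norm over the rectangle index family. The standardization by Σ^{−1/2} is omitted here for brevity, and all names are illustrative assumptions rather than the paper's definitions.

```python
import numpy as np

def ks_cvm(V):
    """KS- and CvM-type functionals of a p-variate partial sums field V of
    shape (n1, n2, p), indexed by the rectangles [0, t1] x [0, t2]:

      KS  = max over rectangles of the Euclidean norm of V,
      CvM = lattice average of the squared Euclidean norm
            (a Riemann approximation of the integral over the index family).
    """
    norms_sq = (V ** 2).sum(axis=-1)      # squared norm at each rectangle
    return np.sqrt(norms_sq.max()), norms_sq.mean()

# toy field standing in for a scaled residual partial sums process
rng = np.random.default_rng(1)
V = rng.standard_normal((10, 12, 3)).cumsum(0).cumsum(1) / np.sqrt(10 * 12)
ks, cvm = ks_cvm(V)
```

H_0 would be rejected for large values of either statistic, with critical values obtained by simulation or by the bootstrap described later.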

The Limit of V_{n_1⋯n_d}(R_{n_1⋯n_d}) under H_0
Let B := {B(A) : A ∈ A} be the one-dimensional set-indexed Brownian sheet having sample paths in C(A). We refer the reader to Pyke [23], Bass and Pyke [24], and Alexander and Pyke [25] for the definition and the existence of B. Let us consider a subspace of C(A) which is closely related to B, furnished with the inner product and the norm defined below, where pr is a projector defined, for every u ∈ C(A) and A ∈ A, by the displayed formula. For t := (t_k)_{k=1}^d ∈ I^d, we write B(t) for B(∏_{k=1}^d [0, t_k]), and ∫ ⋅ dB stands for the integral in the sense of Riemann-Stieltjes.
Moreover, the limit process B_{0,f} is a centered Gaussian process with the covariance function K(A, B) := diag(λ_{I^d}(A ∩ B), …) as displayed above.

Proof. By applying the linearity of V_{n_1⋯n_d} and Lemma C.2, we have under H_0 the decomposition (21). It can be shown, by extending the uniform central limit theorem studied in Pyke [23], Bass and Pyke [24], and Alexander and Pyke [25] to its vectorial analog, that the first term of (21) converges weakly to B_p. Therefore we only need to show that the second term satisfies the stated weak convergence, where Z is a p-dimensional centered Gaussian process with the covariance matrix given above. By Prohorov's theorem it is sufficient to show that, for any finite collection of convex sets A_1, …, A_m in A and real numbers a_1, …, a_m, with m ≥ 1, the corresponding linear combination converges in distribution, where the left-hand side has a covariance that can be expressed elementwise. Let k and ℓ be fixed. Then, by a simultaneous application of the definition of pr_{W_{i,n_1⋯n_d}} H_i (see Lemma C.3), (11), the definition of the Riemann-Stieltjes integral (cf. Stroock [26], pp. 7-17), and the independence of the array {ε_{j_1⋯j_d} : 1 ≤ j_k ≤ n_k}, we obtain a further expression which, by recalling Lemma C.3, clearly converges. Hence the covariance matrix converges elementwise to a symmetric matrix that can be represented as Σ ⊙ D, for a matrix D defined elementwise for k, ℓ = 1, …, m. Thereby ⊙ denotes the Hadamard product defined, for example, in Magnus and Neudecker [27], pp. 53-54. It follows that the covariance converges to the general linear combination (29), which is the covariance of ∑_{k=1}^m a_k Z(A_k). Next we observe that, by applying the definition of V_{n_1⋯n_d} and the definition of the Riemann-Stieltjes integral, we can also write the linear combination as an array sum. Then, by the stochastic independence of the array of p-vectors of random errors, and since by the well-known bounded convergence theorem (cf. Athreya and Lahiri [28], pp. 57-58) the last term converges to zero, the Lindeberg condition is satisfied. Therefore, by the Lindeberg-Lévy multivariate central limit theorem (see, e.g., van der Vaart [29], p. 16), it can be concluded that ∑_{k=1}^m a_k Z(A_k) has the p-variate normal distribution with mean zero and covariance given by (29).
The tightness of Z_{n_1⋯n_d} can be shown as follows. By definition, Z_{n_1⋯n_d} can also be expressed as

The Limit of V_{n_1⋯n_d}(R_{n_1⋯n_d}) under the Localized Alternative
The test procedures derived above are consistent in the sense of Definition 11.1.3 in Lehmann and Romano [30]. That is, the probability of rejecting H_0 under the competing alternative converges to 1. As an immediate consequence, we cannot observe the performance of the tests as the model moves away from H_0. Therefore, to be able to investigate the behavior of the tests, we consider the localized model (35) defined as follows. When H_0 is true we obtain the same least squares residuals as in Section 1. Therefore, under Model (35), the test problem is not altered.
In the following theorem, we present the limit process of the p-dimensional set-indexed partial sums process of the residuals under H_1, associated with Model (35).
Proof. Considering the linearity of V_{n_1⋯n_d} and Lemma C.2, when H_1 is true we obtain the stated decomposition. Since, for i = 1, …, p, g_i is in BVV(I^d), it can be shown that the corresponding convergence holds for every A ∈ A. Also, the last two terms on the right-hand side of the preceding equation converge in distribution to B_{0,f} by Theorem 1. Thus it only remains to show the convergence of the projection term. The right-hand side of the last expression is obtained directly from the definition of the ordinary partial sums (11) for a fixed g ∈ ∏_{i=1}^p BVV(I^d). In Section 6 we investigate the empirical power functions of the KS and CvM tests by simulation.

Estimating the Population Covariance Matrix
If the covariance matrix Σ is unknown, as it usually is, the KS and CvM statistics cannot be used directly. Furthermore, let Z^{(n_1⋯n_d)}, g^{(n_1⋯n_d)}, and E^{(n_1⋯n_d)} be (n_1⋯n_d) × p-dimensional matrices whose ith columns are given by the corresponding column vectors, for i = 1, …, p. Then Model (35) can also be represented in matrix form, where, for u, v = 1, …, p, the covariance is as displayed, with I_{n_1⋯n_d} being the identity matrix in R^{(n_1⋯n_d)×(n_1⋯n_d)}. Associated with the subspace W_{i,n_1⋯n_d} we define the design matrix X_i as an element of R^{(n_1⋯n_d)×d_i} whose jth column is given by the displayed (n_1⋯n_d)-dimensional column vector. For the sake of brevity we write C(X_i) for the column space of X_i. We also define the column-wise projection of any matrix U^{(n_1⋯n_d)}. A reasonable estimator of the covariance matrix Σ, denoted by Σ̂_{n_1⋯n_d}, is defined by the displayed formula, where the projector constitutes the component-wise orthogonal projection onto the orthogonal complement of the product space ∏_{i=1}^p C(X_i). Zellner [1] and Arnold [22] investigated the consistency of Σ̂_{n_1⋯n_d} for Σ in the case of the multivariate regression model with W_1 = ⋯ = W_p. Some difficulties appear when the situation is extended to the case W_1 ≠ ⋯ ≠ W_p, since it involves the problem of finding the limit of matrices whose components are given by inner products of two vectors.

Proof. If H_0 is true, it can be easily shown that

For technical reasons we assume, without loss of generality, that X^{(n_1⋯n_d)}_i is an orthogonal matrix, for i = 1, …, p. Hence we further get the representation displayed above. Since the summands are independent and identically distributed random matrices with mean Σ, by the well-known weak law of large numbers we get the stated convergence. Note that in practice we consider the polynomial regression model. Hence, for every v = 1, …, p, the design matrix X^{(n_1⋯n_d)}_v satisfies the so-called Huber condition (cf. Pruscha [31], pp. 115-117). For this reason, for the remaining terms, we can immediately apply the technique proposed in Arnold [22] to show that X^{(n_1⋯n_d)⊤}_v E^{(n_1⋯n_d)}_u converges in distribution to a normal limit with mean zero, for all u, v = 1, …, p. Therefore, we finally get the stated component-wise convergence, where O_{p×p} is the p × p zero matrix.
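A minimal numerical sketch of a residual-based covariance estimator for components with different design spaces (W_1 ≠ W_2) might look as follows. The normalization by N, the function names, and the toy designs are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def componentwise_residuals(Z, X_list):
    """Least squares residuals when each component has its own design.

    Z is (N, p): the N = n1*...*nd stacked lattice observations of the p
    components.  X_list[i] is the (N, d_i) design matrix of component i;
    the i-th residual column is the projection of Z[:, i] onto the
    orthogonal complement of the column space of X_list[i].
    """
    R = np.empty_like(Z, dtype=float)
    for i, X in enumerate(X_list):
        beta, *_ = np.linalg.lstsq(X, Z[:, i], rcond=None)
        R[:, i] = Z[:, i] - X @ beta
    return R

def sigma_hat(R):
    """Residual-based estimator of the p x p covariance matrix."""
    return R.T @ R / R.shape[0]

# toy check: p = 2 components with different polynomial designs on [0, 1]
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
X1 = np.column_stack([np.ones_like(t), t])          # first-order model
X2 = np.column_stack([np.ones_like(t), t, t ** 2])  # second-order model
Z = np.column_stack([1.0 + 2.0 * t, 0.5 - t + t ** 2]) + rng.standard_normal((200, 2))
R = componentwise_residuals(Z, [X1, X2])
```

The resulting matrix is symmetric and positive semidefinite by construction; each residual column is numerically orthogonal to its own design.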

Remark 6. Since Σ̂_{n_1⋯n_d} is consistent for Σ, the population variance-covariance matrix Σ can be directly replaced by this estimator without altering the convergence result presented in Theorem 1.

Calibration of the Tests
The limits of the test statistics are not distribution-free; we therefore need to calibrate the distributions of the test statistics. For the calibration we adapt the idea of the residual-based bootstrap for multivariate regression studied in Shao and Tu [32] to approximate the distributions of KS_{n_1⋯n_d,A} and CvM_{n_1⋯n_d,A}.
For fixed n_1, …, n_d, let F_{n_1⋯n_d} be the empirical distribution function of the vectors of least squares residuals {r_{j_1⋯j_d} − R̄_{n_1⋯n_d} : 1 ≤ j_k ≤ n_k, k = 1, …, d}, centered at the zero vector. Based on this model we obtain the array of p-dimensional bootstrap least squares residuals, given by the component-wise projection of the bootstrap observations. Hence the bootstrap analogs of KS_{n_1⋯n_d,A} and CvM_{n_1⋯n_d,A} are defined accordingly. The question regarding the consistency of the bootstrap approximation of the p-dimensional processes is answered by the following theorem.

Theorem 7. Let {f_{i1}, …, f_{id_i}} be an ONB of W_i, for i = 1, …, p. Suppose the conditions of Theorem 1 are fulfilled. Then under H_0 the stated convergence holds.

Proof. We notice that {E*_{j_1⋯j_d} : 1 ≤ j_k ≤ n_k, k = 1, …, d} are independent and identically distributed, and Σ*^{−1/2}_{n_1⋯n_d} can be written in the displayed form, where it can easily be shown that the remainder is o_P(1), that is, a collection of terms converging in probability to O_{p×p}. Then, by recalling Theorem 5 and the linearity of V_{n_1⋯n_d}, we only need to show the stated weak convergence. The proof is established by imitating the steps of the proof of Theorem 1. We are done.
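A rough sketch of the residual-based bootstrap calibration: resample centered residual p-vectors with replacement, rebuild bootstrap observations around the fitted means, refit, and recompute the statistic on the bootstrap residuals. The statistic passed in below and all names are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def residual_bootstrap_stats(Z, X_list, stat_fn, B=500, seed=None):
    """Residual-based bootstrap approximation of the null distribution of a
    test statistic.  Z is (N, p); X_list holds one design matrix per
    component; stat_fn maps an (N, p) residual matrix to a scalar.
    """
    rng = np.random.default_rng(seed)
    N, p = Z.shape
    fitted = np.empty_like(Z, dtype=float)
    for i, X in enumerate(X_list):
        beta, *_ = np.linalg.lstsq(X, Z[:, i], rcond=None)
        fitted[:, i] = X @ beta
    resid = Z - fitted
    resid_c = resid - resid.mean(axis=0)          # center the residual vectors
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, N, size=N)          # resample whole p-vectors
        Z_star = fitted + resid_c[idx]
        R_star = np.column_stack([
            Z_star[:, i] - X @ np.linalg.lstsq(X, Z_star[:, i], rcond=None)[0]
            for i, X in enumerate(X_list)
        ])
        stats[b] = stat_fn(R_star)
    return stats

# toy usage: bivariate first-order model on a 1-D design of 100 points
t = np.linspace(0, 1, 100)
X = np.column_stack([np.ones_like(t), t])
rng0 = np.random.default_rng(3)
Z = np.column_stack([t, 1 - t]) + rng0.standard_normal((100, 2))
stats = residual_bootstrap_stats(
    Z, [X, X], lambda R: np.abs(R.cumsum(axis=0)).max() / 10.0, B=200)
crit = np.quantile(stats, 0.95)                   # bootstrap critical value
```

The observed statistic would then be compared with `crit`; the bootstrap p value is the fraction of `stats` exceeding the observed value.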

Simulation Study
In this section, we report on a simulation study designed to investigate the finite sample behavior of the KS and CvM tests. We simulate a multivariate model with four components defined on the unit rectangle I^2. The hypothesis under study involves four constants determined prior to the generation of the samples. For fixed n_1 and n_2 and 1 ≤ ℓ ≤ n_1 and 1 ≤ m ≤ n_2, the vector of random errors E(ℓ/n_1, m/n_2) is generated independently from the 4-variate normal distribution with mean zero and variance-covariance matrix Σ_{4×4} given in (59); however, we assume in the computation that Σ_{4×4} is unknown. It is therefore estimated using Σ̂_{n_1 n_2} defined in Section 4. It is important to note that, for computational reasons, we restricted the index set to the Vapnik-Chervonenkis class (VCC) of subsets of I^2 given by the family of closed rectangles with the point (0, 0) as the essential point, that is, the family {[0, s] × [0, t] : 0 ≤ s, t ≤ 1}.
When the four constants are set simultaneously to zero, we obtain samples that coincide with the model specified under H_0. Conversely, when at least one of them takes a nonzero value, the generated samples can be regarded as coming from the alternative.
Table 1 presents the empirical probabilities of rejection of H_0 for α = 0.05 and some selected values of the four constants. The empirical powers of the KS and CvM tests are denoted by α̂_KS and α̂_CvM, respectively. The notations σ̂_KS and σ̂_CvM stand, respectively, for the standard deviations of the samples. The critical values of the statistics KS_{n_1,n_2;A} and CvM_{n_1,n_2;A} are 6.0742 and 7.9910, respectively, approximated by simulation. When all four constants are zero, the values of α̂_KS and α̂_CvM fluctuate around 0.05, as they should. This means that, independently of the selected number of lattice points, both tests attain the specified level of significance.
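The level computation reported in Table 1 can be mimicked by a small Monte Carlo sketch that generates pure-noise lattices under the null (no regression fit, for brevity) and counts exceedances of a critical value. The critical value 3.5, the lattice size, and all names below are illustrative assumptions only.

```python
import numpy as np

def empirical_level(n1, n2, p, crit, stat_fn, runs=1000, seed=None):
    """Monte Carlo estimate of the rejection probability under H_0:
    generate pure-noise lattices (zero trend), evaluate the statistic on
    the scaled partial sums field, and count exceedances of `crit`."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(runs):
        E = rng.standard_normal((n1, n2, p))                 # i.i.d. errors
        V = E.cumsum(axis=0).cumsum(axis=1) / np.sqrt(n1 * n2)
        hits += bool(stat_fn(V) > crit)
    return hits / runs

ks_stat = lambda V: np.sqrt((V ** 2).sum(axis=-1).max())
alpha_hat = empirical_level(20, 20, 4, crit=3.5, stat_fn=ks_stat,
                            runs=200, seed=0)
```

Replacing the fixed `crit` by a simulated or bootstrapped critical value and adding a deterministic trend to `E` would give the empirical power entries of Table 1.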
Furthermore, Figure 1 exhibits the graphs of the empirical power functions of the KS and CvM tests for α = 0.05 associated with the hypothesis H_0 specified above against H_1: g_i ∉ W_i, for i = 1, 2, 3, 4. For the four cases we generate the error vectors independently from the same 4-variate normal distribution mentioned above. In the clockwise direction, the top-left panel presents the graphs of the power function for testing H_0 against H_1: g_1 ∉ W_1, the top-right panel is for H_0 against H_1: g_2 ∉ W_2, the bottom-right is for H_0 against H_1: g_3 ∉ W_3, and the bottom-left is for H_0 against H_1: g_4 ∉ W_4. The common characteristic of the tests is that the power gets larger as the model moves away from H_0. The KS tests, represented by the smooth line, tend to have slightly larger power. However, somewhat unexpectedly, in the second case the CvM test has much larger power.

Example of Application
In this example, the proposed method is applied to mining data studied in Tahir [20]. As introduced in Section 1, the data consist of simultaneous measurements of the percentages of nickel (Ni), cobalt (Co), and iron (Fe), and of other substances such as calcium oxide (CaO) and silicon dioxide (SiO2). We obtained the values KS_{7×14;A} = 1.44109 and CvM_{7×14;A} = 0.936687, with associated simulated p values of 0.98350 and 0.93670, respectively. We note that in the computation we consider the VCC {[0, s] × [0, t] : 0 ≤ s, t ≤ 1} as the index set instead of A. Hence, when using the KS test as well as the CvM test, the hypothesis is not rejected for almost all commonly used values of α. There is significant evidence that the assumed model is appropriate for describing the functional relationship between the experimental conditions and the percentages of those elements.
In practice, some computational difficulties appear when testing with our proposed method. First, to the knowledge of the authors, analytical formulas for computing the critical values and p values of the tests are not yet available in the literature; therefore we need to approximate them by computer simulation. Second, although the test procedures are established for a much larger family of sets A, in applications the computation is always restricted to a VCC of subsets of I^d, such as {∏_{k=1}^d [0, t_k] ⊂ I^d : 0 < t_k ≤ 1, k = 1, …, d} or {∏_{k=1}^d [s_k, t_k] ⊂ I^d : 0 ≤ s_k < t_k ≤ 1, k = 1, …, d}.

Concluding Remark
In this article we have developed an asymptotic method for checking the validity of a general multivariate spatial regression model by considering the multidimensional set-indexed partial sums of the residuals. For the calibration of the distribution of the test statistics we propose a residual-based bootstrap for multivariate regression. It is shown, by an imitation technique, that the residual bootstrap resampling scheme is consistent. In a simulation study the finite sample behavior of the KS and CvM statistics is investigated in greater detail. For the first-order model the CvM test has much larger power, whereas for the constant, second-order, and third-order models the powers of the two tests are almost the same.
Other possibilities for tests in the multidimensional case can be obtained by incorporating a sampling technique according to an arbitrary experimental design. Sometimes, for technical, economic, or ecological reasons, practitioners will not or cannot sample the observations equidistantly. One possible approach is to sample according to a continuous probability measure; see, for example, the sampling method proposed in Bischoff [11]. Under this approach we get so-called weighted KS and CvM tests, which can be viewed as generalizations of the KS and CvM tests studied in this paper.
Instead of considering the least squares residuals of the observations, we can also define a test by directly investigating the partial sums of the observations. The limit process will be of signal-plus-noise type, where the noise is given by the multidimensional set-indexed Brownian sheet. Given the limit process, we can formulate a likelihood ratio test based on the Cameron-Martin-Girsanov density formula of the limit process. Establishing such a test will be the concern of our future research.
Lemma C.2 (see Bischoff and Somayasa [9]). For any element, the displayed decomposition holds. Furthermore, by the definition of the component-wise projection, we finally get the stated representation.

Figure 1: The empirical power functions of the KS test (smooth line) and CvM test (dotted line) for the 4-variate model using a 50 × 60-point regular lattice. The simulation is based on 10000 runs.

A′ ∈ A, where I_p is the p × p identity matrix. Suppose {f_{i1}, …, f_{id_i}} are in C(I^d) ∩ BV_H(I^d), where C(I^d) is the space of continuous functions on I^d (see Definition A.4 for the definition of BV_H(I^d)). Then under H_0 it holds that
it is clear that H_i and L_2(λ_{I^d}) are isometric. For our result we need to define the subspaces W_i and W_{H_i} associated with the regression functions f_{i1}, …, f_{id_i}, where W_i := [f_{i1}, …, f_{id_i}] ⊂ L_2(λ_{I^d}) and W_{H_i} := [h_{i1}, …, h_{id_i}] ⊂ H_i, for i = 1, …, p. Now we are ready to state the limit process of the sequence of p-dimensional set-indexed residual partial sums processes for the model specified under H_0.
for all i and j; then the last expression clearly converges component-wise to the stated limit. By Theorem 3 the power functions of the KS and CvM tests at level α can now be approximated by computing probabilities of the displayed form, using the definition of the Riemann-Stieltjes integral on I^d, since s^{(n_1⋯n_d)} converges uniformly to g_i and g_i has bounded variation on I^d. Let {Ξ_{j_1⋯j_d}} be an array of independent and identically distributed random vectors sampled from F_{n_1⋯n_d}, and let ĝ(Ξ_{n_1⋯n_d}) be the ordinary LSE of g(Ξ_{n_1⋯n_d}), where ĝ(Ξ_{n_1⋯n_d}) = pr_{∏_{i=1}^p W_i} Z_{n_1⋯n_d}. Then we generate the array of p-dimensional bootstrap observations, denoted in this paper by Z*_{n_1⋯n_d}, through the model displayed above, where g(ℓ/n_1, m/n_2) is defined as

Table 1: The empirical probabilities of rejection of the KS and CvM tests for several selected values of the four constants. The sample sizes are 35 × 40 and 50 × 60. The simulation results are based on 10000 runs.