Asymptotic Normality of Nonparametric Kernel Regression Estimation for Missing at Random Functional Spatial Data

This study investigates the estimation of the regression function by the kernel method in the presence of missing at random responses, assuming spatial dependence and complete observation of the functional regressor. We establish the asymptotic properties of the proposed estimator, deriving its convergence in probability (with rates) as well as its asymptotic normality under mild conditions. Simulation studies are then presented to examine the performance of the proposed estimator. This is followed by a real data analysis illustrating the suggested estimator's efficacy and demonstrating its superiority. The results show that the proposed estimator outperforms existing estimators as the proportion of missing at random data increases.


Introduction
In several domains of current research, including environmental sciences, geography, econometrics, microbiology, geophysics, climatology, and other applied fields, the analysis of massive volumes of data indexed by a spatial argument (geographical location) is frequently required. To describe these processes, one needs to characterize the relationship, in terms of correlation, between random variables in one area and those in nearby areas. This step is considered one of the most essential parts of analyzing spatial data. Recently, the new statistical branch called functional data analysis (FDA) has given new dynamism to theoretical and methodological improvements and to the diversification of application domains. Such improvements have been made possible as the storage capacities of computing tools have increased, allowing large amounts of data to be stored and analyzed. We mention the monographs [1] for the practical aspects, reference [2] for the theoretical elements, and reference [3] for a nonparametric study as reference works on the issue. For the most recent contributions in this area, readers can consult the book [4] as well as several bibliographic reviews in [5, 6]. In this context, functional regression is an essential component of the FDA because it links the functional regressor X to the scalar variable Y. The authors in [7] discussed and established the initial findings for estimating the regression function (in a semimetric space). Interested readers should note that the theory and methods in this field of study are well established; see, for example, the monograph [3] and the references therein, as well as [8, 9].
Spatial functional statistics, which couples functional data with geographical dependence, extends the FDA methodology to the analysis of a sample of functions obtained at different regional sites (functional data with spatial correlation). Both the theoretical and practical aspects of statistics stand to benefit from this combination (for some recent, advanced, and noteworthy citations on the topic, see [10]). The model of spatial functional regression is considered and explored in [11], where the authors established rates of almost sure convergence for the nonparametric kernel method in functional regression. Then, the authors in [12] determined the asymptotic normality of robust regression in the same framework. We note that spatial functional regression is a particular case of two widely recognized spatial dependence models that have garnered significant interest in the analysis of lattice data, namely the spatial autoregressive (SAR) dependent-variable model and the spatial autoregressive error (SAE) model, where the model error is SAR. These models extend the concept of regression from a time series framework to a spatial context (see [13] for an extensive discussion).
All the works cited above deal with complete data. Unfortunately, complete data are not available in many applications, such as the analysis of survival data. Specifically, the problem lies in finding the best way to replace missing data and in controlling the accuracy of such an imputation; this topic has received extensive study and treatment in the multivariate case (see, for example, [14-16]). Commonly used imputation techniques for missing responses include kernel regression imputation, linear regression imputation, and so on. There are many studies in the statistical literature on regression functions with missing data, and on the related statistical inference, when the predictor variables are finite-dimensional. For parametric regression, we quote [17, 18], and for nonparametric kernel regression, we cite [19, 20]. References [21-23] examined the case in which some observations on the covariates are missing at random (MAR), whereas the observations on the scalar response are fully observed. Reference [24] examined missing data in the robust regression model, while the author in [25] studied MAR regression with missingness in both the response variable and the predictors (covariates).
Very little research has been conducted on the properties of the functional nonparametric regression model with missing data when the predictors are functional. The problem was first addressed by the authors in [8], who estimated the mean of a MAR scalar response using an i.i.d. functional sample with fully observed predictors. They extended the result in [19] and established the asymptotic properties of the regression operator estimate when the functional regressor is completely observed and some responses are missing at random. Later, the authors in [26] established the asymptotic properties of the regression function when the explanatory variables are functional, stationary, and ergodic, with MAR responses. The local linear estimation method and the k-nearest neighbor (k-NN) technique were used by the authors in [27] to estimate the regression function when the regressor is functional and the response is scalar and subject to MAR, while the authors in [6] constructed the nonparametric quantile regression estimate for functional data with MAR responses. The authors in [28] suggested and compared different methods for estimating spatial autoregressive panel models with randomly missing data in the dependent variable.
As far as we know, no previous study has addressed nonparametric regression for functional spatial data with MAR responses. Hence, our goal is to use the kernel method to estimate the regression function from spatially dependent functional data when the response is MAR.
We structure this paper as follows. Section 2 introduces the considered spatial model (as in (1)) and explicitly constructs the estimator of m(·) under MAR. Section 3 outlines the notation and the assumptions behind the considered model. Section 4 presents the main theoretical results of our study. Section 5 is devoted to the evaluation of our method using both a simulation study and a real data application; it includes a comparison between the typical spatial nonparametric functional model and its incomplete counterpart, demonstrating the superiority of our method. The proofs supporting our findings are then given, and our conclusion is stated at the end.

The Estimates and the Spatial Model
Consider Z_i = (X_i, Y_i), i ∈ Z^N, a measurable strictly stationary spatial process defined over a probability space (Ω, A, P), with the same distribution as Z = (X, Y), where X is a functional random variable valued in a separable semimetric space (E, d(·, ·)) and Y is a real-valued, integrable variable. We suppose that the process is observed over the rectangular region indexed by n = (n_1, ..., n_N). Suppose moreover that, for l = 1, ..., N, n_l approaches infinity at the same rate (the ratios n_l/n_k remain bounded). The term "site" will be used to refer to a point i. If min_{k=1,...,N}(n_k) → ∞, we shall write n → ∞.
The nonparametric spatial regression model is as follows: Y_i = m(X_i) + ε_i, i ∈ Z^N, where the function m(·) is unknown and the random errors ε_i are centered, independent and identically distributed with E(ε_i | X_i) = 0 and unknown finite variance σ² = var(ε_i).
Following [11], the spatial kernel regression estimator of m(·) in the complete-data case is the Nadaraya-Watson-type estimator built with a kernel function K and a sequence of bandwidths a_n tending to zero as n approaches infinity. Our contribution is distinguished by the fact that we tackle the issue of incomplete data. In particular, we examine the situation where the response observations (the Y values) are MAR, while the values of the explanatory variable X are all observed. For simplicity, we introduce a real random variable δ and consider the sample δ_i, where δ_i = 1 if the value Y_i is observed and δ_i = 0 otherwise; the observed sample is thus (X_i, Y_i, δ_i), where Y_i is available only when δ_i = 1. The conditional probability π(x) = P(δ = 1 | X = x) of observing the response Y given the explanatory variable X is typically unknown; accordingly, the random variable δ_i is assumed to follow a Bernoulli distribution with P(δ_i = 1 | X_i = x, Y_i = y) = P(δ_i = 1 | X_i = x) = π(x), which is the MAR assumption. Under this assumption, we build the estimator of m(·) from the incomplete sample, with K the kernel function and h_n a sequence of bandwidths tending to zero as n approaches infinity.
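To make the construction concrete, the sketch below implements a MAR-weighted Nadaraya-Watson estimator of the form m̂_n(x) = Σ_i δ_i Y_i K(d(X_i, x)/h_n) / Σ_i δ_i K(d(X_i, x)/h_n), a plausible form consistent with the description above rather than the paper's exact display; the discretization grid, the L2 semimetric, and all variable names are illustrative assumptions.

```r
## Minimal sketch (not the authors' code) of a MAR-weighted kernel estimator:
## m_hat(x) = sum_i delta_i * Y_i * K(d(X_i, x)/h) / sum_i delta_i * K(d(X_i, x)/h).
## `curves` is an n x p matrix of discretized curves on `grid`, `y` the responses
## (NA when missing), `delta` the missingness indicators (1 = observed).

quad_kernel <- function(t) 1.5 * (1 - t^2) * (t >= 0 & t <= 1)  # K(t) = 3/2 (1 - t^2) on [0, 1]

l2_semimetric <- function(u, v, grid) {
  # L2 distance between two discretized curves (rectangle rule)
  sqrt(sum(diff(grid) * ((u - v)^2)[-length(u)]))
}

mar_kernel_regression <- function(x0, curves, y, delta, h, grid) {
  d <- apply(curves, 1, l2_semimetric, v = x0, grid = grid)  # d(X_i, x0)
  w <- delta * quad_kernel(d / h)                            # only observed responses get weight
  if (sum(w) == 0) return(NA_real_)                          # no observed neighbour within bandwidth
  sum(w * y, na.rm = TRUE) / sum(w)
}
```

With δ_i ≡ 1 the sketch reduces to the complete-data kernel estimator of [11]; the quadratic kernel shown here anticipates the one used in the simulation section.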
Recall that our primary objective is to investigate the asymptotic normality of our estimator when the process Z_i is strictly stationary and satisfies the following α-mixing condition.
There exists a real function φ(t) tending to 0 as t goes to ∞ such that, for finite-cardinality subsets E, E′ ⊂ Z^N, the dependence between B(E) and B(E′), the σ-fields generated by the random variables Z_i for i in E and in E′ respectively, is controlled in terms of ψ(Card(E), Card(E′)) and φ(dist(E, E′)), where ψ: Z² → R+ is a symmetric nondecreasing function. We assume that the two functions ψ and φ satisfy the conditions labelled (8) and (9). Condition (8) can be replaced by an alternative condition of the same kind; moreover, many stochastic processes satisfy the mixing conditions (8) and (9) (see [29] for some examples).
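For readability, the mixing condition can be written in the standard form used in the spatial kernel literature (see, e.g., [29, 30]); the display below is a reconstruction consistent with the description above, with B(E) denoting the σ-field generated by {Z_i, i ∈ E}, and is not a verbatim copy of the paper's display.

```latex
\alpha\bigl(\mathcal{B}(E),\mathcal{B}(E')\bigr)
  := \sup_{A \in \mathcal{B}(E),\, B \in \mathcal{B}(E')} \bigl|\,P(A \cap B) - P(A)P(B)\,\bigr|
  \;\le\; \psi\bigl(\operatorname{Card}(E),\operatorname{Card}(E')\bigr)\,
          \varphi\bigl(\operatorname{dist}(E, E')\bigr),
```

where φ(t) decreases to 0 as t → ∞ and ψ is symmetric and nondecreasing in each argument.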

Notations and Hypotheses
Foremost, for x ∈ E, we denote B(x, h) = {x′ ∈ E : d(x, x′) < h} and ϕ_x(h) = P(X ∈ B(x, h)), called the small ball probability. The consistency results for the proposed predictor are established under the following assumptions:
(H1): we suppose that, for all i ≠ j ∈ Z^N, the joint probability distribution ν_{ij} of X_i and X_j satisfies, for all x ∈ E, a local concentration bound controlling it with respect to its margins.
(H2): K: R → R+ is assumed to be a differentiable function supported on the interval [0, 1]; its derivative K′ exists, and there are two constants C_3 and C_4 bounding the kernel on its support.
(H3): there exist constants C > 0 and κ > 0 such that the regression operator m satisfies a Hölder-type condition of order κ.
(H4): there exist differentiable nonnegative functions τ and f such that ϕ_x(h) factorizes asymptotically as f(x)τ(h).
(H5): the bandwidth h_n tends to 0 as n tends to ∞ and, for all t in the interval [0, 1], the limit of τ(t h_n)/τ(h_n) exists.
(H6): there exists s > 2 such that the conditional moment E(|Y|^s | X = x) is bounded by a function V_s(x) < ∞.
(H7): the functions V_2(·) and V_s(·) are continuous near x, i.e., sup_{x′ ∈ B(x,h)} |V_j(x′) − V_j(x)| → 0 as h → 0 for j ∈ {2, s}.
(H8): π(·) is a continuous function near x, i.e., sup_{x′ ∈ B(x,h)} |π(x′) − π(x)| → 0 as h → 0.
(H9): there exists θ with 1 > θ > N/c such that n̂^{(θ−1)/(2N+1)} ≤ ϕ_x(h).
Comments on the assumptions:
(i) In this regard, our conditions are quite standard. The conditions on ϕ_x(·) are identical to those employed by the authors in [3], and the above assumptions are commonly used in nonparametric statistics for functional regression models. Assumption (H1) specifies the behavior of the joint distribution of the couple (X_i, X_j) with respect to its margins; it permits us to present an explicit asymptotic variance term and measures the local dependence of the observations.
The local dependence condition (H6) (respectively, (H7)) is a classical condition in kernel estimation based on non-strictly-stationary dependent data (see, for example, [30]). Assumption (H6) (respectively, (H7)) controls the local dependence (respectively, the local identical distribution), whereas the mixing condition regulates the dependence between distant sites.
(ii) It is important to note that condition (H3) defines the nonparametric framework of our model. Once more, it is possible to proceed without the Hölder condition and instead make a less restrictive regularity assumption on the nonparametric model. Nevertheless, any limitation imposed on this assumption also affects the convergence rate of the bias term: imposing a more stringent condition on the model leads to an enhanced rate of convergence, while relaxing the condition results in a slower rate. Specifically, if we substitute the Hölder condition with a mere continuity assumption, the convergence rate becomes slower and the bias term is only of order o(1). In summary, hypothesis (H3) is formulated in a broad manner, enabling the nonparametric nature of the model to be reflected in the convergence rate through the bias term.
(iii) In infinite-dimensional spaces, the definition of ϕ_x(h) together with assumption (H4) is known as the "concentration property." For numerous instances, the small ball probability ϕ_x(h) can be approximated, around zero, as the product of two independent functions f(x) and τ(h) (see, for example, reference [31] for the diffusion process, reference [32] for a Gaussian measure, and reference [33] for a general Gaussian process). The most common result found in the literature has the form ϕ_x(h) ∼ f(x)τ(h), where τ(h) = h^c exp(−C/h^p) with c ≥ 0 and p ≥ 0. This covers the Ornstein-Uhlenbeck and general diffusion processes (p = 2 and c = 0 for such processes) and the fractal processes (c > 0 and p = 0 for such processes). This class of processes also meets the requirements of condition (H5). It should be noted that these concepts are closely related to the proximity measure d under consideration, and all the instances described previously involve d being a standard norm (such as the Hölder norm or the supremum norm, for example). Many continuous-time processes (see, for example, reference [32] for a Gaussian process) satisfy hypothesis (H5).
(iv) Hypotheses (H7)-(H9) state the local continuity conditions required to establish and consolidate the main results. In fact, by properties of conditional expectation, if V_s is continuous for some s > 2, then V_2 is also continuous.
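As a purely illustrative complement to the concentration property, the small ball probability ϕ_x(h) can be approximated empirically by the proportion of sample curves falling in B(x, h); the snippet below is an assumption-laden sketch and not part of the authors' methodology.

```r
## Empirical small ball probability: proportion of sample curves lying within
## distance h of a reference curve x0 (Euclidean distance between discretized
## curves is used here purely for illustration).
small_ball_prob <- function(x0, curves, h) {
  d <- sqrt(rowSums(sweep(curves, 2, x0)^2))
  mean(d <= h)
}
```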

Theoretical Results
We can now present our main results. It is important to note that these results extend the complete-data results obtained in [11].
The following result gives the convergence in probability of the kernel regression estimator under MAR.

Theorem 2. Assume that hypotheses (H1) through (H9) hold and, in addition, that n̂ ϕ_x(h_n) tends to ∞ as n tends to ∞.

The proofs of Theorems 1 and 2 are based on a decomposition of m̂_n(x) − m(x); Theorems 1 and 2 are then immediate consequences of the following lemmas.

Simulation and Application Results
The primary purpose of this section is to evaluate the behavior of our estimator for various missing rates and sample sizes and to demonstrate the efficacy of this approach in comparison with the conventional one.

Simulation Study.
We establish here the significance of our proposed predictor by evaluating its performance in numerical experiments. The introduced predictor is compared to the conventional kernel technique, which ignores missing data. In order to assess the finite-sample performance of the introduced estimator m̂_n, we conducted a simulation study based on observations indexed by i ∈ Z². The model was generated with regression operator m(Z) = 5/∫_0^1 |Z(t)| dt. Next, we denote by GRF(m, σ², s) a stationary Gaussian random field with mean m, variance σ², and scale parameter s, whose covariance function is designed to ensure and adjust the spatial mixing conditions. We simulated model (29) and generated the missingness as described by [8], using the expit link, where expit(u) = e^u/(1 + e^u) for all u ∈ R. The corresponding formula contains a parameter denoted by κ, which controls the level of dependence between the functional curve X and the indicator δ; κ is then calibrated so as to maintain the desired value of p(x). Figure 1 depicts the simulated functional curves.
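A minimal sketch of the MAR mechanism is given below. The scalar summary of the curve fed into the expit link (here its mean absolute level) and the intercept c0 are hypothetical choices introduced only to illustrate how κ tunes the dependence between X and δ; they are not the exact formula of [8].

```r
## MAR mechanism sketch: delta_i ~ Bernoulli(p(X_i)) with
## p(X_i) = expit(kappa * s(X_i) + c0), s(.) a scalar summary of the curve.
## kappa controls the X-delta dependence; c0 is tuned to reach a target
## average missingness rate. All choices here are hypothetical.

expit <- function(u) exp(u) / (1 + exp(u))

simulate_delta <- function(curves, kappa, c0) {
  s <- apply(curves, 1, function(z) mean(abs(z)))  # scalar summary of each curve
  p <- expit(kappa * s + c0)                       # observation probabilities pi(X_i)
  rbinom(nrow(curves), size = 1, prob = p)         # delta_i = 1 means Y_i is observed
}
```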
Regarding the parameters involved in the implementation of the estimator m̂_n, we emphasize that a quadratic kernel has been considered, given by K(t) = (3/2)(1 − t²)1_{[0,1]}(t). Following [3], the bandwidth parameter is defined in the same way as there. The locations i and j with ‖i − j‖ < 15 are spatially dependent and almost independent when ‖i − j‖ ≥ 15, since the model (under these conditions) is based on Gaussian random fields with covariance function C and scale s = 5. Our observations are, therefore, a combination of dependent and (almost) independent observations (see Figure 2). Hence, decreasing the value of a is all that is required to move away from independence (our results are based on a = 0.5).
The primary purpose of this comparison is to examine our proposed estimator (MAR) against the naive estimator (MARV) and the complete-data estimator (ECD) proposed by [11]. To assess the effectiveness of the proposed estimator, the sample (X_i, Y_i, δ_i)_i was divided into two subsets at random: a training sample and a test sample. We use the training sample to determine the smoothing parameter h_{k_opt} via k-NN cross-validation; the bandwidth corresponding to the optimal number of neighbors generated by the cross-validation technique is denoted by h_{k_opt}, and m̂_n^{(-i)} is the leave-one-out version of m̂_n, evaluated by eliminating the i-th datum from the initial sample (for additional information, see [3]).
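The bandwidth selection step can be sketched as a leave-one-out cross-validation over the number of neighbours k, each candidate k inducing a local bandwidth equal to the distance to the k-th nearest curve; this generic sketch is in the spirit of [3] and uses illustrative names, not the authors' code.

```r
## k-NN cross-validation sketch: for each candidate k, the local bandwidth at a
## curve is the distance to its k-th nearest neighbour; k_opt minimises the
## leave-one-out error computed on the observed responses (delta_i = 1).

quad_kernel <- function(t) 1.5 * (1 - t^2) * (t >= 0 & t <= 1)

knn_cv <- function(curves, y, delta, k_grid = 5:30) {
  n <- nrow(curves)
  D <- as.matrix(dist(curves))                  # distances between discretized curves
  cv_err <- sapply(k_grid, function(k) {
    pred <- sapply(seq_len(n), function(i) {
      d  <- D[i, -i]; yi <- y[-i]; di <- delta[-i]
      h  <- sort(d)[k]                          # distance to the k-th nearest neighbour
      w  <- di * quad_kernel(d / h)
      if (sum(w) == 0) NA_real_ else sum(w * yi, na.rm = TRUE) / sum(w)
    })
    obs <- delta == 1
    mean((y[obs] - pred[obs])^2, na.rm = TRUE)  # error on observed responses only
  })
  k_grid[which.min(cv_err)]
}
```

The selected number of neighbours then yields, at each prediction point, the local bandwidth h_{k_opt} used by the estimator.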
The accuracy of the estimate m̂_n(·) of m(·) was assessed using the mean squared error (MSE) over the test sample, where #(I′) is the size of the test sample. The results of the three models are depicted in Figure 3, which compares the predicted values to the real values. In addition, Tables 1 and 2 report the MSE and the bias for the MAR, MARV, and complete-data models, respectively.
We also evaluate the suggested estimator's performance in terms of bias. Using M = 100 replicates of the experiment, we quantify the bias of the estimators of m(x), where m̂_n^{(k)}(x) is the estimator of m(x) for replication k of the different proposed models. These results are summarized in Table 2.
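The two accuracy criteria can be written compactly as follows, assuming `y_test` and `pred` hold the observed and predicted responses on the test sample I′ and `m_hat_reps` is an M × (number of evaluation points) matrix of the replicated estimates m̂_n^{(k)}(x); all names are illustrative.

```r
## Mean squared prediction error on the test sample I' and Monte Carlo bias
## over M replications (illustrative placeholders).

mse <- function(y_test, pred) mean((y_test - pred)^2, na.rm = TRUE)

bias <- function(m_hat_reps, m_true) {
  # m_hat_reps: M x length(m_true) matrix, one row per replication
  colMeans(m_hat_reps) - m_true
}
```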
When the missing data rate is small, the naïve version provides a smaller MSE, but as the rate rises, the MAR estimator provides a better estimate. This is shown in Tables 1 and 2. We also note that, as n increases, the MSE and the bias decrease markedly. Such numerical outcomes are consistent with the theoretical conclusions of Theorem 1. In addition, the bias is negligible in all cases and is always negative in all MAR settings.

Real Data Application.
In this section, we recall that the stationarity hypothesis is fundamental to the nonparametric analysis of spatio-functional data and that the detrending method described below is a suitable way to ensure this hypothesis. Given the daily temperature curve denoted by X, we are interested in forecasting the daily mean ozone concentration Y (for August 11, 2019). We suppose that the two variables are linked by the regression model (1). Figure 5 displays the 122 curves of hourly temperature measurements of the stations, in degrees Fahrenheit.
The functional explanatory variable X_i represents the daily temperature curve at the i-th station (identified geographically by the coordinates i = (latitude, longitude)), whereas Y_i is the ozone concentration at the same location. We applied the theoretical findings of the previous sections to these data. Specifically, in the context of spatial functional prediction, we analyze the effectiveness of our constructed estimator with MAR data, which highlights the importance of taking spatial locations into account for this kind of data. Note that our data contain missing values (38 stations with missing responses, about 31.15% missing data), since, at some stations, Y_i is not measured for part of the sample. Therefore, our sample takes the form (X_i, Y_i, δ_i), where δ_i = 1 if Y_i is observed and 0 otherwise. Further, the quadratic kernel is used in the model. In functional nonparametric regression, the choice of the semimetric is a key decision, since it significantly influences the type of model considered and the effectiveness of the estimation procedure adapted to this type of data. We use a PCA-type semimetric with q = 4, choosing the eigenfunctions v_k from the eigenfunctions of the empirical covariance operator. Finally, the optimal bandwidth h := h_{n,K} is selected by the cross-validation method. We then split our data (X_i, Y_i)_i at random into two subsets: a test sample (X_i, Y_i)_{i∈I′} (30 stations) and a learning sample (X_i, Y_i)_{i∈I} (92 stations). The mean squared error (MSE) over the test sample is used as an accuracy indicator, where Ŷ_i denotes the predicted value. To investigate the efficacy of our models further, we perform M = 100 independent repetitions, which allow us to generate 100 values of the MSE and to depict their distribution using boxplots. The boxplots of the MSE of the predictions are shown in Figure 6. In Figure 7, we show the 90% prediction intervals for the ozone concentrations of the last 20 data points in the test sample. This result demonstrates that our asymptotic normality result is effective in practice. As noted in [34], the use of this type of spatial modeling requires prior preparation of the initial data in order to verify the stationarity hypothesis. The latter controls the spatial heterogeneity linked to a differentiation of the effects of space on the sampling units. To control this aspect, we adopt the algorithm proposed by [34] for the multivariate finite-dimensional case, in which the spatial heterogeneity of the two variables (explanatory and response) is modeled by an auxiliary regression described below.
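The PCA-type semimetric mentioned above can be sketched as follows: the curves are projected on the first q = 4 eigenfunctions of the empirical covariance operator and the distance between two curves is the Euclidean distance between their score vectors, in the spirit of the classical construction of [3]; the code below is an illustrative sketch, not the authors' implementation.

```r
## PCA-type semimetric sketch: distance between the projections of the curves
## on the first q eigenfunctions of the empirical covariance operator.
## `curves` is an n x p matrix of discretized curves on a common grid.

pca_semimetric <- function(curves, q = 4) {
  xc     <- scale(curves, center = TRUE, scale = FALSE)   # centre the curves
  eig    <- eigen(crossprod(xc) / nrow(xc), symmetric = TRUE)
  v      <- eig$vectors[, 1:q, drop = FALSE]              # discretized eigenfunctions v_1, ..., v_q
  scores <- curves %*% v                                  # PCA scores of each curve
  as.matrix(dist(scores))                                 # d_q(X_i, X_j) for all pairs
}
```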

Thus, instead of the initial observations (X_i, Y_i, δ_i)_i, we compute the SPL and NP estimators from the detrended statistics, where m̂_1(·) and m̂_2(·) are the kernel estimators of the regression functions m_1(·) and m_2(·), built with kernel functions H_1 and H_2 and bandwidth parameters λ_n and c_n for the real-valued regressions. Such a step is called the "detrending step" and is fundamental in the nonparametric analysis of spatial data. For our real data set, we highlight the impact of this detrending step in practice by comparing the efficiency of our estimator in the two situations (with and without detrending). For this, we keep the same strategies as in the simulation example to select the parameters involved in the estimator; more precisely, we use the quadratic kernel on (0, 1), the PCA semimetric, and the cross-validation criterion to choose the smoothing parameter h_n. Concerning the real regressions m_1(·) and m_2(·), we used the routine npreg from the R package np. The feasibility of this approach is evaluated by randomly splitting the data sample several times (exactly 100 times). Finally, we examine the importance of the proposed detrending procedure through the MSE criterion used in the simulation example; the results are displayed in Figure 8.
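A minimal sketch of the detrending step is given below, under the assumption that the spatial trends m_1 and m_2 are smooth functions of the station coordinates estimated with the kernel routine npreg of the R package np; the data frame layout, the use of a scalar summary of the curves in place of a full functional detrending, and all variable names are illustrative simplifications, not the authors' code.

```r
## Detrending sketch: estimate the spatial trends of the response and of a
## scalar summary of the curves by kernel regression on the coordinates,
## then work with the detrended residuals. `df` is assumed to hold columns
## lat, lon, y (ozone) and xsum (curve summary); rows with missing responses
## would need to be handled separately.

library(np)

detrend <- function(df) {
  bw1 <- npregbw(y    ~ lat + lon, data = df)   # bandwidths for m_1
  bw2 <- npregbw(xsum ~ lat + lon, data = df)   # bandwidths for m_2
  m1  <- npreg(bws = bw1)
  m2  <- npreg(bws = bw2)
  within(df, {
    y_detr    <- y    - fitted(m1)              # detrended response
    xsum_detr <- xsum - fitted(m2)              # detrended regressor summary
  })
}
```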

Proofs of the Main Results
Throughout the rest of this section, we denote by λ_i(x) and L_i(x) the quantities entering the decomposition of the estimator.

Lemma 6. Under the hypotheses of Theorem 2, we obtain, for all (i, j), the announced bounds on Var(L_i(x)) and on the covariances Cov(L_i(x), L_j(x)).

For the variance term, conditioning on X_i and using (H3), (H8), and the MAR assumption yields the first bound. Likewise, using the MAR assumption and conditioning on X_i gives the second one. It then follows from (H7) and (H8), and from (H3) and (H8), that the remaining terms are controlled. Based on the resulting inequality, and since, under (H1)-(H2) and (H4)-(H5), the quantity β_j given in Theorem 2 satisfies the required bound, we obtain the stated control of the variance.

Let us now focus on the covariance term. Arguing as above, the sum of covariances is split into two sums, one over the set of sites E_1 and the other over E_2, where c_n tends to +∞ as n → ∞ and will be specified later. It follows, by assumption (H1), that the sum over E_1 is negligible. Next, using the boundedness of the random variable K_i, we deduce from Lemma (3.3) in [30] a bound for the sum over E_2, and then, by condition (9) and the choice c_n = (ϕ_x(h))^{−1/(Na)}, this sum is also negligible. So, equations (59), (61), and (63) imply that Σ_{i≠j} Cov(L_i(x), L_j(x)) = o(n̂).

□
Proof (Lemma 3). Clearly, E(ĝ_n(x)) → π(x) as n → ∞. Indeed, by conditioning on X_i and using the MAR assumption, assumptions (H2) and (H8) imply the claim. It then suffices to show that Var(ĝ_n(x)) tends to 0 as n → ∞. Using the same arguments and notation as in Lemma 6, we have on one side a bound for the variance terms, which yields the first estimate. On the other hand, for R_n = Σ_{i≠j} E(Λ_i Λ_j), using the same reasoning as in the previous lemma, we split the sum. For the first sum, over E_1, the definition of c_n and assumptions (H1) and (H8) give the required bound. For the second sum, over E_2, using the boundedness of the random variables K_i, we deduce the bound from Lemma (3.3) in [30]. Then, by condition (9) and the definition of c_n, this sum is also negligible. Finally, from (66), (69), and (73), we deduce that Var(ĝ_n(x)) → 0, from which we get (24).
Proof (Lemma 4). According to Lemma 3, it suffices to demonstrate the corresponding convergence. Following the same steps as in the previous proof and using (H2) and (H8), the result follows; this completes the proof of (25). The proof of (26) follows the same reasoning as in [26]; then (25) and the condition n̂ ϕ_x(h_n) → ∞ as n → ∞ complete the proof of (26) and therefore the proof of Lemma 4. The proof of (77) is identical to that of Lemma 5. □

Proof (Lemma 5). As in [35], the asymptotic normality is established by the blocking method, which consists in partitioning the random variables L_j into large blocks and small blocks U(n, j, 1), U(n, j, 2), and so on, built from block sizes p_n, q_n, and s_n. By (H9), it is simple to show that the sequences q_n, p_n, and s_n all tend to infinity.
In the following, we put m_k = n_k/(p_n + q_n) and, for each integer i = 1, ..., 2^N, we define the corresponding block random variables. Then the proof of Lemma 5 reduces to establishing (84) and (85). Clearly, to prove (85), we only need to show (86), and for this it suffices to notice the following.

Then, for all 2 ≤ i, j ≤ 2^N, the Cauchy-Schwarz inequality gives the required bound. So, to obtain (86), it suffices to prove (89). We demonstrate (89) only for i = 2, since the other cases are similar. Start by enumerating the variables U(n, j, 2) in an arbitrary way as Û_1, ..., Û_M and write (90). First, since (X_i, Y_i) is stationary, the corresponding identity holds. According to Lemma 6 and equation (53), we have Var[L_1] → V(x). Moreover, we employ Lemma (3.3) in [30] to bound the covariance terms, from which we deduce the corresponding estimate. Consequently, using the definition of M and p_n together with the previous fact, we obtain the announced bound.

Then, since s_n = o([n̂ ϕ_x(h)^{(1+2N)}]^{1/(2N)} q_n^{−1}), this last term converges to 0. Moreover, by (9) with c > N, we obtain the complementary bound, and we deduce the first claim. For the evaluation of A_2, it suffices to notice, by a simple calculation, that the sites of the r.v.'s L_i intervening in the two variables Û_i and Û_j with i ≠ j are at least a distance q_n apart. So, (92) and the stationarity of the process imply the corresponding bound for A_2. Finally, by assumption (H9) and the definition of q_n, we observe that this bound tends to zero, which gives the claim. The proof of (85) is therefore complete.
In order to prove (84), it suffices to demonstrate the following three claims.

Proof of (103). Let us enumerate the r.v.'s U(n, j, 1), j ∈ J, in an arbitrary manner as Û_1, ..., Û_T, where T = ∏_{k=1}^N r_k. Then, to prove (103), we will use Lemma 3 in [36]. For j ∈ J, we denote by Î_j the set of sites involved in U(n, j, 1). Each of the sets Î_j, 1 ≤ j ≤ T, contains p_n^N sites, and these sets are at least a distance q_n apart. Applying Lemma (3.3) in [30], we get (109). Using the same arguments as previously for A_2, we show that the covariance term tends to zero. Therefore, the limit in (104) is the same as the limit of (1/n̂)E[(W(n, 1))²].

□
Proof (Theorem 1). To prove (19), it suffices to use the decomposition above. Then, according to Lemma 5 and equations (24) and (26), the stated convergence follows, which completes the proof. □

Conclusion
This study examines a functional regression model when responses are missing at random and spatial dependence is present. We construct a Nadaraya-Watson kernel estimator of the nonparametric component based on the incomplete data and derive the estimator's asymptotic properties, namely convergence in probability (with rates) and asymptotic normality, under mild conditions. A simulation analysis and a real data application are carried out to illustrate the finite-sample behavior of the suggested estimator. Throughout, the missingness mechanism is assumed to be missing at random. The issue of nonignorable missing data, which has been extensively researched in classical statistical analysis, has so far received little attention in the functional data setting.

Figure 3 :
Figure 3: Plots of the predictions for the MAR, MARV, and complete data models.

Figure 4: Observed stations' locations (the red points are the observed stations and the blue ones are the missing stations).
Figure 5: Hourly temperature curves of the 122 stations, in degrees Fahrenheit.

Figure 7 :
Figure 7: Extremes of the predicted values compared to the real values and confidence intervals. The true values are connected by the solid black line. The dashed blue curves connect the predicted minimum and maximum values.

Figure 8 :
Figure 8: The boxplots of the predicted values' MSE without and with detrending.

Table 1 :
MSE (mean squared error) for complete data, MAR, and MARV models.

Table 2 :
BIAS for a fixed x for the complete data, MAR, and MARV models.