Temporal Forecasting with a Bayesian Spatial Predictor: Application to Ozone

This paper develops and empirically compares two Bayesian and empirical Bayes space-time approaches for forecasting next-day hourly ground-level ozone concentrations. The comparison involves the Chicago area in the summer of 2000 and measurements from fourteen monitors as reported in the EPA’s AQS database. One of these approaches adapts a multivariate method originally designed for spatial prediction. The second is based on a state-space modeling approach originally developed and used in a case study involving one week in Mexico City with ten monitoring sites. The first method proves superior to the second in the Chicago Case Study, judged by several criteria, notably root mean square predictive accuracy, computing times, and calibration of 95% predictive intervals.


Introduction
This paper compares two methods for temporally forecasting next-day hourly ground-level ozone concentrations over spatial regions. Software for implementing both methods, along with demo files, can be downloaded from http://enviro.stat.ubc.ca/. The paper focuses on a case study involving Chicago during the summer of 2000. The methods can be used to forecast the maximum eight-hour average ozone concentration, which is reported for many urban areas. For example, on June 27, 2009, the AIRNow website forecast a maximum for Chicago of between 0 and 50 ppb, rating that as "Good." In contrast, for that day in one part of Los Angeles, the rating was "Unhealthy for sensitive groups," meaning a forecast maximum of between 101 and 150 ppb.
These forecasts are needed to forewarn susceptible groups of high ozone concentrations that are associated with acute health effects. Such effects are well documented in the air quality criterion document (Ozone [1]; see http://oaspub.epa.gov/eims/eimsapi.dispdetail?deid=149923), the basis of the recommendations made in 2007 to the US Environmental Protection Agency by its Clean Air Scientific Advisory Committee for Ozone, on which the third author served. In fact, the accumulated body of evidence was so strong that the committee recommended strengthening the air quality standards for this criterion pollutant to meet the requirements of the US Clean Air Act. In particular, the evidence pointed to a strong association between high levels of ground-level ozone concentrations and both morbidity and reduced lung function. This points to a need for enhanced near-term forecasting methods.
One general method for making such forecasts relies on the fusion of measured hourly ozone concentration values and simulated values obtained from chemical transport models (CTMs) such as CMAQ. Two papers [2,3] develop methods for doing this, albeit by different approaches unrelated to those in this paper. In future work, all these methods should be compared in domains where CTM data are available. Even without that hypothetical comparison, the methods in this paper have the advantage of being usable in domains where CTM output is not available. These Bayes-empirical Bayes methods offer the flexibility needed to characterize environmental space-time processes, while fully representing the various kinds of uncertainty involved in their construction. Both have been developed for and successfully used to model hourly ozone air pollution concentrations in other contexts.

Advances in Meteorology
The first method in this paper, denoted by M1, adapts a multivariate method developed for modeling space-time fields [4-7]. A univariate version of that method for hourly ozone concentrations is the subject of a companion paper [8], which compares it with a state-space model, but for spatial prediction rather than temporal forecasting, and in a different geographical region. The goal there is mapping the ozone field for another requirement of the US Clean Air Act of 1970, namely, the protection of human welfare, including such things as crop yields. M1 needs some new theory, which is presented in the sequel along with a demonstration of how it may be applied.
The second method, denoted by M2, uses a method originally developed for modeling hourly ozone concentrations in Mexico City [9]. That method, and the models on which it is based along with the computational algorithms used to implement it, seem to have been quite successful in that application. Moreover, even though it was developed for use in Mexico City for one specific week, a strong prima facie case can be made for its applicability in other weeks and jurisdictions, and that is why we assess the performance of that method here. Much recent work has been done in modeling random space-time pollution fields [10-12]. As one of the photo-oxidants, ozone is produced in the same way in all temporal and spatial domains, by a complex interaction of oxides of nitrogen (NOx) with volatile organic compounds (VOCs) in the presence of heat and sunlight. In most modern urban environments, such as Chicago and Mexico City, vehicle emissions are a prime source of the NOx and VOCs [1]. Furthermore, the prima facie case is supported by an exploratory data analysis that shows very similar daily cycles in our domain to those observed in the Mexico City application. This led us to match, to the greatest possible extent [8], the method used there in our adaptation of it, and no originality is claimed for it.
The main finding in this paper is that in the case study M1 outperforms M2 in a number of ways. First is its computational efficiency. Running the M2 approach often took about a week to get results, while M1 took only about ten to twelve hours on the same Linux server. Thus, M2 would not be suitable for making 24-hour-ahead forecasts, while M1, running on a faster processor, could be used for that purpose. We also found that M1 produced more accurate forecasts than M2, as measured by their root mean squared prediction errors. Moreover, M1's predictive error bands proved to be better calibrated. In other space-time domains, a similar assessment would have to be made to select a forecasting procedure, and M2 may be superior in some. Overall, we believe that the value of this paper lies in the guidance it gives on how that assessment could be made, and in the source of software that can be used for it.
The layout of this paper will now be described. Section 2 presents both approaches to forecasting hourly ozone concentrations. Section 2.1 introduces M1. Its forecasting (posterior) distribution is developed, and the corresponding pointwise predictive intervals at each gauged site are constructed. Section 2.1 also extends the results to forecasting r-step-ahead responses for any r ∈ N. Section 2.2 reviews M2. Section 3 implements these two methods in a case study involving Chicago and data from the US EPA's AQS air quality database. Section 3.1 presents and compares the results for one-day-ahead predictions. Finally, Section 4 summarizes these results and gives our conclusions.

Methodology
This section presents our two temporal forecasting approaches. Although both are general and can be used in other contexts, we develop them as methods for forecasting an hourly response tomorrow given data up to today. The measured value of the response is available and serves as a "test value" at each of g monitoring sites for a comparative assessment of the two forecasting methods.

Method M1
2.1.1. Basic Theory. The general approach [7], on which M1 is based, assumes a multivariate space-time field of p-dimensional random response vectors indexed by their locations, a finite number of sites in a specified geographical domain. These sites need not lie on a lattice.
The general theory involves a geographical region that includes g monitored (i.e., gauged) sites and u unmonitored (i.e., ungauged) sites. Although this paper requires only the part of that theory for the monitored sites, we state it in general to link this paper to its companion publication [8]. The p-dimensional response vectors at these sites for times t = 1, . . ., n are combined in the response matrix Y : n × (u + g)p. This matrix can be partitioned as (Y[u], Y[g]), where Y[u] : n × up contains the response vectors at the ungauged sites and Y[g] : n × gp, those at the gauged sites. The theory posits ([7], pp. 145-146) the hierarchical model

Y | β, Σ ~ N(Zβ, I_n ⊗ Σ),
β | Σ ~ N(β_0, F^{-1} ⊗ Σ),
Σ ~ GIW(Ψ, δ),

where Z : n × l denotes the non-site-specific covariates (their site-specific counterparts could be included in the response vectors); β : l × (u + g)p denotes the matrix of site-specific random covariate coefficients; and Σ : (u + g)p × (u + g)p denotes the covariance matrix among the responses at any given time point. The hyperparameters for this hierarchical Bayes model are β_0, F, δ, and Ψ, with F representing the variance component of β between its l rows. Here, GIW denotes the Generalized Inverted Wishart distribution, a conjugate prior for matrix-normal distributions. Separability of the hypercovariance matrices for both the response and random coefficient matrices is assumed for computational simplicity. Invoking Box's celebrated dictum that all models are wrong, we defend this assumption by the good performance of the resulting method as seen in the empirical assessment provided in the sequel.
Validating the modeling assumptions above will usually require a transformation of the random responses, a square-root transformation in this paper's case study. Then systematic components, such as the temporal trend over the whole region, will need to be removed. These can be accurately inferred from the typically large dataset formed by aggregating the data over all sites and times. Finally, something needs to be done to eliminate autocorrelation in the temporal sequence of responses. For example, the temporal series can often be filtered using a regional time series model without site-specific parameters. However, our relative abundance of data leads us here to a different approach, described in detail in the next subsection, that splits the transformed, detrended residuals into separate, disjoint subsequences of responses, which are separated widely enough in time as to be uncorrelated and hence independent under our Gaussian sampling model. In our experience, the residuals obtained after these steps have been taken usually satisfy the model assumptions above, and these comprise the response vectors in M1. The model above can then be applied to each subsequence and type II maximum likelihood estimators found for the hyperparameters. These can subsequently be averaged across the subsequences to get an overall estimate. While this approach is less efficient than a full-data approach under a correctly specified model, it avoids the risk of model misspecification in complex situations like that of the case study. The forecasting model developed below can then be applied, and the preliminary steps above reversed, to get the forecasts back on the scale of the raw data.
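As a concrete illustration, the transform-and-detrend steps can be sketched as follows; the synthetic data and the pooled per-hour trend estimate are our own illustrative assumptions, not the paper's actual fitting procedure:

```python
import numpy as np

# Hypothetical hourly ozone (ppb) at g sites over several days, laid out
# as an (n_hours, g) array; the values here are synthetic.
rng = np.random.default_rng(0)
g, n_days = 3, 10
hours = np.arange(24 * n_days)
diurnal = 30 + 15 * np.sin(2 * np.pi * hours / 24)        # shared daily cycle
ozone = diurnal[:, None] + rng.gamma(2.0, 2.0, (24 * n_days, g))

# Step 1: square-root transform toward Gaussianity.
resp = np.sqrt(ozone)

# Step 2: remove the regional temporal trend, estimated by pooling all
# sites and days for each hour of the day.
hour_of_day = hours % 24
regional_trend = np.array([resp[hour_of_day == h].mean() for h in range(24)])
residuals = resp - regional_trend[hour_of_day][:, None]
```

The residuals, which average to zero at each hour of the day by construction, would then be grouped into subsequences for hyperparameter estimation.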
To elaborate on our distributional assumptions, the GIW prior for Σ, partitioned into blocks Σ[u,u], Σ[u,g], Σ[g,u], and Σ[g,g], can be defined through the Bartlett decomposition, in which Σ[g,g] follows an inverted Wishart distribution with hyperparameters (Λ1, δ1), and the conditional components Γ[u] = Σ[u|g] = Σ[u,u] − Σ[u,g](Σ[g,g])^{-1}Σ[g,u] and τ[u] = (Σ[g,g])^{-1}Σ[g,u] are assigned, respectively, an inverted Wishart prior and a matrix-normal prior; the matrix τ00 is the hypermean of τ[u], and the matrix H0 gives the covariance between the rows of τ[u]. Denote the set of hyperparameters by H. Given the observations at the gauged sites (i.e., Y[g]), the predictive distribution of Y[u] is completely determined; it is the distribution required in our companion paper on spatial prediction [8]. Furthermore, given partially observed responses at the gauged sites, the predictive distribution of the missing responses at those sites can be derived after the hyperparameters have been estimated by an empirical Bayes approach ([7], pp. 300-303), and that is how the theory is used in this paper.
Deriving that forecasting model requires a general result concerning a sequence of n response vectors of which n − w are observed and w lie in the future. Then, with the superscripts "m" and "o" standing for "missing" and "observed," respectively, we may further partition the random response matrix already partitioned above as (Y[g m], Y[g o]), with the predictive distribution of Y[g m] given Y[g o] and the hyperparameters H stated in the following result ([7], pp. 160-161).
Theorem 1. Conditional on the hyperparameters H, the marginal posterior distribution of the missing responses given the observed ones is a matrix t distribution, whose location, scale, and degrees-of-freedom parameters are determined by H and the observed responses; here Ψgg is assumed to have the separable form Λ1 ⊗ Ω, with Λ1 and Ω denoting the spatial and between-hour correlations, respectively.

Remark 2. This theorem gives the joint predictive distribution for w future response vectors given the (n − w) observed responses. As coordinates of those future vectors become observed, this distribution yields in turn the conditional predictive distribution for the unobserved coordinate responses. That is how the theorem is applied in the next section.

One-Day-Ahead Forecasts.
For expository simplicity, we describe the general method M1 in terms of the goal of forecasting ozone concentrations at a specific hour on Day 121 at each of g = 14 monitoring sites, based on data collected at those sites during the preceding days, that being the objective in the case study described in the next section. However, we emphasize that M1 is generally applicable to other days and geographical domains with appropriate modifications. In fact, the approach can be adapted for use with other environmental processes. With that caveat, we now describe M1 in this subsection. Note that in our description, hour 1 refers to the period between midnight and 1 AM, and so on.
To begin, we follow the standard practice of transforming the hourly data by taking their square roots to achieve a more nearly Gaussian data distribution [13]. Hereafter these transformed values will be our "responses." M1 then partitions the sequence of responses into blocs of p = 24 hours each. For k < 24 (hereafter referred to as Case 2), the last bloc spans the 24-hour period from hour k + 1 on Day 120 to hour k on Day 121, and the first runs from hour k + 1 of Day 1 to hour k of Day 2. However, k = 24 (Case 1) is different, and there the blocs correspond to Days 1 to 121. In either case, each bloc yields a 24-dimensional multivariate response vector with an unspecified covariance structure. Reformulating our model in this way gives us the advantage of avoiding the challenging task of specifying the complex short-term autocovariance structure, which varies over the day. But it does require the multivariate theory described in the previous subsection.
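The bloc construction can be sketched in code; the alignment convention for Case 2 (dropping hours 1 : k of Day 1 so that every bloc ends at hour k of a day) is our reading of the construction, shown here on toy data:

```python
import numpy as np

def make_blocs(series, k, p=24):
    """Partition an hourly series into p-dimensional blocs.

    For k == p (Case 1) blocs coincide with calendar days; for k < p
    (Case 2) each bloc runs from hour k+1 of one day to hour k of the
    next, so the forecast hour k falls at the end of the final bloc.
    """
    x = np.asarray(series, dtype=float)
    if k < p:                      # Case 2: shift so blocs end at hour k
        x = x[k:]                  # drop hours 1..k of Day 1
    n_blocs = len(x) // p
    return x[:n_blocs * p].reshape(n_blocs, p)

hourly = np.arange(1, 24 * 5 + 1)        # 5 days of toy hourly responses
day_blocs = make_blocs(hourly, k=24)     # Case 1: one bloc per day
shifted = make_blocs(hourly, k=6)        # Case 2: blocs end at hour 6
```

Each row of the resulting arrays is one 24-dimensional response vector; in Case 2 the first bloc begins at hour k + 1 of Day 1, as described above.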
The next step in developing the forecast model would generally require the removal of any systematic, regional components in the series. In particular, it is necessary to learn which covariates/predictors to include in the design matrix Z so that, in application of the method, site-specific coefficients can be fitted, allowing deviations from the regional baselines established for them at the preliminary stage. Since the EnviRo.stat software (see [14]) referred to in Section 1 automatically estimates those baseline coefficients as prior hypermeans, using a maximum likelihood-based approach, the key preliminary step here is actually the identification of Z. Note also that in our application the covariates need to be adapted in form to conform to the temporal span of the response vectors (the "blocs" introduced above), once hour k has been specified.
Finally, we need to address the autocovariance structure. While the responses are primarily an AR(2) time series after removal of their diurnal pattern [13], as is done in the case study, we need to allow for a lag 1 autocorrelation in the series of response vectors to capture small but potentially significant longer-term dependence [15]. To eliminate it, we use the approach described in the previous subsection and split the observed response vectors into two groups: those with odd numbers and those with even numbers. For each of the resulting subseries, the model assumptions will hold approximately, and the hyperparameters can be estimated as described in the previous subsection. More detail is given below, where we turn to a more precise description of M1.
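A minimal sketch of the odd/even split and the averaging of subsequence estimates follows; a simple per-hour mean stands in for the type II maximum likelihood estimates, and the bloc matrix is synthetic:

```python
import numpy as np

# Toy bloc matrix: n blocs of p = 24 responses (synthetic numbers).
rng = np.random.default_rng(1)
blocs = rng.normal(size=(10, 24))

# Split the bloc sequence into odd- and even-numbered subsequences,
# far enough apart in time to be treated as independent.
odd_blocs = blocs[0::2]      # blocs 1, 3, 5, ...
even_blocs = blocs[1::2]     # blocs 2, 4, 6, ...

# Estimate a hyperparameter on each subsequence separately (here just a
# per-hour mean as a stand-in for a type II MLE), then average.
est_odd = odd_blocs.mean(axis=0)
est_even = even_blocs.mean(axis=0)
est_overall = 0.5 * (est_odd + est_even)
```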
In Case 2, where k < 24, the 24-hour bloc containing hour k includes measured responses for hours k + 1, . . ., 24 on Day 120. Thus we may first apply Theorem 1 with w = 1, designating the last bloc in our construction above, to get a joint predictive distribution for its associated "future" response vector. Then we compute the marginal conditional predictive distribution for hour k implied by that joint distribution, given these data from Day 120, to finish the construction.
However, Case 1 is more difficult and is the one we treat first. Among the options considered by the authors for this case was changing the bloc length to, say, p = 25 so that the last bloc could reach back to hour k on Day 120, which contains a measured response. That approach was discarded, since sequences of 7 blocs would then no longer synchronize with weeks. The latter are important structural features of the process that reflect the changing daily traffic patterns and give us the single covariate we have in the case study. Instead we accomplish the same thing another way, one that does preserve the week: we apply Theorem 1 with w = 2 to get the initial joint predictive distribution. The required marginal conditional distribution is then obtained by conditioning on all the data from Day 120.
Note that in both cases, the future response or responses in Theorem 1 depend on the bloc immediately preceding them. To avoid the need to model that dependence, we simply eliminate that data vector. We did explore the result of keeping it in and ignoring that dependence, and found virtually no difference in the Case Study. We now turn to a more precise description of M1.
For that we need some notation. Let Y[g_j m]_{t,i} denote the ith coordinate of the unobserved response vector for bloc t and gauged site j, while Y[g_j o]_{t',i} denotes the observed response for bloc t', with t, t' = 1, . . ., n, i = 1, . . ., p, and j = 1, . . ., g. For bloc t, sites g_1, . . ., g_14, and hours between i and j inclusive, let Y[g_l]_{t,i:j}, l = 1, . . ., 14, denote the random response. Throughout, p = 24.
The two cases referred to above are as follows.
Case 1 (predict the response for the last hour, k = 24, of Day 121).
Construction of the predictive distribution of the response at hour k = 24 begins with the corresponding matrix-variate observed responses Y[g o], made by extracting the corresponding responses from Day 1 to Day 118 according to the above-constructed U_t or V_t. Now suppose hypothetically that Day 120's responses have not been observed, and that the corresponding matrix-variate missing responses Y[g m] are constructed by stacking Y[g m, 1:g]_{121,1:p} : 1 × gp, the unobserved future response vector for Day 121, on Y[g o, 1:g]_{120,1:p} : 1 × gp, the observed response vector of Day 120. Hence we have w = 2 and n = 121 in Theorem 1, so the predictive posterior distribution of Y[g m] can be obtained by applying (6). To obtain that of the Day 121 component alone, since Day 120's responses are in reality observed, one can decompose μ_(u|g) and Φ_(u|g) into blocks μ_i : 1 × gp and B_{ij}, i, j = 1, 2, and condition on Day 120's observations; the predictive posterior distribution of Y[g m, 1:g]_{121,1:p} then follows. To get the joint predictive distribution of responses at these gauged sites for Day 121's last hour, let e^l_k : k × 1 be such that e_kl = 1 and e_kj = 0 for j ≠ l, j = 1, . . ., k; extracting the corresponding coordinates completes the description of M1 in this case.
Case 2 (predict the response for hour k < 24 on Day 121). For bloc t, sites g_1, . . ., g_14, and hours between i and j inclusive, let Y[g_l]_{t,i:j}, l = 1, . . ., 14, denote the random response. Now let Y[g m] consist of k unobserved responses and p − k observed ones at each of the gauged sites. What we need to do now at each site is to use the hours leading up to the first hour without data on Day 121, that is, hour k, for which the forecast is needed. To do this, we can use any data from hours 1 : k on Day 121 that may be available, supplemented by the data from the p − k preceding hours on Day 120, hours (k + 1) : p (21 : 24 in the illustrative case k = 20). Thus we create a p = 24 dimensional response vector ending at hour k of Day 121. The same routine as in Case 1, using odd and even blocs, is then applied to the remaining data to obtain parameter estimates, albeit with these shifted 24-dimensional hourly response vectors.
For hour k < 24, Y[g o] holds the observed responses from Day 1 to the bloc ending on Day 119. To predict the responses one day ahead at the gauged sites in this field, we have w = 1 and n = 120 in Theorem 1, with Y[g m] rearranged so that all missing responses come at the beginning of the response vector. Specifically, let e^1_{i,j} : gp × 1 be 1 at the (pj − i + 1)th element and 0 otherwise, for i = 1, . . ., k and j = 1, . . ., g; and let e^2_{i,j} : gp × 1 be 1 at the (p(j − 1) + i)th element and 0 otherwise, for i = 1, . . ., p − k and j = 1, . . ., g. Let E_2 : gp × gp collect these vectors. Applying Theorem 1 then yields the matrix-t predictive distribution of the rearranged responses. To obtain the predictive distribution for the unobserved responses, we first decompose Y[g m] E_2, μ_{u|g}, and E_2' Ψ_(u|g) E_2 into blocks corresponding to the unobserved and observed coordinates. Applying standard theory for the multivariate t distribution then yields the predictive distribution of the unobserved responses T_1c, given the data T_2c, as a t distribution. This completes the description of M1 in this case.
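The standard conditional-distribution result for a partitioned multivariate t, invoked here to pass from the joint distribution of (T_1c, T_2c) to that of T_1c given T_2c, can be sketched as follows (generic notation, not the paper's hyperparameters):

```latex
% If T = (T_1, T_2) \sim t_\nu(\mu, \Sigma) with \dim(T_2) = p_2, then
T_1 \mid T_2 = t_2 \;\sim\;
t_{\nu + p_2}\!\left(
  \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(t_2 - \mu_2),\;
  \frac{\nu + d_2}{\nu + p_2}
  \left(\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right)
\right),
\quad
d_2 = (t_2 - \mu_2)^{\top}\Sigma_{22}^{-1}(t_2 - \mu_2).
```

The conditional location is the usual Gaussian-style regression adjustment, while the scale is the Schur complement inflated or deflated by the observed Mahalanobis distance d_2, with the degrees of freedom increased by the dimension of the conditioning block.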

Multi-Day-Ahead Forecasts
This subsection generalizes M1 to a method that provides an r-step-ahead forecast (r ∈ N). Let N be the total number of days of observed responses. As before, we consider the multivariate setting, with p = 24 the total number of response coordinates and g the total number of gauged sites. We also generalize the forecasting problem in two ways, namely, to forecasting the response at (Case 1) the last hour of the (N + r)th day and (Case 2) hour k < 24 on the (N + r)th day.
Case 1 (predict the response for the last hour on day N + r). Here the odd bloc responses are U(1), . . ., with the even blocs defined analogously, constructed as in Section 2.1.

Remark 3. Notice that the total number of observations in the data submatrices can differ according to whether N is odd or even. Denote by t_N the total number of observed responses; then t_N = t_O for the odd-day responses and t_N = t_E for the even-day responses.
As in Section 2.1, we obtain the estimates of the hyperparameters by averaging the two sets of estimates given the odd or even bloc responses, respectively. Given these final estimates, we then obtain the predictive posterior distributions given all hourly observations up to Day N, in Theorem A.1. They are also multivariate t-distributions. The proof and details can be seen in Appendix A.1.
Case 2 (predict the response for hour k < 24 on day N + r).
Here the odd bloc responses are Y_{2t,k+1}, . . ., Y_{2t,p}, Y_{2t+1,1}, . . ., Y_{2t+1,k}, with the even blocs defined analogously. We do the same thing here to obtain the "final" estimates of the hyperparameters. Thus we are able to get the predictive posterior distribution given all observations up to bloc N and the hyperparameter estimates.
Remark 4. We again let t_N be the total number of observed response variables. So t_N is t_O for the odd-day-response blocs and t_E for the even-day-response blocs.
Given the final estimates in Section 2.1, the predictive posterior distributions given all hourly observations up to N days are also multivariate t-distributions. The proof and details can be seen in Appendix A.2.
Remark 5. From Theorem A.2, the predictive distribution for the unobserved response variables from day N + 1 to N + r is the product of a sequence of matrix-t and t distributions. This implies that no analytic form can be found for the response variable at the (k − 1)th (for k = 2, . . ., p) hour of the (N + r)th day at gauged site j (j = 1, . . ., g).

Method M2
An alternative approach to M1, through dynamic linear modeling, can also be used for forecasting and would seem an obvious choice, the DLM being a form of state-space time series model. Let Y_t be the response vector across all sites at hour t. As in the previous subsections, the responses are square-root-transformed hourly ozone concentrations. Exploratory analysis of these transformed data found 24- and 12-hour diurnal cycles, pointing to the approach of Huerta et al. [9] for ozone, which is based on the same patterns.
Thus, the measurement and state equations of the DLM are those of Huerta et al. [9], with initial information specified by the hyperparameters m_0, σ²_0, and C_0, also identical to those in Huerta et al. [9]. One can obtain the posterior distribution of the state parameters at the last known time point n, that is, x_n | y_1:n, θ ∼ N(m_n, σ²C_n), using the Kalman filter, a smoothing method, and the Metropolis-within-Gibbs sampling algorithm [8,9,16-18]. We omit details on updating and forecasting the state parameters given the model parameters and the observations up to the current time point.
Given the distribution of the state parameters at the last time point n, the observed responses up to time n, y_1:n, and the model parameters θ = {λ, σ², a_1, a_2}, the r-step-ahead predictive distribution follows, for r ∈ N, from standard DLM theory. Note that F_{t+r}, m_n, and C_n can be obtained by application of a standard method [8,9,17]. Here n = 2880 and r = 1, . . ., 24 for the one-day-ahead prediction in the Case Study. For any fixed r, the predictive response y_{n+r} can also be obtained by the MCMC (Markov chain Monte Carlo) method. More specifically, at iteration j, suppose we have updated the vector of model parameters θ(j) using the FFBS (forward-filtering-backward-sampling) algorithm [8,9,16,18]. Then the predictive response at iteration j, y_{n+r}(j), can be drawn from (15). Consequently, the predictive responses are obtained as the sample means of {y_{n+r}(j) : j = 1, . . ., J} (J = 500, where J denotes the total number of iterations after the burn-in period; r = 1, . . ., 24). The empirical predictive intervals at the 95% nominal level can be obtained as the corresponding sample quantiles.
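A minimal sketch of this forecasting recursion follows, using a univariate local-level DLM with known variances as a stand-in for the full harmonic DLM and its Metropolis-within-Gibbs/FFBS machinery; the model and numbers are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Local-level DLM: y_t = x_t + v_t, x_t = x_{t-1} + w_t, variances known.
sigma2_v, sigma2_w = 1.0, 0.1
y = np.cumsum(rng.normal(0, np.sqrt(sigma2_w), 200)) + rng.normal(0, 1.0, 200)

# Kalman filter for the posterior (m_n, C_n) of the last state.
m, C = 0.0, 1e6
for obs in y:
    R = C + sigma2_w                 # prior variance of x_t given past
    K = R / (R + sigma2_v)           # Kalman gain
    m, C = m + K * (obs - m), (1 - K) * R

# r-step-ahead predictive draws and an empirical 95% interval,
# mirroring the sample-mean / sample-quantile summaries in the text.
r, J = 24, 500
draws = np.empty(J)
for j in range(J):
    x = m + np.sqrt(C) * rng.normal()
    for _ in range(r):               # propagate the state r steps ahead
        x += np.sqrt(sigma2_w) * rng.normal()
    draws[j] = x + np.sqrt(sigma2_v) * rng.normal()
forecast = draws.mean()
lo, hi = np.quantile(draws, [0.025, 0.975])
```

In the full method the inner draw of x would come from the FFBS output and θ would be resampled at each iteration; the summary step (mean and quantiles of the J draws) is the same.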

Case Study
This section implements the forecasting methods of the last section for one Chicago summer (May 1 to August 31), using data for that urban area taken from the EPA's AQS database (2000). The extracted data come from fourteen irregularly distributed monitoring stations measuring hourly ozone concentrations in parts per billion (ppb), which, to assure the validity of our Gaussian model assumptions, are square-root-transformed as noted in the last section. Each site has few missing values during the overall time span; recall that the EPA's 1997 standard set 80 parts per billion as the limit for the eight-hour ground-level ozone concentration.
To assess the model's performance for temporal forecasting, 14 sites are selected as "gauged" sites (i.e., g = 14), and their observed responses on Day 121 are set aside as test values. Figure 1 shows the geographical locations of these fourteen gauged sites.
To explore these data further, weekday and hourly effects were computed for each site by averaging the transformed hourly values over each of the seven weekdays across the whole summer. We found these effects to be very similar from one gauged site to the next. Thus, since the "bloc" is the unit of time t, our approach puts the appropriate zero-one elements into Z to mark the progression of blocs as t progressed. Baseline hourly effects are represented in the hypermean function and are automatically fitted by the software, EnviroStat.1.0.1. This represents the overall diurnal pattern, while allowing site-specific deviations within the model.
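The weekday-by-hour averaging can be sketched as follows; the data, site, and summer length here are synthetic illustrative assumptions:

```python
import numpy as np

# Sketch: weekday-by-hour effects from square-root-transformed hourly
# values at one site, averaged over the whole summer (synthetic data).
rng = np.random.default_rng(3)
n_days = 7 * 17                                  # roughly a summer of full weeks
vals = rng.normal(5.0, 1.0, (n_days, 24))        # sqrt-ozone, day x hour

weekday = np.arange(n_days) % 7                  # 0..6 weekday label
effects = np.array([vals[weekday == d].mean(axis=0) for d in range(7)])
# effects[d, h] = average transformed response for weekday d, hour h
```

Comparing the resulting 7 × 24 tables across sites is what revealed the near-identical weekday and diurnal patterns reported above.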

Results and Comparisons.
The two methods considered in this paper were applied at all fourteen gauged sites (GSs for short) to predict the twenty-four left-out (test) square-root-transformed hourly observations on Day 121. For all sites, plots showing the observations during the six days leading up to test Day 121, as well as the twenty-four forecasts by both methods for that day, may be seen in Dou et al. [19]. These plots also include the 95% pointwise predictive intervals for that day for M1 and M2. For brevity, we include only figures that present some noteworthy features of these predictors.
To begin, Figure 2 for GS 1 shows a flat ozone field at this location on Day 119 and a strongly varying one with two peaks on Day 120. M1 does better than M2 in following not only the overall trend but also in tracking the turns quite well. All twenty-four of the test values lie within the 95% predictive credibility band, although that for hour 12 overlaps the upper boundary. M2 follows ozone's peaks fairly well, but it forecasts valleys that do not turn up; the strong structure provided by the sines and cosines reduces this model's flexibility and capacity to track the series in this case. Those harmonics point to two twelve-hour cycles despite the general lack of the same in the observed series during the preceding days.
In contrast to the case of spatial prediction [8], where generally four peaks are seen in the sinusoidal curves bounding the DLM's 95% predictive credibility bands, here there are just two. In fact, four peaks would be expected, since the mean model's components of variation have sines and cosines that are squared when they enter into the posterior variance. Thus their valleys turn into peaks, one every six hours, since both twelve- and twenty-four-hour cycles are present in the mean. So why only two?
The answer seems to lie in the fact that the random coefficients of the twelve-hour components of variance are less uncertain when forecasting at a monitoring site than they are when predicting spatially at other sites, which may be a substantial distance from the monitoring sites. Consequently these components, although small, have larger posterior variances in the spatial predictor than in the forecaster at one of the monitoring sites. In other words, much more information is available in the data leading up to the last day for forecasting the test values at a monitored site than is available at a remote, unmonitored site.
Incidentally, the lower bounds for the M2 forecasts can go below zero, so in practice they would need to be truncated. Moreover, few of the test values lie within the 95% credibility band.
Huerta et al. [9] give similar plots for three stations in their case study for Mexico City, using the method that led to our method M2. These plots differ from ours in that their plotted ordinates represent ozone while ours represent square-root-transformed ozone. We elected to keep the latter since that is the scale on which our analysis was done and hence the one that provides maximal diagnostic benefit. Furthermore, our square-root scale does not risk exaggerating the observed differences between the two methods being considered. At the same time, we recognize the practical importance of publishing forecasts on the ozone rather than the square-root-ozone scale, and hence Table 1 presents comparisons on the former scale.
Huerta et al. [9] focus on just one week, so their forecasts are for their last Day 7, based on the six preceding days (whose observed hourly concentrations are plotted). In contrast, since our forecasts are based on the entire summer, our last day is Day 121, and we plot the observed values for this day (as well as the preceding seven). Although the daily amplitudes of the sinusoids in Huerta et al. [9] vary from day to day, the periodicity is very consistent over those days, with two fairly distinct peaks each day. In contrast, the ozone patterns over the seven days preceding our forecast Day 121 differ markedly from one day to another. Thus, while the data series for Day 120 shows two very distinct peaks, that for Day 116 is nearly flat, and that for Day 118 (essentially) shows only one peak, after a monotone increasing trend rising to the end of that day. Finally, Day 7 in their plot shows good agreement, at all three of their stations, between their "predictive median" and the test data values. In contrast, our averages of 500 MCMC-generated predicted responses for M2 disagree markedly with many of the hourly test values on Day 121. We conjecture that these discrepancies occur because the data series on Day 121 tends to be much flatter than on all the preceding days except Day 116. (They may also derive from the additional (temperature) data Huerta et al. [9] had to enhance their forecasts, which we did not have.)
We do not see in any of the plots in Huerta et al. [9] the four peaks seen in some of our 95% credibility bands, suggesting that the uncertainty in the coefficients of the twelve-hour cyclical components is quite well resolved by the six days of data preceding their test day. The credibility bands for their version of M2 contain all the test values for all three of their sites, as do M2's bands in Figure 2. (However, that is not the case for all fourteen of our stations, as noted below. Moreover, their retrospective 95% credibility bands for all three sites are too narrow and fail to contain a large fraction of the observed values on a number of their days, e.g., on Sep 10 and Sep 12 at the Xalostoc site.)
A summary very similar to that above for GS 1 also applies to the omitted figures for GSs 2-4 and 7-13, although for GS 11 the M1 forecasts diverge from the test values over the final four hours, and for GS 13 M1 underestimates those test values in the middle of the day.
Figure 3 shows that GS 5 is different. Although M2's forecast series has two peaks of moderate height, the bounds of its credibility bands have four of them, consistent with two twelve-hour cycles. Neither forecast series tracks the series of test values well.
A similar summary applies to GS 6, with the exception that the M1 forecasts track the test value series quite well, unlike M2, and to GS 14, where both methods underestimate the test values, M1 being closer overall than M2.

Advances in Meteorology
Figure 4: The width of the 95% pointwise predictive intervals of the one-day-ahead prediction at the fourteen gauged sites using M1.
Figure 4 plots the width of the 95% pointwise predictive credibility bands generated by the BSP at each of the twenty-four hours of Day 121. Starting from around 9 AM, these bands tend to widen and continue to do so until the last hour at 11 PM, reflecting the increasing uncertainty about the forecasts as fewer and fewer responses are observed as time goes on.
Figure 5 is a plot similar to that above, but for M2 instead of M1. These widths are close to one another, exhibiting a wiggly periodic behavior across all gauged sites, a characteristic previously observed in Dou et al. [8, 18]. Although these widths are very close to one another, M2 actually underestimates the predictive variances at the gauged sites, as seen in Figure 6, which shows the coverage probabilities of M1 and M2 at the 95% nominal level and also reveals a slightly overestimated predictive variance for M1.
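The calibration check behind such coverage comparisons is simple: the empirical coverage of a set of 95% pointwise intervals is the fraction of held-out observations falling inside their bands. A minimal sketch (function and variable names are ours, not from the paper's software):

```python
import numpy as np

def empirical_coverage(obs, lower, upper):
    """Fraction of held-out observations falling inside their predictive bands."""
    obs, lower, upper = (np.asarray(a, dtype=float) for a in (obs, lower, upper))
    return float(np.mean((obs >= lower) & (obs <= upper)))

# Toy data: the middle interval misses its observation, so coverage is 2/3.
obs = np.array([3.0, 9.0, 5.0])
lower = np.array([2.0, 3.5, 4.0])
upper = np.array([4.0, 4.5, 6.0])
print(round(empirical_coverage(obs, lower, upper), 3))  # -> 0.667
```

An empirical coverage well below the nominal 95% signals underestimated predictive variance, as reported here for M2.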
Table 1 presents the root-mean-square predictive error (RMSPE) of the predictive responses on the 121st day at each of the fourteen gauged sites using the two approaches. At GS j, the RMSPE over the twenty-four forecast hours can be computed as

RMSPE_j = sqrt( (1/24) * sum over h of (PRED_h^j - OBS_h^j)^2 ),

where PRED_h^j is the predictive response at hour h of Day 121 and OBS_h^j is the corresponding observed response at the same hour, day, and site. M2 has a larger RMSPE than M1 when aggregated over all the gauged sites, and M1 has the smaller RMSPE at most individual gauged sites.
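The RMSPE computation is direct; a minimal sketch for a single site on the square-root scale (the toy values below are illustrative, not Chicago data):

```python
import numpy as np

def rmspe(pred, obs):
    """Root-mean-square predictive error over the forecast hours at one site."""
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

pred = np.array([5.0, 6.0, 7.0])  # stand-in hourly forecasts
obs = np.array([5.0, 8.0, 7.0])   # stand-in hourly test values
print(round(rmspe(pred, obs), 3))  # -> 1.155
```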

Discussion and Conclusions
For forecasting ground-level hourly ozone concentrations in a Chicago summer, M1 seems better than M2. It seems more accurate, and its 95% predictive credibility intervals are better calibrated. However, in any practical application, M1 and M2 would need to be assessed in the same manner as in this paper before making a final selection. It should be noted that a new model, also based on the dynamic linear approach, has been proposed by Sahu et al. [11] for ozone modeling. It would be interesting to compare that new approach with the methods in this paper.
Methods M1 and M2 are two quite different approaches to modeling space-time processes, and comparing and contrasting them at a more fundamental level seems worthwhile. To begin, both are quasi-Bayesian models in that they rely on some preliminary data analyses. Thus the diurnal cycles are identified for the M2 mean function, while regional, non-site-specific weekday effects are found for M1. Both methods can then incorporate predictors or covariates in their parametric mean functions with random coefficients, as well as reflect diurnal patterns of variation. M1 proceeds in two steps. First, regional time-dependent covariates or predictors are identified for the construction of the design matrix Z, in this case day-of-the-week block effects. Second, it estimates hypermeans for this predictor's coefficients as well as for the multivariate block vector of responses, in this case the hourly effects. At the same time, it allows site-specific deviations from these baseline estimates through the random mean coefficients. The case study suggests that these random coefficients capture site-specific hourly effects quite well. In contrast, M2 builds regional features and daily variation into its mean response function through the incorporation of mean trends and periodic components before implementation. Thus, its prescribed mean is fairly structured, with Fourier components to describe daily 12- and 24-hour cycles, whereas M1 incorporates all general trends and diurnal patterns in the hypermean for its random coefficients and then allows site-specific deviations from this hypermean at all sites. The former is more flexible than the latter in allowing the coefficients to change over time, but the latter is more flexible than the former in allowing an arbitrary shape for the daily pattern of variation and allowing site-specific trends.
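The first of M1's two steps can be illustrated with an indicator-coded design matrix of day-of-the-week block effects. This is only a sketch of the general device; the sizes and the coding are chosen by us and are not taken from the paper's software:

```python
import numpy as np

n_days = 121                        # the summer study period
day_of_week = np.arange(n_days) % 7
Z = np.zeros((n_days, 7))
Z[np.arange(n_days), day_of_week] = 1.0   # one indicator column per weekday

# Each day's 24-hour block of responses shares its day's row of Z; site-specific
# deviations from the hypermean coefficients enter as random effects.
print(Z.shape, float(Z.sum(axis=1).min()), float(Z.sum(axis=1).max()))  # -> (121, 7) 1.0 1.0
```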
Both approaches put spatial covariance structures on their mean models as well as on the residuals. In contrast to M2, M1 does not restrict the spatial covariance structure to be stationary; the form of the spatial covariance matrix is completely unspecified at level one of the Bayesian hierarchy. This is not important for the Chicago analysis, where the spatial ozone field is quite flat, but we believe it would be an important difference between the models in, say, Los Angeles or Seattle, where M1 would be favored. M2 prescribes its temporal correlation structure through the structure of its mean function, notably a random walk model for its model coefficient vector. In contrast, M1's 24-hour block covariance matrix is unspecified at level one of the hierarchical model, leaving the data a big role in determining its form. However, this feature comes at the price of an assumption that the 24-hour autocovariance matrix is separable from the spatial covariance. Moreover, that covariance is assumed constant over time. Both of these assumptions are limitations of M1.
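M1's separability assumption says that the covariance of the stacked site-by-hour responses factors into a Kronecker product of a spatial covariance and a 24-hour block covariance. A sketch of what that factorization means (the matrices below are random stand-ins, not fitted quantities):

```python
import numpy as np

rng = np.random.default_rng(0)
g, p = 3, 24                                  # sites and hours per day
A = rng.standard_normal((g, g))
Lambda = A @ A.T                              # spatial covariance (g x g)
B = rng.standard_normal((p, p))
Omega = B @ B.T                               # 24-hour block covariance (p x p)
Sigma = np.kron(Lambda, Omega)                # separable space-time covariance
print(Sigma.shape)  # -> (72, 72)
```

Separability cuts the number of free covariance parameters from order (gp)^2 to order g^2 + p^2, which is what makes leaving both factors unspecified at level one computationally feasible.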
Both M1 and M2 can rely on day-to-day autocorrelation as well as within-day temporal correlation for forecasting next-day ozone levels. We believe responses will be somewhat autocorrelated from day to day, and that feature can be exploited to enhance forecasting performance. As formulated, M2 does borrow that additional strength, whereas M1 loses it in the way we have implemented its parent, the BSP, by dropping a day to avoid having to formulate a multivariate time series model for the vectors of daily block responses. However, this is not strictly necessary. The more general version of the BSP approach does allow for that correlation, and in principle we could have estimated the hyperparameters of that approach, suffered the consequences of possible misspecification, and increased the computational burden of implementing M1. Thus, M1 was formulated under the assumption of uncorrelated responses between days, unlike M2, which makes no such assumption, with the goal of ensuring timely 24-hour-ahead ozone forecasts.
M2 has a much more general parent in the dynamic linear model (DLM), and undoubtedly other implementations of the DLM could be made that retain its positive features while overcoming some of the limitations of M2 noted above. For example, a nonstationary spatial covariance could undoubtedly be used. As well, the random walk model, which has serious limitations, could be replaced by a more reasonable model such as an AR(1), albeit with an added parameter burden. That would in turn further restrict the number of monitoring sites the method could realistically handle in an urban area. As it stands, M1's computational efficiency enables it to handle a much larger number of sites than M2 in an urban area such as greater Los Angeles, which has 30 sites, well beyond the reach of M2.
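The contrast between the random walk and an AR(1) replacement can be seen by simulating both state evolutions: with |phi| < 1 the AR(1) state is mean-reverting, while the random walk's variance grows without bound. A small simulation (parameter values are ours, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi = 500, 0.8
shocks = rng.standard_normal(T)

rw = np.cumsum(shocks)                    # random walk: theta_t = theta_{t-1} + w_t
ar = np.empty(T)
ar[0] = shocks[0]
for t in range(1, T):
    ar[t] = phi * ar[t - 1] + shocks[t]   # AR(1): theta_t = phi * theta_{t-1} + w_t

# The stationary AR(1) path varies far less than the drifting random walk.
print(bool(np.var(ar) < np.var(rw)))  # -> True
```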
Although any ozone forecast for hourly concentrations 24 hours in advance cannot be much better than the baseline estimate, we have included Case 1 for completeness. Its Equation (9) is the basis of the forecast for that case. That equation actually gives a predictive distribution for all the hourly concentrations in Day 121 and could be used to forecast them all. However, the forecasts would not be very good compared with those given by the method in Case 2. The reason is that the latter exploits the strong AR(2) structure in any consecutive sequence of 24 hourly responses, unlike the former, which assumes the daily vectors of responses are conditionally independent, an approximation made mainly for computational expediency. Thus, for all other hours the forecaster in Case 2 should be used when the data in Day 120 are available. Note that within the Bayesian framework, the unconditional distributions of the 24-dimensional response vectors are not independent, a feature that Case 1 exploits.
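To illustrate why conditioning on Day 120 helps, an assumed AR(2) structure in consecutive hourly responses yields a simple one-hour-ahead point forecast from the last two observed hours. The coefficients below are illustrative only, not estimates from the Chicago data:

```python
def ar2_forecast(y, a1, a2):
    """One-step-ahead point forecast under y_t = a1*y_{t-1} + a2*y_{t-2} + e_t."""
    return a1 * y[-1] + a2 * y[-2]

recent = [4.0, 4.5, 5.0]  # last few hourly values on the square-root ozone scale
print(round(ar2_forecast(recent, 1.2, -0.3), 2))  # -> 4.65
```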
We have not considered the realistic case where only a limited number of hours of Day 120 data are available, because that case would be just a formal extension of methods M1 and M2.
Finally, we would emphasize that the results in Section 2.1 have been stated in general form, another way in which M1 and M2 go beyond the limited application in the Case Study. Moreover, our approach for turning the BSP into the temporal forecasting tool in M1 could well be used for any univariate time series. The approach avoids the need to capture autocorrelation at fine temporal scales, something that can be difficult to do, as in the ozone case, where the AR structure varies over the day.
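The device that turns the BSP into a temporal forecaster, and that could serve any univariate series, is to fold the hourly series into daily 24-dimensional block vectors, so that between-day structure is modeled multivariately instead of fine-scale within-day autocorrelation. A sketch (the array is a stand-in, not real data):

```python
import numpy as np

hourly = np.arange(121 * 24, dtype=float)   # stand-in for one summer of hourly data
daily_blocks = hourly.reshape(121, 24)      # one 24-vector per day
print(daily_blocks.shape)  # -> (121, 24)
```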
Overall, we have found that for forecasting Chicago's next-day ozone concentration levels, M1 is more practical and more accurate than M2. With its well-calibrated forecast intervals, it seems a promising methodology for practical application.
Theorem. Y_{N+r,p}^(j), the pth unobserved response on the (N + r)th day at Gauged Site j, is t-distributed given the observed data.

Proof. The result is straightforward by Theorem 1, with w = r + 1 and n = N + r; decompose mu(u|g) and delta(u|g) Phi(u|g) accordingly, where mu, Phi, Psi, and delta are given in Theorem A.1.

Theorem. e_{N+r,p}^(j), the unobserved response at the last hour of the (N + r)th day at Gauged Site j (j = 1, ..., g), also follows a univariate t-distribution.

Proof. (i) The result is straightforward by Theorem 1, with w = r and n = N + r - 1. (ii) Denote E_{2j} = (e_{(j-1)p+1}^{gp}, ..., e_{jp}^{gp}) : gp x p for j = 1, ..., g, and let E_3 = (e_p^p, ..., e_1^p) : p x p. Then decompose T_{r2j}, mu_{r2j}, and Psi_j accordingly (details can be found in [19]). The joint predictive distribution of (Y_{N+r,k}, ..., Y_{N+r,1}, Y_{N+r-1,1}, ...) has no analytic form.

Figure 1: Geographical locations of the monitors in the Chicago AQS database (2000), with latitude and longitude measured in degrees (G: gauged sites; UG: ungauged sites).

Figure 2: The observed square root of ozone concentrations (ppb) from Day 114 to Day 121, the predicted values using M1 and M2, and their 95% pointwise predictive intervals at GS 1.

Figure 3: The observed square root of ozone concentrations (ppb) from Day 114 to Day 121, the predicted values using M1 and M2, and the 95% pointwise predictive intervals using M1 and M2 at GS 5.

Figure 5: The width of the 95% pointwise predictive intervals (PIs) of the one-day-ahead prediction at the fourteen gauged sites using M2.

Figure 6: Coverage probabilities of the 95% pointwise predictive intervals for M1 and M2 at the fourteen gauged sites.

Table 1: The root-mean-square predictive error (RMSPE) of the one-day-ahead prediction at the fourteen gauged sites using M1 and M2. M1 dominates M2 in all but one case.