Applying Econometrics to the Carbon Dioxide “Control Knob”

This paper tests propositions underlying claims that observed global temperature change is mostly attributable to anthropogenic noncondensing greenhouse gases (GHGs): namely, that although water vapour is recognized as the dominant contributor to the overall greenhouse effect, its effect is merely a "feedback" from rising temperatures initially produced only by noncondensing GHGs, and not at all by variations in preexisting, naturally occurring atmospheric water vapour (i.e., [H2O]). This paper shows that "initial radiative forcing" is not exclusively attributable to forcings from noncondensing GHGs, both because atmospheric water vapour existed before there were any significant increases in GHG concentrations or temperatures and because there is no evidence that such increases have produced measurably higher [H2O]. The paper distinguishes between the forcing and feedback impacts of water vapour and contends that it is the primary forcing agent, accounting for much more than 50% of the total greenhouse effect. If so, controlling atmospheric carbon dioxide is unlikely to be the effective "control knob" claimed by Lacis et al. (2010).


Introduction: Previous Econometric Modelling
The main technique used in this paper is econometric least squares regression analysis, which enables computation of the relative strength of proposed alternative and independent causal factors in the determination of the dependent variable, temperature change. This procedure is not used in Solomon et al. [1], Schmidt et al. [2], or Lacis et al. [3]. Instead, they all rely on computer models of the climate system in which parameterized expressions for the main variables under consideration are first used to generate a simulation of the global climate; when the average of an ensemble of such models achieves some conformity with observations, the expressions for one or other of the noncondensing and condensing GHGs are removed in turn from the composite model, and the relative strength of individual GHGs is thereby estimated. However, the claims that only the noncondensing GHGs are the "forcing" agents, and that condensable water vapour has just a feedback role, are built into the models' alternative simulations, and so do not constitute confirmatory evidence validating the hypothesis that the only role of water vapour and clouds is to "amplify the initial [sic] warming provided by the noncondensing GHGs, and in the process, account for the bulk of the total terrestrial greenhouse effect" [3][4][5][6][7][8][9]. For that, in the absence of controlled physical experiments like those of Tyndall [10], which are not possible at the global or regional level with or without computer models, econometrics is essential. Dessler and Davis [11, page 1] state that the water vapour feedback "is the process whereby an initial warming of the planet, caused, for example, by an increase in long-lived greenhouse gases, leads to an increase in the humidity of the atmosphere. Because water vapour is itself a greenhouse gas, this increase in humidity causes additional warming. 
This is the most powerful feedback in the climate system, with the capacity by itself to double [sic] the warming from carbon dioxide alone." The question at issue is which comes first in "forcing" temperature changes: the warming from noncondensing GHGs, as Dessler and Davis [11] maintain, or atmospheric water vapour itself. Not many researchers have used time-domain econometric methods to analyze climate change. Stern and Kaufmann [12, page 412], Tol and de Vos [13], and Tol [14] are amongst the few that explicitly use econometric multivariate regression analysis of time series data to investigate the causes of climate change.1 None of these papers addresses the respective proportions of condensing and noncondensing GHGs in the overall greenhouse effect, and none mentions [H2O] as an independent variable with potential explanatory value for changes in temperature. Kaufmann et al. [15,16] have made further use of econometric methods, and comment on how "statistical models of the relationship between surface temperature and radiative forcing that are estimated from the observational temperature record often are viewed skeptically by climate modelers. One reason is uncertainty about what statistical models measure. Because statistical models do not represent physical linkages directly, it is difficult to assess the time scale associated with statistical estimates for the effect of a doubling in CO2 on surface temperature." This paper's database regressions (Section 4) use a wide range of "physical linkages," and the derived coefficients provide an ample resource for "assessing the time scale...for the effect of a doubling in CO2," which could be more than a hundred years if the analysis here is correct.2 Hegerl et al. [17], in AR4 [1], claimed that they would attempt to differentiate between climate changes "that result from anthropogenic and natural external forcings" (p.667). However, they do not report any regression results estimating the relative values of those forcings. 
They concede (p.668) that attribution studies seek to "assess whether the response to a key forcing, such as greenhouse gas increases, is distinguishable from that due to other forcings (Appendix 9A)" and add that "these questions are typically investigated using a multiple regression of observations onto several fingerprints [sic] representing climate responses to different forcings...see Section 9.2.2." However, there is no trace of the results of any such analysis anywhere in Hegerl et al. [17], least of all in either their referenced Section 9.2.2 or their Appendix 9A. The latter (pp.744-745) does offer a textbook account of multivariate regression but reports no results. Thus Hegerl et al. [17, page 666] provide no evidence for their assertion that "greenhouse gas forcing has very likely caused most of the global warming over the last 50 years," where "very likely" means "more than 90 percent probability" [1, page 121], and "most" must mean at least more than 50 percent when only two independent variables are considered.3 Had these authors done some regression analysis, they could have been more precise, but they never did, nor do they report any by others.
Instead, for both Hegerl and Allen [18] and the many coauthors of the 11 papers cited by Hegerl et al. [17], of which Hegerl was the lead author, "attribution" consists of model outputs with imposed parameters of radiative forcing arising from [CO2] and other greenhouse gases.4 In practice, none of these papers performs any regression analysis of both natural and nonnatural forcings, and all ignore primarily "natural external forcings" like that from [H2O]. Hegerl and Allen [18] deal only with greenhouse gases and sulphur dioxide, and the latter is even more exclusively of anthropogenic origin (mainly comprising emissions from combustion of hydrocarbon fuels) than the former. It is true that sulphate aerosols are usually assumed to have a cooling effect (see Charlson and Wigley [19]), but most sulphate aerosols (hereafter [SO2]) are of the same anthropogenic origin in time and place as emissions of CO2, although from time to time major volcanic eruptions increase both [CO2] and [SO2], with only local effects in the case of the latter. The other papers cited by Hegerl et al. [17] adopt much the same approach. For example, Hegerl et al. [20, page 632] consider only [CO2] and [SO2], with just this mention of solar irradiation at the top of the atmosphere (TOA): "We used only a greenhouse gas and a greenhouse gas-plus-aerosol signal pattern, since the solar response pattern could not be sufficiently separated from noise and the greenhouse gas pattern," a curious conclusion in the light of the title of that paper.
Stott et al. [21, page 2]5 use what they call "optimal detection technology" to conclude that "increases in temperature observed in the latter half of the century have been caused by increases in anthropogenic greenhouse gases offset by cooling from tropospheric sulphate aerosols rather than natural variability..." They claim that their "technology" is simply "just least squares regression in which we estimate the amplitude in observed data of prespecified [i.e., modelled] patterns of climate change in space and time" [21, page 1], yet at no point does their paper report adjusted R2 or any other standard regression statistics (e.g., sum of squares, F, coefficients, standard errors, t-statistics, or P-values) arising from their regressions. Nor does their paper report any of the standard tests (Durbin-Watson, Dickey-Fuller) for serial autocorrelation, and thereby for spurious correlation, least of all any of the normal tests for multicollinearity. Moreover, these authors' statement that their "control simulation, in which external climate forcings...are kept constant to simulate [sic] natural internal variability, has been run for over 1700 years [sic] (our emphasis)" is self-contradictory.6 This paper uses overlooked NOAA-ESRL site-specific databases of statistics on a wider range of both human and natural climatic variables than is analyzed in any of the "detection and attribution" papers noted above. We show that a comprehensive analysis relegates [CO2] to insignificance as a determinant of climate change and that atmospheric water vapour, arising almost exclusively from nonhuman sources, is by far the largest source of radiative forcing and temperature change. We thereby hope to achieve a better response to the Kaufmann et al. [15,16] challenge noted above, however incompletely. 
Section 2 provides an assessment of the appropriate specifications to be adopted for multivariate regression analysis of various models' climatic variables, while Section 3 outlines the paper's data sources. Section 4 reports its regression results, and the concluding Section 5 provides discussion of the implications of these results.

Methodology
Unlike mainstream climate science, which relies wholly on "general circulation models" (GCMs) [4, page 749], few of which successfully hindcast the observational record without retrospective fine-tuning of parameters, we seek to evaluate the following climate change models using only the observational record. That record is represented by measures of monthly or annual temperatures (T), such as minimum, maximum, and mean, at various locations between 1960 and 2006, as potentially determined by x1, the rising atmospheric concentration of greenhouse gases in general, represented by [CO2] (following [13, page 96]); by variations in x2, solar surface radiation (SSR, in Watt hours per square meter, Wh/m2); and by x3, atmospheric water vapour ([H2O], in cm):

T = a + b1x1 + b2x2 + b3x3 + u. (1)

The variable u is an error or "noise" term that represents any failure of the linear combination of x1, x2, and x3 to account fully for T. However, because of substantial evidence of spurious correlations when regressing T on the independent variables in (1), we also assess the similar hypothesis that year-on-year changes in temperature are determined by year-on-year changes in those independent variables (see (4) below). It is important to establish that the RHS variables in (1) are indeed independent of each other, so we run regressions of each of x1,...,xn on the others in turn; for example, if x1 represents atmospheric water vapour [H2O] and x2 is [CO2], then we need to know whether x1 is a function of T and x2 (cf. [23]): the view in IPCC AR4 [1] is that atmospheric water vapour is increasing because of the rises in temperature attributed to increasing [CO2] (see below for an assessment of that claim). There has been considerable debate since Granger and Newbold [24] on how best to ensure that OLS regression of the variables in (1) does not produce spurious correlations between the temperature and the independent variables x1, x2, and x3. 
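Equation (1) can be estimated by ordinary least squares. A minimal sketch in Python with NumPy follows, using synthetic data in place of the NOAA series; the variable names, ranges, and "true" coefficients are illustrative assumptions for the demonstration, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 47  # annual observations, 1960-2006

# Synthetic stand-ins for the regressors in (1) -- illustrative only
x1 = np.linspace(315.0, 380.0, n) + rng.normal(0, 1, n)   # [CO2], ppm
x2 = 1800 + rng.normal(0, 50, n)                           # SSR, Wh/m^2
x3 = 2.0 + rng.normal(0, 0.3, n)                           # [H2O], cm
u = rng.normal(0, 0.1, n)                                  # noise term

# Assumed "true" coefficients used to generate T for the demo
a, b1, b2, b3 = 10.0, 0.002, 0.001, 1.5
T = a + b1 * x1 + b2 * x2 + b3 * x3 + u

# OLS: stack a constant column and solve the least squares problem
X = np.column_stack([np.ones(n), x1, x2, x3])
beta, *_ = np.linalg.lstsq(X, T, rcond=None)
print(beta)  # estimates of (a, b1, b2, b3)
```

With well-conditioned, independent regressors, the fitted coefficients land close to the generating values; the later discussion of multicollinearity concerns exactly the cases where this breaks down.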
Various tests have been devised to determine whether the variables are "stationary" or have "unit roots." The presence of a unit root in a time series is considered to invalidate standard regression analysis because the series is no longer stationary, stationarity being a necessary condition for avoiding spurious correlation.7 For example, many time series in economics have a steady upward trend similar to that of the concentration of carbon dioxide in the atmosphere, [CO2]: numbers of television sets, mobile phones, computers, and their broadband connections all show steady upward trends worldwide, but none of these trends can plausibly imply either direct or inverse causal relationships with [CO2], despite the no doubt striking correlation coefficients between them and rising [CO2].
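The simplest such test, the (unaugmented) Dickey-Fuller test, regresses the change in a series on its own lagged level: a slope t-statistic near zero is consistent with a unit root, while a strongly negative one suggests stationarity. A hedged sketch with simulated series standing in for the climate data:

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic on rho in the regression dy_t = c + rho * y_{t-1} + e_t.
    A strongly negative t suggests rejecting a unit root (stationarity)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    s2 = e @ e / (len(dy) - 2)              # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)       # coefficient covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
eps = rng.normal(size=500)
random_walk = np.cumsum(eps)        # has a unit root, like a trending series
stationary = np.empty(500)          # AR(1) with phi = 0.5: no unit root
stationary[0] = 0.0
for t in range(1, 500):
    stationary[t] = 0.5 * stationary[t - 1] + eps[t]

print(dickey_fuller_t(random_walk))  # typically fails to reject a unit root
print(dickey_fuller_t(stationary))   # strongly negative: looks stationary
```

In practice the augmented version of the test (with lagged differences added) and proper Dickey-Fuller critical values would be used; this sketch only illustrates the mechanics.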
One widely applied solution to the problem of nonstationarity in time series is first to difference the series in question, subtracting the previous value of a variable from the present value, and so on for all values in the series.8 A simple regression model is merely a straight line fitted to a scatter plot of one variable versus another. So when there are debates, as in Kaufmann and Stern [25] and Kaufmann et al. [15,16], as to whether various statistics, such as local or global temperatures and other climate variables, have a unit root and thereby require cointegration, or are trend stationary, this means only that there is a problem in system identification. That means we have to determine whether we are looking at the output of a first-order low-pass filter (what statisticians call I(0)) or at the output of an integrator (as in I(1)). In the former, the variance is a constant (although the distribution may be around a linear trend); in the latter, the variance is itself expanding (or inflating).
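First differencing in this sense can be sketched in a few lines; the trending series below is a toy stand-in for a rising concentration series, with an illustrative slope:

```python
import numpy as np

rng = np.random.default_rng(2)

# A trending, nonstationary series: the level drifts steadily upward
level = 315.0 + 1.5 * np.arange(47) + rng.normal(0, 0.5, 47)

# First difference: x_t - x_{t-1}; one observation is lost
diff = np.diff(level)

print(len(level), len(diff))   # 47 46
print(round(diff.mean(), 2))   # scatters around the trend slope, ~1.5
```

The differenced series no longer trends: its mean is roughly the original slope and its variance is constant, which is what "stationarizing" achieves.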
In this paper's in situ (local) model and its data these considerations are irrelevant. All that matters is that the data on changes in [CO2] and [H2O] and any other causative variables should be linearly independent. A key requirement, spelt out in rule (5) in the list below, is that the noise term must have a constant variance over the distribution of samples; it must be I(0). We need to take first differences only if we have some reason to suppose that this noise is I(1), and because we do find evidence of multicollinearity when regressing the absolute values of the independent and dependent variables of interest, we focus here mostly on the results of regressions taking first differences of both the dependent and the independent variables. However, "stationarizing" in this manner is not necessarily one of the general rules for successful application of regression analysis (and for the calculation of meaningful statistics subsequently).
In general, the various rules or conditions that must be satisfied for a valid regression are the following: (1) the predictor samples x1,...,xn and the observations yt must be representative of the population that they sample; (2) the unknown ut must have zero mean; (3) the predictors must be linearly independent; (4) the unknowns ut must be uncorrelated; (5) the unknowns ut must be samples from a random variable population with constant variance, that is, homoscedastic.
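Conditions (4) and (5) are checked on the fitted residuals; the Durbin-Watson statistic used later in the paper is one such check, with values near 2 indicating uncorrelated residuals. A sketch with simulated residual series (illustrative, not the paper's residuals):

```python
import numpy as np

def durbin_watson(e):
    """DW = sum (e_t - e_{t-1})^2 / sum e_t^2. Approximately 2 when the
    residuals are uncorrelated; well below 2 under positive serial
    correlation."""
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(3)
white = rng.normal(size=500)     # satisfies rules (4) and (5)

ar1 = np.empty(500)              # positively autocorrelated: violates (4)
ar1[0] = 0.0
for t in range(1, 500):
    ar1[t] = 0.8 * ar1[t - 1] + rng.normal()

print(round(durbin_watson(white), 2))  # close to 2
print(round(durbin_watson(ar1), 2))    # well below 2, near 2*(1-0.8) = 0.4
```

The approximation DW ≈ 2(1 − ρ), with ρ the first-order autocorrelation of the residuals, explains both printed values.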
Evidently, there is no particular requirement that the vectors x and y of the respective data should conform to a time series with specific statistical properties. The noise variables ut in (1) appear to be I(0), with uncorrelated zero mean and with no expansion of the variance; at least there is no evidence that they are not.
The aim is to establish whether the level of [CO2] is, or is not, the main explanatory variable of average global or local temperature, in some quasimonotonic relation. For simplicity we stick to basic linear regression.
The Mauna Loa Slope Observatory in Hawaii has provided a test range of [CO2] from 315.71 ppm in April 1958 to 393.39 ppm in April 2011, and such current levels are confirmed by other measurements that started some years later, like those at Pt. Barrow in Alaska and elsewhere, including Cape Grim in Tasmania.9 We may call this the independent "x1" variable. Let y represent the averaged annual temperature at either the global level or some specific location. Is there a dependence y = f(x), or, linearized about some operating point, does y = a + bx?10 Perhaps so, but it makes no difference whatsoever to the testing whether x itself exhibits a consistently rising trend (what engineers call a "ramp") or is noise-like. The condition required to satisfy rule (1), that x should cover the given range with reasonable uniformity, is clearly satisfied. Where this paper seeks to make a useful advance is in proposing a multiple regression to include all the potential causes of "weather." One obvious candidate for determining mean maximum (i.e., day) temperature in addition to [CO2] has to be localized solar surface radiation (SSR, in Watt hours/sq. meter), which we call here x2. If the sun shines on any given day of successive years more or less "vertically" (or with less albedo) at any one place, then, subject to the level of atmospheric water vapour at the same place, the temperature is likely to vary with the respective variations in solar radiation at that place. Similarly, the level of [H2O] at any given time and place, closely related to the relative humidity (RH) that is well known to make any given temperature seem "hotter" than otherwise, has no more evident a relationship with [CO2] than does the level of solar surface radiation. That is because [CO2] is nearly invariant across the globe at all given times and places, while [H2O] varies enormously at any given latitude and time.
Again for simplicity, let us introduce this one further possible explanation as z = f(x1, x2) or, linearized, z = a + bx1 + cx2. Rule (3) says that formally there should be no linear dependence between x1 and x2, as that could produce multicollinearity and spurious correlations with temperatures y. There seems little risk of that with these variables: there is no reason why atmospheric water vapour and total Watt hours of sun at any one location during any year would be coupled to the level of [CO2] at that location in that year. Whether the time series x1, x2,...,xn and the time series y exhibit nonstationarity or not is irrelevant and incidental when they are independent of each other but, to be on the safe side, we provide standard tests for the presence or absence of multicollinearity and show that there is no such presence in any of the regressions of our first-differenced data.
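A standard multicollinearity test is the variance inflation factor, VIF_i = 1/(1 − R_i²), where R_i² comes from regressing x_i on the remaining regressors: values near 1 indicate independence, while values above about 10 are conventionally taken as problematic. A sketch with simulated regressors (illustrative stand-ins, not the NOAA series):

```python
import numpy as np

def vif(X, i):
    """Variance inflation factor of column i of the regressor matrix X."""
    y = X[:, i]
    others = np.delete(X, i, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)               # independent regressor
x2 = rng.normal(size=n)               # independent of x1
x3 = x1 + 0.05 * rng.normal(size=n)   # deliberately collinear with x1

X_ok = np.column_stack([x1, x2])
X_bad = np.column_stack([x1, x2, x3])

print(round(vif(X_ok, 0), 2))   # near 1: no multicollinearity
print(round(vif(X_bad, 0), 1))  # large: x1 and x3 are nearly the same series
```

When a near-duplicate regressor is added, the VIF of the affected column explodes, which is exactly the condition rule (3) is meant to exclude.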
What a first-differencing exercise may usefully show is a clearer rising trend in temperature, since the "noise" in the measurements will, we hope, have been reduced by introducing the additional independent variables x2,...,xn. Thus our multiple regression analysis seeks to remove, or at least mitigate, the scatter in annual temperature by testing whether that scatter is linked to changes in solar surface radiation and other climatic variables such as [H2O], in the hope of revealing a better measurement of a linear trend in temperature (which would otherwise be nondiscernible for the data assembled from our selected sites).
Again, what really matters is the statistical property of the error sequence ut. We assume that in the levels regressions this is an I(1) sequence, because there is evidence of autocorrelation and multicollinearity in the absolute data, and that is why we rely on first-differenced data. In general we find that [CO2] plays at best a marginal role, and one that is usually statistically insignificant, in explaining the temperature changes at various locations in the USA over the 47 years between 1960 and 2006 inclusive (when NOAA discontinued reporting the data sets used here, although a similar but less comprehensive series, with data from 1948 to 2011 for locations defined by their latitude and longitude, is available from ESRL-NOAA).11
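The danger being guarded against here can be reproduced in a few lines: regressing one random walk on another, independent random walk typically yields a spuriously "significant" slope and a Durbin-Watson statistic near zero in levels, while the same regression in first differences does not. A hedged sketch on simulated series (not climate data):

```python
import numpy as np

def ols_t_and_dw(y, x):
    """Slope t-statistic and Durbin-Watson statistic for y = a + b*x + e."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    s2 = e @ e / (len(y) - 2)
    se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
    return beta[1] / se_b, dw

rng = np.random.default_rng(5)
y = np.cumsum(rng.normal(size=1000))   # random walk 1
x = np.cumsum(rng.normal(size=1000))   # independent random walk 2

t_levels, dw_levels = ols_t_and_dw(y, x)                 # spurious in levels
t_diff, dw_diff = ols_t_and_dw(np.diff(y), np.diff(x))   # honest answer

print(round(t_levels, 1), round(dw_levels, 2))  # |t| often >> 2, DW near 0
print(round(t_diff, 1), round(dw_diff, 2))      # |t| small, DW near 2
```

This is the Granger-Newbold phenomenon in miniature: the levels regression "finds" a relationship between two series that are independent by construction, and the collapsed Durbin-Watson statistic is the warning sign.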

Data Sources
The "BEST" data sets [26] are the latest attempt to "homogenize" the most widely used global temperature sets, namely Gistemp, HadleyCRU, and NCDC, but they exclude the ESRL-NOAA data that have attempted the same task since 1996 [27]. In Section 4 below we use the BEST data set for the global assessment since 1990, with further detail in the Supplementary Material. Regressing global mean temperature (GMT) on the radiative forcing [RF] of the noncondensing GHGs, the results appear to confirm the findings of Hegerl et al. [17], with a fairly high R2 and an excellent t-statistic (>2.0) and P-value (<0.01), but they fail the Durbin-Watson test for serial autocorrelation, and hence for possible spurious correlation (see Table 1).13 This result leaves intact the null hypothesis of no statistically significant influence of radiative forcing by noncondensing GHGs on global mean temperatures. Modifying (3) to represent first differences in both the dependent and independent variables, regression of year-on-year changes in GMT against those in [RF] passes the Durbin-Watson test, but the adjusted R2 statistic is now far below 0.5, so it does not confirm the Hegerl et al. assertion (in Solomon et al. [1]) that "most" (at least more than 50 percent) of changes in GMT result from changes in [GHG] attributable to human causation (see Table 2 and Figure 1). The failure of the regression to reveal any contribution of changes in [GHG] to changes in Gistemp's GMT anomalies is obvious both from Figure 1 and from Table 2, which shows total statistical insignificance: with t < 2.0 and P > 0.05, the critical values are not attained. These results validate the null hypothesis, contra Hegerl et al. [17], that there is no discernible and statistically significant causation of global temperature change attributable to the radiative forcing from anthropogenic changes in noncondensing GHGs. The minimal level of R2 indicates serious omitted variable bias, and this can be addressed by using the ESRL-NOAA data for precipitable water [H2O], with results shown in Table 3. 
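The decision criteria applied in these tables (adjusted R², a slope t-statistic above about 2, and a Durbin-Watson statistic near 2) can be computed directly. A sketch on simulated first-differenced data; the series and the weak slope are illustrative assumptions, not the Gistemp or [RF] values:

```python
import numpy as np

def regression_report(y, x):
    """OLS of y on a constant and x: adjusted R^2, slope t-statistic, and
    Durbin-Watson statistic."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    ss_res = e @ e
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)        # one regressor
    s2 = ss_res / (n - 2)
    t_slope = beta[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
    return adj_r2, t_slope, dw

rng = np.random.default_rng(6)
d_forcing = rng.normal(size=60)                      # illustrative d[RF]
d_temp = 0.02 * d_forcing + rng.normal(0, 0.1, 60)   # weak link, large noise

adj_r2, t_slope, dw = regression_report(d_temp, d_forcing)
print(round(adj_r2, 2), round(t_slope, 2), round(dw, 2))
```

With a slope this weak relative to the noise, the adjusted R² comes out small and the t-statistic typically below 2, mirroring the verdict drawn from Table 2, while the Durbin-Watson statistic stays near 2 because the simulated errors are serially independent.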
Unfortunately, unlike the NOAA data sets for hundreds of locations in the USA from 1960 to 2006, the ESRL-NOAA global reanalysis data sets exclude critical variables like solar surface radiation, and that explains the minimal R2 in Table 3. The adjusted R2 in Table 4 is somewhat lower, at 0.41, than the 0.63 in Table 2, so clearly there is still at least one omitted explanatory variable. As there is virtually no sunshine at Barrow for most of the winter, solar surface radiation is not a serious candidate, but obvious candidates include temperature variation arising from the ocean currents offshore of Pt. Barrow, Arctic Ocean heat content, and decadal wind variability (see [30][31][32][33]). These variables are beyond the scope of this paper. However, if we now consider mean maximum temperatures at Barrow, the contribution of [CO2] again proves minimal, despite its steady rise ever since measurements of the atmospheric concentration of CO2 began in 1958.15 We noted above that water vapour is the most potent greenhouse gas because it absorbs strongly in the infra-red region of the light spectrum, as first demonstrated by Tyndall [10], despite the conventional view [2] that, because the water vapour content of the atmosphere will increase in response to warmer temperatures, water vapour is only a feedback that merely amplifies the climate warming effect due to increased carbon dioxide alone. In reality, the [H2O] variable in NOAA's database proves to be a remarkably powerful determinant of climate variability over the period from 1960 to 2006, not only at Barrow but across all the USA, as it is always highly statistically significant, at better than the 95% level of confidence, for both annual mean minimum and annual mean maximum temperatures. This is hardly surprising, if only because, as Tans has noted,16 "global annual evaporation equals ∼500,000 billion metric tons. Compare that to fossil CO2 emissions of ∼8.5 billion metric tons." Thus the role of [CO2] in explaining temperature changes is much less than claimed by the IPCC's Hegerl et al. [17] (see also [5]) (Figure 2).

Conclusion
This paper has used basic econometric (multivariate least squares regression) analysis of observational evidence to test two null hypotheses: first, that "most" of observed global warming since around 1950 has not been caused by emissions of noncondensing anthropogenic GHGs, contrary to the "very likely" attribution of [17]; and, second, that the noncondensing GHGs do not constitute a "control knob" enabling manipulation of global climate. The regression results in the previous section confirm the first null, as there is no statistically significant evidence to show that increases in anthropogenic GHGs account for any, let alone "most," of observed global temperature change.
The second null derives from this statement by Lacis et al. [3]:
This assessment comes about as the result of climate modeling experiments which show that it is the noncondensing greenhouse gases such as carbon dioxide, methane, ozone, nitrous oxide, and chlorofluorocarbons that provide the necessary atmospheric temperature structure that ultimately determines the sustainable range for atmospheric water vapor and cloud amounts and thus controls their radiative contribution to the terrestrial greenhouse effect. From this it follows that these noncondensing greenhouse gases provide the temperature environment that is necessary for water vapor and cloud feedback effects to operate, without which the water vapor dominated greenhouse effect would inevitably collapse and plunge the global climate into an icebound Earth state.
Schmidt et al. [2] make a similar claim: "a model simulation performed with zero CO2 gives a global mean temperature change of about −35°C and produces an ice-covered planet (A. Lacis, pers. communication)." This paper's regressions do not invalidate the null that none of the Schmidt-Lacis effects is evident when econometric analysis is applied to observations of the most relevant climate variables; instead they indicate that the planet's slow warming is mainly associated with the much larger primary, rather than feedback, changes in atmospheric water vapour, which, along with rising [CO2], has major social benefits in supporting the rising food production needed to feed a global population now at 7 billion and projected to reach 9 billion by 2050 [6][7][8][9]. This may imply that the demonization of atmospheric CO2 by Hegerl et al. [17] and Schmidt et al. [2], as the alleged primary source of rising temperature, owes something to the obvious political difficulty, in countries like Australia, of blaming increasing rainfall for the observed slow increases in global temperatures evident since 1950.
The basic physical science underlying the results above is very straightforward, despite the misleading claims in Solomon et al. [1] and Trenberth and Fasullo [34].17 These and others distinguish between the so-called "long-lived" noncondensing GHGs and the certainly short-lived nature of [H2O] arising from evaporation created by solar energy, since it is true that condensation and precipitation generally follow evaporation within at most around ten days. But that does not eliminate nonanthropogenic evaporation, for as Lim and Roderick show [35, page 14], the average daily level of basic [H2O] is around 3-4 litres per square meter throughout the year.18 That is a result of the solar radiative forcing of 342 W/sq. meter [1, page 96]. In contrast, the total radiative forcing attributable to noncondensing anthropogenic GHGs is only c. 2.6 W/sq. meter [28]. The annual increase in GMT attributable to that level of forcing since 1950 has been far smaller than implied by Lacis et al. [3] and Schmidt et al. [2]. Consequently it is far from certain that managing the level of atmospheric carbon dioxide concentration really is a meaningful "control knob."

Precipitable water vapour (cm.). The total atmospheric water vapour contained in a vertical column of unit cross-sectional area extending between any two specified levels, commonly expressed in terms of the height (cm in Table 7) to which that water substance would stand if completely condensed and collected in a vessel of the same unit cross-section. See also Solomon et al. [1, pages 271-273], where it is stated that "global, local, and regional studies all indicate increases in moisture in the atmosphere near the surface."

TOT and OPQ. Opaque sky cover is the amount of sky completely hidden by clouds or obscuring phenomena, while total sky cover includes this plus the amount of sky covered but not concealed (transparent). 
Sky cover, for any level aloft, is described as thin if the ratio of transparent to total sky cover at and below that level is one-half or more. Sky cover is reported in tenths, so that 0.0 indicates a clear sky and 1.0 (or 10/10) indicates a completely covered sky (excerpt from Meteorological Glossary, AMA, accessed 29 September 2010). En passant, we note that the presence of more molecules of CO 2 in the atmosphere could be expected to decrease TOT and OPQ, and this could help to explain why rising [CO 2 ] in the local data sets examined here tends to have a negative, rather than positive, impact on local temperatures (see Supplementary Material).

Tau, Aerosol Optical Depth (AOD or, in NOAA data sets, "Tau"). Aerosol optical depth is a quantitative measure of the extinction of solar radiation by aerosol scattering and absorption between the point of observation and the top of the atmosphere. It is a measure of the integrated columnar aerosol load and the single most important parameter for evaluating direct radiative forcing. The optical depth expresses the quantity of light removed from a beam by scattering or absorption during its path through a medium. If I0 is the intensity of radiation at the source and I is the observed intensity after a given path, then the optical depth τ is defined by I = I0e^(−τ), that is, τ = ln(I0/I).
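Under this definition a worked example is straightforward; the intensities below are illustrative numbers, not NOAA measurements:

```python
import math

I0 = 1000.0   # intensity of radiation at the source (arbitrary units)
I = 368.0     # observed intensity after the path through the aerosol layer

tau = math.log(I0 / I)   # optical depth, tau = ln(I0/I)
print(round(tau, 2))     # 1.0: about 63% of the beam has been removed
```

An optical depth of 1 thus corresponds to attenuation by a factor of e, i.e., transmission of roughly 37% of the incident beam.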

Acknowledgments
The author is grateful to M. S. Hodgart for his methodological insights and to many others for invaluable comments on early versions of this paper but is responsible for all views expressed and for any remaining errors.

Endnotes
1. Tol and Vellinga [36] used econometric analysis to separate the enhanced greenhouse effect from the influence of the sun at the top of the atmosphere (TOA), while Tol and de Vos [13] used Bayesian analysis. Neither paper considers the role of atmospheric water vapour. Most common is the fingerprint method (e.g., Hegerl and Allen [18]), which claims to produce a human signal by using Global Circulation Models (GCMs). But "the fingerprint approach is only applicable for detection of (dis)similarities between patterns; it seems impossible to use it to derive a probability distribution of the climate sensitivity. We use time series analysis. We do not rely on GCM results, at the expense of using an (overly) simple representation of the climate, and show that this allows to estimate a probability distribution of the climate sensitivity" [13, pages 88-89].
2. The textbook by von Storch and Zwiers, Statistical Analysis in Climate Research [37], offers an advanced treatment but, apart from using climate data for illustrative examples, does not itself undertake systematic analysis using its own methods. Those methods are also absent from [17].
3. If only two independent variables are specified, "most" must mean more than 50%; if there are three or more, then "most" means that the preferred variable, in this case [CO2], must account for more than any of the others.
7. "In the mathematical sciences, a stationary process is a stochastic process whose joint probability distribution does not change when shifted in time or space. As a result, parameters such as the mean and variance, if they exist, also do not change over time or position" (Wikipedia, October 2010).
8. A stationary series ab initio is denoted I(0), the first differenced I(1), and the second I(2).
10. In the Supplementary Material, we report basic regressions of Gistemp's "global temperatures" as a function of the radiative forcing of the level of [CO2].
11. Graphing highly autocorrelated time series data showing rising [CO2] concentrations and rising temperatures is not enough to "prove" that the data support the theory that the former is responsible for the latter.
12. Point Barrow is also an ideal test of Arrhenius' model [40], since he himself claimed that the temperature effects of doubled [CO2] would be significantly higher (6.05°C) at Barrow's latitude (71°N) than at the equator (4.95°C) [40].
13. If the Durbin-Watson statistic is substantially less than 2, there is evidence of positive serial correlation ("Durbin-Watson statistic," Wikipedia, accessed 26 October 2010).
14. "Specifying the concentration equation in first differences eliminates all stochastic trends and therefore allows us to avoid the effects of carbon uptake by the unknown carbon sink(s) and measurement error on statistical estimates for the effect of temperature on concentrations."
15. See also Curry et al. [30] and Liu et al. [31].
16. Pers. comm. See also Tans [41]. For more detailed estimates of global evaporation rates, see Lim and Roderick [35].
17. Trenberth [42] states: "Water has a short lifetime in the atmosphere of 9 days on average before it is rained out. Carbon dioxide on the other hand, has a long lifetime, over a century, and therefore plays the most important role in climate change while water vapor provides a positive feedback or amplifying effect; the warmer it gets, the more water vapor the atmosphere can hold by about 4% per degree Fahrenheit." This claim that atmospheric CO2 has "a long lifetime, over a century" is at variance with Houghton et al. [24] and Houghton [23], which indicate only about 5 years, because around 20 percent of atmospheric CO2 is continuously recycled between the earth's surface and the atmosphere. Moreover, if atmospheric water vapour arising from solar-based evaporation is "rained out" within 9 days, as claimed by Trenberth and Fasullo [34], that must also be true of [H2O] attributable to rising temperature via the Clausius-Clapeyron relation (see below).
18. One millimetre of measured precipitation is the equivalent of one litre of rainfall per metre squared. The estimates in Lim and Roderick [35]