Technical Efficiency and Technical Change in Canadian Manufacturing Industries

This study applies the "true fixed effects" panel stochastic frontier methodology to the Canadian KLEMS data set to estimate technical change and technical efficiency in the Canadian manufacturing sector. To account for the endogeneity of capital inputs as well as possible problems related to omitted variables, a two-stage residual inclusion method is pursued. The first stage is estimated using the dynamic panel GMM method. The results show that Canadian manufacturing industries experienced significant declines in technical efficiency during the last ten years. This suggests that the observed slowdown in TFP growth during the recent past is partly due to declining technical efficiency.


Introduction
Canadian labour productivity growth lags behind that of other countries, particularly the USA. Slow or no growth in TFP appears to explain this trend [1]. Gordon [2] observes that the TFP index in 2011 was at the same level as in the early 1970s. According to the Canadian productivity data set used in this study, TFP growth in the manufacturing sector, the best performing sector, was 1.8% per year during the period 1961-1997. However, most of this growth was the result of increases before 1990. TFP stagnated during the 1990s and has been declining since 2000. There is no clear explanation as to why TFP is not growing. The gap between Canadian and US TFP growth may be explained by the gap in materials and equipment intensity [1]. Lee and Tang [3] link the productivity gap in the manufacturing sector relative to the USA to gaps in capacity utilization, labour quality, and other factors. Most recently, the most common view appearing in newspaper headlines is that lagging innovation is the main culprit.
TFP is commonly interpreted as an index of technical change, suggesting that Canada's anemic growth in labour productivity is largely caused by a lack of technological progress. This interpretation seems to have contributed to the puzzle about the productivity growth problem in Canada, given that Canada seems to be doing fine in terms of the factors underlying technical change. In reality, TFP change is entirely attributable to technical change only when production technology is characterized by constant returns to scale and when there is no technical inefficiency [4]. This suggests that, assuming constant returns to scale (the Statistics Canada productivity data used in this study preimpose constant returns to scale), deterioration of technical efficiency could be the main reason for the slowdown or decline in TFP growth even though technical change is positive.
In this study, we provide a parametric decomposition of TFP growth in the Canadian manufacturing sector using the stochastic frontier (SF) method. The SF method, developed by Aigner et al. [5] and mainly used in the analysis of microdata, has more recently been applied to higher levels of aggregation as well as to panel data. For example, Sharma et al. [6] employ the SF method to decompose TFP growth across the states in the United States. The decomposition is based on the observation that producers improve their TFP not only by pushing out the production frontier (an upward shift in their production functions), that is, through technical change. In fact, most producers exhibit technical inefficiency, that is, they do not operate on the production frontier, such that they could improve their productivity without an improvement in technology. The stochastic frontier method for panel data has also evolved from assuming that technical efficiency is time-invariant to various specifications that allow it to vary across time [7][8][9][10]. However, all of these approaches attributed any panel-specific individual effects to the inefficiency term, thereby leading to biased estimates. The approach adopted in this study is the recently developed true fixed effect framework [11]. This method allows the individual effects to be distinct from the inefficiency terms. Another problem in estimating frontier parameters is the endogeneity of some or all of the inputs, together with the omitted variables problem resulting from the fact that the underlying factors behind inefficiency, not included in the regressions, may be correlated with the variables that enter the frontier model. In this study, we adopt a newly developed method known as the two-stage residual inclusion approach [12] in order to address the endogeneity of capital inputs and the omitted variables problem. The first stage estimation is used to predict the residuals that are then used in the second stage. The Arellano and Bover [13] system dynamic panel GMM method is used to model capital as a function of input prices and output as well as its own lags and time dummies. This method is well suited because one of the explanatory variables, output, is endogenous and is treated as such. The second stage estimation follows the true fixed effect method with residual inclusion.
The data used are drawn from Statistics Canada's KLEMS data set, which provides total factor productivity, labour productivity, input prices, output prices, gross domestic product (GDP), and gross output by 4-digit NAICS aggregation (NAICS stands for North American Industry Classification System). The 21 manufacturing industries, based on NAICS aggregation, have enjoyed much better TFP growth than the overall business sector average.
Comparing the trends in decade-average growth rates of TFP and labour productivity (LP) in the manufacturing sector reveals a continuous slowdown in productivity. During the most recent years (2001-2007), average TFP growth was negative, whereas the average growth rate in LP was dramatically lower than all the averages observed during the previous periods.
There are two key contributions of this paper. The first is the application of new econometric methods. The true fixed effect method is a new and important advancement in stochastic frontier regression methods. The two-stage residual inclusion method is also a new approach that has been proposed to deal with endogeneity problems in nonlinear settings. This paper utilizes both of these advancements in a unique way to obtain precise and consistent estimates of technical efficiency. The comparison of the results from these new methods to the time-decay model indicates that the methods adopted here better explain the data. The second contribution is new insight into Canada's productivity problem. So far, the discussion has been based on the perception that the TFP slowdown, the main force behind the slowdown in labour productivity, has been caused by inadequate technical change, and thus the recommended policy prescriptions were largely aimed at research and development, training, investment in capital equipment, and the like. The results in this paper point to another unexplored aspect: deterioration in technical efficiency, which has been significant in many of the manufacturing subsectors, particularly during the period after 2000. For the manufacturing sector as a whole, we find that the average technical efficiency for the most recent years is much lower than the average for the period 1961-1970. This raises the questions of why this has been the case and how it might be improved.
The rest of the paper is organized as follows. We discuss TFP and its decomposition into its components in the next section, followed by a presentation of the stochastic frontier specifications and their application to the decomposition of TFP. Estimation methods are discussed in Section 4, followed by a presentation of the results in Section 5. Section 6 provides some concluding remarks.

Decomposition of TFP into Its Components
Suppose the production function is given by y = f(x, t), where y is observed output, x is a vector of observed inputs used in production, and t is a time index. Assuming no inefficiency and constant returns to scale, TFP growth is defined as

ṪFP = ẏ − ∑_j s_j ẋ_j,  (1)

where ṪFP = ∂ ln TFP/∂t; ẏ = ∂ ln y/∂t; ẋ_j = ∂ ln x_j/∂t, given that x_j is the observed use of the jth input; and s_j is the cost share of the jth input (s_j = w_j x_j/C), where w_j is the unit cost of x_j and C = ∑_j w_j x_j.
Define the production frontier as y* = f(x, t), where y* is the maximum amount of output that can be produced with the input vector x at time t. In other words, this reflects the technological frontier that producers achieve only when they are technically efficient. The output-based measure of technical efficiency is defined as TE = y/f(x, t), where y is actual output and 0 < TE ≤ 1. Taking the natural log of both sides of the TE equation and differentiating with respect to time give us

ṪE = ẏ − d ln f(x, t)/dt,  (2)

which can be written as

ṪE = ẏ − ∑_j (∂ ln f(x, t)/∂ ln x_j) ẋ_j − ∂ ln f(x, t)/∂t.  (3)

Given that ∂ ln f(x, t)/∂ ln x_j = ε_j is the elasticity of frontier output with respect to change in x_j, (3) can be rearranged using the definition of TFP growth in (1) as

ṪFP = ṪE + ∂ ln f(x, t)/∂t + ∑_j (ε_j − s_j) ẋ_j,  (4)

and noting that the rate of technical change (TC) is given as ṪC = ∂ ln f(x, t)/∂t, we get

ṪFP = ṪE + ṪC + ∑_j (ε_j − s_j) ẋ_j.  (5)

Equation (5) is similar to what is seen in Nishimizu and Page Jr. [14]. That is, growth in TFP is equal to the sum of technical efficiency change, technical change, and a factor accounting for the effect of input growth. The last term has an impact only if the estimated elasticities differ from the corresponding cost shares and the input quantities are changing.
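The decomposition in (5) is simple enough to check numerically. The following sketch (in Python, with purely hypothetical numbers for the growth rates, elasticities, and cost shares) computes TFP growth as the sum of technical change, efficiency change, and the input-growth term:

```python
# Illustrative decomposition of TFP growth, eq. (5):
#   TFP_dot = TC_dot + TE_dot + sum_j (eps_j - s_j) * x_dot_j
# All numbers below are hypothetical, chosen only to show the arithmetic.

def decompose_tfp(tc_dot, te_dot, eps, shares, x_dot):
    """Return (tfp_dot, input_effect) for the Nishimizu-Page decomposition."""
    input_effect = sum((e - s) * g for e, s, g in zip(eps, shares, x_dot))
    return tc_dot + te_dot + input_effect, input_effect

# Hypothetical values: 1.5% technical change, -0.8% efficiency change,
# elasticities close to (but not equal to) the cost shares.
tfp_dot, gap = decompose_tfp(
    tc_dot=0.015, te_dot=-0.008,
    eps=[0.35, 0.65], shares=[0.33, 0.67], x_dot=[0.03, 0.01])
print(round(tfp_dot, 4))  # small positive TFP growth despite falling efficiency
```

The last term contributes nothing when the estimated elasticities coincide with the cost shares, which is the constant-returns competitive benchmark discussed above.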

Panel Stochastic Production Model and Decomposition of TFP Growth
The stochastic frontier framework is well suited to empirically implement the above decomposition. Start with the production function

y_it = f(x_it, t) exp(−u_it),  (6)

where y_it is the output of the ith subsector (i = 1, . . ., N) in period t (t = 1, 2, . . ., T), f(⋅) is the production technology which represents the frontier, x_it is a vector of J inputs, t is the time trend variable that serves as a proxy for technical change, and u_it ≥ 0 is output-oriented (Farrell) technical inefficiency.
Taking logs and differentiating with respect to time gives us

ẏ_it = ṪC_it + ∑_j ε_ijt ẋ_ijt − ∂u_it/∂t.

By substituting this in (1), we get

ṪFP_it = ṪC_it − ∂u_it/∂t + ∑_j (ε_ijt − s_ijt) ẋ_ijt,  (9)

where ε_ijt is the elasticity of the ith industry's output with respect to the jth input and s_ijt is the cost share of the jth input in the ith industry. Equation (9) is identical to (5) except that we are defining the relationships for longitudinal data and that ṪE is replaced by −∂u_it/∂t. This is because technical efficiency is defined as TE_it = exp(−u_it), which implies that ln TE_it = −u_it and −∂u_it/∂t = ṪE_it. Readers are referred to Kumbhakar et al. [10] for more detailed and advanced presentations on this subject.
The discrete-time counterparts of these expressions are computed using Ż_t = Z_t/Z_{t−1} − 1, where Z stands for TFP, TC, TE, or x_j.
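Applied to an index series, the discrete-time formula is just the period-over-period growth rate. A minimal illustration with a hypothetical TFP index:

```python
# Discrete-time growth rate Z_dot_t = Z_t / Z_{t-1} - 1, applied to an
# illustrative TFP index series (hypothetical numbers, base = 100).
def growth_rates(series):
    return [cur / prev - 1 for prev, cur in zip(series, series[1:])]

tfp_index = [100.0, 101.8, 103.5, 103.5, 102.9]
print([round(g, 4) for g in growth_rates(tfp_index)])
```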

Estimation Method
In its early development, the panel data stochastic frontier framework was based on the assumption that technical inefficiency is estimated by the individual heterogeneity effect or the fixed effect term [8, 15], which implies that it is time-invariant. Cornwell et al. [16], Kumbhakar [17], and Battese and Coelli [18] proposed panel stochastic frontier models with a time-varying inefficiency term, each proposing a different specification of that term. Schmidt and Sickles [8] proposed estimation of a fixed effect model,

y_it = β_0i + ∑_j β_j x_ijt + v_it,  (10)

and they suggest estimating the inefficiency of the ith producer using the deviation of the individual-specific intercept from the estimated maximum within the sample; that is, β*_i = max_i(β_0i) − β_0i (we specify β*_i = β_0i − min_i(β_0i) if cost inefficiency is analyzed). Apart from attributing all individual heterogeneity to inefficiency, this method does not permit estimation and evaluation of the evolution of technical efficiency over time.
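On simulated data, the Schmidt-Sickles logic can be sketched as follows: estimate the slope by the within (demeaned) estimator, recover the individual intercepts, and score each unit's inefficiency as its distance from the best unit. All numbers are illustrative, not estimates from the paper:

```python
import numpy as np

# Sketch of the Schmidt-Sickles approach on simulated panel data:
# inefficiency of unit i is the gap between its recovered intercept
# and the largest recovered intercept in the sample.
rng = np.random.default_rng(0)
N, T = 5, 20
alpha = np.array([1.0, 0.8, 0.9, 0.6, 0.95])   # true unit intercepts
beta = 0.4                                     # true slope
x = rng.normal(size=(N, T))
y = alpha[:, None] + beta * x + 0.05 * rng.normal(size=(N, T))

# Within (demeaned) estimator for the slope
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_hat = (xd * yd).sum() / (xd ** 2).sum()

# Recover intercepts and time-invariant relative inefficiency scores
alpha_hat = y.mean(axis=1) - beta_hat * x.mean(axis=1)
u_hat = alpha_hat.max() - alpha_hat
print(u_hat.round(2))  # the best unit scores 0 by construction
```

Note that, as the text emphasizes, any genuine unit heterogeneity unrelated to inefficiency ends up in u_hat here, and the scores cannot vary over time.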
Cornwell et al. [16], Kumbhakar [17], and Battese and Coelli [18] suggest models in which the individual-heterogeneity effects are time-varying, using the specification

y_it = β_0 + ∑_j β_j x_ijt + v_it − s·u_it,  (11)

where y_it is the natural logarithm of GDP for the production efficiency model and the natural logarithm of costs for the cost efficiency model, x_ijt are the natural logarithms of the input quantities for the production efficiency model and of the input prices for the cost efficiency model, and

s = 1 for production functions, −1 for cost functions.  (12)

In (11), v_it is the idiosyncratic error and u_it is a time-varying panel-specific term.
Cornwell et al. [16] assume a quadratic specification of the inefficiency term,

u_it = θ_0i + θ_1i t + θ_2i t²,  (13)

which is simply an individual-specific slope variable. In addition to requiring estimation of a large number of parameters (N × 3), this specification does not allow decomposition of TFP into TC and TE when technical change is also captured by the time trend. Given that our objective is to decompose TFP growth into its components, this specification is inappropriate for this analysis. Kumbhakar [17], on the other hand, proposed that u_it = G(t)·u_i, where

G(t) = [1 + exp(bt + ct²)]^{−1}.  (14)

Depending on the values of the parameters b and c, G(t) lies between zero and one and could be monotone decreasing (increasing) or convex (or concave). The alternative proposed by [18] is

u_it = exp[−η(t − T_i)]·u_i,  (15)

where T_i is the last period in the ith panel, η is the decay parameter, and u_i and v_it are distributed independently of each other and of the covariates in the model. When η > 0, the degree of inefficiency decreases over time, and when η < 0, the degree of inefficiency increases over time. Because t = T_i is the last period, the last period for firm i contains the base level of inefficiency for that firm. If η > 0, the level of inefficiency decays toward the base level; if η < 0, the level of inefficiency increases to the base level.
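The Battese-Coelli time-decay specification, u_it = exp(−η(t − T_i))·u_i, is easy to visualize numerically. A small sketch with hypothetical values of the base inefficiency and the decay parameter η:

```python
import math

# Illustrative Battese-Coelli time decay: u_it = exp(-eta*(t - T_i)) * u_i.
# With eta > 0 inefficiency falls toward the base level u_i at t = T_i;
# with eta < 0 it rises toward it. All numbers are hypothetical.
def bc_inefficiency(u_base, eta, t, T_last):
    return math.exp(-eta * (t - T_last)) * u_base

u_i, T_i = 0.2, 10
path = [bc_inefficiency(u_i, 0.03, t, T_i) for t in range(1, T_i + 1)]
print(round(path[0], 4), round(path[-1], 4))  # decays toward u_i = 0.2
```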
In these time-varying panel SF models, the intercept β_0 is the same across all units. The underlying assumption is similar to that of Schmidt and Sickles [8] in implying that time-invariant individual effects, if present, will be picked up by the time-varying inefficiency effect. This assumption is plausible in studies of firms within the same industry. However, individual heterogeneity is likely when we compare different subsectors.
Greene [11] proposes a model that brings the above two frameworks together, suggesting the following model known as the "true" fixed effect:

y_it = β_0i + ∑_j β_j x_ijt + ε_it,  (16)

where ε_it = v_it − u_it. In this specification, both the time-invariant individual heterogeneity (the fixed effect) and the time-varying inefficiency effects are included. Greene [11] suggests that this can be estimated by including individual dummies along with the preferred specification for the time-varying inefficiency term as suggested in (13)-(15), a method termed the maximum likelihood dummy variable (MLDV) method. This specification enables us to disentangle the time-varying efficiency and the time-invariant unit-specific heterogeneity effects and is feasible given that we have few cross-section units (21) compared to time periods (47) in our data. However, in short panels, this approach may produce inconsistent variance estimates due to the incidental parameters problem. This has a critical impact on the SF analysis since accurate estimation of inefficiency scores relies on the precision of these estimates. Belotti and Ilardi [19] show that the incidental parameters problem vanishes in long panels (T ≥ 15).
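Greene's MLDV idea, estimating the frontier by maximum likelihood with a full set of unit dummies, can be sketched on simulated data. The toy below (using numpy/scipy, a pooled half-normal frontier rather than the paper's full specification, and entirely simulated data) absorbs unit heterogeneity with dummies and estimates the slope and the two variance parameters by ML:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy version of the MLDV estimator: half-normal stochastic frontier with
# unit dummies absorbing time-invariant heterogeneity. Simulated data only.
rng = np.random.default_rng(1)
N, T = 4, 30
alpha = np.array([0.5, 0.7, 0.9, 1.1])          # unit effects
x = rng.normal(size=(N, T))
u = np.abs(rng.normal(0, 0.3, size=(N, T)))     # half-normal inefficiency
v = rng.normal(0, 0.15, size=(N, T))            # idiosyncratic noise
y = alpha[:, None] + 0.5 * x + v - u            # true slope = 0.5

D = np.kron(np.eye(N), np.ones((T, 1)))         # unit dummies
X = np.column_stack([D, x.ravel()])
yv = y.ravel()

def negll(theta):
    beta, ls_u, ls_v = theta[:-2], theta[-2], theta[-1]
    su, sv = np.exp(ls_u), np.exp(ls_v)         # log-parametrized sigmas
    s = np.hypot(su, sv)
    eps = yv - X @ beta
    # half-normal SF log-likelihood (Aigner-Lovell-Schmidt form)
    ll = np.sum(np.log(2.0 / s) + norm.logpdf(eps / s)
                + norm.logcdf(-eps * (su / sv) / s))
    return -ll if np.isfinite(ll) else 1e10

theta0 = np.concatenate([np.zeros(N + 1), np.log([0.3, 0.15])])
res = minimize(negll, theta0, method="BFGS")
print(res.x[N].round(2))  # slope estimate, should be near the true 0.5
```

With N small relative to T, as in the paper's 21-by-47 panel, the dummy coefficients are estimated from many observations each, which is exactly why the incidental parameters problem noted above fades in long panels.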
If the likelihood function is based on the assumption of a half-normal distribution for the inefficiency term, given ε_it = v_it − u_it, the conditional expected values of the inefficiency terms are computed as

E[u_it | ε_it] = σ* [φ(a_it)/(1 − Φ(a_it)) − a_it],  (17)

where a_it = ε_it λ/σ; λ = σ_u/σ_v; σ = (σ_u² + σ_v²)^{1/2}; σ* = σ_u σ_v/σ; and φ and Φ are the standard normal density and distribution functions. Technical efficiency, TE_it, is defined either as exp(−E[u_it | ε_it]) following Jondrow et al. [20] or as E[exp(−u_it) | ε_it] following Battese and Coelli [15].
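The conditional-mean formula can be sketched directly. The helper below implements the standard half-normal expression with hypothetical parameter values; it is an illustration, not output from the paper's estimation:

```python
from math import sqrt, pi, exp, erf

# Sketch of the Jondrow et al. conditional mean of inefficiency under the
# half-normal assumption, with eps = v - u. Hypothetical inputs throughout.
def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2 * pi)

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def jlms(eps, sigma_u, sigma_v):
    """E[u | eps] for eps = v - u, u half-normal, v normal."""
    sigma = sqrt(sigma_u ** 2 + sigma_v ** 2)
    lam = sigma_u / sigma_v
    a = eps * lam / sigma
    sig_star = sigma_u * sigma_v / sigma
    return sig_star * (norm_pdf(a) / (1 - norm_cdf(a)) - a)

u_hat = jlms(eps=-0.1, sigma_u=0.2, sigma_v=0.1)
te_jlms = exp(-u_hat)   # Jondrow et al. style technical efficiency
print(round(te_jlms, 3))
```

A more negative composed residual (output further below the frontier) raises E[u | ε] and lowers the implied efficiency score.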
In our case, there is reason to suspect that capital is endogenous. Capital is accumulated through investment, which depends on the level of output. Furthermore, the omitted determinants of technical efficiency could be related to the inputs. This usually implies correlations between the inputs and the composite error term [21]. This is similar to the problem caused by endogenous regressors, suggesting that both problems can be addressed simultaneously. In such cases, a two-stage procedure should be pursued in which the first stage is estimated using the generalized method of moments (GMM), which enables us to address both the endogeneity and the omitted variables problems. GMM is preferred because one of the explanatory variables for capital, output, is endogenous. The second stage follows the residual inclusion (RSI) method discussed in Terza et al. [12]. In this method, residuals estimated from the first stage regressions of the endogenous inputs are included in the second stage regression, which seeks to estimate (16).
The panel GMM method is well suited to handle this type of regression because output is an endogenous variable in the first stage regression. Following Anderson and Hsiao [22], the dynamic panel specification for capital that we seek to estimate is given as

Δk_it = δ Δk_i,t−1 + Δx_it′γ + Δε_it,

where Δ indicates that all variables are in first differences. In this specification, Δk_i,t−1 and Δε_it are correlated. To overcome this problem, Anderson and Hsiao [22] proposed an instrumental variables approach in which Δk_i,t−2, Δk_i,t−3, Δk_i,t−4, and so on are used to instrument Δk_i,t−1, given that Δk_i,t−2 is uncorrelated with Δε_it. Arellano [23] shows, on the other hand, that using the lagged differences as instruments results in estimators that have a very large variance, and obtains results that suggest the efficiency of estimators based on lagged levels as instruments instead. Arellano and Bond [24] confirm the superiority of using lagged levels as instruments with simulation results, leading to the development of the Arellano and Bond [24] and Arellano and Bover [13] GMM methods.
The system GMM method proposed by Arellano and Bover [13], which allows simultaneous estimation of the levels and differenced models, enables us to estimate the dynamic panel model without relying solely on the differenced equations.
The estimation is based on the identification of m instruments forming a vector Z_it that satisfies the following orthogonality condition:

E[Z_it′ Δε_it] = 0.

Any exogenous variables in x_it that are not expected to be correlated with the error are immediate candidates for inclusion in the Z_it vector. The endogenous inputs lagged two or more periods can be used as instruments [13, 24]. The GMM estimator is the parameter vector β̂ that solves

min_β [(1/N) ∑_i Z_i′ Δε_i(β)]′ W [(1/N) ∑_i Z_i′ Δε_i(β)],  (18)

where W is an optimal weighting matrix given by W = {(1/N) ∑_i [Z_i′ Δε_i(β̃) Δε_i(β̃)′ Z_i]}^{−1}, where β̃ are parameter estimates from a consistent preliminary GMM estimator using the identity matrix as the weighting matrix [23]. Arellano and Bond [24] derive the corresponding one-step estimator along with the robust variance-covariance estimates (VCE). If the two-step estimator is applied, a finite-sample correction for the VCE proposed by Windmeijer [25] should be used to resolve the unreliability of the usual asymptotic approximations, particularly in the presence of heteroskedasticity, because the weighting matrix depends on the estimated parameters. The orthogonality condition is tested using the Hansen test. Rejection of the null hypothesis would mean that the instruments do not satisfy the orthogonality condition. The second step of the RSI approach is to estimate (16) by including the residuals computed from the first stage estimation as a regressor:

y_it = β_0i + ∑_j β_j x_ijt + θ ê_it + v_it − u_it,  (19)

where ê_it are the residuals from the first stage estimation. Equation (19) is estimated by ML using the algorithm proposed in Belotti and Ilardi [19].
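The logic of the residual inclusion step can be illustrated in a few lines. In the sketch below (simulated data, with a simple OLS first stage standing in for the system GMM estimation), a naive regression of output on an endogenous input is biased, while adding the first-stage residual as a regressor recovers the true coefficient:

```python
import numpy as np

# Minimal sketch of two-stage residual inclusion. OLS replaces the GMM
# first stage purely for illustration; all data are simulated.
rng = np.random.default_rng(2)
n = 500
z = rng.normal(size=n)                 # instrument (e.g., an input price)
e1 = rng.normal(size=n)                # common shock causing endogeneity
k = 0.8 * z + e1                       # endogenous "capital"
y = 1.0 + 0.4 * k + 0.6 * e1 + rng.normal(scale=0.5, size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive regression: biased because k is correlated with the error via e1
b_naive = ols(np.column_stack([np.ones(n), k]), y)[1]

# Stage 1: regress k on the instrument, keep the residuals
g = ols(np.column_stack([np.ones(n), z]), k)
resid = k - np.column_stack([np.ones(n), z]) @ g

# Stage 2: include the first-stage residual as an extra regressor
b_2sri = ols(np.column_stack([np.ones(n), k, resid]), y)[1]
print(round(b_naive, 2), round(b_2sri, 2))  # naive is biased upward
```

The included residual acts as a control function: it soaks up the part of the error that is correlated with the endogenous input, which is the same role ê_it plays in (19).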

Data and Estimation Results
5.1. Data. Statistics Canada's KLEMS (capital, labour, energy, materials, and services) data set provides input, output, price, and productivity indexes for Canadian industries for the period 1961-2007. In this analysis, we focus on the manufacturing industries. The industries included in the study are listed in Table 1.
The first stage estimation of the endogenous capital and output model makes use of the input price data, after expressing them in real terms using the output price index as a deflator. All of these pieces of information are available in the KLEMS data set. Figure 1 shows the trends in the average input and output prices (the data shown are representative of the overall manufacturing sector, computed as a simple average across the manufacturing industries).
In Figure 2, we present trends in the average capital and labour service indexes together with GDP as well as the total factor and labour productivity indexes. The figure shows that labour services have not changed much over the years, whereas capital services have increased. The productivity and output indexes follow trends similar to that of capital services.

5.2. Results. Capital input is modelled as a function of the input prices, gross output, and time dummies. The estimation is conducted using the Arellano-Bond system GMM. The null hypothesis of the Sargan test, that the overidentifying restrictions are valid, cannot be rejected (χ²(851) = 860.45; Prob > χ² = 0.404). Thus, the residuals computed from this first stage estimation are included in the frontier models.
Results from estimations based on the two versions of the "true fixed effect" model and the "time-decay" model are presented in Table 2; that is, estimations carried out with and without residual inclusion (RSI) from the first stage analysis are provided. Although a translog production function was also considered, the estimation of the frontier parameters suggests that the Cobb-Douglas specification performs better in explaining the data, owing to the fact that the KLEMS data set preimposes a Cobb-Douglas structure (the value of GDP is equal to the sum of the costs of capital and labour).
The results are strong in terms of predicting values of β_K and β_L that are close to their actual counterparts in the data without the imposition of any a priori restriction. The individual heterogeneity effects are statistically significant in only 7 of the 21 manufacturing industries. The average rate of technical change is estimated at 1.5% per year during the period studied, as shown by the coefficient of the time trend. This is a good match to the actual average 1.8% growth rate in the aggregate manufacturing sector TFP.

Notes to Table 2: ‡ Constant terms are industry specific for the TFE model (not shown) and are not included in the BC models. *** Significant at 1%; not significant otherwise. One parameter in the last two columns, ilgt, is not statistically significant. (i) TFE and TFE-RSI, respectively, stand for the true fixed effect panel stochastic frontier without and with residual inclusion. (ii) BC and BC-RSI, respectively, stand for the Battese and Coelli time-decay panel stochastic frontier without and with residual inclusion. (iii) The significance of the inefficiency term in the BC specifications is based on the inverse logit of gamma (ilgt), where γ = σ_u²/(σ_u² + σ_v²); the optimization is parametrized in terms of the inverse logit of γ because γ must lie between zero and one.
The TFE-RSI and TFE columns in Table 2 indicate that the results are for the "true fixed effect" model with and without residual inclusion (RSI), respectively. The tests of significance are based on cluster-robust standard errors. As can be seen, the estimated parameters change very little between the two approaches. The estimated frontier coefficients for the capital and labour inputs are very close to the actual shares, and we cannot reject the hypothesis that the sum of the two coefficients is equal to one. The inefficiency parameters (σ_u, σ_v, and λ) are also very close, but we see that σ_u is larger when the residual term is excluded. This suggests potential gains from the RSI approach. The estimated coefficient of the residual (ê) is statistically significant at one percent. Lastly, in addition to the overall tests of significance shown by the Wald χ² tests, the results also show that the stochastic frontier with a technical inefficiency term explains the data better; in other words, a production function that excludes the technical inefficiency term is not a correct specification, as is ascertained by the statistically significant λ parameter in Table 2.
The time-varying decay model, or the Battese and Coelli [7] model (BC), estimations are presented in the last two columns. It is important to note that this model could not generate parameter values that are consistent with the data. For example, the estimated average technical changes are very small in these specifications compared to what is observed in the data. The frontier coefficients β_K and β_L are also very different from the observed shares of capital and labour costs in total value added. The data show that the average share of capital cost is 0.33 while that of labour cost is 0.67 for the Canadian manufacturing sector. While the estimates from the "true fixed effect" model reflect these shares accurately, the BC model estimates are highly inaccurate. It was not possible to include both the time trend and the constant terms in the BC specifications, as the coefficient of the time trend becomes statistically insignificant when a constant term is included. Thus, these estimations do not include the constant term. The results of the BC specifications suggest that, on average, the level of inefficiency decays towards the base level at about 3% per year, meaning that technical efficiency improves at this rate over time, starting from the level estimated for the initial year. Nonetheless, the inconsistency of the frontier parameter estimates makes the time-decay (BC) results unreliable, and hence our discussion of technical inefficiency is limited to the results obtained from the "true fixed effect" (TFE) estimations.
In Figure 3, we present trends in technical efficiency estimated from the TFE models, with and without the RSI term, using the Battese and Coelli [15] method. (The alternative method of computing technical efficiency, that of Jondrow et al. [20], generates trends similar to the ones reported. Also, while the results reported are based on the half-normal specification for the inefficiency term, the exponential specification does not imply a significant difference in our results.) As can be seen, average technical efficiency in the manufacturing sector has exhibited a downward trend during the recent past. Efficiency that was above 90% in the mid-1980s was below 80% in 2007. Although there are ups and downs, the overall trend could be considered upward during the period 1964-1985, with the exception of the sharp declines observed between the late 1970s and early 1980s. The trend reversed after 1985 towards a generally downward pattern. The downward swing after the year 2000 is, however, remarkable compared to all the declines observed previously. As a result, average technical efficiency was lower in 2007 than it was in 1964. This result strongly suggests that technical inefficiency is a major factor behind the recent slowdown in labour productivity.

Conclusion
In this study, we apply two recently developed econometric methods: a panel stochastic frontier technique and a method for handling problems related to endogenous variables. The first is the "true fixed effect" model, proposed to overcome the problems with both the time-invariant and the time-decay inefficiency specifications of stochastic frontier models in the context of panel data. The second is the residual inclusion method proposed in the context of nonlinear models. The goal is to obtain consistent estimates of technical efficiency in the Canadian manufacturing sector.
The study provides estimates suggesting that a decline in technical efficiency in the manufacturing industries has significantly contributed to the slowdown in TFP growth during the period 2001-2007. Although technical change forms the main component of the TFP decomposition, the effects of TE are important and, in fact, appear to be the main driver of the TFP slowdown given the estimated 1.5-1.6% annual growth in TC. The drop in technical efficiency could be related to many factors, and these have to be identified and addressed in order to improve the Canadian productivity problem. However, these factors are not the same as those discussed in the existing literature, particularly inadequate investment in materials and equipment, since technical inefficiency indicates the existence of underutilized resources.

Figure 3: Estimated trends in technical efficiency. Note: meanTe tfe stands for average technical efficiency from the TFE model, while meanTe tfersi stands for average technical efficiency from the TFE model with residual inclusion (RSI).