Predicting Macroeconomic Indicators in the Czech Republic Using Econometric Models and Exponential Smoothing Techniques

Abstract

Econometric modeling and exponential smoothing techniques are two quantitative forecasting methods with good results in practice, but the objective of this research was to find out which of the two techniques is better for short-run predictions. Therefore, various accuracy indicators were calculated for predictions of the inflation, unemployment and interest rates in the Czech Republic based on these methods. Short-run forecasts on a horizon of 3 months were made for December 2011-February 2012, the econometric models being updated at each forecast origin. For the Czech Republic, the exponential smoothing techniques provided more accurate forecasts than the econometric models (VAR(2) models, the ARMA procedure and models with lagged variables). One explanation for the better performance of the smoothing techniques would be that the short-run predictions were more influenced by the recent evolution of the indicators.

Keywords: accuracy, econometric models, forecasts, forecasting methods, exponential smoothing techniques

JEL: E21, E27, C51, C53

DOI: 10.2478/v10033-012-0017-3

1. Introduction

In establishing monetary policy, decision-makers must take into account the possible future evolution of important macroeconomic variables such as the inflation rate, the unemployment rate or the interest rate, which implies knowledge of predictions of these indicators. In econometrics we can build forecasts starting from a valid model. The real problem appears when we use two or more different forecasting methods and must choose the one that generates forecasts with the higher degree of accuracy. In this article we modeled the three selected variables, made predictions for them and, using accuracy indicators, showed that exponential smoothing techniques generated better forecasts than simple econometric models in the Czech Republic.
2. Literature

To assess the accuracy of forecasts, as well as to rank them, statisticians have developed several measures of accuracy. For comparisons between the MSE indicators of forecasts, Granger and Jeon (2003) proposed a statistical measure. Another statistical measure is presented by Diebold and Mariano (1995) for the comparison of other quantitative measures of errors: they proposed a test to compare the accuracy of two forecasts under a null hypothesis that assumes no difference in accuracy. Their test was later improved by Ashley (2003), who developed a new statistical measure based on bootstrap inference. Subsequently, Diebold and Christoffersen (1998) developed a new way of measuring accuracy while preserving the cointegrating relation between variables. Armstrong and Fildes (1995) showed that the purpose of measuring a prediction error is to provide information about the form of the distribution of errors, and proposed assessing the prediction error using a loss function; they also showed that it is not sufficient to use a single measure of accuracy. Since the normal distribution is a poor approximation of the distribution for a short data series, Harvey, Leybourne and Newbold (2003) improved the small-sample properties of the test by applying some corrections: the DM statistic is modified to eliminate its bias, and it is compared not with the normal distribution but with a Student's t distribution. Clark (2006) evaluated the power of tests of equal forecast accuracy, such as modified versions of the DM test or tests based on a Bartlett kernel and a determined length of the data series. In the literature there are several traditional measures, which can be classified according to the dependence on, or independence of, their measurement scale.

* Bratu Mihaela Simionescu, Faculty of Cybernetics, Statistics and Economic Informatics, Bucharest. E-mail: mihaela_mb1@yahoo.com
A complete classification is made by Hyndman and Koehler (2005) in their reference study in the field, "Another Look at Measures of Forecast Accuracy".

Scale-dependent measures

The most used scale-dependent accuracy measures are:
- Mean Square Error: MSE = average(e_t^2)
- Root Mean Square Error: RMSE = sqrt(MSE)
- Mean Absolute Error: MAE = average(|e_t|)
- Median Absolute Error: MdAE = median(|e_t|)

RMSE and MSE are commonly used in statistical modeling, although they are affected by outliers more than the other measures.

Scale-independent errors

Measures based on percentage errors. The percentage error is given by p_t = 100 * e_t / X_t. Besides the mean absolute percentage error, this class includes:
- Median Absolute Percentage Error: MdAPE = median(|p_t|)
- Root Mean Square Percentage Error: RMSPE = sqrt(average(p_t^2))
- Root Median Square Percentage Error: RMdSPE = sqrt(median(p_t^2))

When X_t takes the value 0, the percentage error becomes infinite or undefined and the distribution of the measure is highly skewed, which is a major disadvantage. Makridakis (1984) introduced symmetric measures in order to avoid another disadvantage of MAPE and MdAPE: positive errors are penalized excessively in comparison with negative ones.
- Symmetric Mean Absolute Percentage Error: sMAPE = average(200 * |X_t - F_t| / (X_t + F_t))
- Symmetric Median Absolute Percentage Error: sMdAPE = median(200 * |X_t - F_t| / (X_t + F_t))
where F_t is the forecast of X_t.

Measures based on relative errors. The relative error is r_t = e_t / e_t*, where e_t* is the forecast error of the reference model:
- Mean Relative Absolute Error: MRAE = average(|r_t|)
- Median Relative Absolute Error: MdRAE = median(|r_t|)
- Geometric Mean Relative Absolute Error: GMRAE = geometric mean(|r_t|)
A major disadvantage is that the error of the benchmark forecast can take values close to zero, which makes the relative error explode.
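These measures can be computed directly from the error series. A minimal numerical sketch (the data values are invented for illustration; the reference forecast stands in for the benchmark model):

```python
import numpy as np

def accuracy_measures(actual, forecast, ref_forecast):
    """A few of the Hyndman-Koehler measures listed above."""
    e = actual - forecast                 # forecast errors
    p = 100 * e / actual                  # percentage errors
    r = e / (actual - ref_forecast)       # relative errors vs. the reference model
    return {
        "MSE":   np.mean(e**2),
        "RMSE":  np.sqrt(np.mean(e**2)),
        "MAE":   np.mean(np.abs(e)),
        "MdAE":  np.median(np.abs(e)),
        "MAPE":  np.mean(np.abs(p)),
        "sMAPE": np.mean(200 * np.abs(actual - forecast) / (actual + forecast)),
        "MRAE":  np.mean(np.abs(r)),
        # geometric mean computed on the log scale
        "GMRAE": np.exp(np.mean(np.log(np.abs(r)))),
    }

actual   = np.array([2.0, 2.5, 3.0])   # registered values (illustrative)
forecast = np.array([1.8, 2.6, 2.7])   # model forecasts
naive    = np.array([1.9, 2.0, 2.5])   # reference (e.g. last observed values)
m = accuracy_measures(actual, forecast, naive)
```

Note the guard implicit in the relative measures: they are well defined only when the reference errors are non-zero, which is exactly the "benchmark error close to zero" disadvantage mentioned above.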
The most common measure based on percentage errors is the Mean Absolute Percentage Error: MAPE = average(|p_t|).

Relative measures

For example, the relative RMSE is calculated as rel_RMSE = RMSE / RMSE_b, where RMSE_b is the RMSE of the benchmark model. Relative measures can be defined in the same way for MAE, MdAE and MAPE. When the benchmark model is a random walk, rel_RMSE is used, which is actually Theil's U statistic. The random walk (naïve) model is used most often, but it may be replaced by the naive2 method, in which the forecasts are based on the latest seasonally adjusted values, according to Makridakis, Wheelwright and Hyndman (1998).

Scale-free error metrics (resulting from dividing each error by an average error): Hyndman and Koehler (2005) introduce in this class the Mean Absolute Scaled Error (MASE), in order to compare the accuracy of forecasts across several time series.

In practice, the most used measures of forecast error when macroeconomic data are used are the following.

Root Mean Squared Error (RMSE):

RMSE = sqrt( (1/n) * sum_{j=1}^{n} e_X^2(T_0 + j, k) )

If two forecasts have the same mean absolute error, RMSE penalizes the one with the larger individual errors.

Mean error (ME):

ME = (1/n) * sum_{j=1}^{n} e_X(T_0 + j, k)

The sign of the indicator provides important information: a positive value means the actual values of the variable were underestimated (the predicted values are on average too small), while a negative value shows that the predicted values are too high on average.

Theil's U statistic is calculated in two variants by the Australian Treasury in order to evaluate forecast accuracy. The following notations are used: a - the registered results, p - the predicted results, t - reference time, e - the error (e = a - p), n - the number of time periods.

U1 = sqrt( sum_t (a_t - p_t)^2 ) / ( sqrt( sum_t a_t^2 ) + sqrt( sum_t p_t^2 ) )

The closer U1 is to zero, the higher the forecast accuracy.

U2 = sqrt( sum_t ( (p_{t+1} - a_{t+1}) / a_t )^2 / sum_t ( (a_{t+1} - a_t) / a_t )^2 )
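Theil's two variants defined above can be sketched as follows (the series values are invented for illustration):

```python
import numpy as np

def theil_u1(a, p):
    """U1 as defined above: 0 for a perfect forecast, values near 1 are poor."""
    return np.sqrt(np.sum((a - p)**2)) / (np.sqrt(np.sum(a**2)) + np.sqrt(np.sum(p**2)))

def theil_u2(a, p):
    """U2 compares the forecast with the naive (no-change) forecast;
    U2 < 1 means the forecast beats the naive one."""
    num = np.sum(((p[1:] - a[1:]) / a[:-1])**2)   # squared relative forecast changes vs. actual
    den = np.sum(((a[1:] - a[:-1]) / a[:-1])**2)  # squared relative naive (no-change) errors
    return np.sqrt(num / den)

a = np.array([2.0, 2.2, 2.5, 2.4])   # registered values (illustrative)
p = np.array([2.0, 2.1, 2.4, 2.5])   # predicted values
u1 = theil_u1(a, p)
u2 = theil_u2(a, p)
```

A quick sanity check on the interpretation: for a perfect forecast U1 is exactly zero, and a U2 below one signals that the forecast outperforms the naive "last value" prediction.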
Mean absolute error (MAE):

MAE = (1/n) * sum_{j=1}^{n} |e_X(T_0 + j, k)|

The interpretation of U2 is:
- if U2 = 1, there are no differences in terms of accuracy between the two forecasts compared;
- if U2 < 1, the forecast compared has a higher degree of accuracy than the naive one;
- if U2 > 1, the forecast compared has a lower degree of accuracy than the naive one.

These measures of accuracy have some disadvantages. For example, RMSE is affected by outliers. Armstrong and Collopy (2000) stress that these measures are not independent of the unit of measurement unless they are expressed as percentages, and that they include average errors with different degrees of variability. The purpose of using these indicators is to characterize the distribution of the errors. Clements and Hendry (1995) proposed a generalized version of the RMSE based on the intercorrelation of errors, for the case when at least two data series are used.

Other authors, like Fildes and Steckler (2000), use another criterion to classify accuracy measures. If we denote by X_t(k) the value predicted k periods ahead of the origin t, then the error at the future time t + k is e_t(t + k). Indicators used to evaluate forecast accuracy can also be classified according to their usage: the forecast accuracy measurement can be done independently or by comparison with another forecast. Clements and Hendry (2010) presented the most used accuracy measures in the literature, which are described below.

1. The specific loss function

Diebold, Gunther and Tay (1998) start from a loss function L(a_{t,1}, x_{t+1}), where a_{t,1} is a specific action, x_{t+1} is the future value of a random variable whose distribution is known, and f_t(.) is the density forecast. The optimal action is obtained by minimizing the expected loss when the density forecast is p_{t,1}(x_{t+1}).

The trace and the determinant of the mean square error matrix are classical measures of forecast accuracy.
The optimal action solves:

a*_{t,1} = argmin_{a in A} integral L(a_{t,1}, x_{t+1}) * p_{t,1}(x_{t+1}) dx_{t+1}

The generalized forecast error second moment (GFESM) is calculated, according to Clements and Hendry (1993), as the determinant of the expected value of the outer product of the vector of stacked forecast errors for all future moments up to the horizon of interest. If forecasts up to a horizon of h quarters are of interest, the indicator is calculated as:

GFESM = | E[ E~ E~' ] |,  where E~ = (e'_{t+1}, e'_{t+2}, ..., e'_{t+h})'

and e_{t+h} is the n-dimensional forecast error at horizon h of a model with n variables. GFESM is considered a better measure of accuracy because it is invariant to elementary operations on the variables, unlike the trace of the MSFE matrix, and it is also invariant to elementary operations on the same variables at different forecast horizons, in contrast with both the trace and the determinant of the MSFE matrix. Clements and Hendry (1993) showed that the disadvantages of the MSFE stem from its lack of invariance to non-singular, scale-preserving linear transformations: MSFE comparisons can produce inconsistent rankings of the forecast performance of different models at several steps ahead, depending on the transformation of the variables.

3. Measures of relative accuracy

A relative measure of forecast accuracy supposes the comparison of a forecast with a reference, called the "benchmark forecast" or "naïve forecast" in the literature. This remains a subjective approach in terms of the choice of the reference forecast, and problems may arise related to the existence of outliers, the inappropriate choice of the models on which the forecasts are developed, and the emergence of shocks. A first measure of relative accuracy is Theil's U statistic, for which the reference forecast is the last observed value recorded in the data series. Collopy and Armstrong proposed a similar indicator to replace the U statistic, the relative absolute error (RAE).
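The GFESM described above can be estimated from a sample of stacked forecast-error vectors. A minimal sketch with simulated errors (the dimensions and the error process are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated forecast errors: T forecast origins, h horizons, n variables
T, h, n = 200, 4, 3
errors = rng.normal(size=(T, h, n))       # e_{t+1}, ..., e_{t+h} for each origin

# Stack the h error vectors into one (h*n)-dimensional vector per origin
stacked = errors.reshape(T, h * n)

# Sample analogue of E[E~ E~'], then its determinant (the GFESM)
second_moment = stacked.T @ stacked / T
gfesm = np.linalg.det(second_moment)
```

Because the determinant is taken over the full stacked second-moment matrix, the resulting number is invariant to non-singular, scale-preserving linear transformations of the variables, which is the property the text contrasts with the MSFE trace.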
Thompson improved the MSE indicator, proposing a statistically determined MSE (the log mean squared error ratio).

The expected value of the loss function is:

E[ L(a*_{t,1}, x_{t+1}) ] = integral L(a*_{t,1}, x_{t+1}) * f_t(x_{t+1}) dx_{t+1}

One density forecast will be preferred to another, for a given loss function, if the following condition is fulfilled:

E[ L(a*_{t,1}(p_{t,1}(x_{t+1})), x_{t+1}) ] < E[ L(a*_{t,2}(p_{t,2}(x_{t+1})), x_{t+1}) ]

where a*_{t,i} is the optimal action under the density forecast p_{t,i}(x).

Making decisions based on the evaluation of forecast accuracy is important in macroeconomics, but few studies have focused on this. Notable achievements in forecast performance evaluation come from practical applications in finance and in meteorology. Recent improvements refer to the inclusion of disutility, which is present in actions in future states and takes into account the entire distribution of the forecast. Since an objective assessment of the cost of prediction errors cannot be made, only general absolute-error or squared-error loss functions can be used.

2. Mean square forecast error (MSFE) and the generalized forecast error second moment (GFESM)

The most used measure of forecast accuracy is the mean square forecast error (MSFE). In the case of a vector of variables, an MSFE matrix is built:

V_h = E[ e_{T+h} * e'_{T+h} ] = V[e_{T+h}] + E[e_{T+h}] * E[e_{T+h}]'

where e_{T+h} is the vector of errors of an h-steps-ahead forecast.

Relative accuracy can also be measured by comparing the predicted values with those based on a model built using data from the past. Tests of forecast accuracy compare an estimate of the forecast error variance derived from the past residuals with the current MSFE. To check whether the differences between the mean square errors of two alternative forecasts are statistically significant, the tests proposed by Diebold and Mariano, West, Clark and McCracken, Corradi and Swanson, and Giacomini and White are used.
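A minimal sketch of the Diebold-Mariano comparison mentioned above, using squared-error loss on one-step-ahead errors (the error series are simulated for illustration; the small-sample corrections of Harvey, Leybourne and Newbold and the autocorrelation adjustment needed for multi-step forecasts are deliberately omitted):

```python
import numpy as np

def diebold_mariano(e1, e2):
    """DM statistic for equal accuracy of two one-step-ahead forecasts
    under squared-error loss; asymptotically N(0, 1) under the null.
    Negative values favor forecast 1, positive values favor forecast 2."""
    d = e1**2 - e2**2                              # loss differential
    T = len(d)
    return np.mean(d) / np.sqrt(np.var(d, ddof=1) / T)

rng = np.random.default_rng(1)
T = 500
e1 = rng.normal(0, 1.0, T)    # errors of forecast 1
e2 = rng.normal(0, 2.0, T)    # forecast 2 is clearly less accurate
dm = diebold_mariano(e1, e2)  # strongly negative: forecast 1 wins
```

With such a large difference in error variance the statistic lies far in the left tail, so the null of equal accuracy is rejected; swapping the two error series simply flips the sign.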
Starting from a general loss function, predictive ability tests compare the accuracy of two alternative forecasts of the same variable. The first results obtained by Diebold and Mariano were later formalized, as shown by Giacomini and White (2006), by West, McCracken, Clark and McCracken, Corradi, Swanson and Olivetti, and Chao, Corradi and Swanson. Other researchers started from particular loss functions (Granger and Newbold, Leitch and Tanner, West, Edison and Cho, Harvey, Leybourne and Newbold). Recent studies target accuracy analysis using different forecasting models as the comparison criterion, or analyze the forecasted values of the same macroeconomic indicators registered in several countries. Ericsson (1992) shows that parameter stability and the mean square error of prediction are two key measures in the evaluation of forecast accuracy; however, they are not sufficient, and it is necessary to introduce a new statistical test. Granger and Jeon (2003) consider four models for U.S. inflation: a univariate model, a model based on an indicator used to measure inflation, a univariate model based on the two previous models, and a bivariate model. Applying the mean square error criterion, the best predictions are those based on an autoregressive model of order 1 (AR(1)); applying a distance-time method, the best model is the one based on an indicator used to measure inflation. Ledolter (2006) compares the mean square error of ex-post and ex-ante forecasts of regression models with a transfer function with that of univariate models that ignore the covariance, and shows the superiority of the predictions based on transfer functions. Teräsvirta et al. (2005) examine the accuracy of forecasts based on linear autoregressive, smooth transition autoregressive (STAR) and neural network (NN) time series models for 47 monthly macroeconomic variables of the G7 economies.
For each model a dynamic specification is used, and it is shown that STAR models generate better forecasts than linear autoregressive ones. At long forecast horizons, neural network models built with a specific-to-general approach generated better predictions than the other models. Heilemann and Stekler (2007) explain why macroeconomic forecast accuracy for the G7 has not improved in the last 50 years. The first explanation refers to the critique of macroeconomic models and of forecasting models, and the second is related to unrealistic expectations of forecast accuracy. Problems related to forecast bias, data quality, the forecast process, the predicted indicators, and the relationship between forecast accuracy and the forecast horizon are analyzed. Ruth (2008), using empirical studies, obtains forecasts with a higher degree of accuracy for European macroeconomic variables by combining forecasts of specific subgroups, in comparison with forecasts based on a single model for the whole Union. Gorr (2009) shows that univariate prediction methods are suitable for normal forecasting conditions when conventional accuracy measures are used, while multivariate models are recommended for predicting exceptional conditions, when an ROC curve is used to measure accuracy. Dovern and Weisser (2011) use a broad set of individual forecasts to analyze four macroeconomic variables in the G7 countries. Analyzing accuracy, bias and forecast efficiency revealed large discrepancies between countries, as well as within the same country for different variables. In general, the forecasts are biased, and only a fraction of the GDP forecasts are close to the results registered in reality. In the Netherlands, experts make predictions starting from a macroeconomic model used by the Netherlands Bureau for Economic Policy Analysis (CPB). For the period 1997-2008, the evolution of the experts' macroeconomic variables was reconstructed and compared with the base model.
The conclusions of Franses, Kranendonk and Lanser (2011) are that the CPB model forecasts are in general biased and have a relatively high degree of accuracy.

3. The Models Used to Make Macroeconomic Forecasts

The variables used in the models are the inflation rate (calculated starting from the harmonized index of consumer prices), the unemployment rate and the short-term interest rate. The last indicator is calculated as the average of the daily values of interest rates on the market. The data series are monthly and are taken from the Eurostat website for the period February 1999-October 2011 for the Czech Republic. The indicators are expressed in comparable prices, the reference base being the values of January 1999. We eliminated the influence of seasonal factors on the inflation rate using the Census X11 (historical) method. In the Czech Republic, only the data series for the inflation and unemployment rates had to be transformed to become stationary. Taking into account that our objective is the achievement of one-month-ahead forecasts for December 2011, January 2012 and February 2012, we considered it necessary to update the models at each forecast origin.

Table 1: Indicators of forecast accuracy for the inflation, unemployment and interest rate (the Czech Republic)

Inflation rate       VAR(2)        ARMA         Models with lag
RMSE                 0,17051339    0,8532325    3,6277209
ME                   -0,6694       0,0955       -3,9449
MAE                  1,3694        0,6045       4,6449
MPE                  -0,0650       -0,0336      -0,2550
U1                   0,051257      0,017019     0,151515
U2                   1,388935      0,981571     2,980709

Unemployment rate    VAR(2)        ARMA         Models with lag
RMSE                 0,57231311    0,3635292    2,0922862
ME                   -0,51277      -0,3693      -2,09223
MAE                  0,512767      0,3693       2,092233
MPE                  -0,07696      -0,5302      -0,31383
U1                   0,040086      0,36058      0,186124
U2                   3,914625      14,99092     15,89517

Interest rate        VAR(2)
RMSE                 0,03663478
ME                   0,0052
MAE                  0,0164
MPE                  0,0100
U1                   0,014359
U2                   0,761926

Source: own calculations using Excel.
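The updating scheme described above (re-estimating the models before each one-month-ahead forecast) can be sketched as an expanding-window exercise. This is a minimal illustration with a hand-rolled AR(1) on an invented series, not the paper's VAR/ARMA models or data:

```python
import numpy as np

def fit_ar1(y):
    """OLS estimates of y_t = c + phi * y_{t-1} + u_t."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c, phi

rng = np.random.default_rng(2)
# Illustrative stationary monthly series (true AR(1) with phi = 0.7)
y = np.empty(156)
y[0] = 0.0
for t in range(1, 156):
    y[t] = 0.5 + 0.7 * y[t - 1] + rng.normal(scale=0.3)

# One-step-ahead forecasts for the last 3 observations,
# re-estimating ("updating") the model at each forecast origin
forecasts, actuals = [], []
for origin in range(153, 156):
    c, phi = fit_ar1(y[:origin])                 # use all data available so far
    forecasts.append(c + phi * y[origin - 1])    # one-month-ahead forecast
    actuals.append(y[origin])

errors = np.array(actuals) - np.array(forecasts)
rmse = np.sqrt(np.mean(errors**2))
```

The resulting three errors are exactly the inputs the accuracy indicators of Table 1 are computed from (RMSE, ME, MAE, and so on).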
We used three types of models: a VAR(2) model, an ARMA model and a model in which the inflation and interest rates are explained using lagged variables. The econometric models used for the Czech Republic are specified in Appendix 1. We developed one-month-ahead forecasts starting from these models and then evaluated their accuracy; the one-step-ahead forecasts for the 3 months are presented in Appendix 2.

4. The Assessment of Accuracy for Predictions Based on Econometric Models

A generalization of the Diebold-Mariano (DM) test is used to determine whether the trace of the MSFE matrix of the model with aggregated variables is significantly lower than that of the model in which the forecasts are aggregated. If the MSFE determinant is used, according to Athanasopoulos and Vahid (2005) the DM test cannot be applied in this version, because the difference between the two models' MSFE determinants cannot be written as an average; in this case a test that uses a bootstrap method is recommended. The DM statistic is calculated as:

DM = sqrt(T) * [ tr(MSFE_VAR(2))_h - tr(MSFE_ARMA)_h ] / s
   = (1/sqrt(T)) * sum_{t=1}^{T} ( e_{m1,1,t}^2 + e_{m2,1,t}^2 + e_{m3,1,t}^2 - e_{r1,1,t}^2 - e_{r2,1,t}^2 - e_{r3,1,t}^2 ) / s   (1)

where:
T - the number of months for which forecasts are developed
e_{mi,h,t} - the h-steps-ahead forecast error of variable i at time t for the VAR(2) model
e_{ri,h,t} - the h-steps-ahead forecast error of variable i at time t for the ARMA model
s - the square root of a consistent estimator of the limiting variance of the numerator

The null hypothesis of the test is that the two forecasts have the same accuracy. Under this assumption, and under the usual conditions of the central limit theorem for weakly correlated processes, the DM statistic follows a standard normal asymptotic distribution. For the variance, the Newey-West estimator with the lag-truncation parameter set to h - 1 is used.

We compared the accuracy of the predictions of all three variables over the 3 months, for predictions made starting from the VAR(2) and the ARMA models. The DM statistic for the forecasts based on the VAR models is higher than that for the forecasts based on the ARMA models. In Table 1 the accuracy indicators for the predictions are displayed. In the Czech Republic, when an econometric model was used to make forecasts, the ARMA procedure was the most suitable for the inflation rate, while the best results for the unemployment and interest rates were given by the VAR(2) models. However, only the predictions based on the ARMA model for the inflation rate and on the VAR model for the interest rate are better than those of the naïve model. For the Czech Republic, only the VAR and ARMA models could be built to explain the evolution of the interest rate; the best results for the interest rate are also given by the VAR models.

5. The Assessment of Accuracy for Predictions Based on Exponential Smoothing Techniques

Like econometric modeling, exponential smoothing is a technique used to make forecasts. It is a simple method that takes into account the more recent data: recent observations in the data series are given more weight in the prediction than older values, the weights decreasing exponentially over time.

Simple exponential smoothing (M1)

The technique can be applied to stationary data to make short-run forecasts. Starting from the decomposition of each rate, R_n = a + u_n, where a is a constant and u_n the residual, the prediction for the next period is:

R^_{n+1} = alpha * R_n + (1 - alpha) * R^_n,  n = 1, 2, ...   (2)

where alpha is a smoothing factor, with values between 0 and 1, determined by minimizing the sum of squared prediction errors:

min_alpha sum_n ( R_{n+1} - R^_{n+1} )^2 = min_alpha sum_n epsilon_{n+1}^2   (3)

Each future smoothed value is calculated as a weighted average of the n past observations:

R^_{n+1} = alpha * sum_{i=0}^{n-1} (1 - alpha)^i * R_{n-i}   (4)

Holt-Winters simple exponential smoothing (M2)

The method is recommended for data series with a linear trend and without seasonal variations, the forecast being determined as:

R^_{n+k} = a_n + b_n * k   (5)

with the level and the trend updated recursively:

a_n = alpha * R_n + (1 - alpha) * (a_{n-1} + b_{n-1})
b_n = beta * (a_n - a_{n-1}) + (1 - beta) * b_{n-1}   (6)

Finally, the predicted value at horizon k is:

R^_{n+k} = a^_n + b^_n * k   (7)

Holt-Winters multiplicative exponential smoothing (M3)

This technique is used when the trend is linear and the seasonal variation follows a multiplicative model. The smoothed series is:

R^_{n+k} = (a_n + b_n * k) * c_{n+k-s}   (8)

where a is the intercept, b the trend, c the multiplicative seasonal factor and s the seasonal frequency:

a_n = alpha * R_n / c_{n-s} + (1 - alpha) * (a_{n-1} + b_{n-1})
b_n = beta * (a_n - a_{n-1}) + (1 - beta) * b_{n-1}
c_n = gamma * R_n / a_n + (1 - gamma) * c_{n-s}   (9)

The prediction is:

R^_{n+k} = (a^_n + b^_n * k) * c^_{n+k-s}   (10)

Holt-Winters additive exponential smoothing (M4)

This technique is used when the trend is linear and the seasonal variation follows an additive model. The smoothed series is:

R^_{n+k} = a_n + b_n * k + c_{n+k-s}

where a is the intercept, b the trend and c the additive seasonal factor:

a_n = alpha * (R_n - c_{n-s}) + (1 - alpha) * (a_{n-1} + b_{n-1})
b_n = beta * (a_n - a_{n-1}) + (1 - beta) * b_{n-1}
c_n = gamma * (R_n - a_n) + (1 - gamma) * c_{n-s}   (11)

The prediction is:

R^_{n+k} = a^_n + b^_n * k + c^_{n+k-s}   (12)

Double exponential smoothing (M5)

This technique is recommended when the trend is linear, two recursive equations being used:

S_n = alpha * R_n + (1 - alpha) * S_{n-1}
D_n = alpha * S_n + (1 - alpha) * D_{n-1}   (13)

where S and D are the simple and the double smoothed series, respectively.

In Table 2 the accuracy indicators for the predictions based on the exponential smoothing techniques are presented for all three variables.

Table 2: Measures of accuracy for forecasts based on exponential smoothing techniques for the inflation, unemployment and interest rate (the Czech Republic); for each variable, the rows cover the methods M1-M5 and the columns the indicators RMSE, ME, MAE, MPE, U1 and U2. Source: own computations using Excel.

Analyzing the values of these indicators, the exponential smoothing techniques provided the most accurate predictions for all indicators in the Czech Republic, outperforming the econometric models. For the inflation rate the best method was the additive exponential smoothing technique, while for the unemployment and interest rates the simple exponential smoothing technique generated the best results, due to a value of U1 very close to zero. All of the predictions for the unemployment rate based on the exponential smoothing techniques are more accurate than those based on the naïve model. All forecasts are overestimates on the chosen horizon, except for the unemployment rate in the case of the Holt-Winters and double smoothing methods, and for the interest rate when the additive technique is used. The low values of RMSE imply a low variability of the data series.

6. Conclusions

In our research we set out to check whether exponential smoothing techniques generate better short-run predictions than simple econometric models. According to recent research, simple econometric models are recommended for forecasts due to their high degree of accuracy. For the prognosis made for the Czech Republic for December 2011-February 2012 this hypothesis was not supported: the exponential smoothing techniques were more accurate. In the Czech Republic the recent values of the data series used for the predictions have the greatest importance; therefore, the exponential smoothing methods determined the best results in terms of forecast accuracy, and the simple and additive exponential smoothing techniques are recommended for the Czech Republic.

To improve policy, we can use monthly forecasts based on the better method for this country: policy is improved by choosing the most accurate forecast, which helps the government and banks make the best decisions. In our study we analyzed the results of only two quantitative methods, but the research could be extended by adding other quantitative forecasting methods, by using qualitative methods, or by building predictions based on combinations of the two types of methods.

South East European Journal of Economics and Business, de Gruyter
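Returning to the smoothing methods of Section 5: the recursions in equations (2), (6) and (7) can be sketched in code. This is a minimal illustration on an invented trending series, with the smoothing factors alpha and beta fixed by hand rather than optimized by least squares as in equation (3):

```python
import numpy as np

def simple_exponential_smoothing(y, alpha):
    """M1: one-step-ahead forecast via the recursion of equation (2)."""
    f = y[0]                                  # initialize with the first observation
    for obs in y[1:]:
        f = alpha * obs + (1 - alpha) * f
    return f                                  # forecast for period n+1

def holt(y, alpha, beta, k=1):
    """M2 (Holt's linear trend): level/trend recursions of equation (6),
    forecast a_n + b_n * k as in equation (7)."""
    a, b = y[0], y[1] - y[0]                  # a common initialization choice
    for obs in y[2:]:
        a_prev = a
        a = alpha * obs + (1 - alpha) * (a + b)
        b = beta * (a - a_prev) + (1 - beta) * b
    return a + b * k

y = np.array([10.0, 10.4, 10.9, 11.5, 12.2, 12.8])   # illustrative trending series
f1 = simple_exponential_smoothing(y, alpha=0.5)
f2 = holt(y, alpha=0.5, beta=0.3, k=1)
```

On a trending series the two methods behave as the text describes: Holt's forecast extrapolates the trend, so f2 exceeds the last observation, while simple smoothing lags behind it, which is why M1 suits stationary data and M2 suits trending data.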



Publisher: de Gruyter
Copyright © 2012
ISSN: 1840-118X
DOI: 10.2478/v10033-012-0017-3

Abstract

Econometric modeling and exponential smoothing techniques are two quantitative forecasting methods with good results in practice, but the objective of the research was to find out which of the two techniques are better for short run predictions. Therefore, for inflation, unemployment and interest rate in the Czech Republic various accuracy indicators were calculated for the predictions based on these methods. Short run forecasts on a horizon of 3 months were made for December 2011-February 2012, the econometric models being updated. For the Czech Republic, the exponential smoothing techniques provided more accurate forecasts than the econometric models (VAR(2) models, ARMA procedure and models with lagged variables). One explication for the better performance of smoothing techniques would be that in the chosen countries the short run predictions were more influenced by the recent evolution of the indicators. Keywords: accuracy, econometric models, forecasts, forecasting methods, smoothing exponential techniques JEL: E21, E27,C51, C53 DOI: 10.2478/v10033-012-0017-3 1. Introduction In establishing monetary policy, decision-makers must take into account the possible future evolution of important macroeconomic variables such as the inflation rate, unemployment rate or interest rate. This fact implies knowledge of the predictions of these indicators. In econometrics we can build forecasts starting from a valid model. The real problem appears when we use two or more different forecasting methods and we must choose the one which generated forecasts with the higher degree of accuracy. In this article, we modeled the three selected variables and made predictions for them. Using indicators of accuracy we demonstrated that the smoothing exponential techniques generated better forecasts than simple econometric models in the Czech Republic. of accuracy. For comparisons between the MSE indicators of forecasts, Granger and Jeon (2003) proposed a statistical measure. 
Another statistical measure is presented by Diebold and Mariano (1995) for the comparison of other quantitative measures of errors. Diebold and Marianot proposed in 1995 a test to compare the accuracy of two forecasts under a null hypothesis that assumes no differences in accuracy. The test proposed by them was later improved by Ashley (2003), who developed a new statistical measure based on a bootstrap inference. Subsequently, Diebold and Christoffersen (1998) have developed a new way of measuring the accuracy while preserving the cointegrating relation between variables. Armstrong and Fildes (1995) showed that the purpose * Bratu Mihaela Simionescu Faculty of Cybernetics, Statistics and Economic Informatics- Bucharest E-mail: mihaela_mb1@yahoo.com 2. Literature To assess the accuracy of forecasts, as well as their ordering, statisticians have developed several measures of measuring an error of prediction is to provide information about the distribution of errors form and proposed assessing the prediction error using a loss function. They showed that it is not sufficient to use a single measure of accuracy. Since the normal distribution is a poor approximation of the distribution of a low-volume data series, Harvey, Leybourne, and Newbold (2003) improved the properties of the small length data series, applying some corrections: the change of DM statistics to eliminate the bias and the comparison of this statistical measure not with normal distribution, but with a T-Student distribution. Clark (2006) evaluated the power of equality forecast accuracy tests, such as modified versions of the DM test or those based on a Bartlett core and a determined length of data series. In the literature, there are several traditional ways of measurement, which can be ranked according to the dependence or independence of their measurement scale. 
A complete classification is made by Hyndman and Koehler (2005) in their reference study in the field, "Another Look at Measures of Forecast Accuracy".

Scale-dependent measures. Denoting by e_t the forecast error at time t, the most used scale-dependent accuracy measures are:
- Mean Square Error (MSE) = average(e_t^2)
- Root Mean Square Error (RMSE) = sqrt(MSE)
- Mean Absolute Error (MAE) = average(|e_t|)
- Median Absolute Error (MdAE) = median(|e_t|)
RMSE and MSE are commonly used in statistical modeling, although they are affected by outliers more than other measures.

Scale-independent measures.
Measures based on percentage errors. The percentage error is given by p_t = 100 e_t / X_t, where X_t is the actual value. The most common measures based on percentage errors are:
- Mean Absolute Percentage Error (MAPE) = average(|p_t|)
- Median Absolute Percentage Error (MdAPE) = median(|p_t|)
- Root Mean Square Percentage Error (RMSPE) = sqrt(average(p_t^2))
- Root Median Square Percentage Error (RMdSPE) = sqrt(median(p_t^2))
When X_t takes the value 0, the percentage error becomes infinite or is not defined, and the distribution of the measure is highly skewed, which is a major disadvantage. Makridakis (1984) introduced symmetric measures in order to avoid another disadvantage of MAPE and MdAPE, namely that positive errors are penalized more heavily than negative ones. Denoting by F_t the forecast of X_t:
- Symmetric Mean Absolute Percentage Error (sMAPE) = average(200 |X_t - F_t| / (X_t + F_t))
- Symmetric Median Absolute Percentage Error (sMdAPE) = median(200 |X_t - F_t| / (X_t + F_t))

Measures based on relative errors. The relative error is r_t = e_t / e*_t, where e*_t is the forecast error of a reference (benchmark) model:
- Mean Relative Absolute Error (MRAE) = average(|r_t|)
- Median Relative Absolute Error (MdRAE) = median(|r_t|)
- Geometric Mean Relative Absolute Error (GMRAE) = geometric mean(|r_t|)
A major disadvantage of these measures appears when the error of the benchmark forecast is very small, which inflates the ratio.
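As an illustration, the scale-dependent, percentage and relative error measures above can be computed as follows. This is our own sketch, not the paper's calculations (which were done in Excel); the function names and the series used are purely illustrative:

```python
def errors(actual, forecast):
    # forecast errors e_t = X_t - F_t
    return [a - f for a, f in zip(actual, forecast)]

def mse(actual, forecast):
    e = errors(actual, forecast)
    return sum(x * x for x in e) / len(e)

def rmse(actual, forecast):
    return mse(actual, forecast) ** 0.5

def mae(actual, forecast):
    e = errors(actual, forecast)
    return sum(abs(x) for x in e) / len(e)

def mape(actual, forecast):
    # percentage errors p_t = 100 * e_t / X_t; undefined when X_t == 0
    p = [100 * (a - f) / a for a, f in zip(actual, forecast)]
    return sum(abs(x) for x in p) / len(p)

def smape(actual, forecast):
    # symmetric version: 200 * |X_t - F_t| / (X_t + F_t)
    s = [200 * abs(a - f) / (a + f) for a, f in zip(actual, forecast)]
    return sum(s) / len(s)

def mrae(actual, forecast, benchmark):
    # relative errors r_t = e_t / e*_t against a benchmark forecast
    r = [(a - f) / (a - b) for a, f, b in zip(actual, forecast, benchmark)]
    return sum(abs(x) for x in r) / len(r)
```

Note that mape fails when an actual value is zero and mrae fails when a benchmark error is zero, which mirrors the disadvantages discussed above.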
Relative measures. For example, the relative RMSE is calculated as rel_RMSE = RMSE / RMSE_b, where RMSE_b is the RMSE of the benchmark model. Relative measures can be defined in the same way for MAE, MdAE and MAPE. When the benchmark model is a random walk, rel_RMSE is used, which is actually Theil's U statistic. The random walk, or naive model, is used the most, but it may be replaced with the naive2 method, in which the forecasts are based on the latest seasonally adjusted values, according to Makridakis, Wheelwright and Hyndman (1998).

Scale-free error metrics (resulting from dividing each error by an average error): Hyndman and Koehler (2005) introduce in this class the Mean Absolute Scaled Error (MASE), in order to compare the accuracy of forecasts across several time series.

In practice, the most used measures of forecast error when macroeconomic data are used are described below, with the following notations: a - the registered results, p - the predicted results, t - the reference time, e - the error (e = a - p), n - the number of time periods, and e_X(T_0 + j, k) - the k-steps-ahead error of variable X at origin T_0 + j.

Root Mean Squared Error:
RMSE = sqrt( (1/n) * sum_{j=1..n} e_X(T_0 + j, k)^2 )

Mean Error:
ME = (1/n) * sum_{j=1..n} e_X(T_0 + j, k)

The sign of the ME provides important information: if it has a positive value, the current values of the variable were underestimated, which means the expected values are on average too small; a negative value shows that the expected values are on average too high.

Theil's U statistic is calculated in two variants by the Australian Treasury in order to evaluate forecast accuracy:

U1 = sqrt( sum_t (a_t - p_t)^2 ) / [ sqrt( sum_t a_t^2 ) + sqrt( sum_t p_t^2 ) ]

U2 = sqrt( sum_t ( (p_{t+1} - a_{t+1}) / a_t )^2 / sum_t ( (a_{t+1} - a_t) / a_t )^2 )

The closer U1 is to zero, the higher the forecast accuracy.
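The two variants of Theil's U can be sketched in code as follows (our own illustrative implementation; the arguments are assumed to be aligned series of actual and predicted values):

```python
def theil_u1(actual, predicted):
    # U1 lies in [0, 1]; values closer to 0 indicate higher accuracy
    num = sum((a - p) ** 2 for a, p in zip(actual, predicted)) ** 0.5
    den = sum(a * a for a in actual) ** 0.5 + sum(p * p for p in predicted) ** 0.5
    return num / den

def theil_u2(actual, predicted):
    # compares one-step-ahead relative errors with those of the naive
    # (random walk) forecast; U2 < 1 means better than the naive forecast
    num = sum(((p1 - a1) / a0) ** 2
              for a0, a1, p1 in zip(actual, actual[1:], predicted[1:]))
    den = sum(((a1 - a0) / a0) ** 2
              for a0, a1 in zip(actual, actual[1:]))
    return (num / den) ** 0.5
```

A forecast that simply repeats the last observed value reproduces the naive benchmark, so its U2 equals 1.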
Mean Absolute Error:
MAE = (1/n) * sum_{j=1..n} |e_X(T_0 + j, k)|

For U2 the interpretation is: if U2 = 1, there is no difference in accuracy between the compared forecast and the naive one; if U2 < 1, the compared forecast has a higher degree of accuracy than the naive one; if U2 > 1, the compared forecast has a lower degree of accuracy than the naive one.

These measures of accuracy have some disadvantages. For example, the RMSE is affected by outliers. Armstrong and Collopy (2000) stress that these measures are not independent of the unit of measurement unless they are expressed as percentages, and that they mix average errors with different degrees of variability. The purpose of using these indicators is to characterize the distribution of the errors. Clements and Hendry (1995) proposed a generalized version of the RMSE based on the intercorrelation of errors, for the case when at least two series of macroeconomic data are used. If two forecasts have the same mean absolute error, the RMSE penalizes the one with the largest errors.

Other authors, like Fildes and Steckler (2000), use another criterion to classify accuracy measures. If X_t(k) denotes the value predicted k periods ahead from the origin t, the error at the future time t + k is e_t(t + k). Indicators used to evaluate forecast accuracy can then be classified according to their usage: the forecast accuracy measurement can be done independently or by comparison with another forecast. Clements and Hendry (2010) presented the most used accuracy measures in the literature, which are described below.

1. The specific loss function. Diebold, Gunther and Tay (1998) started from a loss function L(a_{t,1}, x_{t+1}), where a is a specific action, x_{t+1} is the future value of a random variable whose distribution is known, and f(.) is the density forecast. The optimal action minimizes the expected loss under the density forecast p_{t,1}(x):

a*_{t,1} = arg min_a INTEGRAL L(a_{t,1}, x_{t+1}) p_{t,1}(x_{t+1}) dx_{t+1}

The expected value of the loss function is:

E[L(a*_{t,1}, x_{t+1})] = INTEGRAL L(a*_{t,1}, x_{t+1}) f(x_{t+1}) dx_{t+1}

A density forecast will be preferred to another one, for a given loss function, if the following condition is fulfilled:

E[L(a*_{t,1}(p_{t,1}(x_{t+1})), x_{t+1})] < E[L(a*_{t,2}(p_{t,2}(x_{t+1})), x_{t+1})]

where a*_{t,i} is the optimal action under the density forecast p_{t,i}(x). Making decisions based on the evaluation of forecast accuracy is important in macroeconomics, but few studies have focused on this; notable achievements on forecast performance evaluation were made in practical applications in finance and in meteorology. Recent improvements refer to the inclusion of disutility, which is attached to actions in future states and takes into account the entire distribution of the forecast. Since an objective assessment of the cost of prediction errors cannot be made, only general absolute-error or squared-error loss functions can be used.

2. Mean square forecast error (MSFE) and the generalized forecast error second moment (GFESM). The most used measure of forecast accuracy is the mean square forecast error (MSFE). In the case of a vector of variables, an MSFE matrix is built:

V_h = E[e_{T+h} e'_{T+h}] = V[e_{T+h}] + E[e_{T+h}] E[e'_{T+h}]

where e_{T+h} is the vector of errors of an h-steps-ahead forecast. The trace and the determinant of this matrix are classical measures of forecast accuracy.

The GFESM is calculated, according to Clements and Hendry (1993), as the determinant of the expected value of the outer product of the vector of forecast errors stacked over all future moments up to the horizon of interest. If forecasts up to a horizon of h quarters are of interest, this indicator is calculated as:

GFESM = | E[ E E' ] |, with E = (e'_{t+1}, e'_{t+2}, ..., e'_{t+h})'

where e_{t+j} is the n-dimensional forecast error at horizon j of a model with n variables. GFESM is considered a better measure of accuracy because it is invariant to elementary operations with the variables, unlike the trace of the MSFE matrix, and it is also invariant to elementary operations applied to the same variables at different forecast horizons, in contrast with both the trace and the determinant of the MSFE matrix. Clements and Hendry (1993) showed that these disadvantages of the MSFE are determined by its lack of invariance to non-singular, scale-preserving linear transformations; under such transformations of the variables, MSFE comparisons produced inconsistent rankings of the multi-step forecast performance of different models.

3. Measures of relative accuracy. A relative measure for assessing forecast accuracy supposes the comparison of a forecast with a reference, which is called a "benchmark forecast" or "naive forecast" in the literature. This remains a subjective approach in terms of the choice of the forecast used for comparison, and problems may arise from the existence of outliers, from the inappropriate choice of the models on which the forecasts are based, or from the emergence of shocks. A first measure of relative accuracy is Theil's U statistic, for which the reference forecast is the last observed value recorded in the data series. Collopy and Armstrong proposed a similar indicator to replace the U statistic, the Relative Absolute Error (RAE), and Thompson improved the MSE indicator, proposing a statistically determined MSE (the log mean squared error ratio).

Relative accuracy can also be measured by comparing predicted values with those based on a model built using data from the past. Tests of forecast accuracy compare an estimate of the forecast error variance derived from the past residuals with the current MSFE. To check whether the differences between the mean square errors corresponding to two alternative forecasts are statistically significant, the tests proposed by Diebold and Mariano, West, Clark and McCracken, Corradi and Swanson, and Giacomini and White are used.
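The Diebold-Mariano test mentioned above can be sketched for the univariate, squared-error-loss case as follows. This is a simplified illustration of our own (for h = 1 the long-run variance reduces to the sample variance of the loss differential; a full implementation would use the Newey-West estimator):

```python
import math

def dm_test(e1, e2, h=1):
    # loss differential d_t under squared-error loss
    d = [a * a - b * b for a, b in zip(e1, e2)]
    n = len(d)
    dbar = sum(d) / n

    def gamma(k):
        # sample autocovariance of d at lag k
        return sum((d[t] - dbar) * (d[t - k] - dbar) for t in range(k, n)) / n

    # rectangular-window long-run variance with h - 1 lags
    var = gamma(0) + 2 * sum(gamma(k) for k in range(1, h))
    stat = dbar / math.sqrt(var / n)
    # two-sided p-value under the asymptotic N(0, 1) distribution
    p = 1 - math.erf(abs(stat) / math.sqrt(2))
    return stat, p
```

Under the null hypothesis of equal accuracy the statistic is asymptotically standard normal, so a large absolute value of the statistic rejects equal accuracy.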
Starting from a general loss function, predictive ability tests compare the accuracy of two alternative forecasts of the same variable. The first results obtained by Diebold and Mariano were formalized, as shown by Giacomini and White (2006), by West, McCracken, Clark and McCracken, Corradi, Swanson and Olivetti, and Chao, Corradi and Swanson. Other researchers started from particular loss functions (Granger and Newbold, Leitch and Tanner, West, Edison and Cho, Harvey, Leybourne and Newbold). Recent studies analyze accuracy using as a comparison criterion the different models used in making the predictions, or by comparing the forecasted values of the same macroeconomic indicators across several countries. Ericsson (1992) shows that parameter stability and the mean square error of prediction are two key measures in the evaluation of forecast accuracy; however, they are not sufficient, and it is necessary to introduce a new statistical test. Granger and Jeon (2003) consider four models for U.S. inflation: a univariate model, a model based on an indicator used to measure inflation, a univariate model based on the two previous models, and a bivariate model. Applying the mean square error criterion, the best predictions are those based on an autoregressive model of order 1 (AR(1)); applying a distance-time method, the best model is the one based on an indicator used to measure inflation. Ledolter (2006) compares the mean square error of ex-post and ex-ante forecasts of regression models with a transfer function with that of univariate models that ignore the covariance, and shows the superiority of the predictions based on transfer functions. Teräsvirta et al. (2005) examine the accuracy of forecasts based on linear autoregressive, smooth transition autoregressive (STAR) and neural network (NN) time series models for 47 monthly macroeconomic variables of the G7 economies.
For each model a dynamic specification is used, and it is shown that STAR models generate better forecasts than linear autoregressive ones. At long forecast horizons, neural network models built with a specific-to-general approach generated better predictions. Heilemann and Stekler (2007) explain why macroeconomic forecast accuracy for the G7 has not improved in the last 50 years. The first explanation refers to the critique of macroeconomic and forecasting models, and the second is related to unrealistic expectations of forecast accuracy. Problems related to forecast bias, data quality, the forecasting process, the predicted indicators, and the relationship between forecast accuracy and the forecast horizon are analyzed. Ruth (2008), using empirical studies, obtains forecasts with a higher degree of accuracy for European macroeconomic variables by combining forecasts of specific subgroups, in comparison with forecasts based on a single model for the whole Union. Gorr (2009) shows that univariate prediction methods are suitable for normal forecasting conditions, when conventional accuracy measures are used, while multivariate models are recommended for predicting exceptional conditions, when an ROC curve is used to measure accuracy. Dovern and Weisser (2011) use a broad set of individual forecasts to analyze four macroeconomic variables in the G7 countries. The analysis of accuracy, bias and forecast efficiency revealed large discrepancies between countries, as well as within the same country for different variables; in general, the forecasts are biased, and only a fraction of the GDP forecasts are close to the results registered in reality. In the Netherlands, experts make predictions starting from a macroeconomic model used by the Netherlands Bureau for Economic Policy Analysis (CPB). For the period 1997-2008 the experts' forecasts of the evolution of the macroeconomic variables were reconstructed and compared with those of the base model.
The conclusions of Franses, Kranendonk and Lanser (2011) are that the CPB forecasts are in general biased, but have a higher degree of accuracy.

3. The Models Used to Make Macroeconomic Forecasts

The variables used in the models are: the inflation rate, calculated starting from the harmonized index of consumer prices, the unemployment rate and the short term interest rate. The last indicator is calculated as the average of the daily values of the interest rates on the market. The data series are monthly and are taken from the Eurostat website for the period from February 1999 to October 2011 for the Czech Republic. The indicators are expressed in comparable prices, the reference base being the values of January 1999. We eliminated the influence of seasonal factors from the inflation rate using the Census X11 (historical) method. For the Czech Republic, only the data series for the inflation and unemployment rates had to be transformed to become stationary. Taking into account that our objective is the achievement of one-month-ahead forecasts for December 2011, January 2012 and February 2012, we considered it necessary to update the models.

Inflation rate       VAR(2)        ARMA         Models with lag
RMSE                 0,17051339    0,8532325    3,6277209
ME                   -0,6694       0,0955       -3,9449
MAE                  1,3694        0,6045       4,6449
MPE                  -0,0650       -0,0336      -0,2550
U1                   0,051257      0,017019     0,151515
U2                   1,388935      0,981571     2,980709

Unemployment rate    VAR(2)        ARMA
RMSE                 0,57231311    2,0922862
ME                   -0,51277      -2,09223
MAE                  0,512767      2,092233
MPE                  -0,07696      -0,31383
U1                   0,040086      0,186124
U2                   3,914625      15,89517

Interest rate        VAR(2)        ARMA
RMSE                 0,03663478    0,3635292
ME                   0,0052        -0,3693
MAE                  0,0164        0,3693
MPE                  0,0100        -0,5302
U1                   0,014359      0,36058
U2                   0,761926      14,99092

Table 1: Indicators of forecast accuracy for the inflation, unemployment and interest rate (the Czech Republic)
Source: own calculations using Excel.
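The paper does not state which transformation was applied to obtain stationarity; a common choice for monthly rate series is first (or seasonal) differencing, which can be sketched as:

```python
def difference(series, lag=1):
    # first differences (lag=1) remove a stochastic trend;
    # for monthly data, lag=12 removes a stable seasonal pattern
    return [series[t] - series[t - lag] for t in range(lag, len(series))]
```

A differenced series has lag fewer observations than the original, so forecasts of the differenced series must be integrated back to the level of the original series.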
We used three types of models: a VAR(2) model, an ARMA model and a model in which the inflation and interest rates are explained using lagged variables. The econometric models used for the Czech Republic are specified in Appendix 1. We developed one-month-ahead forecasts starting from these models and then evaluated their accuracy. The one-step-ahead forecasts for the 3 months are presented in Appendix 2.

4. The Assessment of Accuracy for Predictions Based on Econometric Models

A generalization of the Diebold-Mariano (DM) test is used to determine whether the trace of the MSFE matrix of one model is significantly lower than that of another. If the MSFE determinant is used then, according to Athanasopoulos and Vahid (2005), the DM test cannot be applied in this version, because the difference between the two models' MSFE determinants cannot be written as an average; in this case a test that uses a bootstrap method is recommended. The DM statistic is calculated as:

DM = sqrt(T) * [ tr(MSFE_VAR(2))_h - tr(MSFE_ARMA)_h ] / s
   = sqrt(T) * (1/T) * sum_t ( e_{m1,1,t}^2 + e_{m2,1,t}^2 + e_{m3,1,t}^2 - e_{r1,1,t}^2 - e_{r2,1,t}^2 - e_{r3,1,t}^2 ) / s    (1)

where T is the number of months for which forecasts are developed, e_{mi,h,t} is the h-steps-ahead forecast error of variable i at time t for the VAR(2) model, e_{ri,h,t} is the h-steps-ahead forecast error of variable i at time t for the ARMA model, and s is the square root of a consistent estimator of the limiting variance of the numerator.

The null hypothesis of the test refers to the equal accuracy of the two forecasts. Under this hypothesis, and taking into account the usual conditions of the central limit theorem for weakly correlated processes, the DM statistic follows a standard normal asymptotic distribution. For the variance, the Newey-West estimator with the lag-truncation parameter set to h - 1 is used.

We compared the accuracy of the predictions of all three variables over the 3 months for the forecasts made starting from the VAR(2) models and the ARMA models. The DM statistic for the forecasts based on the VAR models is higher than that for the forecasts based on the ARMA models. In Table 1 the accuracy indicators for the predictions are displayed. In the Czech Republic, when econometric models were used to make the forecasts, the ARMA procedure was the most suitable for the inflation rate, while the best results for the unemployment and interest rates were given by the VAR(2) models. However, only the predictions based on the ARMA model for the inflation rate and on the VAR model for the interest rate are better than those of the naive model. For the Czech Republic, only the VAR and ARMA models could be built to explain the evolution of the interest rate; the best results for this variable are also given by the VAR model.

5. The Assessment of Accuracy for Predictions Based on Exponential Smoothing Techniques

Like econometric modeling, exponential smoothing is a technique used to make forecasts. It is a simple method that takes recent data more into account: recent observations in the data series are given more weight in the prediction than older values, the weights decreasing exponentially over time.

Simple exponential smoothing method (M1). The technique can be applied to stationary data to make short run forecasts. Starting from the model R_n = a + u_n, where a is a constant and u_n the residual, the prediction for the next period is:

R'_{n+1} = alpha * R_n + (1 - alpha) * R'_n,  n = 1, 2, ..., t + k    (2)

where alpha is a smoothing factor with values between 0 and 1, determined by minimizing the sum of squared prediction errors:

min_alpha sum_n (R_{n+1} - R'_{n+1})^2 = min_alpha sum_n epsilon_{n+1}^2    (3)

Each future smoothed value is calculated as a weighted average of the past observations:

R'_{n+1} = alpha * sum_{i=0..n-1} (1 - alpha)^i * R_{n-i}    (4)

Holt-Winters simple exponential smoothing method (M2). The method is recommended for data series with linear trends and without seasonal variations, the forecast being determined as:

R_{n+k} = a_n + b_n * k    (5)

The level a and the trend b are updated recursively:

a_n = alpha * R_n + (1 - alpha) * (a_{n-1} + b_{n-1})
b_n = gamma * (a_n - a_{n-1}) + (1 - gamma) * b_{n-1}    (6)

Finally, the prediction value on horizon k is:

R'_{n+k} = a_n + b_n * k    (7)

Holt-Winters multiplicative exponential smoothing method (M3). This technique is used when the trend is linear and the seasonal variation follows a multiplicative model. The smoothed data series is:

R'_{n+k} = (a_n + b_n * k) * c_{n+k-s}    (8)

where a is the intercept (level), b the trend, c the multiplicative seasonal factor and s the seasonal frequency:

a_n = alpha * (R_n / c_{n-s}) + (1 - alpha) * (a_{n-1} + b_{n-1})
b_n = gamma * (a_n - a_{n-1}) + (1 - gamma) * b_{n-1}
c_n = delta * (R_n / a_n) + (1 - delta) * c_{n-s}    (9)

The prediction is:

R'_{n+k} = (a_n + b_n * k) * c_{n+k-s}    (10)

Holt-Winters additive exponential smoothing method (M4). This technique is used when the trend is linear and the seasonal variation follows an additive model. The smoothed data series is:

R'_{n+k} = a_n + b_n * k + c_{n+k-s}    (11)

where a is the intercept, b the trend and c the additive seasonal factor:

a_n = alpha * (R_n - c_{n-s}) + (1 - alpha) * (a_{n-1} + b_{n-1})
b_n = gamma * (a_n - a_{n-1}) + (1 - gamma) * b_{n-1}
c_n = delta * (R_n - a_n) + (1 - delta) * c_{n-s}    (12)

The prediction is:

R'_{n+k} = a_n + b_n * k + c_{n+k-s}    (13)

Double exponential smoothing method (M5). This technique is recommended when the trend is linear, two recursive equations being used:

S_n = alpha * R_n + (1 - alpha) * S_{n-1}
D_n = alpha * S_n + (1 - alpha) * D_{n-1}    (14)

where S and D are the simply and the doubly smoothed series, respectively.

In Table 2 the accuracy indicators for the predictions based on the exponential smoothing techniques are presented for all three variables.

Inflation rate       M1          M2            M3            M4            M5
RMSE                 -           0,288386455   1,119007113   0,859249004   1,039570357
ME                   -           -1,73383      -1,50076      -0,53664      -1,45292
MAE                  -           1,800501      1,567428      0,603307      1,519589
MPE                  -           -0,08296      -0,08027      -0,03108      -0,0779
U1                   -           0,056005      0,049381      0,01775       0,0475
U2                   -           1,545809      0,189913      0,947732      0,228745

Unemployment rate    M1          M2            M3            M4            M5
RMSE                 0,081731    0,058351      0,111016      0,116203      0,048776
ME                   -0,03343    0,049443      -0,07804      -0,0839       0,01744
MAE                  0,033433    0,049443      0,09456       0,100421      0,044912
MPE                  -0,00499    0,007421      -0,01163      -0,0125       0,002621
U1                   0,004345    0,00436       0,008375      0,00877       0,003653
U2                   0,43671     0,44044       0,836498      0,87466       0,365749

Interest rate        M1          M2            M3            M4            M5
RMSE                 0,033121    0,045165      0,098583      0,076148      0,03487
ME                   -0,01294    -0,01788      -0,09484      0,014587      -0,01772
MAE                  0,022964    0,030232      0,094845      0,094149      0,023895
MPE                  -0,01635    -0,02586      -0,13656      0,022764      -0,02554
U1                   0,021484    0,02999       0,075181      0,068091      0,0225
U2                   1,125963    2,013734      4,417344      3,35745       1,657338

Table 2: Measures of accuracy for forecasts based on exponential smoothing techniques for the inflation, unemployment and interest rate (the Czech Republic)
Source: own computations using Excel.

Indeed, the exponential smoothing techniques provided the most accurate predictions for all indicators in the Czech Republic. For the inflation rate the best method was the additive exponential smoothing technique, while for the unemployment and interest rates the simple exponential smoothing technique generated the best results, due to values of U1 very close to zero. All of the predictions for the unemployment rate based on the exponential smoothing techniques are more accurate than those based on the naive model. All forecasts are overestimated on the chosen horizon, excepting the unemployment rate in the case of the Holt-Winters and double smoothing methods, and the interest rate when the additive technique is used. The low values of the RMSE imply a low variability of the data series. Analyzing the values of these indicators, the smoothing methods are better than the econometric models.

6. Conclusions

In our research we proposed to check whether exponential smoothing techniques generate better short run predictions than simple econometric models. According to recent research, simple econometric models are recommended for forecasts due to their high degree of accuracy in predictions. For the prognosis made for the Czech Republic for December 2011-February 2012 this hypothesis was not supported. In the Czech Republic the recent values of the data series used for the predictions have the greatest importance; therefore, the exponential smoothing methods determined the best results in terms of forecast accuracy. The simple and additive exponential smoothing techniques are recommended for the Czech Republic.

To improve policy, we can use monthly forecasts based on the better method for this country. Policy is improved by choosing the most accurate forecast, which helps the government or banks in making the best decisions. In our study we analyzed the results of only two quantitative methods, but the research could be extended by adding other quantitative forecasting methods, by using qualitative methods, or by building predictions based on combinations of the two types of methods.
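The recursions of the simple (M1), Holt-Winters simple (M2) and double (M5) smoothing methods discussed in section 5 can be sketched as follows. This is our own illustrative implementation; the initialization conventions are assumptions (the paper does not specify them), so a spreadsheet or EViews may give slightly different values:

```python
def simple_smoothing(x, alpha):
    # M1: S_n = alpha * R_n + (1 - alpha) * S_{n-1}; the k-step forecast is flat
    s = x[0]
    for v in x[1:]:
        s = alpha * v + (1 - alpha) * s
    return s

def holt_linear(x, alpha, gamma, k=1):
    # M2: level a_n and trend b_n recursions; forecast a_n + b_n * k
    a, b = x[1], x[1] - x[0]
    for v in x[2:]:
        a_prev = a
        a = alpha * v + (1 - alpha) * (a + b)
        b = gamma * (a - a_prev) + (1 - gamma) * b
    return a + b * k

def double_smoothing(x, alpha):
    # M5: the simply smoothed series S is smoothed a second time into D
    s = d = x[0]
    for v in x[1:]:
        s = alpha * v + (1 - alpha) * s
        d = alpha * s + (1 - alpha) * d
    return s, d
```

On an exactly linear series Holt's recursion recovers the trend, so the one-step-ahead forecast continues the line.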


South East European Journal of Economics and Business, de Gruyter

Published: Nov 1, 2012
