
‘Testing’ purchasing power parity theory


Purchasing power parity doctrine is examined by sophisticated statistical and econometric techniques. The time series of aggregated price levels and the nominal exchange rates are treated as a random sample. Most papers of this type deal with the technical properties of the slightly different data sets. To take some examples (at random): “Two potential problems arise when working with nominal exchange rates and ratios of price levels. First, unit roots are possibly present in the logarithms of nominal exchange rates and price level ratios. If unit roots are present, then standard asymptotic theory for least squares estimators is invalid.” (Crownover et al. 786) “We present two asymptotically equivalent procedures for detecting a unit root in spot exchange rate and price level data: (1) the Augmented Dickey-Fuller (ADF) test, and (2) the Phillips and Perron Z statistic. Both procedures allow for fitted drift in the time series model.” (Corbae–Ouliaris 509)
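The unit-root screening described in these quotes is by now a routine step in PPP papers. As a point of reference, here is a minimal sketch of what that step typically looks like in code, assuming the statsmodels and arch packages are available and using simulated placeholder series instead of actual spot-rate and price-level data:

# Sketch of the unit-root screening quoted above (ADF and Phillips-Perron,
# both with fitted drift). The series here are simulated placeholders,
# NOT real exchange-rate or price-level data.
import numpy as np
from statsmodels.tsa.stattools import adfuller   # Augmented Dickey-Fuller test
from arch.unitroot import PhillipsPerron         # Phillips-Perron Z test (arch package)

rng = np.random.default_rng(42)
T = 300

# Placeholder series: random walks standing in for the log nominal spot rate s_t
# and the log price-level ratio (p_t - p*_t).
log_spot = np.cumsum(rng.normal(scale=0.02, size=T))
log_price_ratio = np.cumsum(rng.normal(scale=0.01, size=T))

for name, series in [("log spot rate", log_spot), ("log price ratio", log_price_ratio)]:
    # ADF regression with a constant ('c'), i.e. allowing for drift.
    adf_stat, adf_pval, *_ = adfuller(series, regression="c", autolag="AIC")
    # Phillips-Perron test, also with a constant term.
    pp = PhillipsPerron(series, trend="c")
    print(f"{name}: ADF p-value = {adf_pval:.3f}, PP p-value = {pp.pvalue:.3f}")

Failing to reject the unit-root null in both series is what motivates the subsequent cointegration machinery. The point made below is that this entire apparatus presupposes that the series can legitimately be treated as realizations of a well-defined stochastic process in the first place.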

This ‘testing’ of purchasing power parity theory is very popular in mainstream journals. These examinations suffer from a lack of support from the theory of statistics and probability … Apart from the fact that the time series are not random samples, there is an extra epistemological problem in this type of testing: the theory is based on unreal assumptions, which restrict its validity to a dimensionless imaginary world in which the transaction of goods is costless. In contrast, the data employed in testing it originate from the real world, where countries have spatial extension and transport is costly. This makes the ‘testing’ even more unreasonable than proving Pythagoras’ theorem by measuring real triangles. The latter would also be unjustified, but there at least the measurements can be made and the assumptions on which the theorem is based can be treated as intuitively true, because the connection between imaginary and real points, lines and circles can be established without a problem. In the case of the PPP doctrine things are different: it is based on a false treatment of space and an unjustified aggregate view with immeasurable variables. The procedure is at the same time positivist (the test is grounded on observation statements) and strongly anti-positivist (the theory is grounded on unreal, unempirical assumptions).

Econometrics is supposed to be able to test economic theories. But to serve as a testing device you have to make many assumptions, many of which cannot be tested or verified. To make things worse, there are also rarely strong and reliable ways of telling us which set of assumptions is to be preferred. Trying to test and infer causality from data, you have to rely on assumptions such as disturbance terms being ‘independent and identically distributed’; functions being additive, linear, and with constant coefficients; parameters being ‘invariant under intervention’; variables being ‘exogenous’, ‘identifiable’, ‘structural’, and so on. Unfortunately, we are seldom or never informed of where that kind of ‘knowledge’ comes from, beyond referring to the economic theory that one is supposed to test.
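To make the point concrete, here is a hedged sketch of the canonical PPP levels regression, s_t = α + β(p_t − p*_t) + ε_t, with the assumptions just listed spelled out as comments at the point where they are silently imposed. The series are again simulated placeholders, not real data:

# The canonical levels regression used in PPP 'testing':
#     s_t = alpha + beta * (p_t - p*_t) + e_t
# Every inference drawn from it leans on assumptions that the regression
# itself does not test. The series below are simulated placeholders.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
T = 300
log_price_ratio = np.cumsum(rng.normal(scale=0.01, size=T))
log_spot = 0.5 * log_price_ratio + np.cumsum(rng.normal(scale=0.02, size=T))

X = sm.add_constant(log_price_ratio)   # assumes: linear, additive, constant coefficients
res = sm.OLS(log_spot, X).fit()        # assumes: e_t i.i.d., exogenous regressor,
                                       #          parameters invariant under intervention

print(res.params)                      # estimated alpha, beta
print(res.tvalues)                     # t-statistics -- only meaningful if the
                                       # assumptions above actually hold
print(durbin_watson(res.resid))        # a very low DW value is a first hint that
                                       # the i.i.d.-error assumption is violated

None of the commented assumptions is delivered by the regression output itself; they have to be argued for on other grounds.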

That leaves us in the awkward position of admitting that if the assumptions made do not hold, the inferences, conclusions and testing outcomes econometricians come up with simply do not follow from the data and statistics they use.

The central question is ‘How do we learn from empirical data?’ But we have to remember that the value of testing hinges on our ability to validate the — often unarticulated — assumptions on which the testing models are built. If the model is wrong, the test apparatus simply gives us fictional values. There is always a risk that one turns a blind eye to some of those non-fulfilled technical assumptions that actually make the testing results — and the inferences we build on them — unwarranted. Econometric testing builds on the assumption that the hypotheses can be treated as hypotheses about (joint) probability distributions and that economic variables can be treated as if pulled out of an urn as a random sample. Most economic phenomena are nothing of the kind.
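The classic Granger–Newbold spurious-regression experiment is one well-known illustration of what ‘fictional values’ can mean in practice: regress two independent random walks on each other and the nominal 5% t-test rejects far more often than 5%, because the i.i.d./stationarity assumptions behind the test’s distribution theory are violated. A small Monte Carlo sketch (simulated data, illustrative only):

# Spurious-regression Monte Carlo: two INDEPENDENT random walks, so the true
# relationship is nil, yet the usual t-test 'finds' one most of the time.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T, n_sims = 200, 1000
rejections = 0

for _ in range(n_sims):
    x = np.cumsum(rng.normal(size=T))          # random walk 1
    y = np.cumsum(rng.normal(size=T))          # random walk 2, independent of x
    res = sm.OLS(y, sm.add_constant(x)).fit()
    if abs(res.tvalues[1]) > 1.96:             # nominal 5% two-sided test on the slope
        rejections += 1

# With i.i.d. data this rejection rate should be close to 0.05; with random
# walks it is typically well above 0.7 -- the test statistic's distribution
# theory simply does not apply, so the 'significance' is fictional.
print(rejections / n_sims)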

Most users of the econometric toolbox seem to have a built-in blindness to the fact that mathematical-statistical modelling in social sciences is inherently incomplete since it builds on the presupposition that the model properties are — without serious argumentation or warrant — assumed to also apply to the intended real-world target systems studied. Many of the processes and structures that we know play essential roles in the target systems do not show up — often for mathematical-statistical tractability reasons — in the models. The bridge between model and reality is failing. Valid and relevant information is unrecognized and lost, making the models harmfully misleading and largely irrelevant if our goal is to learn, explain or understand anything about actual economies and societies. Without giving strong evidence for an essential compatibility between model and reality the analysis becomes nothing but a fictitious storytelling of questionable scientific value.

It is difficult to find any hard evidence that econometric testing has been able to exclude any economic theory. If we are to judge econometrics by its capacity to eliminate invalid theories, it has not been a very successful business.

Lars Pålsson Syll
Professor at Malmö University. Primary research interest - the philosophy, history and methodology of economics.
