Speedread

“Statistically, 6 out of 7 of Snow White’s dwarves are not happy”, quipped economist Professor David Round when talking about econometrics. What’s that got to do with econometrics? His point is that econometric analysis can be, and is, distorted by econometricians to achieve outcomes in regulatory debates that may be nothing like the right outcome. The dwarves aren’t necessarily unhappy; only one of them happens to be named Happy. A statistic can be perfectly accurate and still mislead, and statistics can be manipulated in various ways.

While there is a role for econometrics in a regulatory context, it needs to be handled more carefully than it often is at present.

At this week’s annual conference of the Competition Law & Policy Institute of NZ, Professor Round and Ben Gerritsen of Castalia presented useful papers that set out playbooks for dealing with the misuse of econometrics in exercises to determine access and other pricing. Knowing the tricks makes them easier to control (and lets regulators use econometrics wisely and usefully).

In our experience the important areas for lawyers to focus on are the factual assumptions that feed into the econometric modelling, for these have a particularly strong impact on outcomes. We’ve found it readily possible to review the approach on this basis, working from an understanding of the underlying facts and issues. This, after all, is one of the key areas both when briefing an expert witness and when attacking expert evidence in cross-examination.

Those playbooks for trying to manage unruly econometricians are these papers:

  • “The Use and Usefulness of Econometrics in Economic Regulation” by Ben Gerritsen of Castalia; and
  • “Econometrics – is the ‘con’ still here?” by Prof David Round.

Econometricians can rapidly lose lawyers, judges and regulators with their flash regressions and fancy algebra. But lawyers, judges and regulators really need to understand things well enough to work through any hocus-pocus. As Professor Round observed:

“Econometrics is full of experimental bias. This is just a fact of life – economics is not like the hard sciences where controlled experiments can be carried out… Randomised experiments are very rare in regulation and antitrust. Econometricians tend to take what they can get, injecting their own views about the key inputs (data and variables and assumptions) going into their models, and portray it all as truth. But this means lawyers and judges need to be very sceptical about their results.

I have no doubts whatsoever that econometrics experts estimate many possible models, looking for their Eureka moment. Having found a model they can justify, and which they know will please their masters, they present this as depicting market truth, making sure to work backwards from this to develop the necessary assumptions to justify the model. In the process they ignore the traditional theories of inference. As Leamer so wonderfully puts it (p.37), the econometrician “pulls from the bramble of computer output the one thorn of a model he likes best, the one he chooses to portray as a rose”.

And so in competition and regulation matters, not only are definitions and key economic parameters hotly debated, but you can bet your shirt that no expert will accept that a rival expert’s quantitative explorations portray the truth. The truth is that econometric findings can often be altered or reversed by relatively small changes in assumptions. We should reserve judgment on an econometric model until it stands up to a study of fragility or robustness, usually by other researchers advocating opposite opinions. It is, however, much more efficient for econometricians to perform their own sensitivity analyses, and to be frank with those who retain them. We should demand much more complete and more honest reporting of the robustness of their estimates.”
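To make the fragility point concrete, here’s a minimal, hypothetical sketch (in Python, with simulated data and invented numbers – nothing from either paper or any real case) of how a regression finding can reverse when a single modelling assumption changes, namely whether one control variable is included:

    # A hypothetical illustration only: simulated data, invented numbers, no connection
    # to any real case. It shows how the estimated effect of a variable can flip sign
    # depending on a single assumption - whether a confounding factor is controlled for.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=n)                          # an omitted factor (say, demand conditions)
    x = 0.9 * z + rng.normal(scale=0.5, size=n)     # variable of interest, correlated with z
    y = -0.5 * x + 2.0 * z + rng.normal(size=n)     # the 'true' effect of x on y is negative

    def ols(y, columns):
        """Ordinary least squares with an intercept; returns the fitted coefficients."""
        X = np.column_stack([np.ones(len(y))] + list(columns))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    naive = ols(y, [x])      # z left out of the model: x looks strongly positive
    full = ols(y, [x, z])    # z controlled for: the sign on x reverses

    print(f"coefficient on x, z omitted:  {naive[1]:+.2f}")   # positive (around +1.2)
    print(f"coefficient on x, z included: {full[1]:+.2f}")    # negative (around -0.5)

Both regressions are arithmetically correct; the opposite answers come entirely from the modelling assumption, which is why the factual assumptions feeding the model deserve the closest scrutiny.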

What’s great about these two papers is the overview of some of the issues and some of the tricks, as econometricians argue their cases, perhaps favourably to their clients’ views. We won’t summarise the multiple issues noted, as the papers are worth reading for those engaged in this area. But there’s a nice summary by Professor Round:

“So where does this leave us? Use econometrics carefully, and wisely. Do not expect it to be unchallenged. Do demand of your experts a recognition that their models need to be carefully checked for fragility/robustness. Do ask your expert to take history into account. Do not descend into arcane abstractions and intractable assumptions. Remember commercial realities. Do not be driven by statistical relationships alone - seek commercial, engineering and historical support as well. Be honest with your caveats and claims about robust results.”

And Ben Gerritsen in his paper concludes:

“Critiques from both industry participants and experts suggest that econometric analysis needs to be based on an intuitive understanding of underlying relationships, rather than simply on observations from the data. To be highly convincing, statistical relationships should be bolstered with engineering or other expert evidence on why the relationship holds (not just in the data but in practice).

Good econometric evidence should also be up front about the caveats that apply. There are very few truly robust econometric models: reality is just too messy. Even models with a high statistical strength (due to a high R2 or high levels of statistical significance) can turn out to be poor predictors of the future. So for econometrics to be useful, decision-makers need to know the weaknesses of the approach discussed in this paper.”
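Gerritsen’s point that a high R² is no guarantee of predictive power can be illustrated with another hypothetical sketch (again simulated data, not drawn from his paper): an over-flexible model fits the historical sample very closely, then falls apart the moment it is asked to forecast beyond it.

    # A hypothetical illustration only: an over-flexible model fits 15 historical data
    # points very closely (high in-sample R^2) yet predicts poorly just outside
    # the period it was estimated on.
    import numpy as np

    rng = np.random.default_rng(1)

    def r_squared(y, y_hat):
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return 1 - ss_res / ss_tot

    # "History": 15 noisy observations of a simple underlying relationship
    x_hist = np.linspace(0.0, 1.0, 15)
    y_hist = 1.0 + 2.0 * x_hist + rng.normal(scale=0.3, size=15)

    # A 9th-degree polynomial chases the noise and fits the sample very closely...
    coefs = np.polyfit(x_hist, y_hist, deg=9)
    print("in-sample R^2: ", round(r_squared(y_hist, np.polyval(coefs, x_hist)), 3))   # high

    # ...but forecast just beyond the observed range and it falls apart
    x_new = np.linspace(1.0, 1.2, 50)
    y_new = 1.0 + 2.0 * x_new + rng.normal(scale=0.3, size=50)
    print("out-of-sample R^2:", round(r_squared(y_new, np.polyval(coefs, x_new)), 3))  # far below zero

The strong in-sample fit says nothing about whether the relationship holds in practice, which is exactly why the papers ask for engineering or other real-world support alongside the statistics.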