The last 40 years have witnessed tremendous developments in empirical work in economics. In a recent paper, Josh Angrist and his coauthors show that the proportion of empirical work published in top journals in economics has moved from around 30% in 1980 to over 50% today. This is a very important and welcome trend. In this article, I want to take stock of what I see as the main progress in empirical research and look ahead at the remaining challenges.
In my opinion, the main achievement of the empirical revolution in economics is causal inference. Causal inference, or the ability to rigorously tease out the effect of an intervention, enables us to rigorously test theories and to properly evaluate public policies. The empirical revolution has focused attention on the credibility of the research design: which features of the data help identify the causal effect of interest?
With the empirical revolution, economics has grown a second empirical leg along its extraordinary first theoretical leg and is now able to move forward as a fully fledged social science, weeding out wrong theories, and as true social engineering, stopping inefficient policies and reinforcing the ones that resist empirical tests.
The achievements of the empirical revolution are outstanding, in my opinion on par with the most celebrated theoretical results in the field. It is obvious to me that in the coming years several of the contributors to the empirical revolution in economics will receive the Nobel Prize: Orley Ashenfelter, Josh Angrist, David Card, Alan Krueger, Don Rubin, Guido Imbens, Esther Duflo, Michael Greenstone, David Autor. To mention just a few important achievements, in a list that is in no way exhaustive:
- Microfinance, once held up as a silver bullet for development, does not seem to be effective at reducing poverty (although some very recent results suggest that we might have missed the impacts of microcredit because they operate at a larger scale than what previous experiments have been able to detect), while a more comprehensive graduation approach seems to work.
- Trade with China has had major impacts on workers across the developed world, even influencing the recent US election; layoffs kill workers; employers have local monopsony power; and it is not so clear that the minimum wage and immigration affect employment.
- The Permanent Income Hypothesis does not seem to hold; fiscal multipliers are larger than one; monetary policy has real effects.
- Demand curves are downward sloping, but Giffen goods exist.
- Kenyan traders collude; mergers increase prices and competition decreases them; adverse selection exists in the market for loans.
- Pollution kills, even at levels below regulatory norms; regulatory policies work, and so do price policies. How emission permits are initially allocated does not matter for the final equilibrium.
- Small nudges might have very small or very large effects.
- Expected utility does not account for some decisions under risk.
What I find extraordinary is how much empirical results have both supported and falsified basic assumptions of economic theory, such as functioning markets and rational agents. Sometimes agents behave rationally, sometimes they do not. Sometimes markets work, sometimes they do not. Sometimes it matters a lot, sometimes it does not. I think we are going to see more and more theory trying to tease out the contexts in which deviations matter and those in which they do not.
The empirical revolution has also brought about a new type of methodological research. The economists’ empirical toolkit is now structured around five types of tools for causal inference: lab/controlled experiments; Randomized Controlled Trials (RCTs); natural experiments; observational methods; and structural models. Alongside the impressive continuing achievements of theoretical econometrics, we now see methodological work investigating the empirical properties of these methods of causal inference:
- Do lab experiments approximate real life behavior?
- Do RCTs bias people’s behavior? We know that asking people questions changes their subsequent decisions, and that the subpopulation experimented upon differs from the population of applicants in routine mode.
- Do observational methods reproduce the results of RCTs? Recent research by Jasmin Fliegner, Roland Rathelot and myself suggests that this is not the case with the data usually available to economists, whereas recent research at Facebook shows that observational methods seem to perform very well with the very rich data available on social networks.
- Can structural models predict the consequences of reforms? Dan McFadden famously showed that a Random Utility Model was able to predict almost perfectly the market share of the newly introduced San Francisco subway (BART). Recent research by Philippe Février, Isis Durrmeyer and Xavier d’Haultfoeuille shows that a workhorse IO model was unable to predict the consequences of the introduction of the French feebate on car market shares. Parag Pathak and Peng Shi also recently showed that a model of school choice does a good job of predicting the consequences of a reform of the allocation algorithm in Boston.
But challenges lie ahead that have to be addressed head on. I am very optimistic, since I can see the first responses already taking shape, but the swifter our response to these challenges, the more credibility our field will have in the public’s eye and the quicker our progress will be.
The first challenge that I see is an exclusive focus on causality. Science starts with observation: documenting facts about the world that are in need of an explanation. One of the most influential empirical works of the last decades is Thomas Piketty’s effort, along with his coauthors, to document the rise of inequality in countries all around the world. Observing new facts should also be part of the empirical toolkit in economics.
The second and most important challenge that I see for empirical research in economics is publication bias. Publication bias occurs when researchers and editors only publish statistically significant results. When results are imprecise, publication bias leads to drastic overestimation of the magnitude of effects. Publication bias has plagued entire research fields such as cancer research and psychology, both of which now face a replication crisis. A recent paper by John Ioannidis and coauthors measures the extent of publication bias in empirical economics and finds it to be very large: “nearly 80% of the reported effects in […] empirical economics […] are exaggerated; typically, by a factor of two and with one-third inflated by a factor of four or more.” This is a critical problem. For example, estimates of the Value of a Statistical Life that are used to evaluate policies are overestimated by a factor of two, leading to incorrect policy decisions.
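The mechanics of this inflation are easy to see in a small simulation. The sketch below is my own illustration, not taken from the Ioannidis paper: it draws many noisy estimates of the same true effect and keeps only the statistically significant ones, mimicking a significance filter at publication.

```python
# Minimal simulation of publication bias: when power is low, conditioning
# on statistical significance inflates the average published effect.
import random
import statistics

random.seed(0)

TRUE_EFFECT = 1.0   # the true effect common to all studies
SE = 1.0            # each study's standard error (deliberately low power)
CRIT = 1.96         # 5% two-sided significance threshold

# Simulate 100,000 studies, each reporting a noisy estimate of the effect.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(100_000)]

# The "published" record keeps only the statistically significant ones.
published = [e for e in estimates if abs(e) / SE > CRIT]

print(statistics.mean(estimates))   # close to the true effect of 1.0
print(statistics.mean(published))   # much larger: the filter inflates it
```

With these numbers the power is roughly 17%, and the average published estimate comes out around two and a half times the true effect, in line with the "factor of two" exaggeration quoted above.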
The third challenge is that of precision: most results in empirical economics are very imprecise. In order to illustrate this I like to use the concept of the signal-to-noise ratio, where I take the noise to be the full width of the 95% confidence interval. A result barely statistically significant at the 5% level has a signal-to-noise ratio of 0.5, meaning that there is twice as much noise as there is signal. Such a result is compatible with widely different true effects (from very small to very large). But things are actually worse than that. Ioannidis and coauthors estimate that the median power in empirical economics is 18%, which implies a signal-to-noise ratio of 0.26: the median result in economics contains roughly four times more noise than signal. I attribute this issue to an exclusive focus on statistical significance at the expense of looking at actual sampling noise.
How to address these challenges? In my opinion, we need to see at least three major evolutions in publishing, research and teaching.
- Editors have to take steps to encourage descriptive work and to curb publication bias. This requires:
- Ditch p-values and statistical significance and focus on sampling noise, measured for example by confidence intervals. Confidence intervals make explicit the uncertainty around the estimate. Present sampling noise in abstracts in the form “the estimated impact is x±y.”
- Publish null results; they are as interesting and informative as significant results, and maybe more so. Favor more precise results.
- Write clear guidelines about what is expected in an empirical paper using a given technique.
- Require pre-registration of studies, even for non-experimental research. Pre-registration prevents specification search.
- Encourage the use of blind data analysis. This tool, invented by physicists, enables you to write your code on perturbed data and to run it only once on the true data, preventing specification search.
- Publish replications and meta-analyses (rigorous summaries of results, including tests for publication bias).
- Researchers have to join forces to obtain much more precise results. This requires:
- Take stock of where we stand: organize published results using meta-analysis in order to check which theoretical propositions in economics have been validated or refuted and with which level of precision.
- Identify the critical remaining challenges: what are the 10 or 100 most important empirical questions in economics? Follow the example of David Hilbert, who in 1900 stated 23 problems for the century in mathematics.
- Focus all of the profession’s efforts on trying to solve these challenges, especially by running very large and very precise critical experiments. Examples that come to mind here are physicists uniting to secure funding for and build the experiments at CERN and the LIGO/Virgo detectors required to test critical predictions of the Standard Model and of general relativity.
- Teach economics as an empirical science, by including empirical results on an equal footing with theoretical propositions. This would serve several purposes: identify what is the common core of empirically founded propositions in economics; identify the remaining challenges; help students learn the scientific method and integrate them into the exciting journey of scientific progress.
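Several of the tools advocated above — reporting results as “x±y”, pooling them in a meta-analysis, and checking for publication bias — fit in a few lines of code. The sketch below uses invented study estimates purely for illustration, and the asymmetry check is an Egger-type regression of z-statistics on precision (a common, simple diagnostic, not a specific tool endorsed in this article).

```python
# Hypothetical study results: (point estimate, standard error).
studies = [(0.42, 0.30), (0.25, 0.12), (0.18, 0.08), (0.55, 0.40), (0.21, 0.10)]

# 1. Report each result with its sampling noise, as "x ± y" (95% CI half-width).
for est, se in studies:
    print(f"estimated impact: {est:.2f} ± {1.96 * se:.2f}")

# 2. Precision-weighted (inverse-variance) meta-analytic average.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for w, (est, _) in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
print(f"pooled estimate: {pooled:.2f} ± {1.96 * pooled_se:.2f}")

# 3. Egger-type check: regress each study's z-statistic (est/se) on its
#    precision (1/se); an intercept far from zero suggests small, noisy
#    studies report systematically larger effects -- a symptom of
#    publication bias.
z_stats = [est / se for est, se in studies]
precisions = [1 / se for _, se in studies]
n = len(studies)
mx = sum(precisions) / n
my = sum(z_stats) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(precisions, z_stats))
         / sum((x - mx) ** 2 for x in precisions))
intercept = my - slope * mx
print(f"Egger-type intercept: {intercept:.2f}")
```

Note how the pooled estimate is more precise (smaller ±y) than any single study: this is the payoff of joining forces and summarizing results rigorously.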
So many things to do. It is so exciting to see this revolution and to be able to contribute to it!
by Sylvain Chabé-Ferret