An interview with Daron Acemoglu on artificial intelligence, institutions, and the future of work

The recipient of the 2018 Jean-Jacques Laffont prize, Daron Acemoglu, is the Elizabeth and James Killian Professor of Economics at the Massachusetts Institute of Technology. The Turkish-American economist is widely published for his research on political economy, development, and labour economics, and has won multiple awards for his two books, Economic Origins of Dictatorship and Democracy (2006) and Why Nations Fail (2012), both co-authored with James A. Robinson of the University of Chicago.

The Jean-Jacques Laffont prize is the latest addition to the well-deserved recognition the economist has received for his work, which includes the John Bates Clark Medal from the American Economic Association in 2005 and the BBVA Frontiers of Knowledge Award in Economics in 2017. Despite a schedule heavy with seminars and conferences, Daron kindly set aside some time to offer the TSEconomist his insights on topics ranging from the impact of artificial intelligence on our societies to the role an academic ought to take in public political affairs.


  1. Congratulations on winning the Jean-Jacques Laffont prize. What does this prize represent to you?

I’m incredibly honoured. Jean-Jacques Laffont was a pioneering economist in both theory and the application of theory to major economic problems. I think this tradition is really important for the relevance of economics and for its flourishing over the last two decades or so. I think it’s a fantastic way of honouring his influence, and I feel very privileged to have been chosen for it.

  2. Thanks to you and other scholars working on economics and institutions, we now know that the way institutions regulate economic life and create incentives is of great importance for the development of a nation. New players such as Google now possess both the technology and the data needed to efficiently solve the optimisation problems institutions face. This raises the debate on government access, purchase, and use of this data, especially in terms of efficiency versus possible harms to democracy due to the centralisation of political power. What is your take on this?

I think you are raising several difficult and important issues. Let me break them into two parts.

One is about whether the advances in technology, including AI and computational power, will change the trade-off between different political regimes. I think the jury’s out and we do not know the answer to that, but my sense is that it would not change the trade-off as much as it changes the ability of different regimes to survive even when they are not optimal. What I mean is that you can start thinking about the problem of what was wrong with the Soviet Union in the same way that Hayek did: the problems to be solved are just too complex, the government can’t handle them, so let’s hope that the market solves them.

Then, if you think about it that way, you may say that the government is getting better at solving it, so perhaps we can have a more successful Soviet Union. I think that this is wrong for two reasons that highlight why Hayek’s way of thinking was limited, despite being revolutionary and innovative. One reason is that the problem is not static, but dynamic, so the new algorithms and capabilities create just as many new problems, ones we don’t even know how to articulate. It is therefore naive to think that in such a changing world, we can delegate decision-making to an algorithm and hope that it will do better than the decentralised workings of individuals in groups, markets, communities, and so on.

The second reason is that Hayek’s analysis did not sufficiently emphasise a point that I think he was aware of and stressed in other settings: it is not just about the capabilities of governments, but about their incentives. It is not simply that governments and rulers cannot do the right thing, but that they do not have the incentives to do so. Even if they wanted to do the right thing, they do not have the trust of the people and thus cannot get the information and implement it. For that reason, I don’t think that the trade-offs between dictatorship and democracy, or planning versus some sort of market economy, are majorly affected by new technology.

On the other hand, we know that the equilibrium feasibility of a dictatorship may be affected. The ability to control information, the Internet, social media, and other things, may eventually give much greater repressive capability to dictatorships. Most of the fruitful applications of AI are in the future and to be seen, the exception being surveillance, which is already present and will only expand in the next ten years, in China and other countries. This will have major effects on how countries are organised, even if it may not be optimal for them to be organised that way.

To answer the second part of your question, I think that Google is not only expanding technology, but also posing new problems, because we are not used to companies being as large and dominant as Google, Facebook, Microsoft, or Amazon are. Think of when people were up in arms about the power of companies, the robber barons, at the beginning of the 20th century, leading to the whole Progressive Era sequence of reforms, antitrust and other political reforms: as a fraction of GDP, those companies were about one quarter as big as the ones we have today. I therefore think that the modern field of industrial organisation is doing us a huge disservice by not updating its way of thinking about antitrust and market dominance, with huge effects on the legal framework, among other things. I don’t know the answers, but I know that the answers don’t lie in thinking something like “the Herfindahl index is not a good measure of competition, so we might have Google dominate everything, but perhaps we are ok” – I think that this is not a particularly good way of going about things.

  3. Some fear that the dominance of these companies could lead to the growth of inequality. Do you think that AI could play a role in this?

I am convinced that automation in general has already played a major role in the rise of inequality, through changes in the wage structure and employment patterns. Industrial robots are part of that, as well as numerically controlled machinery and other automation technologies. Software has been a contributing factor, but probably not the driver in the same sense that people initially thought. Projecting from that, one might think that AI will play a similar role, and I think that this is not a crazy projection, although I don’t have much confidence that we can predict what AI will do. The reason is that industrial robotics is a complex but narrow technology. It uses software and, increasingly, artificial intelligence, but it isn’t rocket science. The main challenge is developing robots that can interact with and manipulate the physical world.

AI is a much broader technological platform. You can use it in healthcare and education in very different ways than in voice, speech, and image recognition. Therefore, it is not clear how AI will develop and which applications will be more important, and that’s actually one of the places where I worry about the dominance of companies like Google, Amazon, Facebook: they are actually shaping how AI is developing. Their business model and their priorities may be pushing AI to develop in ways that are not advantageous for society and certainly for creating jobs and demand for labour.

We are very much at the beginning of the process of AI and we definitely have to be alert to the possibility that AI will have potentially destructive effects on the labour market. However, I don’t think that it is a foregone conclusion, and I actually believe there are ways of using AI that will be more conducive to higher wages and higher employment.

  4. Regarding the potential polarisation between high and low-skilled labour, do you think that the government could address this issue with universal basic income?

There is a danger – not a certainty, but a danger – that it will polarise, and that even if we use AI in a way that simplifies certain tasks, it may still require some numeracy and some social skills that not all workers have, resulting in probable inequality and displacement effects.

That being said, I believe that universal basic income is a bad idea, because it is not solving the right problem. If the problem is one of redistribution, we have much better tools to address it. Hence, progressive income taxation coupled with something like an earned income tax credit or negative taxation at the bottom would be much better for redistributing wealth, without wasting resources on people who don’t need the transfer. Universal basic income is extremely blunt and wasteful, because it gives many transfers to people who shouldn’t get them, whereas taxation can do much better.

On the one side, I fear that a lot of people who support universal basic income are coming from the part of the spectrum which includes many libertarian ideas on reducing transfers, and I would worry that universal basic income would actually reduce transfers and misdirect them. On the other side, supporters may be coming from the extreme left, which doesn’t take budget constraints into account; again, some of the objectives of redistribution could be achieved more efficiently with tools like progressive income taxation.

Even more importantly, there is another central problem that basic income not only fails to deal with, but actually worsens: I think a society which doesn’t generate employment for people would be a very sad society and would have lots of political and social problems. This fantasy of people not working and having a good living standard is not a good fantasy. Whatever policy we use should be one that encourages people to obtain a job, and universal basic income will discourage people from doing so, as opposed to tax credits on earned income, for example.

  5. In a scenario of individuals being substituted and fewer people working, how could governments obtain the revenue they are not getting from income taxation? Could taxing robots be a possibility?

I think that this is a bad way of approaching the problem, because when you look at labour income, there is certainly enough to have more redistributive taxation, and no clear need to tax robots. However, we should also think about capital income taxation more generally: there may be reasons for taxing robots, but those have to do more with production efficiency and excessive automation. I think that singling out robots as a revenue source, distinct from other capital stock, would be a bad idea. If, for example, you want taxes to raise revenue, then land taxes would be a much better option than robot taxes – which does not mean that we should dismiss the idea of taxing robots. The discussion is confusing because there are efficiency reasons (giving the right incentives to firms) and revenue-raising reasons for taxing, and public discussions, prompted by Bill Gates and others, are not helping with this confusion.

In terms of sharing wealth, I think that robots do not create new problems compared to other forms of capital. I think it was a confusion of Marx to think of the marginal product of capital in very complex ways – that everything that goes to capital is somehow theft – and if neoclassical economics has one contribution, it is to clarify that. I personally believe there are legitimate reasons for thinking that there is excessive automation. And if there is excessive automation, there are Pigouvian reasons for taxing robots, or actually for removing the subsidies to robots, of which there are many. But that is the discussion we need to have.

  6. There has recently been optimism with regard to the future of AI and the role it could play, for example, in detecting corruption or improving education. You have made the distinction between replacing and enabling technologies. Where does one draw the line between the two?

That is a great question. In reality, of course, automation and replacing technologies shade into technologies that improve productivity. A great example is computer-assisted design. Literally interpreted, it is a labour-augmenting technology, because it makes the workers who are working in design more productive. At the same time, however, it may have some of the features of an automation technology, because with computer-assisted design, some of the tasks that a draughtsman would do are automated. If you do it once, you can do it repeatedly.

So that is a grey area, but it’s okay because the conceptually important point to recognise is that different types of technologies have very different effects. Recognising this is an antidote against the argument that improving productivity through technology will always benefit labour; we actually need to think about what new technologies do and how the increase in productivity will affect labour.

But it is also very important for the discussion regarding AI to point out that AI, as opposed to industrial robot automation, is not necessarily – and does not have to be – labour replacing. There are ways in which you can use it to create new tasks for labour or increase productivity. This is what I think will play out in real time in the future of AI.

  7. In 2017, you wrote an article for Foreign Policy, “We are the last defence against Trump”, which questioned the belief that institutions are strong enough to prevent a man like Donald Trump from overriding the rule of law. According to you, should economists come off the fence on current affairs? Is it possible to express an opinion without sacrificing some of the intellectual rigour one can expect from a researcher?

I think so. First, there are important personal responsibilities that are crosscutting. Secondly, there is a danger of having the perfect be the enemy of the good.

On the first one, I think that people have to make their own choices as to what is acceptable and what is not. Some things are just within the realm of “I prefer high taxes, you prefer low taxes”, and that is quite a reasonable thing. But some other issues may be a real threat to democracy, to other aspects of institutions, and to minorities that are disempowered. From there, it is important to recognise that there are some lines that should not be crossed, or if they are crossed, that some people need to defend them vocally. Any analogy to the Nazi period is fraught with danger, but it bears saying that, of course, in hindsight, every academic should have walked out of the universities that were managed by Nazis, that were firing Jewish scholars, or that were teaching jurisprudence according to National Socialism. That has nothing to do with whether you have evidence of one thing versus another – I think that there are some lines. Similarly, and without saying anything as provocative as drawing parallels between Trump and the Nazis, I think that it is important for people, in general, to defend democracy against the onslaught that it is receiving from Trump’s administration and the circles of people around him. I will say openly to everybody that it is wrong for any economist or academic to go and work for Trump; I would certainly never consider doing so, and would consider distancing myself from anybody who does.

But that is on the private ground. On the social science side, there is a lot we do not know. Everything we know is subject to standard errors and external validity constraints, but to refuse to act or to condemn would be to have the perfect be the enemy of the good. On the basis of what we know, we know how democracies fail, we know how certain aspects of American institutions are actually weaker than what people think, and we know how changes in policies against minorities would have terrible effects for certain groups. I think that on the basis of that, to articulate criticism of certain policies and certain politicians is also a good use of the knowledge we have accumulated.

by Valérie Furio, Gökçe Gökkoca, Konrad Lucke, Paula Navarro, and Rémi Perrichon
