Self-driving cars promise to save many lives, save time, and be more environmentally friendly than current, human-driven cars. The technological challenges of automated driving are rapidly being solved, and the next decade should see a gradual transition from human-driven to self-driving cars, passing through intermediate stages of shared control. Not all the challenges of automated driving are technical, though. When a car is able to drive itself, it must also be equipped to make life-or-death decisions, among many others. To decide what the “right” decisions are, we have to address challenges of an ethical and psychological nature.

The most debated example of such challenges is concerned with the way self-driving cars will distribute risks among road users. Imagine that two children suddenly cross the road in front of a self-driving car, and that the only way to save them is for the car to swerve into an oncoming truck, which would kill its passenger. What should the car do? Protect the life of its passenger, but kill the two children? Or save the two children, but kill its passenger?

There are millions of variations of this scenario. For instance, what if the passenger of the car were a pregnant woman, and the two pedestrians were elderly citizens rather than children? Would that change your judgment about what the car should do? If you want to explore your decisions in different scenarios, you can visit the Moral Machine website, which generates accident scenarios in an effort to crowdsource judgments from all across the world.

Thanks to projects such as the Moral Machine, we will be able to learn exactly what the ethical preferences of citizens are when it comes to telling self-driving cars whom to kill, if they cannot save everyone. Indeed, the website has collected about 40 million judgments so far. However, this knowledge about judgments is not sufficient to answer the question of how to program the cars, because of the ethical and psychological nature of the problem.

First, governments and regulators may find it unethical to follow the preferences of the crowd. Imagine, for example, that the vast majority of people think that self-driving cars should save children over older adults. Does that mean cars should be programmed to follow that preference? An ethics committee mandated by the German government recently concluded that self-driving cars should not discriminate between humans on any basis, including age. Maybe this is the right thing to do; but this decision would likely clash with the values of many citizens, and create a public outcry the day a self-driving car intentionally runs over a child.

Second, citizens may not actually want to buy cars that are programmed to follow their preferences. For example, most people tend to think that if a self-driving car cannot save everyone, it should try to save as many people as it can. In particular, they believe that a self-driving car should kill its passenger if doing so can save ten pedestrians. But when asked whether they would themselves buy such a car, they start to hesitate. In fact, they would prefer other people to buy such a car, while wanting their own car to prioritize their life as its passenger.

This means that if cars were programmed to save the greater number (even at the cost of killing passengers), fewer consumers would make the switch to a self-driving car. Given that self-driving cars will presumably be much safer than human-driven cars, programming them to save the greater number would thus lead to more deaths overall, because it would discourage many people from buying a (safer) self-driving car.

In summary, while we are already well on the way to solving the technological challenges of automated driving, we need behavioral science to make sure that governments, regulators, and consumers alike make rational, informed decisions about the ethics of self-driving cars.

by Jean-François Bonnefon

Further reading:

Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science.

Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2017). Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour.

