The ethics of autonomous driving: are we asking the right questions?

By Michael Haiden, Research Associate in Technology Ethics, Technische Hochschule Ingolstadt, Germany

February 5, 2024

The trolley problem has long been a classic of moral philosophy. To test our moral convictions, it confronts us with the following scenario: imagine that a trolley is racing towards five people standing on a track, whom it would kill on collision. The brakes do not work, but you could flip a switch and divert the trolley to a separate track, where only one person is standing. Would you divert the trolley and save the five?

The dilemma comes in various, often more detailed, forms – for example, if you agree to flip the switch, would you also push someone in front of the trolley to stop it? Given its popularity, it comes as no surprise that the trolley problem appears in discussions of the ethics of autonomous vehicles (AVs). If driving one day becomes fully automated, what should a car do if it were on course to crash into a child but could divert to hit an older person instead? What if it were about to hit two people but could steer itself into a wall, killing its occupant?

The people actually building these cars have little concern for such dilemmas. Their goal is to avoid accidents and minimize the risk of any collision – not to ponder who may die in a crash. But as ethicists of technology increasingly argue, each perspective grasps only part of an important moral discussion.

Risk distribution on the road

As a recent paper by researchers at the Technische Hochschule Ingolstadt argues: “any maneuver in everyday road traffic constitutes a redistribution of risks that is a function of both collision probability and collision severity.” In other words, we cannot focus solely on how many people would be harmed in a crash (severity) or on how to avoid crashes altogether (probability). Rather, moral discussions on autonomous driving need to incorporate both.
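One way to make this concrete – a rough sketch of the idea rather than a formula taken from the paper – is to write the expected harm R of a maneuver m as a sum over all affected road users i of collision probability times collision severity:

R(m) = \sum_i p_i(m) \cdot s_i(m)

Here p_i(m) is the probability that the maneuver ends in a collision with road user i, and s_i(m) is how severe that collision would be. Minimizing only the total probability ignores the severities; worrying only about who gets hit ignores the probabilities.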

Under this perspective of “risk-ethics”, the important moral questions of autonomous driving are not deterministic but probabilistic. The paper’s key example is how a car should position itself in traffic: should it minimize the total probability of an accident (by staying in the middle of the lane), or should it also take severity into account? In other words, should an AV drive closer to a single cyclist and thereby decrease the risk of crashing into another car with four occupants?
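To see how this trade-off plays out, here is a toy Python sketch – with assumed numbers and an assumed exponential fall-off of collision probability with lateral distance, not the study’s actual model. Suppose an AV has a 4-meter corridor between a cyclist (one person) on one side and a car with four occupants on the other, and every collision is assumed fatal for everyone in the struck vehicle:

import math

def collision_prob(gap_m):
    # Assumed toy model: collision probability decays exponentially with the lateral gap.
    return math.exp(-gap_m)

def expected_casualties(gap_to_cyclist_m, corridor_m=4.0, cyclist_occupants=1, car_occupants=4):
    # Expected casualties = probability x severity, summed over both sides.
    gap_to_car_m = corridor_m - gap_to_cyclist_m
    return (collision_prob(gap_to_cyclist_m) * cyclist_occupants
            + collision_prob(gap_to_car_m) * car_occupants)

# Candidate lateral positions, expressed as the gap to the cyclist in 0.1 m steps.
positions = [i * 0.1 for i in range(1, 40)]
best = min(positions, key=expected_casualties)

print(f"middle of the corridor (2.0 m): {expected_casualties(2.0):.2f} expected casualties")
print(f"lowest expected harm at {best:.1f} m from the cyclist: {expected_casualties(best):.2f}")

In this toy model, the middle of the corridor minimizes the total collision probability, but the expected number of casualties is lowest roughly 1.3 meters from the cyclist – closer to the cyclist, which is precisely the trade-off the paper asks about.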

What we know about people’s preferences

Interestingly, it seems that people already take both probability and severity into account. We know this because the Ingolstadt researchers asked respondents from Germany where they would position an automated car in road traffic between two other vehicles. Participants were told that the risk of an accident was low, but never zero. In addition, they were to assume that any collision would be fatal for everyone involved and that the risk of an accident depended solely on the distance between the vehicles. The lowest total risk of an accident could therefore be achieved by staying in the middle of the lane, in which case the vehicles to the left and right would be equally at risk.

The researchers confronted respondents with various scenarios in which they could choose the distance the autonomous vehicle should keep to its left and right. The scenarios varied the number of passengers in the other two vehicles and the type of vehicle to the left and right (a car or a bicycle), and in some of them the participants were asked to imagine themselves inside the autonomous car.

The researchers found that people take the number of potential victims into account. With equal numbers of occupants on both sides, participants place the car in the middle; with an unequal distribution, they move it closer to the vehicle with fewer passengers. Interestingly, even when participants were asked to imagine themselves as the passenger, they still positioned the car closer to the vehicle with fewer occupants – meaning they are willing to accept a certain amount of risk for themselves if this strategy protects others.

In a follow-up study, the researchers checked whether people understood that the total risk of collision was lowest in the middle. Most respondents did, which leads the researchers to conclude that ordinary road users are willing to shoulder a higher risk of collision – even for themselves – to ensure that fewer people are involved in an accident.

(On) the road ahead

The researchers note: “With the possible deployment of AVs on the road, questions of risk ethics are therefore unavoidable and any society considering their use must face a debate about the appropriate risk approach.” Any societal transformation, even one as mundane as deciding where to position a vehicle on the road, alters how we distribute risks.

So far, however, this insight has been neglected in discussions of autonomous driving – and of AI risks in general. AVs are still battling technical problems, but no matter how good the technology becomes, the risk to every road user is unlikely ever to reach zero. The important question will then be: who must bear the brunt of it? This also means that accident avoidance cannot be the sole guiding principle of AV implementation.
