The Moral Problem with Self-Driving Cars

What the classic Trolley Problem can teach us about the moral issues around self-driving cars.

Patrick Heller
4 min read · Mar 31, 2023

--

In a previous article, we saw that we need to think hard about the moral implications of the ever-faster evolution of artificially intelligent chatbots like ChatGPT. Still, a bot is unlikely to kill anyone in the course of a conversation, although we should be careful about putting bots on emergency calls and suicide hotlines. There are other situations, however, in which morality kicks into high gear. For instance, what if you're working on self-driving cars? That's a different ballgame altogether. To give some context for the issues at play, let's first look at the much-used thought experiment called the Trolley Problem, introduced in 1967 by the English philosopher Philippa Foot (1920–2010).

Imagine the following situation. A trolley car is speeding down the tracks and its brakes seem to be malfunctioning. Up ahead are five people working on the tracks; if they don't move, they are going to be run over by the trolley car and probably killed.

There's a switch right in front of you, and if you pull the lever, the trolley will be diverted onto another track. There is one catch, though. On the other track, there is one worker who is unaware of the situation…
