The Moral Problem with Self-Driving Cars
What the classic Trolley Problem can teach us about the moral issues around self-driving cars.
--
In a previous article, we saw that we need to think hard about the moral implications of the ever-faster evolution of artificially intelligent chatbots, like ChatGPT. However, a bot is unlikely to kill anyone in a conversation, even though we need to be careful about putting bots on emergency calls and suicide hotlines. But there are other situations in which morality kicks in at full force. For instance, what if you’re working on self-driving cars? That’s a whole different ballgame. To give some context around the issues at play, first a description of the much-used thought experiment called the Trolley Problem, introduced in 1967 by English philosopher Philippa Foot (1920–2010).
Imagine the following situation. A trolley car is speeding down the tracks and its brakes seem to be malfunctioning. Up ahead, five people are working on the tracks; if they don’t move, they are going to be run over by the trolley car and probably killed.
There’s a switch right in front of you, and if you pull the lever, the trolley will be diverted to another track. There is one catch, though. On that other track stands a single worker, equally unaware of the danger, and that person will be run over and killed if you pull the lever.
What do you do, pull the lever or not? Do nothing and five innocent people are killed; pull the lever and one innocent person is killed. When you put this dilemma to people, most say they would pull the lever and have one innocent life taken instead of five. In general, that seems to be the most moral choice.
Now what if we change the circumstances a little? Let’s assume the same basic setup: a runaway trolley car is hurtling toward five innocent track workers, and if nothing is done, five innocent lives are lost. Here comes the twist. There is a pedestrian bridge over the tracks, and you’re standing on it together with a very, very fat person. If you pushed this morbidly obese person over the railing, their big fat body would surely derail the trolley car and thus save the five innocent track workers.
Would you push the corpulent person over the edge, or not? The end results are the same: do nothing and five innocent lives are lost; do something and only one innocent life is lost. Yet when this dilemma is put to people, most choose not to push the voluminous person in front of the trolley car. Somehow this feels different to most people. Apparently, pulling a lever at a safe distance from the drama is much easier than physically taking part in it.
Back to self-driving cars. What if you have to program the algorithm that kicks in when a form of the Trolley Problem arises? What if you have to program what the car should do when it finds itself in a situation in which it simply cannot avoid hitting a person? Let’s say the car sees two elderly people suddenly crossing the street to the left, and a mother with a baby carriage to the right. The car is skidding and going too fast to stop. What’s it going to be: hit the elderly couple, or hit the mom with her baby? Or are you going to build in a randomizer, leaving it up to chance, to soothe your conscience?
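To make that concrete, here is a minimal, purely hypothetical Python sketch of what such a decision routine could look like. The `CollisionOption` class, the `choose_unavoidable_collision` function, and the policy names are all invented for illustration; no real self-driving stack reduces the problem to a handful of lines like this.

```python
import random
from dataclasses import dataclass

# Hypothetical toy model for illustration only; not a real autonomous-driving API.
@dataclass
class CollisionOption:
    description: str     # e.g. "elderly couple on the left"
    people_at_risk: int  # how many people would be hit

def choose_unavoidable_collision(options: list[CollisionOption],
                                 policy: str = "minimize_harm") -> CollisionOption:
    """Pick one of several unavoidable collision outcomes.

    'minimize_harm' picks the option that puts the fewest people at risk;
    'randomize' leaves the choice to chance, as mentioned above.
    """
    if policy == "minimize_harm":
        return min(options, key=lambda option: option.people_at_risk)
    if policy == "randomize":
        return random.choice(options)
    raise ValueError(f"Unknown policy: {policy}")

# The scenario from the article: both options risk two lives.
options = [
    CollisionOption("elderly couple crossing on the left", 2),
    CollisionOption("mother with baby carriage on the right", 2),
]
print(choose_unavoidable_collision(options, policy="randomize").description)
```

Even this toy version makes the point visible in the code itself: whoever writes that function, or sets its default policy, is encoding a moral judgment.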
In reality, you probably won’t have to make that choice, since somebody else will already have made it for you. But what if you don’t like their choice?
Let’s say you are asked to program the self-driving car’s response to a situation in which a choice needs to be made between hitting a pedestrian and crashing into a wall. Somebody in the higher echelons of your company has decided that the safest choice for the car’s occupant is to hit the pedestrian; after all, the car company’s concern is with the person who paid good money for the car and its protective measures.
Some would agree with that train of thought, while others would disagree. What if you disagree? Would you refuse to program this algorithm? Would you protest? What if you’re the coach of the team that is asked to program it, and you disagree? Are you going to put your job on the line for your moral misgivings? Hard questions indeed.
The more intelligent the tech around us becomes, the more moral questions we will face. As I try to stress throughout my articles, awareness is key. If you are aware of the moral dilemmas that are creeping up on us, and if you are aware of your own and other people’s biases, prejudices, and stereotypes, that will help you make moral decisions, not just for yourself, but also for the organization you work at.
If you are interested in stories like these and more, you can buy Essential Psychology for Modern Organizations from Amazon and other bookstores: https://www.amazon.com/Essential-Psychology-Modern-Organizations-scientifically/dp/B08NP12D77/