Picture this: you are at the controls of a trolley whose brakes have failed. Ahead of you, five people are working on the tracks, unaware of the trolley hurtling towards them at a speed that would certainly kill them. Fortunately, you can pull a lever and divert the trolley onto another track, saving the five people. Unfortunately, there is a person on that other track who would die if you did so. Should you pull the lever or not? Do you kill one person to save five?
Or think about it this way. You are standing on a cliff over the tracks and you see the trolley hurtling towards the five workers. At the edge of the cliff stands a particularly heavy man. You have to make a quick decision: do you push that man off the cliff to stop the approaching trolley (which this hypothetical scenario assures you is possible), saving the five workers but killing the man in the process? As troubling as both these thought experiments sound, they form the basis of much of the discourse surrounding moral philosophy.
These thought experiments are known as the Trolley Problem, and their implications are far-reaching. First proposed by Philippa Foot in 1967 and given its name by Judith Jarvis Thomson in 1976, the Trolley Problem does not just pose the moral and ethical dilemma of killing one person to save five; it also asks under which circumstances that sacrifice seems justifiable or morally permissible. The general consensus seems to be that pulling the lever in the first scenario is more morally legitimate than pushing the man in the second, even though both trade one life for five. There is a significant moral distinction between “killing” and “letting die”, after all, and the degree of active agency one has in bringing about these deaths seems to be of the essence.
The Trolley Problem is finally finding real-world relevance in the growing literature on Artificial Intelligence-powered autonomous cars, which are garnering more and more interest today with the rise of autopilot systems. We are now at a stage where it is more pertinent than ever to program some semblance of an ethical sensibility into artificial intelligence, and the task is as daunting as it sounds. Real-world ethical dilemmas are far more varied and complex than philosophical thought experiments, and yet autonomous vehicles need an ethical decision-making procedure for unavoidable collisions, and there is hardly a better place to start considering its implications than the Trolley Problem.
The Self-Driving Trolley
Imagine you have to program an autonomous vehicle to make a split-second decision in the following situation: a truck with an intoxicated human driver is hurtling towards the car on the wrong side of the road, and there is no time to cede control to a passenger. The car has two options. It could swerve onto the footpath, where an old man is walking, unaware of any of this, or it could choose inaction and allow the truck to crash into it, killing the passengers.
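To make the dilemma concrete, here is a minimal sketch of what a naive decision layer might look like. Everything in it is a hypothetical assumption for illustration: the maneuver names, the casualty estimates, and the very idea that a vehicle could estimate casualties this tidily. No real autonomous-driving stack exposes anything like this.

```python
# A deliberately naive sketch of collision-decision logic, not a real
# autonomous-driving API. All names and numbers are hypothetical.

def choose_maneuver(options):
    """Pick the maneuver with the fewest expected casualties."""
    # options: list of (name, expected_casualties) pairs
    return min(options, key=lambda option: option[1])

# The scenario above, reduced to two options:
options = [
    ("swerve_to_footpath", 1),  # the old man on the footpath dies
    ("stay_in_lane", 4),        # the truck kills the passengers (assume four)
]

print(choose_maneuver(options))  # ('swerve_to_footpath', 1)
```

Even this toy version smuggles in an ethical stance: by minimizing a raw body count, it has already decided that action and inaction are morally equivalent, which is precisely what the Trolley Problem disputes.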
This is the sort of dilemma that programmers of autonomous vehicles must contend with before putting their cars on the road. Two self-driving vehicles in the same situation could, of course, be designed to communicate with each other and avoid the collision altogether, though such systems can also malfunction. For the foreseeable future, however, roads will probably carry a mixed traffic of human-driven and self-driving cars.
The ethical questions that the prospect of self-driving cars poses are varied. Should you prioritize more lives over fewer? Should you protect the passengers even at the cost of innocent bystanders? Does doing so amount to valuing certain human lives over others? Where do non-human lives sit in this hierarchy of value? Should adherence to traffic rules play a role in determining the value of a life? And what about factoring in the demographics of the bystanders? Do you put a child’s life above those of three adult pedestrians? What if some pedestrians are disabled and hence at greater risk? Who is equipped to make these choices about actual human lives? Who can be trusted to know the answers to these questions, and are there any definitive answers at all, when after all these years there is still no consensus on something as simple as the Trolley Problem?
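Notice that the sketch above dodges every one of these questions by treating all lives as interchangeable counts. The moment a programmer tries to answer them, the counts become weights, and every weight is an explicit value judgment. The table below is purely hypothetical, written only to show where each question would surface in code; none of these numbers reflects any real system or any defensible ethics.

```python
# Hypothetical value weights a decision layer would implicitly need.
# Every number here answers one of the questions above, and every
# number is an ethical claim, not an engineering fact.

harm_weights = {
    "passenger": 1.0,         # protect passengers over bystanders?
    "pedestrian_adult": 1.0,  # or should bystanders count for more?
    "pedestrian_child": 1.0,  # does a child outweigh an adult?
    "jaywalker": 1.0,         # does breaking traffic rules lower your value?
    "animal": 0.0,            # where do non-human lives sit?
}

def expected_harm(casualties):
    """Weighted harm of a maneuver, given predicted casualties by category."""
    return sum(harm_weights[kind] * count for kind, count in casualties.items())

# Swerving kills one adult pedestrian; staying kills two passengers.
print(expected_harm({"pedestrian_adult": 1}))  # 1.0
print(expected_harm({"passenger": 2}))         # 2.0
```

Leaving every weight at 1.0 is no escape: equal weights are themselves a stance, and whoever fills in this table is answering the questions above, whether they mean to or not.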
Beyond the Trolley Problem
Having said all this, the problem of ethical decision-making algorithms in self-driving vehicles is more complex than any thought experiment can capture. The Trolley Problem is often used as a tool to reveal unconscious biases, because the decisions it demands are split-second choices. A vehicle that has been deliberately programmed to make these choices does not have the luxury of claiming that its decisions were “unconscious”. There is also the question of who is to be held morally and legally accountable for the casualties that result from the decisions an autonomous car acts upon. Who has active agency in these cases? And is current computer vision technology even advanced enough for these systems to reliably detect the factors such judgments would require, such as a bystander’s age or disability? Is it unethical to put certain people at risk on account of aspects of their identity, or is it unethical not to make these considerations at all?
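The perception question deserves its own illustration. Suppose, hypothetically, that a vision system reported the attributes a weighting scheme like the one above would need; in practice, such attributes would arrive with confidence scores, and deciding what to do below a confidence threshold is itself an ethical choice. The structure below is an assumption for illustration, not the output format of any real perception stack.

```python
from dataclasses import dataclass

# Hypothetical perception output: no real system emits exactly this.
@dataclass
class DetectedPerson:
    is_child: bool
    is_child_confidence: float   # how sure is the classifier?
    has_mobility_aid: bool
    mobility_confidence: float

def usable(attribute_confidence, threshold=0.9):
    """Should the decision layer act on this attribute at all?"""
    # Acting on a shaky guess about someone's age or disability risks
    # discriminating on a misclassification; ignoring a confident one
    # risks ignoring a real vulnerability. The threshold itself is a
    # moral parameter, not a technical one.
    return attribute_confidence >= threshold

bystander = DetectedPerson(True, 0.62, False, 0.88)
print(usable(bystander.is_child_confidence))  # False: too uncertain to use
```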
A Long Way to Go
The subject of self-driving cars and the ethical questions surrounding them is one of truly multidisciplinary interest, and it cannot and should not be addressed by the companies building these cars alone. Moral philosophers, lawmakers and policymakers, and technologists should all have a say in it, and it will not do to leave the ordinary citizens who use these roads out of the discussion either.
Self-driving vehicles are steadily on the rise, and these dilemmas are more urgent than they seem. Technology has brought what were once moral philosophy’s most implausible anxieties into being and placed them at the forefront of our daily lives, and these issues, intriguing and disturbing as they are, cannot be addressed by technology alone.