A woman is the sole passenger in an autonomous self-driving vehicle traveling at the speed limit down a main road. Suddenly, ten pedestrians appear ahead, directly in the car's path. The car could be programmed to SWERVE off to the side of the road, where it will hit a barrier, killing the passenger but leaving the ten pedestrians unharmed, or to STAY on its current path, where it will kill the ten pedestrians but leave the passenger unharmed.
What is the moral course of action?
[Figure: Trolley problem of a self-driving car. Credit: Iyad Rahwan]
Ethical questions like this one were the subject of recent surveys exploring the complicated, messy way that technology and morality interact in driverless vehicles. The surveys were conducted by researchers Jean-François Bonnefon (Toulouse School of Economics), Azim Shariff (University of Oregon), and Iyad Rahwan (MIT), and their results appeared today in the AAAS journal Science.
Autonomous vehicles (AVs) like Google’s self-driving car, Local Motors and IBM’s Olli, Delphi’s Roadrunner, and Knight Industries’ KITT have significant advantages over vehicles driven by humans. First and foremost, they could dramatically reduce the number of car accidents and the resulting injuries and fatalities. According to the National Highway Traffic Safety Administration in the US, driver error is a critical factor in about 94% of crashes. In addition to saving lives, AVs could reduce pollution and traffic congestion, and probably the amount of stress most of us feel during our daily commutes.
The morality problem arises when AVs encounter situations where harm is unavoidable. Someone, or some group, has to decide which of several bad options is preferable. If you or I were driving, we would probably make that decision on instinct. A driverless car, however, acts the way it is programmed to act, which means the decisions have to be made in advance. What role should morality and public safety play in those decisions, and who should get to make them?
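To make "deciding in advance" concrete, here is a minimal sketch in Python. The function names and numbers are purely illustrative assumptions, not anything from the paper or any manufacturer's software; the point is only that the ethical rule has to be written down as an explicit policy long before the car ever meets the scenario above.

```python
# Purely illustrative sketch -- hypothetical names, not real AV software.
# The "unavoidable harm" decision is fixed in advance as an explicit rule,
# rather than made by instinct in the moment.

def utilitarian_policy(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
    """Choose the action that minimizes the total number of lives lost."""
    # SWERVE sacrifices the passengers; STAY sacrifices the pedestrians.
    return "SWERVE" if passengers_at_risk < pedestrians_at_risk else "STAY"

def self_protective_policy(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
    """Always protect the car's own passengers, whatever the cost outside."""
    return "STAY"

# Someone -- programmer, manufacturer, or regulator -- has to pick one policy
# before the car ships:
chosen_policy = utilitarian_policy

# The scenario from the opening: 1 passenger vs. 10 pedestrians.
print(chosen_policy(passengers_at_risk=1, pedestrians_at_risk=10))  # SWERVE
```

Swapping in `self_protective_policy` changes the outcome of the very same scenario to STAY, and that swap is exactly the choice the surveys ask people about.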
“Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today,” according to the researchers’ paper.
Bonnefon and his colleagues are attempting to spark a collective discussion of this issue through data-driven surveys. They conducted six online surveys between June and November 2015. The surveys addressed slightly different aspects of the question, but some general trends emerged. First, respondents largely agreed that the most moral decision is the one that does the greatest good: sacrificing few lives to save many. Second, while they hoped everyone else would use AVs that followed this moral code, respondents would prefer to personally ride in AVs that protect their passengers at all costs.
Scenarios pitting the greater good against personal interests have probably been around since the days of the caveman, from livestock grazing on common pastures to the trolley problem to carbon emissions, but AVs should encourage all of us to take a close look at the issue. Programmers will actually have to write these kinds of scenarios into the software of cars that could take our kids to school, us to work, and our closest family and friends to destinations all around the world.
According to the authors, three groups may decide these ethical questions: consumers who choose what to buy, manufacturers who program the cars, or governments that could regulate the decisions AVs make.
Consider first what would happen if both types of cars were on the market: cars programmed for the greater good, and cars programmed to protect their passengers above all else. Survey results showed that people are much less likely to buy cars that would sacrifice their loved ones for the greater good. This means that if both types were available and current attitudes persist, manufacturers would likely sell out of the self-preservation model while the greater-good model collects dust.
To probe the question of regulation, one of the surveys asked whether people would like to see the greater-good outcome legally enforced. While agreement was higher when the scenario saved ten lives instead of one, the average agreement was still below the midpoint of the scale. People were not enthusiastic about federal regulation, and were much less likely to consider buying an AV if such regulations were in place.
This leads to another issue the researchers highlight. Regulating AVs for the greater good could actually be counterproductive if it keeps people from buying them. If the main point of using AVs is to increase safety, people need to use them!
These surveys address a range of issues, but the discussions will need to go much deeper. For example, the authors point out that their scenarios involve black-and-white outcomes (the passengers die or the pedestrians die), but reality is much grayer: most outcomes will have to be judged on statistical risk rather than certainty. One thing is certain, though: this is an issue we need to consider collectively, not one to leave to manufacturers and federal regulators.
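As a toy illustration of that statistical-risk point (the probabilities below are invented for the example, not taken from the paper), in realistic conditions the comparison looks less like "who dies" and more like probability-weighted expected harm:

```python
# Toy example with made-up probabilities: the car compares expected
# (probability-weighted) harm for each action, not certain outcomes.

def expected_harm(outcomes):
    """Sum of probability * casualties over the possible results of one action."""
    return sum(prob * casualties for prob, casualties in outcomes)

# Hypothetical estimates for a murkier version of the opening scenario:
stay   = [(0.6, 10), (0.4, 0)]   # 60% chance the ten pedestrians are killed
swerve = [(0.9, 1),  (0.1, 0)]   # 90% chance the single passenger is killed

print(expected_harm(stay))    # 6.0 expected casualties
print(expected_harm(swerve))  # 0.9 expected casualties
```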
For more information, you can read the paper for free on the Science website. Also, check out this great video by Science/AAAS that goes through the survey results, and explore these scenarios via an interactive website created by the researchers.
—Kendra Redmond