An esoteric but increasingly debated issue concerning the introduction of autonomous trucks and vehicles on highways is the ethical decision-making capacity of such vehicles – the so-called "moral algorithm."
An algorithm is a sequence of instructions telling a computer what to do. A critical question is how autonomous vehicles should be programmed – and by whom – to deal with unavoidable accidents that may result in death or serious injury.
This matters because an automated vehicle might be programmed to value the safety of its occupants over that of other road users, say, or to value human safety over property damage.
The trolley problem
So far the moral algorithm issue surrounding self-driving vehicles has largely been defined by the "trolley problem," according to a white paper on regulating the moral programming of driverless vehicles by Stuart Young, a partner at international law firm Gowling WLG.
The trolley problem supposes there is a runaway trolley on a train track heading for two people who are tied to the track. If nothing else happens, they will be killed. However, you are standing next to a lever that can switch the trolley to a side track. On the side track is one person, who also cannot move. Will you take responsibility for pulling the lever that will kill that one person, or do nothing, allowing the two to die?
The automated vehicle (AV) equivalent sees an AV driving down the road when someone pulls into the road unexpectedly. The AV does not have the necessary stopping distance available to pull up safely, so it has to steer to one side or the other. On one pavement is an 80-year-old woman; on the other is a group of children. Which party does the AV choose to put at risk?
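To see why such choices are so contentious once they must be written into software, consider a deliberately crude sketch of a "moral algorithm" that simply minimizes expected harm. Everything here is hypothetical – the scenario labels, probabilities, and harm model are illustrative assumptions, not how any real AV planner works – but it shows how a pure death-toll count ignores every other morally relevant factor, such as age or responsibility:

```python
# Illustrative sketch only: a toy decision rule that picks the swerve
# option with the lowest expected harm. All names, numbers, and the
# harm model below are hypothetical assumptions for this example.

def least_harm_option(options):
    """Return the option with the lowest crude harm score.

    Each option is a (label, people_at_risk, collision_probability)
    tuple; harm is naively modelled as people * probability.
    """
    return min(options, key=lambda o: o[1] * o[2])

# The scenario from the text: one pedestrian on one pavement,
# a group of children on the other (group size assumed here).
options = [
    ("swerve_left", 1, 0.9),   # the 80-year-old woman
    ("swerve_right", 4, 0.9),  # the group of children
]

choice = least_harm_option(options)
print(choice[0])  # a pure death-toll minimizer picks "swerve_left"
```

The point of the sketch is that the "ethics" live entirely in the harm model: change what counts as a cost (number of lives, age, occupant versus pedestrian) and the same minimization routine produces a different choice – which is exactly the decision regulators and designers are debating.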
Why Self-Driving Cars Must be Programmed to Kill
According to an article by MIT Technology Review – Why Self-Driving Cars Must be Programmed to Kill – research by Jean-Francois Bonnefon at the Toulouse School of Economics in France and his associates suggests that public opinion will play a strong role in public acceptance or rejection of autonomous vehicles. In general, they found that people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll. However, they're only prepared to go so far – they're in favor of cars that sacrifice the occupant to save other lives, as long as they don't have to drive one.
Public health issues
Janet Fleetwood of the Dornsife School of Public Health at Drexel University, in a submission to the American Journal of Public Health, says that: "Autonomous vehicles are replete with public health issues that have ethical implications that warrant cogent analysis and informed response.
"I argue for greater involvement starting now, during the design phase, of public health leaders and describe how the values of public health can guide conversations and ultimate decisions. By reflecting on the ethical and social implications of autonomous vehicles and working collaboratively with designers, manufacturers, companies like Uber and nuTonomy, city health departments, the public, and policymakers at the local, state, and federal levels, public health leaders can help develop guidelines that foster equity and safety across the population."
Gowling’s Young argues there should be a coherent plan for allocating regulatory responsibility for AVs in the United Kingdom, and in particular the moral algorithms that govern their behaviour, with an independent regulator tasked with striking the balance between legality, safety and commerciality. There should also be a legal framework that is sufficiently flexible and responsive to be able to make swift decisions about new technology as it is developed.