Self-Driving Cars Will Have to Make Moral Decisions
With the constant advances in modern technology, not much comes as a surprise anymore. We can do pretty much everything apart from teleportation, and even that’s being researched thoroughly as you read this, with a hope to make it a reality one day.
But until then: self-driving cars.
That’s right, cars that drive themselves for you. This could, of course, simply lead to humans being (even more) lazy…
Nevertheless, I’m sure the idea of self-driving cars is something we all find pretty exciting, right?
There are some serious things for the manufacturers to consider though, including a particularly crucial moral decision that the car will have to make on its own.
In the event of an unavoidable crash, who should die?
A car making a moral decision? This is 2018, folks!
Companies will have to design moral algorithms to be implemented in these cars, which will have to decide whether the passengers in the car should be sacrificed or the pedestrians put at risk.
Researchers from the MIT Media Lab launched an experiment in 2014 and, after four years, have now analysed over 40 million responses to gain some insight into what people think is the best answer to this moral dilemma.
How did this experiment work?
People were given several scenarios to consider. They had to decide whether a self-driving car should sacrifice its passengers or swerve to avoid the danger at the risk of killing: a known criminal, a successful business person, a group of elderly people, pedestrians crossing the road when they had been told to wait, or a herd of cows.
In general, the data from the 40 million responses suggests that people prefer to save humans over animals, prioritise the young over the old, and choose to save as many lives as possible.

Other, smaller trends showed a preference for saving women over men and pedestrians over passengers.
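To make the idea of a "moral algorithm" concrete, here is a purely hypothetical sketch of how such preferences might be encoded as weights and used to rank crash outcomes. The categories, weights, and function names are all invented for illustration; no real manufacturer's system is described here.

```python
# Hypothetical sketch of a "moral algorithm": rank possible crash outcomes
# by a weighted harm score and pick the least harmful one. The weights are
# invented, loosely echoing the survey trends (humans over animals, young
# over old, fewer casualties over more) -- not any real system's values.

from dataclasses import dataclass

# Invented preference weights; higher means a greater loss.
WEIGHTS = {"human_adult": 1.0, "human_child": 1.3, "animal": 0.2}

@dataclass
class Outcome:
    label: str
    casualties: list  # category strings drawn from WEIGHTS

def harm_score(outcome: Outcome) -> float:
    """Sum of weighted casualties; lower is better."""
    return sum(WEIGHTS[c] for c in outcome.casualties)

def choose(outcomes: list) -> Outcome:
    """Pick the outcome with the least weighted harm."""
    return min(outcomes, key=harm_score)

# Example: stay in lane (hitting two pedestrians) vs. swerve (hitting an animal).
stay = Outcome("stay in lane", ["human_adult", "human_adult"])
swerve = Outcome("swerve", ["animal"])
print(choose([stay, swerve]).label)  # prints "swerve"
```

Even this toy version shows why the debate matters: every number in the weight table is a value judgement, and someone has to decide what those numbers are.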
Is this safe and can we trust cars to make such decisions?
The researchers analysing this data acknowledge how significant this is: never before in the history of humanity, they note, have we allowed a machine to decide on its own who should live or die. The car would be making this decision in a split second, without real-time supervision.
“Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policy-makers that will regulate them,” they added.