
Who should AI kill in a driverless car crash? It depends who you ask

Akuza89

Member
Moral responses to unavoidable damage vary greatly around the world in a way that poses a big challenge for companies planning to build driverless cars, according to new research.

The researchers, from Massachusetts Institute of Technology and other institutions, presented variations of the classic “trolley problem” thought experiment almost 40m times to millions of volunteers from all around the world.

In the traditional thought experiment, participants are asked to consider whether they would reroute a runaway trolley car which is about to hit and kill five people, directing it on to a siding where it would kill only one person. In the new quiz, dubbed “Moral Machine”, the researchers instead asked volunteers to consider what a self-driving car should do in examples from more than 26 million variations of the same question.

Should a car with three occupants, an adult man and woman and a child, swerve into a wall, killing them all, in order to avoid hitting three elderly people, two men and a woman? Should an unoccupied car swerve and kill an unemployed adult man, a child and a cat in order to save an adult man and woman and a child? Does the answer change if the pedestrian light is red? What if one of the people is unfit, or pregnant?

Responses to those questions varied greatly around the world. In the global south, for instance, there was a strong preference to spare young people at the expense of old – a preference that was much weaker in the far east and the Islamic world. The same was true for the preference for sparing higher-status victims – those with jobs over those who are unemployed.

When compared with an adult man or woman, the life of a criminal was especially poorly valued: respondents were more likely to spare the life of a dog (but not a cat).

The researchers, whose work is published in the journal Nature, also note “some striking peculiarities, such as the strong preference among those in the global south for sparing women and fit characters.

“Only the (weak) preference for sparing pedestrians over passengers and the (moderate) preference for sparing the lawful over the unlawful appear to be shared to the same extent in all clusters.”

The data comes with caveats, of course. Unlike traditional polling, the volunteers were entirely self-selected, reached in large numbers thanks to the viral nature of the “Moral Machine” quiz, which was covered by technology news sites like The Next Web and Business Insider. That means that, for instance, the data is likely to skew towards the wealthy in nations with weak internet penetration. More generally, they write, “most users on Moral Machine are male, went through college, and are in their 20s or 30s”.

Nonetheless, they argue that the findings should prompt policymakers and auto engineers to consider embedding some moral intuitions into self-driving cars. “Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them,” they write.

Among some autonomous vehicle engineers, however, that view is disputed. Speaking to the Guardian just after the Moral Machine quiz was first released, Andrew Chatham, a principal engineer on Google’s self-driving car project, said the problem has little bearing on actual design.

“It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes’,” he said. “You’re much more confident about things directly in front of you, just because of how the system works, but also your control is much more precise by slamming on the brakes than trying to swerve into anything. So it would need to be a pretty extreme situation before that becomes anything other than the correct answer.”

Source

Sooo... it's not robots that are going to kill us, it's driverless cars that are going to use a moral machine to decide if you should die... or the driver... well, that's a depressing future!
 

Cybrwzrd

Banned
Save the occupants, and if not occupied, accelerate into the retirees, fat people, criminals, cats, and the unemployed like an old man in a farmers' market.
 

DunDunDunpachi

Patient MembeR
A strange but workable solution would be to have all the cars on a grid so that you could divert other vehicles into the speeding one. Six people getting dinged up is better than two cars in a lethal collision.
 
Vendor: "Sir, now that we have gone through all the features of your autonomous car, there's one last question we need to ask before we can proceed to the payment options."

Client: "Certainly, go ahead."

Vendor: "Would you prefer a teleological or a deontological model?"

Client: "Erm, what?"
 
What about just letting the machine take its course and quit trying to engineer morality and destiny?

Oh right, two deaths are more expensive than one.

I'll take my chances riding a bear to pick up my physical copy of Red Dead Redemption 8, thank you.

Man, the more digital this world gets, the more I just want to sink away into the woods and anonymity...
 

DunDunDunpachi

Patient MembeR
[Image: "multi-track drifting" trolley problem meme]
 

A.Romero

Member
Very interesting. I also wonder if human reaction would be fast enough to allow a human driver to make the decision on the fly.

Personally, I would randomize the selection given that in my opinion it's pretty much what happens with a human driver.
 
In this situation, the first moral responsibility of an engineer is to ensure protection of those first impacted by a design, or those who are guaranteed to be using the design.

The driver and occupants of a car are guaranteed users. Pedestrians jaywalking, regardless of age, are not guaranteed under normal operation of the car.

Now, you can plan for it, but any engineered action that harms the driver due to stochastics is immoral. There are only so many what-ifs you can plan for in an uncontrolled environment, and honestly a few lives here or there is not really that big a deal to me. People die, shit happens, and eventually attempts to prevent such become too authoritarian depending on your perspective.
 

Pagusas

Elden Member
Why do these surveys always give information about the victims? It's not like a self-driving car will identify the potential victims in its database before making a judgment call. It will simply know: "people are in my seats and people are in front of me; should I swerve and risk the people inside, or plow through the people ahead of me?"
 
What BS. The car should do what the driver would do. Would someone riding with their wife and kids sacrifice them for random strangers on the road?

Just tell someone their car is going to sacrifice their family for some strangers and see how well that sales pitch goes.
 

Zenaku

Member
If a car is in a position to make that decision, then it would have to know it was about to crash. It'd make more sense in that situation for the car to inflate the airbags before the impact (as opposed to at the moment of impact), which should soften the blow for all passengers.

Alternatively, they could come up with new solutions that aren't bound by the "crash first, inflate later" limitation of modern cars. Demolition Man-style foam, perhaps? Coming up with new safety measures that would have more time to work in an autonomous car would be a far better way of dealing with potential disaster than letting the car decide someone's fate.
 

I_D

Member
The vehicle should protect its inhabitants whenever possible.

Provided the traffic-guidance works properly, the person in the road is at fault anyway.
 

Tesseract

Banned
the vehicle should protect its inhabitants i agree, unless the scope is so grossly out of order, like it's about to mush 100 people in a sizable crowd

tbh i never really thought about this before, it's def gonna come up sooner or later
 
the vehicle should protect its inhabitants i agree, unless the scope is so grossly out of order, like it's about to mush 100 people in a sizable crowd

tbh i never really thought about this before, it's def gonna come up sooner or later
Human moral logic doesn't work that way, afaik. Take the famous railroad track experiment: put friends and family members on one side and a bunch of strangers on the other. Who would choose the strangers, even if there are more of them?

It is one thing to self sacrifice, but if your car is carrying family or friends, it would be pretty unacceptable to sacrifice them for the greater good.
 

Tesseract

Banned
Human moral logic doesn't work that way, afaik. Take the famous railroad track experiment: put friends and family members on one side and a bunch of strangers on the other. Who would choose the strangers, even if there are more of them?

It is one thing to self sacrifice, but if your car is carrying family or friends, it would be pretty unacceptable to sacrifice them for the greater good.

true. i'd want the vehicle to do the right thing, what that means will probably change over time as the technology evolves.
 

Airola

Member
Could terrorists program an AI truck to calculate the most efficient way to drive over as many people as possible?
 

LordPezix

Member
The AI should assign a reasonable saving throw with appropriate modifiers based on characteristics of the individual and run it through a random number generator. If they pass, they live; if not... well... maybe they'll roll their reflex save?
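In the same joking spirit, the "saving throw" above could be sketched as a d20-style roll. Everything here (the function name, the modifiers, the difficulty class) is made up for illustration:

```python
import random

def saving_throw(modifiers, dc=10, rng=random):
    """Roll a d20, add situational modifiers, and compare against a
    difficulty class (dc). Returns True if the pedestrian 'passes'."""
    roll = rng.randint(1, 20)
    return roll + sum(modifiers.values()) >= dc

# Example: a nimble jaywalker gets +2 reflex, but it's dark out (-1).
result = saving_throw({"reflex": 2, "low_visibility": -1}, dc=12)
```

Obviously not a serious proposal, but it does make the point that "randomize it" is itself a policy choice someone would have to program in.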
 

iamblades

Member
Autonomous systems can and should be engineered such that they do not have to make such decisions. The engineering solution to the trolley problem is increasing the visibility/awareness range or decreasing the stopping/avoidance distance or segregating the traffic so that it is impossible for people to physically access the trolley track.

Take aviation autopilots/TCAS for example. They don't make any kinds of ethical decisions because they are engineered to stay within a perfectly safe envelope of behavior. Sometimes there are oversights and sometimes there are mechanical/electrical/software failures that prevent the system from working properly, but that doesn't mean we need to program the computer to worry about subjective ethics. It just means we need to engineer a better system.

Worth noting that the 'system' in question includes the roads, and it's likely that fully optimized autonomous vehicles will require us to redesign our roads to better segregate pedestrian traffic from medium-to-high-speed vehicle traffic, and to do other things (like improving visibility at intersections). These are things we should be doing even for human drivers, though.

The ethics aren't really a problem for a system that knows to the centimeter how far it can see and how long it takes to stop. It should be engineered to be perfect.
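The "perfect envelope" idea can be sketched with basic kinematics: if the car knows its sensing range and braking deceleration, it can cap its speed so it can always stop inside what it can see. A minimal sketch, with purely illustrative numbers (not real vehicle specs):

```python
import math

def max_safe_speed(sensing_range_m, decel_mps2, reaction_time_s=0.1):
    """Highest speed (m/s) at which the car can always stop within its
    sensing range, given stopping distance d = v*t + v^2 / (2*a).

    Solving the quadratic v^2/(2a) + v*t - d = 0 for v gives:
        v = a * (-t + sqrt(t^2 + 2d/a))
    """
    a, t = decel_mps2, reaction_time_s
    return a * (-t + math.sqrt(t * t + 2 * sensing_range_m / a))

# E.g. 60 m of clear sensing and 6 m/s^2 of braking allow roughly
# 26 m/s (~95 km/h); halve the sensing range and the safe speed drops.
v = max_safe_speed(60, 6)
```

This is the sense in which the trolley problem becomes an engineering constraint: the car simply never outdrives its own perception.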
 

Weiji

Banned
If I pay to buy or take a ride in a car it damn well better pick me over any non-paying third party.
 

llien

Member
“some striking peculiarities, such as the strong preference among those in the global south for sparing women and fit characters.

I need to read up on what "global south" means (meh, paywalled :(), but it's hardly surprising for our species to value women more, given how we reproduce.

Why do these surveys always give information about the victims? It's not like a self-driving car will identify the potential victims in its database before making a judgment call. It will simply know: "people are in my seats and people are in front of me; should I swerve and risk the people inside, or plow through the people ahead of me?"
It shouldn't be that hard to identify kids, for instance.
Sex can be identified quite reliably too, even with a casual look.

Even if we can't put all that into cars right now, it's a rapidly developing branch, so such moral dilemma questions must be addressed upfront.

Interesting to note: AI can quite reliably determine sex by checking EM signals from the brain.
 
It shouldn't be that hard to identify kids, for instance.
Sex can be identified quite reliably too, even with a casual look.
Robotics is also improving. Someone could throw a moving, lifelike humanoid robot into the road. Eventually it'd be impossible to tell whether it's a real person or a fake.
 

Hudo

Member
If we managed to put matter into existential superposition, we could have racing ghosts like in time trial modes in racing games.
 

Liberty4all

Banned
While obviously you want a car that will prioritize your life and those of your passengers, it's possible the government could legislate that car AI must decide based on the greater good (one death, even if it's the driver, beats two pedestrian deaths, for example).
 

llien

Member
Someone could throw a moving, lifelike humanoid robot into the road. Eventually it'd be impossible to tell whether it's a real person or a fake.
But... why? If the intent is to kill someone, aren't there easier ways to do it?
 
Who should it kill? Duh! Poor people of course. Useless eaters...

Matter of fact each AI car should come equipped with a FICO mandated Sniper Rifle that takes precise shots at a random amount of plebs each day as you roll past in your shiny chariot. I'd buy that for a dollar!
 

Future

Member
Save the occupants, aim towards people on the road. Hopefully they're paying attention and get out of the way.

But honestly, it should probably be the opposite. Make the people in the autonomous car sign the release and accept the risk, not the random innocent people on the street who continue to own their own destiny and drive their own cars.
 
But honestly, it should probably be the opposite. Make the people in the autonomous car sign the release and accept the risk, not the random innocent people on the street who continue to own their own destiny and drive their own cars.

That's how I would do it. Put a self-destruct charge inside the autonomous car. I don't care if the car has 4 people inside; they should all go out before an innocent random is killed by the car, like if, say, it blew a tire and went out of control and had to either run someone over or collide with a wall.
 

patrickb

Member
Sooo... it's not robots that are going to kill us, it's driverless cars that are going to use a moral machine to decide if you should die... or the driver... well, that's a depressing future!

We are somehow overdoing this. Just because we are trying to tell a machine to choose the best possible option doesn't mean it's designed to kill, either.
It's just one of the many possibilities that needs to be addressed.
Easiest would be to ask yourself: what would you do in this case?
 

Future

Member
We are somehow overdoing this. Just because we are trying to tell a machine to choose the best possible option doesn't mean it's designed to kill, either.
It's just one of the many possibilities that needs to be addressed.
Easiest would be to ask yourself: what would you do in this case?

Hopefully you'd try to minimize all damage. Since it's impossible to predict the damage you'd do to yourself, I'd instinctively swerve away from pedestrians and try not to crash into anything solid. If I did know ahead of time, and had to press a button confirming what to do..... well, that's fucked up and feels more like a fucked-up hypothetical than a real-world problem. Kill the baby or kill yourself..... that's fucked
 

patrickb

Member
Hopefully you'd try to minimize all damage. Since it's impossible to predict the damage you'd do to yourself, I'd instinctively swerve away from pedestrians and try not to crash into anything solid. If I did know ahead of time, and had to press a button confirming what to do..... well, that's fucked up and feels more like a fucked-up hypothetical than a real-world problem. Kill the baby or kill yourself..... that's fucked

Well, there's a lot happening around that aspect too. There's a startup working on coordinating autonomous vehicles in real time, so each one knows the exact positions of the others and they all stay in sync. This would help determine the real-time situation and flag potentially conflicting scenarios.
Let's say we achieve this; still, you have to tell the AI how to react, irrespective of the fact that it might never crash at all.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Why do these surveys always give information about the victims? It's not like a self-driving car will identify the potential victims in its database before making a judgment call. It will simply know: "people are in my seats and people are in front of me; should I swerve and risk the people inside, or plow through the people ahead of me?"

And why would it matter if the person is unemployed, overweight, or elderly?
 

Cybrwzrd

Banned
And why would it matter if the person is unemployed, overweight, or elderly?

The overweight make for bigger, squishier targets to soften the impact of the vehicle better. Elderly are already one foot in the grave, and productive people are more valuable, naturally. And well, cat dander tries to kill me, so the world is better off without them. They need to add stockbrokers and lawyers though as targets.
 
That's how I would do it. Put a self-destruct charge inside the autonomous car. I don't care if the car has 4 people inside; they should all go out before an innocent random is killed by the car, like if, say, it blew a tire and went out of control and had to either run someone over or collide with a wall.

The innocent rando should not have been in the street in the first place, in general. Most pedestrians struck are jaywalkers.
 