Morals and Automatic Cars

G’day,

You're in your fully automated car when a little boy runs out onto the road in front of you. Your car's AI determines that there are two possible courses of action:

a) Swerve to avoid the child, but likely kill the car's passenger/s (i.e. you)
b) Run the child over, and save the car's passenger/s (i.e. you)

What do you choose?

This question is being asked now that we are getting much closer to driverless cars… And indeed, we have just had a car, a Tesla, crash while being driven in automatic mode, killing its driver.

I have been picturing a utopian driverless paradise for some time now: very fast cars all interacting with each other, aware of each other's intended courses, and thus able to travel fast and yet avoid accidents… But somehow I'd not considered the unexpected… Even a dog running out in front of a car may result in a fatal crash - much as can happen with human drivers.

And the main issue… until EVERY car is driven by computer, you have more than just random walk-ons to deal with - there are unpredictable human drivers too.

There will be deaths… But as Tesla has pointed out, on a per-mile-driven basis their automatic system has a lower fatality rate than the human-driven average…

cheers

cosmic

That hypothetical dilemma already exists now, with or without driverless cars. When I was learning to drive I was always told not to swerve for an animal if avoiding it meant risking a crash. I think with a child a human's reflexes will kick in and the instinct will be to avoid the collision - self-preservation wouldn't have time to come into play.

With an AI, I assume the programming would be to avoid hitting the pedestrian while protecting the driver, and the AI should theoretically be able to do this better than a human.

Yes, but the issue is what you program the AI to do when it's an either/or situation and there is no way to save both the occupants and the person on the road.

Personally I'm not comfortable buying a car that will make a calculation like this: three people run out onto the road (let's say drunks rather than kids, to keep the emotion out of it for the moment), my car decides there is no way to avoid them and keep me safe, and since there is one of me and three of them it drives off the road and into the trees (where it pancakes against a gum tree), killing me but saving the people on the road.

Or do you program the car to minimize the impact on the pedestrian who's broken the road rules, so it emergency-brakes but still hits the child at 40 km/h, killing him (let's bring back the little boy who ran from his mum because she wouldn't buy him an ice cream).

The first option would result in legal action against the car maker and the second would result in nationwide calls for regulation.

Either outcome would probably be enough to keep driverless cars off our roads.

There are many, many other scenarios one can imagine that would cause issues as well.

The problem for driverless cars is not that they need to be better than humans (they can manage that), it's that they need to be perfect.

Interesting conundrum for Asimov's First Law of Robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Wonder what Isaac would say? :confused:

Dr. Asimov struggled with that exact issue in several of his novels. In the earlier stories, when robots were more mechanical, they tended to go into shutdown, avoiding the issue; in later novels he introduced the Zeroth Law, which put humanity's good above the individual's good. But that doesn't really answer the question either.

The state is becoming more technocratic, so it will not have a problem letting the computer decide. Your views will not be important.

Why would swerving to avoid the child necessarily mean likely killing you and your passengers? On roads where a child is likely to run out in front of a car, average speeds are much lower. On such a road, the decision whether or not to swerve would be based primarily on the surrounding objects and people.

I think it's unrealistic (at or near the current state of the art) to expect an intelligent system (as distinct from real AI, which we don't have) to be able to distinguish between roadside objects that are 'safe' to swerve into (such as bushes) and those that are not (such as trees).

And even if the technology advances far enough to make such choices reliably, that doesn't answer the question of how you determine which lives are saved and which are placed in danger in situations where there is no safe diversion.

Let's assume, for the sake of the debate, a country road with trees alongside it (remnant vegetation that isn't allowed to be removed), a 100 km/h speed limit, and scattered tree-change properties every few hundred metres (similar to places near where I live).

Let's further assume that a young child chases her dog onto that road into oncoming traffic.

Does that car swerve and hit a tree (almost certainly killing the people on board) or does it brake and hit the child (almost certainly killing the child)?

Until you can answer that question (and others like it), you cannot program the car's response.

And if you do answer it, you'd better be able to justify it, because some lawyer out there is just waiting to tear your answer to pieces.

I don't know enough about how the cars are programmed to respond, but I expect they'll brake hard. They have the advantage of being able to spot and react to an object entering the road faster, possibly preventing the accident from occurring at all, where a person's reaction time would have been a lot slower.

I don't think any car will be programmed to swerve and risk killing the occupants when an object (whatever that may be) suddenly appears on the road. If a child (or animal, or anything else) appears suddenly while the car is travelling at speed, the only sane option in my opinion is hard braking. Swerving at that speed will almost guarantee an accident, whereas braking probably will not. Again, this depends on the complexity of the programming in the car, or the skill of the driver if it isn't automated.
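Purely as a guess at how that sort of "brake first, swerve only if it's definitely safe" rule might be written down - this isn't anyone's actual implementation, and every name, field and value in it is invented for illustration:

```python
# Very rough sketch, not any manufacturer's real logic - just how I imagine a
# "brake hard by default, swerve only onto a verified-clear path" rule could
# be expressed. All names and fields here are made up.

from dataclasses import dataclass
from typing import List


@dataclass
class SwervePath:
    name: str
    clear_of_people: bool     # no pedestrians detected anywhere along the path
    clear_of_obstacles: bool  # no trees, poles, oncoming cars, ditches, etc.


def choose_response(swerve_paths: List[SwervePath]) -> str:
    """Pick an evasive action when something suddenly enters the road."""
    # Default action: emergency braking. It never trades one collision for
    # another; it only scrubs off energy from whatever impact remains.
    action = "EMERGENCY_BRAKE"

    # Swerve only onto a path the sensors can verify is completely clear,
    # so the car never has to weigh occupants against pedestrians at all.
    for path in swerve_paths:
        if path.clear_of_people and path.clear_of_obstacles:
            action = f"BRAKE_AND_STEER:{path.name}"
            break

    return action


# A verge lined with gum trees is not a clear path, so the car just brakes;
# an empty sealed shoulder is clear, so it brakes and steers onto it.
print(choose_response([SwervePath("tree_lined_verge", True, False)]))  # EMERGENCY_BRAKE
print(choose_response([SwervePath("empty_shoulder", True, True)]))     # BRAKE_AND_STEER:empty_shoulder
```

The point of writing it that way is that the car never steers onto a path it can't verify is clear, so it never has to make the occupants-versus-pedestrian trade-off in the first place.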

If a child runs onto a road with a 100 km/h speed limit to chase their dog, then the issue isn't about the car or the driver. Regardless of what vehicles are on the road, that child is going to be killed if a car comes along at the same moment. The press will whip it up into a frenzy of discussion about self-driving cars, and everyone will forget the obvious question of why the hell the kid was on the road in the first place.


That's a viewpoint (and others have expressed it); however, legal action might be possible against a car maker who programmed such a response, if there was reasonable cause to believe that in a particular situation swerving could have been achieved safely for the occupants, thus avoiding the errant pedestrian.

It's my honest opinion that the biggest hurdle in automating cars is not the technology itself but the legal challenges it faces.

Indeed, auto cars will ultimately need some interesting legislation, and I imagine that legislation will vary from jurisdiction to jurisdiction.

The guy who died while his Tesla was on (beta) auto-drive - neither he nor the car noticed the truck they collided with, apparently because the truck blended in with the sky. So indeed, object recognition still needs work…


I think driverless cars would work best if everyone else drove them too - then we would have smooth traffic flow with minimal accidents. All it takes is one person deciding to drive themselves to cause potential havoc.

Will be an interesting topic with a lot of thought going into it.

Bit off topic, but I wish people would just drive more safely now. Sick to death of A) people on phones, B) speeding, C) tailgating.

I am a P-plater who does the limit. Not over, and only under if there is a good reason, so don't bloody tailgate me. I spent the whole time as an L-plater having people do stupid things to get around me; when I got my Ps I found the behaviour changed - now people like to sit behind me tailgating.

I also catch the bus a lot, and the number of people I see using phones while driving is ridiculous - as are the massive touchscreens in modern cars.


Even though the situation above (girl, car, ethics) would probably be rare, it is nonetheless a scenario that would undoubtedly occur sooner or later, and when you consider the worldwide car market, once more AI cars are out there… that's more situations where the AI has to make these decisions.

Yes, perhaps it's not a little girl… maybe it's a drunk stoner meandering stupidly along the highway. Much easier choice of course - drive right over him! :scream: But ultimately, with a program taking control of your motor car, these decisions need to be addressed… and the answers need to be something you can live with.

I can just picture it now… a new pastime for delinquent youth… forget throwing stones off the overpass at passing cars… it'll be running out into oncoming traffic to watch the resultant miraculous driving as all the AI cars try to avoid you… Not to mention: "It wasn't me who ran my wife over, officer - it was my car's fault…" (A new murder defence…)

It reminds me of I, Robot (the Will Smith version; I have to admit I've never read the book…). The car will be weighing up all the odds and fatality counts before you've even realised there's a problem…

Of course… a truly smart car could see a potential problem well in advance of a human, so it could get out of situations that a human couldn't… It could make predictions about the conditions ahead - such as seeing a small girl near the side of the road and slowing down, knowing of course that when it's safer it can speed up again…
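Just to make that "slow down early" idea concrete, here's a completely made-up back-of-the-envelope sketch - not any manufacturer's actual logic, and all the numbers are assumptions:

```python
# Made-up illustration of pre-emptive slowing: if the car spots something that
# *could* step onto the road, cap its speed so it can still stop within the
# distance to the hazard. The braking and delay figures are just assumptions.

import math


def precautionary_speed_kph(distance_to_hazard_m: float,
                            max_braking_mss: float = 7.0,
                            reaction_time_s: float = 0.2) -> float:
    """Highest speed from which the car could still stop before the hazard.

    Solves d = v*t + v^2 / (2*a) for v, where d is the distance to the
    hazard, t is the system's detection-to-braking delay and a is the
    assumed braking deceleration.
    """
    a = 1.0 / (2.0 * max_braking_mss)
    b = reaction_time_s
    c = -distance_to_hazard_m
    v_ms = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # metres/second
    return v_ms * 3.6  # convert to km/h


# A small girl spotted 40 m ahead near the verge: ease back to roughly this
# speed, then resume the limit once she is safely behind the car.
print(round(precautionary_speed_kph(40.0)))  # ~80 km/h under these assumptions
```

Under those assumptions, spotting her 40 m out means easing back to about 80 km/h, well before a human driver would even have registered she was there.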

The question stands though - just how do you program such systems… does the safety of the driver, or of those around the driver, come first? When you think of situations such as Sophie Delezio's… (twice) you can't help but think that you really should be putting the "innocent bystander" first… But that's easier to say when it's YOUR driving that's caused the problem, i.e. my driving has caused an accident, so I'd rather be the one to suffer than someone else. But when it's your car that's doing the driving… do you feel the same way? My car's AI screwed up, and now I'm dead in order to save someone else…?


It's worth pointing out that the guy didn't see the truck because he was watching a movie on a portable DVD player…

Also, the reason the Tesla itself didn’t pick up on the truck was that, as with most semitrailers, it was too high off the ground for the Tesla’s sensors to detect. Not that it was indistinguishable from the sky. The Tesla went under the truck and was struck at head height…something that has happened to non-autonomous cars as well.

Tesla could address this by adding vertical sensors above the windshield, or the trucking industry could add beacons to their trucks.

Or…ya know…don’t watch a movie and let a beta feature drive your car for you.
