As other commenters over there have already astutely observed, a self-driving car should theoretically never get into that situation in the first place. If there is a large mass of people suddenly streaming across the road, the car should have “seen” that coming long before it became an issue and begun slowing as a result. That, or it would have been going more slowly in the first place based on the conditions.
How will a self-driving car be programmed to react to a deer in the road? Will it brake in a straight line (which is the safest reaction)? Will it react to deer that it “sees” outside its direct path of travel (down in ditches, for example)? Will it be able to correctly anticipate the completely random reaction of the deer (some stand still, some jump out of the way, some follow the last one that was in the road)?
“How will a self-driving car be programmed to react to a deer in the road?”
Yeah, that’s what I thought the article was going to be about, not pedestrian catch-22s. Sometimes it’s not safe to brake because it could cause a deer to go through the windshield, or maybe that is an urban myth. There certainly could be a need to program the cars to kill animals when necessary.
Worth watching; it answers some of the questions you have: https://www.ted.com/talks/chris_urmson_how_a_driverless_car_sees_the_road
Current systems continuously model the predicted behavior of vehicles, people, and other moving things, and they drive very defensively.
Once they’re regularly driving in areas with deer, they’ll build up a behavioral model and will likely slow down when they detect one that’s likely to dart out.
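Something like the following is a minimal sketch of that kind of logic. The class, fields, risk scores, and thresholds are all invented here for illustration; real systems use learned behavior models rather than hand-tuned rules.

```python
# Hypothetical sketch of defensive speed selection around detected animals.
# All names, fields, and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str               # e.g. "deer", "pedestrian", "vehicle"
    distance_m: float       # distance from our lane edge
    heading_to_road: bool   # is it moving toward the roadway?

def dart_out_risk(obj: TrackedObject) -> float:
    """Crude risk score in [0, 1] that the object darts into our path."""
    if obj.kind != "deer":
        return 0.0
    risk = 0.2                # baseline: deer are unpredictable
    if obj.distance_m < 10.0:
        risk += 0.4           # close to the shoulder or ditch
    if obj.heading_to_road:
        risk += 0.4           # already moving toward the road
    return min(risk, 1.0)

def target_speed_mph(cruise_mph: float, objects: list[TrackedObject]) -> float:
    """Reduce speed in proportion to the riskiest nearby animal."""
    worst = max((dart_out_risk(o) for o in objects), default=0.0)
    return cruise_mph * (1.0 - 0.6 * worst)

# Example: a deer in the ditch 8 m away, facing the road, cuts 55 mph to 22 mph.
print(target_speed_mph(55, [TrackedObject("deer", 8.0, True)]))
```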
Can they even recognize deer? Will they slow down if there are a bunch of deer nowhere near the road?
Self-driving cars would have to assume pedestrians aren’t suicidal–that they won’t jump out in front of the car with no warning. Ditto bicycles. Maybe they could be made fool-proof, but I doubt they could be made damn-fool proof.
Can self-driving cars be herded? Could a bunch of pedestrians “steer” a self-driving car by jumping out in front and to either side of it and walking (or running) up to it? Maybe force it into backing up? Maybe force it into backing up into a waiting truck?
So far the self-driving cars haven’t had to deal with people being intentionally malevolent.
We are only beginning to see the panoply of issues that A.I. raises. The ongoing fallacy is the idea that A.I. mimics human intelligence; that is only a surface phenomenon. A machine with sensors that operate at 1,000x or more the speed of neurons firing, and with greater abilities than human senses, will have a very different world-view than people. The choices it makes could be perceived as negligent even though, upon deeper inspection, they are far from it.

Suppose, for example, that to avoid a high-speed accident with a pedestrian the car’s AI executes a high-performance 360-degree spin around the potential victim. Such bizarre “Bonneville Salt Flats” driving, seen today only in automobile commercials, might become routine for autonomous cars. Will the riders in the car who suffer minor contusions, or even more serious stress trauma, then be entitled to sue the automaker out of existence even though the outcome was a life saved?

There are other examples that don’t involve injury at all, such as the subtle issue of the psychological perception of well-being. Fully autonomous cars have the potential to solve rush-hour backups by linking themselves into a “virtual train.” With bumper-to-bumper clearance of only inches, and with all cars’ steering, accelerators, and brakes synchronized wirelessly, there is no reason they could not travel freeways to downtown at speeds in excess of 70 mph with a margin of safety far beyond what a human driver, with human responses controlling only a single vehicle, could ever achieve. The engineering is possible; the question is whether people will be comfortable riding in a vehicle under such circumstances.

There is much work to be done yet. Most of the issues have yet to be discovered, IMO.
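On the “virtual train” idea specifically, the core control problem is gap-keeping: each following car combines the lead car’s broadcast commands with a correction for its own spacing error. A minimal sketch, with the gains, gap, and inputs invented here (real platooning controllers also have to handle wireless latency, packet loss, and string stability):

```python
# Minimal sketch of constant-gap platoon following ("virtual train").
# Gains, gap, and inputs are invented for illustration only.

DESIRED_GAP_M = 0.5   # bumper-to-bumper clearance of "only inches"
KP_GAP = 1.5          # proportional gain on spacing error (m/s^2 per m)
KD_GAP = 0.8          # derivative gain on relative speed (m/s^2 per m/s)

def follower_accel(lead_accel: float, gap_m: float, relative_speed: float) -> float:
    """Acceleration command for a following car.

    lead_accel     -- acceleration broadcast by the car ahead (feed-forward)
    gap_m          -- measured gap to the car ahead
    relative_speed -- (lead speed - our speed); positive means the gap is opening
    """
    gap_error = gap_m - DESIRED_GAP_M
    return lead_accel + KP_GAP * gap_error + KD_GAP * relative_speed

# Example: lead car holds speed, our gap has shrunk to 0.3 m and is closing
# at 0.2 m/s, so we brake gently to reopen it.
print(follower_accel(0.0, 0.3, -0.2))   # -> -0.46 m/s^2
```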
Avoid a collision by performing a race-car power slide?
Cool!
With a little practice: https://www.youtube.com/watch?v=RY93kr8PaC4
Typically, I can’t stand the practice of marketing, whereby talented humans create words to describe an objective situation in such a way as to make it as palatable as possible to another set of humans.
This is the perfect place to deploy such a talent, and it is desperately needed, as a full ecosystem of self-driving cars will be MUCH, MUCH safer overall.
I’m not a marketer, but I’d imagine a good solution would be to program the car to:
a) stay on the road for all but the most horrific collisions
b) exit the road in an emergency only if the exception in (a) applies (a truly horrific collision is otherwise unavoidable) and there appears to be no collision risk in going off the road.
c) THAT’S it.
Then the marketers would “sell” the idea that the car is predictable, always obeying the rules, doing the best it possibly can do to keep things safe ON THE ROAD. In the crowd on the road situation, it might hit the crowd, but it would do everything it could to stop. End of marketing story.
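The proposed policy is simple enough to write down directly. A sketch, assuming the perception layer could ever reliably produce the two input flags, which is exactly the hard part:

```python
# Sketch of the "stay on the road" policy from (a)-(c) above.
# The input flags are assumed to come from perception/planning; whether they
# can ever be computed reliably is the open question.

def choose_maneuver(severe_collision_unavoidable_on_road: bool,
                    off_road_path_is_clear: bool) -> str:
    # (b) Leave the road only when staying on it means a truly horrific
    #     collision AND the shoulder/ditch is verifiably free of people
    #     and obstacles.
    if severe_collision_unavoidable_on_road and off_road_path_is_clear:
        return "exit_road"
    # (a) Otherwise stay in the lane and brake as hard as possible.
    # (c) That's it -- no further heroics.
    return "brake_in_lane"

# The crowd-on-the-road case: a collision may be unavoidable, but the shoulder
# is not verifiably clear, so the car stays in its lane and brakes.
print(choose_maneuver(True, False))   # -> "brake_in_lane"
```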
If these cars are overall much safer than other cars on the road, I don’t see why these issues would make me less likely to buy one of these cars. It improves my chances.
Remember that horrible hundreds-of-vehicles pileup last winter? That’s because the average driver can’t properly judge the maximum safe speed on icy roads. I don’t want my car to be able to be driven unsafely. There are many more cases like that than the rather contrived worst-case scenarios they tested.
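Rough numbers support this. Using the textbook braking relation d = v²/(2µg) with approximate friction values (dry asphalt around 0.7, ice around 0.1; the exact figures are assumptions that vary with tires and surface), a car that knows the surface is icy can compute the fastest speed at which it can still stop within its sight distance:

```python
import math

G = 9.81  # m/s^2

def stopping_distance_m(speed_kmh: float, mu: float) -> float:
    """Braking distance from the textbook relation d = v^2 / (2*mu*g)."""
    v = speed_kmh / 3.6
    return v * v / (2 * mu * G)

def max_safe_speed_kmh(sight_distance_m: float, mu: float) -> float:
    """Invert the relation: fastest speed that can still stop in time."""
    return math.sqrt(2 * mu * G * sight_distance_m) * 3.6

# Approximate friction coefficients: dry asphalt ~0.7, ice ~0.1.
print(round(stopping_distance_m(100, 0.7)))   # ~56 m on dry road at 100 km/h
print(round(stopping_distance_m(100, 0.1)))   # ~393 m on ice -- about 7x longer
print(round(max_safe_speed_kmh(100, 0.1)))    # ~50 km/h to stop within 100 m on ice
```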
Are we postulating a driving AI/sensor suite that can positively differentiate between pedestrians of all shapes and sizes and, say, deer or inanimate objects? Positively enough to put the owner’s life in peril or forfeit? I find it hard to believe that such certitude is technically plausible.
However, if it is, there’s an easy solution that gets the manufacturers off the hook legally and ethically: let the owner configure the settings dictating what the car will do under such circumstances. That’s where the decision rests now, and where it ethically should rest.
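For what it’s worth, the configuration itself would be the trivial part; everything feeding it is hard. A hypothetical sketch of what such owner settings might look like, with every option name invented here (no shipping system exposes anything like this):

```python
# Hypothetical owner-facing ethics settings. All option names are invented;
# whether anyone should ever expose these is exactly the debate.

from dataclasses import dataclass
from enum import Enum

class CollisionPriority(Enum):
    PROTECT_OCCUPANTS = "protect_occupants"
    MINIMIZE_TOTAL_HARM = "minimize_total_harm"
    ALWAYS_STAY_IN_LANE = "always_stay_in_lane"

@dataclass
class OwnerEthicsConfig:
    priority: CollisionPriority = CollisionPriority.ALWAYS_STAY_IN_LANE
    allow_leaving_roadway: bool = False
    swerve_for_animals: bool = False   # never trade occupant risk for a deer

# The owner signs off on these once, much as a human driver implicitly
# carries such preferences around today.
config = OwnerEthicsConfig(priority=CollisionPriority.PROTECT_OCCUPANTS)
print(config)
```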
“let the owner configure the settings dictating what the car will do under such circumstances.”
Silly to think you could have control over something you bought.
Others may do as they wish, but I certainly wouldn’t get inside a conveyance knowing that it was designed to kill me under some predetermined set of circumstances.
“but I certainly wouldn’t get inside a conveyance knowing that it was designed to kill me under some predetermined set of circumstances.”
Have you ever been a passenger in a taxi, bus, or commercial airplane? Aren’t you facing the same issues with human pilots and drivers? Given the right predetermined set of circumstances, human pilots and drivers can be quite predictable…
I have. However, there is, I think, a difference not only of degree but of kind between a human operator, who will, I presume, attempt to preserve his own life to the best of his ability, and a machine whose manufacturer would presumably be admitting that, under thus-and-such circumstances, it is programmed to self-destruct and kill the passenger or passengers.
The first time it happens, I would assume that there would be a lawsuit for wrongful death by the next of kin, probably concurrent with the arrest of the manufacturer’s president, CEO, and programming team, to be charged individually and severally with premeditated aggravated murder in the first degree.
And it might be a while before anyone built a self-driving vehicle again. Which is just as well by me. I’ve ranted in other forums about potential problems I see with the concept, from security issues (everything from bored teenage “hackers” to computer viruses infecting the control circuitry, assuming it’s not 100% air-gapped and 100% autonomous) to government kill switches (“We’re sorry, Citizen, but due to Global Warming you have exceeded your personal yearly Carbon Footprint Allowance. You have been found guilty of excessive petroleum use and are being transported to the nearest Climate Change Police facility for arrest and incarceration for your crimes against Gaia. If you wish to dispute this and exercise your right to travel, please visit the Homeland Security Transportation Safety Administration in Bethesda, Maryland, in order to submit Form 237A16/AF-17…”). Others may purchase and ride such vehicles if they wish, but I will not do so willingly.
CJ touched on the issue. Until A.I. gets human status (never), the driver needs to remain liable for anything the vehicle is involved in. Otherwise, the programmer, or the company they work for, becomes liable.
Liability is much clearer with driverless cars.
Once self-driving cars become the norm, I can see pedestrians walking much less defensively, assuming the cars will slow. This will force the cars to be even more defensive. The end state could be pedestrians taking over the roads and traffic being paralyzed.
The fix would be for self-driving cars to send in jaywalking reports (complete with video and zoomed-in facial images).
In our never-ending quest for safety, we’ll be breeding a populace of dumb, complacent, compliant sheep. I’m afraid we may already be there. A majority of college students now think that the need to feel “safe” takes precedence over freedom of speech. Dawkins took a lot of heat the other day for tweeting something like, “People who need ‘safe spaces’ in a university should stay home, suck their thumbs, and wait until they’re prepared to attend an actual university.”
I am waiting for the software to be able to distinguish between the “wacky arms flailing inflatable tube guy” things that businesses put at the entrance to draw attention–you know, these things:
http://www.sayok-inflatables.com/photo/pl3180739-blower_950w_air_dancers_inflatable_tube_man_with_led_light_h3m_h8m.jpg
…and a running pedestrian about to step into the street. And the software will have to be able to distinguish with high accuracy, and be written to err on the side of safety.
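“Err on the side of safety” has a concrete meaning for a classifier: unless it is nearly certain the moving object is just the inflatable decoration, it should treat it as a pedestrian. A sketch with the labels, scores, and threshold made up for illustration:

```python
# Sketch of asymmetric classification: misreading a tube man as a pedestrian
# costs a needless slowdown; misreading a pedestrian as a tube man could cost
# a life. Labels, scores, and the threshold are invented for illustration.

TUBE_MAN_CONFIDENCE_REQUIRED = 0.995   # very high bar to dismiss the detection

def treat_as_pedestrian(scores: dict[str, float]) -> bool:
    """scores: classifier confidences, e.g. {"pedestrian": 0.3, "inflatable": 0.7}."""
    inflatable_score = scores.get("inflatable", 0.0)
    # Only ignore the object if we are nearly certain it is decoration.
    return inflatable_score < TUBE_MAN_CONFIDENCE_REQUIRED

# 70% sure it's the tube man? Not good enough -- slow down for it anyway.
print(treat_as_pedestrian({"pedestrian": 0.3, "inflatable": 0.7}))   # True
```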
I think we may be waiting a while.