Should your self-driving car kill you to save others?

Discussion in 'Intelligence & Machines' started by Plazma Inferno!, Jun 24, 2016.

  1. DaveC426913 Valued Senior Member

    Messages:
    18,935
    This is entirely true, and cannot be overstated. It covers virtually 100% of real-world accidents.

    We are talking about edge-cases, where there happens to be no safe route that doesn't endanger someone.
     
  3. DaveC426913 Valued Senior Member

    Messages:
    18,935
    That being said...

    Stopping is always the best thing to do. Even if you can't avoid a collision, the kinetic energy involved drops with the square of the speed. However little time there is to slow down, there will be a dramatic drop in damage.
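
    That square-law relationship is easy to sanity-check: kinetic energy is E = ½mv², so halving your speed before impact quarters the collision energy. A quick illustration (the mass and speeds are arbitrary):

```python
def kinetic_energy(mass_kg, speed_ms):
    """Kinetic energy in joules: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_ms ** 2

# A 1500 kg car braking from 20 m/s (~72 km/h) down to 10 m/s
# before impact sheds 75% of the collision energy.
full = kinetic_energy(1500, 20.0)   # 300,000 J
half = kinetic_energy(1500, 10.0)   #  75,000 J
print(half / full)                  # 0.25
```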

    And stopping is a really simple operation. None of this swerving off the road stuff.

    The thing about AI is that it is designed to anticipate dangers while still having enough time to react. They won't be blasting through yellow lights at full-speed, and they won't be barreling down country roads mindless of cross-traffic. They don't have a problem with attention span or tunnel-vision or lack of peripheral attention. They can watch for all threats simultaneously well in advance of danger.

    Something humans are really bad at.
     
  5. Speakpigeon Valued Senior Member

    Messages:
    1,123
    And so, the one they had was involved in a crash, you say?! And the FSB fixed a kompromat video to force the driver to work for them? Let me guess... You, perhaps?
    EB
     
  7. gamelord Registered Senior Member

    Messages:
    673
    Incorrect. Imagine the NASCAR scenario.

    Is stopping always a good thing?

    Absolutely not.

    Braking shifts weight to the front and can induce a fishtail effect. Braking, even ABS braking, also drastically reduces the car's steerability. But the most important component is the collision normal: you want to hit at an angle as parallel as possible to the opposing body.

    Second, relative velocity. If two cars are travelling in the same direction at 100 mph, it is best to match relative velocities; in this case, braking simply increases the difference in velocities of the two bodies, causing more damage.
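
    The relative-velocity point can be sketched with a toy perfectly-inelastic collision model. In a same-direction collision, the energy available to do damage depends on the closing speed, not the absolute speeds (the masses and speeds below are invented for illustration):

```python
def collision_energy(m1, m2, v1, v2):
    """Energy dissipated in a perfectly inelastic collision:
    E = 1/2 * mu * (v1 - v2)^2, where mu = m1*m2/(m1+m2) is the
    reduced mass of the two bodies."""
    mu = m1 * m2 / (m1 + m2)
    return 0.5 * mu * (v1 - v2) ** 2

# Two 1500 kg cars both travelling at 45 m/s (~100 mph): with zero
# closing speed, no energy goes into the crash at all.
print(collision_energy(1500, 1500, 45.0, 45.0))  # 0.0

# Brake one car to 40 m/s and the 5 m/s difference now does damage.
print(collision_energy(1500, 1500, 45.0, 40.0))  # 9375.0
```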

    Thirdly, steering into a collision can often save your life, because trying to avoid the collision completely may result in understeer and further collisions, or the back end coming loose and hitting it anyway. Race drivers will often "grind" a wall on purpose rather than becoming destabilized and getting into a severe wreck.
     
  8. Gawdzilla Sama Valued Senior Member

    Messages:
    3,864
    Last time I was in Moscow I didn't do the driving.
     
  9. DaveC426913 Valued Senior Member

    Messages:
    18,935
    Or better yet, imagine a rocketship scenario.

    Oh wait, don't. Because it is not what we're talking about here.
     
  10. billvon Valued Senior Member

    Messages:
    21,635
    Nor is stopping a good thing if you are riding a unicycle, because you will fall over.

    But since we're not talking about unicycles or NASCAR races, it is a much better option than hitting someone.

    No one suggested braking to cause a collision - only braking (and stopping) to avoid a collision. Note the important difference.

    Only for incompetent drivers.
     
  11. iceaura Valued Senior Member

    Messages:
    30,994
    Or roll the car, etc. Priorities will be written into the code.
    Where I live, there is winter. Also, freeways, school zones, ambulances on roads with no shoulders, and animals of significant size and remarkable agility occasionally in evidence on the road.

    Attempting to stop is not always the best thing to do. Even remaining stopped is not always the best option.
    My personal rule of thumb is that it takes three mistakes or the equivalent in bad luck to crash. So by avoiding the common mistakes you will seldom even have a near miss, and your bad judgment under pressure will probably never matter.
    So AI can drive safely in its designed circumstances.
    What worries me is not the AI itself (it will not be used where it does not work well), but the adjusting of circumstances it will instigate.
     
    gamelord likes this.
  12. Speakpigeon Valued Senior Member

    Messages:
    1,123
    Well, that's not what the video seems to show!

    Still, I'm sure you could sort things out in front of a Russian judge. I hope your Russian is very good, though.
    EB
     
  13. Gawdzilla Sama Valued Senior Member

    Messages:
    3,864
    I can talk about his mama in his native tongue. That will have to do.
     
  14. gamelord Registered Senior Member

    Messages:
    673
    Again, some of what I said went over your head.

    Your first fallacy is the bit about "hitting someone".

    But what about the bit you failed to mention, about "hitting something"? You know there are more dangers than just other humans on the road.

    Second, I doubt you know anything about the Pajecka formula, weight shifting, or why grinding a wall (or another car, for that matter) is often more beneficial than trying to avoid the collision completely.

    Your ideas will work in 25-mile-per-hour land, but will not always guarantee safety in the realm of 60-80 mph highways, and they are most certainly irrelevant in autobahn realms.
     
  15. gamelord Registered Senior Member

    Messages:
    673
    I vote IQ. It's an easy way to quantify prioritized lists, plus it's good for the gene pool. Second, people with high IQ theoretically tend to be better drivers, so they are most likely owed their dues for their good, safe driving records.


    Now my main complaint about AI cars is lag. I know they run on Nvidia GPUs. But I have had some Nvidia GPUs, and they have occasional stutter. What if the GPU freezes for 2 seconds? And it freezes because it overheats (sounds like an oxymoron to the layman, I know). Overheats because of the hot summer, of course. Game devs can't even make decent AI enemies in games, yet we are expected to trust our lives to AI... that is the essence of humor...
     
  16. Lizard Registered Member

    Messages:
    32
    Will self-driven cars still have horns? I would have thought so, to warn pedestrians.
     
  17. billvon Valued Senior Member

    Messages:
    21,635
    Yes, there are. And again, the AI will do its best to avoid them. It will sometimes fail, just as human drivers do.
    I have a feeling you are just looking up stuff in Wikipedia to have something to argue about.

    First off, Pacejka's formulas apply to the design of tires, and are often used in driving simulators to model the effects of friction. They have zero to do with collision avoidance. Once you know the G-ratings of a car/tire combination, that gives you the basic information about cornering ability and the maximum braking effort possible. From there you can determine whether a car will experience oversteer or understeer and how it reacts on different surfaces. All of that is then made available to the AI. No need for "Pacejka's formula" - for either drivers or AIs.
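
    For the curious, the simplified Pacejka "Magic Formula" is just an empirical curve of normalized tire force versus slip. A minimal sketch (the coefficients B, C, D, E below are illustrative defaults, not values fit for any real tire):

```python
import math

def pacejka_magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Simplified Pacejka Magic Formula: normalized tire force as a
    function of slip. B = stiffness, C = shape, D = peak, E = curvature."""
    Bx = B * slip
    return D * math.sin(C * math.atan(Bx - E * (Bx - math.atan(Bx))))

# Grip rises steeply with slip, peaks (around slip ~0.18 with these
# coefficients), then falls off -- which is why locking the wheels
# costs both braking force and steering authority.
print(pacejka_magic_formula(0.0))    # 0.0 -- no slip, no force
print(pacejka_magic_formula(0.18))   # near the peak
print(pacejka_magic_formula(10.0))   # well past the peak, reduced grip
```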

    Second, "grinding a wall" is something that is independent of "deciding who to kill." They have nothing to do with each other.

    Third, if you think that "weight shifting" is often more beneficial than trying to avoid the collision, I very much hope you don't drive.

    Finally, if you are looking up stuff from Wikipedia to try to sound intelligent, it helps if you spell the guy's name correctly.
    I am not presenting ideas for "guaranteed safety." That's impossible. You can't even keep up with the topics that you are discussing.
     
  18. sideshowbob Sorry, wrong number. Valued Senior Member

    Messages:
    7,057
    They'll just send a message to the pedestrian's smartphone. It's more likely to be noticed.
     
  19. iceaura Valued Senior Member

    Messages:
    30,994
    The question becomes: at what comparative level of calculated probability, if any, will an AI choose to risk the driver instead of the public?
    And since that kind of decision (however it is settled) will be rare, and will be made rare by design, my concern is this: how much alteration of the landscape and circumstances of human life will be necessary to accomplish that?

    We are already at the point where I am reading in the newspaper quotes from AI promoters about the changes in the roadways and traffic handling setups this great advance - the driverless car - will require. The planned funding seems to be - as near as I can tell - diversion from "mass transit".
     
  20. billvon Valued Senior Member

    Messages:
    21,635
    Never. Like I said, normal human drivers* don't decide "hmm, I would rather hit that pedestrian than risk injury to myself!" They try to avoid the accident. So will AIs. At no point will the AI decide "hey, the driver is more valuable than the pedestrian, so I won't stop short and risk being rear-ended" or something similar.

    Yes, you can contrive all sorts of situations where you might have to make a moral choice. "Do I make a hard right and hit the one child on the sidewalk, or continue straight and hit the five fleeing criminals?" Such 'trolley problems' do not come up in real life.

    (*- criminals, psychopaths and the inept excepted, of course)
     
  21. iceaura Valued Senior Member

    Messages:
    30,994
    Human drivers have built-in hierarchies, priorities, that they use to make split-second moral decisions - including the risks of self-sacrifice. The question asked is about the analogous factors that will be built into an AI.
    Of course it will. It will have to have that capability, in order to choose between different probabilities of mishap.

    For example, if the pedestrian is just stepping off of the shoulder of a freeway ramp, and a semi is a bit too close behind the AI on the curve, the AI will have to make (or have built into its priorities) a calculation involving the probabilities of bad consequences regardless of its behavior. And that calculation will have to include the severity of the different bad consequences - if the pedestrian is a deer, a dog, a human, a human who appears competent and alert, a human about to jump back, the calculation will change.
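
    That probability-times-severity calculation amounts to picking the maneuver with the lowest expected harm. A toy sketch (every probability and severity number here is invented purely for illustration; a real planner would estimate them from perception and vehicle dynamics):

```python
# Candidate maneuvers mapped to possible bad outcomes,
# each as a (probability, severity cost) pair. All invented.
maneuvers = {
    "brake_hard":  [(0.30, 40.0),    # rear-ended by the semi
                    (0.05, 100.0)],  # still strike the pedestrian
    "hold_course": [(0.60, 100.0)],  # strike the pedestrian
    "swerve_off":  [(0.25, 70.0)],   # risk to the driver
}

def expected_cost(outcomes):
    """Sum of probability * severity over the possible outcomes."""
    return sum(p * cost for p, cost in outcomes)

best = min(maneuvers, key=lambda m: expected_cost(maneuvers[m]))
print(best)  # 'brake_hard' with these numbers (expected cost 17.0)
```

    Note that the severity numbers encode exactly the comparative valuations being debated here; the arithmetic is trivial, the priorities are not.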

    At some point, an AI will be either capable of swerving the car off the ramp entirely, with all of the risk to the driver that entails, or not. Either that, or freeway ramps will be redesigned to make the situation impossible.

    As you pointed out, we are not dealing with certainties here. The AI will have to make choices via calculated odds. And that means calculated severity of consequences. The worry here is that at some point in the ongoing expense and frustration of getting an AI to handle this stuff, in the extended frustration of interests that want, very badly, for AI to work, the focus will change to altering the landscape instead: building a world AI can handle. Cheap AI. Minimal AI.
     
    Last edited: Aug 3, 2018
  22. billvon Valued Senior Member

    Messages:
    21,635
    Those decisions are indeed based on hierarchies and priorities. People make driving decisions on a million variables - distance to the other car, perceived relative speed, lane width, expected best braking/cornering effort, and so on. One of those variables is NOT the relative value of human life.
    No, it will not. It will not decide to kill the pedestrian to save the truck - any more than you would.
     
  23. iceaura Valued Senior Member

    Messages:
    30,994
    Sure it is. People will do things to avoid hitting people, especially children (or living animals, or even dead animals), that they do not do to avoid hitting inanimate objects. It's automatic.
    You alter the situation to one of certainties.
    The question is how much of a risk of killing the pedestrian it will take to avoid a given risk of killing the driver, or vice versa.
     