Should your self-driving car kill you to save others?

Discussion in 'Intelligence & Machines' started by Plazma Inferno!, Jun 24, 2016.

  1. DaveC426913 Valued Senior Member

    Messages:
    18,935
    I think the premises of the thread are that, given time and progress, it is inevitable that

    1] there must be - will be - edge-case accidents in which someone's death ends up being unavoidable, and
    2] AI will have enough processing power and speed that it will, conceivably, have sufficient time and control to make a choice.

    Sure, it's an edge case, but this is a science forum, where edge cases (such as relativistic speeds and black holes) are routinely discussed.

    But more than an edge case, it's got to happen (again, given enough time and progress).
     
  2. Speakpigeon Valued Senior Member

    Messages:
    1,123
    Precisely... Why not?
    I can even suggest the AI might be able not only to count how many people would die in the different alternative scenarios of the unfolding accident, but also to estimate how many years of potential life would be lost in each scenario, depending on the age of each person involved. The car's AI might also be able to identify particular people as having greater value - say, a president/dictator/foreign diplomat, a renowned and beloved scientist/artist/activist, etc. Maybe each car will have its own theory about how to go about it, perhaps with some input from the passengers, from the owner of the car, or from local legislation and case law.
    I don't see that there's any logical problem. We may well decide in the end never to do it, but that's an entirely different question.
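    Purely as a sketch - every number, weight and field below is invented, and nothing here implies a real car could actually obtain such data - the kind of scoring I have in mind might look like this:

    # Hypothetical scoring of accident scenarios; all values are made up.
    LIFE_EXPECTANCY = 80  # assumed average lifespan, in years

    def years_lost(age):
        # Potential years of life lost if a person of this age dies.
        return max(LIFE_EXPECTANCY - age, 0)

    def scenario_cost(victims):
        # victims: list of dicts like {"age": 34, "value_weight": 1.0};
        # value_weight > 1.0 marks a person deemed "more valuable".
        return sum(years_lost(v["age"]) * v.get("value_weight", 1.0)
                   for v in victims)

    def choose_scenario(scenarios):
        # scenarios maps a name like "swerve left" to its list of victims;
        # the AI would pick the outcome with the lowest total cost.
        return min(scenarios, key=lambda name: scenario_cost(scenarios[name]))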
    EB
     
  3. Speakpigeon Valued Senior Member

    Messages:
    1,123
    Not my scenario, no. Not my post either.
    That's irrelevant.
    The question is whether there's any logical reason the car's AI shouldn't be authorised to choose the victims so as to minimise the number of casualties and the severity of the accident.
    EB
     
  4. DaveC426913 Valued Senior Member

    Messages:
    18,935
    This goes beyond any plausible reach of what an AI driving a car could do.

    It would understand the physics, but there is no conceivable reason, outside wild science fiction, that an AI driving a car would, could or should know personally identifiable details about a person.
     
  5. billvon Valued Senior Member

    Messages:
    21,635
    That's ridiculous.

    Get the top ten trauma surgeons and the top ten accident reconstruction experts in a room. Give them a person to examine, then tell them "OK, this guy is about to be struck by a 2016 Ford Focus. What will his injuries be at 20, 25 and 30 mph?" They will not be able to tell you other than in _very_ broad generalities. In fact, their conclusions will be along the lines of "we don't really know, but slower is better, and the best is if you don't hit him at all."

    A car is not going to do a better job.

    There is a tendency to believe in the sci-fi image of future computers as omniscient, able to make moral decisions and to predict with great certainty how an uncertain event will unfold. They won't be. They will be like the computers we have now, just faster.
     
  6. gmilam Valued Senior Member

    Messages:
    3,522
    We have a saying in the programming world: bug-free, on time, and within budget - pick two.

    In my experience, corporations usually opt for within budget (top priority) and on time.
     
  7. Speakpigeon Valued Senior Member

    Messages:
    1,123
    That's still irrelevant. The question is not why it should; the question is, assuming it could, why it should not.
    EB
     
  8. Speakpigeon Valued Senior Member

    Messages:
    1,123
    That's still irrelevant. The question is not why it should; the question is, assuming it could, why it should not.
    I didn't assume "omniscience" as you suggest. That's ridiculous of you. Please don't make things up.
    I didn't assume "moral decisions" as you suggest. That's ridiculous of you. Please don't make things up.
    I'm assuming AI will be able to assess the situation much more effectively than humans could. Assuming this, why not let it decide on the scenario least costly in lives?
    EB
     
  9. gmilam Valued Senior Member

    Messages:
    3,522
    Bullshit.
    Particular people as having greater value?
     
  10. TheFrogger Banned Valued Senior Member

    Messages:
    2,175
    A self-driving car drives all the time. Night or day, it drives. Passengers go further, for longer. They are able to traverse the Earth, but the car is without a home. It is a gypsy, a caravan...it IS a home.
     
    Last edited: Jul 3, 2018
  11. billvon Valued Senior Member

    Messages:
    21,635
    There is no reason it should or should not.

    Let's take another example - an airliner landing during a CAT IIIc approach (zero-zero). It has been holding a long time and has minimum fuel. It flares and the wheels touch the ground. Suddenly a small business jet on the same runway turns on its transponder. The TCAS in the landing airliner receives the transponder echo and issues an "increase climb" resolution advisory (RA) that the autopilot is aware of.

    What happens? Does the autopilot think "hmm, if I hit that small jet then the people in it will die, but if I take off again then everyone on board might die when we flame out?" Does it look up whether there is a Nobel Prize laureate on board the business jet, and compare that to the value of the people on board the aircraft?

    Nope. It does its best to stop the aircraft. Why? Because
    1) that is an incredibly contrived example that will never happen.
    2) a system that does a good job with a straightforward task is, in general, safer and more reliable than a system that tries to do a much more complex task with the same resources.
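    As a toy sketch of that design philosophy (an invented rule, not real autoland or TCAS logic - real systems inhibit RAs near the ground, but the point is that the rule is fixed and simple):

    # Toy illustration of "do the simple thing" - no value computation anywhere.
    def rollout_action(weight_on_wheels, ra_active):
        # Decide what the autopilot does once committed to the landing.
        if weight_on_wheels:
            # Committed: stop the aircraft. No enumeration of who is aboard
            # which aircraft, no weighing of lives - just brake.
            return "brake and stop"
        if ra_active:
            return "follow the resolution advisory"
        return "continue approach"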
    Good! So we agree that any such vehicle will be operating with a limited set of data, and will make its decisions based on that limited set of data, rather than any moral, social or medical parameters.
    Because the trolley problem is a mental exercise that never happens in real life. It would be more useful to program the car to react well to a meteor strike (and that wouldn't be all that useful either.)
     
  12. DaveC426913 Valued Senior Member

    Messages:
    18,935
    I included "could" in my list of "no reason why".

    An AI can only work with the data it's given, and can only solve problems it has algorithms for.

    There is no conceivable reason why - or how - a self-driving car's AI would have access to data about the people within its sensor range other than their physical properties such as mass, speed and direction.
    Nor is there any conceivable reason why - or how - a self-driving car's AI would have the capability to make decisions based on such data, even if it had it.
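    To make that concrete, the object list a perception system actually produces looks roughly like this (field names invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        object_class: str    # e.g. "pedestrian", "vehicle", "cyclist"
        position_m: tuple    # relative (x, y) position in metres
        velocity_mps: tuple  # estimated velocity vector, metres per second
        size_m: tuple        # bounding-box dimensions
        confidence: float    # classification confidence, 0 to 1
        # Note what is absent: no name, no age, no occupation, no "social value".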

    Unless you're talking sci-fi. If you are, just say so.
     
  13. Speakpigeon Valued Senior Member

    Messages:
    1,123
    Why?
    EB
     
    Last edited: Jul 5, 2018
  14. Speakpigeon Valued Senior Member

    Messages:
    1,123
    I'm not talking sci-fi. I'm talking about what seems rationally conceivable given what we know today.
    EB
     
  15. Speakpigeon Valued Senior Member

    Messages:
    1,123
    Good, so you agree there's no reason that it should not.
    I'm not sure why it was so difficult to get a straight response to that.
    No.
    More to the point, you had absolutely zero reason to assume as you did that I somehow didn't agree with that. Feel free to apologise in your own time.
    Good. You do as you please, I won't stop you.
    I asked a sensible question in the context of the OP, and I suggested the AI might be able not only to count how many people would die in the different alternative scenarios of the unfolding accident, but also to estimate how many years of potential life would be lost in each scenario, depending on the age of each person involved. The car's AI might also be able to identify particular people as having greater value - say, a president/dictator/foreign diplomat, a renowned and beloved scientist/artist/activist, etc. Maybe each car will have its own theory about how to go about it, perhaps with some input from the passengers, from the owner of the car, or from local legislation and case law.
    I don't see that there's any logical problem. We may well decide in the end never to do it, but that's an entirely different question.
    EB
     
  16. TheFrogger Banned Valued Senior Member

    Messages:
    2,175
    How can A.I. know how many years of life are remaining?
     
  17. billvon Valued Senior Member

    Messages:
    21,635
    OK. The answer to that question is no, as explained above.
    No, it will not.
    Agreed, there is no logical problem here. Autonomous cars will never do that.
     
  18. gmilam Valued Senior Member

    Messages:
    3,522
    I showed you why. You said "particular people" may be more valuable than other "particular people". Sounds very Nazi-like to me.

    Personally, I would refuse to write such code. However, I would write code to try and prevent the accident from happening.
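    For instance (a deliberately simple sketch with made-up thresholds - the kind of plain, testable avoidance rule I mean):

    # Brake when projected time-to-collision drops below a threshold.
    # Thresholds are invented for illustration; real systems are far more
    # careful, but the rule stays this simple and this testable.
    def time_to_collision_s(distance_m, closing_speed_mps):
        # Seconds until impact if nothing changes; infinite if not closing.
        if closing_speed_mps <= 0.0:
            return float("inf")
        return distance_m / closing_speed_mps

    def should_emergency_brake(distance_m, closing_speed_mps, threshold_s=2.0):
        # Brake hard if projected impact is nearer than the threshold.
        return time_to_collision_s(distance_m, closing_speed_mps) < threshold_s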
     
  19. billvon Valued Senior Member

    Messages:
    21,635
    Which is what every developer will do. Not just as a moral and legal requirement, but for simple economic reasons - software that prevents accidents will be vastly more profitable than software that does other things (like targeting certain people).
     
  20. iceaura Valued Senior Member

    Messages:
    30,994
    No one will be surprised if it becomes illegal to run AI that does not protect certain people above others - it's easy to anticipate it being justified as protecting police, fire, and ambulance vehicles, for starters; then people in work zones, children in school zones, etc.
     
