Should your self-driving car kill you to save others?

Discussion in 'Intelligence & Machines' started by Plazma Inferno!, Jun 24, 2016.

  1. Jeeves Valued Senior Member

    Messages:
    5,089
    This selfish driver is flesh and blood. You want to give him the choice of setting his car's ethical guidelines. That would make him relevant - and, I maintain, predictable.
    Now, they are made of straw. They don't get to decide on the pre-setting when the car is purchased. They don't get a say in what happens. It's possible that a father who normally drives in a heedless and inconsiderate manner would choose a setting to save his own DNA, in all its little vessels and containers, rather than some anonymous DNA. But it would still be a selfish choice.
    No, I'm one of the 38 drivers in Ontario who respect speed limits and stop signs, even at night, even on a deserted road, and I stop to let turtles cross. Impatient drivers would hate me, except that I make it as easy as possible for them to pass whenever it's safe.
    I have no frickin' idea what I would do in that hypothetical situation. Neither do you. When there is no time to think it over, we act on reflex, and it's neither controllable nor predictable.
    But, because I'm a careful driver, it doesn't happen.
     
  2. Sarkus Hippomonstrosesquippedalophobe Valued Senior Member

    Messages:
    10,355
    I didn't say that they set them but that they undergo simulations to calibrate... simulations of the very incidents in question here.
    It's also possible that he sets it differently, or that a normally considerate driver sets it to protect themselves as well.
    I agree: hence the simulation of such events to be able to calibrate the car to the individual.
    First, just because it hasn't happened does not mean that it couldn't. And second, whether you think it likely or not, possible or not, does that mean you can't discuss a hypothetical situation???
     
  3. Jeeves Valued Senior Member

    Messages:
    5,089
    I just don't get what you mean by "calibration" of the car's ethical subroutines to a specific driver. If it doesn't mean matching his temperament and obeying his preferences, then what does it mean?
    And, no, I don't believe anyone would choose the opposite of how he normally behaves.
    It could, just barely. But since they can't prepare a computer for every possible eventuality, why start with the ones that could, just barely, happen, instead of ones that are likely to happen?
    Isn't that what I've been doing???
     
  4. iceaura Valued Senior Member

    Messages:
    30,994
    The first murder mystery in which someone causes an autodriven vehicle to kill its occupants, untraceably, is now being written.
     
    sideshowbob likes this.
  5. Jeeves Valued Senior Member

    Messages:
    5,089
    If you wanted to get away with murder, rule #1 would be: don't call attention to it.
    Any accident involving an autopilot gets so much scrutiny - and the investigators put so much emphasis on finding fault with the computer - that it would be a very stupid way to stage a murder.
    Take the one in the news now, for example: http://electrek.co/2016/07/01/truck-driver-fatal-tesla-autopilot-crash-watching-movie/ Maybe the autopilot isn't ready to solo. Maybe the driver of the Tesla was inattentive. But at highway speeds, a truck turning into your path is hard to avoid; by the time you braked, it would be too late.
     
  6. DaveC426913 Valued Senior Member

    Messages:
    18,935
    Too late.

    [Image: YOU ARE EXPERIENCING A CAR ACCIDENT.]
     
  7. sideshowbob Sorry, wrong number. Valued Senior Member

    Messages:
    7,057
    That's the whole point of the topic. What "should" the car choose to do? Ultimately, the programmer decides how the car decides. He/she can instruct the car to choose by body count, as the OP suggests (of course, it could only be an estimated body count). Or he/she can instruct the car to preserve itself and its contents at all costs. Or, as I suggest, he/she can provide a switch that allows the car owner to choose between those and possibly other options.
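
    As a rough illustration only - every name and number below is invented, and a real planner would work with probability distributions rather than neat casualty counts - the kind of owner-selectable switch I mean might reduce to something like this Python sketch:

    Code:
    from enum import Enum

    class CollisionPolicy(Enum):
        MINIMIZE_BODY_COUNT = "minimize_body_count"  # the OP's rule
        PROTECT_OCCUPANTS = "protect_occupants"      # save the car's contents at all costs

    def choose_maneuver(maneuvers, policy):
        """Pick from a list of options carrying estimated (not certain) casualties."""
        if policy is CollisionPolicy.PROTECT_OCCUPANTS:
            # Rank by harm to occupants first, total harm second.
            key = lambda m: (m["occupant_casualties_est"], m["total_casualties_est"])
        else:
            # Default: rank by estimated total body count.
            key = lambda m: m["total_casualties_est"]
        return min(maneuvers, key=key)

    maneuvers = [
        {"name": "swerve into barrier", "total_casualties_est": 1.0, "occupant_casualties_est": 1.0},
        {"name": "continue into crosswalk", "total_casualties_est": 2.5, "occupant_casualties_est": 0.0},
    ]
    print(choose_maneuver(maneuvers, CollisionPolicy.MINIMIZE_BODY_COUNT)["name"])
    # -> "swerve into barrier"; with PROTECT_OCCUPANTS it would pick the crosswalk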
     
  8. Jeeves Valued Senior Member

    Messages:
    5,089
    Or, the programmers can stay the hell out of metaphysics and make the safest, most efficient machine they can.
     
  9. DaveC426913 Valued Senior Member

    Messages:
    18,935
    The programmers design the system as directed by the people responsible for it, who work closely with the proper transportation safety bodies. I'm not nitpicking; a plethora of people and organizations drive this kind of thing, and within limits it is transparent to a multitude of people - including many who are highly suspicious and critical of it. It's not some guy in a back room with a keyboard and a two-four of Red Bull.

    The image you are building of how mission-critical software gets built is overly simplistic. You are tilting at windmills here.
     
  10. billvon Valued Senior Member

    Messages:
    21,635
    The programmer does none of these things.

    Ask yourself whether the elevator in your building will choose between killing you (by perhaps jamming on the brakes too hard and breaking your neck) or killing the maintenance worker in the pit (by not stopping the elevator in time). Which will it choose? How did the programmer make that choice? Does the elevator's control mechanism consider whether you or the elevator repairman is a surgeon? Does it consult your respective resumes before making that decision? Or perhaps it heeds a switch in the elevator you can set that says "save me" or "save the maintenance guy"?

    Answer to all the above - no. It is a simple controller, and follows simple rules that do not require Asimovian decisions.
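
    A minimal sketch of the kind of fixed-rule controller I mean - every threshold and signal name here is invented, not a real elevator spec:

    Code:
    MAX_SAFE_DECEL = 3.0  # m/s^2 - hypothetical comfort/safety limit

    def elevator_command(speed_mps, distance_to_stop_m, pit_occupied):
        """Fixed rules only: no resumes, no surgeons, no moral switches."""
        if pit_occupied:
            return "hold"  # hard interlock: never move while the pit is occupied
        if distance_to_stop_m <= 0:
            return "emergency_brake"
        # v^2 = 2*a*d: deceleration needed to stop in the remaining distance
        required_decel = speed_mps ** 2 / (2 * distance_to_stop_m)
        if required_decel > MAX_SAFE_DECEL:
            return "emergency_brake"  # triggered by physics, not by who is aboard
        return "normal_stop"

    print(elevator_command(speed_mps=2.0, distance_to_stop_m=3.0, pit_occupied=False))
    # -> "normal_stop" (needs ~0.67 m/s^2, well under the limit)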
     
  11. sideshowbob Sorry, wrong number. Valued Senior Member

    Messages:
    7,057
    That's a bogus example. The elevator has, as you say, a simple controller, not an AI. A robo-car has many more degrees of freedom than an elevator.
     
  12. Sarkus Hippomonstrosesquippedalophobe Valued Senior Member

    Messages:
    10,355
    To assert that the question is of no practical consequence is to effectively close the discussion, not partake in it. To me that's not discussing the scenario in question but trying to kill it off.
     
  13. Jeeves Valued Senior Member

    Messages:
    5,089
    There might be intelligent questions to ask on this topic that would be worth more discussion. A sophomoric artificial double-bind scenario is very, very low on the list of priorities for real programmers to consider in their real work. I thought that a relevant comment.
    Weighing the relative value of dentists against kindergarten teachers, much less so.
    Though I might add that the kindergarten teacher is far more likely to be in the crosswalk and the dentist is far more likely to be in the autopiloted car, so you could factor that into the design.
     
    Last edited: Jul 3, 2016
  14. billvon Valued Senior Member

    Messages:
    21,635
    But those degrees of freedom do not translate to more issues of morality.

    But assume for the sake of argument that they do. An aircraft's autopilot has even MORE degrees of freedom than a car's autopilot. Who does the autopilot decide to sacrifice? Does it read the manifest and figure out who can be killed and who should survive? Of course not - it just flies the plane.
     
  15. sideshowbob Sorry, wrong number. Valued Senior Member

    Messages:
    7,057
    Another bogus example. An aircraft doesn't have the ability to save the people on board like a car does. It could hypothetically decide to crash in an open field instead of hitting a hospital but in general it doesn't have the same opportunities to make moral decisions that a car does.

    It could. Why do you assume it never would?
     
  16. DaveC426913 Valued Senior Member

    Messages:
    18,935
    The premise of this thread is that an AI is complex enough and fast enough to handle the task of choosing one fatality over another, if it finds itself in a frying-pan/fire scenario (i.e. where it cannot avoid a fatality, but can choose between them).
    This will be a rare, but not unheard of, scenario.

    On top of that, we have supposed that, due to the ubiquity of information, it can - at least, in principle - have access to personal details about the victims. This is pretty preposterous, considering the time limit involved, but we waive this for the sake of argument.

    I'm not sure how SSB thinks we would or could program these Life Values (even an AI can't tell that "PhD Brain Surgery" and "emergency operation on an orphan boy in 10 minutes" means "save this guy"), nor am I sure why he thinks it would be human nature - or good company policy - to do so.
     
  17. billvon Valued Senior Member

    Messages:
    21,635
    Exactly. So what does the airplane choose? After all, we have tens of thousands of aircraft in the air at any one time, flying with far more automation (and far more intelligence) than a self-driving car possesses.
     
  18. DaveC426913 Valued Senior Member

    Messages:
    18,935
    It doesn't have the capacity to choose; an AI does.
     
    sideshowbob likes this.
  19. sideshowbob Sorry, wrong number. Valued Senior Member

    Messages:
    7,057
    The simplest example would just be a user input, a checkbox for "save occupant(s) at all costs". James Bond's Aston Martin would have it grayed out so that it couldn't be unchecked, though that could be overridden by editing the Registry. For company cars there might be an Advanced button so that system administrators could choose who needed to be saved and who was expendable (in case of adverse publicity).

    It's human nature to do it "because we can" - which is why we are offered vehicles with really stupid options like built-in wifi. It's company policy to offer anything that will make them a buck.
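
    Purely as a sketch of the user-facing side - the names are invented, and no real vendor exposes such a setting - the checkbox and its grayed-out state could be as simple as:

    Code:
    class CollisionSettings:
        """Hypothetical owner-facing panel for the checkbox described above."""

        def __init__(self, save_occupants_at_all_costs=False, locked=False):
            self.save_occupants_at_all_costs = save_occupants_at_all_costs
            self.locked = locked  # the "grayed out" state

        def set_save_occupants(self, value, admin=False):
            # Fleet administrators (the "Advanced" button) can override the lock.
            if self.locked and not admin:
                raise PermissionError("setting is locked by fleet policy")
            self.save_occupants_at_all_costs = value

    # The Aston Martin trim: ticked and grayed out at the factory.
    bond_car = CollisionSettings(save_occupants_at_all_costs=True, locked=True)
    try:
        bond_car.set_save_occupants(False)          # the owner can't untick it...
    except PermissionError as err:
        print(err)
    bond_car.set_save_occupants(False, admin=True)  # ...but the Registry edit can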
     
  20. billvon Valued Senior Member

    Messages:
    21,635
    Aircraft autopilots indeed do "have the capacity to choose." They make decisions all the time. During an ILS landing, if the localizer is lost, do they use GPS or IMU data instead? Often they will, and will continue the landing. Sometimes they won't, and they will abort the landing, based on a very complex set of circumstances, risks and other judgments. During a sudden loss of pressure, if there is no pilot input, should they initiate an emergency descent? Again, sometimes. What if pilot input comes a little later, once the descent has begun? Do they assume that the pilot is incapacitated and acting incorrectly, or do they give control back to him? Should they put the aircraft beneath them at risk by descending rapidly, or sacrifice the passengers in the autopilot's aircraft to avoid risking others?

    Such decisions are far more complex than those a typical autonomous road vehicle makes, and the autopilot that makes them is a far more capable AI than an autonomous road vehicle's. To many people, autonomous road vehicles SEEM smarter because driving is a task they understand, and thus they can comprehend the complexities of that job better than they can the complexities of navigating an aircraft.
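
    To make the lost-localizer example concrete - the thresholds and signal names are invented, and real autopilot logic is certified and far more elaborate - the decision ladder is something like:

    Code:
    def approach_decision(localizer_ok, gps_ok, imu_drift_m, altitude_agl_ft):
        """Continue, abort, or hand back control during an automated approach."""
        if localizer_ok:
            return "continue: track the localizer"
        # Localizer lost: fall back to other nav sources if they are trustworthy.
        if gps_ok and imu_drift_m < 5.0:
            return "continue: blend GPS/IMU guidance"
        # No trustworthy guidance close to the ground: go around.
        if altitude_agl_ft < 1000:
            return "abort: initiate go-around"
        return "hold: request pilot input"

    print(approach_decision(localizer_ok=False, gps_ok=True,
                            imu_drift_m=2.0, altitude_agl_ft=800))
    # -> "continue: blend GPS/IMU guidance" - nothing here reads a manifest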
     
