Can a robot learn right from wrong?

Discussion in 'Intelligence & Machines' started by wegs, Jul 10, 2019.

  1. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    Interesting read. I'm always enthusiastic about new discoveries, but morality and ethics are often built around subjective "truths." That said, these robots are going to be used in confined spaces for specific tasks... but still. Do you think this is a good idea?

    https://www.newscientist.com/articl...rap-robot-paralysed-by-choice-of-who-to-save/

    "When we're talking about ethics, all of this is largely about robots that are developed to function in pretty prescribed spaces," says Wendell Wallach, author of Moral Machines: Teaching Robots Right from Wrong. Still, he says, experiments like Winfield's hold promise in laying the foundations on which more complex ethical behaviour can be built. "If we can get them to function well in environments when we don't know exactly all the circumstances they'll encounter, that's going to open up vast new applications for their use."
     
    Last edited: Jul 10, 2019
  3. Bowser Namaste Valued Senior Member

    Messages:
    8,828
    No using the copy machine for personal projects? Don't take office supplies home at the end of the day? No gossip about coworkers?
     
  5. DaveC426913 Valued Senior Member

    Messages:
    18,935
    I thought this was cleverly handled in the film "I, Robot".

    A car accident resulting in two cars sinking in the water, along with two victims.
    A grown man and a little girl.
    The rescue AI saved only the one with the better odds of survival.

    Now, I wouldn't want a robot to let me drown, but our hero fought the AI to make it save the girl rather than himself, to no avail.
    He had to live with that consequence the rest of his life.
     
  7. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    That's an interesting point: what exactly is meant by teaching robots... ethics? These concepts seem to come back to whether AI will ever become conscious. Is it possible for robots to learn, process, and then develop their own methods of handling moral dilemmas? Time will tell.

    Reminds me of Westworld.


     
  8. DaveC426913 Valued Senior Member

    Messages:
    18,935
    And I just realized another emergent phenomenon.
    Whatever decision the AI makes, it won't be the right one for people.
    A fireman rescuing one person from a sinking car will be forgiven for his decision. An AI rescuing one person from a sinking car will be pilloried for making the wrong choice, no matter which it makes.

    And this will always be the case as long as humans think of AIs as less than equals.
    In other words, bigotry has its roots in the perception of inequality.
     
  9. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    Not sure if humans will ever place "blame" on AI, as if AI is acting independently, but since we're such a litigious culture, we might simply sue the designers for making a "defective" product. Although I do sometimes get a little upset with Siri for not "knowing" the answers to some of my questions.


     
  10. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Messages:
    13,077
    Weighing the odds perhaps?


     
  11. DaveC426913 Valued Senior Member

    Messages:
    18,935
    No? The article in your OP talks about saving lives.

    "Allowing someone to die" will be a bigger deal than a "defective product", IMO.
     
    Last edited: Jul 10, 2019
  12. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Messages:
    13,077
    Perhaps if / when you find the answer you can teach her

    And how you found said answer

    And - since I don't know much about Siri - how to say thank you perhaps


     
    Last edited: Jul 10, 2019
    wegs likes this.
  13. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    If we can agree that AI is capable of ethical behavior, then yes...I suppose blaming them for their actions or inactions wouldn’t be out of the question.
     
  14. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    Siri is a “virtual assistant” that is part of Apple’s operating systems.


     
  15. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Messages:
    13,077
    Thanks I got that part. Just not sure I need her in my life at the moment

    My house currently is a wreck (white ants). When I start to get renovations done, I will set aside an area for the gadget stuff I have. With something like Siri I would look for a suitable life-size store dummy, sit her in the corner as if adjusting the gadgets, and bury the working bits inside the dummy.

    Freak friends out. We'll see


     
  16. RainbowSingularity Valued Senior Member

    Messages:
    7,447
    Robots have no free will.
    they have only programmed pre-event choices based on known data variables.
    there is no random process in their compliance to exist inside the parameters they are designed to follow.

    only humans and other animals have the ability to display sadism while also displaying empathy and emotional relationships.

    ethics and morals to robots are just dictatorship commands programmed in by a person who is displaying their own set of values of sado-masochism & empathy paradigms.
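    The point about "dictatorship commands" can be put concretely with a toy sketch (all names are hypothetical, not any real robotics API): the robot's "morality" is nothing more than a lookup table of responses its programmer chose in advance, and any situation the programmer never anticipated falls through to a default.

```python
# Toy illustration: a robot's "morality" as a fixed rule table
# written in advance by its programmer. All names are hypothetical.

RULES = {
    "human_in_danger": "attempt_rescue",
    "obstacle_ahead": "stop",
    "idle": "patrol",
}

def choose_action(situation: str) -> str:
    """Return the pre-programmed response; anything unforeseen falls
    back to a default, since the robot cannot invent a value of its own."""
    return RULES.get(situation, "await_instructions")

print(choose_action("human_in_danger"))  # attempt_rescue
print(choose_action("trolley_dilemma"))  # await_instructions (never anticipated)
```

    The second call is the whole point: faced with a dilemma nobody coded for, the robot has no ethics to fall back on, only whatever default its designer picked.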
     
    wegs likes this.
  17. TheFrogger Banned Valued Senior Member

    Messages:
    2,175
    Ethics is based upon mirroring. Murder is illegal because the victim cannot murder the criminal back (because they are dead). It seems to me that robots are capable of mirroring behaviour and are therefore capable of learning ethics.


     
  18. Seattle Valued Senior Member

    Messages:
    8,849
    I'm not sure what the question really is. Is it "can robots learn right from wrong", or is it "do you think this is a good idea"?

    Maybe it's both. I think AI is a long way from worrying about ethics and morality.

    If you program them to make certain decisions under certain circumstances, that is what they'll do. There will need to be test data to evaluate before setting them loose on the public. If they increase safety then it's a good idea.

    Are seat belts a good idea? If they save some lives, reduce injuries and don't create a lot of injuries, it's a good idea. Using a robot in those limited circumstances is no different.

    Why do I feel that I'm always bursting your bubble?


     
  19. TheFrogger Banned Valued Senior Member

    Messages:
    2,175
    What of euthanasia machines that release killer chemicals at a random moment? These machines are designed to kill.
     
  20. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Messages:
    13,077
    How do you think the two tragic 737 MAX accidents should be considered?

    I read it as the computer program not being entirely tested at the extremes during development, plus, worse, taking control away from the pilot, in concert with not giving pilots training on the new system.

    We will fix the problem of the shifting centre of balance, which tends to make the aircraft pitch up and cause a stall, by instructing the computer to pitch down.

    But in what seems an excessive pitch down NO MATTER WHAT. Nothing in this program had a line which, if activated, said: the pitch up is because the HUMAN is operating the controls, I WILL LET THE HUMAN FLY THE PLANE.
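    The missing branch described above can be sketched in a few lines of illustrative pseudocode. To be clear, this is NOT Boeing's actual MCAS logic; the function name, threshold, and flag are all invented for the sake of the point: a trim loop driven by one sensor reading keeps commanding nose-down, unless someone writes the branch that defers to the human on the controls.

```python
# Illustrative control-loop sketch of the failure mode described above.
# NOT the real MCAS code; names and thresholds are invented.

def trim_command(angle_of_attack: float, pilot_pulling_up: bool,
                 with_override: bool = False) -> str:
    """Decide the stabiliser trim command for one loop iteration."""
    STALL_AOA = 14.0  # hypothetical stall-onset threshold, in degrees

    if with_override and pilot_pulling_up:
        # The missing line: let the human fly the plane.
        return "pilot_control"
    if angle_of_attack > STALL_AOA:
        # With a faulty sensor stuck high, this fires every iteration,
        # commanding nose-down over and over regardless of the pilot.
        return "nose_down"
    return "hold"

# A bad sensor stuck at 25 degrees while the pilot pulls back:
print(trim_command(25.0, pilot_pulling_up=True))                      # nose_down
print(trim_command(25.0, pilot_pulling_up=True, with_override=True))  # pilot_control
```

    Without the override branch, the pilot's input never enters the decision at all, which is essentially the complaint in the post above.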


     
  21. Seattle Valued Senior Member

    Messages:
    8,849
    We don't know exactly what happened. In places where there is more ongoing training it didn't occur at the same rate. There is a problem no doubt but it's not a difference in kind from the issues that commercial pilots have to deal with all the time.

    I'm guessing, just from the wording, that the problem they talk about (stalling at high speed due to the center of gravity shifting) means that as the fuel burns, more is being used from tanks toward the front of the plane, making the tail heavier as the flight proceeds and causing the nose to pitch up (and thus stall).

    The software "fix" was to have it put the nose down to prevent a stall, but it must not have been thoroughly tested and trained on, since they say the nose kept being pushed down over and over. If it were just to compensate for fuel shifting, it would maintain steady pressure as a counterbalance to the center of gravity moving.

    I think the biggest problem was making a change that pilots weren't aware of or trained on since they considered the 737 Max to only require the same type rating as existing 737 aircraft.

    I'm also guessing that there should be a redesign so that the center of balance isn't shifting that much during the flight. Who knows. Flying is complicated these days.


     
  22. RainbowSingularity Valued Senior Member

    Messages:
    7,447
    right & wrong are moral concepts

    argumentative accountability for robots interfering with humans is a different subject, though keenly related i should imagine.
    the number of drones being flown around deliberately to interfere with another person or persons, and/or directly at a non-consenting person's expense, is quite high.

    the idea of people attempting to skew the debate to avoid discussing accountability is highly likely in bi-partisan moral ideological fundamentalism.

    those countries who produce robotics have trillions of dollars riding on some concepts of civil law which they are likely to try and influence.
    that is a completely different debate around corporate moral dictatorship of non-human entities being able to enact laws that prohibit human living experiences.

    e.g.
    is it morally right (& legally just) to allow a robot to interfere with human quality of life?
     
  23. TheFrogger Banned Valued Senior Member

    Messages:
    2,175
    A robot cannot interfere with human life.

    For example, should society decree that anyone who kills is to be killed, then when a robot kills, the programmer would die, because they are the one who told the robot to kill.
     