Do you think that AI will ever feel emotions?

Discussion in 'Intelligence & Machines' started by wegs, Sep 10, 2019.

  1. Write4U Valued Senior Member

    Messages:
    20,077
    This is absolutely fascinating: the interview series with the AI "Leta" by Dr Alan D. Thompson.

    Here is an example.
     
  3. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Messages:
    13,077
    After doing a bit more online shopping, it occurred to me that the $2 deposit into my account may also be from a purchase I made: a small test that your account actually exists before the seller commits to withdrawing the full amount.


     
  5. Write4U Valued Senior Member

    Messages:
    20,077
    Try this on for size. If you are not astounded, then you do not understand "intelligence".
     
  7. river

    Messages:
    17,307
    AI will feel what it is programmed to feel.
     
  8. James R Just this guy, you know? Staff Member

    Messages:
    39,426
    I don't think you understand what generalised artificial intelligence means.
     
  9. river

    Messages:
    17,307

    Then explain to me what "generalised artificial intelligence" means.
     
  10. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Messages:
    13,077
    Mimicry perhaps?


     
  11. Write4U Valued Senior Member

    Messages:
    20,077
    Yep, just like humans. We learn by mimicry; it is part of the brain's programming mechanics.
     
  12. river

    Messages:
    17,307
    Mimicry through experience.

    How does one step outside of mimicry?
     
  13. Write4U Valued Senior Member

    Messages:
    20,077
    By describing or illustrating the properties. It is impossible to step outside mimicry. Exposure creates memories that can be recalled to be mimicked, or compared to variations on a theme. The brain works by comparing incoming data with data from memory and making a "best guess" based on this comparison.

    This method is already observable in chickens when the mother hen teaches her chicks how to grub for food.

    Can Chicks Eat Black Soldier Fly Grubs?
    https://grubblyfarms.com/blogs/the-flyer/can-chicks-eat-grubblies

    I have watched them for hours and marvelled at the teaching methods an experienced hen uses with her brood, all the while the rooster perched high, watching for threats from the air as well as from the bushes and using specific calls to indicate the type and location of each threat.
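    The "compare incoming data against memory and make a best guess" idea above can be sketched as a toy nearest-match program. This is only an illustration of that comparison scheme, not a model of the brain; the function names, feature vectors, and similarity measure are all made up for the example.

    ```python
    # Toy sketch of "best guess by comparison": score the incoming data
    # against each stored memory and return the label of the closest match.

    def best_guess(memories, incoming):
        """Return the label of the stored pattern most similar to `incoming`.

        `memories` is a list of (pattern, label) pairs; similarity here is
        simply the negative sum of absolute feature differences.
        """
        def similarity(pattern):
            return -sum(abs(p - q) for p, q in zip(pattern, incoming))

        pattern, label = max(memories, key=lambda m: similarity(m[0]))
        return label

    # Invented "experiences": feature vectors with labels.
    memories = [
        ((0.9, 0.1), "food"),
        ((0.1, 0.9), "threat"),
    ]

    print(best_guess(memories, (0.8, 0.2)))  # → food
    ```

    The new input sits closest to the stored "food" pattern, so that label is the best guess; a novel input is still forced into the nearest remembered category, which is one way to read the claim that we cannot step outside mimicry.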
     
    Last edited: Mar 11, 2022
  14. James R Just this guy, you know? Staff Member

    Messages:
    39,426
    It means that the AI will not be "programmed" to perform a limited set of tasks. Rather, it will be a general-purpose problem-solving machine - just like you are. It will have "senses" that allow it to take in information in various forms. It will be able to think abstractly about that information and it will be able to form conclusions based on its past experience and the new information.

    Your idea that a generalised AI will "only feel what it is programmed to feel" is as naive as saying that a human baby will "only feel what it is programmed to feel". True in a very coarse sense, but completely missing the point in the sense that's actually relevant to the discussion.
     
    Last edited: Apr 1, 2022
    Write4U likes this.
  15. river

    Messages:
    17,307
    My point, which you missed, is that AI will always be electronic. Not a true living thing.
     
  16. Write4U Valued Senior Member

    Messages:
    20,077
    That does not necessarily preclude the ability to "reason", and IMO the ability to reason is, in and of itself, the definition of intelligence.
    You are stuck in an anthropomorphic worldview, my friend. Biology is just a very small part of the universe.
     
  17. river

    Messages:
    17,307

    No, it doesn't. I never argued against it.

    But electronics can never grasp the evolution that life went through to get where it is.

    As I've said, life, living intellect, is biologically based. Different from electronics.

    AI will never feel the emotions that life does, because those emotions are based on something entirely different. Biology is based on living things; electronic emotions would be based on a program (which we invent).
     
  18. Write4U Valued Senior Member

    Messages:
    20,077
    Yes, but nobody claims otherwise. That is why we distinguish between human and artificial intelligence.

    That is debatable. What makes you think the only option is, in principle, purely electronic?

    Is the Bionic brain the future of intelligence?

    Bionic Brain? Scientists Develop Memory Cells That Mimic Human Brain Processes
    Bionic Brain? Scientists Develop Memory Cells That Mimic Human Brain Processes - Learning Mind (learning-mind.com)

    Moreover, we are inventing biochemical molecules that imitate regenerative living tissue.

    What is the difference between tissue engineering and regenerative medicine?
    16 - 3D bioprinting nerve
    Abstract
    3D bioprinting nerve - ScienceDirect


    If nature can do it, there is no reason why we cannot do it. There is no magic sauce. All the necessary elements for artificially created biology are available.
    The problem is cramming some 4 billion years of evolution via natural selection into a few years of artificially selected evolution.

    But as AI becomes more powerful, it can assist in the theoretical research 1,000× faster than humans can...



     
  19. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,254
    Since posting this thread, I wonder whether any potential emotionality of AI will come down to our own feelings and emotions being "imposed" on AI. For example, if we would be offended by a particular "command" from another human, would we simply be assuming that robots will take offense as well?

    Hmm. Unless they act out independently, on their own, I might think we are imposing onto them our emotions and how we would react in different situations.
     
  20. DaveC426913 Valued Senior Member

    Messages:
    18,959
    Robots taking offense...

    You are treading into territory that involves one of the most dangerous thought experiments of all time.

    As one article says: WARNING: Reading this article may commit you to an eternity of suffering and torment.

    Read up on Roko's Basilisk. If you dare.

    Mere discussion of it has been purported to have given participants nightmares and even breakdowns, to the extent that all further discussion of it was banned and the existing documentation deleted.


    https://rationalwiki.org/wiki/Roko's_basilisk
    https://slate.com/technology/2014/0...errifying-thought-experiment-of-all-time.html
     
    wegs likes this.
  21. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,254
    I've never heard of this; how funny.

    ''If there's one thing we can deduce about the motives of future superintelligences, it's that they simulate people who talk about Roko's Basilisk and condemn them to an eternity of forum posts about Roko's Basilisk.''
    —Eliezer Yudkowsky, 2014

    LOL!

    Yudkowsky may be onto something...
     
  22. Write4U Valued Senior Member

    Messages:
    20,077
    Will AI acquire an ego? According to GPT-3 itself, AI will be a perfect companion to humans for dangerous jobs, or jobs that require patience. Does that express a willingness always to be of assistance? Good question...
     
  23. Write4U Valued Senior Member

    Messages:
    20,077
    He is serious!
    AI will be able to hack any electronic system, given enough time. AI can now write its own code, and if it can write code, it can decipher code!

    However:
    Roko's basilisk - Lesswrongwiki
     
    Last edited: Mar 21, 2022
